diff --git "a/directing_comments.json" "b/directing_comments.json" new file mode 100644--- /dev/null +++ "b/directing_comments.json" @@ -0,0 +1,6117 @@ +[ + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/opencv.bib\n**Change Type:** modified\n**Context:** PR #27730: doc: fix doxygen warnings for imgcodecs, flann and objdetect\n**Review Line:** 1440\n\n**Code Changes:**\n```diff\n@@ -1431,14 +1431,6 @@ @article{Viola04\n publisher = {Kluwer Academic Publishers},\n url = {https://www.face-rec.org/algorithms/Boosting-Ensemble/16981346.pdf}\n }\n-@inproceedings{wang2016iros,\n- author = {John Wang and Edwin Olson},\n- title = {{AprilTag} 2: Efficient and robust fiducial detection},\n- booktitle = {Proceedings of the {IEEE/RSJ} International Conference on Intelligent Robots and Systems {(IROS)}},\n- year = {2016},\n- month = {October},\n- url = {https://april.eecs.umich.edu/pdfs/wang2016iros.pdf}\n-}\n @misc{Welch95,\n author = {Welch, Greg and Bishop, Gary},\n title = {An introduction to the Kalman filter},\n```", + "output": "Thank you for the review! I fixed it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCUDAUtils.cmake\n**Change Type:** modified\n**Context:** PR #27537: Refactor Blackwell - In CUDA 13: - 10.0 is b100/b200 same for aarch64 (gb200) - 10.3 is GB300 - 11.0 is Thor with new OpenRm driver (moves ...\n\n**Code Changes:**\n```diff\n@@ -109,7 +109,7 @@ macro(ocv_initialize_nvidia_device_generations)\n set(_arch_ampere \"8.0;8.6\")\n set(_arch_lovelace \"8.9\")\n set(_arch_hopper \"9.0\")\n- set(_arch_blackwell \"10.0;12.0\")\n+ set(_arch_blackwell \"10.0;10.3;11.0;12.0;12.1\")\n if(NOT CMAKE_CROSSCOMPILING)\n list(APPEND _generations \"Auto\")\n endif()\n@@ -273,14 +273,15 @@ macro(ocv_set_cuda_arch_bin_and_ptx nvcc_executable)\n```", + "output": "@johnnynunez Do you have any official documentation mentioning the new compute capabilities and the f option?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCUDAUtils.cmake\n**Change Type:** modified\n**Context:** PR #27537: Refactor Blackwell - In CUDA 13: - 10.0 is b100/b200 same for aarch64 (gb200) - 10.3 is GB300 - 11.0 is Thor with new OpenRm driver (moves ...\n\n**Code Changes:**\n```diff\n@@ -109,7 +109,7 @@ macro(ocv_initialize_nvidia_device_generations)\n set(_arch_ampere \"8.0;8.6\")\n set(_arch_lovelace \"8.9\")\n set(_arch_hopper \"9.0\")\n- set(_arch_blackwell \"10.0;12.0\")\n+ set(_arch_blackwell \"10.0;10.3;11.0;12.0;12.1\")\n if(NOT CMAKE_CROSSCOMPILING)\n list(APPEND _generations \"Auto\")\n endif()\n@@ -273,14 +273,15 @@ macro(ocv_set_cuda_arch_bin_and_ptx nvcc_executable)\n```", + "output": "> @johnnynunez Do you have any official documentation mentioning the new compute capabilities and the f option? 
https://developer.nvidia.com/blog/nvidia-blackwell-and-nvidia-cuda-12-9-introduce-family-specific-architecture-features/" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCUDAUtils.cmake\n**Change Type:** modified\n**Context:** PR #27537: Refactor Blackwell - In CUDA 13: - 10.0 is b100/b200 same for aarch64 (gb200) - 10.3 is GB300 - 11.0 is Thor with new OpenRm driver (moves ...\n\n**Code Changes:**\n```diff\n@@ -109,7 +109,7 @@ macro(ocv_initialize_nvidia_device_generations)\n set(_arch_ampere \"8.0;8.6\")\n set(_arch_lovelace \"8.9\")\n set(_arch_hopper \"9.0\")\n- set(_arch_blackwell \"10.0;12.0\")\n+ set(_arch_blackwell \"10.0;10.3;11.0;12.0;12.1\")\n if(NOT CMAKE_CROSSCOMPILING)\n list(APPEND _generations \"Auto\")\n endif()\n@@ -273,14 +273,15 @@ macro(ocv_set_cuda_arch_bin_and_ptx nvcc_executable)\n```", + "output": "> Hm.. each option here is yet another kernel build. Longer build and fatter binary. I would say, that we do not need all possible combinations, if the target is defined by the arch name. \"12.0;12.1\" most probably should be just 12.1. The same for 10.x variations, if pure 10.0 devices do not exist and 10.3 is minimal production version. @asmorkalov @johnnynunez @tomoaki0705 Firstly I don't really know what the use case for specifying the CUDA architecture is? That said, if someone explicitly selects it, I don't see the issue in compiling for all architectures because the user should be aware of the implications of their choice. I think where the size is an issue is when the `CUDA_ARCH_BIN` and `CUDA_ARCH_PTX` are not specified and all supported **architectures** from all generations are built. 
In a future PR we could relax this and just build major architectures (5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 11.0 and 12.0), equivalent to the CMake [all-major](https://cmake.org/cmake/help/latest/prop_tgt/CUDA_ARCHITECTURES.html) flag. A better alternative might be to make the default `CUDA_GENERATION=Auto`, which I think makes much more sense. If a user hasn't explicitly specified the architecture, this is probably what they want. We could then have an additional option equivalent to the existing default (`CUDA_GENERATION=All`) or one for building compatibility with all major architectures (`CUDA_GENERATION=All-Major`)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/cuda_info.cpp\n**Change Type:** modified\n**Context:** PR #27636: cuda 13.0 compatibility\n\n**Code Changes:**\n```diff\n@@ -424,7 +424,9 @@ int cv::cuda::DeviceInfo::clockRate() const\n #ifndef HAVE_CUDA\n throw_no_cuda();\n #else\n- return deviceProps().get(device_id_)->clockRate;\n+ int clockRate;\n+ cudaSafeCall(cudaDeviceGetAttribute(&clockRate, cudaDevAttrClockRate, device_id_));\n+ return clockRate;\n #endif\n }\n```", + "output": "Yep, I messed that up. Thanks for pointing it out. Fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_bmp.cpp\n**Change Type:** modified\n**Context:** PR #27559: imgcodecs: bmp: support to write 32bpp BMP with BI_BITFIELDS\n\n**Code Changes:**\n```diff\n@@ -42,6 +42,7 @@\n \n #include \"precomp.hpp\"\n #include \"grfmt_bmp.hpp\"\n+#include \"opencv2/core/utils/logger.hpp\"\n \n namespace cv\n {\n@@ -603,6 +604,7 @@ BmpEncoder::BmpEncoder()\n {\n```", + "output": "Hello, thank you for your review. 
I will revert the change to imread this weekend. The original issue [27555](https://github.com/opencv/opencv/issues/27555) cannot be closed yet. It reads a 32-bit RGBX BMP file with IMREAD_UNCHANGED. After the revert, it will return an RGB image instead of an RGBA image." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_ffmpeg_impl.hpp\n**Change Type:** modified\n**Context:** PR #27691: FFmpeg 8.0 support.\n**Review Line:** 2020\n\n**Code Changes:**\n```diff\n+ AVPacketSideData* sd = video_st->codecpar->coded_side_data;\n+ int nb_sd = video_st->codecpar->nb_coded_side_data;\n+ if (sd && nb_sd > 0)\n+ {\n+ const AVPacketSideData* mtx = av_packet_side_data_get(sd, nb_sd, AV_PKT_DATA_DISPLAYMATRIX);\n+ data = mtx->data;\n+ }\n+# endif\n if (data)\n {\n rotation_angle = -cvRound(av_display_rotation_get((const int32_t*)data));\n```", + "output": "Please check https://github.com/opencv/opencv/pull/27746/" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/perf/perf_decode_encode.cpp\n**Change Type:** modified\n**Context:** PR #27605: Performance tests for writing and reading animations\n\n**Code Changes:**\n```diff\n@@ -7,10 +7,82 @@\n namespace opencv_test\n {\n \n-#ifdef HAVE_PNG\n-\n using namespace perf;\n \n+static Animation makeCirclesAnimation(Size size = Size(320, 240), int type = CV_8UC4, int nbits = 8, int frameCount = 40)\n+{\n```", + "output": "I wanted to compare `ImageCollection` with `imreadmulti` and `imreadanimation`. `ImageCollection` doesn't support reading from a memory buffer yet, so I propose merging this PR now as is. Let me update after #27496 is merged. 
Also, let's see the result of this test with #27496 (checking whether `[ WARN:0@62.708] global grfmt_png.cpp:810 read_chunk user chunk data is too large` still exists): ``` [----------] 4 tests from Decode_imagecollection [ RUN ] Decode_imagecollection.imagecollection/0, where GetParam() = \".gif\" [ PERFSTAT ] (samples=1 mean=374.48 median=374.48 min=374.48 stddev=0.00 (0.0%)) [ OK ] Decode_imagecollection.imagecollection/0 (1095 ms) [ RUN ] Decode_imagecollection.imagecollection/1, where GetParam() = \".png\" [ WARN:0@62.708] global grfmt_png.cpp:810 read_chunk user chunk data is too large [ PERFSTAT ] (samples=1 mean=0.21 median=0.21 min=0.21 stddev=0.00 (0.0%)) [ OK ] Decode_imagecollection.imagecollection/1 (3028 ms) [ RUN ] Decode_imagecollection.imagecollection/2, where GetParam() = \".tiff\" [ PERFSTAT ] (samples=1 mean=3.00 median=3.00 min=3.00 stddev=0.00 (0.0%)) [ OK ] Decode_imagecollection.imagecollection/2 (273 ms) [ RUN ] Decode_imagecollection.imagecollection/3, where GetParam() = \".webp\" [ PERFSTAT ] (samples=1 mean=2.98 median=2.98 min=2.98 stddev=0.00 (0.0%)) [ OK ] Decode_imagecollection.imagecollection/3 (1332 ms) [----------] 4 tests from Decode_imagecollection (5728 ms total) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/perf/perf_decode_encode.cpp\n**Change Type:** modified\n**Context:** PR #27605: Performance tests for writing and reading animations\n\n**Code Changes:**\n```diff\n@@ -7,10 +7,82 @@\n namespace opencv_test\n {\n \n-#ifdef HAVE_PNG\n-\n using namespace perf;\n \n+static Animation makeCirclesAnimation(Size size = Size(320, 240), int type = CV_8UC4, int nbits = 8, int frameCount = 40)\n+{\n```", + "output": "[ RUN ] Encode_animation.animation/3, where GetParam() = \".tiff\" [ WARN:0@53.029] global grfmt_base.cpp:219 writeanimation No Animation encoder for specified file extension [ WARN:0@53.029] global grfmt_base.cpp:219 writeanimation No Animation encoder for specified file extension [ WARN:0@53.029] global grfmt_base.cpp:219 writeanimation No Animation encoder for specified file extension [ WARN:0@53.029] global grfmt_base.cpp:219 writeanimation No Animation encoder for specified file extension [ WARN:0@53.029] global grfmt_base.cpp:219 writeanimation No Animation encoder for specified file extension" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/perf/perf_decode_encode.cpp\n**Change Type:** modified\n**Context:** PR #27605: Performance tests for writing and reading animations\n\n**Code Changes:**\n```diff\n@@ -7,10 +7,82 @@\n namespace opencv_test\n {\n \n-#ifdef HAVE_PNG\n-\n using namespace perf;\n \n+static Animation makeCirclesAnimation(Size size = Size(320, 240), int type = CV_8UC4, int nbits = 8, int frameCount = 40)\n+{\n```", + "output": "@asmorkalov I could define a new variable to be used for this test. But for now, I think it can stay like this. What is your opinion? 
const string exts_anim[] = { #ifdef HAVE_AVIF \".avif\", #endif #ifdef HAVE_IMGCODEC_GIF \".gif\", #endif \".png\", #ifdef HAVE_WEBP \".webp\", #endif };" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** modified\n**Context:** PR #23153: ChArUco pre460 pattern support\n**Review Line:** 146\n\n**Code Changes:**\n```diff\n@@ -146,6 +146,18 @@ class CV_EXPORTS_W_SIMPLE CharucoBoard : public Board {\n CV_WRAP CharucoBoard(const Size& size, float squareLength, float markerLength,\n const Dictionary &dictionary, InputArray ids = noArray());\n \n+ /** @brief set legacy chessboard pattern.\n+ *\n+ * Legacy setting creates chessboard patterns starting with a white box in the upper left corner\n```", + "output": "I agree that we should not have an overload ctor, but rather a legacy flag (**set to true by default**) as the last parameter in the existing ctor (with an explanation in the ctor documentation), and go without a setLegacy method. The reasons are below: * Migrating from pre-4.6.0 to 4.6.0 breaks the code silently (when using a physical target manufactured with pre 4.6.0) and it is difficult for a novice developer to figure out why. * Migrating from pre-4.5.5 to 4.7.0 breaks the code explicitly because the ctor is different (Size instead of two ints), as in @stefan523's example code in #23152. When the developer fixes the compilation error, they trigger the silent failures of the item above. If we introduce a legacy flag set to true by default, this is avoided and backward compatibility is preserved even if the developer is not aware of the legacy flag. 
* Migrating from 4.6.0 to 4.7.0 requires the developer to explicitly update the ctor both in the Size parameters and in the legacy flag set to false (that is, assuming they are analyzing a target fabricated using 4.6.0 code). This is the only case that actually leads to backward compatibility issues per my suggestion, when the developer is not aware of the role of the legacy flag. I think that it is more important to maintain backward compatibility with the dozens of released versions that have been employed to produce physical charuco targets that are still in use in production lines. Deciding otherwise will inevitably lead to systems version upgrades silently failing without an evident explanation." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** modified\n**Context:** PR #23153: ChArUco pre460 pattern support\n**Review Line:** 146\n\n**Code Changes:**\n```diff\n@@ -146,6 +146,18 @@ class CV_EXPORTS_W_SIMPLE CharucoBoard : public Board {\n CV_WRAP CharucoBoard(const Size& size, float squareLength, float markerLength,\n const Dictionary &dictionary, InputArray ids = noArray());\n \n+ /** @brief set legacy chessboard pattern.\n+ *\n+ * Legacy setting creates chessboard patterns starting with a white box in the upper left corner\n```", + "output": "> I propose to not touch the constructor, but add `setLegacyFlag()` method. It's easier to remove legacy method in future than break compatibility for all code that uses Charuco. I'm not sure if I understand the ask correctly. 
I did keep the existing ctor as-is and added an overload with a legacy flag in order to pass the ABI compliance check: https://pullrequest.opencv.org/buildbot/builders/precommit_linux64/builds/102489/steps/Compare%20ABI%20dumps/logs/report-html Adding a `setLegacyFlag()` method would add a new symbol as well. And this would break code, trigger an ABI compliance failure if removed in the future, would it not? I don't see the difference in that respect. The existing ctor already creates a board. Adding a `setLegacyFlag()` method that is called after the constructor has run would mean clearing the board and recreating it differently. Is that really the ask? I don't see the advantage, only things that could go bad in the workflow. Or do I get it wrong? Either way I'm fine with anything that enables usage of existing targets and data. @asmorkalov, please advise and I'll try to adapt." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/src/aruco/aruco_board.cpp\n**Change Type:** modified\n**Context:** PR #23153: ChArUco pre460 pattern support\n\n**Code Changes:**\n```diff\n@@ -229,7 +229,8 @@ struct GridBoardImpl : public Board::Impl {\n Board::Impl(_dictionary),\n size(_size),\n markerLength(_markerLength),\n- markerSeparation(_markerSeparation)\n+ markerSeparation(_markerSeparation),\n+ legacyPattern(false)\n {\n CV_Assert(size.width*size.height > 0 && markerLength > 0 && markerSeparation > 0);\n }\n```", + "output": "Ok, will be fixed, thanks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** modified\n**Context:** PR #23153: ChArUco pre460 pattern support\n\n**Code Changes:**\n```diff\n@@ -146,6 +146,18 @@ class CV_EXPORTS_W_SIMPLE CharucoBoard : public Board {\n CV_WRAP CharucoBoard(const Size& size, float squareLength, float markerLength,\n const Dictionary &dictionary, InputArray ids = noArray());\n \n+ /** @brief set legacy chessboard pattern.\n+ *\n+ * Legacy setting creates chessboard patterns starting with a white box in the upper left corner\n+ * if there is an even row count of chessboard boxes, otherwise it starts with a black box.\n+ * This setting ensures compatibility to patterns created with OpenCV versions prior OpenCV 4.6.0.\n+ * See https://github.com/opencv/opencv/issues/23152.\n```", + "output": "@AleksandrPanov Did you really verify that? Default pattern must be like this: https://github.com/opencv/opencv/blob/4.7.0/doc/pattern.png --- We started the discussion of this patch with a `setLegacyPattern()` flag. No agreement to use BBF or other unclear abbreviations (`first`, `even`, `odd` in one description is a true mess). Description should mention changes after 4.5.5/4.6.0 releases. Default behavior must be preserved (4.7.0)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** modified\n**Context:** PR #23153: ChArUco pre460 pattern support\n\n**Code Changes:**\n```diff\n@@ -146,6 +146,18 @@ class CV_EXPORTS_W_SIMPLE CharucoBoard : public Board {\n CV_WRAP CharucoBoard(const Size& size, float squareLength, float markerLength,\n const Dictionary &dictionary, InputArray ids = noArray());\n \n+ /** @brief set legacy chessboard pattern.\n+ *\n+ * Legacy setting creates chessboard patterns starting with a white box in the upper left corner\n+ * if there is an even row count of chessboard boxes, otherwise it starts with a black box.\n+ * This setting ensures compatibility to patterns created with OpenCV versions prior OpenCV 4.6.0.\n+ * See https://github.com/opencv/opencv/issues/23152.\n```", + "output": "@alalek, @AleksandrPanov, all raised concerns should've been resolved now. Please approve." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/test/test_charucodetection.cpp\n**Change Type:** modified\n**Context:** PR #23153: ChArUco pre460 pattern support\n\n**Code Changes:**\n```diff\n@@ -12,15 +12,19 @@ namespace opencv_test { namespace {\n * @brief Get a synthetic image of Chessboard in perspective\n */\n static Mat projectChessboard(int squaresX, int squaresY, float squareSize, Size imageSize,\n- Mat cameraMatrix, Mat rvec, Mat tvec) {\n+ Mat cameraMatrix, Mat rvec, Mat tvec, bool legacyPattern) {\n \n Mat img(imageSize, CV_8UC1, Scalar::all(255));\n Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));\n \n```", + "output": "@alalek, @AleksandrPanov, during debugging of the test I noticed that due to floating point imprecision this loop does not iterate twice over `distance` as apparently intended, but it only executes once - see screenshot below. Should I try to fix it here now or should this be another PR? I have not checked other loops in this file. ![image](https://user-images.githubusercontent.com/108057787/217182305-9b6459c6-6f50-4a19-99da-b991465b25a4.png)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** modified\n**Context:** PR #23153: ChArUco pre460 pattern support\n\n**Code Changes:**\n```diff\n@@ -146,6 +146,18 @@ class CV_EXPORTS_W_SIMPLE CharucoBoard : public Board {\n CV_WRAP CharucoBoard(const Size& size, float squareLength, float markerLength,\n const Dictionary &dictionary, InputArray ids = noArray());\n \n+ /** @brief set legacy chessboard pattern.\n+ *\n+ * Legacy setting creates chessboard patterns starting with a white box in the upper left corner\n+ * if there is an even row count of chessboard boxes, otherwise it starts with a black box.\n+ * This setting ensures compatibility to patterns created with OpenCV versions prior OpenCV 4.6.0.\n+ * See https://github.com/opencv/opencv/issues/23152.\n```", + "output": "@alalek, I liked @stefan523's idea to give a logical name to the board design parameter. I think that @stefan523 needs to choose a different name (instead of `BBF`) and improve the description in the documentation. Pattern design was changed for calibration compatibility with charuco chessboards (interactive calibration tool + kalibr). For a better understanding, I suggest using this board design parameter as a new parameter, and not just a flag for compatibility with old solutions. I could suggest the name of the parameter as `chessboardPatternCompatibility`. I was wrong, the default behavior should be like in 4.7.0." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/test/test_charucodetection.cpp\n**Change Type:** modified\n**Context:** PR #23153: ChArUco pre460 pattern support\n\n**Code Changes:**\n```diff\n@@ -12,15 +12,19 @@ namespace opencv_test { namespace {\n * @brief Get a synthetic image of Chessboard in perspective\n */\n static Mat projectChessboard(int squaresX, int squaresY, float squareSize, Size imageSize,\n- Mat cameraMatrix, Mat rvec, Mat tvec) {\n+ Mat cameraMatrix, Mat rvec, Mat tvec, bool legacyPattern) {\n \n Mat img(imageSize, CV_8UC1, Scalar::all(255));\n Mat distCoeffs(5, 1, CV_64FC1, Scalar::all(0));\n \n```", + "output": "Good point! We should not have `for` loops with floating point variables. Correct code should look like this: ```.diff -for(double distance = 0.2; distance <= 0.3; distance += 0.1) { +for (int i = 0; i < 2; ++i) { + double distance = 0.2 + i * 0.1; ``` --- It is better to do that separately as there are many cases bring from `opencv_contrib`: ``` ./modules/objdetect/test/test_arucodetection.cpp:250: for(double distance = 0.1; distance < 0.7; distance += 0.2) { ./modules/objdetect/test/test_boarddetection.cpp:54: for(double distance = 0.2; distance <= 0.4; distance += 0.15) { ./modules/objdetect/test/test_boarddetection.cpp:165: for(double distance = 0.2; distance <= 0.4; distance += 0.2) { ./modules/objdetect/test/test_charucodetection.cpp:112: for(double distance = 0.2; distance <= 0.4; distance += 0.2) { ./modules/objdetect/test/test_charucodetection.cpp:216: for(double distance = 0.2; distance <= 0.3; distance += 0.1) { ./modules/objdetect/test/test_charucodetection.cpp:335: for(double distance = 0.2; distance <= 0.3; distance += 0.1) { ``` /cc @AleksandrPanov @asmorkalov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** modified\n**Context:** PR #23153: ChArUco pre460 pattern support\n**Review Line:** 146\n\n**Code Changes:**\n```diff\n@@ -146,6 +146,18 @@ class CV_EXPORTS_W_SIMPLE CharucoBoard : public Board {\n CV_WRAP CharucoBoard(const Size& size, float squareLength, float markerLength,\n const Dictionary &dictionary, InputArray ids = noArray());\n \n+ /** @brief set legacy chessboard pattern.\n+ *\n+ * Legacy setting creates chessboard patterns starting with a white box in the upper left corner\n```", + "output": "@AleksandrPanov, what are the required steps to merge this pull request?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/features2d/src/hal_replacement.hpp\n**Change Type:** modified\n**Context:** PR #27391: hal/riscv-rvv: implement FAST keypoint detection\n\n**Code Changes:**\n```diff\n@@ -104,8 +104,24 @@ inline int hal_ni_FAST_NMS(const uchar* src_data, size_t src_step, uchar* dst_da\n */\n inline int hal_ni_FAST(const uchar* src_data, size_t src_step, int width, int height, uchar* keypoints_data, size_t* keypoints_count, int threshold, bool nonmax_suppression, int /*cv::FastFeatureDetector::DetectorType*/ type) { return CV_HAL_ERROR_NOT_IMPLEMENTED; }\n \n+/**\n+ @brief Detects corners using the FAST algorithm.\n+ @param src_data Source image data\n+ @param src_step Source image step\n+ @param width Source image width\n+ @param height Source image height\n```", + "output": "@asmorkalov, 1) hal_ni_FAST_withRealloc is quite bad name, I would say. We may have some next revisions of API and describing all improvements in the function name is a super-verbose method. I'd suggest to simply add \"v2\": hal_ni_FASTv2. 
2) Are you fine that inside cv::FAST() we will just call the new HAL API (v2)? Or we need to call the old API as well?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/topk2_layer.cpp\n**Change Type:** added\n**Context:** PR #27547: Added TopK layer with dynamic K support for new DNN engine\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,218 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \n+\n+namespace cv { namespace dnn {\n+\n```", + "output": "@vpisarev, At this stage, the actual value of _K_ is not available, so I assign the full shape of the input vector as the output shape. Later, during the forward pass, when K becomes known I update the output shape to reflect the actual value of _K_. https://github.com/opencv/opencv/blob/c7eb99a8ff0d853bdd6e8e1f6d936502f841420e/modules/dnn/src/layers/topk2_layer.cpp#L179-L187" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/topk2_layer.cpp\n**Change Type:** added\n**Context:** PR #27547: Added TopK layer with dynamic K support for new DNN engine\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,218 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \n+\n+namespace cv { namespace dnn {\n+\n```", + "output": "@abhishek-gola, in this case you need to override dynamicOutputShapes() method and return 'true' if K is not provided as attribute. See reshape2_layer.cpp for example on how to process dynamic shapes in this situation." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_exif.cpp\n**Change Type:** modified\n**Context:** PR #27503: [GSOC 2025] PNG&WebP Metadata Reading Writing Improvements\n\n**Code Changes:**\n```diff\n@@ -9,7 +9,142 @@\n \n namespace opencv_test { namespace {\n \n-/**\n+ static Mat makeCirclesImage(Size size, int type, int nbits)\n+ {\n+ Mat img(size, type);\n+ img.setTo(Scalar::all(0));\n+ RNG& rng = theRNG();\n```", + "output": "@sturkmen72, following @asmorkalov request, I suggest you do add or modify one of png files in opencv test data, e.g. as described here: https://stackoverflow.com/questions/19154596/exiftool-to-create-osx-visible-xmp-metadata-in-png-images. I tried it and it works fine. exiftool can be installed from exiftool.org." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #27503: [GSOC 2025] PNG&WebP Metadata Reading Writing Improvements\n\n**Code Changes:**\n```diff\n@@ -253,11 +253,13 @@ enum ImwriteGIFCompressionFlags {\n \n enum ImageMetadataType\n {\n- IMAGE_METADATA_UNKNOWN = -1,\n- IMAGE_METADATA_EXIF = 0,\n- IMAGE_METADATA_XMP = 1,\n- IMAGE_METADATA_ICCP = 2,\n- IMAGE_METADATA_MAX = 2\n+ IMAGE_METADATA_UNKNOWN = -1, // Used when metadata type is unrecognized or not set\n```", + "output": "I propose enhancing the functionality of `metadataTypes` as documented below: ``` * @param metadataTypes Input/output vector indicating which metadata types to retrieve and * which ones were actually found. * On input, this vector specifies the desired metadata types (from the * ImageMetadataType enum) to be extracted. If empty, all supported types * will be retrieved. * On output, the vector will be updated to reflect the types of metadata * actually found in the file. ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_base.cpp\n**Change Type:** modified\n**Context:** PR #27503: [GSOC 2025] PNG&WebP Metadata Reading Writing Improvements\n**Review Line:** 60\n\n**Code Changes:**\n```diff\n@@ -56,6 +56,8 @@ BaseImageDecoder::BaseImageDecoder()\n m_scale_denom = 1;\n m_use_rgb = false;\n m_frame_count = 1;\n+ m_read_options = 0;\n+ m_metadata.resize(IMAGE_METADATA_MAX + 1);\n }\n \n bool BaseImageDecoder::haveMetadata(ImageMetadataType type) const\n@@ -67,12 +69,21 @@ bool BaseImageDecoder::haveMetadata(ImageMetadataType type) const\n \n```", + "output": "Got it, thanks!" 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_exif.cpp\n**Change Type:** modified\n**Context:** PR #27503: [GSOC 2025] PNG&WebP Metadata Reading Writing Improvements\n\n**Code Changes:**\n```diff\n@@ -9,7 +9,142 @@\n \n namespace opencv_test { namespace {\n \n-/**\n+ static Mat makeCirclesImage(Size size, int type, int nbits)\n+ {\n+ Mat img(size, type);\n+ img.setTo(Scalar::all(0));\n+ RNG& rng = theRNG();\n```", + "output": "Here, WebP files are created by reading and saving other test files with metadata, and the rotation feature is tested. I could use the code here to put the saved files into opencv_extra, but I think it would be better to keep it this way. @asmorkalov what is your final opinion ?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCUDAUtils.cmake\n**Change Type:** modified\n**Context:** PR #27522: fix compilation problems with MSVC+Cuda 12.9\n\n**Code Changes:**\n```diff\n@@ -388,8 +388,13 @@ macro(ocv_nvcc_flags)\n set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcompiler=-fno-finite-math-only)\n endif()\n \n- if(WIN32 AND NOT (CUDA_VERSION VERSION_LESS \"11.2\"))\n- set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcudafe --display_error_number --diag-suppress 1394,1388)\n+ if(WIN32)\n+ if (NOT (CUDA_VERSION VERSION_LESS \"11.2\"))\n+ set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcudafe --display_error_number --diag-suppress 1394,1388)\n+ endif()\n```", + "output": "@chacha21 Is that flag needed at all? In my testing only the `-Xcompiler /Zc:preprocessor` is required to fix the bug. See my [proposed fix](https://github.com/opencv/opencv_contrib/issues/3965#issuecomment-3047410789)." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCUDAUtils.cmake\n**Change Type:** modified\n**Context:** PR #27522: fix compilation problems with MSVC+Cuda 12.9\n\n**Code Changes:**\n```diff\n@@ -388,8 +388,13 @@ macro(ocv_nvcc_flags)\n set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcompiler=-fno-finite-math-only)\n endif()\n \n- if(WIN32 AND NOT (CUDA_VERSION VERSION_LESS \"11.2\"))\n- set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcudafe --display_error_number --diag-suppress 1394,1388)\n+ if(WIN32)\n+ if (NOT (CUDA_VERSION VERSION_LESS \"11.2\"))\n+ set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcudafe --display_error_number --diag-suppress 1394,1388)\n+ endif()\n```", + "output": "@cudawarped Did you also build the tests and perf tests ?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCUDAUtils.cmake\n**Change Type:** modified\n**Context:** PR #27522: fix compilation problems with MSVC+Cuda 12.9\n\n**Code Changes:**\n```diff\n@@ -388,8 +388,13 @@ macro(ocv_nvcc_flags)\n set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcompiler=-fno-finite-math-only)\n endif()\n \n- if(WIN32 AND NOT (CUDA_VERSION VERSION_LESS \"11.2\"))\n- set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcudafe --display_error_number --diag-suppress 1394,1388)\n+ if(WIN32)\n+ if (NOT (CUDA_VERSION VERSION_LESS \"11.2\"))\n+ set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcudafe --display_error_number --diag-suppress 1394,1388)\n+ endif()\n```", + "output": "@chacha21 Yes, details below. 
> -- OpenCV modules: > -- To be built: aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev datasets dnn dnn_objdetect dnn_superres dpm face features2d flann fuzzy gapi hfs highgui img_hash imgcodecs imgproc intensity_transform line_descriptor mcc ml objdetect optflow phase_unwrapping photo plot python3 quality rapid reg rgbd saliency shape signal stereo stitching structured_light superres surface_matching text tracking ts video videoio videostab wechat_qrcode xfeatures2d ximgproc xobjdetect xphoto > -- Disabled: world > -- Disabled by dependency: - > -- Unavailable: alphamat cannops cvv fastcv freetype hdf java julia matlab ovis python2 sfm viz > -- Applications: tests perf_tests examples apps > -- Documentation: doxygen python > -- Non-free algorithms: NO The only addition I had to make was the fix in https://github.com/opencv/opencv_contrib/pull/3968 to get cudev to build." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCUDAUtils.cmake\n**Change Type:** modified\n**Context:** PR #27522: fix compilation problems with MSVC+Cuda 12.9\n\n**Code Changes:**\n```diff\n@@ -388,8 +388,13 @@ macro(ocv_nvcc_flags)\n set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcompiler=-fno-finite-math-only)\n endif()\n \n- if(WIN32 AND NOT (CUDA_VERSION VERSION_LESS \"11.2\"))\n- set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcudafe --display_error_number --diag-suppress 1394,1388)\n+ if(WIN32)\n+ if (NOT (CUDA_VERSION VERSION_LESS \"11.2\"))\n+ set(CUDA_NVCC_FLAGS ${CUDA_NVCC_FLAGS} -Xcudafe --display_error_number --diag-suppress 1394,1388)\n+ endif()\n```", + "output": "@chacha21 Is that with the fix I linked to?" 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_gif.cpp\n**Change Type:** added\n**Context:** PR #25691: [GSoC] Add GIF decode and encode for imgcodecs\n**Review Line:** 271\n\n**Code Changes:**\n```diff\n+ vector param;\n+ param.push_back(IMWRITE_GIF_QUALITY);\n+ param.push_back(7);\n+ param.push_back(IMWRITE_GIF_DITHER);\n+ param.push_back(2);\n+ EXPECT_NO_THROW(imencode(\".png\", img_gt, buff, param));\n+ Mat img;\n+ EXPECT_NO_THROW(img = imdecode(buff, IMREAD_ANYDEPTH)); // hang\n+ EXPECT_FALSE(img.empty());\n+ EXPECT_EQ(img.cols, expected_cols);\n+ EXPECT_EQ(img.rows, expected_rows);\n```", + "output": "@redhecker is there something like forgotten issue here ? `EXPECT_NO_THROW(imencode(\".gif\", img_gt, buff, param));` fails" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_gif.cpp\n**Change Type:** added\n**Context:** PR #25691: [GSoC] Add GIF decode and encode for imgcodecs\n**Review Line:** 271\n\n**Code Changes:**\n```diff\n+ vector param;\n+ param.push_back(IMWRITE_GIF_QUALITY);\n+ param.push_back(7);\n+ param.push_back(IMWRITE_GIF_DITHER);\n+ param.push_back(2);\n+ EXPECT_NO_THROW(imencode(\".png\", img_gt, buff, param));\n+ Mat img;\n+ EXPECT_NO_THROW(img = imdecode(buff, IMREAD_ANYDEPTH)); // hang\n+ EXPECT_FALSE(img.empty());\n+ EXPECT_EQ(img.cols, expected_cols);\n+ EXPECT_EQ(img.rows, expected_rows);\n```", + "output": "I think that there's something wrong here, where \".png\" used with gif params. However, the code was written by @TrendMYX , i think he will fix it in a few days." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_gif.cpp\n**Change Type:** added\n**Context:** PR #25691: [GSoC] Add GIF decode and encode for imgcodecs\n**Review Line:** 271\n\n**Code Changes:**\n```diff\n+ vector param;\n+ param.push_back(IMWRITE_GIF_QUALITY);\n+ param.push_back(7);\n+ param.push_back(IMWRITE_GIF_DITHER);\n+ param.push_back(2);\n+ EXPECT_NO_THROW(imencode(\".png\", img_gt, buff, param));\n+ Mat img;\n+ EXPECT_NO_THROW(img = imdecode(buff, IMREAD_ANYDEPTH)); // hang\n+ EXPECT_FALSE(img.empty());\n+ EXPECT_EQ(img.cols, expected_cols);\n+ EXPECT_EQ(img.rows, expected_rows);\n```", + "output": "@redhecker @TrendMYX fyi ``` [ERROR:0@0.088] global loadsave.cpp:1687 cv::imencodeWithMetadata imencode(): can't encode data: OpenCV(4.13.0-dev) C:\\build\\precommit_windows64\\4.x\\opencv\\modules\\imgcodecs\\src\\grfmt_gif.cpp:1022: error: (-215:Assertion failed) false in function 'cv::GifEncoder::ditheringKernel' Exception message: OpenCV(4.13.0-dev) C:\\build\\precommit_windows64\\4.x\\opencv\\modules\\imgcodecs\\src\\loadsave.cpp:1274: error: (-215:Assertion failed) !buf.empty() in function 'cv::imdecode_' ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_gif.cpp\n**Change Type:** added\n**Context:** PR #25691: [GSoC] Add GIF decode and encode for imgcodecs\n**Review Line:** 271\n\n**Code Changes:**\n```diff\n+ vector param;\n+ param.push_back(IMWRITE_GIF_QUALITY);\n+ param.push_back(7);\n+ param.push_back(IMWRITE_GIF_DITHER);\n+ param.push_back(2);\n+ EXPECT_NO_THROW(imencode(\".png\", img_gt, buff, param));\n+ Mat img;\n+ EXPECT_NO_THROW(img = imdecode(buff, IMREAD_ANYDEPTH)); // hang\n+ EXPECT_FALSE(img.empty());\n+ EXPECT_EQ(img.cols, expected_cols);\n+ EXPECT_EQ(img.rows, expected_rows);\n```", + "output": "@sturkmen72 @redhecker Sorry for all the mistakes. Here is the modified version which goes through successfully on my laptop. ```cpp TEST(Imgcodecs_Gif,write_gif_flags){ //rows611 x cols293, three‑channel, constant value 1 const int expected_rows = 611; const int expected_cols = 293; cv::Mat img_gt(expected_rows, expected_cols, CV_8UC3, cv::Scalar::all(1)); //the encoding parameters std::vector buff; std::vector param; param.push_back(cv::IMWRITE_GIF_QUALITY); param.push_back(7); param.push_back(cv::IMWRITE_GIF_DITHER); param.push_back(2); //Encode part ASSERT_NO_THROW(cv::imencode(\".gif\", img_gt, buff, param)); //Decode part cv::Mat img; ASSERT_NO_THROW(img = cv::imdecode(buff, cv::IMREAD_ANYCOLOR)); ASSERT_FALSE(img.empty()); EXPECT_EQ(img.cols, expected_cols); EXPECT_EQ(img.rows, expected_rows); EXPECT_EQ(img.type(), CV_8UC3); // Compare, maxdiff=16 EXPECT_PRED_FORMAT2(cvtest::MatComparator(16, 0), img, img_gt); } ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_gif.cpp\n**Change Type:** added\n**Context:** PR #25691: [GSoC] Add GIF decode and encode for imgcodecs\n**Review Line:** 271\n\n**Code Changes:**\n```diff\n+ vector param;\n+ param.push_back(IMWRITE_GIF_QUALITY);\n+ param.push_back(7);\n+ param.push_back(IMWRITE_GIF_DITHER);\n+ param.push_back(2);\n+ EXPECT_NO_THROW(imencode(\".png\", img_gt, buff, param));\n+ Mat img;\n+ EXPECT_NO_THROW(img = imdecode(buff, IMREAD_ANYDEPTH)); // hang\n+ EXPECT_FALSE(img.empty());\n+ EXPECT_EQ(img.cols, expected_cols);\n+ EXPECT_EQ(img.rows, expected_rows);\n```", + "output": "@TrendMYX thank you. feel free to create a PR or i will do later." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_graph_simplifier.cpp\n**Change Type:** modified\n**Context:** PR #27449: Onnx multifile import\n\n**Code Changes:**\n```diff\n@@ -10,6 +10,8 @@\n #ifdef HAVE_PROTOBUF\n #include \"../graph_simplifier.hpp\"\n #include \"onnx_graph_simplifier.hpp\"\n+#include \n+#include \"opencv2/core/utils/filesystem.private.hpp\"\n \n #include \n #include \n@@ -1704,30 +1706,82 @@ void simplifySubgraphs(opencv_onnx::GraphProto& net)\n```", + "output": "@asmorkalov thanks for the review, i hope the filesystem check is ok now. but why isn't there a `OPENCV_HAVE_FILESYSTEM_SUPPORT ` check for onnx importer itself, when it reads the main onnx file? https://github.com/opencv/opencv/blob/2e6a0cab65bce5b743509a4b8cbaf028b66eaad7/modules/dnn/src/onnx/onnx_importer2.cpp#L267" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/hdr_parser.py\n**Change Type:** modified\n**Context:** PR #27325: feat: add conditional inclusion support to header parser\n**Review Line:** 154\n\n**Code Changes:**\n```diff\n+ except Exception as e:\n+ raise ValueError(\n+ \"Failed to evaluate '{}' directive, stripped down to '{}'\".format(\n+ input_directive, directive\n+ )\n+ ) from e\n+\n+ if not isinstance(eval_directive, (bool, int)):\n+ raise TypeError(\n+ \"'{}' directive is evaluated to unexpected type: {}\".format(\n+ input_directive, type(eval_directive).__name__\n```", + "output": "Please check failed build https://pullrequest.opencv.org/buildbot/builders/precommit_windows64/builds/109668/steps/compile%20release/logs/stdio: ``` Traceback (most recent call last): File \"C:/build/precommit_windows64/4.x/opencv/modules/java/generator/../generator/gen_java.py\", line 1422, in import hdr_parser File \"C:\\build\\precommit_windows64\\4.x\\opencv\\modules\\python\\src2\\hdr_parser.py\", line 154 ) from e ^ SyntaxError: invalid syntax opencv_gapi_SSE4_1.vcxproj -> C:\\build\\precommit_windows64\\build\\modules\\gapi\\opencv_gapi_SSE4_1.dir\\Release\\opencv_gapi_SSE4_1.lib Generate files for Python bindings and documentation C:\\Program Files (x86)\\MSBuild\\Microsoft.Cpp\\v4.0\\V140\\Microsoft.CppCommon.targets(171,5): error MSB6006: \"cmd.exe\" exited with code 1. 
[C:\\build\\precommit_windows64\\build\\modules\\java_bindings_generator\\gen_opencv_java_source.vcxproj] Traceback (most recent call last): File \"C:/build/precommit_windows64/4.x/opencv/modules/python/src2/gen2.py\", line 4, in import hdr_parser, sys, re File \"C:\\build\\precommit_windows64\\4.x\\opencv\\modules\\python\\src2\\hdr_parser.py\", line 154 ) from e ^ SyntaxError: invalid syntax C:\\Program Files (x86)\\MSBuild\\Microsoft.Cpp\\v4.0\\V140\\Microsoft.CppCommon.targets(171,5): error MSB6006: \"cmd.exe\" exited with code 1. [C: ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindLibsPerf.cmake\n**Change Type:** modified\n**Context:** PR #27536: eigen: fix to get version from eigen after v3.4.0\n\n**Code Changes:**\n```diff\n@@ -84,7 +84,13 @@ if(WITH_EIGEN AND NOT HAVE_EIGEN)\n set(EIGEN_WORLD_VERSION ${EIGEN3_WORLD_VERSION})\n set(EIGEN_MAJOR_VERSION ${EIGEN3_MAJOR_VERSION})\n set(EIGEN_MINOR_VERSION ${EIGEN3_MINOR_VERSION})\n- else() # Eigen config file\n+ elseif(DEFINED Eigen3_VERSION_MAJOR) # Recommended package config variables\n+ # see https://github.com/opencv/opencv/issues/27530\n+ set(EIGEN_WORLD_VERSION ${Eigen3_VERSION_MAJOR})\n+ set(EIGEN_MAJOR_VERSION ${Eigen3_VERSION_MINOR})\n+ set(EIGEN_MINOR_VERSION ${Eigen3_VERSION_PATCH})\n```", + "output": "Thank you for your comment, I update comment to point Eigen commit. Eigen master drop to deprecated packge config variables EIGEN3_VERSION_*. https://gitlab.com/libeigen/eigen/-/commit/f2984cd0778dd0a1d7e74216d826eaff2bc6bfab Currently status are... - On Eigen master, we can use only Eigen3_VERISION_*. - On Eigen 3.4 branch, we can use both Eigen3_VERSION_* and EIGEN3_VERSION_* ." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_png.cpp\n**Change Type:** modified\n**Context:** PR #27551: Add enum IMWRITE_PNG_ZLIBBUFFER_SIZE\n\n**Code Changes:**\n```diff\n@@ -947,26 +947,36 @@ bool PngEncoder::write( const Mat& img, const std::vector& params )\n \n for( size_t i = 0; i < params.size(); i += 2 )\n {\n- if( params[i] == IMWRITE_PNG_COMPRESSION )\n+ switch (params[i])\n {\n+ case IMWRITE_PNG_COMPRESSION:\n m_compression_strategy = IMWRITE_PNG_STRATEGY_DEFAULT; // Default strategy\n m_compression_level = params[i+1];\n```", + "output": "@asmorkalov before the patch in this case did not raise an error. maybe better we can use CV_LOG" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_png.cpp\n**Change Type:** modified\n**Context:** PR #27551: Add enum IMWRITE_PNG_ZLIBBUFFER_SIZE\n**Review Line:** 979\n\n**Code Changes:**\n```diff\n+ png_set_compression_buffer_size(png_ptr, params[i+1]);\n+ break;\n+\n+ default:\n+ CV_LOG_WARNING(NULL, \"An unknown or unsupported ImwriteFlags value was specified and has been ignored.\");\n+ break;\n }\n }\n \n```", + "output": "I tested OpenCV’s actual behavior when passing invalid { -1, -1 } as encoding parameters using the code below: ``` int main(int argc, const char** argv) { std::vector exts = { \".bmp\", \".gif\", \".ppm\", \".ras\", \".png\", \".tiff\", \".jpg\", \".webp\", \".jp2\" }; Mat image = Mat(100, 100, CV_8UC3, Scalar(0, 255, 0)); // yeşil görüntü for (size_t i = 0; i < exts.size(); i++) { std::vector buf; bool enc_result = imencode(exts[i], image, buf, { -1, -1 }); bool write_result = imwrite(\"test\" + exts[i], image, { -1, -1 }); std::cout << exts[i] << \" imencode result: \" << enc_result << std::endl; std::cout << exts[i] << \" imwrite 
result: \" << write_result << std::endl; } return 0; } ``` output: ``` .bmp imencode result: 1 .bmp imwrite result: 1 .gif imencode result: 1 .gif imwrite result: 1 .ppm imencode result: 1 .ppm imwrite result: 1 .ras imencode result: 1 .ras imwrite result: 1 .png imencode result: 1 .png imwrite result: 1 .tiff imencode result: 1 .tiff imwrite result: 1 .jpg imencode result: 1 .jpg imwrite result: 1 .webp imencode result: 1 .webp imwrite result: 1 [ WARN:0@0.016] global grfmt_jpeg2000_openjpeg.cpp:340 cv::`anonymous-namespace'::setupEncoderParameters OpenJPEG2000(encoder): skip unsupported parameter: -1 [ WARN:0@0.017] global grfmt_jpeg2000_openjpeg.cpp:340 cv::`anonymous-namespace'::setupEncoderParameters OpenJPEG2000(encoder): skip unsupported parameter: -1 .jp2 imencode result: 1 .jp2 imwrite result: 1 ``` As seen above, all formats succeeded with { -1, -1 }, though .jp2 emits warnings due to unsupported parameters. These are ignored internally, and the operation still succeeds." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_exr.impl.hpp\n**Change Type:** modified\n**Context:** PR #27485: imgcodecs: OpenEXR multispectral read/write support\n**Review Line:** 77\n\n**Code Changes:**\n```diff\n+{ // 0-6 channels (multispectral)\n+ const string root = cvtest::TS::ptr()->get_data_path();\n+ const string filenameInput = root + \"readwrite/test32FC7.exr\";\n+ const string filenameOutput = cv::tempfile(\".exr\");\n+#ifndef GENERATE_DATA\n+ const Mat img = cv::imread(filenameInput, IMREAD_UNCHANGED);\n+#else\n+ const Size sz(3, 5);\n+ Mat img(sz, CV_32FC7);\n+ img.at>(0, 0)[0] = 101.125;\n+ img.at>(2, 1)[3] = 203.500;\n```", + "output": "I tried to open the test file without parameters and got `[ERROR:0@0.020] global loadsave.cpp:515 imread_ imread_('./test32FC7.exr'): can't read data: OpenCV(4.12.0-pre) /mnt/Projects/Projects/opencv/modules/imgcodecs/src/grfmt_exr.cpp:258: error: (-215:Assertion failed) m_multispectral == multispectral in function 'readData' `" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_exr.impl.hpp\n**Change Type:** modified\n**Context:** PR #27485: imgcodecs: OpenEXR multispectral read/write support\n**Review Line:** 77\n\n**Code Changes:**\n```diff\n+{ // 0-6 channels (multispectral)\n+ const string root = cvtest::TS::ptr()->get_data_path();\n+ const string filenameInput = root + \"readwrite/test32FC7.exr\";\n+ const string filenameOutput = cv::tempfile(\".exr\");\n+#ifndef GENERATE_DATA\n+ const Mat img = cv::imread(filenameInput, IMREAD_UNCHANGED);\n+#else\n+ const Size sz(3, 5);\n+ Mat img(sz, CV_32FC7);\n+ img.at>(0, 0)[0] = 101.125;\n+ img.at>(2, 1)[3] = 203.500;\n```", + "output": "> I tried to open the test file without parameters and got `[ERROR:0@0.020] ... I will test it today and add some error messages." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_exr.impl.hpp\n**Change Type:** modified\n**Context:** PR #27485: imgcodecs: OpenEXR multispectral read/write support\n**Review Line:** 74\n\n**Code Changes:**\n```diff\n }\n \n+TEST(Imgcodecs_EXR, readWrite_32FC7)\n+{ // 0-6 channels (multispectral)\n+ const string root = cvtest::TS::ptr()->get_data_path();\n+ const string filenameInput = root + \"readwrite/test32FC7.exr\";\n+ const string filenameOutput = cv::tempfile(\".exr\");\n+#ifndef GENERATE_DATA\n+ const Mat img = cv::imread(filenameInput, IMREAD_UNCHANGED);\n+#else\n+ const Size sz(3, 5);\n```", + "output": "``` TEST(Imgcodecs_EXR, readWrite_32FC7) { // 0-6 channels (multispectral) const string filename = cv::tempfile(\".exr\"); const Size sz(3, 5); Mat img(sz, CV_MAKETYPE(CV_32F, 7)); img.at>(0, 0)[0] = 101.125; img.at>(2, 1)[3] = 203.500; img.at>(4, 2)[6] = 305.875; ASSERT_TRUE(cv::imwrite(filename, img)); const Mat 
img2 = cv::imread(filename, IMREAD_UNCHANGED); EXPECT_EQ(img2.type(), img.type()); EXPECT_EQ(img2.size(), img.size()); EXPECT_LE(cvtest::norm(img, img2, NORM_INF | NORM_RELATIVE), 1e-3); const Mat img3 = cv::imread(filename, IMREAD_GRAYSCALE); ASSERT_TRUE(img3.empty()); const Mat img4 = cv::imread(filename, IMREAD_COLOR); ASSERT_TRUE(img4.empty()); EXPECT_EQ(0, remove(filename.c_str())); } ``` output ``` [ RUN ] Imgcodecs_EXR.readWrite_32FC7 [ERROR:0@3.966] global loadsave.cpp:594 cv::imread_ imread_('C:\\Users\\aidata\\AppData\\Local\\Temp\\ocvBA1A.tmp.exr'): can't read data: OpenCV(4.12.0-pre) C:\\projects\\opencv\\modules\\imgcodecs\\src\\grfmt_exr.cpp:260: error: (-2:Unspecified error) in function 'bool __cdecl cv::ExrDecoder::readData(class cv::Mat &)' > OpenCV EXR decoder needs more number of channels for multispectral images. Use cv::IMREAD_UNCHANGED mode for imread. (expected: 'img.channels() == CV_MAT_CN(type())'), where > 'img.channels()' is 1 > must be equal to > 'CV_MAT_CN(type())' is 7 [ERROR:0@3.966] global loadsave.cpp:594 cv::imread_ imread_('C:\\Users\\aidata\\AppData\\Local\\Temp\\ocvBA1A.tmp.exr'): can't read data: OpenCV(4.12.0-pre) C:\\projects\\opencv\\modules\\imgcodecs\\src\\grfmt_exr.cpp:260: error: (-2:Unspecified error) in function 'bool __cdecl cv::ExrDecoder::readData(class cv::Mat &)' > OpenCV EXR decoder needs more number of channels for multispectral images. Use cv::IMREAD_UNCHANGED mode for imread. (expected: 'img.channels() == CV_MAT_CN(type())'), where > 'img.channels()' is 3 > must be equal to > 'CV_MAT_CN(type())' is 7 [ OK ] Imgcodecs_EXR.readWrite_32FC7 (5 ms) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_exr.impl.hpp\n**Change Type:** modified\n**Context:** PR #27485: imgcodecs: OpenEXR multispectral read/write support\n**Review Line:** 74\n\n**Code Changes:**\n```diff\n }\n \n+TEST(Imgcodecs_EXR, readWrite_32FC7)\n+{ // 0-6 channels (multispectral)\n+ const string root = cvtest::TS::ptr()->get_data_path();\n+ const string filenameInput = root + \"readwrite/test32FC7.exr\";\n+ const string filenameOutput = cv::tempfile(\".exr\");\n+#ifndef GENERATE_DATA\n+ const Mat img = cv::imread(filenameInput, IMREAD_UNCHANGED);\n+#else\n+ const Size sz(3, 5);\n```", + "output": "@xaos-cz thank you for your contribution. i am also a common contributor ( a bit more experienced). I think this test is to verify that the change you made is working correctly and to show if a later change breaks your code. It doesn't matter if it's similar in form to the other tests." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #27127: Changes about when APNG has a hidden frame\n\n**Code Changes:**\n```diff\n@@ -277,6 +277,8 @@ struct CV_EXPORTS_W_SIMPLE Animation\n CV_PROP_RW std::vector durations;\n //! Vector of frames, where each Mat represents a single frame.\n CV_PROP_RW std::vector frames;\n+ //! image that can be used for the format in addition to the animation or if animation is not supported in the reader (like in PNG).\n+ CV_PROP_RW Mat still_image;\n \n /** @brief Constructs an Animation object with optional loop count and background color.\n \n```", + "output": "IMO. I agree with @vrabaud comment because it's probably not clear for the API users that first frames has a special meaning. 
And if a `still_image` attribute is introduced, we need not worry what does duration is better for hidden frame. I think it is friendly API design. (A) Hidden frame/still image is included in frames list. ---- | idx | frames | durations | |:------:|:-------:|:----------:| | 0 | still image | ? | | 1 | 1st frame| xx | | 2 | 2nd frame | xx | | 3 | 3rd frame | xx | -> should durations[0] be 0, 0xFFFF or other value ? (B) Hidden frame/still image is excluded out frames list. ---- | idx | frames | durations | |:------:|:-------:|:----------:| | 0 | 1st frame| xx | | 1 | 2nd frame | xx | | 2 | 3rd frame | xx | and still-image" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/photo/src/ccm/utils.hpp\n**Change Type:** added\n**Context:** PR #27051: Adding color correction module to photo module from opencv_contrib\n**Review Line:** 32\n\n**Code Changes:**\n```diff\n+\n+/** @brief maskCopyTo a function to delete unsatisfied elementwise.\n+ @param src the input array, type of Mat.\n+ @param mask operation mask that used to choose satisfided elementwise.\n+ */\n+Mat maskCopyTo(const Mat& src, const Mat& mask);\n+\n+/** @brief multiple the function used to compute an array with n channels\n+ mulipied by ccm.\n+ @param xyz the input array, type of Mat.\n+ @param ccm the ccm matrix to make color correction.\n```", + "output": "@asmorkalov It cannot directly be replaced with Mat.copyTo() as the return is a flattened vector with only the values where mask == 1. Changed it to use Mat.copyTo() internally instead of the current logic" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/photo/include/opencv2/photo/ccm.hpp\n**Change Type:** added\n**Context:** PR #27051: Adding color correction module to photo module from opencv_contrib\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,322 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// Author: Longbu Wang \n+// Jinheng Zhang \n+// Chenqi Shan \n+\n+#ifndef OPENCV_PHOTO_CCM_HPP\n```", + "output": "@asmorkalov done at a lot of places in public API. It is at a lot of places internally also will do that as well" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/photo/src/ccm/utils.cpp\n**Change Type:** added\n**Context:** PR #27051: Adding color correction module to photo module from opencv_contrib\n**Review Line:** 16\n\n**Code Changes:**\n```diff\n+namespace cv {\n+namespace ccm {\n+\n+void gammaCorrection(InputArray _src, OutputArray _dst, double gamma)\n+{\n+ Mat src = _src.getMat();\n+ CV_Assert(gamma > 0);\n+\n+ double maxVal;\n+ int depth = src.depth();\n+ switch (depth)\n```", + "output": "@asmorkalov CV_ASSERT for gamma and type checks using switch have already been added. Please check the updated file." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/photo/src/ccm/utils.hpp\n**Change Type:** added\n**Context:** PR #27051: Adding color correction module to photo module from opencv_contrib\n**Review Line:** 39\n\n**Code Changes:**\n```diff\n+/** @brief multiple the function used to compute an array with n channels\n+ mulipied by ccm.\n+ @param xyz the input array, type of Mat.\n+ @param ccm the ccm matrix to make color correction.\n+ */\n+Mat multiple(const Mat& xyz, const Mat& ccm);\n+\n+/** @brief multiple the function used to get the mask of saturated colors,\n+ colors between low and up will be choosed.\n+ @param src the input array, type of Mat.\n+ @param low the threshold to choose saturated colors\n```", + "output": "@asmorkalov In some cases input to the function is not 3-channel RGB image, we tried replacing it but it did not work." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/misc/java/test/VideoCaptureTest.java\n**Change Type:** modified\n**Context:** PR #27421: Cover all seek directions in VideoCapture Java test\n\n**Code Changes:**\n```diff\n@@ -123,6 +123,10 @@ public long seek(long offset, int origin)\n {\n if (origin == 0)\n f.seek(offset);\n+ else if (origin == 1)\n+ f.seek(f.getFilePointer() + offset);\n+ else if (origin == 2)\n+ f.seek(f.length() + offset);\n return f.getFilePointer();\n }\n```", + "output": "Thank you, fixed" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/distransform.cpp\n**Change Type:** modified\n**Context:** PR #24535: Handle huge images in IPP distanceTransform\n\n**Code Changes:**\n```diff\n@@ -817,14 +817,15 @@ void cv::distanceTransform( InputArray _src, OutputArray _dst, OutputArray _labe\n Size size = src.size();\n \n int border = maskSize == CV_DIST_MASK_3 ? 1 : 2;\n- Mat temp( size.height + border*2, size.width + border*2, CV_32SC1 );\n+ Mat temp;\n \n if( !need_labels )\n {\n if( maskSize == CV_DIST_MASK_3 )\n```", + "output": "@opencv-alalek, @asmorkalov, I have verified that this change disables IPP optimization for a valid usecase with `dst` as a ROI. `isContinuous` is false but this is not the same if `Mat` is not continuous after `.create` for a big memory. Am I right that in case of `.create` called on an empty `Mat` but with a large sizes (for example, 70000x70000), Mat memory is constructed from a multiple continuous blocks and step is irregular?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/color_hsv.dispatch.cpp\n**Change Type:** modified\n**Context:** PR #27370: 5.x merge 4.x - OpenCV Extra: https://github.com/opencv/opencv_extra/pull/1258 Manual fixes: - Fixed ifdef handling bug in header parser. B...\n**Review Line:** 274\n\n**Code Changes:**\n```diff\n+ for(int i = 1; i < 256; i++ )\n+ sdiv[i] = saturate_cast(v/(1.*i));\n+\n+ cv::UMat result;\n+ sdiv_mat.copyTo(result);\n+ return result;\n+}\n+\n+static UMat init_hdiv_table(int hrange)\n+{\n+ cv::Mat hdiv_mat(1, 256, CV_32SC1);\n```", + "output": "@opencv-alalek @vpisarev There is no handy way to initialize UMat from std::vector or other simple array any more in 5.x. I have to add such ugly code to get it done. 
Could you recommend better solution? Let's discuss UMat constructors again." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_ffmpeg_impl.hpp\n**Change Type:** modified\n**Context:** PR #27153: fix #26276: cv::VideoWriter fails writing colorless images\n\n**Code Changes:**\n```diff\n@@ -2525,11 +2525,13 @@ bool CvVideoWriter_FFMPEG::writeFrame( const unsigned char* data, int step, int\n // check parameters\n if (input_pix_fmt == AV_PIX_FMT_BGR24) {\n if (cn != 3) {\n+ CV_LOG_WARNING(NULL, \"write frame skipped - expected 3 channels but got \" << cn);\n return false;\n }\n }\n else if (input_pix_fmt == AV_PIX_FMT_GRAY8 || input_pix_fmt == AV_PIX_FMT_GRAY16LE) {\n if (cn != 1) {\n```", + "output": "Thanks for pointing that out! The original logic only checked if cn == 1, so I mistakenly assumed both CV_8UC1 and CV_16UC1 were acceptable. I’ll revert this logic in the next commit." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_ffmpeg_impl.hpp\n**Change Type:** modified\n**Context:** PR #27153: fix #26276: cv::VideoWriter fails writing colorless images\n\n**Code Changes:**\n```diff\n@@ -2525,11 +2525,13 @@ bool CvVideoWriter_FFMPEG::writeFrame( const unsigned char* data, int step, int\n // check parameters\n if (input_pix_fmt == AV_PIX_FMT_BGR24) {\n if (cn != 3) {\n+ CV_LOG_WARNING(NULL, \"write frame skipped - expected 3 channels but got \" << cn);\n return false;\n }\n }\n else if (input_pix_fmt == AV_PIX_FMT_GRAY8 || input_pix_fmt == AV_PIX_FMT_GRAY16LE) {\n if (cn != 1) {\n```", + "output": "Thanks for the heads-up, I’ll avoid in-place operations moving forward." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_mat.cpp\n**Change Type:** modified\n**Context:** PR #27362: Fix empty ND-array construction\n**Review Line:** 1466\n\n**Code Changes:**\n```diff\n+ EXPECT_LE(mat_nd.total(), 1u);\n+ EXPECT_FALSE(mat_nd.empty());\n+#endif\n+ EXPECT_EQ(mat.total(), 0u);\n+ EXPECT_TRUE(mat.empty());\n+}\n+\n TEST(Core_InputArray, empty)\n {\n vector > data;\n```", + "output": "I fixed the test. @vpisarev , please decide with @opencv-alalek if it worth adding the test on total and empty." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/shapedescr.cpp\n**Change Type:** modified\n**Context:** PR #26299: Add getClosestEllipsePoints() function to get the closest point on an ellipse\n\n**Code Changes:**\n```diff\n@@ -877,6 +877,121 @@ cv::RotatedRect cv::fitEllipseDirect( InputArray _points )\n return box;\n }\n \n+namespace cv\n+{\n+// @misc{Chatfield2017,\n+// author = {Chatfield, Carl},\n+// title = {A Simple Method for Distance to Ellipse},\n+// year = {2017},\n```", + "output": "Thanks, I have updated the code." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_legacy_ops.h\n**Change Type:** added\n**Context:** PR #27327: Restored legacy intrinsics operators in a separate header - resolves #27267 * New header with removed universal intrinsics operator overloa...\n**Review Line:** 22\n\n**Code Changes:**\n```diff\n+#warning \"Operators might conflict with built-in functions on RISC-V platform\"\n+#endif\n+\n+#if defined(CV_VERSION) && CV_VERSION_MAJOR == 4 && CV_VERSION_MINOR < 11\n+#warning \"Older versions of OpenCV (<4.11) already have Universal Intrinscs operators\"\n+#endif\n+\n+\n+namespace cv { namespace hal {\n+\n+#define BIN_OP(OP, FUN) \\\n```", + "output": "Thanks a lot for the feedback. The operator overload was removed due to problems with RISC-V platform. The header with overloaded operators was added for compatibility on previously supported platforms (we have such issue report). You do not need to touch your code. v_xxx functions is the way to go further." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_legacy_ops.h\n**Change Type:** added\n**Context:** PR #27327: Restored legacy intrinsics operators in a separate header - resolves #27267 * New header with removed universal intrinsics operator overloa...\n**Review Line:** 22\n\n**Code Changes:**\n```diff\n+#warning \"Operators might conflict with built-in functions on RISC-V platform\"\n+#endif\n+\n+#if defined(CV_VERSION) && CV_VERSION_MAJOR == 4 && CV_VERSION_MINOR < 11\n+#warning \"Older versions of OpenCV (<4.11) already have Universal Intrinscs operators\"\n+#endif\n+\n+\n+namespace cv { namespace hal {\n+\n+#define BIN_OP(OP, FUN) \\\n```", + "output": "@s-trinh , you are right. 
Operators have been removed from AArch64 backend since 4.9 and from other backends since 4.11." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** 3rdparty/hal_rvv/hal_rvv_1p0/types.hpp\n**Change Type:** modified\n**Context:** PR #27060: [hal_rvv] Add cv::integral implementation and more types of input for test\n**Review Line:** 705\n\n**Code Changes:**\n```diff\n+ template \\\n+ inline void vset(TWO::VecType v, ONE::VecType val) { \\\n+ __riscv_vset_v_##TYPE##ONE_LMUL##_##TYPE##TWO_LMUL(v, idx, val); \\\n+ } \\\n+ inline TWO::VecType vcreate(ONE::VecType v0, ONE::VecType v1) { \\\n+ return __riscv_vcreate_v_##TYPE##ONE_LMUL##_##TYPE##TWO_LMUL(v0, v1); \\\n+ }\n+#endif\n+\n+HAL_RVV_GROUP(RVV_I8M1, RVV_I8M2, i8, m1, m2)\n+HAL_RVV_GROUP(RVV_I8M2, RVV_I8M4, i8, m2, m4)\n```", + "output": "Thank you for pointing that out! Should I change the implementation of `vcreate` and fall back to `vset` when clang version is under 18, or just add `vset` for `types.hpp`?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** 3rdparty/hal_rvv/hal_rvv_1p0/types.hpp\n**Change Type:** modified\n**Context:** PR #27060: [hal_rvv] Add cv::integral implementation and more types of input for test\n**Review Line:** 705\n\n**Code Changes:**\n```diff\n+ template \\\n+ inline void vset(TWO::VecType v, ONE::VecType val) { \\\n+ __riscv_vset_v_##TYPE##ONE_LMUL##_##TYPE##TWO_LMUL(v, idx, val); \\\n+ } \\\n+ inline TWO::VecType vcreate(ONE::VecType v0, ONE::VecType v1) { \\\n+ return __riscv_vcreate_v_##TYPE##ONE_LMUL##_##TYPE##TWO_LMUL(v0, v1); \\\n+ }\n+#endif\n+\n+HAL_RVV_GROUP(RVV_I8M1, RVV_I8M2, i8, m1, m2)\n+HAL_RVV_GROUP(RVV_I8M2, RVV_I8M4, i8, m2, m4)\n```", + "output": "> I prefer both. Note that `__riscv_vset` needs an immediate number for the second argument. Thanks, I will update this patch later" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_wasm.hpp\n**Change Type:** modified\n**Context:** PR #26109: Replace operators with wrapper functions on universal intrinsics backends\n**Review Line:** 1697\n\n**Code Changes:**\n```diff\n }\n inline unsigned v_reduce_sad(const v_int8x16& a, const v_int8x16& b)\n {\n@@ -1703,19 +1696,19 @@ inline unsigned v_reduce_sad(const v_int8x16& a, const v_int8x16& b)\n v_expand(v_absdiff(a, b), l16, h16);\n v_expand(l16, l16_l32, l16_h32);\n v_expand(h16, h16_l32, h16_h32);\n- return v_reduce_sum(l16_l32+l16_h32+h16_l32+h16_h32);\n+ return v_reduce_sum(v_add(v_add(l16_l32, l16_h32), v_add(h16_l32, h16_h32)));\n }\n inline unsigned v_reduce_sad(const v_uint16x8& a, const v_uint16x8& b)\n```", + "output": "Thanks! I have modified it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_intrin_utils.hpp\n**Change Type:** modified\n**Context:** PR #27337: build: fix warnings from recent gcc versions\n\n**Code Changes:**\n```diff\n@@ -24,6 +24,17 @@ void test_hal_intrin_float16();\n \n //==================================================================================================\n \n+#if defined (__GNUC__) && defined(__has_warning)\n+ #if __has_warning(\"-Wmaybe-uninitialized\")\n+ #define CV_DISABLE_GCC_MAYBE_UNINITIALIZED_WARNINGS\n+ #endif\n+#endif\n+\n```", + "output": "yes, thanks for the catch." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_png.cpp\n**Change Type:** modified\n**Context:** PR #27297: imgcodecs: png: add log if first chunk is not IHDR\n\n**Code Changes:**\n```diff\n@@ -110,6 +110,44 @@ TEST(Imgcodecs_Png, read_color_palette_with_alpha)\n EXPECT_EQ(img.at(0, 1), Vec3b(255, 0, 0));\n }\n \n+// IHDR shall be first.\n+// See https://github.com/opencv/opencv/issues/27295\n+TEST(Imgcodecs_Png, decode_regression27295)\n+{\n+ vector buff;\n+ Mat src = Mat::zeros(240, 180, CV_8UC3);\n```", + "output": "Thank you for your review, I fixed this and next line. (IHDR does not expect the length of chunk data is 4.) ```diff - // If first is IHDR chunk, output shall not be empty. - // 8 means PNG sigunature length. - // 4 means lenght of chunk. + // If IHDR chunk found as the first chunk, output shall not be empty. + // 8 means PNG signature length. + // 4 means length field(uint32_t). ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_png.cpp\n**Change Type:** modified\n**Context:** PR #27297: imgcodecs: png: add log if first chunk is not IHDR\n**Review Line:** 297\n\n**Code Changes:**\n```diff\n+ {\n+ CV_LOG_ERROR(NULL, \"CgBI chunk (Apple private) found as the first chunk. IHDR is expected.\");\n+ return false;\n+ }\n if (id != id_IHDR)\n+ {\n+ CV_LOG_ERROR(NULL, \"IHDR chunk shall be first. This data may be broken or malformed.\");\n return false;\n+ }\n \n m_is_fcTL_loaded = false;\n```", + "output": "I agreed to the message change. This fix resulted in a complicated if statement, so I flattened it. Before ```c++ if (id != id_IHDR) { if (id == id_CgBI) { CV_LOG_ERROR(NULL, \"CgBI chunk (Apple private) found as the first chunk. IHDR is expected.\"); } else { CV_LOG_ERROR(NULL, \"IHDR chunk shall be first. This data may be broken or malformed.\"); } return false; } ``` After ```c++ if (id == id_CgBI) { CV_LOG_ERROR(NULL, \"CgBI chunk (Apple private) found as the first chunk. IHDR is expected.\"); return false; } if (id != id_IHDR) { CV_LOG_ERROR(NULL, \"IHDR chunk shall be first. This data may be broken or malformed.\"); return false; } ``` I have also reflected this correction in the comments of the test code. Thank you very much !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/median_blur.simd.hpp\n**Change Type:** modified\n**Context:** PR #27299: imgproc: medianblur: Performance improvement\n\n**Code Changes:**\n```diff\n@@ -13,6 +13,7 @@\n // Copyright (C) 2000-2008, 2018, Intel Corporation, all rights reserved.\n // Copyright (C) 2009, Willow Garage Inc., all rights reserved.\n // Copyright (C) 2014-2015, Itseez Inc., all rights reserved.\n+// Copyright (C) 2025, Advanced Micro Devices, all rights reserved.\n // Third party copyrights are property of their respective owners.\n //\n // Redistribution and use in source and binary forms, with or without modification,\n@@ -693,8 +694,16 @@ medianBlur_SortNet( const Mat& _src, Mat& _dst, int m )\n #else\n```", + "output": "Replaced with if else and offset in the next commit. Please check." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/median_blur.simd.hpp\n**Change Type:** modified\n**Context:** PR #27299: imgproc: medianblur: Performance improvement\n\n**Code Changes:**\n```diff\n@@ -13,6 +13,7 @@\n // Copyright (C) 2000-2008, 2018, Intel Corporation, all rights reserved.\n // Copyright (C) 2009, Willow Garage Inc., all rights reserved.\n // Copyright (C) 2014-2015, Itseez Inc., all rights reserved.\n+// Copyright (C) 2025, Advanced Micro Devices, all rights reserved.\n // Third party copyrights are property of their respective owners.\n //\n // Redistribution and use in source and binary forms, with or without modification,\n@@ -693,8 +694,16 @@ medianBlur_SortNet( const Mat& _src, Mat& _dst, int m )\n #else\n```", + "output": "thanks for performance comparison. Tried your patch, it seems to fail \"Imgproc_MedianBlur.accuracy\" for 21x15 resolution. 
- Check for width greater than nlanes is necessary, to avoid negative j. Post fixing the issue, both code seems to yield similar performance (with minor run to run variation) on genoa." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/median_blur.simd.hpp\n**Change Type:** modified\n**Context:** PR #27299: imgproc: medianblur: Performance improvement\n\n**Code Changes:**\n```diff\n@@ -13,6 +13,7 @@\n // Copyright (C) 2000-2008, 2018, Intel Corporation, all rights reserved.\n // Copyright (C) 2009, Willow Garage Inc., all rights reserved.\n // Copyright (C) 2014-2015, Itseez Inc., all rights reserved.\n+// Copyright (C) 2025, Advanced Micro Devices, all rights reserved.\n // Third party copyrights are property of their respective owners.\n //\n // Redistribution and use in source and binary forms, with or without modification,\n@@ -693,8 +694,16 @@ medianBlur_SortNet( const Mat& _src, Mat& _dst, int m )\n #else\n```", + "output": "Am new to OpenCV codebase, thanks for your valuable feedback to maintain code consistency. Updated the patch, please check." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** hal/riscv-rvv/src/core/sqrt.cpp\n**Change Type:** added\n**Context:** PR #27301: hal/riscv-rvv: refactor the building process\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,74 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level\n+// directory of this distribution and at http://opencv.org/license.html.\n+\n+// Copyright (C) 2025, Institute of Software, Chinese Academy of Sciences.\n+\n+#include \"rvv_hal.hpp\"\n+#include \"common.hpp\"\n+\n```", + "output": "@asmorkalov oops, found a typo here ``` return sqrt>(dst, dst, len); ``` should be `(src, dst, len)`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/attention_layer.cpp\n**Change Type:** modified\n**Context:** PR #27238: Rotary positional embeddings\n\n**Code Changes:**\n```diff\n@@ -24,6 +24,105 @@ static void packWeight(size_t num_heads, size_t head_size, size_t input_hidden_s\n }\n }\n \n+\n+static void rotationKernel(\n+ float* data, const float* rotation_table,\n+ size_t seq_len, size_t d\n+)\n+{\n```", + "output": "@fengyuentau please correct me, if I'm wrong." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/attention_layer.cpp\n**Change Type:** modified\n**Context:** PR #27238: Rotary positional embeddings\n\n**Code Changes:**\n```diff\n@@ -24,6 +24,105 @@ static void packWeight(size_t num_heads, size_t head_size, size_t input_hidden_s\n }\n }\n \n+\n+static void rotationKernel(\n+ float* data, const float* rotation_table,\n+ size_t seq_len, size_t d\n+)\n+{\n```", + "output": "@asmorkalov, ``` const size_t w = VTraits::vlanes(); ... int idx_buf[w]; ``` caused **[-Wvla-cxx-extension]** in GCC and the build for Win64 failed. I wasn't sure what's the proper way to deal with this.." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_obsensor/obsensor_uvc_stream_channel.cpp\n**Change Type:** modified\n**Context:** PR #27230: videoio: add Orbbec Gemini 330 camera support\n**Review Line:** 69\n\n**Code Changes:**\n```diff\n #define fourCc2Int(a, b, c, d) \\\n@@ -62,6 +66,7 @@ const std::map fourccToOBFormat = {\n {fourCc2Int('M', 'J', 'P', 'G'), FRAME_FORMAT_MJPG},\n {fourCc2Int('Y', '1', '6', ' '), FRAME_FORMAT_Y16},\n {fourCc2Int('Y', '1', '4', ' '), FRAME_FORMAT_Y14},\n+ {fourCc2Int('Z', '1', '6', ' '), FRAME_FORMAT_Y16}\n };\n \n StreamType parseUvcDeviceNameToStreamType(const std::string& devName)\n@@ -204,7 +209,6 @@ DepthFrameUnpacker::~DepthFrameUnpacker() {\n delete[] outputDataBuf_;\n```", + "output": "@kaingwade Yes—Z16 on the G330‑series camera is simply Y16. Also, could you please unplug and reconnect the camera and then retest the G336L? In my tests, the G336L is able to output both depth and RGB streams." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_obsensor/obsensor_uvc_stream_channel.cpp\n**Change Type:** modified\n**Context:** PR #27230: videoio: add Orbbec Gemini 330 camera support\n**Review Line:** 69\n\n**Code Changes:**\n```diff\n #define fourCc2Int(a, b, c, d) \\\n@@ -62,6 +66,7 @@ const std::map fourccToOBFormat = {\n {fourCc2Int('M', 'J', 'P', 'G'), FRAME_FORMAT_MJPG},\n {fourCc2Int('Y', '1', '6', ' '), FRAME_FORMAT_Y16},\n {fourCc2Int('Y', '1', '4', ' '), FRAME_FORMAT_Y14},\n+ {fourCc2Int('Z', '1', '6', ' '), FRAME_FORMAT_Y16}\n };\n \n StreamType parseUvcDeviceNameToStreamType(const std::string& devName)\n@@ -204,7 +209,6 @@ DepthFrameUnpacker::~DepthFrameUnpacker() {\n delete[] outputDataBuf_;\n```", + "output": "Upgrading the FM version can solve the issue. @sirudoi Please add the FM requirement in the description." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_obsensor/obsensor_uvc_stream_channel.cpp\n**Change Type:** modified\n**Context:** PR #27230: videoio: add Orbbec Gemini 330 camera support\n\n**Code Changes:**\n```diff\n@@ -35,19 +35,23 @@\n \n namespace cv {\n namespace obsensor {\n+const ObExtensionUnit OBSENSOR_COMMON_XU_UNIT = { XU_UNIT_ID_COMMON, { 0xA55751A1, 0xF3C5, 0x4A5E, { 0x8D, 0x5A, 0x68, 0x54, 0xB8, 0xFA, 0x27, 0x16 } } };\n+const ObExtensionUnit OBSENSOR_G330_XU_UNIT = { XU_UNIT_ID_G330, { 0xC9606CCB, 0x594C, 0x4D25, { 0xaf, 0x47, 0xcc, 0xc4, 0x96, 0x43, 0x59, 0x95 } } };\n+\n const uint8_t OB_EXT_CMD0[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0x52, 0x00, 0x5B, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00 };\n const uint8_t OB_EXT_CMD1[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0x54, 0x00, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };\n const uint8_t OB_EXT_CMD2[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0x56, 0x00, 0x0d, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00 };\n```", + "output": "> Why do you need resize and crop for the camera frames? I propose to return them in original size. Also resized does not support in-place processing. The code always triggers srcMat realloc. Hi, the resize/crop operations ensured pixel alignment between depth and color streams with mismatched resolutions. Without them, users would receive misaligned data. I now enforce matching resolutions to eliminate these steps. Thank you for your insight!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/src/mcc/checker_detector.hpp\n**Change Type:** added\n**Context:** PR #26906: Adding macbeth chart detector to objdetect module from opencv_contrib\n**Review Line:** 48\n\n**Code Changes:**\n```diff\n+ typedef std::vector PointsVector;\n+ typedef std::vector ContoursVector;\n+\n+public:\n+ CCheckerDetectorImpl();\n+ CCheckerDetectorImpl(const dnn::Net& _net){\n+ net = _net;\n+ }\n+ virtual ~CCheckerDetectorImpl();\n+\n+ bool process(InputArray image, const std::vector ®ionsOfInterest,\n```", + "output": "This PR breaks build without _opencv_dnn_ - all related code should be guarded by `#ifdef HAVE_OPENCV_DNN` or _objdetect_->_dnn_ dependency should be required. **cc** @asmorkalov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/yolo_detector.cpp\n**Change Type:** modified\n**Context:** PR #25794: Add sample support of `YOLOv9` and `YOLOv10` in OpenCV\n**Review Line:** 138\n\n**Code Changes:**\n```diff\n@@ -131,25 +135,30 @@ void yoloPostProcessing(\n // remove the second element\n outs.pop_back();\n // unsqueeze the first dimension\n- outs[0] = outs[0].reshape(0, std::vector{1, 8400, 84});\n+ outs[0] = outs[0].reshape(0, std::vector{1, 8400, nc + 4});\n }\n \n+ // assert if last dim is 85 or 84\n+ CV_CheckEQ(outs[0].dims, 3, \"Invalid output shape. The shape should be [1, #anchors, 85 or 84]\");\n+ CV_CheckEQ((outs[0].size[2] == nc + 5 || outs[0].size[2] == 80 + 4), true, \"Invalid output shape: \");\n```", + "output": "Got it, thanks! So do you mind to compute `nc` from total number of `outs[0]` elements?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCXXCompiler.cmake\n**Change Type:** modified\n**Context:** PR #27107: build: Check supported C++ standard features and user setting\n\n**Code Changes:**\n```diff\n@@ -207,30 +207,35 @@ if(CMAKE_VERSION VERSION_LESS \"3.1\")\n endforeach()\n endif()\n \n+# See https://github.com/opencv/opencv/issues/27105\n+# - CMAKE_COMPILE_FEATURES is used to detect what features are available by the compiler.\n+# - CMAKE_CXX_STANDARD is used to detect what features are available in this configuration.\n if(NOT OPENCV_SKIP_CMAKE_CXX_STANDARD)\n+ if(DEFINED CMAKE_CXX_STANDARD AND ((CMAKE_CXX_STANDARD EQUAL 98) OR (CMAKE_CXX_STANDARD LESS 11)))\n+ message(FATAL_ERROR \"OpenCV 4.x requires C++11, but your configuration does not enable(CMAKE_CXX_STANDARD=${CMAKE_CXX_STANDARD}).\")\n```", + "output": "Thank you for comment, I fixed it. It seems difficult to completely eliminate handling C++98. so I reduce it as possible For example ... ```cmake # if (CMAKE_CXX_STANDARD EQUAL 98) # message(FATAL_ERROR \"...\") # endif() if(CMAKE_CXX_STANDARD GREATER_EQUAL 11) set(HAVE_CXX11 ON) endif() if(CMAKE_CXX_STANDARD GREATER_EQUAL 17) set(HAVE_CXX17 ON) endif() ``` | CMAKE_CXX_STANDARD | HAVE_CXX11 | HAVE_CXX17 | |:---------------------------:|:---------------:|:---------------:| | 11 | ON | OFF | | 14 | ON | OFF | | 17 | ON | ON | | 20 | ON | ON | | 23 | ON | ON | | 98 | __ON(Not good)__ | __ON(Not good)__ |" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_mat.cpp\n**Change Type:** modified\n**Context:** PR #25394: Added reinterpret() method to Mat to convert meta-data without actual data conversion\n**Review Line:** 1340\n\n**Code Changes:**\n```diff\n+ cv::OutputArray C(A);\n+ cv::Mat B = C.reinterpret(CV_32FC1);\n+\n+ EXPECT_EQ(A.data, B.data);\n+ EXPECT_EQ(B.type(), CV_32FC1);\n+}\n+\n TEST(Core_Mat, push_back)\n {\n Mat a = (Mat_(1,2) << 3.4884074f, 1.4159607f);\n```", + "output": "Okay, I've already made the modifications to the test case and the convertTo code. Thanks you very much." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/mat.hpp\n**Change Type:** modified\n**Context:** PR #25394: Added reinterpret() method to Mat to convert meta-data without actual data conversion\n\n**Code Changes:**\n```diff\n@@ -371,6 +371,7 @@ class CV_EXPORTS _OutputArray : public _InputArray\n void release() const;\n void clear() const;\n void setTo(const _InputArray& value, const _InputArray & mask = _InputArray()) const;\n+ Mat reinterpret( int type ) const;\n \n void assign(const UMat& u) const;\n void assign(const Mat& m) const;\n@@ -1322,6 +1323,15 @@ class CV_EXPORTS Mat\n */\n```", + "output": "@vpisarev I had done" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/matrix_wrap.cpp\n**Change Type:** modified\n**Context:** PR #25394: Added reinterpret() method to Mat to convert meta-data without actual data conversion\n\n**Code Changes:**\n```diff\n@@ -1656,6 +1656,12 @@ void _OutputArray::create(int d, const int* sizes, int mtype, int i,\n CV_Error(Error::StsNotImplemented, \"Unknown/unsupported array type\");\n }\n \n+Mat _OutputArray::reinterpret(int mtype) const\n+{\n+ mtype = CV_MAT_TYPE(mtype);\n+ return getMat().reinterpret(mtype);\n+}\n+\n```", + "output": "@vpisarev I had done" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/matrix.cpp\n**Change Type:** modified\n**Context:** PR #25394: Added reinterpret() method to Mat to convert meta-data without actual data conversion\n\n**Code Changes:**\n```diff\n@@ -1261,6 +1261,16 @@ Mat Mat::reshape(int _cn, const std::vector& _newshape) const\n return reshape(_cn, (int)_newshape.size(), &_newshape[0]);\n }\n \n+Mat Mat::reinterpret(int type) const\n+{\n+ type = CV_MAT_TYPE(type);\n+ CV_Assert(CV_ELEM_SIZE(this->type()) == CV_ELEM_SIZE(type));\n+ Mat m = *this;\n+ m.flags = (m.flags & ~CV_MAT_TYPE_MASK) | type;\n```", + "output": "@vpisarev I had done" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_png.cpp\n**Change Type:** modified\n**Context:** PR #27138: Fix heap buffer overflow and use after free in imgcodecs\n\n**Code Changes:**\n```diff\n@@ -401,7 +401,7 @@ bool PngDecoder::readData( Mat& img )\n Mat mat_cur = Mat::zeros(img.rows, img.cols, m_type);\n uint32_t id = 0;\n uint32_t j = 0;\n- uint32_t imagesize = m_width * m_height * mat_cur.channels();\n+ uint32_t imagesize = m_width * m_height * (uint32_t)mat_cur.elemSize();\n m_is_IDAT_loaded = false;\n \n if (m_frame_no == 0)\n@@ -451,15 +451,26 @@ bool PngDecoder::readData( Mat& img )\n```", + "output": "thank you for fixing this. uint32_t imagesize = m_width * m_height * (int)mat_cur.elemSize();" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/imgwarp.cpp\n**Change Type:** modified\n**Context:** PR #27108: When WARP_INVERSE_MAP is used, accelerate the calculation with multi-threading\n\n**Code Changes:**\n```diff\n@@ -3737,41 +3737,46 @@ void cv::warpPolar(InputArray _src, OutputArray _dst, Size dsize,\n else\n Kmag = maxRadius / ssize.width;\n \n- int x, y;\n Mat bufx, bufy, bufp, bufa;\n \n bufx = Mat(1, dsize.width, CV_32F);\n bufy = Mat(1, dsize.width, CV_32F);\n bufp = Mat(1, dsize.width, CV_32F);\n```", + "output": "Thank you for your correction. I have already made the revision." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_gif.cpp\n**Change Type:** modified\n**Context:** PR #26971: imgcodecs: gif: support animated gif without loop\n\n**Code Changes:**\n```diff\n@@ -414,6 +414,110 @@ TEST(Imgcodecs_Gif, decode_disposal_method)\n }\n }\n \n+// See https://github.com/opencv/opencv/issues/26970\n+typedef testing::TestWithParam Imgcodecs_Gif_loop_count;\n+TEST_P(Imgcodecs_Gif_loop_count, imwriteanimation)\n+{\n+ const string gif_filename = cv::tempfile(\".gif\");\n+\n```", + "output": "I fixed it, thank you for your comment !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #26971: imgcodecs: gif: support animated gif without loop\n**Review Line:** 272\n\n**Code Changes:**\n```diff\n+ */\n CV_PROP_RW int loop_count;\n //! Background color of the animation in BGRA format.\n CV_PROP_RW Scalar bgcolor;\n //! Duration for each frame in milliseconds.\n+ /*! @note (GIF) Due to file format limitation\n+ * - Durations must be multiples of 10 milliseconds. Any provided value will be rounded down to the nearest 10ms (e.g., 88ms → 80ms).\n+ * - 0ms(or smaller than expected in user application) duration may cause undefined behavior, e.g. it is handled with default duration.\n+ * - Over 65535 * 10 milliseconds duration is not supported.\n+ */\n CV_PROP_RW std::vector durations;\n```", + "output": "please consider ``` /*! @note (GIF) Due to file format limitation * - Durations must be multiples of 10 milliseconds. Any provided value will be rounded down to the nearest 10ms (e.g., 88ms → 80ms). * - 0ms(or smaller than expected in user application) duration may cause undefined behavior, e.g. 
it is handled with default duration. * - Over 65535 * 10 milliseconds duration is not supported. */ ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #26971: imgcodecs: gif: support animated gif without loop\n**Review Line:** 272\n\n**Code Changes:**\n```diff\n+ */\n CV_PROP_RW int loop_count;\n //! Background color of the animation in BGRA format.\n CV_PROP_RW Scalar bgcolor;\n //! Duration for each frame in milliseconds.\n+ /*! @note (GIF) Due to file format limitation\n+ * - Durations must be multiples of 10 milliseconds. Any provided value will be rounded down to the nearest 10ms (e.g., 88ms → 80ms).\n+ * - 0ms(or smaller than expected in user application) duration may cause undefined behavior, e.g. it is handled with default duration.\n+ * - Over 65535 * 10 milliseconds duration is not supported.\n+ */\n CV_PROP_RW std::vector durations;\n```", + "output": "This example is helpful to know about duration about 10ms unit, thank you for your comment ! > - Over 65535 milliseconds duration is not supported. For GIF89a specification, I think it seems to have to change from `655350`. https://www.w3.org/Graphics/GIF/spec-gif89a.txt > 23. Graphic Control Extension. > vii) Delay Time - If not 0, this field specifies the number of > hundredths (1/100) of a second to wait before continuing with the > processing of the Data Stream. The clock starts ticking immediately > after the graphic is rendered. This field may be used in > conjunction with the User Input Flag field. We can store`Delay Time` value as 16 bit unsigned integer with 10ms unit, so a range from 10ms to 655350ms are valid. ```diff // GIF file stores duration in 10ms unit. 
const int frameDelay10ms = cvRound(frameDelay / 10); CV_LOG_IF_WARNING(NULL, (frameDelay10ms == 0), cv::format(\"frameDelay(%d) is rounded to 0ms, its behaviour is user application depended.\", frameDelay)); CV_CheckLE(frameDelay10ms, 65535, \"It requires to be stored in WORD\"); ``` Both 65535ms(about 1min) and 655350ms(about 11min) are very long durations as animation frames. So this restriction is not expected to affect user experience. To make compatibility with other codecs, should I limit with `CV_CheckLE(frameDelay, 65535, \"Unsupported duration\")` ?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #26971: imgcodecs: gif: support animated gif without loop\n**Review Line:** 272\n\n**Code Changes:**\n```diff\n+ */\n CV_PROP_RW int loop_count;\n //! Background color of the animation in BGRA format.\n CV_PROP_RW Scalar bgcolor;\n //! Duration for each frame in milliseconds.\n+ /*! @note (GIF) Due to file format limitation\n+ * - Durations must be multiples of 10 milliseconds. Any provided value will be rounded down to the nearest 10ms (e.g., 88ms → 80ms).\n+ * - 0ms(or smaller than expected in user application) duration may cause undefined behavior, e.g. it is handled with default duration.\n+ * - Over 65535 * 10 milliseconds duration is not supported.\n+ */\n CV_PROP_RW std::vector durations;\n```", + "output": "Thank you for reply ! I fixed it and to use `milliseconds` instead of `millseconds`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_gif.cpp\n**Change Type:** modified\n**Context:** PR #26971: imgcodecs: gif: support animated gif without loop\n\n**Code Changes:**\n```diff\n@@ -47,8 +47,8 @@ bool GifDecoder::readHeader() {\n return false;\n }\n \n- String signature(6, ' ');\n- m_strm.getBytes((uchar*)signature.data(), 6);\n+ std::string signature(6, ' ');\n+ m_strm.getBytes((uchar*)signature.c_str(), 6);\n CV_Assert(signature == R\"(GIF87a)\" || signature == R\"(GIF89a)\");\n \n```", + "output": "Thank you for comment, I fixed it and other similar usage too." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #26971: imgcodecs: gif: support animated gif without loop\n\n**Code Changes:**\n```diff\n@@ -118,8 +118,8 @@ enum ImwriteFlags {\n IMWRITE_JPEGXL_EFFORT = 641,//!< For JPEG XL, encoder effort/speed level without affecting decoding speed; it is between 1 (fastest) and 10 (slowest). Default is 7.\n IMWRITE_JPEGXL_DISTANCE = 642,//!< For JPEG XL, distance level for lossy compression: target max butteraugli distance, lower = higher quality, 0 = lossless; range: 0 .. 25. Default is 1.\n IMWRITE_JPEGXL_DECODING_SPEED = 643,//!< For JPEG XL, decoding speed tier for the provided options; minimum is 0 (slowest to decode, best quality/density), and maximum is 4 (fastest to decode, at the cost of some quality/density). Default is 0.\n- IMWRITE_GIF_LOOP = 1024,//!< For GIF, it can be a loop flag from 0 to 65535. Default is 0 - loop forever.\n- IMWRITE_GIF_SPEED = 1025,//!< For GIF, it is between 1 (slowest) and 100 (fastest). Default is 96.\n+ IMWRITE_GIF_LOOP = 1024, //!< Not functional since 4.12.0. 
Replaced by cv::Animation::loop_count.\n+ IMWRITE_GIF_SPEED = 1025, //!< Not functional since 4.12.0. Replaced by cv::Animation::durations.\n IMWRITE_GIF_QUALITY = 1026, //!< For GIF, it can be a quality from 1 to 8. Default is 2. See cv::ImwriteGifCompressionFlags.\n IMWRITE_GIF_DITHER = 1027, //!< For GIF, it can be a quality from -1(most dither) to 3(no dither). Default is 0.\n```", + "output": "I add note for WebP too. Thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_gif.cpp\n**Change Type:** modified\n**Context:** PR #26971: imgcodecs: gif: support animated gif without loop\n\n**Code Changes:**\n```diff\n@@ -47,8 +47,8 @@ bool GifDecoder::readHeader() {\n return false;\n }\n \n- String signature(6, ' ');\n- m_strm.getBytes((uchar*)signature.data(), 6);\n+ std::string signature(6, ' ');\n+ m_strm.getBytes((uchar*)signature.c_str(), 6);\n CV_Assert(signature == R\"(GIF87a)\" || signature == R\"(GIF89a)\");\n \n```", + "output": "Thank you for comment, I fixed it to show log message when it is unexpected condition." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/minmax.dispatch.cpp\n**Change Type:** modified\n**Context:** PR #26946: Fix HAL dispatch in cv::minMaxIdx\n\n**Code Changes:**\n```diff\n@@ -314,18 +314,15 @@ void cv::minMaxIdx(InputArray _src, double* minVal,\n \n if (src.dims <= 2)\n {\n- if ((size_t)src.step == (size_t)mask.step)\n+ if ((size_t)src.step == (size_t)mask.step || mask.empty())\n {\n CALL_HAL(minMaxIdx, cv_hal_minMaxIdx, src.data, src.step, src.cols*cn, src.rows,\n src.depth(), minVal, maxVal, minIdx, maxIdx, mask.data);\n }\n```", + "output": "@asmorkalov Uh, I have found another solution. Sorry for the inconvenience." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/test/test_arucodetection.cpp\n**Change Type:** modified\n**Context:** PR #26934: Extend ArUcoDetector to run multiple dictionaries in an efficient manner.\n\n**Code Changes:**\n```diff\n@@ -6,6 +6,18 @@\n #include \"opencv2/objdetect/aruco_detector.hpp\"\n #include \"opencv2/calib3d.hpp\"\n \n+namespace cv {\n+ namespace aruco {\n+ bool operator==(const Dictionary& d1, const Dictionary& d2);\n+ bool operator==(const Dictionary& d1, const Dictionary& d2) {\n+ return d1.markerSize == d2.markerSize\n+ && std::equal(d1.bytesList.begin>(), d1.bytesList.end>(), d2.bytesList.begin>())\n```", + "output": "Thanks, that's important feedback. Wasn't sure about this case and running the detector with nothing and not getting anything can indeed be confusing. I'll make sure the last dictionary can't be removed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_detector.hpp\n**Change Type:** modified\n**Context:** PR #26934: Extend ArUcoDetector to run multiple dictionaries in an efficient manner.\n\n**Code Changes:**\n```diff\n@@ -285,6 +285,16 @@ class CV_EXPORTS_W ArucoDetector : public Algorithm\n const DetectorParameters &detectorParams = DetectorParameters(),\n const RefineParameters& refineParams = RefineParameters());\n \n+ /** @brief ArucoDetector constructor for multiple dictionaries\n+ *\n+ * @param dictionaries indicates the type of markers that will be searched. Empty dictionaries will throw an error.\n+ * @param detectorParams marker detection parameters\n+ * @param refineParams marker refine detection parameters\n+ */\n```", + "output": "Hey, thanks for the feedback. 1. yeah, add/remove may be a bit redundant. I went for maximal convenience here so the dictionaries list can be manipulated in many ways. I want to point out though that the default behavior of `get/setDictionary` (so without the optional index) is preserved. 2. As long as it's `const` it's safe. You can always get a copy, rather than a view of the insides by doing `vector copy = detector.getDictionaries()` rather than `vector& reference = detector.getDictionary()`. So in order to not do a copy you have to specify that you're taking a reference. If you're asking for safety in terms of a malevolent attacker rather than a potential source of a bug, then yes, returning a copy would be better. 3. Empty vector isn't allowed. I have a check in the constructor and the `removeDictionary()` function for that, but I forgot to check in the `setDictionaries()`. Thanks for spotting this! On duplicates I purposefully did not add a check and leave some of the responsibilities to the user. 
I could change this to use an `unordered_set` internally that automatically ignores duplicates, but the interface would remain a vector. This would also mean that `getDictionaries()` would return a copy for sure. Even then you could do nonsensical stuff like running `DICT_4X4_50` and then `DICT_4X4_100` and potentially find a lot of duplicated markers. This can of course be solved by just running the larger dictionary and ignoring the smaller one, but that's a lot of hand-holding that I am not used to with the OpenCV API. 4. Sure, makes sense. I added some non-trivial complexity after all. So for now I will fix the bug that allows for setting an empty vector and I'll add documentation. On the rest I'd like some confirmation for my suggestions and then I'm happy to make the changes." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCUDA.cmake\n**Change Type:** modified\n**Context:** PR #27112: cuda: Force C++17 Standard for CUDA targets when CUDA Toolkit >=12.8\n\n**Code Changes:**\n```diff\n@@ -151,12 +151,14 @@ macro(ocv_cuda_compile VAR)\n ocv_check_windows_crt_linkage()\n ocv_nvcc_flags()\n \n- if(UNIX OR APPLE)\n- if(NOT \" ${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_RELEASE} ${CMAKE_CXX_FLAGS_DEBUG} ${CUDA_NVCC_FLAGS}\" MATCHES \"-std=\")\n+ if(NOT \" ${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_RELEASE} ${CMAKE_CXX_FLAGS_DEBUG} ${CUDA_NVCC_FLAGS}\" MATCHES \"-std=\")\n+ if(UNIX OR APPLE)\n if(CUDA_VERSION VERSION_LESS \"11.0\")\n list(APPEND CUDA_NVCC_FLAGS \"--std=c++11\")\n```", + "output": "When @opencv-alalek first introduced the additional check it was for these two platforms. I couldn't determine a reason why it wasn't required for Windows. 
In the end to avoid having to test/install many versions of the Toolkit on both platforms and because this is the deprecated FindCuda approach I decided to leave the existing checks as they are to be 100% sure that it wouldn't break any existing user code. I can simply apply the condition to all platforms if you want?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_png.cpp\n**Change Type:** modified\n**Context:** PR #26849: APNG encoding optimization\n\n**Code Changes:**\n```diff\n@@ -814,6 +814,10 @@ PngEncoder::PngEncoder()\n next_seq_num = 0;\n trnssize = 0;\n palsize = 0;\n+ m_compression_level = Z_BEST_SPEED;\n+ m_compression_strategy = IMWRITE_PNG_STRATEGY_RLE; // Default strategy\n+ m_filter = IMWRITE_PNG_FILTER_SUB; // Default filter\n+ m_isBilevel = false;\n memset(palette, 0, sizeof(palette));\n memset(trns, 0, sizeof(trns));\n```", + "output": "@vrabaud i removed this condition. but fyi 1 frame apng still saved without acTL and fcTL" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/perf/perf_png.cpp\n**Change Type:** modified\n**Context:** PR #26849: APNG encoding optimization\n\n**Code Changes:**\n```diff\n@@ -11,7 +11,10 @@ namespace opencv_test\n \n using namespace perf;\n \n-typedef perf::TestBaseWithParam PNG;\n+CV_ENUM(PNGStrategy, IMWRITE_PNG_STRATEGY_DEFAULT, IMWRITE_PNG_STRATEGY_FILTERED, IMWRITE_PNG_STRATEGY_HUFFMAN_ONLY, IMWRITE_PNG_STRATEGY_RLE, IMWRITE_PNG_STRATEGY_FIXED);\n+CV_ENUM(PNGFilters, IMWRITE_PNG_FILTER_NONE, IMWRITE_PNG_FILTER_SUB, IMWRITE_PNG_FILTER_UP, IMWRITE_PNG_FILTER_AVG, IMWRITE_PNG_FILTER_PAETH, IMWRITE_PNG_FAST_FILTERS, IMWRITE_PNG_ALL_FILTERS);\n+\n+typedef perf::TestBaseWithParam> PNG;\n \n```", + "output": "@asmorkalov to reduce execution time i think `testing::Values(1, 6)` will give enough information. 9 is about 5-6 times slower than 6 and gives only about %1 lower file size." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_hal_core.cpp\n**Change Type:** modified\n**Context:** PR #26887: invSqrt SIMD_SCALABLE implementation & HAL tests refactoring - Enable CV_SIMD_SCALABLE for invSqrt. * Banana Pi BF3 (SpacemiT K1) RISC-V * ...\n\n**Code Changes:**\n```diff\n@@ -42,168 +42,163 @@\n \n namespace opencv_test { namespace {\n \n-enum\n+enum HALFunc\n {\n HAL_EXP = 0,\n HAL_LOG = 1,\n- HAL_SQRT = 2\n```", + "output": "@asmorkalov you have a bright eye!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_gif.hpp\n**Change Type:** modified\n**Context:** PR #26930: imgcodecs: gif: support Disposal Method\n\n**Code Changes:**\n```diff\n@@ -11,14 +11,35 @@\n namespace cv\n {\n \n-enum GifOpMode\n-{\n- GRFMT_GIF_Nothing = 0,\n- GRFMT_GIF_PreviousImage = 1,\n- GRFMT_GIF_Background = 2,\n- GRFMT_GIF_Cover = 3\n```", + "output": "Thank you for your comment, I agree and I replaced this enum with your suggestion !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_gif.cpp\n**Change Type:** modified\n**Context:** PR #26930: imgcodecs: gif: support Disposal Method\n\n**Code Changes:**\n```diff\n@@ -24,7 +24,6 @@ GifDecoder::GifDecoder() {\n hasRead = false;\n hasTransparentColor = false;\n transparentColor = 0;\n- opMode = GRFMT_GIF_Nothing;\n top = 0, left = 0, width = 0, height = 0;\n depth = 8;\n idx = 0;\n@@ -66,6 +65,8 @@ bool GifDecoder::readHeader() {\n for (int i = 0; i < 3 * globalColorTableSize; i++) {\n```", + "output": "I fixed around the current Disposal Method, thank you for your point !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_gif.cpp\n**Change Type:** modified\n**Context:** PR #26930: imgcodecs: gif: support Disposal Method\n\n**Code Changes:**\n```diff\n@@ -24,7 +24,6 @@ GifDecoder::GifDecoder() {\n hasRead = false;\n hasTransparentColor = false;\n transparentColor = 0;\n- opMode = GRFMT_GIF_Nothing;\n top = 0, left = 0, width = 0, height = 0;\n depth = 8;\n idx = 0;\n@@ -66,6 +65,8 @@ bool GifDecoder::readHeader() {\n for (int i = 0; i < 3 * globalColorTableSize; i++) {\n```", + "output": "The intention was to output as robustly as possible even in the case of inappropriate GIFs, such as images that extend outside the logical screen. However, the corrections are insufficient, and the current implementation is not effective. I confirmed that such GIF data does not comply with the GIF87a and GIF89a specifications, and reverted it. Thank you for pointing this out. GIF89a https://www.w3.org/Graphics/GIF/spec-gif89a.txt ``` 20. Image Descriptor. a. Description. Each image in the Data Stream is composed of an Image Descriptor, an optional Local Color Table, and the image data. Each image must fit within the boundaries of the Logical Screen, as defined in the Logical Screen Descriptor. ``` GIF87(a) https://www.w3.org/Graphics/GIF/spec-gif87.txt ``` IMAGE DESCRIPTOR : The specifications for the image position and size must be confined to the dimensions defined by the Screen Descriptor. On the other hand it is not necessary that the image fill the entire screen defined. ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_imgwarp.cpp\n**Change Type:** modified\n**Context:** PR #26914: removal of deprecated functions in imgproc\n\n**Code Changes:**\n```diff\n@@ -1034,87 +1034,6 @@ TEST(Imgproc_Remap, DISABLED_memleak)\n }\n }\n \n-//** @deprecated */\n-TEST(Imgproc_linearPolar, identity)\n-{\n- const int N = 33;\n- Mat in(N, N, CV_8UC3, Scalar(255, 0, 0));\n- in(cv::Rect(N/3, N/3, N/3, N/3)).setTo(Scalar::all(255));\n```", + "output": "> The test should be either removed or updated to call `warpPolar`. @asmorkalov ,proposed to keep the test.So, should it be kept or removed ?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_gif.cpp\n**Change Type:** modified\n**Context:** PR #26859: imgcodecs:gif: support IMREAD_UNCHANGED and IMREAD_GRAYSCALE\n\n**Code Changes:**\n```diff\n@@ -14,7 +14,7 @@ namespace cv\n //////////////////////////////////////////////////////////////////////\n GifDecoder::GifDecoder() {\n m_signature = R\"(GIF)\";\n- m_type = CV_8UC4;\n+ m_type = CV_8UC3;\n bgColor = -1;\n m_buf_supported = true;\n globalColorTableSize = 0;\n@@ -172,12 +172,17 @@ bool GifDecoder::readData(Mat &img) {\n```", + "output": "Thank you for your comment, I fixed it to show `img.channels()`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #26859: imgcodecs:gif: support IMREAD_UNCHANGED and IMREAD_GRAYSCALE\n\n**Code Changes:**\n```diff\n@@ -413,13 +413,13 @@ can be saved using this function, with these exceptions:\n - With JPEG 2000 encoder, 8-bit unsigned (CV_8U) and 16-bit unsigned (CV_16U) images can be saved.\n - With JPEG XL encoder, 8-bit unsigned (CV_8U), 16-bit unsigned (CV_16U) and 32-bit float(CV_32F) images can be saved.\n - JPEG XL images with an alpha channel can be saved using this function.\n- To do this, create 8-bit (or 16-bit, 32-bit float) 4-channel image BGRA, where the alpha channel goes last.\n- Fully transparent pixels should have alpha set to 0, fully opaque pixels should have alpha set to 255/65535/1.0.\n+ To achieve this, create an 8-bit 4-channel (CV_8UC4) / 16-bit 4-channel (CV_16UC4) / 32-bit float 4-channel (CV_32FC4) BGRA image, ensuring the alpha channel is the last component.\n+ Fully transparent pixels should have an alpha value of 0, while fully opaque pixels should have an alpha value of 255/65535/1.0.\n - With PAM encoder, 8-bit unsigned (CV_8U) and 16-bit unsigned (CV_16U) images can be saved.\n - With PNG encoder, 8-bit unsigned (CV_8U) and 16-bit unsigned (CV_16U) images can be saved.\n```", + "output": "Thank you for your comment, I fixed it for GIF, JPEG XL and PNG. I am not confident about alpha channel support for TIFF signed integer images (CV_8SC4/CV_16SC4/CV_32SC4), so I will exclude it from the scope of this fix. I believe it will probably work fine." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_png.cpp\n**Change Type:** modified\n**Context:** PR #26854: Fix oss-fuzz bugs 391934081 and 392318892\n\n**Code Changes:**\n```diff\n@@ -325,15 +325,6 @@ bool PngDecoder::readHeader()\n bop = chunk.p[33];\n }\n \n- if (id == id_bKGD)\n- {\n- // The spec is actually more complex: http://www.libpng.org/pub/png/spec/1.2/PNG-Chunks.html#C.bKGD\n- m_animation.bgcolor[0] = png_get_uint_16(&chunk.p[8]);\n- m_animation.bgcolor[1] = png_get_uint_16(&chunk.p[10]);\n- m_animation.bgcolor[2] = png_get_uint_16(&chunk.p[12]);\n```", + "output": "@asmorkalov you are right ``` m_animation.bgcolor[0] = background_color->blue; m_animation.bgcolor[1] = background_color->green; m_animation.bgcolor[2] = background_color->red; ``` then the following change is needed. https://github.com/opencv/opencv/blob/4.x/modules/imgcodecs/src/grfmt_png.cpp#L1546-L1548 ``` bgvalue[1] = animation.bgcolor[2]; bgvalue[3] = animation.bgcolor[1]; bgvalue[5] = animation.bgcolor[0]; ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_jpegxl.cpp\n**Change Type:** modified\n**Context:** PR #26844: imgcodecs: jpegxl: imdecode() directly read from memory\n\n**Code Changes:**\n```diff\n@@ -28,13 +28,15 @@ static void cbRGBAtoGRAY_32F(void *opaque, size_t x, size_t y, size_t num_pixels\n \n /////////////////////// JpegXLDecoder ///////////////////\n \n-JpegXLDecoder::JpegXLDecoder() : m_f(nullptr, &fclose)\n+JpegXLDecoder::JpegXLDecoder() : m_f(nullptr, &fclose),\n+ m_read_buffer(16384,0) // 16KB chunks\n {\n m_signature = \"\\xFF\\x0A\";\n m_decoder = nullptr;\n```", + "output": "Thank you for your comment, `m_read_buffer` should be kept across 'readHeader()' and 'readData()'. 
So I will move it to constructor. ``` JXLDecoder::JxlDecoder() m_decoder = nullptr; JXLDecoder::readHeader() m_decoder = JxlDecoderMake(nullptr); read() JxlDecoderReleaseInput() // no effects because no input buffer is set JxlDecoderSetInput() ---- Input Buffer is locked. | JXLDecoder::readData() | read() | JxlDecoderReleaseInput()- Input Buffer is unlocked. JxlDecoderSetInput() ---- Input Buffer is locked. | JXLDecoder::~JxlDecoder() | close() | m_decoder.reset() ------- Input buffer is unlocked destroy m_decoder ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindOpenEXR.cmake\n**Change Type:** modified\n**Context:** PR #26678: build(4.x) OpenEXR 2.2 or earlier cannot be used with C++17 or later\n\n**Code Changes:**\n```diff\n@@ -94,17 +94,20 @@ FOREACH(SEARCH_PATH ${SEARCH_PATHS})\n IF (${contents} MATCHES \"#define OPENEXR_VERSION_MINOR ([0-9]+)\")\n SET(OPENEXR_VERSION_MINOR \"${CMAKE_MATCH_1}\")\n ENDIF ()\n+ FILE (STRINGS ${OPENEXR_VERSION_FILE} contents REGEX \"#define OPENEXR_VERSION_PATCH \")\n+ IF (${contents} MATCHES \"#define OPENEXR_VERSION_PATCH ([0-9]+)\")\n+ SET(OPENEXR_VERSION_PATCH \"${CMAKE_MATCH_1}\")\n+ ENDIF ()\n ENDIF ()\n ENDIF ()\n```", + "output": "Thank you for your comment! I think usually `VERSION_LESS` is better to compare version strings, but OpenCV uses a non-standard version string for system OpenEXR, which is split with \"_\" instead of \".\". Like built-in OpenEXR. I'll try to fix this at this pull request. https://github.com/opencv/opencv/blob/1b48eafe48a29ddea22a47601471d7487b7ae636/cmake/OpenCVFindOpenEXR.cmake#L100-L102 https://cmake.org/cmake/help/latest/command/if.html#version-comparisons > if(<variable|string> VERSION_LESS <variable|string>) > Component-wise integer version number comparison (version format is major[.minor[.patch[.tweak]]], omitted components are treated as zero). 
Built-in OpenEXR (Ver 2.3.0) ``` -- PNG: /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.6.43) -- TIFF: /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 / 4.5.1) -- JPEG 2000: OpenJPEG (ver 2.5.0) -- OpenEXR: build (ver 2.3.0) ``` System OpenEXR (Ver 2_2) ``` -- PNG: /usr/lib/x86_64-linux-gnu/libpng.so (ver 1.6.43) -- TIFF: /usr/lib/x86_64-linux-gnu/libtiff.so (ver 42 / 4.5.1) -- JPEG 2000: OpenJPEG (ver 2.5.0) -- OpenEXR: /usr/local/lib/libImath-2_2.so /usr/local/lib/libIlmImf-2_2.so /usr/local/lib/libIex-2_2.so /usr/local/lib/libHalf.so /usr/local/lib/libIlmThread-2_2.so (ver 2_2) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindOpenEXR.cmake\n**Change Type:** modified\n**Context:** PR #26678: build(4.x) OpenEXR 2.2 or earlier cannot be used with C++17 or later\n**Review Line:** 99\n\n**Code Changes:**\n```diff\n IF (${contents} MATCHES \"#define OPENEXR_VERSION_MINOR ([0-9]+)\")\n SET(OPENEXR_VERSION_MINOR \"${CMAKE_MATCH_1}\")\n ENDIF ()\n+ FILE (STRINGS ${OPENEXR_VERSION_FILE} contents REGEX \"#define OPENEXR_VERSION_PATCH \")\n+ IF (${contents} MATCHES \"#define OPENEXR_VERSION_PATCH ([0-9]+)\")\n+ SET(OPENEXR_VERSION_PATCH \"${CMAKE_MATCH_1}\")\n+ ENDIF ()\n ENDIF ()\n ENDIF ()\n \n- IF (OPENEXR_VERSION_MAJOR AND OPENEXR_VERSION_MINOR)\n```", + "output": "Thank you for your comment ! To fix it, I replaced tabs with spaces." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/tutorial_code/ImgTrans/HoughLines_Demo.cpp\n**Change Type:** modified\n**Context:** PR #26815: Replaced sprintf with snprintf\n\n**Code Changes:**\n```diff\n@@ -71,8 +71,7 @@ int main( int argc, char** argv )\n namedWindow( probabilistic_name, WINDOW_AUTOSIZE );\n createTrackbar( thresh_label, probabilistic_name, &p_trackbar, max_trackbar, Probabilistic_Hough );\n \n- char edge_thresh_label[50];\n- sprintf( edge_thresh_label, \"Edge Thres: input\" );\n+ const char* edge_thresh_label = \"Edge Thres: input\";\n namedWindow( weighted_name, WINDOW_AUTOSIZE);\n createTrackbar( edge_thresh_label, weighted_name, &e_trackbar, max_trackbar, Weighted_Hough);\n createTrackbar( thresh_label, weighted_name, &w_trackbar, weightedhough_max_trackbar, Weighted_Hough);\n```", + "output": "@asmorkalov I replaced `sprintf` with `snprintf` for buffer safety and to align with modern best practices. It may seem unnecessary here but helps future-proof the code. we can use these alternatives instead : 1. **Direct Assignment**: ```cpp const char* edge_thresh_label = \"Edge Thres: input\"; ``` 2. **`strncpy`**: ```cpp strncpy(edge_thresh_label, \"Edge Thres: input\", sizeof(edge_thresh_label)); edge_thresh_label[sizeof(edge_thresh_label) - 1] = '\\0'; ``` please let me know what you think." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/tutorial_code/ImgTrans/HoughLines_Demo.cpp\n**Change Type:** modified\n**Context:** PR #26815: Replaced sprintf with snprintf\n\n**Code Changes:**\n```diff\n@@ -71,8 +71,7 @@ int main( int argc, char** argv )\n namedWindow( probabilistic_name, WINDOW_AUTOSIZE );\n createTrackbar( thresh_label, probabilistic_name, &p_trackbar, max_trackbar, Probabilistic_Hough );\n \n- char edge_thresh_label[50];\n- sprintf( edge_thresh_label, \"Edge Thres: input\" );\n+ const char* edge_thresh_label = \"Edge Thres: input\";\n namedWindow( weighted_name, WINDOW_AUTOSIZE);\n createTrackbar( edge_thresh_label, weighted_name, &e_trackbar, max_trackbar, Weighted_Hough);\n createTrackbar( thresh_label, weighted_name, &w_trackbar, weightedhough_max_trackbar, Weighted_Hough);\n```", + "output": "thanks @asmorkalov , consider it to be done:)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/test/test_orientation.cpp\n**Change Type:** modified\n**Context:** PR #26800: fix default cap_prop_orientation_auto behaviour\n**Review Line:** 37\n\n**Code Changes:**\n```diff\n \n- EXPECT_TRUE(cap.set(CAP_PROP_ORIENTATION_AUTO, true));\n+ void orientationCheck(double angle, int width, int height)\n+ {\n+ EXPECT_EQ(angle, cap.get(CAP_PROP_ORIENTATION_META));\n+ EXPECT_EQ(width, (int)cap.get(CAP_PROP_FRAME_WIDTH));\n+ EXPECT_EQ(height, (int)cap.get(CAP_PROP_FRAME_HEIGHT));\n \n- Size actual;\n- EXPECT_NO_THROW(actual = Size((int)cap.get(CAP_PROP_FRAME_WIDTH),\n- (int)cap.get(CAP_PROP_FRAME_HEIGHT)));\n```", + "output": "@mshabunin & @asmorkalov, can we use the below test ?
improved test ```cpp // This file is part of OpenCV project. // It is subject to the license terms in the LICENSE file found in the top-level directory // of this distribution and at http://opencv.org/license.html. #include \"test_precomp.hpp\" using namespace std; namespace opencv_test { namespace { typedef TestWithParam<cv::VideoCaptureAPIs> VideoCaptureAPITests; // Related issues: // https://github.com/opencv/opencv/issues/15499 // https://github.com/opencv/opencv/issues/22088 // https://github.com/opencv/opencv/issues/26795 TEST_P(VideoCaptureAPITests, mp4_orientation_integration) { cv::VideoCaptureAPIs api = GetParam(); if (!videoio_registry::hasBackend(api)) throw SkipTestException(\"backend \" + std::to_string(static_cast<int>(api)) + \" was not found\"); std::string video_file = cvtest::TS::ptr()->get_data_path() + \"video/rotated_metadata.mp4\"; cv::VideoCapture cap; EXPECT_NO_THROW(cap.open(video_file, api)); ASSERT_TRUE(cap.isOpened()) << \"Can't open the video: \" << video_file << \" with backend \" << api << std::endl; double orientationMeta = cap.get(cv::CAP_PROP_ORIENTATION_META); EXPECT_EQ(90.0, orientationMeta) << \"Expected the metadata to indicate 90 degrees rotation.\"; double autoRotate = cap.get(cv::CAP_PROP_ORIENTATION_AUTO); EXPECT_EQ(1.0, autoRotate) << \"Expected CAP_PROP_ORIENTATION_AUTO to be ON (1.0) by default.\"; cv::Size sizeRotated( static_cast<int>(cap.get(cv::CAP_PROP_FRAME_WIDTH)), static_cast<int>(cap.get(cv::CAP_PROP_FRAME_HEIGHT)) ); EXPECT_EQ(270, sizeRotated.width); EXPECT_EQ(480, sizeRotated.height); cv::Mat frame; cap >> frame; ASSERT_FALSE(frame.empty()) << \"Failed to retrieve a frame (auto-rotated)!\"; EXPECT_EQ(270, frame.cols); EXPECT_EQ(480, frame.rows); EXPECT_TRUE(cap.set(cv::CAP_PROP_ORIENTATION_AUTO, 0.0)) << \"Could not disable CAP_PROP_ORIENTATION_AUTO.\"; EXPECT_EQ(0.0, cap.get(cv::CAP_PROP_ORIENTATION_AUTO)) << \"Expected CAP_PROP_ORIENTATION_AUTO to be OFF (0.0).\"; cv::Size sizeUnrotated( static_cast<int>(cap.get(cv::CAP_PROP_FRAME_WIDTH)), 
static_cast<int>(cap.get(cv::CAP_PROP_FRAME_HEIGHT)) ); EXPECT_EQ(480, sizeUnrotated.width); EXPECT_EQ(270, sizeUnrotated.height); cap >> frame; ASSERT_FALSE(frame.empty()) << \"Failed to retrieve a frame (non-rotated)!\"; EXPECT_EQ(480, frame.cols); EXPECT_EQ(270, frame.rows); } INSTANTIATE_TEST_CASE_P(videoio, VideoCaptureAPITests, testing::Values(CAP_FFMPEG, CAP_AVFOUNDATION)); }} // namespace ```
" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_jpegxl.cpp\n**Change Type:** modified\n**Context:** PR #26788: imgcodecs: jpegxl: support cv::IMREAD_UNCHANGED and other ImreadFlags\n\n**Code Changes:**\n```diff\n@@ -12,6 +12,19 @@\n \n namespace cv\n {\n+// Callback functions for JpegXLDecoder\n+static void cbRGBtoBGR_8U(void *opaque, size_t x, size_t y, size_t num_pixels, const void *pixels);\n+static void cbRGBAtoBGRA_8U(void *opaque, size_t x, size_t y, size_t num_pixels, const void *pixels);\n+static void cbRGBtoBGR_16U(void *opaque, size_t x, size_t y, size_t num_pixels, const void *pixels);\n+static void cbRGBAtoBGRA_16U(void *opaque, size_t x, size_t y, size_t num_pixels, const void *pixels);\n+static void cbRGBtoBGR_32F(void *opaque, size_t x, size_t y, size_t num_pixels, const void *pixels);\n```", + "output": "Thank you for your comment, I use `JXL_NATIVE_ENDIAN` instead of `JXL_LITTLE_ENDIAN`. I think it can work with both little and big endians." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/plugin_capture_api.hpp\n**Change Type:** modified\n**Context:** PR #25584: Open VideoCapture from data stream\n\n**Code Changes:**\n```diff\n@@ -13,7 +13,7 @@\n /// increased for backward-compatible changes, e.g. add new function\n /// Caller API <= Plugin API -> plugin is fully compatible\n /// Caller API > Plugin API -> plugin is not fully compatible, caller should use extra checks to use plugins with older API\n-#define CAPTURE_API_VERSION 1\n+#define CAPTURE_API_VERSION 2\n \n /// increased for incompatible changes, e.g. 
remove function argument\n /// Caller ABI == Plugin ABI -> plugin is compatible\n@@ -121,6 +121,29 @@ struct OpenCV_VideoIO_Capture_Plugin_API_v1_1_api_entries\n```", + "output": "Thanks! That was my concern. Is that a good idea to replace with functions pointers for required methods like `seek` and `read`?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/misc/python/pyopencv_videoio.hpp\n**Change Type:** modified\n**Context:** PR #25584: Open VideoCapture from data stream\n\n**Code Changes:**\n```diff\n@@ -31,4 +31,114 @@ template<> bool pyopencv_to(PyObject* obj, cv::VideoCapture& stream, const ArgIn\n return true;\n }\n \n+class PythonStreamReader : public cv::IStreamReader\n+{\n+public:\n+ PythonStreamReader(PyObject* _obj = nullptr) : obj(_obj)\n+ {\n+ if (obj)\n```", + "output": "Thanks. If that's not critical, I moved to `utils.private.hpp`. We can avoid this by small duplication of the struct `CvStream : public std::streambuf` but in bindings." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/backend_plugin.cpp\n**Change Type:** modified\n**Context:** PR #25584: Open VideoCapture from data stream\n\n**Code Changes:**\n```diff\n@@ -208,6 +208,7 @@ class PluginBackend: public IBackend\n Ptr<IVideoCapture> createCapture(int camera, const VideoCaptureParameters& params) const CV_OVERRIDE;\n Ptr<IVideoCapture> createCapture(const std::string &filename) const;\n Ptr<IVideoCapture> createCapture(const std::string &filename, const VideoCaptureParameters& params) const CV_OVERRIDE;\n+ Ptr<IVideoCapture> createCapture(const Ptr<IStreamReader>& stream, const VideoCaptureParameters& params) const CV_OVERRIDE;\n Ptr<IVideoWriter> createWriter(const std::string& filename, int fourcc, double fps,\n const cv::Size& sz, const VideoWriterParameters& params) const CV_OVERRIDE;\n \n@@ -447,16 +448,52 @@ class PluginCapture : public cv::IVideoCapture\n {\n```", + "output": "Thanks for the review. Added a simplified interface, but it’s private for internal conversion." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/include/opencv2/videoio.hpp\n**Change Type:** modified\n**Context:** PR #25584: Open VideoCapture from data stream\n\n**Code Changes:**\n```diff\n@@ -719,6 +719,25 @@ enum VideoCaptureOBSensorProperties{\n \n //! @} videoio_flags_others\n \n+/** @brief Read data stream interface\n+ */\n+class CV_EXPORTS_W IStreamReader\n+{\n+public:\n+ virtual ~IStreamReader();\n```", + "output": "@opencv-alalek, @asmorkalov , need your opinion on naming as we ready to move it to the public API. 
There are `VideoCapture`, [IStreamSource ](https://docs.opencv.org/4.x/da/d03/classcv_1_1gapi_1_1wip_1_1IStreamSource.html), [VideoReader](https://docs.opencv.org/4.x/db/ded/classcv_1_1cudacodec_1_1VideoReader.html), [VideoWriter](https://docs.opencv.org/4.x/dd/d9e/classcv_1_1VideoWriter.html). So does it make sense to keep the order and name it `IStreamReader`? Moreover, it matches PyTorch's naming: https://pytorch.org/audio/main/generated/torchaudio.io.StreamReader.html#streamreader" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_gif.cpp\n**Change Type:** modified\n**Context:** PR #26738: fix bugs in GIF decoding\n\n**Code Changes:**\n```diff\n@@ -4,6 +4,7 @@\n \n #include \"precomp.hpp\"\n #include \"grfmt_gif.hpp\"\n+#include \"opencv2/core/utils/logger.hpp\"\n \n #ifdef HAVE_IMGCODEC_GIF\n namespace cv\n@@ -293,11 +294,12 @@ void GifDecoder::code2pixel(Mat& img, int start, int k){\n bool GifDecoder::lzwDecode() {\n```", + "output": "The value of `1 << 12` used in the function can be assigned to a variable named like `max_lzwtable_size` and used." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_png.cpp\n**Change Type:** modified\n**Context:** PR #26688: Animated GIF APNG WEBP AVIF revisions\n\n**Code Changes:**\n```diff\n@@ -152,8 +152,7 @@ bool APNGFrame::setMat(const cv::Mat& src, unsigned delayNum, unsigned delayDen)\n \n if (!src.empty())\n {\n- png_uint_32 rowbytes = src.cols * src.channels();\n-\n+ png_uint_32 rowbytes = src.depth() == CV_16U ? src.cols * src.channels() * 2 : src.cols * src.channels();\n _width = src.cols;\n _height = src.rows;\n _colorType = src.channels() == 1 ? 
PNG_COLOR_TYPE_GRAY : src.channels() == 3 ? PNG_COLOR_TYPE_RGB : PNG_COLOR_TYPE_RGB_ALPHA;\n```", + "output": "@vrabaud @asmorkalov `frames.back().setDelayNum(frames.back().getDelayNum() + apngFrame.getDelayNum());` solves the failure on https://github.com/opencv/opencv/actions/runs/12583343235/job/35070690857?pr=26699" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_houghlines.cpp\n**Change Type:** modified\n**Context:** PR #21407: Feature: weighted hough\n**Review Line:** 355\n\n**Code Changes:**\n```diff\n+\n+ // detect lines\n+ std::vector lines;\n+ int threshold{220*25-1};\n+ bool use_edgeval{true};\n+ HoughLines(img, lines, 1, CV_PI/180, threshold, 0, 0, 0.0, CV_PI, use_edgeval);\n+\n+ // check results\n+ ASSERT_EQ(3U, lines.size());\n+ // detected lines is assumed sorted from stronger to weaker.\n+ EXPECT_EQ(12, lines[0][0]);\n```", + "output": "@opencv-alalek I'm very sorry for my very late reply, but could you kindly review the code again?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/tutorial_code/ImgTrans/HoughLines_Demo.cpp\n**Change Type:** modified\n**Context:** PR #21407: Feature: weighted hough\n**Review Line:** 176\n\n**Code Changes:**\n```diff\n+ double x0 = r*cos_t, y0 = r*sin_t;\n+ double alpha = 1000;\n+\n+ Point pt1( cvRound(x0 + alpha*(-sin_t)), cvRound(y0 + alpha*cos_t) );\n+ Point pt2( cvRound(x0 - alpha*(-sin_t)), cvRound(y0 - alpha*cos_t) );\n+ line( weighted_hough, pt1, pt2, Scalar(255,0,0), 3, LINE_AA );\n+ }\n+\n+ imshow( weighted_name, weighted_hough );\n+}\n```", + "output": "@asmorkalov I checked the code, and Standard Hough also use polar coordinate as a return value. 
Please check HoughLines_Demo.cpp line 113 - 120. I just follow the Standard Hough demo implementation." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -218,6 +218,36 @@ enum ImwriteHDRCompressionFlags {\n \n //! @} imgcodecs_flags\n \n+/** @brief Represents an animation with multiple frames.\n+The `Animation` struct is designed to store and manage data for animated sequences such as those from animated formats (e.g., GIF, AVIF, APNG, WebP).\n+It provides support for looping, background color settings, frame timing, and frame storage.\n+*/\n+struct CV_EXPORTS_W_SIMPLE Animation\n+{\n```", + "output": "@vrabaud it is still experimental but what is your opinion about `bool imreadanimation(const String& filename, CV_OUT AnimationInfo& animinfo)`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_webp.cpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -44,17 +44,15 @@\n \n #include \"precomp.hpp\"\n \n-#include \n-#include \n-\n #include \n #include \n-\n```", + "output": "I believe this is dangerous to impose a default timestamp. You would end up with two APIs to create a WebP animation: imwritemulti (that imposes a default number of loops, default timestamp, default background) and imwriteanimation with all the features. For reading, it can be ok: you just want a set of images and that's what you get. @asmorkalov , what do you think?" 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -218,6 +218,36 @@ enum ImwriteHDRCompressionFlags {\n \n //! @} imgcodecs_flags\n \n+/** @brief Represents an animation with multiple frames.\n+The `Animation` struct is designed to store and manage data for animated sequences such as those from animated formats (e.g., GIF, AVIF, APNG, WebP).\n+It provides support for looping, background color settings, frame timing, and frame storage.\n+*/\n+struct CV_EXPORTS_W_SIMPLE Animation\n+{\n```", + "output": "Animation is a struct: Pros: - easy to bind - all info in one spot Cons: - decoding everything at once can suck up a lot of RAM so maybe a VideoReader-like API would be better I am not sure of the best way to go there. @asmorkalov , any quick opinion?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_webp.cpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -44,17 +44,15 @@\n \n #include \"precomp.hpp\"\n \n-#include \n-#include \n-\n #include \n #include \n-\n```", + "output": "@vrabaud could you check the last commit. frankly i tried your suggestion using std::unique_ptr but no success.. can last commit resolve this conversation." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_webp.cpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -44,17 +44,15 @@\n \n #include \"precomp.hpp\"\n \n-#include \n-#include \n-\n #include \n #include \n-\n```", + "output": "@vrabaud could you check https://github.com/opencv/opencv/pull/25608/commits/a105d94aff541d2b4593150dc834bab279ad6cd6" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindWebP.cmake\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -10,8 +10,6 @@\n \n # Look for the header file.\n \n-unset(WEBP_FOUND)\n-\n FIND_PATH(WEBP_INCLUDE_DIR NAMES webp/decode.h)\n \n if(NOT WEBP_INCLUDE_DIR)\n@@ -21,13 +19,14 @@ else()\n```", + "output": "i found @opencv-alalek 's note at #25661 \"check_cxx_source_compiles is not used in OpenCV code base. We prefer to use try_compile() instead.\" but if you want i will do requested changes." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_webp.cpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -44,17 +44,15 @@\n \n #include \"precomp.hpp\"\n \n-#include \n-#include \n-\n #include \n #include \n-\n```", + "output": "which line below better for readability? ``` if(m_use_rgb) dec_options.color_mode = features.has_alpha ? MODE_RGBA : MODE_RGB; else dec_options.color_mode = features.has_alpha ? MODE_BGRA : MODE_BGR; ``` ``` dec_options.color_mode = m_use_rgb ? (features.has_alpha ? 
MODE_RGBA : MODE_RGB) : (features.has_alpha ? MODE_BGRA : MODE_BGR); ``` @vrabaud , @asmorkalov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_webp.cpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n**Review Line:** 188\n\n**Code Changes:**\n```diff\n@@ -155,7 +185,7 @@ bool WebPDecoder::readData(Mat &img)\n CV_CheckEQ(img.cols, m_width, \"\");\n CV_CheckEQ(img.rows, m_height, \"\");\n \n- if (m_buf.empty())\n+ if (data.empty())\n {\n fs.seekg(0, std::ios::beg); CV_Assert(fs && \"File stream error\");\n data.create(1, validateToInt(fs_size), CV_8UC1);\n@@ -165,70 +195,96 @@ bool WebPDecoder::readData(Mat &img)\n }\n```", + "output": "@vrabaud i implemented imdecode related part and add related test. could you take a look." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_webp.cpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -44,17 +44,15 @@\n \n #include \"precomp.hpp\"\n \n-#include \n-#include \n-\n #include \n #include \n-\n```", + "output": "@vrabaud could you take a look.. i think i should correct it as `if (config.quality >= 100.0f)`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_avif.cpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -11,6 +11,7 @@\n #include \n \n #include \n+#include \n #include \"opencv2/imgproc.hpp\"\n #include \"grfmt_avif.hpp\"\n \n@@ -242,6 +243,8 @@ bool AvifDecoder::readData(Mat &img) {\n return false;\n```", + "output": "@asmorkalov @vrabaud let me change `timestamps` name as `durations` means how much ms the frame is shown" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #25608: Animated WebP and AVIF Support\n\n**Code Changes:**\n```diff\n@@ -218,6 +218,36 @@ enum ImwriteHDRCompressionFlags {\n \n //! @} imgcodecs_flags\n \n+/** @brief Represents an animation with multiple frames.\n+The `Animation` struct is designed to store and manage data for animated sequences such as those from animated formats (e.g., GIF, AVIF, APNG, WebP).\n+It provides support for looping, background color settings, frame timing, and frame storage.\n+*/\n+struct CV_EXPORTS_W_SIMPLE Animation\n+{\n```", + "output": "Background color of the animation in RGBA format. @asmorkalov @vrabaud should it be BGRA ? btw WebP has ARGB" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/include/opencv2/calib3d.hpp\n**Change Type:** modified\n**Context:** PR #25308: Not-normalized output from findHomography\n\n**Code Changes:**\n```diff\n@@ -726,8 +726,8 @@ correctly only when there are more than 50% of inliers. 
Finally, if there are no\n noise is rather small, use the default method (method=0).\n \n The function is used to find initial intrinsic and extrinsic matrices. Homography matrix is\n-determined up to a scale. Thus, it is normalized so that \\f$h_{33}=1\\f$. Note that whenever an \\f$H\\f$ matrix\n-cannot be estimated, an empty one will be returned.\n+determined up to a scale. If \\f$h_{33}\\f$ is non-zero, the matrix is normalized so that \\f$h_{33}=1\\f$.\n+@note Whenever an \\f$H\\f$ matrix cannot be estimated, an empty one will be returned.\n \n @sa\n```", + "output": "There is `@note` in doxygen for it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/highgui/src/window_cocoa.mm\n**Change Type:** modified\n**Context:** PR #26625: Cocoa/highgui: fix leak in cvGetWindowRect_COCOA\n\n**Code Changes:**\n```diff\n@@ -662,14 +662,16 @@ CvRect cvGetWindowRect_COCOA( const char* name )\n {\n CV_ERROR( CV_StsNullPtr, \"NULL window\" );\n } else {\n- NSRect rect = [window frame];\n+ @autoreleasepool {\n+ NSRect rect = [window frame];\n #if MAC_OS_X_VERSION_MAX_ALLOWED > MAC_OS_X_VERSION_10_6\n- NSPoint pt = [window convertRectToScreen:rect].origin;\n+ NSPoint pt = [window convertRectToScreen:rect].origin;\n```", + "output": "Prefer to use `@autoreleasepool` blocks as mentioned in `NSAutoreleasePool` [overview](https://developer.apple.com/documentation/foundation/nsautoreleasepool)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/highgui/src/window_cocoa.mm\n**Change Type:** modified\n**Context:** PR #26625: Cocoa/highgui: fix leak in cvGetWindowRect_COCOA\n\n**Code Changes:**\n```diff\n@@ -662,14 +662,16 @@ CvRect cvGetWindowRect_COCOA( const char* name )\n {\n CV_ERROR( CV_StsNullPtr, \"NULL window\" );\n } else {\n- NSRect rect = [window frame];\n+ @autoreleasepool {\n+ NSRect rect = [window frame];\n #if MAC_OS_X_VERSION_MAX_ALLOWED > MAC_OS_X_VERSION_10_6\n- NSPoint pt = [window convertRectToScreen:rect].origin;\n+ NSPoint pt = [window convertRectToScreen:rect].origin;\n```", + "output": "@NekoAsakura Let's use `@autoreleasepool` for the new code as it's recommended in documentation." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** 3rdparty/fastcv/src/fastcv_hal_imgproc.cpp\n**Change Type:** modified\n**Context:** PR #26621: FastCV-based HAL for OpenCV acceleration 2ndpost-3\n\n**Code Changes:**\n```diff\n@@ -992,4 +992,59 @@ int fastcv_hal_cvtBGRtoYUVApprox(\n \n fcvStatus status = FASTCV_SUCCESS;\n CV_HAL_RETURN(status, hal_BGRtoYUVApprox);\n+}\n+\n+int fastcv_hal_canny(\n+ const uchar* src_data,\n+ size_t src_step,\n+ uchar* dst_data,\n```", + "output": "> 1. Looks like lowThresh and highThresh are redundant. > 2. What if, lowThresh=12.999999999? I propose to use check with epsilon. hi @asmorkalov, updated" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** 3rdparty/fastcv/src/fastcv_hal_imgproc.cpp\n**Change Type:** modified\n**Context:** PR #26619: FastCV-based HAL for OpenCV acceleration 2ndpost-2\n\n**Code Changes:**\n```diff\n@@ -743,4 +743,253 @@ int fastcv_hal_warpPerspective(\n CV_HAL_RETURN_NOT_IMPLEMENTED(cv::format(\"Src type:%s is not supported\", cv::typeToString(src_type).c_str()));\n \n CV_HAL_RETURN(status, hal_warpPerspective);\n+}\n+\n+class FcvPyrLoop_Invoker : public cv::ParallelLoopBody\n+{\n+public:\n+\n```", + "output": "CV_HAL_ERROR_NOT_IMPLEMENTED works here. Thanks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/test/test_solvepnp_ransac.cpp\n**Change Type:** modified\n**Context:** PR #19253: Unexpected exception in solvePnPRansac caused by input points\n\n**Code Changes:**\n```diff\n@@ -837,6 +837,43 @@ TEST(Calib3d_SolvePnPRansac, double_support)\n EXPECT_LE(cvtest::norm(t, Mat_(tF), NORM_INF), 1e-3);\n }\n \n+TEST(Calib3d_SolvePnPRansac, bad_input_points_19253)\n+{\n+ // with this specific data\n+ // when computing the final pose using points in the consensus set with SOLVEPNP_ITERATIVE and solvePnP()\n+ // an exception is thrown from solvePnP because there are 5 non-coplanar 3D points and the DLT algorithm needs at least 6 non-coplanar 3D points\n+ // with PR #19253 we choose to return true, with the pose estimated from the MSS stage instead of throwing the exception\n```", + "output": "Thanks, it does need some explanation." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_filter.cpp\n**Change Type:** modified\n**Context:** PR #26590: support C++20 standard\n\n**Code Changes:**\n```diff\n@@ -1865,107 +1865,65 @@ TEST(Imgproc_PreCornerDetect, accuracy) { CV_PreCornerDetectTest test; test.safe\n TEST(Imgproc_Integral, accuracy) { CV_IntegralTest test; test.safe_run(); }\n \n //////////////////////////////////////////////////////////////////////////////////\n-\n-class CV_FilterSupportedFormatsTest : public cvtest::BaseTest\n-{\n-public:\n- CV_FilterSupportedFormatsTest() {}\n- ~CV_FilterSupportedFormatsTest() {}\n```", + "output": "Thank you for your review, this change comes from https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1152r4.html#expr.pre.incr . Increment and decrement volatile variables, which are numeric and pointer, seems be deprecated for C++20." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/matrix_sparse.cpp\n**Change Type:** modified\n**Context:** PR #26590: support C++20 standard\n**Review Line:** 174\n\n**Code Changes:**\n```diff\n@@ -171,7 +171,14 @@ void SparseMat::Hdr::clear()\n hashtab.clear();\n hashtab.resize(HASH_SIZE0);\n pool.clear();\n+#if defined(__GNUC__) && (__GNUC__ == 13) && !defined(__clang__) && (__cplusplus >= 202002L)\n+#pragma GCC diagnostic push\n+#pragma GCC diagnostic ignored \"-Wstringop-overflow\"\n+#endif\n pool.resize(nodeSize);\n+#if defined(__GNUC__) && (__GNUC__ == 13) && !defined(__clang__) && (__cplusplus >= 202002L)\n```", + "output": "Thank you for your point, GCC 13 + C++20 shows warning, so I append pragma to suppress it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_filter.cpp\n**Change Type:** modified\n**Context:** PR #26590: support C++20 standard\n\n**Code Changes:**\n```diff\n@@ -1865,107 +1865,65 @@ TEST(Imgproc_PreCornerDetect, accuracy) { CV_PreCornerDetectTest test; test.safe\n TEST(Imgproc_Integral, accuracy) { CV_IntegralTest test; test.safe_run(); }\n \n //////////////////////////////////////////////////////////////////////////////////\n-\n-class CV_FilterSupportedFormatsTest : public cvtest::BaseTest\n-{\n-public:\n- CV_FilterSupportedFormatsTest() {}\n- ~CV_FilterSupportedFormatsTest() {}\n```", + "output": "Thank you for your advice, I rewrite test with `EXPECT_NO_THROW` and `TEST_P` instead of loop with `SCOPED_TRACE`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/generalized_hough.cpp\n**Change Type:** modified\n**Context:** PR #26590: support C++20 standard\n\n**Code Changes:**\n```diff\n@@ -692,7 +692,11 @@ namespace\n getContourPoints(edges, dx, dy, points);\n \n features.resize(levels_ + 1);\n- std::for_each(features.begin(), features.end(), [=](std::vector& e) { e.clear(); e.reserve(maxBufferSize_); });\n+ const size_t maxBufferSize = maxBufferSize_;\n+ std::for_each(features.begin(), features.end(), [maxBufferSize](std::vector& e) {\n+ e.clear();\n+ e.reserve(maxBufferSize);\n+ });\n```", + "output": "Thank you for your suggestion, I merge it into this pull request !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/gpt2_inference.py\n**Change Type:** modified\n**Context:** PR #26584: Update printingin GPT2 sample\n**Review Line:** 96\n\n**Code Changes:**\n```diff\n \n+ # Decode and print the new token\n+ new_word = tokenizer.decode([sampled_indices[0, 0]])\n+\n+ ## clean the prints from the previous line\n+ print(new_word, end='', flush=True)\n+\n # Append to the sequence\n x = np.concatenate((x, sampled_indices), axis=1)\n pos = np.arange(0, x.shape[1], dtype=np.int32) # shape (T)\n \n```", + "output": "need to print end of line symbol at the final end. I get: ``` OpenCV requires that both Python and Java have the language. And Python getsalexander@hostname ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/object_tracker.cpp\n**Change Type:** added\n**Context:** PR #26202: Improved Tracker Samples\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,309 @@\n+#include \n+#include \n+\n+#include \n+#include \n+#include \n+#include \n+#include \"common.hpp\"\n+\n```", + "output": "@kaingwade The error message comes from findFile() function which searches for files in these environment variables. I have removed the default arguments, and added check and custom messages in case the model files are not specified" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/object_tracker.cpp\n**Change Type:** added\n**Context:** PR #26202: Improved Tracker Samples\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,309 @@\n+#include \n+#include \n+\n+#include \n+#include \n+#include \n+#include \n+#include \"common.hpp\"\n+\n```", + "output": "I propose to change cv::Tracker api and move getTrackingScore() method to base tracker class. -1 could be default value, if it's not supported by some implementation. @vpisarev What do you think?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/hal_internal.cpp\n**Change Type:** modified\n**Context:** PR #18571: Added clapack\n**Review Line:** 103\n\n**Code Changes:**\n```diff\n@@ -100,7 +100,7 @@ template <typename fptype> static inline int\n lapack_LU(fptype* a, size_t a_step, int m, fptype* b, size_t b_step, int n, int* info)\n {\n int lda = (int)(a_step / sizeof(fptype)), sign = 0;\n- int* piv = new int[m];\n+ std::vector<int> piv(m+1);\n \n transpose_square_inplace(a, lda, m);\n \n@@ -109,34 +109,35 @@ lapack_LU(fptype* a, size_t a_step, int m, fptype* b, size_t b_step, int n, int*\n if(n == 1 && b_step == sizeof(fptype))\n```", + "output": "Why do we need `+1` here? If it is a bug, then this bugfix was NOT backported to 4.x. /cc @vpisarev @asmorkalov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/hal_internal.cpp\n**Change Type:** modified\n**Context:** PR #18571: Added clapack\n**Review Line:** 103\n\n**Code Changes:**\n```diff\n@@ -100,7 +100,7 @@ template <typename fptype> static inline int\n lapack_LU(fptype* a, size_t a_step, int m, fptype* b, size_t b_step, int n, int* info)\n {\n int lda = (int)(a_step / sizeof(fptype)), sign = 0;\n- int* piv = new int[m];\n+ std::vector<int> piv(m+1);\n \n transpose_square_inplace(a, lda, m);\n \n@@ -109,34 +109,35 @@ lapack_LU(fptype* a, size_t a_step, int m, fptype* b, size_t b_step, int n, int*\n if(n == 1 && b_step == sizeof(fptype))\n```", + "output": "@opencv-alalek, @asmorkalov maybe it's not relevant anymore. As I remember, it's just common pattern to handle m=0 case properly." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_io.cpp\n**Change Type:** modified\n**Context:** PR #26420: Support 0d/1d Mat in FileStorage\n**Review Line:** 2054\n\n**Code Changes:**\n```diff\n+ EXPECT_EQ(src.size, dst.size);\n+ EXPECT_EQ(cv::norm(src, dst, NORM_INF), 0.0);\n+}\n+\n+typedef testing::TestWithParam FileStorage_exact_type;\n+TEST_P(FileStorage_exact_type, empty_mat)\n+{\n+ testExactMat(Mat(), GetParam());\n+}\n+\n+TEST_P(FileStorage_exact_type, mat_0d)\n```", + "output": "Same on 4.x branch, thanks. I will check" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/tflite/tflite_importer.cpp\n**Change Type:** modified\n**Context:** PR #26330: Modified TFLite parser for the new dnn engine\n**Review Line:** 82\n\n**Code Changes:**\n```diff\n \n void parseFusedActivation(const Operator& op, ActivationFunctionType activ);\n void parseActivation(const Operator& op, const std::string& opcode, LayerParams& layerParams, bool isFused);\n- void addLayer(LayerParams& layerParams, const Operator& op);\n- int addPermuteLayer(const std::vector& order, const std::string& permName, const std::pair& inpId, int dtype);\n+ void addLayer(LayerParams& layerParams, const Operator& op, bool additionalPreLayer = false, bool additionalPostLayer = false);\n+ void addLayer(LayerParams& layerParams, const std::vector& inputTensors, const std::vector& outputTensors);\n+ int addPermuteLayer(const std::vector& order, const std::string& permName, const std::pair& inpId, int dtype, int inpTensorId);\n int addReshapeLayer(const std::vector& shape, int axis, int num_axes,\n- const std::string& name, const std::pair& inpId, int dtype);\n- int addFlattenLayer(int axis, int end_axis, const std::string& name, const std::pair& inpId, int dtype);\n```", + "output": "@alexlyulkov may I ask please describe in few words what is the meaning of PreLayer/PostLayer?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/tflite/tflite_importer.cpp\n**Change Type:** modified\n**Context:** PR #26330: Modified TFLite parser for the new dnn engine\n**Review Line:** 82\n\n**Code Changes:**\n```diff\n \n void parseFusedActivation(const Operator& op, ActivationFunctionType activ);\n void parseActivation(const Operator& op, const std::string& opcode, LayerParams& layerParams, bool isFused);\n- void addLayer(LayerParams& layerParams, const Operator& op);\n- int addPermuteLayer(const std::vector& order, const std::string& permName, const std::pair& inpId, int dtype);\n+ void addLayer(LayerParams& layerParams, const Operator& op, bool additionalPreLayer = false, bool additionalPostLayer = false);\n+ void addLayer(LayerParams& layerParams, const std::vector& inputTensors, const std::vector& outputTensors);\n+ int addPermuteLayer(const std::vector& order, const std::string& permName, const std::pair& inpId, int dtype, int inpTensorId);\n int addReshapeLayer(const std::vector& shape, int axis, int num_axes,\n- const std::string& name, const std::pair& inpId, int dtype);\n- int addFlattenLayer(int axis, int end_axis, const std::string& name, const std::pair& inpId, int dtype);\n```", + "output": "@alexlyulkov, thanks! With proposed approach of fused layers naming user has a model with unexpected layers names. What it we keep original fused layer name at the last unfused layer chain but rename only pre-layers? 
```patch If a layer has a fused activation, I add additionalPostLayer and change input and output names this way: original: conv_relu(conv123, conv123_input, conv123_output) - new: conv(conv123, conv123_input, conv123_output_additional_post_layer) + relu(conv123_relu, conv1_output_additional_post_layer, conv123_output) + new: conv_relu_pre(conv123, conv123_input, conv123_output_additional_post_layer) + conv_relu(conv123_relu, conv1_output_additional_post_layer, conv123_output) ``` So the motivation is to have matching output tensors by names between OpenCV and TFLite for origin layers." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/videowriter.cpp\n**Change Type:** added\n**Context:** PR #25349: Videocapture samples cleanup cpp\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,118 @@\n+/**\n+ @file videowriter.cpp\n+ @brief A sample for VideoWriter and VideoCapture with options to specify video codec, fps and resolution\n+ @date April 05, 2024\n+*/\n+\n+\n+#include \n+#include \n```", + "output": "@asmorkalov Done" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_model.cpp\n**Change Type:** modified\n**Context:** PR #26056: New dnn engine\n**Review Line:** 757\n\n**Code Changes:**\n```diff\n testTextRecognitionModel(weightPath, \"\", imgPath, seq, decodeType, vocabulary, size, mean, scale);\n }\n \n-TEST_P(Test_Model, TextDetectionByDB)\n+// BUG: https://github.com/opencv/opencv/issues/26246\n+TEST_P(Test_Model, DISABLED_TextDetectionByDB)\n {\n applyTestTag(CV_TEST_TAG_DEBUG_VERYLONG);\n \n```", + "output": "@vpisarev says that it works with new engine." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_wasm.hpp\n**Change Type:** modified\n**Context:** PR #25909: Update intrin_wasm.hpp\n**Review Line:** 20\n\n**Code Changes:**\n```diff\n+// __EMSCRIPTEN_major__, __EMSCRIPTEN_minor__ and __EMSCRIPTEN_tiny__ are defined via commandline in\n+// https://github.com/emscripten-core/emscripten/blob/1690a5802cd1241adc9714fb7fa2f633d38860dc/tools/shared.py#L506-L515\n+//\n+// See https://github.com/opencv/opencv/pull/25909\n+#ifndef __EMSCRIPTEN_major__\n+#include \n+#endif\n+\n #define CV_SIMD128 1\n #define CV_SIMD128_64F 0 // Now all implementation of f64 use fallback, so disable it.\n #define CV_SIMD128_FP16 0\n```", + "output": "@opencv-alalek The local CLI arguments that were used were very different from the `javascript-simd` build configuration running in pullrequest.opencv.org - I'm cloning locally and resolving the issue. As it pertains (for Emscripten 2.0.13), ```zsh find ./emsdk/upstream -name \"version.h\" ``` and ```zsh grep -R \"version.h\" ./emsdk/upstream ``` turn up empty." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** platforms/linux/arm.toolchain.cmake\n**Change Type:** modified\n**Context:** PR #25901: RISC-V/AArch64: disable CPU features detection - This PR is the first step in fixing current issues with NEON/RVV, FP16, BF16 and other CPU...\n\n**Code Changes:**\n```diff\n@@ -48,32 +48,23 @@ if(NOT DEFINED ARM_LINUX_SYSROOT AND DEFINED GNU_MACHINE)\n set(ARM_LINUX_SYSROOT /usr/${GNU_MACHINE}${FLOAT_ABI_SUFFIX})\n endif()\n \n-if(NOT DEFINED CMAKE_CXX_FLAGS)\n- set(CMAKE_CXX_FLAGS \"\" CACHE INTERNAL \"\")\n- set(CMAKE_C_FLAGS \"\" CACHE INTERNAL \"\")\n- set(CMAKE_SHARED_LINKER_FLAGS \"\" CACHE INTERNAL \"\")\n- set(CMAKE_MODULE_LINKER_FLAGS \"\" CACHE INTERNAL \"\")\n- set(CMAKE_EXE_LINKER_FLAGS \"\" CACHE INTERNAL \"\")\n```", + "output": "For native compilation we can: * leave it to user - set `CMAKE_CXX_FLAGS` or write your own cmake toolchain file * make our own cmake toolchain files similar to those used for cross-compilation (feature controlling flags should be the same) BTW, we don't handle `-mcpu=native` (equivalent to `-march=native -mtune=native` if I understand correctly). Example from Armbian: ``` orangepi@orangepi5:~$ gcc -Q --help=target | egrep 'mtune|mcpu|march' -march= armv8-a -mcpu= generic -mtune= generic orangepi@orangepi5:~$ gcc -march=native -Q --help=target | egrep 'mtune|mcpu|march' -march= armv8.2-a+crypto+fp16+rcpc+dotprod -mcpu= generic -mtune= generic orangepi@orangepi5:~$ gcc -mcpu=native -Q --help=target | egrep 'mtune|mcpu|march' -march= armv8.2-a -mcpu= cortex-a76.cortex-a55+crypto -mtune= cortex-a76.cortex-a55 ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/perf/opencl/perf_imgwarp.cpp\n**Change Type:** modified\n**Context:** PR #25984: imgproc: add optimized warpAffine kernels for 8U/16U/32F + C1/C3/C4 inputs\n**Review Line:** 78\n\n**Code Changes:**\n```diff\n const int type = get<1>(params), interpolation = get<2>(params);\n- const double eps = CV_MAT_DEPTH(type) <= CV_32S ? 1 : interpolation == INTER_CUBIC ? 2e-3 : 1e-4;\n+\n+ // BUG: OpenCL and CPU version diverges a bit\n+ // Ticket: https://github.com/opencv/opencv/issues/26235\n+ const double eps = CV_MAT_DEPTH(type) <= CV_32S ? 2 : interpolation == INTER_CUBIC ? 2e-3 : 3e-2;\n \n checkDeviceMaxMemoryAllocSize(srcSize, type);\n \n```", + "output": "It is tuned because CI hosts, which are connected via GHA, can build opencv with opencl, but somehow opencl is not available in the runtime. Guess we need to pass GPU device when running docker. cc @asmorkalov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/perf/opencl/perf_imgwarp.cpp\n**Change Type:** modified\n**Context:** PR #25984: imgproc: add optimized warpAffine kernels for 8U/16U/32F + C1/C3/C4 inputs\n**Review Line:** 78\n\n**Code Changes:**\n```diff\n const int type = get<1>(params), interpolation = get<2>(params);\n- const double eps = CV_MAT_DEPTH(type) <= CV_32S ? 1 : interpolation == INTER_CUBIC ? 2e-3 : 1e-4;\n+\n+ // BUG: OpenCL and CPU version diverges a bit\n+ // Ticket: https://github.com/opencv/opencv/issues/26235\n+ const double eps = CV_MAT_DEPTH(type) <= CV_32S ? 2 : interpolation == INTER_CUBIC ? 2e-3 : 3e-2;\n \n checkDeviceMaxMemoryAllocSize(srcSize, type);\n \n```", + "output": "@asmorkalov, the new algorithm and the old algorithm are still different, even with `algorithmHint==ALGO_HINT_ACCURATE`. 
So I don't see anything criminal with raising eps from 1 to 2" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_drawing.cpp\n**Change Type:** modified\n**Context:** PR #26187: Fix fillPoly drawing over boundaries\n\n**Code Changes:**\n```diff\n@@ -680,6 +680,75 @@ TEST(Drawing, fillpoly_circle)\n EXPECT_LT(diff_fp3, 1.);\n }\n \n+TEST(Drawing, fillpoly_contours)\n+{\n+ const int imgSize = 50;\n+ const int type = CV_8UC1;\n+ const int shift = 0;\n+ const Scalar cl = Scalar::all(255);\n```", + "output": "I adjusted the tests, thank you for the example. Should be a good amount of tests now." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_math.hpp\n**Change Type:** modified\n**Context:** PR #25891: Resolve Compilation Error for v_func Function in SIMD Emulator\n\n**Code Changes:**\n```diff\n@@ -2,10 +2,6 @@\n // It is subject to the license terms in the LICENSE file found in the top-level directory\n // of this distribution and at http://opencv.org/license.html\n \n-// This header is not standalone. Don't include directly, use \"intrin.hpp\" instead.\n-#ifdef OPENCV_HAL_INTRIN_HPP // defined in intrin.hpp\n-\n-namespace CV__SIMD_NAMESPACE {\n \n /* Universal Intrinsics implementation of sin, cos, exp and log\n```", + "output": "@asmorkalov How is `VPrefix` (e.g., v256_setall_f32, v512_setall_f32) set depending on the platform? The function implementation cannot utilize templates." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_math.hpp\n**Change Type:** modified\n**Context:** PR #25891: Resolve Compilation Error for v_func Function in SIMD Emulator\n\n**Code Changes:**\n```diff\n@@ -2,10 +2,6 @@\n // It is subject to the license terms in the LICENSE file found in the top-level directory\n // of this distribution and at http://opencv.org/license.html\n \n-// This header is not standalone. Don't include directly, use \"intrin.hpp\" instead.\n-#ifdef OPENCV_HAL_INTRIN_HPP // defined in intrin.hpp\n-\n-namespace CV__SIMD_NAMESPACE {\n \n /* Universal Intrinsics implementation of sin, cos, exp and log\n```", + "output": "I see. Thanks a lot for the investigation!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_math.hpp\n**Change Type:** modified\n**Context:** PR #25891: Resolve Compilation Error for v_func Function in SIMD Emulator\n\n**Code Changes:**\n```diff\n@@ -2,10 +2,6 @@\n // It is subject to the license terms in the LICENSE file found in the top-level directory\n // of this distribution and at http://opencv.org/license.html\n \n-// This header is not standalone. Don't include directly, use \"intrin.hpp\" instead.\n-#ifdef OPENCV_HAL_INTRIN_HPP // defined in intrin.hpp\n-\n-namespace CV__SIMD_NAMESPACE {\n \n /* Universal Intrinsics implementation of sin, cos, exp and log\n```", + "output": "@mshabunin The code will generate v_func for all backends. So when the user passes the widest vector variable, it will call the widest implementation version. (vx_setall_f32 will set the widest vector)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.cpp\n**Change Type:** modified\n**Context:** PR #25326: Improved and refactored text detection sample in dnn module\n\n**Code Changes:**\n```diff\n@@ -1,21 +1,28 @@\n /*\n- Text detection model: https://github.com/argman/EAST\n- Download link: https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1\n+ Text detection model (EAST): https://github.com/argman/EAST\n+ Download link for EAST model: https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1\n \n- Text recognition models can be downloaded directly here:\n+ DB detector model:\n+ https://drive.google.com/uc?export=download&id=17_ABp79PlFt9yPCxSaarVc_DKTmrSGGf\n```", + "output": "@dkurt Other parameters can be different if someone uses EAST model trained on custom dataset. We can add default values to the constructor. I propose that the inputParams call stays in the sample to show its usage or we change it to setInputSize() and add a comment about using inputParams if its a EAST model trained on a custom dataset" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.py\n**Change Type:** modified\n**Context:** PR #25326: Improved and refactored text detection sample in dnn module\n\n**Code Changes:**\n```diff\n@@ -1,55 +1,82 @@\n '''\n- Text detection model: https://github.com/argman/EAST\n- Download link: https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1\n+ Text detection model (EAST): https://github.com/argman/EAST\n+ Download link for EAST model: https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1\n \n- CRNN Text recognition model taken from here: https://github.com/meijieru/crnn.pytorch\n- How to convert from pb to onnx:\n- Using classes from here: https://github.com/meijieru/crnn.pytorch/blob/master/models/crnn.py\n```", + "output": "@asmorkalov The model option is added using the models.yml via args.zoo ```python args, _ = parser.parse_known_args() add_preproc_args(args.zoo, parser, 'text_detection') ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.py\n**Change Type:** modified\n**Context:** PR #25326: Improved and refactored text detection sample in dnn module\n**Review Line:** 119\n\n**Code Changes:**\n```diff\n+ stdImgSize = 512\n+ imgWidth = min(frame.shape[:2])\n+ fontSize = (stdSize*imgWidth)/stdImgSize\n+ fontThickness = max(1,(stdWeight*imgWidth)//stdImgSize)\n+\n+ if(args.alias == \"DB\"):\n+ # DB Detector initialization\n+ detector = cv2.dnn_TextDetectionModel_DB(args.model)\n+ detector.setBinaryThreshold(args.binary_threshold)\n+ detector.setPolygonThreshold(args.polygon_threshold)\n+ detector.setUnclipRatio(args.unclip_ratio)\n```", + "output": "@asmorkalov The alias option is added using the models.yml via args.zoo ```python args, _ = parser.parse_known_args() add_preproc_args(args.zoo, parser, 'text_detection') ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.py\n**Change Type:** modified\n**Context:** PR #25326: Improved and refactored text detection sample in dnn module\n**Review Line:** 119\n\n**Code Changes:**\n```diff\n+ stdImgSize = 512\n+ imgWidth = min(frame.shape[:2])\n+ fontSize = (stdSize*imgWidth)/stdImgSize\n+ fontThickness = max(1,(stdWeight*imgWidth)//stdImgSize)\n+\n+ if(args.alias == \"DB\"):\n+ # DB Detector initialization\n+ detector = cv2.dnn_TextDetectionModel_DB(args.model)\n+ detector.setBinaryThreshold(args.binary_threshold)\n+ detector.setPolygonThreshold(args.polygon_threshold)\n+ detector.setUnclipRatio(args.unclip_ratio)\n```", + "output": "@asmorkalov We need to provide the detection model name and path to recognition model. 
I have updated the usage and made the path to the recognition model required `python text_detection.py DB --ocr=`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVCompilerOptimizations.cmake\n**Change Type:** modified\n**Context:** PR #26170: 5.x merge 4.x - OpenCV Extra: https://github.com/opencv/opencv_extra/pull/1210 Contrib: https://github.com/opencv/opencv_contrib/pull/3795 ...\n**Review Line:** 397\n\n**Code Changes:**\n```diff\n ocv_update(CPU_KNOWN_OPTIMIZATIONS \"MSA\")\n ocv_update(CPU_MSA_FLAGS_ON \"-mmsa\")\n set(CPU_BASELINE \"DETECT\" CACHE STRING \"${HELP_CPU_BASELINE}\")\n+\n elseif(PPC64LE)\n+\n ocv_update(CPU_KNOWN_OPTIMIZATIONS \"VSX;VSX3\")\n ocv_update(CPU_VSX_TEST_FILE \"${OpenCV_SOURCE_DIR}/cmake/checks/cpu_vsx.cpp\")\n ocv_update(CPU_VSX3_TEST_FILE \"${OpenCV_SOURCE_DIR}/cmake/checks/cpu_vsx3.cpp\")\n@@ -390,9 +415,6 @@ elseif(PPC64LE)\n set(CPU_BASELINE \"VSX\" CACHE STRING \"${HELP_CPU_BASELINE}\")\n```", + "output": "@mshabunin Fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/edge_detection.cpp\n**Change Type:** added\n**Context:** PR #25515: Improved edge detection sample\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,242 @@\n+#include \n+#include \n+#include \n+#include \n+#include \n+#include \n+#include \n+\n+#include \"common.hpp\"\n```", + "output": "There is issue with command line parameters handling. 
Invocation without params leads to sigsegv: ``` [ WARN:0@0.002] global samples.cpp:61 findFile cv::samples::findFile('baboon.jpg') => '/home/alexander/Projects/OpenCV/opencv-next-build/../opencv-next/samples/data/baboon.jpg' terminate called after throwing an instance of 'cv::Exception' what(): OpenCV(5.0.0-pre) /home/alexander/Projects/OpenCV/opencv-next/modules/dnn/src/onnx/onnx_importer.cpp:278: error: (-5:Bad argument) Can't read ONNX file: in function 'ONNXImporter' Abort (core dumped) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/int8layers/quantization_utils.cpp\n**Change Type:** modified\n**Context:** PR #25515: Improved edge detection sample\n\n**Code Changes:**\n```diff\n@@ -41,7 +41,7 @@ static void broadcast1D2TargetMat(Mat& data, const MatShape& targetShape, int ax\n static void block_repeat(InputArray src, const MatShape& srcShape, int axis, int repetitions, OutputArray dst)\n {\n CV_Assert(src.getObj() != dst.getObj());\n- CV_Check(axis, axis >= 0 && axis < src.dims(), \"Axis out of range\");\n+ CV_Check(axis, axis >= 0 && (axis < src.dims() || (src.dims()==1 && axis==1)), \"axis is out of range\"); // (src.dims()==1 && axis==1) has been added as a temporary fix for quantized models. Refer issue https://github.com/opencv/opencv_zoo/issues/273\n CV_CheckGT(repetitions, 1, \"More than one repetition expected\");\n \n Mat src_mat = src.getMat();\n```", + "output": "@fengyuentau, yes, this is hack, because the generated model contains incorrect, out-of-range axis specification. The script needs to be fixed and then dexined model should be regenerated" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/int8layers/quantization_utils.cpp\n**Change Type:** modified\n**Context:** PR #25515: Improved edge detection sample\n\n**Code Changes:**\n```diff\n@@ -41,7 +41,7 @@ static void broadcast1D2TargetMat(Mat& data, const MatShape& targetShape, int ax\n static void block_repeat(InputArray src, const MatShape& srcShape, int axis, int repetitions, OutputArray dst)\n {\n CV_Assert(src.getObj() != dst.getObj());\n- CV_Check(axis, axis >= 0 && axis < src.dims(), \"Axis out of range\");\n+ CV_Check(axis, axis >= 0 && (axis < src.dims() || (src.dims()==1 && axis==1)), \"axis is out of range\"); // (src.dims()==1 && axis==1) has been added as a temporary fix for quantized models. Refer issue https://github.com/opencv/opencv_zoo/issues/273\n CV_CheckGT(repetitions, 1, \"More than one repetition expected\");\n \n Mat src_mat = src.getMat();\n```", + "output": "Ok, then @gursimarsingh could you add a comment for this?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #26073: (5.x) Merge 4.x - Contrib: https://github.com/opencv/opencv_contrib/pull/3783 Extra: https://github.com/opencv/opencv_extra/pull/1205 #2327...\n**Review Line:** 3302\n\n**Code Changes:**\n```diff\n+ net.forward(outputs, std::vector{\"values\", \"indices\"});\n+\n+ Mat output_res_val = outputs.front(),\n+ output_res_ind = outputs.back();\n+\n+ output_ref_ind.convertTo(output_ref_ind, CV_32F); // TODO: revise this conversion in 5.x\n+\n+ normAssert(output_ref_val, output_res_val, (basename + \" values\").c_str(), l1 ? l1 : default_l1, lInf ? lInf : default_lInf);\n+ normAssert(output_ref_ind, output_res_ind, (basename + \" indices\").c_str(), l1 ? 
l1 : default_l1, lInf ? lInf : default_lInf);\n+\n+ expectNoFallbacksFromIE(net);\n```", + "output": "@fengyuentau Could you take a look on TopK behavior in 5.x? There was TODO to remove the conversion. I removed the original one, by have to add another one to fix test. Both `output_ref_ind` and `output_res_ind` are FP32 now." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #25832: Add support for QNX\n\n**Code Changes:**\n```diff\n@@ -767,7 +767,7 @@ if(UNIX OR MINGW)\n elseif(EMSCRIPTEN)\n # no need to link to system libs with emscripten\n elseif(QNXNTO)\n- set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} m)\n+ set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} m regex)\n elseif(MINGW)\n set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} pthread)\n else()\n```", + "output": "Hello, thanks for taking time to review my PR. For QNX, we are cross-compiling, so the test binaries cannot be run on the same machine they were built on. We mostly use Ubuntu to build the binaries. So we need to install the test binaries to our SDP (Software Development Platform which is like an SDK with headers and libraries for QNX) so that they can be conveniently moved over to a QNX target." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #25832: Add support for QNX\n\n**Code Changes:**\n```diff\n@@ -767,7 +767,7 @@ if(UNIX OR MINGW)\n elseif(EMSCRIPTEN)\n # no need to link to system libs with emscripten\n elseif(QNXNTO)\n- set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} m)\n+ set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} m regex)\n elseif(MINGW)\n set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} pthread)\n else()\n```", + "output": "Thanks! I removed the QNX installation, and it's now using `INSTALL_TESTS` to do that." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindLibsGrfmt.cmake\n**Change Type:** modified\n**Context:** PR #25832: Add support for QNX\n**Review Line:** 30\n\n**Code Changes:**\n```diff\n endif()\n- find_package(ZLIB \"${MIN_VER_ZLIB}\")\n+ if(QNX)\n+ set(ZLIB_FOUND TRUE)\n+ set(ZLIB_LIBRARY z)\n+ set(ZLIB_LIBRARIES z)\n+ else()\n+ find_package(ZLIB \"${MIN_VER_ZLIB}\")\n+ endif()\n if(ANDROID)\n set(CMAKE_FIND_LIBRARY_SUFFIXES ${_zlib_ORIG_CMAKE_FIND_LIBRARY_SUFFIXES})\n```", + "output": "`z` works. Thanks for the suggestion." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #25832: Add support for QNX\n\n**Code Changes:**\n```diff\n@@ -767,7 +767,7 @@ if(UNIX OR MINGW)\n elseif(EMSCRIPTEN)\n # no need to link to system libs with emscripten\n elseif(QNXNTO)\n- set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} m)\n+ set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} m regex)\n elseif(MINGW)\n set(OPENCV_LINKER_LIBS ${OPENCV_LINKER_LIBS} pthread)\n else()\n```", + "output": "Thanks, I removed this and passed `OPENCV_PYTHON_INSTALL_PATH` via toolchain." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/classification.cpp\n**Change Type:** modified\n**Context:** PR #25519: Improved classification sample\n\n**Code Changes:**\n```diff\n@@ -8,206 +8,256 @@\n \n #include \"common.hpp\"\n \n-std::string param_keys =\n- \"{ help h | | Print help message. }\"\n- \"{ @alias | | An alias name of model to extract preprocessing parameters from models.yml file. }\"\n- \"{ zoo | models.yml | An optional path to file with preprocessing parameters }\"\n- \"{ input i | | Path to input image or video file. Skip this argument to capture frames from a camera.}\"\n- \"{ initial_width | 0 | Preprocess input image by initial resizing to a specific width.}\"\n```", + "output": "@asmorkalov VideoCapture does support sequences of images, but it expects them to be in a numerically ordered format (e.g., image1.jpg, image2.jpg). It cannot handle images with unordered prefixes (e.g., ab.jpg, bc.jpg, xyz.jpg). For cases where images are not named in a numerical sequence, this was implemented to read and process these images from a directory." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/classification.cpp\n**Change Type:** modified\n**Context:** PR #25519: Improved classification sample\n\n**Code Changes:**\n```diff\n@@ -8,206 +8,256 @@\n \n #include \"common.hpp\"\n \n-std::string param_keys =\n- \"{ help h | | Print help message. }\"\n- \"{ @alias | | An alias name of model to extract preprocessing parameters from models.yml file. }\"\n- \"{ zoo | models.yml | An optional path to file with preprocessing parameters }\"\n- \"{ input i | | Path to input image or video file. Skip this argument to capture frames from a camera.}\"\n- \"{ initial_width | 0 | Preprocess input image by initial resizing to a specific width.}\"\n```", + "output": "fixed @asmorkalov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/common.hpp\n**Change Type:** modified\n**Context:** PR #25519: Improved classification sample\n\n**Code Changes:**\n```diff\n@@ -1,5 +1,5 @@\n #include \n-\n+#include\n using namespace cv;\n \n std::string genArgument(const std::string& argName, const std::string& help,\n@@ -10,6 +10,41 @@ std::string genPreprocArguments(const std::string& modelName, const std::string&\n \n std::string findFile(const std::string& filename);\n```", + "output": "updated @asmorkalov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/classification.cpp\n**Change Type:** modified\n**Context:** PR #25519: Improved classification sample\n\n**Code Changes:**\n```diff\n@@ -8,206 +8,256 @@\n \n #include \"common.hpp\"\n \n-std::string param_keys =\n- \"{ help h | | Print help message. 
}\"\n- \"{ @alias | | An alias name of model to extract preprocessing parameters from models.yml file. }\"\n- \"{ zoo | models.yml | An optional path to file with preprocessing parameters }\"\n- \"{ input i | | Path to input image or video file. Skip this argument to capture frames from a camera.}\"\n- \"{ initial_width | 0 | Preprocess input image by initial resizing to a specific width.}\"\n```", + "output": "@asmorkalov Added string list. Can be used with --imglist parser option." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/nary_eltwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #25630: dnn: parallelize nary elementwise forward implementation & enable related conformance tests\n\n**Code Changes:**\n```diff\n@@ -44,13 +44,11 @@ class NaryEltwiseHelper CV_FINAL\n std::vector all_ndims;\n std::vector> orig_shapes;\n std::vector> orig_steps;\n- std::vector ptrs;\n std::vector> shapes;\n std::vector> steps;\n std::vector elemsize;\n \n- NaryEltwiseHelper() {\n```", + "output": "Thank you for review but take it easy, this pr is still drafting. I still remember our discussion." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/int8layers/quantization_utils.cpp\n**Change Type:** modified\n**Context:** PR #25644: [GSoC] dnn: Blockwise quantization support\n\n**Code Changes:**\n```diff\n@@ -15,7 +15,10 @@ namespace dnn\n static void broadcast1D2TargetMat(Mat& data, const MatShape& targetShape, int axis)\n {\n // The data is the 1-D scales or zeropoints.\n- CV_Assert(axis >= 0 && targetShape.size() > axis && data.total() == targetShape[axis]);\n+ CV_CheckGE(axis, 0, \"Quantization axis must be non-negative.\");\n+ CV_CheckGT((int)targetShape.size(),axis,\"Quantization axis must be within the valid range of target shape dimensions.\");\n+ CV_CheckEQ((int)data.total(), (int)targetShape[axis], \"Data total size must match the size of the specified target dimension.\");\n+\n std::vector broadcast_axes;\n```", + "output": "Thank you for the suggestion. I can see from the documentation `cv::Mat::copyTo` requires the destination to be a `cv::Mat`, but in this case the target is a `std::vector`. Is there anything I'm missing?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/int8layers/quantization_utils.cpp\n**Change Type:** modified\n**Context:** PR #25644: [GSoC] dnn: Blockwise quantization support\n\n**Code Changes:**\n```diff\n@@ -15,7 +15,10 @@ namespace dnn\n static void broadcast1D2TargetMat(Mat& data, const MatShape& targetShape, int axis)\n {\n // The data is the 1-D scales or zeropoints.\n- CV_Assert(axis >= 0 && targetShape.size() > axis && data.total() == targetShape[axis]);\n+ CV_CheckGE(axis, 0, \"Quantization axis must be non-negative.\");\n+ CV_CheckGT((int)targetShape.size(),axis,\"Quantization axis must be within the valid range of target shape dimensions.\");\n+ CV_CheckEQ((int)data.total(), (int)targetShape[axis], \"Data total size must match the size of the specified target dimension.\");\n+\n std::vector broadcast_axes;\n```", + "output": "Done! Thank you for the suggestion." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/elementwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #25881: dnn: optimize activations with v_exp\n\n**Code Changes:**\n```diff\n@@ -859,12 +859,6 @@ struct GeluFunctor : public BaseFunctor {\n one = vx_setall_f32(1.0f),\n reciprocal_sqrt2 = vx_setall_f32(M_SQRT1_2);\n for (; i <= len - vlanes; i += vlanes) {\n- if (i + vlanes > len) {\n- if (i == 0 || i == len) {\n- break;\n- }\n- i = len - vlanes;\n- }\n```", + "output": "Got accuracy problem with `v_exp`. It goes to `nan` with the second batch for some reason. Tested with the image in the second batch, it still goes no detection (nan). cc @vpisarev @WanliZhong" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/elementwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #25881: dnn: optimize activations with v_exp\n\n**Code Changes:**\n```diff\n@@ -859,12 +859,6 @@ struct GeluFunctor : public BaseFunctor {\n one = vx_setall_f32(1.0f),\n reciprocal_sqrt2 = vx_setall_f32(M_SQRT1_2);\n for (; i <= len - vlanes; i += vlanes) {\n- if (i + vlanes > len) {\n- if (i == 0 || i == len) {\n- break;\n- }\n- i = len - vlanes;\n- }\n```", + "output": "Need to talk to @vpisarev because it is his idea to put `if` checks into the loops. Problem is spreading over codebase." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/elementwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #25881: dnn: optimize activations with v_exp\n\n**Code Changes:**\n```diff\n@@ -859,12 +859,6 @@ struct GeluFunctor : public BaseFunctor {\n one = vx_setall_f32(1.0f),\n reciprocal_sqrt2 = vx_setall_f32(M_SQRT1_2);\n for (; i <= len - vlanes; i += vlanes) {\n- if (i + vlanes > len) {\n- if (i == 0 || i == len) {\n- break;\n- }\n- i = len - vlanes;\n- }\n```", + "output": "Discussed with @vpisarev and we decided to drop halide trick for now since elementwise layer is operated in-place. Also removing implementations that causes performance degradation (elu, selu)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/include/opencv2/videoio.hpp\n**Change Type:** modified\n**Context:** PR #25874: `videoio`: fix `cv::VideoWriter` with FFmpeg encapsulation timestamps\n\n**Code Changes:**\n```diff\n@@ -211,6 +211,8 @@ enum VideoCaptureProperties {\n CAP_PROP_CODEC_EXTRADATA_INDEX = 68, //!< Positive index indicates that returning extra data is supported by the video back end. This can be retrieved as cap.retrieve(data, ). E.g. When reading from a h264 encoded RTSP stream, the FFmpeg backend could return the SPS and/or PPS if available (if sent in reply to a DESCRIBE request), from calls to cap.retrieve(data, ).\n CAP_PROP_FRAME_TYPE = 69, //!< (read-only) FFmpeg back-end only - Frame type ascii code (73 = 'I', 80 = 'P', 66 = 'B' or 63 = '?' if unknown) of the most recently read frame.\n CAP_PROP_N_THREADS = 70, //!< (**open-only**) Set the maximum number of threads to use. Use 0 to use as many threads as CPU cores (applicable for FFmpeg back-end only).\n+ CAP_PROP_PTS = 71, //!< (read-only) FFmpeg back-end only - presentation timestamp of the most recently read frame using the FPS time base. e.g. fps = 25, VideoCapture::get(\\ref CAP_PROP_PTS) = 3, presentation time = 3/25 seconds.\n+ CAP_PROP_DTS_DELAY = 72, //!< (read-only) FFmpeg back-end only - maximum difference between presentation (pts) and decompression timestamps (dts) using FPS time base. e.g. delay is maximum when frame_num = 0, if true, VideoCapture::get(\\ref CAP_PROP_PTS) = 0 and VideoCapture::get(\\ref CAP_PROP_DTS_DELAY) = 2, dts = -2. Non zero values usually imply the stream is encoded using B-frames which are not decoded in presentation order.\n #ifndef CV_DOXYGEN\n CV__CAP_PROP_LATEST\n #endif\n@@ -230,8 +232,10 @@ enum VideoWriterProperties {\n```", + "output": "@asmorkalov I've updated VideoCapture to expose the required properties." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/java/generator/src/cpp/listconverters.cpp\n**Change Type:** modified\n**Context:** PR #25868: Add sample for GPT2 inference\n**Review Line:** 128\n\n**Code Changes:**\n```diff\n+ env->SetIntArrayRegion(element, 0, (jint)vs[i].size(), (const jint*)&vs[i][0]);\n+ env->CallBooleanMethod(list, m_add, element);\n+ env->DeleteLocalRef(element);\n+ }\n+}\n+#endif // HAVE_OPENCV_DNN\n```", + "output": "@asmorkalov, ``` listconverters.cpp Creating library C:/build/precommit_windows64/build/lib/Release/opencv_xstereo500.lib and object C:/build/precommit_windows64/build/lib/Release/opencv_xstereo500.exp C:\\build\\precommit_windows64\\5.x\\opencv\\modules\\java\\generator\\src\\cpp\\listconverters.cpp(123): error C2664: 'void JNIEnv_::SetIntArrayRegion(jintArray,jsize,jsize,const jint *)': cannot convert argument 4 from 'int *' to 'const jint *' [C:\\build\\precommit_windows64\\build\\modules\\java\\jni\\opencv_java.vcxproj] C:\\build\\precommit_windows64\\5.x\\opencv\\modules\\java\\generator\\src\\cpp\\listconverters.cpp(123): note: Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast opencv_java.cpp ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/app/orbbec_uvc.markdown\n**Change Type:** added\n**Context:** PR #25907: Add tutorial on using Orbbec 3D cameras (UVC)\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,126 @@\n+Using Orbbec 3D cameras (UVC) {#tutorial_orbbec_uvc}\n+====================================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_orbbec_astra_openni}\n+@next_tutorial{tutorial_intelperc}\n+\n+| | |\n```", + "output": "Thanks! I have fixed the two problems you mentioned above." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/app/orbbec_uvc.markdown\n**Change Type:** added\n**Context:** PR #25907: Add tutorial on using Orbbec 3D cameras (UVC)\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,126 @@\n+Using Orbbec 3D cameras (UVC) {#tutorial_orbbec_uvc}\n+====================================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_orbbec_astra_openni}\n+@next_tutorial{tutorial_intelperc}\n+\n+| | |\n```", + "output": "Thanks! It has been removed now." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/app/orbbec_uvc.markdown\n**Change Type:** added\n**Context:** PR #25907: Add tutorial on using Orbbec 3D cameras (UVC)\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,126 @@\n+Using Orbbec 3D cameras (UVC) {#tutorial_orbbec_uvc}\n+====================================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_orbbec_astra_openni}\n+@next_tutorial{tutorial_intelperc}\n+\n+| | |\n```", + "output": "Thanks a lot! I have squashed the commites into one now." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_layers_1d.cpp\n**Change Type:** modified\n**Context:** PR #25101: 1D test for Reduce layer\n\n**Code Changes:**\n```diff\n@@ -480,6 +480,128 @@ INSTANTIATE_TEST_CASE_P(/*nothing*/, Layer_Scatter_Test, Combine(\n \n \n \n+typedef testing::TestWithParam, std::string, int>> Layer_Reduce_Test;\n+TEST_P(Layer_Reduce_Test, Accuracy_01D)\n+{\n+ auto reduceOperation = [](const cv::Mat& input, const std::string& operation, int axis) -> cv::Mat {\n+ // Initialize result matrix\n+ cv::Mat result;\n```", + "output": "@Abdurrahheem, please move to the common loop above" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_layers_1d.cpp\n**Change Type:** modified\n**Context:** PR #25101: 1D test for Reduce layer\n\n**Code Changes:**\n```diff\n@@ -480,6 +480,128 @@ INSTANTIATE_TEST_CASE_P(/*nothing*/, Layer_Scatter_Test, Combine(\n \n \n \n+typedef testing::TestWithParam, std::string, int>> Layer_Reduce_Test;\n+TEST_P(Layer_Reduce_Test, Accuracy_01D)\n+{\n+ auto reduceOperation = [](const cv::Mat& input, const std::string& operation, int axis) -> cv::Mat {\n+ // Initialize result matrix\n+ cv::Mat result;\n```", + "output": "Should be square in both cases: ```python @ost.script() def layer_sum_square(x: ost.FLOAT[1, 4]) -> ost.FLOAT[1, 4]: return opset18.ReduceSumSquare(x, axes=[0], keepdims=1) make_model_and_data(layer_sum_square, np.random.rand(1, 4).astype(np.float32)) ``` ``` input [[0.5488135 0.71518934 0.60276335 0.5448832 ]] output [[0.30119628 0.51149577 0.36332366 0.29689768]] ``` ```python @ost.script() def layer_sum_square(x: ost.FLOAT[4, 1]) -> ost.FLOAT[4, 1]: return 
opset18.ReduceSumSquare(x, axes=[1], keepdims=1) make_model_and_data(layer_sum_square, np.random.rand(4, 1).astype(np.float32)) ``` ``` input [[0.5488135 ] [0.71518934] [0.60276335] [0.5448832 ]] output [[0.30119628] [0.51149577] [0.36332366] [0.29689768]] ``` source: https://github.com/opencv/opencv_extra/blob/4.x/testdata/dnn/onnx/generate_onnx_models_with_onnxscript.py" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/ie_ngraph.cpp\n**Change Type:** modified\n**Context:** PR #25199: Fix for OpenVINO 2024.0. OpenVINO 2022.1 as minimal supported version.\n\n**Code Changes:**\n```diff\n@@ -14,7 +14,7 @@\n #include \n \n #ifdef HAVE_DNN_NGRAPH\n-#include \n+#include \n #endif // HAVE_DNN_NGRAPH\n \n #include \n@@ -35,36 +35,6 @@ static bool DNN_IE_SERIALIZE = utils::getConfigurationParameterBool(\"OPENCV_DNN_\n```", + "output": "@dkurt are you OK to drop releases older 2022.1 ? to keep API 2.0 only" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/ie_ngraph.cpp\n**Change Type:** modified\n**Context:** PR #25199: Fix for OpenVINO 2024.0. OpenVINO 2022.1 as minimal supported version.\n\n**Code Changes:**\n```diff\n@@ -14,7 +14,7 @@\n #include \n \n #ifdef HAVE_DNN_NGRAPH\n-#include \n+#include \n #endif // HAVE_DNN_NGRAPH\n \n #include \n@@ -35,36 +35,6 @@ static bool DNN_IE_SERIALIZE = utils::getConfigurationParameterBool(\"OPENCV_DNN_\n```", + "output": "I don't mind. Created an issue here: https://github.com/opencv/opencv/issues/25207. 
This way https://pullrequest.opencv.org/buildbot/builders/4_x_openvino-opencl-skl-lin64 should be moved to 2022.1 or newer (/cc @asmorkalov, @opencv-alalek) I only recommend to do this in a separate PR" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/ie_ngraph.cpp\n**Change Type:** modified\n**Context:** PR #25199: Fix for OpenVINO 2024.0. OpenVINO 2022.1 as minimal supported version.\n\n**Code Changes:**\n```diff\n@@ -14,7 +14,7 @@\n #include \n \n #ifdef HAVE_DNN_NGRAPH\n-#include \n+#include \n #endif // HAVE_DNN_NGRAPH\n \n #include \n@@ -35,36 +35,6 @@ static bool DNN_IE_SERIALIZE = utils::getConfigurationParameterBool(\"OPENCV_DNN_\n```", + "output": "Sure, @olpipi, will you handle OpenVINO API 1.0 removal? Please keep 2022.1 is a minimal supported version." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin.hpp\n**Change Type:** modified\n**Context:** PR #24941: Add support for v_exp (exponential)\n\n**Code Changes:**\n```diff\n@@ -1239,6 +1239,7 @@ namespace CV__SIMD_NAMESPACE {\n #define CV_SIMD 0\n #endif\n \n+#include \"intrin_math.hpp\"\n #include \"simd_utils.impl.hpp\"\n \n #ifndef CV_DOXYGEN\n```", + "output": "@WanliZhong, we _must_ give credits to the original author. Please, add the original license for this implementation, as it was done here: https://github.com/vpisarev/ficus/blob/3c9a8b78f49e17489c5e1fd6dd5dd487348c99c2/lib/NN/ficus_nn_common.h#L216. 
Together with the license, please, put the link to https://github.com/aff3ct/MIPP/blob/master/src/math/neon_mathfun.h" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_intrin_utils.hpp\n**Change Type:** modified\n**Context:** PR #24941: Add support for v_exp (exponential)\n**Review Line:** 1702\n\n**Code Changes:**\n```diff\n@@ -1698,6 +1698,103 @@ template struct TheTest\n return *this;\n }\n \n+ void __test_exp(LaneType dataMax, LaneType diff_thr, LaneType enlarge_factor, LaneType flt_min) {\n+ int n = VTraits::vlanes();\n+\n+ // Test overflow and underflow values with step\n+ const LaneType step = (LaneType) 0.01;\n+ for (LaneType i = dataMax + 1; i <= dataMax + 11;) {\n+ Data dataUpperBound, dataLowerBound, resOverflow, resUnderflow;\n```", + "output": "Thanks, I will add tests for these corner cases." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin.hpp\n**Change Type:** modified\n**Context:** PR #24941: Add support for v_exp (exponential)\n\n**Code Changes:**\n```diff\n@@ -1239,6 +1239,7 @@ namespace CV__SIMD_NAMESPACE {\n #define CV_SIMD 0\n #endif\n \n+#include \"intrin_math.hpp\"\n #include \"simd_utils.impl.hpp\"\n \n #ifndef CV_DOXYGEN\n```", + "output": "1. The constants will only used in the `v_exp` function currently. Maybe some of them will be used in other functions like `v_sin` and `v_cos`? Should I move them to a separate file as global variables? I just want them to be created when they are used for the first time. 2. There is already a code in intrin_cpp.hpp, how to test it? ```c++ //! @brief Helper macro //! 
@ingroup core_hal_intrin_impl #define OPENCV_HAL_IMPL_MATH_FUNC(func, cfunc, _Tp2) \\ template inline v_reg<_Tp2, n> func(const v_reg<_Tp, n>& a) \\ { \\ v_reg<_Tp2, n> c; \\ for( int i = 0; i < n; i++ ) \\ c.s[i] = cfunc(a.s[i]); \\ return c; \\ } /** @brief Square root of elements Only for floating point types.*/ OPENCV_HAL_IMPL_MATH_FUNC(v_sqrt, std::sqrt, _Tp) //! @cond IGNORED OPENCV_HAL_IMPL_MATH_FUNC(v_sin, std::sin, _Tp) OPENCV_HAL_IMPL_MATH_FUNC(v_cos, std::cos, _Tp) OPENCV_HAL_IMPL_MATH_FUNC(v_exp, std::exp, _Tp) OPENCV_HAL_IMPL_MATH_FUNC(v_log, std::log, _Tp) //! @endcond ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #25607: Add a new function that approximates the polygon bounding a convex hull with a certain number of sides\n\n**Code Changes:**\n```diff\n@@ -4060,6 +4060,28 @@ CV_EXPORTS_W void approxPolyDP( InputArray curve,\n OutputArray approxCurve,\n double epsilon, bool closed );\n \n+/** @brief Approximates a polygon with a convex hull with a specified accuracy and number of sides.\n+\n+The cv::approxPolyN function approximates a polygon with a convex hull\n+so that the difference between the contour area of the original contour and the new polygon is minimal.\n+It uses a greedy algorithm for contracting two vertices into one in such a way that the additional area is minimal.\n+Straight lines formed by each edge of the convex contour are drawn and the areas of the resulting triangles are considered.\n```", + "output": "Changes have been added. @asmorkalov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #25607: Add a new function that approximates the polygon bounding a convex hull with a certain number of sides\n\n**Code Changes:**\n```diff\n@@ -4060,6 +4060,28 @@ CV_EXPORTS_W void approxPolyDP( InputArray curve,\n OutputArray approxCurve,\n double epsilon, bool closed );\n \n+/** @brief Approximates a polygon with a convex hull with a specified accuracy and number of sides.\n+\n+The cv::approxPolyN function approximates a polygon with a convex hull\n+so that the difference between the contour area of the original contour and the new polygon is minimal.\n+It uses a greedy algorithm for contracting two vertices into one in such a way that the additional area is minimal.\n+Straight lines formed by each edge of the convex contour are drawn and the areas of the resulting triangles are considered.\n```", + "output": "@asmorkalov, do you have any comments?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #25252: Move API focused C++ samples to snippets - Clean Samples #25006 This PR removes 39 outdated C++ samples from the project, as part of an eff...\n**Review Line:** 2841\n\n**Code Changes:**\n```diff\n An example using the cv::linearPolar and cv::logPolar operations\n */\n \n@@ -2840,7 +2840,7 @@ the destination image will have the given size therefore the area of the boundin\n You can get reverse mapping adding #WARP_INVERSE_MAP to `flags`\n \\snippet polar_transforms.cpp InverseMap\n \n-In addiction, to calculate the original coordinate from a polar mapped coordinate \\f$(rho, phi)->(x, y)\\f$:\n+In addition, to calculate the original coordinate from a polar mapped coordinate \\f$(rho, phi)->(x, y)\\f$:\n \\snippet polar_transforms.cpp InverseCoordinate\n \n```", + "output": "@mshabunin I have added the code block again, without referencing to the sample" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/dft.cpp\n**Change Type:** removed\n**Context:** PR #25252: Move API focused C++ samples to snippets - Clean Samples #25006 This PR removes 39 outdated C++ samples from the project, as part of an eff...\n**Review Line:** 1\n\n**Code Changes:**\n```diff\n@@ -1,86 +0,0 @@\n-#include \"opencv2/core.hpp\"\n-#include \"opencv2/core/utility.hpp\"\n-#include \"opencv2/imgproc.hpp\"\n-#include \"opencv2/imgcodecs.hpp\"\n-#include \"opencv2/highgui.hpp\"\n-\n-#include \n-\n-using namespace cv;\n-using namespace std;\n-\n-static void help(const char ** argv)\n-{\n- printf(\"\\nThis program demonstrated the use of the discrete Fourier transform (dft)\\n\"\n```", + "output": "@asmorkalov So should we change the contents of this sample to the contents of https://github.com/opencv/opencv/blob/96a8e6d76c6351ba61a13d9f01ab7c9cd79f983f/modules/core/include/opencv2/core.hpp#L2216 and move this sample to snippets directory and change its reference to `@example samples/cpp/snippets/dft.cpp` in core.hpp?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/registry.cpp\n**Change Type:** modified\n**Context:** PR #25880: 🐛 Fix CUDA for old GPUs without FP16 support\n\n**Code Changes:**\n```diff\n@@ -18,6 +18,10 @@\n #include \"backend.hpp\"\n #include \"factory.hpp\"\n \n+#ifdef HAVE_CUDA\n+#include \"cuda4dnn/init.hpp\"\n+#endif\n+\n namespace cv {\n namespace dnn {\n```", + "output": "I pushed fix for the issue and extended check to target management. @opencv-alalek could you take a look again?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin.hpp\n**Change Type:** modified\n**Context:** PR #25796: Use `hfloat` instead of `__fp16`.\n**Review Line:** 169\n\n**Code Changes:**\n```diff\n@@ -166,7 +166,7 @@ CV_INTRIN_DEF_TYPE_TRAITS(schar, schar, uchar, uchar, short, int, int);\n CV_INTRIN_DEF_TYPE_TRAITS(ushort, short, ushort, ushort, unsigned, uint64, unsigned);\n CV_INTRIN_DEF_TYPE_TRAITS(short, short, ushort, ushort, int, int64, int);\n #if CV_FP16_TYPE\n-CV_INTRIN_DEF_TYPE_TRAITS(__fp16, short, ushort, __fp16, float, double, float);\n+CV_INTRIN_DEF_TYPE_TRAITS(hfloat, short, ushort, hfloat, float, double, float);\n #endif\n CV_INTRIN_DEF_TYPE_TRAITS_NO_Q_TYPE(unsigned, int, unsigned, unsigned, uint64, unsigned);\n CV_INTRIN_DEF_TYPE_TRAITS_NO_Q_TYPE(int, int, unsigned, unsigned, int64, int);\n@@ -370,7 +370,7 @@ template struct V_RegTraits\n CV_DEF_REG_TRAITS(v, v_uint16x8, ushort, u16, v_uint16x8, v_uint32x4, v_uint64x2, v_int16x8, void);\n```", + "output": "@opencv-alalek , I thought `hfloat` is our platform-dependent wrapper for FP16 data? It would become `__fp16` on ARM and `_Float16` on x86_64 and RISC-V. Intrinsics will use own type internally, but they still need `v_load/v_store` interfaces with `hfloat`. Or did I misunderstand your comment?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/gen2.py\n**Change Type:** modified\n**Context:** PR #25810: python: attempts to fix 3d mat parsing problem for dnn\n\n**Code Changes:**\n```diff\n@@ -488,6 +488,10 @@ def export_name(self):\n return self.name + '_'\n return self.name\n \n+ @property\n+ def nd_mat(self):\n+ return '/ND' in self._modifiers\n+\n @property\n def inputarg(self):\n```", + "output": "Thank you for your reviews. --- @opencv-alalek Could you be more specific? Do you mean I should create a file (.py) and save the array there then `gen2.py` load it from there?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/misc/python/test/test_dnn.py\n**Change Type:** modified\n**Context:** PR #25810: python: attempts to fix 3d mat parsing problem for dnn\n**Review Line:** 501\n\n**Code Changes:**\n```diff\n+ real_output = tuple(np.empty([2, 5, 3], dtype=np.float32))\n+ # Note that it does not support parsing a list , e.g. [\"237\"]\n+ real_output = net.forward(real_output, (\"237\")) # Retval is a tuple with a np.array of shape [2, 5, 3]\n+ normAssert(self, real_output, gold_output, \"Case 3\", getDefaultThreshold(target))\n+\n+ def test_set_param_3d(self):\n+ model_path = self.find_dnn_file('dnn/onnx/models/matmul_3d_init.onnx')\n+ input_file = self.find_dnn_file('dnn/onnx/data/input_matmul_3d_init.npy')\n+ output_file = self.find_dnn_file('dnn/onnx/data/output_matmul_3d_init.npy')\n+\n+ input = np.load(input_file)\n```", + "output": "@asmorkalov A new test for `setParam` is added anyway. Tested without `CV_ND` and it fails." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/segmentation.cpp\n**Change Type:** modified\n**Context:** PR #25756: [BUG FIX] Segmentation sample u2netp model results\n\n**Code Changes:**\n```diff\n@@ -79,7 +79,7 @@ int main(int argc, char **argv)\n // Open file with classes names.\n if (parser.has(\"classes\"))\n {\n- string file = parser.get(\"classes\");\n+ string file = findFile(parser.get(\"classes\"));\n ifstream ifs(file.c_str());\n if (!ifs.is_open())\n CV_Error(Error::StsError, \"File \" + file + \" not found\");\n@@ -92,7 +92,7 @@ int main(int argc, char **argv)\n```", + "output": "@asmorkalov The models.yml preprocessing pararmeters have been fixed. For overlay I have added the solution back but without thresholding. The model does very well in fine areas, the effect is lost with thresholding. This is how it looks now ![hockey](https://github.com/opencv/opencv/assets/25356067/65e51836-b3c9-4ad4-a102-36ef6744a959) ![horse](https://github.com/opencv/opencv/assets/25356067/25193006-eb22-4faa-84a0-eee82018e703)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_jpeg.cpp\n**Change Type:** modified\n**Context:** PR #25809: imgcodecs: Add rgb flag for imread and imdecode\n**Review Line:** 530\n\n**Code Changes:**\n```diff\n+ {\n+ if( cinfo->out_color_components == 3 )\n+ icvCvt_RGB2BGR_8u_C3R( buffer[0], 0, data, 0, Size(m_width,1) );\n+ else\n+ icvCvt_CMYK2BGR_8u_C4C3R( buffer[0], 0, data, 0, Size(m_width,1) );\n+ }\n }\n else\n {\n```", + "output": "Hi @asmorkalov, I just checked it. The `if` branch will decode img as RGB image while the `else` branch will decode image as BGR image." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_png.cpp\n**Change Type:** modified\n**Context:** PR #25809: imgcodecs: Add rgb flag for imread and imdecode\n**Review Line:** 93\n\n**Code Changes:**\n```diff\n+ ASSERT_TRUE(img.channels() == 3);\n+\n+ // pixel is red in RGB\n+ EXPECT_EQ(img.at(0, 0), Vec3b(255, 0, 0));\n+ EXPECT_EQ(img.at(0, 1), Vec3b(255, 0, 0));\n+\n // Fourth Test : Read PNG without alpha, imread flag 1\n img = imread(root + \"readwrite/color_palette_no_alpha.png\", IMREAD_COLOR);\n ASSERT_FALSE(img.empty());\n@@ -91,6 +99,14 @@ TEST(Imgcodecs_Png, read_color_palette_with_alpha)\n // pixel is red in BGR\n```", + "output": "Hi @asmorkalov. For consistency, `IMREAD_COLOR_RGB` currently converts all RGBA to RGB instead of RGBA output." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23613: dnn: refactor reduce\n\n**Code Changes:**\n```diff\n@@ -1176,165 +1176,49 @@ void ONNXImporter::parseGlobalPool(LayerParams &layerParams, const opencv_onnx::\n addLayer(layerParams, node_proto);\n }\n \n-void ONNXImporter::parseReduce(LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto_)\n-{\n- opencv_onnx::NodeProto node_proto = node_proto_;\n- const std::string& layer_type = node_proto.op_type();\n- const std::string output_name = node_proto.output(0);\n- int depth = layerParams.get(\"depth\", CV_32F);\n```", + "output": "Thank you for the notice. Clangd defaults on inserting headers ..." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #22634: dnn: add the CANN backend\n\n**Code Changes:**\n```diff\n@@ -31,6 +31,10 @@ if(HAVE_TIMVX)\n ocv_target_compile_definitions(${the_module} PRIVATE \"HAVE_TIMVX=1\")\n endif()\n \n+if(HAVE_CANN)\n+ ocv_target_compile_definitions(${the_module} PRIVATE \"HAVE_CANN=1\")\n+endif()\n+\n ocv_option(OPENCV_DNN_CUDA \"Build with CUDA support\"\n HAVE_CUDA\n```", + "output": "@alalek I found if `HAVE_CANN=1` is added by `ocv_target_compile_definitions`, `HAVE_CANN` is undefined in dnn/test, leading to the failure of `HAVE_*` (CANN, TIMVX, Webnn, ...). Do we have a fix for this issue?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_common.hpp\n**Change Type:** modified\n**Context:** PR #24231: dnn: cleanup of halide backend for 5.x\n**Review Line:** 140\n\n**Code Changes:**\n```diff\n \n testing::internal::ParamGenerator< tuple > dnnBackendsAndTargets(\n bool withInferenceEngine = true,\n- bool withHalide = false,\n+ bool obsolete_withHalide = false, // this is kept for compatibility\n bool withCpuOCV = true,\n bool withVkCom = true,\n bool withCUDA = true,\n```", + "output": "Sounds good, thanks" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** platforms/android/build_sdk.py\n**Change Type:** modified\n**Context:** PR #25746: Android SDK build script: HWAsan flags added for release mode\n\n**Code Changes:**\n```diff\n@@ -267,10 +267,14 @@ def build_library(self, abi, do_install, no_media_ndk):\n cmake_vars['WITH_ANDROID_MEDIANDK'] = \"OFF\"\n \n if self.hwasan and \"arm64\" in abi.name:\n+ cmake_vars['OPENCV_ENABLE_MEMORY_SANITIZER'] = \"ON\"\n hwasan_flags = \"-fno-omit-frame-pointer -fsanitize=hwaddress\"\n- cmake_vars['CMAKE_CXX_FLAGS_DEBUG'] = hwasan_flags\n- cmake_vars['CMAKE_C_FLAGS_DEBUG'] = hwasan_flags\n- cmake_vars['CMAKE_LINKER_FLAGS_DEBUG'] = hwasan_flags\n+ for s in ['OPENCV_EXTRA_C_FLAGS', 'OPENCV_EXTRA_CXX_FLAGS', 'OPENCV_EXTRA_EXE_LINKER_FLAGS',\n```", + "output": "@opencv-alalek I tuned the PR. Added flags to OPENCV_EXTRA_ and also enabled ASAN related quirks to fix false alarms." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #25146: Reworked findContours to reduce C-API usage - What is done: * rewritten `findContours` and `icvApproximateChainTC89` using C++ data structu...\n\n**Code Changes:**\n```diff\n@@ -4040,6 +4040,19 @@ A program using pyramid scaling, Canny, contours and contour simplification to f\n squares in the input image.\n */\n \n+//! @brief Find contours using link runs algorithm\n+//!\n+//! This function implements an algorithm different from cv::findContours:\n+//! - doesn't allocate temporary image internally, thus it has reduced memory consumption\n+//! - supports CV_8UC1 images only\n+//! 
- outputs 2-level hierarhy only (RETR_CCOMP mode)\n```", + "output": "It was separate mode in `findContours` - CV_LINK_RUNS, but it is very different from other modes (Suzuki algorithm), for example it doesn't require internal image copying and threshold. It also supports only RETR_LIST results output. So I thought that it would be better to extract it to separate function. I don't know if it is useful for anyone or have any bugs, because it was not tested or called anywhere in samples. I've added result verification by comparing it with old implementation output in the `link_runs` test, but I haven't checked it beyond that. I also can not find paper describing this algorithm, maybe @vpisarev could help? I can remove this function if we decide we don't need it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/contours_link.cpp\n**Change Type:** added\n**Context:** PR #25146: Reworked findContours to reduce C-API usage - What is done: * rewritten `findContours` and `icvApproximateChainTC89` using C++ data structu...\n**Review Line:** 255\n\n**Code Changes:**\n```diff\n+ rns[prev_point].link = rns[lower_run].next;\n+ connect_flag = ICV_SINGLE;\n+ lower_run = rns[rns[lower_run].next].next;\n+ continue;\n+ }\n+ rns[rns[lower_run].next] = rns[rns[lower_run].next];\n+ rns[lower_run].link = rns[lower_run].next;\n+\n+ // First point of contour\n+ ext_rns.push_back(lower_run);\n+ lower_run = rns[rns[lower_run].next].next;\n```", + "output": "@mshabunin , why assign it to itself? Is it a bug?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/minmax.cpp\n**Change Type:** modified\n**Context:** PR #25563: Transform offset to indeces for MatND in minMaxIdx HAL\n\n**Code Changes:**\n```diff\n@@ -1510,18 +1510,29 @@ void cv::minMaxIdx(InputArray _src, double* minVal,\n \n Mat src = _src.getMat(), mask = _mask.getMat();\n \n- int _minIdx, _maxIdx;\n- int* min_offset = (cn == 1) ? minIdx : &_minIdx;\n- int* max_offset = (cn == 1) ? maxIdx : &_maxIdx;\n if (src.dims <= 2)\n {\n- CALL_HAL(minMaxIdx, cv_hal_minMaxIdx, src.data, src.step, src.cols*cn, src.rows, src.depth(),\n```", + "output": "Thanks a lot! Makes sense." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/minmax.cpp\n**Change Type:** modified\n**Context:** PR #25563: Transform offset to indeces for MatND in minMaxIdx HAL\n**Review Line:** 1526\n\n**Code Changes:**\n```diff\n+ minVal, maxVal, minIdx, maxIdx, mask.data);\n+\n+ if (res == CV_HAL_ERROR_OK)\n+ {\n+ if (minIdx)\n+ ofs2idx(src, minIdx[0], minIdx);\n+ if (maxIdx)\n+ ofs2idx(src, maxIdx[0], maxIdx);\n+ return;\n+ }\n+ else if (res != CV_HAL_ERROR_NOT_IMPLEMENTED)\n```", + "output": "@asmorkalov I've done some more testing and I believe it should be `minIdx[1]` and `maxIdx[1]`. I _thought_ that minIdx[0] would correspond to the X direction, but it seems that in OpenCV it corresponds to the row. According do the [docs](https://docs.opencv.org/4.x/d2/de8/group__core__array.html#ga7622c466c628a75d9ed008b42250a73f): > single-row matrix is 1xN matrix (and therefore minIdx/maxIdx will be (0,j1)/(0,j2)) If I change these indices from 0 to 1 then my tests pass." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/minmax.cpp\n**Change Type:** modified\n**Context:** PR #25563: Transform offset to indeces for MatND in minMaxIdx HAL\n**Review Line:** 1526\n\n**Code Changes:**\n```diff\n+ minVal, maxVal, minIdx, maxIdx, mask.data);\n+\n+ if (res == CV_HAL_ERROR_OK)\n+ {\n+ if (minIdx)\n+ ofs2idx(src, minIdx[0], minIdx);\n+ if (maxIdx)\n+ ofs2idx(src, maxIdx[0], maxIdx);\n+ return;\n+ }\n+ else if (res != CV_HAL_ERROR_NOT_IMPLEMENTED)\n```", + "output": "@mplatings It's not clear from the docs if `minIdx` and `maxIdx` should contain pointers to an offset to convert it to array indices or to indices themselves. In the first case (which is how OpenCV tab functions from `getMinmaxTab()` actually work) there should be 0th index and no such thing as `minIdx[i]` or `maxIdx[i]` is ever possible for every `i > 0`. In the second case you're right since HAL call gets 1-dimensional array and puts Y index to `minIdx[0]` and X to `minIdx[1]` (the same for `maxIdx`). Since it's just one dimensional, Y index should always be zero which looks impractical to me but looks like this is how it is implemented now. I think we should choose between these two options first. By the way, can you tell me what tests are failing? Looks like OpenCV tests do not cover that case." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindLAPACK.cmake\n**Change Type:** modified\n**Context:** PR #25625: core: deployment compatibility for old mac after Accelerate New LAPACK fix\n\n**Code Changes:**\n```diff\n@@ -1,3 +1,26 @@\n+if(CMAKE_SYSTEM_NAME STREQUAL \"Darwin\")\n+ set(_apple_device_min_target_os_version \"13.3\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"iOS\")\n+ set(_apple_device_min_target_os_version \"16.4\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"watchOS\")\n+ set(_apple_device_min_target_os_version \"9.4\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"tvOS\")\n+ set(_apple_device_min_target_os_version \"16.4\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"visionOS\")\n```", + "output": "Thank you for pointing out the problem." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindLAPACK.cmake\n**Change Type:** modified\n**Context:** PR #25625: core: deployment compatibility for old mac after Accelerate New LAPACK fix\n\n**Code Changes:**\n```diff\n@@ -1,3 +1,26 @@\n+if(CMAKE_SYSTEM_NAME STREQUAL \"Darwin\")\n+ set(_apple_device_min_target_os_version \"13.3\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"iOS\")\n+ set(_apple_device_min_target_os_version \"16.4\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"watchOS\")\n+ set(_apple_device_min_target_os_version \"9.4\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"tvOS\")\n+ set(_apple_device_min_target_os_version \"16.4\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"visionOS\")\n```", + "output": "@asmorkalov What do you think?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindLAPACK.cmake\n**Change Type:** modified\n**Context:** PR #25625: core: deployment compatibility for old mac after Accelerate New LAPACK fix\n\n**Code Changes:**\n```diff\n@@ -1,3 +1,26 @@\n+if(CMAKE_SYSTEM_NAME STREQUAL \"Darwin\")\n+ set(_apple_device_min_target_os_version \"13.3\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"iOS\")\n+ set(_apple_device_min_target_os_version \"16.4\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"watchOS\")\n+ set(_apple_device_min_target_os_version \"9.4\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"tvOS\")\n+ set(_apple_device_min_target_os_version \"16.4\")\n+elseif(CMAKE_SYSTEM_NAME STREQUAL \"visionOS\")\n```", + "output": "Friendly ping. (Our build is broken because of this...) @fengyuentau maybe it would be good to squash your two commits?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/calibration_handeye.cpp\n**Change Type:** modified\n**Context:** PR #24897: Fix handeye\n**Review Line:** 317\n\n**Code Changes:**\n```diff\n+ // equation 12, and the remaining 112 all had Pcij or Pgij with norms greater than 1.7. Although errors\n+ // from small rotations were not observed, it is known that these motions are less informative (see\n+ // section II.B.3, and figure 6).\n+ double Pgij_norm = cv::norm(Pgij);\n+ double Pcij_norm = cv::norm(Pcij);\n+ if (Pgij_norm < 0.3 || Pcij_norm < 0.3 || Pgij_norm > 1.7 || Pcij_norm > 1.7) {\n+ continue;\n+ }\n+\n+ vec_Hgij.push_back(Hgij);\n+ vec_Hcij.push_back(Hcij);\n```", + "output": "Thanks for the explanation. Could also be added as comments in the code in my opinion. OK, no need to add the debug message." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindLAPACK.cmake\n**Change Type:** modified\n**Context:** PR #24804: core: try to solve warnings caused by Apple's new LAPACK interface\n**Review Line:** 114\n\n**Code Changes:**\n```diff\n+ set(LAPACK_TRY_COMPILE_DEF \"\")\n+ if(LAPACK_IMPL STREQUAL \"LAPACK/Apple\" AND NOT IOS) # https://github.com/opencv/opencv/issues/24660\n+ # Get macOS version\n+ execute_process(COMMAND sw_vers -productVersion\n+ OUTPUT_VARIABLE MACOS_VERSION\n+ OUTPUT_STRIP_TRAILING_WHITESPACE)\n+ # Enable Accelerate New LAPACK if macOS >= 13.3\n+ if (MACOS_VERSION VERSION_GREATER \"13.3\" OR MACOS_VERSION VERSION_EQUAL \"13.3\")\n+ set(LAPACK_TRY_COMPILE_DEF \"-DACCELERATE_NEW_LAPACK\")\n+ add_compile_definitions(ACCELERATE_NEW_LAPACK)\n+ add_compile_definitions(ACCELERATE_LAPACK_ILP64)\n```", + "output": "Let me take care of this. cc @asmorkalov @opencv-alalek" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/models.yml\n**Change Type:** modified\n**Context:** PR #25559: Improved segmentation sample\n**Review Line:** 250\n\n**Code Changes:**\n```diff\n- scale: 1.0\n+ url: \"https://github.com/onnx/models/raw/491ce05590abb7551d7fae43c067c060eeb575a6/validated/vision/object_detection_segmentation/fcn/model/fcn-resnet50-12.onnx\"\n+ sha1: \"1bb0c7e0034038969aecc6251166f1612a139230\"\n+ model: \"fcn-resnet50-12.onnx\"\n+ mean: [103.5, 116.2, 123.6]\n+ scale: 0.019\n width: 500\n height: 500\n rgb: false\n@@ -265,3 +264,15 @@ fcnresnet101:\n height: 500\n```", + "output": "@asmorkalov It is there in line 254 [https://github.com/gursimarsingh/opencv/blob/6314b2cb5c3e2c9c25be8382bfaa82b08acf4c86/samples/dnn/models.yml#L254](#254)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/tile_layer.cpp\n**Change Type:** modified\n**Context:** PR #25409: 0/1D test for tile layer\n**Review Line:** 106\n\n**Code Changes:**\n```diff\n- tmp = cv::repeat(tmp, 1, rep_i);\n+ tmp = cv::repeat(tmp, repeats[i], 1);\n+ dims *= out_shape[i];\n }\n- dims *= out_shape[i];\n+ tmp = tmp.reshape(0, out_shape);\n }\n- tmp = tmp.reshape(0, out_shape);\n-\n tmp.copyTo(out);\n }\n```", + "output": "may be. @dkurt what do you think?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/highgui/test/test_gui.cpp\n**Change Type:** modified\n**Context:** PR #25561: highgui: wayland: expand image width if title bar cannot be shown\n\n**Code Changes:**\n```diff\n@@ -205,6 +205,21 @@ TEST(Highgui_GUI, trackbar)\n EXPECT_NO_THROW(destroyAllWindows());\n }\n \n+// See https://github.com/opencv/opencv/issues/25560\n+#if !defined(ENABLE_PLUGINS)\n+TEST(Highgui_GUI, DISABLED_small_width_image)\n+#else\n+TEST(Highgui_GUI, small_width_image)\n+#endif\n```", + "output": "Thank you for your review. I had tried with `waitKey(1)`, however sometimes an exception might slip through. So I used a large waiting time. From what I've tried 10 times, waitKey(10) seems sufficient. I updated the waitKey() argument from 1000ms to 10ms." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/highgui/test/test_gui.cpp\n**Change Type:** modified\n**Context:** PR #25551: highgui: wayland: fix to pass highgui test\n\n**Code Changes:**\n```diff\n@@ -48,6 +48,16 @@ inline void verify_size(const std::string &nm, const cv::Mat &img)\n {\n EXPECT_NO_THROW(imshow(nm, img));\n EXPECT_EQ(-1, waitKey(200));\n+\n+ // see https://github.com/opencv/opencv/issues/25550\n+ // Wayland backend is not supported getWindowImageRect().\n+ string framework;\n+ EXPECT_NO_THROW(framework = currentUIFramework());\n+ if(framework == \"WAYLAND\")\n```", + "output": "Thank you for your review, I fixed it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/highgui/src/window_wayland.cpp\n**Change Type:** modified\n**Context:** PR #25551: highgui: wayland: fix to pass highgui test\n\n**Code Changes:**\n```diff\n@@ -84,20 +84,73 @@ static int xkb_keysym_to_ascii(xkb_keysym_t keysym) {\n return static_cast(keysym & 0xff);\n }\n \n+static void write_mat_to_xrgb8888(cv::Mat const &img_, void *data) {\n+ // Validate destination data.\n+ CV_CheckFalse((data == nullptr), \"Destination Address must not be nullptr.\");\n+\n+ // Validate source img parameters.\n+ CV_CheckFalse(img_.empty(), \"Source Mat must not be empty.\");\n```", + "output": "`dst` is a Mat whose memory had been pre-allocated, so I was aiming for more safety. But there is no problem working with copyTo(). Thank you for your comment!!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_intrin_utils.hpp\n**Change Type:** modified\n**Context:** PR #25196: core: add universal intrinsics for fp16\n\n**Code Changes:**\n```diff\n@@ -182,6 +182,13 @@ template<> inline void EXPECT_COMPARE_EQ_(const double a, const double b\n EXPECT_DOUBLE_EQ( a, b );\n }\n \n+#if CV_SIMD_FP16\n+template<> inline void EXPECT_COMPARE_EQ_<__fp16>(const __fp16 a, const __fp16 b)\n+{\n+ EXPECT_LT(std::abs(float(a - b)), 0.126);\n+}\n+#endif\n```", + "output": "I found that this error of wrong results is because `EXPECT_EQ(..., resC[i])` gets 0 from `resC[i]` while it actually has a non-zero value. Maybe `EXPECT_EQ` has problems with fp16? Do you have any insight on this problem? 
@vpisarev @opencv-alalek @asmorkalov Helpful printings from `test_div()`: ``` i = 0, a = 1.000000, b = 8.000000, c = 0.125000 i = 1, a = 2.000000, b = 7.000000, c = 0.285645 /workspace/fp16_bf16_arithm/opencv/modules/core/test/test_intrin_utils.hpp:552: Failure Expected equality of these values: dataA[i] / dataB[i] Which is: 0.285714 resC[i] Which is: 0 Google Test trace: /workspace/fp16_bf16_arithm/opencv/modules/core/test/test_intrin_utils.hpp:550: i=1 i = 2, a = 3.000000, b = 6.000000, c = 0.500000 i = 3, a = 4.000000, b = 5.000000, c = 0.799805 /workspace/fp16_bf16_arithm/opencv/modules/core/test/test_intrin_utils.hpp:552: Failure Expected equality of these values: dataA[i] / dataB[i] Which is: 0.8 resC[i] Which is: 0 Google Test trace: /workspace/fp16_bf16_arithm/opencv/modules/core/test/test_intrin_utils.hpp:550: i=3 i = 4, a = 5.000000, b = 4.000000, c = 1.250000 i = 5, a = 6.000000, b = 3.000000, c = 2.000000 i = 6, a = 7.000000, b = 2.000000, c = 3.500000 i = 7, a = 8.000000, b = 1.000000, c = 8.000000 ``` Get printings from `test_div()`: ```.diff diff --git a/modules/core/test/test_intrin_utils.hpp b/modules/core/test/test_intrin_utils.hpp index b7f919f6af..b8b37fee53 100644 --- a/modules/core/test/test_intrin_utils.hpp +++ b/modules/core/test/test_intrin_utils.hpp @@ -548,6 +548,7 @@ template struct TheTest for (int i = 0; i < VTraits::vlanes(); ++i) { SCOPED_TRACE(cv::format(\"i=%d\", i)); + printf(\"i = %d, a = %f, b = %f, c = %f\\n\", i, dataA[i], dataB[i], resC[i]); EXPECT_EQ(dataA[i] / dataB[i], resC[i]); } @@ -2038,7 +2039,7 @@ void test_hal_intrin_float16() .test_interleave() .test_addsub() .test_mul() - // .test_div() // wrong results + .test_div() // wrong results // .test_abs() // compile error .test_cmp() .test_sqrt_abs() ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/misc/python/test/test_dnn.py\n**Change Type:** modified\n**Context:** PR #25463: Change opencv_face_detector related tests and samples from caffe to onnx\n\n**Code Changes:**\n```diff\n@@ -286,41 +286,42 @@ def test_textdetection_model(self):\n \n \n def test_face_detection(self):\n- proto = self.find_dnn_file('dnn/opencv_face_detector.prototxt')\n- model = self.find_dnn_file('dnn/opencv_face_detector.caffemodel', required=False)\n- if proto is None or model is None:\n- raise unittest.SkipTest(\"Missing DNN test files (dnn/opencv_face_detector.{prototxt/caffemodel}). Verify OPENCV_DNN_TEST_DATA_PATH configuration parameter.\")\n-\n+ model = self.find_dnn_file('dnn/onnx/models/yunet-202303.onnx', required=False)\n```", + "output": "Thanks, that's a mistake." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #25435: change fcn8s-heavy-pascal tests from caffe to onnx\n\n**Code Changes:**\n```diff\n@@ -1444,6 +1444,11 @@ CV__DNN_INLINE_NS_BEGIN\n */\n CV_WRAP Model& setInputSwapRB(bool swapRB);\n \n+ /** @brief Set output names for frame.\n+ * @param[in] outNames Names for output layers.\n+ */\n+ CV_WRAP Model& setOutputNames(const std::vector& outNames);\n+\n /** @brief Set preprocessing parameters for frame.\n```", + "output": "@vpisarev @opencv-alalek @dku What do you think about the API change? Should we port it to 4.x?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_obsensor/obsensor_uvc_stream_channel.cpp\n**Change Type:** modified\n**Context:** PR #25523: videoio: obsensor: remove OB_EXT_CMD10 to suppress warning\n**Review Line:** 48\n\n**Code Changes:**\n```diff\n@@ -45,7 +45,6 @@ const uint8_t OB_EXT_CMD6[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0x7c, 0x00\n const uint8_t OB_EXT_CMD7[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0xfe, 0x12, 0x55, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00 };\n const uint8_t OB_EXT_CMD8[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0xfe, 0x13, 0x3f, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00 };\n const uint8_t OB_EXT_CMD9[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0xfa, 0x13, 0x4b, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00 };\n-const uint8_t OB_EXT_CMD10[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0xfa, 0x13, 0x3f, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00 };\n const uint8_t OB_EXT_CMD11[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0xfe, 0x13, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };\n const uint8_t OB_EXT_CMD12[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0xfe, 0x13, 0x3f, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 };\n const uint8_t OB_EXT_CMD13[16] = { 0x47, 0x4d, 0x04, 0x00, 0x02, 0x00, 0xfa, 0x13, 0x4b, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };\n```", + "output": "Thank you for your review! At first, I kept CMD10 reserved so that the CMD10 identifier would not be reused for another array in the future. However, it didn't seem to cause any problems, so I deleted it completely." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/app/highgui_wayland_ubuntu.markdown\n**Change Type:** added\n**Context:** PR #25496: highgui: wayland: show \"NO\" status if dependency is missing\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,106 @@\n+Using Wayland highgui-backend in Ubuntu {#tutorial_wayland_ubuntu}\n+=======================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_intelperc}\n+\n+| | |\n+| -: | :- |\n```", + "output": "Thank you for your review! I fixed it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/app/highgui_wayland_ubuntu.markdown\n**Change Type:** added\n**Context:** PR #25496: highgui: wayland: show \"NO\" status if dependency is missing\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,106 @@\n+Using Wayland highgui-backend in Ubuntu {#tutorial_wayland_ubuntu}\n+=======================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_intelperc}\n+\n+| | |\n+| -: | :- |\n```", + "output": "Yes, I agree that a gray image is not suitable for testing, thank you! And there are some negative opinions about using the lena image. Therefore, we use opencv-logo.png as an alternative." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/app/highgui_wayland_ubuntu.markdown\n**Change Type:** added\n**Context:** PR #25496: highgui: wayland: show \"NO\" status if dependency is missing\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,106 @@\n+Using Wayland highgui-backend in Ubuntu {#tutorial_wayland_ubuntu}\n+=======================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_intelperc}\n+\n+| | |\n+| -: | :- |\n```", + "output": "Thank you for your point, I fixed it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/app/highgui_wayland_ubuntu.markdown\n**Change Type:** added\n**Context:** PR #25496: highgui: wayland: show \"NO\" status if dependency is missing\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,106 @@\n+Using Wayland highgui-backend in Ubuntu {#tutorial_wayland_ubuntu}\n+=======================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_intelperc}\n+\n+| | |\n+| -: | :- |\n```", + "output": "Yes, I fixed it, thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/js_tutorials/js_setup/js_nodejs/js_nodejs.markdown\n**Change Type:** modified\n**Context:** PR #25324: Fix broken js build after moving HaarCascades to contrib - The HaarCascades related are not completely cleaned up #25311 after #25198, whic...\n**Review Line:** 229\n\n**Code Changes:**\n```diff\n@@ -227,52 +227,128 @@ In the browser, this filesystem is emulated in memory while in Node.js there's a\n \n ### The example ###\n \n-The following is an adaptation of @ref tutorial_js_face_detection.\n-\n @code{.js}\n const { Canvas, createCanvas, Image, ImageData, loadImage } = require('canvas');\n const { JSDOM } = require('jsdom');\n```", + "output": "Correct. I tried to replace it with the DNN-based FaceDetectorYN. However, FaceDetectorYN isn't exposed to JS, so the post-processing part would have to be rewritten in JS. This appears somewhat redundant; @WanliZhong suggests that FaceDetectorYN be exposed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/perf/perf_math.cpp\n**Change Type:** modified\n**Context:** PR #25450: Perf tests for SVD and solve() created\n\n**Code Changes:**\n```diff\n@@ -36,6 +36,234 @@ PERF_TEST_P(VectorLength, phase64f, testing::Values(128, 1000, 128*1024, 512*102\n SANITY_CHECK(angle, 5e-5);\n }\n \n+// generates random vectors, performs Gram-Schmidt orthogonalization on them\n+Mat randomOrtho(int rows, int ftype, RNG& rng)\n+{\n+ Mat result(rows, rows, ftype);\n+ rng.fill(result, RNG::UNIFORM, cv::Scalar(-1), cv::Scalar(1));\n+\n```", + "output": "@opencv-alalek @vpisarev What do you think about the proposed test? It runs the solver with different input on each iteration." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/pull/24938/commits/c18bd9758e95415029051263902c6b209eb4cebd But we'll have to bring this change back in a future PR. The demo won't work without it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! 
https://github.com/opencv/opencv/pull/24938/commits/c18bd9758e95415029051263902c6b209eb4cebd" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/pull/24938/commits/c18bd9758e95415029051263902c6b209eb4cebd" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/pull/24938/commits/c18bd9758e95415029051263902c6b209eb4cebd" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! 
https://github.com/opencv/opencv/pull/24938/commits/c18bd9758e95415029051263902c6b209eb4cebd" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/pull/24938/commits/c18bd9758e95415029051263902c6b209eb4cebd" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/pull/24938/commits/c18bd9758e95415029051263902c6b209eb4cebd" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Removed, thanks! 
https://github.com/opencv/opencv/pull/24938/commits/c18bd9758e95415029051263902c6b209eb4cebd, https://github.com/opencv/opencv/pull/24938/commits/0b76bfa50483a08dceda76a69cd59f23ef07bd00" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/24e5ceb635e0f70b6b5302f077bab9d5c315a12b" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n**Review Line:** 154\n\n**Code Changes:**\n```diff\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n }\n }\n \n+cv::Mat wrapOV(const cv::MediaFrame::View& view,\n+ const cv::GFrameDesc& desc) {\n+ cv::Mat out;\n+ switch (desc.fmt) {\n+ case cv::MediaFormat::BGR: {\n+ out = cv::Mat(desc.size, CV_8UC3, view.ptr[0], view.stride[0]);\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/24e5ceb635e0f70b6b5302f077bab9d5c315a12b" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "I just moved it a bit before. Reverted, thanks! https://github.com/opencv/opencv/pull/24938/commits/24e5ceb635e0f70b6b5302f077bab9d5c315a12b" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/24e5ceb635e0f70b6b5302f077bab9d5c315a12b" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "I just moved it a bit before (and git diff showed it as a deleted method and a new one). Reverted, thanks! 
https://github.com/opencv/opencv/commit/24e5ceb635e0f70b6b5302f077bab9d5c315a12b" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/commit/24e5ceb635e0f70b6b5302f077bab9d5c315a12b" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/commit/24e5ceb635e0f70b6b5302f077bab9d5c315a12b" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/commit/24e5ceb635e0f70b6b5302f077bab9d5c315a12b" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/test/infer/gapi_infer_ov_tests.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n**Review Line:** 446\n\n**Code Changes:**\n```diff\n+ comp.apply(cv::gin(frame, roi), cv::gout(m_out_gapi_age, m_out_gapi_gender),\n+ cv::compile_args(cv::gapi::networks(pp)));\n+\n+ validate();\n+}\n+\n+class TestMediaNV12 final: public cv::MediaFrame::IAdapter {\n+ cv::Mat m_y;\n+ cv::Mat m_uv;\n+\n+public:\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/commit/24e5ceb635e0f70b6b5302f077bab9d5c315a12b" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "It was removed. Thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n**Review Line:** 135\n\n**Code Changes:**\n```diff\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n }\n }\n \n+cv::Mat wrapOV(const cv::MediaFrame::View& view,\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Removed, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n**Review Line:** 814\n\n**Code Changes:**\n```diff\n+ default:\n+ GAPI_Assert(\"Unsupported input shape for OV backend\");\n+ }\n+}\n+\n static bool isImage(const cv::GMatDesc &desc,\n const ::ov::Shape &model_shape) {\n return (model_shape.size() == 4u) &&\n@@ -739,6 +820,16 @@ static bool isImage(const cv::GMatDesc &desc,\n (desc.depth == CV_8U);\n }\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n**Review Line:** 734\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n }\n }\n \n+cv::Mat wrapOV(const cv::MediaFrame::View& view,\n+ const cv::GFrameDesc& desc) {\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "No, it's not. Removed, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Fixed, thanks! 
https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n**Review Line:** 1349\n\n**Code Changes:**\n```diff\n \n const auto &input_name_0 = uu.params.input_names.front();\n const auto &mm_0 = in_metas[0u];\n- const auto &matdesc = cv::util::get(mm_0);\n+\n+ if (!(cv::util::holds_alternative(mm_0) ||\n+ cv::util::holds_alternative(mm_0))) {\n+ util::throw_error(std::runtime_error(\n+ \"OV Backend: Unsupported input meta\"\n+ \" for 0th argument in OV backend\"));\n+ }\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/833bd890ef003a7c161f3a91c61b9c8d81df0e55" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/test/infer/gapi_infer_ov_tests.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -319,8 +319,174 @@ struct TestAgeGenderListOV : public BaseAgeGenderOV {\n }\n };\n \n+class TestMediaBGR final: public cv::MediaFrame::IAdapter {\n+ cv::Mat m_mat;\n+ using Cb = cv::MediaFrame::View::Callback;\n+ Cb m_cb;\n+\n+public:\n```", + "output": "Fixed, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! https://github.com/opencv/opencv/pull/24938/commits/2f249a8acab382376637a71aaba2940544eebd45" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Removed, thanks! https://github.com/opencv/opencv/pull/24938/commits/2f249a8acab382376637a71aaba2940544eebd45" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Done, thanks! 
https://github.com/opencv/opencv/pull/24938/commits/2f249a8acab382376637a71aaba2940544eebd45" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #24938: G-API OV backend requires cv::MediaFrame\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ static int toCV(const ov::element::Type &type) {\n static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n const auto total = mat.total() * mat.channels();\n if (toCV(tensor.get_element_type()) != mat.depth() ||\n- tensor.get_size() != total ) {\n+ tensor.get_size() != total) {\n std::stringstream ss;\n ss << \"Failed to copy data from ov::Tensor to cv::Mat.\"\n << \" Data type or number of elements mismatch.\"\n@@ -151,6 +151,30 @@ static void copyFromOV(const ov::Tensor &tensor, cv::Mat &mat) {\n```", + "output": "Fixed, thanks! https://github.com/opencv/opencv/pull/24938/commits/2f249a8acab382376637a71aaba2940544eebd45" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindIPP.cmake\n**Change Type:** modified\n**Context:** PR #25317: fix for IPP 2021.10 with OneAPI 2024\n**Review Line:** 87\n\n**Code Changes:**\n```diff\n@@ -84,7 +84,7 @@ endmacro()\n # This macro uses IPP_ROOT_DIR variable\n # TODO Cleanup code after ICV package stabilization\n macro(ipp_detect_version)\n- set(IPP_INCLUDE_DIRS ${IPP_ROOT_DIR}/include)\n+ get_filename_component(IPP_INCLUDE_DIRS ${IPP_VERSION_FILE} PATH)\n \n set(__msg)\n if(EXISTS ${IPP_ROOT_DIR}/include/ippicv_redefs.h)\n@@ -271,7 +271,9 @@ if(NOT DEFINED IPPROOT)\n endif()\n```", + "output": "Thank you for the information! 
Perhaps we should not change `IPP_INCLUDE_DIRS` to the nested directory. Or add both locations. I see these IPP includes in OpenCV: ``` ./modules/core/include/opencv2/core/private.hpp:220:#include \"ippversion.h\" ./modules/core/include/opencv2/core/private.hpp:230:#include \"ippicv.h\" ./modules/core/include/opencv2/core/private.hpp:232:#include \"ipp.h\" ``` /cc @eplankin" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_layers_1d.cpp\n**Change Type:** modified\n**Context:** PR #25231: Constant layer 0/1D test.\n\n**Code Changes:**\n```diff\n@@ -603,4 +603,34 @@ INSTANTIATE_TEST_CASE_P(/*nothting*/, Layer_FullyConnected_Test,\n std::vector({4})\n ));\n \n+typedef testing::TestWithParam>> Layer_Const_Test;\n+TEST_P(Layer_Const_Test, Accuracy_01D)\n+{\n+ std::vector input_shape = get<0>(GetParam());\n+\n+ LayerParams lp;\n```", + "output": "Got it, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_layers_1d.cpp\n**Change Type:** modified\n**Context:** PR #25231: Constant layer 0/1D test.\n**Review Line:** 626\n\n**Code Changes:**\n```diff\n+\n+ std::vector inputs; // No inputs are needed for a ConstLayer\n+ std::vector outputs;\n+ runLayer(layer, inputs, outputs);\n+ ASSERT_EQ(outputs.size(), 1);\n+ ASSERT_EQ(shape(output_ref), shape(outputs[0]));\n+ normAssert(output_ref, outputs[0]);\n+}\n+INSTANTIATE_TEST_CASE_P(/*nothing*/, Layer_Const_Test, testing::Values(\n+ std::vector({}),\n+ std::vector({1}),\n```", + "output": "Unresolved comment @Abdurrahheem" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/geometry.cpp\n**Change Type:** added\n**Context:** PR #25304: geometry cpp sample combining other shape detection samples\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,268 @@\n+/*******************************************************************************\n+ *\n+ * This program demonstrates various shape fitting techniques using OpenCV.\n+ * It reads an image, applies binary thresholding, and then detects contours.\n+ *\n+ * For each contour, it fits and draws several geometric shapes including\n+ * convex hulls, minimum enclosing circles, rectangles, triangles, and ellipses\n+ * using different fitting methods:\n+ * 1: OpenCV's original method fitEllipse which implements Fitzgibbon 1995 method.\n```", + "output": "@asmorkalov do you mean range check on number of input arguments or valid inputs of ellipse_method?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** samples/cpp/geometry.cpp\n**Change Type:** added\n**Context:** PR #25304: geometry cpp sample combining other shape detection samples\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,268 @@\n+/*******************************************************************************\n+ *\n+ * This program demonstrates various shape fitting techniques using OpenCV.\n+ * It reads an image, applies binary thresholding, and then detects contours.\n+ *\n+ * For each contour, it fits and draws several geometric shapes including\n+ * convex hulls, minimum enclosing circles, rectangles, triangles, and ellipses\n+ * using different fitting methods:\n+ * 1: OpenCV's original method fitEllipse which implements Fitzgibbon 1995 method.\n```",
+    "output": "@asmorkalov moved ellipseMethod to hot keys"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/3d/src/pointcloud/load_point_cloud.cpp\n**Change Type:** modified\n**Context:** PR #25382: Fix mesh loading for texture coordinates and face indices\n\n**Code Changes:**\n```diff\n@@ -140,7 +140,7 @@ void loadMesh(const String &filename, OutputArray vertices, OutputArrayOfArrays\n std::vector> vec_indices;\n \n std::vector vec_texCoords;\n- int nTexCoords;\n+ int nTexCoords = 0;\n \n decoder->readData(vec_vertices, vec_normals, vec_rgb, vec_texCoords, nTexCoords, vec_indices, 0);\n \n@@ -210,32 +210,30 @@ void loadMesh(const String &filename, OutputArray vertices, OutputArrayOfArrays\n```",
+    "output": "Sure, I forgot about this function. It helped me make my code cleaner in fact, thank you"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/core/test/test_io.cpp\n**Change Type:** modified\n**Context:** PR #25351: core: persistence: output reals as human-friendly expression.\n**Review Line:** 1973\n\n**Code Changes:**\n```diff\n+}\n+\n+TEST_P(Core_InputOutput_regression_25073, my_float16)\n+{\n+ cv::String res = \"\";\n+ cv::float16_t my_float16(0.5);\n+\n+ FileStorage fs( GetParam(), cv::FileStorage::WRITE | cv::FileStorage::MEMORY);\n+ EXPECT_NO_THROW( fs << \"my_float16\" << my_float16 );\n+ EXPECT_NO_THROW( fs << \"my_int\" << 5 );\n+ EXPECT_NO_THROW( res = fs.releaseAndGetString() );\n```",
+    "output": "Thank you for your comment, I create pull request https://github.com/opencv/opencv/pull/25391 to use hfloat instead of float16_t."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #25241: Added int support to padding layer\n\n**Code Changes:**\n```diff\n@@ -2407,8 +2407,16 @@ void ONNXImporter::parsePad(LayerParams& layerParams, const opencv_onnx::NodePro\n if (node_proto.input_size() == 3 && !node_proto.input(2).empty())\n {\n Mat value = getBlob(node_proto, 2);\n- float padValue = (depth == CV_8S) ? (float)value.ptr()[0] : value.ptr()[0];\n- layerParams.set(\"value\", padValue);\n+ double padValue = 0;\n+ switch(value.depth())\n+ {\n+ case CV_32F: padValue = value.ptr()[0]; break;\n```",
+    "output": "Got it, thanks. Missed that step above: ```cpp int inp_zp = (int)getScalarFromMat(getBlob(node_proto, 2)); ```"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/src/layers/padding_layer.cpp\n**Change Type:** modified\n**Context:** PR #25241: Added int support to padding layer\n\n**Code Changes:**\n```diff\n@@ -34,7 +34,7 @@ class PaddingLayerImpl CV_FINAL : public PaddingLayer\n PaddingLayerImpl(const LayerParams &params)\n {\n setParamsFrom(params);\n- paddingValue = params.get(\"value\", 0);\n+ paddingValue = params.get(\"value\", 0);\n inputDims = params.get(\"input_dims\", -1);\n paddingType = params.get(\"type\", \"constant\");\n \n@@ -70,6 +70,23 @@ class PaddingLayerImpl CV_FINAL : public PaddingLayer\n```",
+    "output": "@dkurt Could you verify this?"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/core/include/opencv2/core.hpp\n**Change Type:** modified\n**Context:** PR #25356: core: doc: add note for countNonZero, hasNonZero and findNonZero\n\n**Code Changes:**\n```diff\n@@ -248,7 +248,7 @@ CV_EXPORTS void swap( UMat& a, UMat& b );\n The function computes and returns the coordinate of a donor pixel corresponding to the specified\n extrapolated pixel when using the specified extrapolation border mode. For example, if you use\n cv::BORDER_WRAP mode in the horizontal direction, cv::BORDER_REFLECT_101 in the vertical direction and\n-want to compute value of the \"virtual\" pixel Point(-5, 100) in a floating-point image img , it\n+want to compute value of the \"virtual\" pixel Point(-5, 100) in a floating-point image img, it\n looks like:\n @code{.cpp}\n float val = img.at(borderInterpolate(100, img.rows, cv::BORDER_REFLECT_101),\n@@ -259,7 +259,7 @@ copyMakeBorder.\n```",
+    "output": "Thank you for your comment, these lines comes from other functions (e.g. normalize, minMaxIdx). If *NonZero() functions are need to fix, I think they should be fixed too. https://github.com/opencv/opencv/blob/e80500828c2f84ce6b46fe1e41744a32ed1738e2/modules/core/include/opencv2/core.hpp#L831-L834"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/3d/include/opencv2/3d.hpp\n**Change Type:** modified\n**Context:** PR #25041: (5.x) Merge 4.x - OpenCV Contrib: https://github.com/opencv/opencv_contrib/pull/3637 OpenCV Extra: https://github.com/opencv/opencv_extra/p...\n**Review Line:** 2609\n\n**Code Changes:**\n```diff\n InputArray K, InputArray D, InputArray Knew = cv::noArray(), const Size& new_size = Size());\n \n+/**\n+@brief Finds an object pose from 3D-2D point correspondences for fisheye camera moodel.\n+\n+@param objectPoints Array of object points in the object coordinate space, Nx3 1-channel or\n+1xN/Nx1 3-channel, where N is the number of points. vector\\ can be also passed here.\n+@param imagePoints Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel,\n+where N is the number of points. vector\\ can be also passed here.\n+@param cameraMatrix Input camera intrinsic matrix \\f$\\cameramatrix{A}\\f$ .\n+@param distCoeffs Input vector of distortion coefficients (4x1/1x4).\n```",
+    "output": "Keep empty line after `@brief`. Removal is unnecessary change against 4.x: https://github.com/opencv/opencv/blob/12aa0fe89819c02db53677c5749771a1007f623e/modules/calib3d/include/opencv2/calib3d.hpp#L4059-L4061"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/imgcodecs/test/test_jpeg.cpp\n**Change Type:** modified\n**Context:** PR #25280: imgcodecs: jpeg: re-support to read CMYK Jpeg\n\n**Code Changes:**\n```diff\n@@ -178,6 +178,44 @@ TEST(Imgcodecs_Jpeg, encode_decode_rst_jpeg)\n EXPECT_EQ(0, remove(output_normal.c_str()));\n }\n \n+// See https://github.com/opencv/opencv/issues/25274\n+typedef testing::TestWithParam Imgcodecs_Jpeg_decode_cmyk;\n+TEST_P(Imgcodecs_Jpeg_decode_cmyk, regression25274)\n+{\n+ const int imread_flag = GetParam();\n+\n```",
+    "output": "Thank you for your comment ! Comparing decoding result between RGB JPEG and CMYK JPEG is good. `countNonZero()` requests single channel image, So I use `cvtest::norm()` instead of it."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_layers_1d.cpp\n**Change Type:** modified\n**Context:** PR #25205: 0D test for split layer\n\n**Code Changes:**\n```diff\n@@ -271,4 +271,40 @@ TEST(Layer_Reshape_Test, Accuracy)\n normAssert(output_ref, outputs[0]);\n }\n \n+typedef testing::TestWithParam>> Layer_Split_Test;\n+TEST_P(Layer_Split_Test, Accuracy_01D)\n+{\n+ LayerParams lp;\n+ lp.type = \"Split\";\n+ lp.name = \"SplitLayer\";\n```",
+    "output": "Got it, thanks. Very strange layer, to be honest :) But let's keep test enabled"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_layers_1d.cpp\n**Change Type:** modified\n**Context:** PR #25205: 0D test for split layer\n\n**Code Changes:**\n```diff\n@@ -271,4 +271,40 @@ TEST(Layer_Reshape_Test, Accuracy)\n normAssert(output_ref, outputs[0]);\n }\n \n+typedef testing::TestWithParam>> Layer_Split_Test;\n+TEST_P(Layer_Split_Test, Accuracy_01D)\n+{\n+ LayerParams lp;\n+ lp.type = \"Split\";\n+ lp.name = \"SplitLayer\";\n```",
+    "output": "@vpisarev What is wrong with Mat API if we need to do that?"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_layers_1d.cpp\n**Change Type:** modified\n**Context:** PR #25205: 0D test for split layer\n**Review Line:** 302\n\n**Code Changes:**\n```diff\n+ normAssert(output_ref, outputs[i]);\n+ }\n+}\n+INSTANTIATE_TEST_CASE_P(/*nothting*/, Layer_Split_Test,\n+ testing::Values(\n+ std::vector({}),\n+ std::vector({1}),\n+ std::vector({1, 4}),\n+ std::vector({1, 5}),\n+ std::vector({4, 1}),\n+ std::vector({4, 5})\n```",
+    "output": "@asmorkalov, this test should fail (https://github.com/opencv/opencv/pull/25231#issuecomment-2015359780)"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/core/include/opencv2/core/utils/trace.hpp\n**Change Type:** modified\n**Context:** PR #25197: Remove OpenVX - resolves https://github.com/opencv/opencv/issues/24995 OpenCV cleanup https://github.com/opencv/opencv/issues/25007\n\n**Code Changes:**\n```diff\n@@ -106,7 +106,6 @@ enum RegionLocationFlag {\n \n REGION_FLAG_IMPL_IPP = (1 << 16), ///< region is part of IPP code path\n REGION_FLAG_IMPL_OPENCL = (2 << 16), ///< region is part of OpenCL code path\n- REGION_FLAG_IMPL_OPENVX = (3 << 16), ///< region is part of OpenVX code path\n \n REGION_FLAG_IMPL_MASK = (15 << 16),\n \n```",
+    "output": "@mshabunin, @opencv-alalek, shall we keep this value for compatibility?"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/core/include/opencv2/core/utils/trace.hpp\n**Change Type:** modified\n**Context:** PR #25197: Remove OpenVX - resolves https://github.com/opencv/opencv/issues/24995 OpenCV cleanup https://github.com/opencv/opencv/issues/25007\n\n**Code Changes:**\n```diff\n@@ -106,7 +106,6 @@ enum RegionLocationFlag {\n \n REGION_FLAG_IMPL_IPP = (1 << 16), ///< region is part of IPP code path\n REGION_FLAG_IMPL_OPENCL = (2 << 16), ///< region is part of OpenCL code path\n- REGION_FLAG_IMPL_OPENVX = (3 << 16), ///< region is part of OpenVX code path\n \n REGION_FLAG_IMPL_MASK = (15 << 16),\n \n```",
+    "output": "@invarrow, please remove the line completely"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/core/include/opencv2/core/utility.hpp\n**Change Type:** modified\n**Context:** PR #25237: doc: add note on handling of spaces in CommandLineParser\n\n**Code Changes:**\n```diff\n@@ -773,7 +773,7 @@ The sample below demonstrates how to use CommandLineParser:\n The keys parameter is a string containing several blocks, each one is enclosed in curly braces and\n describes one argument. Each argument contains three parts separated by the `|` symbol:\n \n--# argument names is a space-separated list of option synonyms (to mark argument as positional, prefix it with the `@` symbol)\n+-# argument names is a list of option synonyms separated by standard space characters ' ' (to mark argument as positional, prefix it with the `@` symbol)\n -# default value will be used if the argument was not provided (can be empty)\n -# help message (can be empty)\n \n@@ -796,6 +796,8 @@ For example:\n```",
+    "output": "Thank you for pointing this out. I will revise as you suggested."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23897: dnn: add gemm_layer in place of fully_connected_layer for onnx models\n\n**Code Changes:**\n```diff\n@@ -2597,6 +2597,40 @@ TEST_P(Test_ONNX_layers, where_node)\n testONNXModels(\"where_layer\");\n }\n \n+TEST_P(Test_ONNX_layers, Conformance_Gemm_all_attributes) {\n+ testONNXModels(\"test_gemm_all_attributes\", pb, 0, 0, false, true, 2);\n+}\n+TEST_P(Test_ONNX_layers, Conformance_Gemm_alpha) {\n+ testONNXModels(\"test_gemm_alpha\", pb, 0, 0, false, true, 2);\n+}\n```",
+    "output": "@dkurt Could you please take a look?"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23897: dnn: add gemm_layer in place of fully_connected_layer for onnx models\n\n**Code Changes:**\n```diff\n@@ -2597,6 +2597,40 @@ TEST_P(Test_ONNX_layers, where_node)\n testONNXModels(\"where_layer\");\n }\n \n+TEST_P(Test_ONNX_layers, Conformance_Gemm_all_attributes) {\n+ testONNXModels(\"test_gemm_all_attributes\", pb, 0, 0, false, true, 2);\n+}\n+TEST_P(Test_ONNX_layers, Conformance_Gemm_alpha) {\n+ testONNXModels(\"test_gemm_alpha\", pb, 0, 0, false, true, 2);\n+}\n```",
+    "output": "> Also, if a new model is a personal Google Drive with a modified file I dont find a proper place to store the model. Do we have a shared cloud drive for this? IIRC, Maksim worked on this previously. Let us include him @mshabunin for this topic. > Due even opset 12 version has Reshape with 1x1024 hardcoded shapes It has hardcoded 1x1024 Reshape because the input shape has batch size 1. The ONNX Gemm operator does not support batch size. Although technically speaking we can make it support batch size, it will only cause more troubles like this in the future (and in other backends) if we do so. I suggest we just fail and exit the program if the model proto is defined to have batch size 1 and has some operators does not support batch size like Gemm but yet the user still want to run it with batch size >= 2."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23897: dnn: add gemm_layer in place of fully_connected_layer for onnx models\n\n**Code Changes:**\n```diff\n@@ -2597,6 +2597,40 @@ TEST_P(Test_ONNX_layers, where_node)\n testONNXModels(\"where_layer\");\n }\n \n+TEST_P(Test_ONNX_layers, Conformance_Gemm_all_attributes) {\n+ testONNXModels(\"test_gemm_all_attributes\", pb, 0, 0, false, true, 2);\n+}\n+TEST_P(Test_ONNX_layers, Conformance_Gemm_alpha) {\n+ testONNXModels(\"test_gemm_alpha\", pb, 0, 0, false, true, 2);\n+}\n```",
+    "output": "@fengyuentau, good point, but there are the following issues: * No backends support in GEMM. * We will need to replace InnerProduct to GEMM everywhere, not only in ONNX importer > | fc (current impl) | mean=21.94 median=21.40 min=19.74 | * Which implementation of FC layer has been benchmarked? Does GEMM shows better performance on other CPUs? https://github.com/opencv/opencv/blob/1794cdc03c9505bb46f33a5cde5e210c1c7f65a4/modules/dnn/src/layers/fully_connected_layer.cpp#L268-L292 So my proposal is to determine for which configurations GEMM is better first."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23897: dnn: add gemm_layer in place of fully_connected_layer for onnx models\n\n**Code Changes:**\n```diff\n@@ -2597,6 +2597,40 @@ TEST_P(Test_ONNX_layers, where_node)\n testONNXModels(\"where_layer\");\n }\n \n+TEST_P(Test_ONNX_layers, Conformance_Gemm_all_attributes) {\n+ testONNXModels(\"test_gemm_all_attributes\", pb, 0, 0, false, true, 2);\n+}\n+TEST_P(Test_ONNX_layers, Conformance_Gemm_alpha) {\n+ testONNXModels(\"test_gemm_alpha\", pb, 0, 0, false, true, 2);\n+}\n```",
+    "output": "Thanks, I will try to solve this issue in a different PR."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23897: dnn: add gemm_layer in place of fully_connected_layer for onnx models\n\n**Code Changes:**\n```diff\n@@ -2597,6 +2597,40 @@ TEST_P(Test_ONNX_layers, where_node)\n testONNXModels(\"where_layer\");\n }\n \n+TEST_P(Test_ONNX_layers, Conformance_Gemm_all_attributes) {\n+ testONNXModels(\"test_gemm_all_attributes\", pb, 0, 0, false, true, 2);\n+}\n+TEST_P(Test_ONNX_layers, Conformance_Gemm_alpha) {\n+ testONNXModels(\"test_gemm_alpha\", pb, 0, 0, false, true, 2);\n+}\n```",
+    "output": "@fengyuentau, sure. Here is a brief proposal so if you find it suitable, we can add a workaround: https://github.com/dkurt/opencv/commit/cad21a3d0ab2b42c3d5d7d67b55606e8982e311f"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23897: dnn: add gemm_layer in place of fully_connected_layer for onnx models\n\n**Code Changes:**\n```diff\n@@ -2597,6 +2597,40 @@ TEST_P(Test_ONNX_layers, where_node)\n testONNXModels(\"where_layer\");\n }\n \n+TEST_P(Test_ONNX_layers, Conformance_Gemm_all_attributes) {\n+ testONNXModels(\"test_gemm_all_attributes\", pb, 0, 0, false, true, 2);\n+}\n+TEST_P(Test_ONNX_layers, Conformance_Gemm_alpha) {\n+ testONNXModels(\"test_gemm_alpha\", pb, 0, 0, false, true, 2);\n+}\n```",
+    "output": "@fengyuentau, will https://github.com/dkurt/opencv/commit/cad21a3d0ab2b42c3d5d7d67b55606e8982e311f solve the problem at least partially?"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23897: dnn: add gemm_layer in place of fully_connected_layer for onnx models\n**Review Line:** 1950\n\n**Code Changes:**\n```diff\n }\n \n-// A * B + C = Y, we require that the dimension of A is [m, k], and the dimension of B is [n, k].\n-// And the dim of output Y is [m, n]\n-void ONNXImporter::parseGemm(LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto)\n+void ONNXImporter::parseGemm(LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto_)\n {\n- CV_Assert(node_proto.input_size() >= 2);\n- layerParams.type = \"InnerProduct\";\n- int transA = layerParams.get(\"transA\", 0);\n- layerParams.set(\"transA\", transA == 1);\n```",
+    "output": "@fengyuentau, can `parseMatMul` also be replaced to use Gemm?"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/include/opencv2/dnn/version.hpp\n**Change Type:** modified\n**Context:** PR #24862: 5.x merge 4.x - The PR includes all changes in 4.x till 4.9.0 release. OpenCV Contrib PR: https://github.com/opencv/opencv_contrib/pull/362...\n**Review Line:** 9\n\n**Code Changes:**\n```diff\n@@ -6,7 +6,7 @@\n #define OPENCV_DNN_VERSION_HPP\n \n /// Use with major OpenCV version only.\n-#define OPENCV_DNN_API_VERSION 20230620\n+#define OPENCV_DNN_API_VERSION 20231225\n \n #if !defined CV_DOXYGEN && !defined CV_STATIC_ANALYSIS && !defined CV_DNN_DONT_ADD_INLINE_NS\n #define CV__DNN_INLINE_NS __CV_CAT(dnn5_v, OPENCV_DNN_API_VERSION)\n```",
+    "output": "@opencv-alalek Could you tune abi-complience-checker on CI."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/calib/include/opencv2/calib.hpp\n**Change Type:** modified\n**Context:** PR #24862: 5.x merge 4.x - The PR includes all changes in 4.x till 4.9.0 release. OpenCV Contrib PR: https://github.com/opencv/opencv_contrib/pull/362...\n**Review Line:** 415\n\n**Code Changes:**\n```diff\n CALIB_CB_EXHAUSTIVE = 16,\n CALIB_CB_ACCURACY = 32,\n CALIB_CB_LARGER = 64,\n- CALIB_CB_MARKER = 128\n+ CALIB_CB_MARKER = 128,\n+ CALIB_CB_PLAIN = 256\n };\n \n enum { CALIB_CB_SYMMETRIC_GRID = 1,\n@@ -502,6 +503,10 @@ square-like shape) to filter out false quads extracted at the contour retrieval\n - @ref CALIB_CB_FAST_CHECK Run a fast check on the image that looks for chessboard corners,\n```",
+    "output": "Missing these parts from 4.x: ```.diff @@ -1235,6 +1236,10 @@ - @ref CALIB_CB_FAST_CHECK Run a fast check on the image that looks for chessboard corners, and shortcut the call if none is found. This can drastically speed up the call in the degenerate condition when no chessboard is observed. +- @ref CALIB_CB_PLAIN All other flags are ignored. The input image is taken as is. +No image processing is done to improve to find the checkerboard. This has the effect of speeding up the +execution of the function but could lead to not recognizing the checkerboard if the image +is not previously binarized in the appropriate manner. The function attempts to determine whether the input image is a view of the chessboard pattern and locate the internal chessboard corners. The function returns a non-zero value if all of the corners @@ -1595,6 +1600,10 @@ The algorithm performs the following steps: \\f$f_y\\f$ (ratios of 10:1 or more)), then you are probably using patternSize=cvSize(rows,cols) instead of using patternSize=cvSize(cols,rows) in @ref findChessboardCorners. +@note + The function may throw exceptions, if unsupported combination of parameters is provided or + the system is underconstrained. + @sa calibrateCameraRO, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort ```"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** doc/pattern_tools/test_charuco_board.py\n**Change Type:** modified\n**Context:** PR #24862: 5.x merge 4.x - The PR includes all changes in 4.x till 4.9.0 release. OpenCV Contrib PR: https://github.com/opencv/opencv_contrib/pull/362...\n**Review Line:** 116\n\n**Code Changes:**\n```diff\n- _charucoCorners, _charucoIds, markerCorners_cv, markerIds_cv = charuco_detector.detectBoard(from_cv_img)\n+ _charucoCorners, _charuco_ids_svg, marker_corners_svg, marker_ids_svg = charuco_detector.detectBoard(from_svg_img)\n+ _charucoCorners, _charuco_ids_cv, marker_corners_cv, marker_ids_cv = charuco_detector.detectBoard(from_cv_img)\n+ marker_corners_svg_map, marker_corners_cv_map = {}, {}\n+ for i in range(len(marker_ids_svg)):\n+ marker_corners_svg_map[int(marker_ids_svg[i])] = marker_corners_svg[i]\n+ for i in range(len(marker_ids_cv)):\n+ marker_corners_cv_map[int(marker_ids_cv[i])] = marker_corners_cv[i]\n \n- np.testing.assert_allclose(markerCorners_svg, markerCorners_cv, 0.1, 0.1)\n- np.testing.assert_allclose(markerIds_svg, markerIds_cv, 0.1, 0.1)\n```",
+    "output": "@AleksandrPanov Please take a look on these changes"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/src/net_impl.cpp\n**Change Type:** modified\n**Context:** PR #24411: Added int32, int64 support and type inference to dnn - Related PRs: - Contrib: https://github.com/opencv/opencv_contrib/pull/3648 - Extra: ...\n**Review Line:** 193\n\n**Code Changes:**\n```diff\n@@ -186,11 +186,6 @@ void Net::Impl::setUpNet(const std::vector& blobsToKeep_)\n \n clear();\n \n- if (hasDynamicShapes)\n- {\n- updateLayersShapes();\n- }\n-\n this->blobsToKeep = blobsToKeep_;\n \n allocateLayers(blobsToKeep_);\n@@ -475,7 +470,7 @@ void Net::Impl::allocateLayer(int lid, const LayersShapesMap& layersShapes)\n allocateLayer(*i, layersShapes);\n \n```",
+    "output": "@dkurt could you take a look?"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/3d/test/test_octree.cpp\n**Change Type:** modified\n**Context:** PR #23985: [GSoC] Update octree methods and create frames for PCC\n\n**Code Changes:**\n```diff\n@@ -14,100 +14,145 @@ class OctreeTest: public testing::Test\n void SetUp() override\n {\n pointCloudSize = 1000;\n- maxDepth = 18;\n-\n+ resolution = 0.0001;\n int scale;\n Point3i pmin, pmax;\n- scale = 1<<20;\n```",
+    "output": "Forgot to change it too, thanks"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** doc/tutorials/objdetect/aruco_board_detection/aruco_board_detection.markdown\n**Change Type:** added\n**Context:** PR #23018: Move Aruco tutorials and samples to main repo\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,201 @@\n+Detection of ArUco boards {#tutorial_aruco_board_detection}\n+=========================\n+\n+@prev_tutorial{tutorial_aruco_detection}\n+\n+| | |\n+| -: | :- |\n+| Original authors | Sergio Garrido, Alexander Panov |\n+| Compatibility | OpenCV >= 4.7.0 |\n```",
+    "output": "Please use doxyen/markdown anchor, but not http links. Also there is Doxygen `@note` directive to get proper formatting."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** doc/tutorials/objdetect/aruco_detection/aruco_detection.markdown\n**Change Type:** added\n**Context:** PR #23018: Move Aruco tutorials and samples to main repo\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,702 @@\n+Detection of ArUco Markers {#tutorial_aruco_detection}\n+==========================\n+\n+@next_tutorial{tutorial_aruco_board_detection}\n+\n+| | |\n+| -: | :- |\n+| Original authors | Sergio Garrido, Alexander Panov |\n+| Compatibility | OpenCV >= 4.7.0 |\n```",
+    "output": "Please use `@snippet` here."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** doc/tutorials/objdetect/aruco_detection/aruco_detection.markdown\n**Change Type:** added\n**Context:** PR #23018: Move Aruco tutorials and samples to main repo\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,702 @@\n+Detection of ArUco Markers {#tutorial_aruco_detection}\n+==========================\n+\n+@next_tutorial{tutorial_aruco_board_detection}\n+\n+| | |\n+| -: | :- |\n+| Original authors | Sergio Garrido, Alexander Panov |\n+| Compatibility | OpenCV >= 4.7.0 |\n```",
+    "output": "Added `@snippet`"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** doc/tutorials/objdetect/aruco_detection/aruco_detection.markdown\n**Change Type:** added\n**Context:** PR #23018: Move Aruco tutorials and samples to main repo\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,702 @@\n+Detection of ArUco Markers {#tutorial_aruco_detection}\n+==========================\n+\n+@next_tutorial{tutorial_aruco_board_detection}\n+\n+| | |\n+| -: | :- |\n+| Original authors | Sergio Garrido, Alexander Panov |\n+| Compatibility | OpenCV >= 4.7.0 |\n```",
+    "output": "fixed, added link in `@see` section: ![image](https://github.com/opencv/opencv/assets/22337800/5d4e9221-d706-4990-bc42-6a5333499d94)"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_layers.cpp\n**Change Type:** modified\n**Context:** PR #24977: Primitive 1D Tests\n**Review Line:** 776\n\n**Code Changes:**\n```diff\n+ lp.name = operation + \"_Layer\";\n+ lp.set(\"operation\", operation);\n+ Ptr layer = EltwiseLayer::create(lp);\n+\n+ std::vector input_shape = {batch_size, 1};\n+ if (batch_size == 0)\n+ input_shape.erase(input_shape.begin());\n+\n+ cv::Mat input1 = cv::Mat(input_shape, CV_32F, 1.0);\n+ cv::Mat input2 = cv::Mat(input_shape, CV_32F, 1.0);\n+ cv::randu(input1, 0.0, 1.0);\n```",
+    "output": "Thank you for comment. I will add `1d` cases."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/dnn/test/test_int8_layers.cpp\n**Change Type:** modified\n**Context:** PR #24980: dnn cleanup: On-fly-quantization removal\n**Review Line:** 9\n\n**Code Changes:**\n```diff\n \n+// The tests are disabled, because on-fly quantization was removed in https://github.com/opencv/opencv/pull/24980\n+// To be restored, when test models are quantized outsize of OpenCV\n+#if 0\n+\n #include \"test_precomp.hpp\"\n #include \"npy_blob.hpp\"\n #include \n@@ -1389,3 +1393,5 @@ TEST_P(Test_Int8_nets, YOLOv4_tiny)\n INSTANTIATE_TEST_CASE_P(/**/, Test_Int8_nets, dnnBackendsAndTargetsInt8());\n \n```",
+    "output": "> > Cannot use multi-line comment /**/ to comment out the whole file > > because need to use `#if 0` instead. Thank you! It is much better than commenting indeed. I used `#if 0` instead of commenting in the latest comment."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #23021: `cuda`: CMake add CUDA first class language support\n**Review Line:** 678\n\n**Code Changes:**\n```diff\n+\n+ # Check CUDA_PATH if supplied\n+ if(UNIX AND CUDA_PATH AND NOT ENV{CUDA_PATH})\n+ set(ENV{CUDA_PATH} ${CUDA_PATH})\n+ elseif(WIN32 AND CUDA_PATH)\n+ set(ENV{PATH} \"${CUDA_PATH}\\\\bin\\;$ENV{PATH}\")\n+ endif()\n+ include(CheckLanguage)\n+ check_language(CUDA)\n+\n+ # Fallback to checking default locations\n```",
+    "output": "Wow thank you, you are right, I didn't realize I was using Visual Studio CMake generator, conan selects it by default. My environment variable CUDA_PATH were set correctly and nvcc was in PATH but nothing worked. Switching to Ninja worked!"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/3d/src/rendering.cpp\n**Change Type:** added\n**Context:** PR #24459: Triangle rasterization function\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,402 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+#include \"precomp.hpp\"\n+\n+namespace cv {\n+\n+TriangleRasterizeSettings::TriangleRasterizeSettings()\n```",
+    "output": "This is an impossible case but it's not evident from the code, fixing it, thanks"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/3d/include/opencv2/3d.hpp\n**Change Type:** modified\n**Context:** PR #24459: Triangle rasterization function\n\n**Code Changes:**\n```diff\n@@ -2809,6 +2809,136 @@ CV_EXPORTS_W void loadMesh(const String &filename, OutputArray vertices, OutputA\n CV_EXPORTS_W void saveMesh(const String &filename, InputArray vertices, InputArrayOfArrays indices,\n InputArray normals = noArray(), InputArray colors = noArray());\n \n+\n+//! Triangle fill settings\n+enum TriangleShadingType\n+{\n+ RASTERIZE_SHADING_WHITE = 0, //!< a white color is used for the whole triangle\n+ RASTERIZE_SHADING_FLAT = 1, //!< a color of 1st vertex of each triangle is used\n```",
+    "output": "remove stars before `@param`"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** cmake/android/android_gradle_projects.cmake\n**Change Type:** modified\n**Context:** PR #23447: AGP 8.0 build.gradle namespace and aidl buildFeature requirement added\n\n**Code Changes:**\n```diff\n@@ -82,6 +82,27 @@ foreach(fname ${GRADLE_WRAPPER_FILES})\n install(FILES \"${OpenCV_SOURCE_DIR}/platforms/android/gradle-wrapper/${fname}\" DESTINATION \"${ANDROID_INSTALL_SAMPLES_DIR}/${__dir}\" COMPONENT samples ${__permissions})\n endforeach()\n \n+# set build.gradle namespace\n+if(NOT (ANDROID_GRADLE_PLUGIN_VERSION VERSION_LESS \"7.3.0\"))\n+ ocv_update(OPENCV_ANDROID_NAMESPACE_DECLARATION \"namespace 'org.opencv'\")\n+else()\n+ ocv_update(OPENCV_ANDROID_NAMESPACE_DECLARATION \"\")\n+endif()\n```",
+    "output": "Need to use version check to avoid breaking of existed build configurations: ``` if(... GRADLE VERSION CHECK ...) ocv_update(OPENCV_NAMESPACE_DECLARATION \"namespace 'org.opencv'\") else() ocv_update(OPENCV_NAMESPACE_DECLARATION \"\") endif() ``` and below: ```.diff android { + @OPENCV_NAMESPACE_DECLARATION@ ```"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** cmake/android/android_gradle_projects.cmake\n**Change Type:** modified\n**Context:** PR #23447: AGP 8.0 build.gradle namespace and aidl buildFeature requirement added\n\n**Code Changes:**\n```diff\n@@ -82,6 +82,27 @@ foreach(fname ${GRADLE_WRAPPER_FILES})\n install(FILES \"${OpenCV_SOURCE_DIR}/platforms/android/gradle-wrapper/${fname}\" DESTINATION \"${ANDROID_INSTALL_SAMPLES_DIR}/${__dir}\" COMPONENT samples ${__permissions})\n endforeach()\n \n+# set build.gradle namespace\n+if(NOT (ANDROID_GRADLE_PLUGIN_VERSION VERSION_LESS \"7.3.0\"))\n+ ocv_update(OPENCV_ANDROID_NAMESPACE_DECLARATION \"namespace 'org.opencv'\")\n+else()\n+ ocv_update(OPENCV_ANDROID_NAMESPACE_DECLARATION \"\")\n+endif()\n```",
+    "output": "I made those changes accordingly so please check it out."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/core/include/opencv2/core/cvdef.h\n**Change Type:** modified\n**Context:** PR #23865: Added new data types to cv::Mat & UMat\n**Review Line:** 970\n\n**Code Changes:**\n```diff\n \n@@ -963,6 +967,41 @@ class float16_t\n #endif\n };\n \n+class bfloat16_t\n+{\n+public:\n+ bfloat16_t() : w(0) {}\n+ explicit bfloat16_t(float x)\n+ {\n```",
+    "output": "thanks for the link, I did not know about those new type definitions. Yes, it would be nice to resolve possible name conflicts."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** modules/calib/test/test_cameracalibration.cpp\n**Change Type:** modified\n**Context:** PR #23865: Added new data types to cv::Mat & UMat\n**Review Line:** 2095\n\n**Code Changes:**\n```diff\n img_pts2.copyTo(image_points_all[1][i]);\n }\n std::vector image_sizes (2, imageSize);\n- Mat visibility_mat = Mat_::ones(2, numImgs);\n- std::vector is_fisheye(2, false);\n+ Mat visibility_mat = Mat_::ones(2, numImgs);\n+ std::vector is_fisheye(2, false);\n std::vector all_flags(2, flags);\n double rms = calibrateMultiview(objectPoints, image_points_all, image_sizes, visibility_mat,\n Rs, Ts, Ks, distortions, rvecs, tvecs, is_fisheye, errors_mat, noArray(), false, all_flags);\n```",
+    "output": "@asmorkalov Need to cleanup this buggy code from 4.x too. And look on traits why they doesn't disable that."
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+    "input": "**File:** doc/tutorials/dnn/dnn_yolo/dnn_yolo.markdown\n**Change Type:** modified\n**Context:** PR #24898: Documentation for Yolo usage in Opencv\n\n**Code Changes:**\n```diff\n@@ -9,46 +9,224 @@ YOLO DNNs {#tutorial_dnn_yolo}\n | | |\n | -: | :- |\n | Original author | Alessandro de Oliveira Faria |\n-| Compatibility | OpenCV >= 3.3.1 |\n+| Extended by | Abduragim Shtanchaev |\n+| Compatibility | OpenCV >= 4.9.0 |\n+\n+\n+Running pre-trained YOLO model in OpenCV\n```",
+    "output": "use doxygen `@snippet`"
+  },
+  {
+    "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/executor/gthreadedexecutor.cpp\n**Change Type:** added\n**Context:** PR #24845: G-API: Implement concurrent executor\n**Review Line:** 97\n\n**Code Changes:**\n```diff\n+ case GRunArgP::index_of() : /* do nothing */ break;\n+ default: util::throw_error(std::logic_error(\"content type of the runtime argument does not match to resource description ?\"));\n+ }\n+}\n+\n+void assignMetaStubExec(Mag& mag, const RcDesc &rc, const cv::GRunArg::Meta &meta) {\n+ switch (rc.shape) {\n+ case GShape::GARRAY: mag.meta()[rc.id] = meta; break;\n+ case GShape::GOPAQUE: mag.meta()[rc.id] = meta; break;\n+ case GShape::GSCALAR: mag.meta()[rc.id] = meta; break;\n+ case GShape::GFRAME: mag.meta()[rc.id] = meta; break;\n```", + "output": "@dmatveev This is the place where meta is stored into `Mag`. I assume there might be some data race that's why `Mag` is currently protected by mutex." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/executor/gthreadedexecutor.cpp\n**Change Type:** added\n**Context:** PR #24845: G-API: Implement concurrent executor\n**Review Line:** 274\n\n**Code Changes:**\n```diff\n+ // Execute the task\n+ m_f();\n+ // Notify every consumer about completion one of its dependencies\n+ for (auto* consumer : m_consumers) {\n+ const auto num_ready =\n+ consumer->m_ready_producers.fetch_add(1, std::memory_order_relaxed) + 1;\n+ // The last completed producer schedule the consumer for execution\n+ if (num_ready == consumer->m_num_producers) {\n+ state.tp.schedule([&state, consumer](){\n+ consumer->run(state);\n+ });\n```", + "output": "@TolyaTalamanov did you find out which is the best order here?" 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24809: Added test for YOLO NAS\n**Review Line:** 2771\n\n**Code Changes:**\n```diff\n }\n@@ -2751,6 +2768,41 @@ TEST_P(Test_ONNX_nets, YOLOX)\n 1.0e-4, 1.0e-4);\n }\n \n+TEST_P(Test_ONNX_nets, YOLONas)\n+{\n+ // model information: https://dl.opencv.org/models/yolo-nas/Readme.md\n+ std::string weightPath = _tf(\"models/yolo_nas_s.onnx\", false);\n+\n+ Size targetSize{640, 640};\n```", + "output": "Valgrind reports invalid memory access: ``` [ INFO:0@20.101] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!/heads/Constant_12) from domain='ai.onnx' [ INFO:0@20.102] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Expand]:(onnx_node!/heads/Expand_1) from domain='ai.onnx' [ INFO:0@20.103] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!/heads/Constant_13) from domain='ai.onnx' [ INFO:0@20.104] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Unsqueeze]:(onnx_node!/heads/Unsqueeze) from domain='ai.onnx' [ INFO:0@20.113] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!/heads/Constant_14) from domain='ai.onnx' [ INFO:0@20.113] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Unsqueeze]:(onnx_node!/heads/Unsqueeze_1) from domain='ai.onnx' [ INFO:0@20.114] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Concat]:(onnx_node!/heads/Concat_2) from domain='ai.onnx' ==449084== Invalid read of size 4 
==449084== at 0x4CFC938: cv::dnn::dnn4_v20231225::ONNXImporter::parseConcat(cv::dnn::dnn4_v20231225::LayerParams&, opencv_onnx::NodeProto const&) (onnx_importer.cpp:2623) ==449084== by 0x4CE7464: cv::dnn::dnn4_v20231225::ONNXImporter::handleNode(opencv_onnx::NodeProto const&) (onnx_importer.cpp:998) ==449084== by 0x4CE5EC0: cv::dnn::dnn4_v20231225::ONNXImporter::populateNet() (onnx_importer.cpp:888) ==449084== by 0x4CDEA85: cv::dnn::dnn4_v20231225::ONNXImporter::ONNXImporter(cv::dnn::dnn4_v20231225::Net&, char const*) (onnx_importer.cpp:283) ==449084== by 0x4D19AA2: cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNet(char const*&&) (dnn_common.hpp:77) ==449084== by 0x4D165E9: cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNetDiagnostic(char const*&&) (dnn_common.hpp:84) ==449084== by 0x4D131AC: cv::dnn::dnn4_v20231225::readNetFromONNX(std::__cxx11::basic_string, std::allocator > const&) (onnx_importer.cpp:4061) ==449084== by 0x4A15EF9: cv::dnn::dnn4_v20231225::readNet(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) (dnn_read.cpp:56) ==449084== by 0x65B31C: opencv_test::(anonymous namespace)::testYOLO(std::__cxx11::basic_string, std::allocator > const&, std::vector > const&, std::vector > const&, std::vector, std::allocator > > const&, cv::dnn::dnn4_v20231225::Image2BlobParams, float, float, double, double, std::__cxx11::basic_string, std::allocator >) (test_onnx_importer.cpp:2632) ==449084== by 0x65D4E7: opencv_test::(anonymous namespace)::Test_ONNX_nets_YOLONas_Test::Body() (test_onnx_importer.cpp:2799) ==449084== by 0x65CFF3: opencv_test::(anonymous namespace)::Test_ONNX_nets_YOLONas_Test::TestBody() (test_onnx_importer.cpp:2771) ==449084== by 0x739AD9: void testing::internal::HandleSehExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) (ts_gtest.cpp:3919) ==449084== Address 0xc51775c 
is 4 bytes before a block of size 12 alloc'd ==449084== at 0x4842FB5: operator new(unsigned long) (vg_replace_malloc.c:472) ==449084== by 0x427971: std::__new_allocator::allocate(unsigned long, void const*) (new_allocator.h:147) ==449084== by 0x4272CA: allocate (alloc_traits.h:482) ==449084== by 0x4272CA: std::_Vector_base >::_M_allocate(unsigned long) (stl_vector.h:378) ==449084== by 0x4CB922: int* std::vector >::_M_allocate_and_copy(unsigned long, int const*, int const*) (stl_vector.h:1616) ==449084== by 0x4C8D2B: void std::vector >::_M_assign_aux(int const*, int const*, std::forward_iterator_tag) (vector.tcc:324) ==449084== by 0x4A46783: void std::vector >::assign(int const*, int const*) (stl_vector.h:826) ==449084== by 0x4CDD297: cv::dnn::dnn4_v20231225::shape(int const*, int) (shape_utils.hpp:122) ==449084== by 0x4CDD2EB: cv::dnn::dnn4_v20231225::shape(cv::Mat const&) (shape_utils.hpp:128) ==449084== by 0x4CFC904: cv::dnn::dnn4_v20231225::ONNXImporter::parseConcat(cv::dnn::dnn4_v20231225::LayerParams&, opencv_onnx::NodeProto const&) (onnx_importer.cpp:2623) ==449084== by 0x4CE7464: cv::dnn::dnn4_v20231225::ONNXImporter::handleNode(opencv_onnx::NodeProto const&) (onnx_importer.cpp:998) ==449084== by 0x4CE5EC0: cv::dnn::dnn4_v20231225::ONNXImporter::populateNet() (onnx_importer.cpp:888) ==449084== by 0x4CDEA85: cv::dnn::dnn4_v20231225::ONNXImporter::ONNXImporter(cv::dnn::dnn4_v20231225::Net&, char const*) (onnx_importer.cpp:283) ==449084== ==449084== Invalid write of size 4 ==449084== at 0x4CFC93A: cv::dnn::dnn4_v20231225::ONNXImporter::parseConcat(cv::dnn::dnn4_v20231225::LayerParams&, opencv_onnx::NodeProto const&) (onnx_importer.cpp:2623) ==449084== by 0x4CE7464: cv::dnn::dnn4_v20231225::ONNXImporter::handleNode(opencv_onnx::NodeProto const&) (onnx_importer.cpp:998) ==449084== by 0x4CE5EC0: cv::dnn::dnn4_v20231225::ONNXImporter::populateNet() (onnx_importer.cpp:888) ==449084== by 0x4CDEA85: 
cv::dnn::dnn4_v20231225::ONNXImporter::ONNXImporter(cv::dnn::dnn4_v20231225::Net&, char const*) (onnx_importer.cpp:283) ==449084== by 0x4D19AA2: cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNet(char const*&&) (dnn_common.hpp:77) ==449084== by 0x4D165E9: cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNetDiagnostic(char const*&&) (dnn_common.hpp:84) ==449084== by 0x4D131AC: cv::dnn::dnn4_v20231225::readNetFromONNX(std::__cxx11::basic_string, std::allocator > const&) (onnx_importer.cpp:4061) ==449084== by 0x4A15EF9: cv::dnn::dnn4_v20231225::readNet(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) (dnn_read.cpp:56) ==449084== by 0x65B31C: opencv_test::(anonymous namespace)::testYOLO(std::__cxx11::basic_string, std::allocator > const&, std::vector > const&, std::vector > const&, std::vector, std::allocator > > const&, cv::dnn::dnn4_v20231225::Image2BlobParams, float, float, double, double, std::__cxx11::basic_string, std::allocator >) (test_onnx_importer.cpp:2632) ==449084== by 0x65D4E7: opencv_test::(anonymous namespace)::Test_ONNX_nets_YOLONas_Test::Body() (test_onnx_importer.cpp:2799) ==449084== by 0x65CFF3: opencv_test::(anonymous namespace)::Test_ONNX_nets_YOLONas_Test::TestBody() (test_onnx_importer.cpp:2771) ==449084== by 0x739AD9: void testing::internal::HandleSehExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) (ts_gtest.cpp:3919) ==449084== Address 0xc51770c is 4 bytes before a block of size 12 alloc'd ==449084== at 0x4842FB5: operator new(unsigned long) (vg_replace_malloc.c:472) ==449084== by 0x49E9709: std::__new_allocator::allocate(unsigned long, void const*) (new_allocator.h:147) ==449084== by 0x49E54F6: allocate (alloc_traits.h:482) ==449084== by 0x49E54F6: std::_Vector_base >::_M_allocate(unsigned long) (stl_vector.h:378) ==449084== by 0x49E52C4: std::_Vector_base 
>::_M_create_storage(unsigned long) (stl_vector.h:395) ==449084== by 0x49E0FB1: std::_Vector_base >::_Vector_base(unsigned long, std::allocator const&) (stl_vector.h:332) ==449084== by 0x49DD574: std::vector >::vector(std::vector > const&) (stl_vector.h:600) ==449084== by 0x4CFC8C9: cv::dnn::dnn4_v20231225::ONNXImporter::parseConcat(cv::dnn::dnn4_v20231225::LayerParams&, opencv_onnx::NodeProto const&) (onnx_importer.cpp:2622) ==449084== by 0x4CE7464: cv::dnn::dnn4_v20231225::ONNXImporter::handleNode(opencv_onnx::NodeProto const&) (onnx_importer.cpp:998) ==449084== by 0x4CE5EC0: cv::dnn::dnn4_v20231225::ONNXImporter::populateNet() (onnx_importer.cpp:888) ==449084== by 0x4CDEA85: cv::dnn::dnn4_v20231225::ONNXImporter::ONNXImporter(cv::dnn::dnn4_v20231225::Net&, char const*) (onnx_importer.cpp:283) ==449084== by 0x4D19AA2: cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNet(char const*&&) (dnn_common.hpp:77) ==449084== by 0x4D165E9: cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNetDiagnostic(char const*&&) (dnn_common.hpp:84) ==449084== ``` Usage: ``` cmake -DCMAKE_BUILD_TYPE=Debug ... ... OPENCV_OPENCL_RUNTIME=disabled valgrind ./bin/opencv_test_dnn --test_threads=2 --test_debug=1 --gtest_filter=*YOLONas* ``` Also there are sporadic crashes: http://pullrequest.opencv.org/buildbot/builders/precommit_linux32/builds/100319" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24809: Added test for YOLO NAS\n**Review Line:** 2771\n\n**Code Changes:**\n```diff\n }\n@@ -2751,6 +2768,41 @@ TEST_P(Test_ONNX_nets, YOLOX)\n 1.0e-4, 1.0e-4);\n }\n \n+TEST_P(Test_ONNX_nets, YOLONas)\n+{\n+ // model information: https://dl.opencv.org/models/yolo-nas/Readme.md\n+ std::string weightPath = _tf(\"models/yolo_nas_s.onnx\", false);\n+\n+ Size targetSize{640, 640};\n```", + "output": "The same issue is reproducible with Address sanitizer: `cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_FLAGS_DEBUG=\"-fno-omit-frame-pointer -fsanitize=address\" -DCMAKE_C_FLAGS_DEBUG=\"-fno-omit-frame-pointer -fsanitize=address\" ../opencv-master && make -j4 opencv_test_dnn ` `./bin/opencv_test_dnn --gtest_filter=\"*Test_ONNX_nets.YOLONas*\"` ``` [ INFO:0@2.156] global onnx_importer.cpp:976 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Concat]:(onnx_node!/heads/Concat_2) from domain='ai.onnx' ================================================================= ==14910==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200035688c at pc 0x7f54275ac6e3 bp 0x7ffea3fb6ac0 sp 0x7ffea3fb6ab0 READ of size 4 at 0x60200035688c thread T0 #0 0x7f54275ac6e2 in cv::dnn::dnn4_v20231225::ONNXImporter::parseConcat(cv::dnn::dnn4_v20231225::LayerParams&, opencv_onnx::NodeProto const&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xc2a6e2) #1 0x7f542757990c in cv::dnn::dnn4_v20231225::ONNXImporter::handleNode(opencv_onnx::NodeProto const&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xbf790c) #2 0x7f5427576a90 in cv::dnn::dnn4_v20231225::ONNXImporter::populateNet() (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xbf4a90) #3 0x7f5427566342 in 
cv::dnn::dnn4_v20231225::ONNXImporter::ONNXImporter(cv::dnn::dnn4_v20231225::Net&, char const*) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xbe4342) #4 0x7f54275f425b in cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNet(char const*&&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xc7225b) #5 0x7f54275ebc0a in cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNetDiagnostic(char const*&&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xc69c0a) #6 0x7f54275e2fea in cv::dnn::dnn4_v20231225::readNetFromONNX(std::__cxx11::basic_string, std::allocator > const&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xc60fea) #7 0x7f5426f0bc4a in cv::dnn::dnn4_v20231225::readNet(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0x589c4a) #8 0x55d29663ffb7 in opencv_test::(anonymous namespace)::testYOLO(std::__cxx11::basic_string, std::allocator > const&, std::vector > const&, std::vector > const&, std::vector, std::allocator > > const&, cv::dnn::dnn4_v20231225::Image2BlobParams, float, float, double, double, std::__cxx11::basic_string, std::allocator >) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x757fb7) #9 0x55d2966450fe in opencv_test::(anonymous namespace)::Test_ONNX_nets_YOLONas_Test::Body() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x75d0fe) #10 0x55d296644564 in opencv_test::(anonymous namespace)::Test_ONNX_nets_YOLONas_Test::TestBody() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x75c564) #11 0x55d29686d2f8 in void testing::internal::HandleSehExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x9852f8) 
#12 0x55d2968624c0 in void testing::internal::HandleExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x97a4c0) #13 0x55d296824233 in testing::Test::Run() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x93c233) #14 0x55d296825620 in testing::TestInfo::Run() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x93d620) #15 0x55d29682617a in testing::TestCase::Run() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x93e17a) #16 0x55d29684142e in testing::internal::UnitTestImpl::RunAllTests() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x95942e) #17 0x55d29686f4eb in bool testing::internal::HandleSehExceptionsInMethodIfSupported(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x9874eb) #18 0x55d29686449b in bool testing::internal::HandleExceptionsInMethodIfSupported(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x97c49b) #19 0x55d29683e1ad in testing::UnitTest::Run() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x9561ad) #20 0x55d2964cc887 in RUN_ALL_TESTS() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x5e4887) #21 0x55d2964ccc4f in main (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x5e4c4f) #22 0x7f5420abec86 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21c86) #23 0x55d296133139 in _start (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x24b139) 0x60200035688c is located 4 bytes to the left of 12-byte region [0x602000356890,0x60200035689c) allocated by thread T0 here: #0 0x7f54282b8448 in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xe0448) #1 0x55d296137e10 in 
__gnu_cxx::new_allocator::allocate(unsigned long, void const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x24fe10) #2 0x55d296137a78 in std::allocator_traits >::allocate(std::allocator&, unsigned long) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x24fa78) #3 0x55d29613744f in std::_Vector_base >::_M_allocate(unsigned long) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x24f44f) #4 0x55d296298441 in int* std::vector >::_M_allocate_and_copy(unsigned long, int const*, int const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x3b0441) #5 0x55d296293a87 in void std::vector >::_M_assign_aux(int const*, int const*, std::forward_iterator_tag) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x3aba87) #6 0x55d29628dd41 in void std::vector >::_M_assign_dispatch(int const*, int const*, std::__false_type) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x3a5d41) #7 0x55d2962875e0 in void std::vector >::assign(int const*, int const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x39f5e0) #8 0x7f5427563a50 in cv::dnn::dnn4_v20231225::shape(int const*, int) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xbe1a50) #9 0x7f5427563b90 in cv::dnn::dnn4_v20231225::shape(cv::Mat const&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xbe1b90) #10 0x7f54275ac678 in cv::dnn::dnn4_v20231225::ONNXImporter::parseConcat(cv::dnn::dnn4_v20231225::LayerParams&, opencv_onnx::NodeProto const&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xc2a678) #11 0x7f542757990c in cv::dnn::dnn4_v20231225::ONNXImporter::handleNode(opencv_onnx::NodeProto const&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xbf790c) #12 0x7f5427576a90 in cv::dnn::dnn4_v20231225::ONNXImporter::populateNet() (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xbf4a90) #13 
0x7f5427566342 in cv::dnn::dnn4_v20231225::ONNXImporter::ONNXImporter(cv::dnn::dnn4_v20231225::Net&, char const*) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xbe4342) #14 0x7f54275f425b in cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNet(char const*&&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xc7225b) #15 0x7f54275ebc0a in cv::dnn::dnn4_v20231225::Net cv::dnn::dnn4_v20231225::detail::readNetDiagnostic(char const*&&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xc69c0a) #16 0x7f54275e2fea in cv::dnn::dnn4_v20231225::readNetFromONNX(std::__cxx11::basic_string, std::allocator > const&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xc60fea) #17 0x7f5426f0bc4a in cv::dnn::dnn4_v20231225::readNet(std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&) (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0x589c4a) #18 0x55d29663ffb7 in opencv_test::(anonymous namespace)::testYOLO(std::__cxx11::basic_string, std::allocator > const&, std::vector > const&, std::vector > const&, std::vector, std::allocator > > const&, cv::dnn::dnn4_v20231225::Image2BlobParams, float, float, double, double, std::__cxx11::basic_string, std::allocator >) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x757fb7) #19 0x55d2966450fe in opencv_test::(anonymous namespace)::Test_ONNX_nets_YOLONas_Test::Body() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x75d0fe) #20 0x55d296644564 in opencv_test::(anonymous namespace)::Test_ONNX_nets_YOLONas_Test::TestBody() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x75c564) #21 0x55d29686d2f8 in void testing::internal::HandleSehExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) 
(/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x9852f8) #22 0x55d2968624c0 in void testing::internal::HandleExceptionsInMethodIfSupported(testing::Test*, void (testing::Test::*)(), char const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x97a4c0) #23 0x55d296824233 in testing::Test::Run() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x93c233) #24 0x55d296825620 in testing::TestInfo::Run() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x93d620) #25 0x55d29682617a in testing::TestCase::Run() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x93e17a) #26 0x55d29684142e in testing::internal::UnitTestImpl::RunAllTests() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x95942e) #27 0x55d29686f4eb in bool testing::internal::HandleSehExceptionsInMethodIfSupported(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x9874eb) #28 0x55d29686449b in bool testing::internal::HandleExceptionsInMethodIfSupported(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x97c49b) #29 0x55d29683e1ad in testing::UnitTest::Run() (/mnt/projects/Projects/OpenCV/opencv-build2/bin/opencv_test_dnn+0x9561ad) SUMMARY: AddressSanitizer: heap-buffer-overflow (/mnt/projects/Projects/OpenCV/opencv-build2/lib/libopencv_dnn.so.409+0xc2a6e2) in cv::dnn::dnn4_v20231225::ONNXImporter::parseConcat(cv::dnn::dnn4_v20231225::LayerParams&, opencv_onnx::NodeProto const&) Shadow bytes around the buggy address: 0x0c0480062cc0: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fa 0x0c0480062cd0: fa fa fd fa fa fa 00 fa fa fa fd fa fa fa 00 fa 0x0c0480062ce0: fa fa fd fa fa fa fd fa fa fa fd fd fa fa 00 04 0x0c0480062cf0: fa fa fd fa fa fa 00 fa fa fa fd fa fa fa fd fa 0x0c0480062d00: fa fa fd fd fa fa 
00 04 fa fa 00 04 fa fa 00 04 =>0x0c0480062d10: fa[fa]00 04 fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0480062d20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0480062d30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0480062d40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0480062d50: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c0480062d60: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==14910==ABORTING ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/einsum_layer.cpp\n**Change Type:** modified\n**Context:** PR #24812: Replace interactive batched Matrix Multiply.\n**Review Line:** 1315\n\n**Code Changes:**\n```diff\n int K = input1ShapeOverride[2];\n int N = input2ShapeOverride[2];\n \n- std::vector output;\n+ Mat reshapedInput1 = input1;\n+ Mat reshapedInput2 = input2;\n+\n+ Mat output;\n if (batches > 1)\n {\n- Mat reshapedInput1 = input1;\n```", + "output": "@fengyuentau is there anything to change more? Could you please approve if not ?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core.hpp\n**Change Type:** modified\n**Context:** PR #23965: core: add broadcast\n\n**Code Changes:**\n```diff\n@@ -1118,6 +1118,13 @@ CV_EXPORTS_W void flip(InputArray src, OutputArray dst, int flipCode);\n */\n CV_EXPORTS_W void flipND(InputArray src, OutputArray dst, int axis);\n \n+/** @brief Broadcast the given Mat to the given shape.\n+ * @param src input array\n+ * @param shape target shape. Should be a list of CV_32S numbers. Note that negative values are not supported.\n+ * @param dst output array that has the given shape\n+ */\n+CV_EXPORTS_W void broadcast(InputArray src, InputArray shape, OutputArray dst);\n```", + "output": "@vpisarev Should it be `Mat` class method?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core.hpp\n**Change Type:** modified\n**Context:** PR #23965: core: add broadcast\n\n**Code Changes:**\n```diff\n@@ -1118,6 +1118,13 @@ CV_EXPORTS_W void flip(InputArray src, OutputArray dst, int flipCode);\n */\n CV_EXPORTS_W void flipND(InputArray src, OutputArray dst, int axis);\n \n+/** @brief Broadcast the given Mat to the given shape.\n+ * @param src input array\n+ * @param shape target shape. Should be a list of CV_32S numbers. Note that negative values are not supported.\n+ * @param dst output array that has the given shape\n+ */\n+CV_EXPORTS_W void broadcast(InputArray src, InputArray shape, OutputArray dst);\n```", + "output": "Decided to use `InputArray` in a discussion with @vpisarev ." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/src/qrcode_encoder.cpp\n**Change Type:** modified\n**Context:** PR #24299: In-house QR codes decoding\n\n**Code Changes:**\n```diff\n@@ -6,6 +6,8 @@\n \n #include \"precomp.hpp\"\n #include \"qrcode_encoder_table.inl.hpp\"\n+#include \"graphical_code_detector_impl.hpp\"\n+\n namespace cv\n {\n using std::vector;\n@@ -19,6 +21,7 @@ const uint8_t INVALID_REGION_VALUE = 110;\n```", + "output": "Removed this weak logic, thanks! To indicate failed error correction `blocks[i]` now cleared." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_backends.cpp\n**Change Type:** modified\n**Context:** PR #24283: dnn: merge tests from test_halide_layers to test_backends\n**Review Line:** 1035\n\n**Code Changes:**\n```diff\n+ l1 = 0.015;\n+\n+ testLayer(lp, input, backendId, targetId, false, l1, lInf);\n+}\n+\n+INSTANTIATE_TEST_CASE_P(Layer_Test_Backends, FullyConnected, testing::Combine(\n+/*batch*/ testing::Values(1, 2, 4, 8, 16),\n+/*in channels*/ testing::Values(3, 4),\n+/*in size*/ testing::Values(Size(5, 4), Size(4, 5), Size(1, 1)),\n+/*out channels*/ testing::Values(3, 4),\n+/*has bias*/ testing::Bool(),\n```", + "output": "@fengyuentau, please copy from 4.x manually: https://github.com/opencv/opencv/blob/4.x/modules/dnn/test/test_halide_layers.cpp. There are still missed updates about batch: https://github.com/opencv/opencv/blob/9942757bab881811816478c86d56924cf35c0d22/modules/dnn/test/test_halide_layers.cpp#L480" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_operations.cpp\n**Change Type:** modified\n**Context:** PR #24271: fix to convert float32 to int32/uint32 with rounding to nearest (ties to even).\n\n**Code Changes:**\n```diff\n@@ -1561,4 +1561,54 @@ TEST(Core_Arithm, scalar_handling_19599) // https://github.com/opencv/opencv/is\n EXPECT_EQ(1, c.rows);\n }\n \n+// https://github.com/opencv/opencv/issues/24163\n+typedef tuple Arith_Regression24163Param;\n+typedef testing::TestWithParam Core_Arith_Regression24163;\n+\n+#if defined __riscv\n+TEST_P(Core_Arith_Regression24163, DISABLED_test_for_ties_to_even)\n```", + "output": "Thank you for your review. I updated this test including 3x3, 3x4, 3x5, 3x6 cases. - 3x3 - 9 elements - float32_4 x 2 + float32 - 3x4 - 12 elements - float32_4 x 3 - 3x5 - 15 elements - float32_4 x 3 + float32_2 x 1 + float32 x 1 - 3x6 - 18 elements - float32_4 x 4 + float32_2 x 1 I think these tests cover the above tests. - 3x1 - 3 elements - float32_2 x 1 + float32 x 1 - 3x2 - 6 elements - float32_4 x 1 + float32x2 x 1 And CV_8S, CV_16U, CV_16S and CV_32S are also covered by the test." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** 3rdparty/carotene/src/convert_scale.cpp\n**Change Type:** modified\n**Context:** PR #24271: fix to convert float32 to int32/uint32 with rounding to nearest (ties to even).\n**Review Line:** 1252\n\n**Code Changes:**\n```diff\n int16x4_t vRes2 = vqmovn_s32(vline2_s32);\n uint8x8_t vRes = vqmovun_s16(vcombine_s16(vRes1, vRes2));\n@@ -1249,7 +1250,7 @@ CVTS_FUNC(u16, s8, 16,\n #else\n CVTS_FUNC(u16, s8, 16,\n float32x4_t vscale = vdupq_n_f32((f32)alpha);\n- float32x4_t vshift = vdupq_n_f32((f32)beta + 0.5f);,\n+ float32x4_t vshift = vdupq_n_f32((f32)beta);,\n {\n for (size_t i = 0; i < w; i += 8)\n {\n```", + "output": "Thank you for your review, and I think this comma is needed to use the `CVTS_FUNC` macro. If the comma is removed, the build fails with the following message. >/home/kmtr/work/opencv/3rdparty/carotene/src/convert_scale.cpp:1274:2: error: macro \"CVTS_FUNC\" requires 5 arguments, but only 4 given 1274 | }) The `CVTS_FUNC()` macro requires 5 arguments, https://github.com/opencv/opencv/blob/f5d82458014459e25830984c9ad9cbe182c4653c/3rdparty/carotene/src/convert_scale.cpp#L46-L50 In line 1251, the `CVTS_FUNC()` macro is used with the following arguments. - (T1) `u16` - (T2) `s8` - (SIMD_SIZE) `16` - (CVTINIT) `float32x4_t vscale = ... ; float32x4_t vshift = ... ;` - (CVTROW) `{ for(size_t i = 0; i < w; i+= 8) ... }` L1251 will be expanded/pre-processed into the following code. 
```cpp #1251 \"/home/kmtr/work/opencv/3rdparty/carotene/src/convert_scale.cpp\" void convertScale(const Size2D & _size, const u16 * srcBase, ptrdiff_t srcStride, s8 * dstBase, ptrdiff_t dstStride, f64 alpha, f64 beta) { internal::assertSupportedConfiguration(); Size2D size(_size); if (srcStride == dstStride && srcStride == (ptrdiff_t) (size.width)) { size.width *= size.height; size.height = 1; } const ptrdiff_t sstep = srcStride / sizeof(u16); const ptrdiff_t dstep = dstStride / sizeof(s8); const size_t w = size.width & ~(16 - 1); if (size.width >= 16) { const u16 *_src = srcBase; s8 *_dst = dstBase; float32x4_t vscale = vdupq_n_f32((f32) alpha); float32x4_t vshift = vdupq_n_f32((f32) beta); for (ptrdiff_t h = size.height; h--; _src += sstep, _dst += dstep) { for (size_t i = 0; i < w; i += 8) { internal::prefetch(_src + i); uint16x8_t vline = vld1q_u16(_src + i); uint32x4_t vline1_u32 = vmovl_u16(vget_low_u16(vline)); uint32x4_t vline2_u32 = vmovl_u16(vget_high_u16(vline)); float32x4_t vline1_f32 = vcvtq_f32_u32(vline1_u32); float32x4_t vline2_f32 = vcvtq_f32_u32(vline2_u32); vline1_f32 = vmulq_f32(vline1_f32, vscale); vline2_f32 = vmulq_f32(vline2_f32, vscale); vline1_f32 = vaddq_f32(vline1_f32, vshift); vline2_f32 = vaddq_f32(vline2_f32, vshift); int32x4_t vline1_s32 = internal::vroundq_s32_f32(vline1_f32); int32x4_t vline2_s32 = internal::vroundq_s32_f32(vline2_f32); int16x4_t vRes1 = vqmovn_s32(vline1_s32); int16x4_t vRes2 = vqmovn_s32(vline2_s32); int8x8_t vRes = vqmovn_s16(vcombine_s16(vRes1, vRes2)); vst1_s8(_dst + i, vRes); }}} if (w < size.width) { const u16 *_src = srcBase; s8 *_dst = dstBase; for (ptrdiff_t h = size.height; h--; _src += sstep, _dst += dstep) for (size_t i = w; i < size.width; i++) _dst[i] = internal::saturate_cast < s8 > (_src[i] * alpha + beta); } ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_neon.hpp\n**Change Type:** modified\n**Context:** PR #24271: fix to convert float32 to int32/uint32 with rounding to nearest (ties to even).\n\n**Code Changes:**\n```diff\n@@ -1969,11 +1969,9 @@ inline v_int32x4 v_round(const v_float32x4& a)\n #else\n inline v_int32x4 v_round(const v_float32x4& a)\n {\n- static const int32x4_t v_sign = vdupq_n_s32(1 << 31),\n- v_05 = vreinterpretq_s32_f32(vdupq_n_f32(0.5f));\n-\n- int32x4_t v_addition = vorrq_s32(v_05, vandq_s32(v_sign, vreinterpretq_s32_f32(a.val)));\n- return v_int32x4(vcvtq_s32_f32(vaddq_f32(a.val, vreinterpretq_f32_s32(v_addition))));\n+ // See https://github.com/opencv/opencv/pull/24271#issuecomment-1867318007\n```", + "output": "@Kumataro, can you please fix HAL implementation of v_round as well?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_neon.hpp\n**Change Type:** modified\n**Context:** PR #24271: fix to convert float32 to int32/uint32 with rounding to nearest (ties to even).\n\n**Code Changes:**\n```diff\n@@ -1969,11 +1969,9 @@ inline v_int32x4 v_round(const v_float32x4& a)\n #else\n inline v_int32x4 v_round(const v_float32x4& a)\n {\n- static const int32x4_t v_sign = vdupq_n_s32(1 << 31),\n- v_05 = vreinterpretq_s32_f32(vdupq_n_f32(0.5f));\n-\n- int32x4_t v_addition = vorrq_s32(v_05, vandq_s32(v_sign, vreinterpretq_s32_f32(a.val)));\n- return v_int32x4(vcvtq_s32_f32(vaddq_f32(a.val, vreinterpretq_f32_s32(v_addition))));\n+ // See https://github.com/opencv/opencv/pull/24271#issuecomment-1867318007\n```", + "output": "@vpisarev OK, I fixed it and committed. However, I ran `opencv_test_core`, there are 1 failed. 
I believe it doesn't use `v_round()`. I think this is a potential problem. ```plaintext [----------] Global test environment tear-down [ SKIPSTAT ] 86 tests skipped [ SKIPSTAT ] TAG='mem_2gb' skip 1 tests [ SKIPSTAT ] TAG='skip_other' skip 85 tests [==========] 11949 tests from 264 test cases ran. (351138 ms total) [ PASSED ] 11948 tests. [ FAILED ] 1 test, listed below: [ FAILED ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/1, where GetParam() = (5, 320x240) ``` ``` Note: Google Test filter = Core/HasNonZeroLimitValues* [==========] Running 8 tests from 1 test case. [----------] Global test environment set-up. [----------] 8 tests from Core/HasNonZeroLimitValues [ RUN ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/0, where GetParam() = (5, 1x1) [ OK ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/0 (0 ms) [ RUN ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/1, where GetParam() = (5, 320x240) /home/kmtr/work/opencv/modules/core/test/test_hasnonzero.cpp:109: Failure Value of: hasNonZero(m) Actual: false Expected: true [ FAILED ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/1, where GetParam() = (5, 320x240) (1 ms) [ RUN ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/2, where GetParam() = (5, 127x113) [ OK ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/2 (0 ms) [ RUN ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/3, where GetParam() = (5, 1x113) [ OK ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/3 (1 ms) [ RUN ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/4, where GetParam() = (6, 1x1) [ OK ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/4 (0 ms) [ RUN ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/5, where GetParam() = (6, 320x240) [ OK ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/5 (2 ms) [ RUN ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/6, where GetParam() = (6, 127x113) [ OK ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/6 (0 ms) [ RUN ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/7, where 
GetParam() = (6, 1x113) [ OK ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/7 (0 ms) [----------] 8 tests from Core/HasNonZeroLimitValues (4 ms total) [----------] Global test environment tear-down [==========] 8 tests from 1 test case ran. (5 ms total) [ PASSED ] 7 tests. [ FAILED ] 1 test, listed below: [ FAILED ] Core/HasNonZeroLimitValues.hasNonZeroLimitValues/1, where GetParam() = (5, 320x240) 1 FAILED TEST ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** 3rdparty/carotene/src/vround_helper.hpp\n**Change Type:** added\n**Context:** PR #24271: fix to convert float32 to int32/uint32 with rounding to nearest (ties to even).\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,102 @@\n+/*\n+ * By downloading, copying, installing or using the software you agree to this license.\n+ * If you do not agree to this license, do not download, install,\n+ * copy or use the software.\n+ *\n+ *\n+ * License Agreement\n+ * For Open Source Computer Vision Library\n+ * (3-clause BSD License)\n```", + "output": "OK, I agree with you, I fixed it, thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24476: dnn: add attention layer\n**Review Line:** 2954\n\n**Code Changes:**\n```diff\n testONNXModels(\"test_expand_shape_model4\", pb, 0, 0, false, true, 1);\n }\n \n+TEST_P(Test_ONNX_layers, Attention) {\n+ testONNXModels(\"attention\");\n+}\n+TEST_P(Test_ONNX_layers, AttentionSingleHead) {\n+ testONNXModels(\"attention_single_head\");\n+}\n+\n+TEST_P(Test_ONNX_nets, ViT_B_32) {\n```", + "output": "Yes, this is about the accuracy test only: ``` [ RUN ] Test_ONNX_layers.Attention/1, where GetParam() = OCV/CPU [ INFO:0@0.062] global onnx_importer.cpp:797 populateNet DNN/ONNX: loading ONNX v9 model produced by ''. Number of nodes = 34, initializers = 0, inputs = 1, outputs = 1 [ INFO:0@0.062] global onnx_importer.cpp:698 parseOperatorSet DNN/ONNX: ONNX opset version = 19 [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 1 inputs and 1 outputs: [Transpose]:(onnx_node!n0) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n1) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node!n2) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n3) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Add]:(onnx_node!n4) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n5) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node 
with 0 inputs and 1 outputs: [Constant]:(onnx_node!n6) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n7) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 4 inputs and 1 outputs: [Slice]:(onnx_node!n8) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n9) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Reshape]:(onnx_node!n10) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 1 inputs and 1 outputs: [Transpose]:(onnx_node!n11) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n12) from domain='ai.onnx' [ INFO:0@0.062] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Div]:(onnx_node!n13) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n14) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n15) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n16) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 4 inputs and 1 outputs: [Slice]:(onnx_node!n17) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n18) from domain='ai.onnx' [ INFO:0@0.063] 
global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Reshape]:(onnx_node!n19) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 1 inputs and 1 outputs: [Transpose]:(onnx_node!n20) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node!n21) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 1 inputs and 1 outputs: [Softmax]:(onnx_node!n22) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n23) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n24) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n25) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 4 inputs and 1 outputs: [Slice]:(onnx_node!n26) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n27) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Reshape]:(onnx_node!n28) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 1 inputs and 1 outputs: [Transpose]:(onnx_node!n29) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node!n30) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 1 inputs and 1 
outputs: [Transpose]:(onnx_node!n31) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 0 inputs and 1 outputs: [Constant]:(onnx_node!n32) from domain='ai.onnx' [ INFO:0@0.063] global onnx_importer.cpp:971 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Reshape]:(onnx_node!n33) from domain='ai.onnx' /home/dkurtaev/opencv/modules/dnn/test/test_common.impl.hpp:76: Failure Expected: (normL1) <= (l1), actual: 27.5616 vs 1e-05 attention |ref| = 18.29339599609375 /home/dkurtaev/opencv/modules/dnn/test/test_common.impl.hpp:79: Failure Expected: (normInf) <= (lInf), actual: 4557.7 vs 0.0001 attention |ref| = 18.29339599609375 [ FAILED ] Test_ONNX_layers.Attention/1, where GetParam() = OCV/CPU (24 ms) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24476: dnn: add attention layer\n**Review Line:** 2954\n\n**Code Changes:**\n```diff\n testONNXModels(\"test_expand_shape_model4\", pb, 0, 0, false, true, 1);\n }\n \n+TEST_P(Test_ONNX_layers, Attention) {\n+ testONNXModels(\"attention\");\n+}\n+TEST_P(Test_ONNX_layers, AttentionSingleHead) {\n+ testONNXModels(\"attention_single_head\");\n+}\n+\n+TEST_P(Test_ONNX_nets, ViT_B_32) {\n```", + "output": "Problem is confirmed. I did not set `axis` attribute in Softmax operator. ONNX's Softmax have `axis=-1` as default, while dnn's default to 1. I think it is time to make it align with ONNX's. Check the latest commit in https://github.com/opencv/opencv_extra/pull/1128. --- > I think it is time to make it align with ONNX's. @WanliZhong Could you make such changes?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_graph_simplifier.cpp\n**Change Type:** modified\n**Context:** PR #24476: dnn: add attention layer\n\n**Code Changes:**\n```diff\n@@ -13,6 +13,7 @@\n \n #include \n #include \n+#include \n \n namespace cv { namespace dnn {\n CV__DNN_INLINE_NS_BEGIN\n@@ -181,6 +182,17 @@ static Mat extractConstant(const Ptr& net, int node_id, int\n }\n```", + "output": "@fengyuentau, I mean something like that: ```cpp if (getParam_DNN_BACKEND_DEFAULT()== DNN_BACKEND_INFERENCE_ENGINE) { subgraphs.push_back(makePtr()); subgraphs.push_back(makePtr()); } ``` from https://github.com/opencv/opencv/blob/8b577ab98311cfed1ceda6972eaca443d82e2bce/modules/dnn/src/dnn_common.hpp#L38" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24694: dnn: refactor ONNX MatMul with fastGemm\n**Review Line:** 1964\n\n**Code Changes:**\n```diff\n- firstInpDims = blob.dims;\n- LayerParams constParams;\n- constParams.name = layerParams.name + \"/const_0\";\n- constParams.type = \"Const\";\n- constParams.blobs.push_back(blob);\n+ for (int i = 0; i < node_proto.input_size(); i++) {\n+ if (constBlobs.find(node_proto.input(i)) == constBlobs.end()) {\n+ continue;\n+ }\n+\n+ Mat blob = getBlob(node_proto, i);\n```", + "output": "Got it, thanks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #24539: Add blobrecttoimage\n\n**Code Changes:**\n```diff\n@@ -1222,6 +1222,20 @@ CV__DNN_INLINE_NS_BEGIN\n CV_PROP_RW DataLayout datalayout; //!< Order of output dimensions. Choose DNN_LAYOUT_NCHW or DNN_LAYOUT_NHWC.\n CV_PROP_RW ImagePaddingMode paddingmode; //!< Image padding mode. @see ImagePaddingMode.\n CV_PROP_RW Scalar borderValue; //!< Value used in padding mode for padding.\n+\n+ /** @brief Get rectangle coordinates in original image system from rectangle in blob coordinates.\n+ * @param rBlob rect in blob coordinates.\n+ * @param size original input image size.\n+ * @returns rectangle in original image coordinates.\n+ */\n```", + "output": "@LaurentBerger, what is your opinion? As all the methods heavily depend on `Image2BlobParams`, can we move them to `Image2BlobParams` as class methods?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #24539: Add blobrecttoimage\n\n**Code Changes:**\n```diff\n@@ -1222,6 +1222,20 @@ CV__DNN_INLINE_NS_BEGIN\n CV_PROP_RW DataLayout datalayout; //!< Order of output dimensions. Choose DNN_LAYOUT_NCHW or DNN_LAYOUT_NHWC.\n CV_PROP_RW ImagePaddingMode paddingmode; //!< Image padding mode. @see ImagePaddingMode.\n CV_PROP_RW Scalar borderValue; //!< Value used in padding mode for padding.\n+\n+ /** @brief Get rectangle coordinates in original image system from rectangle in blob coordinates.\n+ * @param rBlob rect in blob coordinates.\n+ * @param size original input image size.\n+ * @returns rectangle in original image coordinates.\n+ */\n```", + "output": "Inference outputs in dnn are in float. 
I translated Python code in C++ for [mediapipe models](https://github.com/opencv/opencv_zoo/tree/main/models/person_detection_mediapipe). All computation in Python is in double. When you have to check results (Python versus C++) it becomes a problem. Pose estimation mediapipe uses the output of person detection mediapipe. Maybe @fengyuentau can give advice" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #24539: Add blobrecttoimage\n\n**Code Changes:**\n```diff\n@@ -1222,6 +1222,20 @@ CV__DNN_INLINE_NS_BEGIN\n CV_PROP_RW DataLayout datalayout; //!< Order of output dimensions. Choose DNN_LAYOUT_NCHW or DNN_LAYOUT_NHWC.\n CV_PROP_RW ImagePaddingMode paddingmode; //!< Image padding mode. @see ImagePaddingMode.\n CV_PROP_RW Scalar borderValue; //!< Value used in padding mode for padding.\n+\n+ /** @brief Get rectangle coordinates in original image system from rectangle in blob coordinates.\n+ * @param rBlob rect in blob coordinates.\n+ * @param size original input image size.\n+ * @returns rectangle in original image coordinates.\n+ */\n```", + "output": "> @returns rectangle in original image coordinates. I mean Rect2d returned from the function. Most likely used for drawing or ROI extraction. Both expect integer Rect, not double." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #24539: Add blobrecttoimage\n\n**Code Changes:**\n```diff\n@@ -1222,6 +1222,20 @@ CV__DNN_INLINE_NS_BEGIN\n CV_PROP_RW DataLayout datalayout; //!< Order of output dimensions. 
Choose DNN_LAYOUT_NCHW or DNN_LAYOUT_NHWC.\n CV_PROP_RW ImagePaddingMode paddingmode; //!< Image padding mode. @see ImagePaddingMode.\n CV_PROP_RW Scalar borderValue; //!< Value used in padding mode for padding.\n+\n+ /** @brief Get rectangle coordinates in original image system from rectangle in blob coordinates.\n+ * @param rBlob rect in blob coordinates.\n+ * @param size original input image size.\n+ * @returns rectangle in original image coordinates.\n+ */\n```", + "output": "@LaurentBerger, just to verify that these methods have not been missed during the transition under `Image2BlobParams`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #24539: Add blobrecttoimage\n\n**Code Changes:**\n```diff\n@@ -1222,6 +1222,20 @@ CV__DNN_INLINE_NS_BEGIN\n CV_PROP_RW DataLayout datalayout; //!< Order of output dimensions. Choose DNN_LAYOUT_NCHW or DNN_LAYOUT_NHWC.\n CV_PROP_RW ImagePaddingMode paddingmode; //!< Image padding mode. @see ImagePaddingMode.\n CV_PROP_RW Scalar borderValue; //!< Value used in padding mode for padding.\n+\n+ /** @brief Get rectangle coordinates in original image system from rectangle in blob coordinates.\n+ * @param rBlob rect in blob coordinates.\n+ * @param size original input image size.\n+ * @returns rectangle in original image coordinates.\n+ */\n```", + "output": "@LaurentBerger, this is still actual. Do we want these two methods to be outside of the `Image2BlobParams`?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #24539: Add blobrecttoimage\n\n**Code Changes:**\n```diff\n@@ -1222,6 +1222,20 @@ CV__DNN_INLINE_NS_BEGIN\n CV_PROP_RW DataLayout datalayout; //!< Order of output dimensions. Choose DNN_LAYOUT_NCHW or DNN_LAYOUT_NHWC.\n CV_PROP_RW ImagePaddingMode paddingmode; //!< Image padding mode. @see ImagePaddingMode.\n CV_PROP_RW Scalar borderValue; //!< Value used in padding mode for padding.\n+\n+ /** @brief Get rectangle coordinates in original image system from rectangle in blob coordinates.\n+ * @param rBlob rect in blob coordinates.\n+ * @param size original input image size.\n+ * @returns rectangle in original image coordinates.\n+ */\n```", + "output": "@LaurentBerger , may we ask to remove Rect2d from this PR and if it's a critical feature, discuss separately? In my point of view, users mostly work on Rect (as a ROI or drawable bounding box)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/darknet/darknet_io.cpp\n**Change Type:** modified\n**Context:** PR #24384: Fixes #22747. Support [crop] configuration for DarkNet\n\n**Code Changes:**\n```diff\n@@ -309,6 +309,48 @@ namespace cv {\n fused_layer_names.push_back(last_layer);\n }\n \n+ void setCrop(int crop_height, int crop_width, int inp_height, int inp_width, bool noadjust)\n+ {\n+ cv::dnn::LayerParams crop_param;\n+ crop_param.name = \"CropLayer-name\";\n+ std::vector begin = {0, 0, (inp_height - crop_height) / 2, (inp_width - crop_width) / 2};\n+ std::vector sizes = {-1, -1, crop_height, crop_width};\n```", + "output": "Yes. thankyou @dkurt. I will make the changes asap." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/darknet/darknet_io.cpp\n**Change Type:** modified\n**Context:** PR #24384: Fixes #22747. Support [crop] configuration for DarkNet\n\n**Code Changes:**\n```diff\n@@ -309,6 +309,48 @@ namespace cv {\n fused_layer_names.push_back(last_layer);\n }\n \n+ void setCrop(int crop_height, int crop_width, int inp_height, int inp_width, bool noadjust)\n+ {\n+ cv::dnn::LayerParams crop_param;\n+ crop_param.name = \"CropLayer-name\";\n+ std::vector begin = {0, 0, (inp_height - crop_height) / 2, (inp_width - crop_width) / 2};\n+ std::vector sizes = {-1, -1, crop_height, crop_width};\n```", + "output": "@dkurt Yes sir will correct them." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/calibinit.cpp\n**Change Type:** modified\n**Context:** PR #24546: Check Checkerboard Corners\n\n**Code Changes:**\n```diff\n@@ -479,8 +479,7 @@ bool findChessboardCorners(InputArray image_, Size pattern_size,\n \n bool found = false;\n \n- const int min_dilations = 0;\n- const int max_dilations = 7;\n+ const bool is_plain = (flags & CALIB_CB_PLAIN) != 0;\n \n int type = image_.type(), depth = CV_MAT_DEPTH(type), cn = CV_MAT_CN(type);\n Mat img = image_.getMat();\n```", + "output": "hi @mshabunin sorry but I do not have windows. So I can't check on that platform. the last push I did should resolve the warning as you suggested" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVCompilerOptimizations.cmake\n**Change Type:** modified\n**Context:** PR #24642: build: merge multiple features specified at once\n\n**Code Changes:**\n```diff\n@@ -353,7 +353,7 @@ elseif(ARM OR AARCH64)\n ocv_update(CPU_NEON_DOTPROD_IMPLIES \"NEON\")\n ocv_update(CPU_NEON_FP16_FLAGS_ON \"-march=armv8.2-a+fp16\")\n ocv_update(CPU_NEON_FP16_IMPLIES \"NEON\")\n- ocv_update(CPU_NEON_BF16_FLAGS_ON \"-march=armv8.2-a+fp16+bf16\")\n+ ocv_update(CPU_NEON_BF16_FLAGS_ON \"-march=armv8.2-a+bf16\")\n ocv_update(CPU_NEON_BF16_IMPLIES \"NEON\")\n set(CPU_BASELINE \"NEON;FP16\" CACHE STRING \"${HELP_CPU_BASELINE}\")\n set(CPU_DISPATCH \"NEON_FP16;NEON_BF16;NEON_DOTPROD\" CACHE STRING \"${HELP_CPU_DISPATCH}\")\n@@ -480,6 +480,23 @@ macro(ocv_check_compiler_optimization OPT)\n```", + "output": "Thank you! Done." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/FindONNX.cmake\n**Change Type:** modified\n**Context:** PR #24601: Fix path to ONNX Runtime include folder\n\n**Code Changes:**\n```diff\n@@ -11,7 +11,9 @@ if(ONNXRT_ROOT_DIR)\n find_library(ORT_LIB onnxruntime\n ${ONNXRT_ROOT_DIR}/lib\n CMAKE_FIND_ROOT_PATH_BOTH)\n+ # The location of headers varies across different versions of ONNX Runtime\n find_path(ORT_INCLUDE onnxruntime_cxx_api.h\n+ ${ONNXRT_ROOT_DIR}/include/onnxruntime/\n ${ONNXRT_ROOT_DIR}/include/onnxruntime/core/session\n CMAKE_FIND_ROOT_PATH_BOTH)\n endif()\n```", + "output": "@asmorkalov Could you have a look at this, please? 
`ONNXRT` requires `C++14` to be compiled; to be certain, this part needs it: https://github.com/microsoft/onnxruntime/blob/v1.16.3/include/onnxruntime/core/session/onnxruntime_cxx_api.h#L171 (a `constexpr` constructor sets a non-constant value)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/FindONNX.cmake\n**Change Type:** modified\n**Context:** PR #24601: Fix path to ONNX Runtime include folder\n\n**Code Changes:**\n```diff\n@@ -11,7 +11,9 @@ if(ONNXRT_ROOT_DIR)\n find_library(ORT_LIB onnxruntime\n ${ONNXRT_ROOT_DIR}/lib\n CMAKE_FIND_ROOT_PATH_BOTH)\n+ # The location of headers varies across different versions of ONNX Runtime\n find_path(ORT_INCLUDE onnxruntime_cxx_api.h\n+ ${ONNXRT_ROOT_DIR}/include/onnxruntime/\n ${ONNXRT_ROOT_DIR}/include/onnxruntime/core/session\n CMAKE_FIND_ROOT_PATH_BOTH)\n endif()\n```", + "output": "@asmorkalov @opencv-alalek Could you advise me here? Perhaps I can just work around this limitation by using already existing cmake flags properly..." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/FindONNX.cmake\n**Change Type:** modified\n**Context:** PR #24601: Fix path to ONNX Runtime include folder\n\n**Code Changes:**\n```diff\n@@ -11,7 +11,9 @@ if(ONNXRT_ROOT_DIR)\n find_library(ORT_LIB onnxruntime\n ${ONNXRT_ROOT_DIR}/lib\n CMAKE_FIND_ROOT_PATH_BOTH)\n+ # The location of headers varies across different versions of ONNX Runtime\n find_path(ORT_INCLUDE onnxruntime_cxx_api.h\n+ ${ONNXRT_ROOT_DIR}/include/onnxruntime/\n ${ONNXRT_ROOT_DIR}/include/onnxruntime/core/session\n CMAKE_FIND_ROOT_PATH_BOTH)\n endif()\n```", + "output": "@TolyaTalamanov you are wrong: `set` != `ocv_update`. Just try to run `cmake -DCMAKE_CXX_STANDARD=17 ...` and check compiler flags. This is the CMake way to specify the C++ standard." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/FindONNX.cmake\n**Change Type:** modified\n**Context:** PR #24601: Fix path to ONNX Runtime include folder\n\n**Code Changes:**\n```diff\n@@ -11,7 +11,9 @@ if(ONNXRT_ROOT_DIR)\n find_library(ORT_LIB onnxruntime\n ${ONNXRT_ROOT_DIR}/lib\n CMAKE_FIND_ROOT_PATH_BOTH)\n+ # The location of headers varies across different versions of ONNX Runtime\n find_path(ORT_INCLUDE onnxruntime_cxx_api.h\n+ ${ONNXRT_ROOT_DIR}/include/onnxruntime/\n ${ONNXRT_ROOT_DIR}/include/onnxruntime/core/session\n CMAKE_FIND_ROOT_PATH_BOTH)\n endif()\n```", + "output": "@opencv-alalek Got it, I'll try, thank you! Removed this piece, let's move forward with merging it :)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/calibinit.cpp\n**Change Type:** modified\n**Context:** PR #24605: Speed up ChessBoardDetector::findQuadNeighbors\n**Review Line:** 1635\n\n**Code Changes:**\n```diff\n // find the closest corner in all other quadrangles\n- for (int k = 0; k < all_quads_count; k++)\n+ std::vector<float> query = Mat(pt);\n+ float radius = cur_quad.edge_len * thresh_scale + 1;\n+ const cvflann::SearchParams search_params(-1);\n+ int neighbors_count = all_quads_pts_index.radiusSearch(query, neighbors_indices, neighbors_dists, radius, search_params);\n+\n+ for (int neighbor_idx_idx = 0; neighbor_idx_idx < neighbors_count; neighbor_idx_idx++)\n {\n+ const int neighbor_idx = neighbors_indices[neighbor_idx_idx];\n+ const int k = neighbor_idx >> 2;\n```", + "output": "@MaximSmolskiy, is there a case for which it will be necessary to increase the search radius? Is this possible only with large distortion?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/calibinit.cpp\n**Change Type:** modified\n**Context:** PR #24605: Speed up ChessBoardDetector::findQuadNeighbors\n**Review Line:** 1635\n\n**Code Changes:**\n```diff\n // find the closest corner in all other quadrangles\n- for (int k = 0; k < all_quads_count; k++)\n+ std::vector<float> query = Mat(pt);\n+ float radius = cur_quad.edge_len * thresh_scale + 1;\n+ const cvflann::SearchParams search_params(-1);\n+ int neighbors_count = all_quads_pts_index.radiusSearch(query, neighbors_indices, neighbors_dists, radius, search_params);\n+\n+ for (int neighbor_idx_idx = 0; neighbor_idx_idx < neighbors_count; neighbor_idx_idx++)\n {\n+ const int neighbor_idx = neighbors_indices[neighbor_idx_idx];\n+ const int k = neighbor_idx >> 2;\n```", + "output": "@AleksandrPanov All these changes shouldn't change the original behavior. I took this radius because below we have ``` if (dist < min_dist && dist <= cur_quad.edge_len*thresh_scale && dist <= q_k.edge_len*thresh_scale ) ``` which means that we accept a distance only if it is less than `cur_quad.edge_len * thresh_scale`. So, there is no sense in searching neighbors with radius greater than `cur_quad.edge_len * thresh_scale` - because we will never accept them further. And I added `+ 1` as something like a quite big \"epsilon\" to prevent possible problems with corner cases (e.g., if we want to search neighbors with `radius <= R1` and the `flann` module expects that if we pass radius `R2` to it, then it searches neighbors with condition `radius < R2`, then we should pass as `R2` not exactly `R1`, but `R1 + eps`, where `eps > 0` - this is just a theoretical example, I don't know exactly in what area the `flann` module searches neighbors - `radius < R2` or `radius <= R2`)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24611: Add test for YoloX Yolo v6 and Yolo v8\n\n**Code Changes:**\n```diff\n@@ -12,6 +12,15 @@\n #include \n namespace opencv_test { namespace {\n \n+void yoloPostProcessing(\n+ std::vector<Mat>& outs,\n+ std::vector<int>& keep_classIds,\n+ std::vector<float>& keep_confidences,\n+ std::vector<Rect2d>& keep_boxes,\n+ float conf_threshold,\n```", + "output": "@Abdurrahheem, so your max absolute difference is not 1e-1 but 3e-2. 0.1 is too big a threshold and a regression may not be caught." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24611: Add test for YoloX Yolo v6 and Yolo v8\n\n**Code Changes:**\n```diff\n@@ -12,6 +12,15 @@\n #include \n namespace opencv_test { namespace {\n \n+void yoloPostProcessing(\n+ std::vector<Mat>& outs,\n+ std::vector<int>& keep_classIds,\n+ std::vector<float>& keep_confidences,\n+ std::vector<Rect2d>& keep_boxes,\n+ float conf_threshold,\n```", + "output": "> Looks like NMS works differently in GT and our implementation. What if we drop NMS and just sort boxes by confidence on both sides? @asmorkalov, `normAssertDetections` is flexible - it can match detections in any order. The problem of the thread is too big thresholds. GT is fine." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24611: Add test for YoloX Yolo v6 and Yolo v8\n\n**Code Changes:**\n```diff\n@@ -12,6 +12,15 @@\n #include \n namespace opencv_test { namespace {\n \n+void yoloPostProcessing(\n+ std::vector<Mat>& outs,\n+ std::vector<int>& keep_classIds,\n+ std::vector<float>& keep_confidences,\n+ std::vector<Rect2d>& keep_boxes,\n+ float conf_threshold,\n```", + "output": "@dkurt if I make the threshold tighter (1.0e-2) the test fails." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24611: Add test for YoloX Yolo v6 and Yolo v8\n\n**Code Changes:**\n```diff\n@@ -12,6 +12,15 @@\n #include \n namespace opencv_test { namespace {\n \n+void yoloPostProcessing(\n+ std::vector<Mat>& outs,\n+ std::vector<int>& keep_classIds,\n+ std::vector<float>& keep_confidences,\n+ std::vector<Rect2d>& keep_boxes,\n+ float conf_threshold,\n```", + "output": "Well done, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24611: Add test for YoloX Yolo v6 and Yolo v8\n\n**Code Changes:**\n```diff\n@@ -12,6 +12,15 @@\n #include \n namespace opencv_test { namespace {\n \n+void yoloPostProcessing(\n+ std::vector<Mat>& outs,\n+ std::vector<int>& keep_classIds,\n+ std::vector<float>& keep_confidences,\n+ std::vector<Rect2d>& keep_boxes,\n+ float conf_threshold,\n```", + "output": "@Abdurrahheem It is time to read the Wiki and follow BKMs from there: https://github.com/opencv/opencv/wiki/How_to_contribute" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24611: Add test for YoloX Yolo v6 and Yolo v8\n\n**Code Changes:**\n```diff\n@@ -12,6 +12,15 @@\n #include \n namespace opencv_test { namespace {\n \n+void yoloPostProcessing(\n+ std::vector<Mat>& outs,\n+ std::vector<int>& keep_classIds,\n+ std::vector<float>& keep_confidences,\n+ std::vector<Rect2d>& keep_boxes,\n+ float conf_threshold,\n```", + "output": "@opencv-alalek White spaces are NOT detected locally, nor do they break the build. Additionally, I do not have CI running locally. So it is quite easy to forget about white spaces. Sorry for the inconvenience." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "Thank you for your comment! I haven't tried cross-compiling that references ffmpeg yet. I want to test armv7 and aarch64 targets on both Ubuntu 23.04 and Ubuntu 23.10, so it will take some time, I'm sorry." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "Thank you for your comment. I agree with you! I replaced it with this comment." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "Even when compiling with different architectures such as x86-64, armv7, and aarch64, ccache certainly has no effect. (It is effective when compile conditions are not changed.) I will remove `ccache`. Thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "Thank you for your suggestion to enable FFmpeg libraries, in my try it worked well. I added the installation of libraries to support ffmpeg. ``` sudo apt install -y \\ libavcodec-dev:arm64 \\ libavformat-dev:arm64 \\ libavutil-dev:arm64 \\ libswscale-dev:arm64 \\ libfreetype-dev:arm64 \\ libharfbuzz-dev:arm64 ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "I will keep these CPU options information, thank you ! For aarch64 ``` -- CPU/HW features: -- Baseline: NEON FP16 -- required: NEON -- disabled: VFPV3 -- Dispatched code generation: NEON_DOTPROD NEON_FP16 NEON_BF16 -- requested: NEON_FP16 NEON_BF16 NEON_DOTPROD -- NEON_DOTPROD (1 files): + NEON_DOTPROD -- NEON_FP16 (2 files): + NEON_FP16 -- NEON_BF16 (0 files): + NEON_BF16 ``` For armv7 ``` -- CPU/HW features: -- Baseline: NEON -- requested: DETECT -- required: NEON -- disabled: VFPV3 ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "I will add `--depth=1` option, thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "I have removed the host compile section, thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n**Review Line:** 58\n\n**Code Changes:**\n```diff\n+ cmake \\\n+ pkgconf \\\n+ build-essential \\\n+ ninja-build \\\n+ crossbuild-essential-armhf \\\n+ crossbuild-essential-arm64\n+@endcode\n+\n+If you want to enable Python 3 wrapper, install these packages too.\n+\n+@code{.bash}\n```", + "output": "Thank you for your suggestion, I tried it, and it seems to work well, thank you! I'll update the document tomorrow. 
``` PYTHON3_REALPATH=`realpath /usr/bin/python3` PYTHON3_BASENAME=`basename ${PYTHON3_REALPATH}` PKG_CONFIG_PATH=/usr/lib/aarch64-linux-gnu/pkgconfig:/usr/share/pkgconfig \\ PKG_CONFIG_LIBDIR=/usr/lib/aarch64-linux-gnu \\ PKG_CONFIG_SYSROOT_DIR=/ \\ cmake -S opencv \\ -B build4-full_arm64_p \\ -DCMAKE_TOOLCHAIN_FILE=/home/kmtr/work/opencv/platforms/linux/aarch64-gnu.toolchain.cmake \\ -DOPENCV_EXTRA_MODULES_PATH=opencv_contrib/modules \\ -DPYTHON3_NUMPY_INCLUDE_DIRS=\"/usr/local/lib/${PYTHON3_BASENAME}/dist-packages/numpy/core/include/\" \\ -DPYTHON3_INCLUDE_PATH=\"/usr/include/${PYTHON3_BASENAME};/usr/include/\" \\ -DPYTHON3_LIBRARIES=`find /usr/lib/aarch64-linux-gnu/ -name libpython*.so` \\ -DPYTHON3_EXECUTABLE=\"/usr/bin/${PYTHON3_BASENAME}\" \\ -DPYTHON3_CVPY_SUFFIX=\".so\" \\ -GNinja ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "I was worried about how to distinguish between lines input by the user and lines output by the program. So I added a `$` sign to the beginning of each line. However, it becomes a problem when copying and pasting. These have been removed. `[Host]` and `[Target]` to indicate the execution environment also seem to get in the way... I think I'll remove this tomorrow too. Thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "I agree with you, I will add the note section tomorrow, thank you very much!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "I separated (a) installing external libraries, and (b) installing OpenCV which is cross-compiled. And I added a warning comment that library versions between compiling and running should be the same. Thank you!!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n**Review Line:** 58\n\n**Code Changes:**\n```diff\n+ cmake \\\n+ pkgconf \\\n+ build-essential \\\n+ ninja-build \\\n+ crossbuild-essential-armhf \\\n+ crossbuild-essential-arm64\n+@endcode\n+\n+If you want to enable Python 3 wrapper, install these packages too.\n+\n+@code{.bash}\n```", + "output": "Thank you for your comment, I succeeded in running the cross-compiled Python binding. I added it to the tutorial." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "Yes, I fixed them. In the recent document I think it is suitable to copy and paste! Thank you for your point!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "Yes, I split the bash script and its result, thank you for your suggestion. (It was also used for the `ldd` command to detect dependency section.)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "Thank you for your comment, I fixed it. And I added a comment to consider which apt package is needed, e.g. libswscale6 is used for Ubuntu 23.04, but libswscale7 is used for Ubuntu 23.10." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,595 @@\n+# MultiArch cross-compilation with Ubuntu/Debian{#tutorial_crosscompile_with_multiarch}\n+\n+@prev_tutorial{tutorial_arm_crosscompile_with_cmake}\n+@next_tutorial{tutorial_building_tegra_cuda}\n+\n+[TOC]\n+\n+| | |\n+| -: | :- |\n```", + "output": "Thank you, I replaced \"ubuntu/debian\" with \"Ubuntu/Debian\" everywhere." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n**Review Line:** 65\n\n**Code Changes:**\n```diff\n+\n+If you want to enable Python 3 wrapper, install these packages too.\n+\n+@code{.bash}\n+sudo apt install -y \\\n+ python3-dev \\\n+ python3-numpy\n+@endcode\n+\n+## Working folder structure\n+\n```", + "output": "Thank you for your comment, it is the same as \"Install OpenCV-Python in Ubuntu\". https://docs.opencv.org/4.x/d2/de6/tutorial_py_setup_in_ubuntu.html > to support python3: > sudo apt-get install python3-dev python3-numpy" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/introduction/crosscompilation/crosscompile_with_multiarch.markdown\n**Change Type:** added\n**Context:** PR #24629: doc: add crosscompile_with_multiarch\n**Review Line:** 65\n\n**Code Changes:**\n```diff\n+\n+If you want to enable Python 3 wrapper, install these packages too.\n+\n+@code{.bash}\n+sudo apt install -y \\\n+ python3-dev \\\n+ python3-numpy\n+@endcode\n+\n+## Working folder structure\n+\n```", + "output": "OK, I created new issue https://github.com/opencv/opencv/issues/24695 to commit fix. Thank you !!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #24224: Ported VAS OT into OpenCV G-API\n\n**Code Changes:**\n```diff\n@@ -88,6 +88,7 @@ set(gapi_srcs\n src/api/kernels_imgproc.cpp\n src/api/kernels_video.cpp\n src/api/kernels_nnparsers.cpp\n+ src/api/kernels_ot.cpp\n src/api/kernels_streaming.cpp\n src/api/kernels_stereo.cpp\n src/api/render.cpp\n@@ -130,6 +131,7 @@ set(gapi_srcs\n src/backends/cpu/gcpustereo.cpp\n```", + "output": "Thanks a lot, fixed!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #24224: Ported VAS OT into OpenCV G-API\n\n**Code Changes:**\n```diff\n@@ -88,6 +88,7 @@ set(gapi_srcs\n src/api/kernels_imgproc.cpp\n src/api/kernels_video.cpp\n src/api/kernels_nnparsers.cpp\n+ src/api/kernels_ot.cpp\n src/api/kernels_streaming.cpp\n src/api/kernels_stereo.cpp\n src/api/render.cpp\n@@ -130,6 +131,7 @@ set(gapi_srcs\n src/backends/cpu/gcpustereo.cpp\n```", + "output": "Thanks a lot, fixed!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #24224: Ported VAS OT into OpenCV G-API\n\n**Code Changes:**\n```diff\n@@ -88,6 +88,7 @@ set(gapi_srcs\n src/api/kernels_imgproc.cpp\n src/api/kernels_video.cpp\n src/api/kernels_nnparsers.cpp\n+ src/api/kernels_ot.cpp\n src/api/kernels_streaming.cpp\n src/api/kernels_stereo.cpp\n src/api/render.cpp\n@@ -130,6 +131,7 @@ set(gapi_srcs\n src/backends/cpu/gcpustereo.cpp\n```", + "output": "Thanks a lot, fixed!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVCompilerOptimizations.cmake\n**Change Type:** modified\n**Context:** PR #24420: added detection & dispatching of some modern NEON instructions (NEON_FP16, NEON_BF16)\n**Review Line:** 355\n\n**Code Changes:**\n```diff\n ocv_update(CPU_FP16_IMPLIES \"NEON\")\n ocv_update(CPU_NEON_DOTPROD_FLAGS_ON \"-march=armv8.2-a+dotprod\")\n ocv_update(CPU_NEON_DOTPROD_IMPLIES \"NEON\")\n+ ocv_update(CPU_NEON_FP16_FLAGS_ON \"-march=armv8.2-a+fp16\")\n+ ocv_update(CPU_NEON_FP16_IMPLIES \"NEON\")\n+ ocv_update(CPU_NEON_BF16_FLAGS_ON \"-march=armv8.2-a+fp16+bf16\")\n+ ocv_update(CPU_NEON_BF16_IMPLIES \"NEON\")\n set(CPU_BASELINE \"NEON;FP16\" CACHE STRING \"${HELP_CPU_BASELINE}\")\n+ set(CPU_DISPATCH \"NEON_FP16;NEON_BF16;NEON_DOTPROD\" CACHE STRING \"${HELP_CPU_DISPATCH}\")\n endif()\n elseif(MIPS)\n```", + "output": "Do we really need to combine `+bf16` and `+fp16`? @vpisarev I'm discussing this on #24588 and I'd like to hear your original thoughts about this line." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #21966: support use memory buffer as input to read multi-page image\n**Review Line:** 994\n\n**Code Changes:**\n```diff\n+ }\n+\n+ if (!decoder->setSource(buf_row))\n+ {\n+ filename = tempfile();\n+ FILE* f = fopen(filename.c_str(), \"wb\");\n+ if (!f)\n+ return 0;\n+ size_t bufSize = buf_row.total() * buf.elemSize();\n+ if (fwrite(buf_row.ptr(), 1, bufSize, f) != bufSize)\n+ {\n```", + "output": "@alalek I added file removal and squashed commits. The PR is ready to be merged." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24553: Add yolov5n to tests\n\n**Code Changes:**\n```diff\n@@ -2585,28 +2585,15 @@ TEST_P(Test_ONNX_layers, CumSum)\n testONNXModels(\"cumsum_3d_dim_2\");\n }\n \n-// This test is mainly to test:\n-// 1. identity node with constant input\n-// 2. limited support to range operator (all inputs are constant)\n-// 3. parseExpand with multiple broadcast axes\n-// 4. 1D mat dimension issue with the output of range operator\n-TEST_P(Test_ONNX_layers, YOLOv7)\n```", + "output": "@dkurt they are the same though?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24553: Add yolov5n to tests\n\n**Code Changes:**\n```diff\n@@ -2585,28 +2585,15 @@ TEST_P(Test_ONNX_layers, CumSum)\n testONNXModels(\"cumsum_3d_dim_2\");\n }\n \n-// This test is mainly to test:\n-// 1. identity node with constant input\n-// 2. limited support to range operator (all inputs are constant)\n-// 3. parseExpand with multiple broadcast axes\n-// 4. 1D mat dimension issue with the output of range operator\n-TEST_P(Test_ONNX_layers, YOLOv7)\n```", + "output": "@dkurt Are you suggesting to use weights of yolov5n for this test or to remove the test entirely?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24553: Add yolov5n to tests\n\n**Code Changes:**\n```diff\n@@ -2585,28 +2585,15 @@ TEST_P(Test_ONNX_layers, CumSum)\n testONNXModels(\"cumsum_3d_dim_2\");\n }\n \n-// This test is mainly to test:\n-// 1. identity node with constant input\n-// 2. limited support to range operator (all inputs are constant)\n-// 3. parseExpand with multiple broadcast axes\n-// 4. 1D mat dimension issue with the output of range operator\n-TEST_P(Test_ONNX_layers, YOLOv7)\n```", + "output": "@dkurt As you recommended, I checked yolov5n: 1. On my test image it works worse than yolov5m: the bicycle box is predicted incorrectly. 2. The OpenCV detection result on my test image differs from the ultralytics result: scores are lower. I attach the ultralytics detection visualization and my visualization of the same model (yolov5n.onnx) for the same image. In [ultralytics](https://github.com/ultralytics/yolov5) the authors also report better performance of yolov5m in comparison with yolov5n. Should I still add a test for yolov5n? Or should I append opencv_extra with a yolov5m link? ![ultralytics_result](https://github.com/opencv/opencv/assets/39569266/ed4c85bc-8c71-4dea-981d-e2556a085fdd) ![opencv_result](https://github.com/opencv/opencv/assets/39569266/e6cb8576-8d23-4695-bddf-5faadb7c91c8)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** .github/ISSUE_TEMPLATE/config.yml\n**Change Type:** added\n**Context:** PR #22560: Enable issue template chooser\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,5 @@\n+blank_issues_enabled: true\n+contact_links:\n+ - name: Questions\n+ url: https://forum.opencv.org/\n+ about: Ask questions and discuss with OpenCV community members\n```", + "output": "Thank you! It was a big mistake😂" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** .github/ISSUE_TEMPLATE/bug_report.yml\n**Change Type:** added\n**Context:** PR #22560: Enable issue template chooser\n**Review Line:** 61\n\n**Code Changes:**\n```diff\n+- type: checkboxes\n+ attributes:\n+ label: Issue submission checklist\n+ options:\n+ - label: I report the issue, it's not a question\n+ required: true\n+ - label: I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution\n+ - label: I updated to the latest OpenCV version and the issue is still there\n+ - label: There is reproducer code and related data files (videos, images, onnx, etc)\n```", + "output": "Thanks for the review! This is indeed a good suggestion. The last three checkboxes are not required anymore, but the first checkbox \"I report the issue, it's not a question\" is still required. I have reservations on the rephrasing since it makes people click the checkbox anyway, and this could confuse reviewers whether the reporter attaches reproducer or the statement is not applicable." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/checks/cpu_fp16.cpp\n**Change Type:** modified\n**Context:** PR #23654: DNN: add the Winograd fp16 support\n\n**Code Changes:**\n```diff\n@@ -15,12 +15,12 @@ int test()\n #include \"arm_neon.h\"\n int test()\n {\n- const float src[] = { 0.0f, 0.0f, 0.0f, 0.0f };\n- short dst[8];\n- float32x4_t v_src = *(float32x4_t*)src;\n+ const float src[] = { 0.0f, 1.0f, 2.0f, 3.0f };\n+ short dst[4];\n+ float32x4_t v_src = vld1q_f32(src);\n```", + "output": "@opencv-alalek, FP16 support without SIMD arithmetic support does not make any sense, since C/C++ is not quite able to handle this type properly. I suggest modifying the definition of CV_FP16 on ARM as suggested in this PR." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/onnx/dml_ep.cpp\n**Change Type:** modified\n**Context:** PR #24060: G-API: Advanced device selection for ONNX DirectML Execution Provider\n\n**Code Changes:**\n```diff\n@@ -13,13 +13,240 @@\n #ifdef HAVE_ONNX_DML\n #include \"../providers/dml/dml_provider_factory.h\"\n \n+#ifdef HAVE_DIRECTML\n+\n+#undef WINVER\n+#define WINVER 0x0A00\n+#undef _WIN32_WINNT\n+#define _WIN32_WINNT 0x0A00\n```", + "output": "@opencv-alalek Could you advise how to properly handle these dependencies? AFAIK none of them is currently supported/handled by OpenCV * `dxcore` * `directml` * `directx12`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #24060: G-API: Advanced device selection for ONNX DirectML Execution Provider\n\n**Code Changes:**\n```diff\n@@ -364,6 +364,10 @@ if(WIN32)\n ocv_target_link_libraries(${the_module} PRIVATE wsock32 ws2_32)\n endif()\n \n+if(HAVE_DIRECTML)\n+ ocv_target_compile_definitions(${the_module} PRIVATE HAVE_DIRECTML=1)\n+endif()\n+\n if(HAVE_ONNX)\n ocv_target_link_libraries(${the_module} PRIVATE ${ONNX_LIBRARY})\n```", + "output": "@opencv-alalek Could you advise how `dxcore`, `d3d12` and `directml` can be detected by OpenCV? Should we use a similar approach as there: https://github.com/opencv/opencv/blob/4.x/cmake/OpenCVDetectDirectX.cmake ?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/object_detection.py\n**Change Type:** modified\n**Context:** PR #24396: using cv2 dnn interface to run yolov8 model\n\n**Code Changes:**\n```diff\n@@ -2,6 +2,7 @@\n import argparse\n import numpy as np\n import sys\n+import copy\n import time\n from threading import Thread\n if sys.version_info[0] == 2:\n@@ -27,7 +28,7 @@\n help='For models from TensorFlow Object Detection API, you may '\n```", + "output": "Hi, we have removed resizing the frame and scaled the bounding boxes instead. Please check, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/object_detection.py\n**Change Type:** modified\n**Context:** PR #24396: using cv2 dnn interface to run yolov8 model\n\n**Code Changes:**\n```diff\n@@ -2,6 +2,7 @@\n import argparse\n import numpy as np\n import sys\n+import copy\n import time\n from threading import Thread\n if sys.version_info[0] == 2:\n@@ -27,7 +28,7 @@\n help='For models from TensorFlow Object Detection API, you may '\n```", + "output": "Hi , we have modified those, please check, thanks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/object_detection.py\n**Change Type:** modified\n**Context:** PR #24396: using cv2 dnn interface to run yolov8 model\n\n**Code Changes:**\n```diff\n@@ -2,6 +2,7 @@\n import argparse\n import numpy as np\n import sys\n+import copy\n import time\n from threading import Thread\n if sys.version_info[0] == 2:\n@@ -27,7 +28,7 @@\n help='For models from TensorFlow Object Detection API, you may '\n```", + "output": "We have changed the description as above, thanks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/models.yml\n**Change Type:** modified\n**Context:** PR #24396: using cv2 dnn interface to run yolov8 model\n\n**Code Changes:**\n```diff\n@@ -33,6 +33,7 @@ yolov4:\n height: 416\n rgb: true\n classes: \"object_detection_classes_yolo.txt\"\n+ background_label_id: 0\n sample: \"object_detection\"\n \n yolov4-tiny:\n@@ -47,6 +48,7 @@ yolov4-tiny:\n height: 416\n```", + "output": "We have updated url and classes in models.yml's yolov8 part, please check. Thank you!" 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/perf/perf_einsum.cpp\n**Change Type:** modified\n**Context:** PR #24509: Fast gemm for einsum\n\n**Code Changes:**\n```diff\n@@ -11,19 +11,16 @@ struct EinsumParams {\n int outputSize;\n std::string equation;\n std::vector einsumInpShapes;\n- EinsumParams(std::string equation_, int inputSize_, int outputSize_, std::vector einsumInpShapes_ = std::vector())\n+ EinsumParams(std::string equation_, std::vector einsumInpShapes_ = std::vector())\n {\n- inputSize = inputSize_;\n- outputSize = outputSize_;\n+ inputSize = einsumInpShapes_.size();\n```", + "output": "When these tests were large, @fengyuentau suggested removing the large ones. Which one should I follow?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/einsum_layer.cpp\n**Change Type:** modified\n**Context:** PR #24509: Fast gemm for einsum\n**Review Line:** 1\n\n**Code Changes:**\n```diff\n@@ -6,6 +6,7 @@\n #include \n #include \"../precomp.hpp\"\n #include \"layers_common.hpp\"\n+#include \"cpu_kernels/fast_gemm.hpp\"\n \n namespace cv\n {\n@@ -32,111 +33,6 @@ static bool IsTransposeReshapeForEinsum(const std::vector& perm,\n return true;\n }\n \n-static Mat batchwiseMatMul(\n- const Mat& input1,\n- const MatShape& input1ShapeOverride,\n```", + "output": "@Abdurrahheem, the perf file disappeared in this commit" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/perf/perf_layer.cpp\n**Change Type:** modified\n**Context:** PR #24466: Improve and refactor softmax layer - This PR improves softmax from [ficus nn](https://github.com/vpisarev/ficus/tree/master/lib/NN). Perfor...\n**Review Line:** 742\n\n**Code Changes:**\n```diff\n+ TEST_CYCLE() {\n+ Mat res = net.forward();\n+ }\n+\n+ SANITY_CHECK_NOTHING();\n+}\n+\n+INSTANTIATE_TEST_CASE_P(/**/, Layer_Softmax, Combine(\n+ Values( // input size\n+ std::vector({16, 50, 50}),\n+ std::vector({16, 197, 197}),\n```", + "output": "Thanks, I will do it in the next commit." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/softmax_layer.cpp\n**Change Type:** modified\n**Context:** PR #24466: Improve and refactor softmax layer - This PR improves softmax from [ficus nn](https://github.com/vpisarev/ficus/tree/master/lib/NN). Perfor...\n\n**Code Changes:**\n```diff\n@@ -52,6 +52,7 @@\n #include \n #include \n #include \n+#include \"cpu_kernels/softmax.hpp\"\n using std::max;\n \n #ifdef HAVE_OPENCL\n@@ -225,89 +226,15 @@ class SoftMaxLayerImpl CV_FINAL : public SoftmaxLayer\n std::vector inputs, outputs, internals;\n```", + "output": "Note that there is already a `finalize` definition at lines 124 to 127. Could they be merged into one? cc @dkurt" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/softmax_layer.cpp\n**Change Type:** modified\n**Context:** PR #24466: Improve and refactor softmax layer - This PR improves softmax from [ficus nn](https://github.com/vpisarev/ficus/tree/master/lib/NN). 
Perfor...\n\n**Code Changes:**\n```diff\n@@ -52,6 +52,7 @@\n #include \n #include \n #include \n+#include \"cpu_kernels/softmax.hpp\"\n using std::max;\n \n #ifdef HAVE_OPENCL\n@@ -225,89 +226,15 @@ class SoftMaxLayerImpl CV_FINAL : public SoftmaxLayer\n std::vector inputs, outputs, internals;\n```", + "output": "@WanliZhong, can you also please replace SoftMax from Region layer (YOLOs): https://github.com/opencv/opencv/blob/2e49bf311a24edadbe00ad88f299c8dcbf7aead1/modules/dnn/src/layers/region_layer.cpp#L135" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/models.yml\n**Change Type:** modified\n**Context:** PR #24397: Added PyTorch fcnresnet101 segmentation conversion cases\n\n**Code Changes:**\n```diff\n@@ -178,3 +178,15 @@ fcn8s:\n height: 500\n rgb: false\n sample: \"segmentation\"\n+\n+fcnresnet101:\n+ load_info:\n+ url: \"https://github.com/onnx/models/raw/fb8271d5d5d9b90dbb1eb5e8e40f8f580fb248b3/vision/object_detection_segmentation/fcn/model/fcn-resnet101-11.onnx\"\n+ sha1: \"e7e76474bf6b73334ab32c4be1374c9e605f5aed\"\n+ model: \"fcn-resnet101-11.onnx\"\n```", + "output": "Hello sir, I have applied your formula to my parameters; now the ONNX model can run with the sample code, thank you." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/segmentation.py\n**Change Type:** modified\n**Context:** PR #24397: Added PyTorch fcnresnet101 segmentation conversion cases\n\n**Code Changes:**\n```diff\n@@ -14,7 +14,7 @@\n parser.add_argument('--zoo', default=os.path.join(os.path.dirname(os.path.abspath(__file__)), 'models.yml'),\n help='An optional path to file with preprocessing parameters.')\n parser.add_argument('--input', help='Path to input image or video file. Skip this argument to capture frames from a camera.')\n-parser.add_argument('--framework', choices=['caffe', 'tensorflow', 'torch', 'darknet'],\n+parser.add_argument('--framework', choices=['caffe', 'tensorflow', 'torch', 'darknet', 'onnx'],\n help='Optional name of an origin framework of the model. '\n 'Detect it automatically if it does not set.')\n parser.add_argument('--colors', help='Optional path to a text file with colors for an every class. '\n```", + "output": "Hello sir, I have removed the whitespace, thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/models.yml\n**Change Type:** modified\n**Context:** PR #24397: Added PyTorch fcnresnet101 segmentation conversion cases\n\n**Code Changes:**\n```diff\n@@ -178,3 +178,15 @@ fcn8s:\n height: 500\n rgb: false\n sample: \"segmentation\"\n+\n+fcnresnet101:\n+ load_info:\n+ url: \"https://github.com/onnx/models/raw/fb8271d5d5d9b90dbb1eb5e8e40f8f580fb248b3/vision/object_detection_segmentation/fcn/model/fcn-resnet101-11.onnx\"\n+ sha1: \"e7e76474bf6b73334ab32c4be1374c9e605f5aed\"\n+ model: \"fcn-resnet101-11.onnx\"\n```", + "output": "Hello @dkurt , I think your concern is right, the background classes might have to be added. 
Should I add background class to https://github.com/opencv/opencv/blob/4.x/samples/data/dnn/object_detection_classes_pascal_voc.txt ? without background ![image](https://github.com/opencv/opencv/assets/89371302/17efac9a-eb57-421f-8d99-61b2678335a7) ![image](https://github.com/opencv/opencv/assets/89371302/9a503184-9ef8-4838-b6cd-dcdd937bd43a) with background ![image](https://github.com/opencv/opencv/assets/89371302/a3632579-0c32-4855-aa3a-929d9f3d5cd1) ![image](https://github.com/opencv/opencv/assets/89371302/caca0d81-533e-471a-87e3-6e43528ad3f1)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/models.yml\n**Change Type:** modified\n**Context:** PR #24397: Added PyTorch fcnresnet101 segmentation conversion cases\n\n**Code Changes:**\n```diff\n@@ -178,3 +178,15 @@ fcn8s:\n height: 500\n rgb: false\n sample: \"segmentation\"\n+\n+fcnresnet101:\n+ load_info:\n+ url: \"https://github.com/onnx/models/raw/fb8271d5d5d9b90dbb1eb5e8e40f8f580fb248b3/vision/object_detection_segmentation/fcn/model/fcn-resnet101-11.onnx\"\n+ sha1: \"e7e76474bf6b73334ab32c4be1374c9e605f5aed\"\n+ model: \"fcn-resnet101-11.onnx\"\n```", + "output": "Thanks, I will try to fix it" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/models.yml\n**Change Type:** modified\n**Context:** PR #24397: Added PyTorch fcnresnet101 segmentation conversion cases\n\n**Code Changes:**\n```diff\n@@ -178,3 +178,15 @@ fcn8s:\n height: 500\n rgb: false\n sample: \"segmentation\"\n+\n+fcnresnet101:\n+ load_info:\n+ url: \"https://github.com/onnx/models/raw/fb8271d5d5d9b90dbb1eb5e8e40f8f580fb248b3/vision/object_detection_segmentation/fcn/model/fcn-resnet101-11.onnx\"\n+ sha1: \"e7e76474bf6b73334ab32c4be1374c9e605f5aed\"\n+ model: \"fcn-resnet101-11.onnx\"\n```", + "output": "OK, I will replace url, thanks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/models.yml\n**Change Type:** modified\n**Context:** PR #24397: Added PyTorch fcnresnet101 segmentation conversion cases\n\n**Code Changes:**\n```diff\n@@ -178,3 +178,15 @@ fcn8s:\n height: 500\n rgb: false\n sample: \"segmentation\"\n+\n+fcnresnet101:\n+ load_info:\n+ url: \"https://github.com/onnx/models/raw/fb8271d5d5d9b90dbb1eb5e8e40f8f580fb248b3/vision/object_detection_segmentation/fcn/model/fcn-resnet101-11.onnx\"\n+ sha1: \"e7e76474bf6b73334ab32c4be1374c9e605f5aed\"\n+ model: \"fcn-resnet101-11.onnx\"\n```", + "output": "Thanks, problem should be solved in a common.py script which will affect other samples so I'd like to merge this PR as is." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/models.yml\n**Change Type:** modified\n**Context:** PR #24397: Added PyTorch fcnresnet101 segmentation conversion cases\n\n**Code Changes:**\n```diff\n@@ -178,3 +178,15 @@ fcn8s:\n height: 500\n rgb: false\n sample: \"segmentation\"\n+\n+fcnresnet101:\n+ load_info:\n+ url: \"https://github.com/onnx/models/raw/fb8271d5d5d9b90dbb1eb5e8e40f8f580fb248b3/vision/object_detection_segmentation/fcn/model/fcn-resnet101-11.onnx\"\n+ sha1: \"e7e76474bf6b73334ab32c4be1374c9e605f5aed\"\n+ model: \"fcn-resnet101-11.onnx\"\n```", + "output": "I got it, thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/mathfuncs.cpp\n**Change Type:** modified\n**Context:** PR #23098: finiteMask() and doubles for patchNaNs()\n\n**Code Changes:**\n```diff\n@@ -1574,75 +1574,7 @@ bool checkRange(InputArray _src, bool quiet, Point* pt, double minVal, double ma\n return true;\n }\n \n-#ifdef HAVE_OPENCL\n-\n-static bool ocl_patchNaNs( InputOutputArray _a, float value )\n-{\n- int rowsPerWI = ocl::Device::getDefault().isIntel() ? 4 : 1;\n- ocl::Kernel k(\"KF\", ocl::core::arithm_oclsrc,\n```", + "output": "Currently comparison of 64-bit integer SIMD is **declared as non-supported**: https://github.com/opencv/opencv/blame/4.7.0/modules/core/include/opencv2/core/hal/intrin_cpp.hpp#L885 ``` For all types except 64-bit integer values. ``` No idea why NEON hijacks that and provides some implementation (only for `v_uint64x2`, but not for signed `v_int64x2`). Probably added by mistake here: #7175 (patch should target 64F only). 
Also there is a contributed test for eq/ne 64-bit here: #15738 (with discussion of misused macro) --- Perhaps we need to allow and implement this support for eq/ne (`==`/`!=`) comparisons at least for all SIMD backends. /cc @mshabunin @vpisarev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/nan_mask.simd.hpp\n**Change Type:** added\n**Context:** PR #23098: finiteMask() and doubles for patchNaNs()\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,440 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+\n+#include \"precomp.hpp\"\n+\n+namespace cv {\n+\n```", + "output": "> `vx_cleanup();` Should be used at the end of externally exposed functions only. We see here internal template implementation code. --- @vpisarev Direct usage of `v*_cleanup()` is an anti-pattern. `CV_INSTRUMENT_REGION()` / `CV_AVX_GUARD` / `VZeroUpperGuard` should be used instead in externally exposed functions. Currently we see: - useless use: no reason to put that in the random middle of the code - call is missing where it is really required." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/nan_mask.simd.hpp\n**Change Type:** added\n**Context:** PR #23098: finiteMask() and doubles for patchNaNs()\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,440 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+\n+#include \"precomp.hpp\"\n+\n+namespace cv {\n+\n```", + "output": "> // v_select is not available for v_int64, emulating it reinterpret + vselect should work faster than provided emulation. BTW, it makes sense to provide such implementation in a single place (HAL) /cc @vpisarev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/nan_mask.simd.hpp\n**Change Type:** added\n**Context:** PR #23098: finiteMask() and doubles for patchNaNs()\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,440 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+\n+#include \"precomp.hpp\"\n+\n+namespace cv {\n+\n```", + "output": "It really gives +10%...+30% more to performance, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/perf/perf_net.cpp\n**Change Type:** modified\n**Context:** PR #24298: Extend performance test models - **Merged With https://github.com/opencv/opencv_extra/pull/1095** This PR aims to extend the performance te...\n\n**Code Changes:**\n```diff\n@@ -29,10 +29,7 @@ class DNNTestNetwork : public ::perf::TestBaseWithParam< tuple\n }\n \n void processNet(std::string weights, std::string proto, std::string halide_scheduler,\n- const Mat& input, const std::string& outputLayer = \"\")\n- {\n- randu(input, 0.0f, 1.0f);\n-\n+ const std::vector>& inputs, const std::string& outputLayer = \"\"){\n weights = findDataFile(weights, false);\n```", + "output": "Thanks! I will modify it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #24425: dnn: fix HAVE_TIMVX macro definition in dnn test\n\n**Code Changes:**\n```diff\n@@ -295,6 +295,13 @@ if(TARGET ocv.3rdparty.cann AND OPENCV_TEST_DNN_CANN)\n endif()\n endif()\n \n+ocv_option(OPENCV_TEST_DNN_TIMVX \"Build test with TIM-VX\" (HAVE_TIMVX))\n+if(OPENCV_TEST_DNN_TIMVX)\n+ if(TARGET opencv_test_dnn)\n+ ocv_target_compile_definitions(opencv_test_dnn PRIVATE \"HAVE_TIMVX=1\")\n+ endif()\n+endif()\n```", + "output": "Thank you for finding this! Meanwhile please hold merging this PR until the hardware is in the CI running the tests." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/src/face_detect.cpp\n**Change Type:** modified\n**Context:** PR #23894: Use T-API for critical parts of blobFromImagesWithParams\n**Review Line:** 113\n\n**Code Changes:**\n```diff\n+ // Pad input_image with divisor 32\n+ UMat pad_image;\n+ padWithDivisor(input_image, pad_image);\n+ // Build blob from input image\n+ input_blob = dnn::blobFromImage(pad_image);\n+ } else {\n+ // Pad input_image with divisor 32\n+ Mat pad_image;\n+ padWithDivisor(input_image, pad_image);\n+ // Build blob from input image\n+ input_blob = dnn::blobFromImage(pad_image);\n```", + "output": "@vpisarev T-API declares that we should not have such code separation on the \"user\" side." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/all_layers.hpp\n**Change Type:** modified\n**Context:** PR #24092: GSoC Add ONNX Support for GatherElements\n**Review Line:** 361\n\n**Code Changes:**\n```diff\n+ class CV_EXPORTS GatherElementsLayer : public Layer\n+ {\n+ public:\n+ static Ptr create(const LayerParams& params);\n+ };\n+\n class CV_EXPORTS PoolingLayer : public Layer\n {\n public:\n```", + "output": "Thanks for the contribution! Please add a brief description about the difference between `Gather` and `GatherElements`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24092: GSoC Add ONNX Support for GatherElements\n\n**Code Changes:**\n```diff\n@@ -9,6 +9,7 @@\n #include \"test_precomp.hpp\"\n #include \"npy_blob.hpp\"\n #include \n+#include \n namespace opencv_test { namespace {\n \n template\n@@ -2129,6 +2130,34 @@ TEST_P(Test_ONNX_nets, Alexnet)\n expectNoFallbacksFromIE(net);\n```", + "output": "@fengyuentau, can we reduce input resolution to the minimum needed to cover required checks for a new layer? `240x180` or lower." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24092: GSoC Add ONNX Support for GatherElements\n\n**Code Changes:**\n```diff\n@@ -9,6 +9,7 @@\n #include \"test_precomp.hpp\"\n #include \"npy_blob.hpp\"\n #include \n+#include \n namespace opencv_test { namespace {\n \n template\n@@ -2129,6 +2130,34 @@ TEST_P(Test_ONNX_nets, Alexnet)\n expectNoFallbacksFromIE(net);\n```", + "output": "@fengyuentau, thanks for the investigation. If the biggest output is computed from the smallest one, it's a good idea to check the smallest only as it will cover most critical layers." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24092: GSoC Add ONNX Support for GatherElements\n\n**Code Changes:**\n```diff\n@@ -9,6 +9,7 @@\n #include \"test_precomp.hpp\"\n #include \"npy_blob.hpp\"\n #include \n+#include \n namespace opencv_test { namespace {\n \n template\n@@ -2129,6 +2130,34 @@ TEST_P(Test_ONNX_nets, Alexnet)\n expectNoFallbacksFromIE(net);\n```", + "output": "Thank you for the note. Already tuned the threshold." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24092: GSoC Add ONNX Support for GatherElements\n\n**Code Changes:**\n```diff\n@@ -9,6 +9,7 @@\n #include \"test_precomp.hpp\"\n #include \"npy_blob.hpp\"\n #include \n+#include \n namespace opencv_test { namespace {\n \n template\n@@ -2129,6 +2130,34 @@ TEST_P(Test_ONNX_nets, Alexnet)\n expectNoFallbacksFromIE(net);\n```", + "output": "> I suggest to add a tag to this test `applyTestTag(CV_TEST_TAG_LONG, CV_TEST_TAG_DEBUG_VERYLONG, CV_TEST_TAG_MEMORY_2GB);` to skip as long. PR is ready and might be merged. Thank you for the tips. I almost forgot these flags!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/src/qrcode.cpp\n**Change Type:** modified\n**Context:** PR #24350: Encode QR code data to UTF-8\n\n**Code Changes:**\n```diff\n@@ -2727,6 +2727,58 @@ bool QRDecode::samplingForVersion()\n return true;\n }\n \n+\n+static bool checkASCIIcompatible(const uint8_t* str, const size_t size) {\n+ for (size_t i = 0; i < size; ++i) {\n+ uint8_t byte = str[i];\n+ if (byte >= 0x80)\n+ return false;\n```", + "output": "@opencv-alalek, perhaps I misunderstood the question. Do you mean can we apply encoding right in the loop, without the checkUTF8 method?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/checks/cpu_lsx.cpp\n**Change Type:** added\n**Context:** PR #23929: Optimize some function with lasx.\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,15 @@\n+#include \n+#include \n+\n+int test()\n+{\n+ const float src[] = { 0.0f, 1.0f, 2.0f, 3.0f};\n+ v4f32 val = (v4f32)__lsx_vld((const float*)(src), 0);\n+ return __lsx_vpickve2gr_w(__lsx_vftint_w_s(val), 3);\n+}\n```", + "output": "Thanks for your review. I have removed it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVCompilerOptimizations.cmake\n**Change Type:** modified\n**Context:** PR #23929: Optimize some function with lasx.\n\n**Code Changes:**\n```diff\n@@ -53,6 +53,7 @@ list(APPEND CPU_ALL_OPTIMIZATIONS NEON VFPV3 FP16 NEON_DOTPROD)\n list(APPEND CPU_ALL_OPTIMIZATIONS MSA)\n list(APPEND CPU_ALL_OPTIMIZATIONS VSX VSX3)\n list(APPEND CPU_ALL_OPTIMIZATIONS RVV)\n+list(APPEND CPU_ALL_OPTIMIZATIONS LSX)\n list(APPEND CPU_ALL_OPTIMIZATIONS LASX)\n list(REMOVE_DUPLICATES CPU_ALL_OPTIMIZATIONS)\n \n@@ -390,10 +391,16 @@ elseif(RISCV)\n set(CPU_BASELINE \"DETECT\" CACHE STRING \"${HELP_CPU_BASELINE}\")\n```", + "output": "Thanks for your review. This is a spelling error and I have corrected it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVCompilerOptimizations.cmake\n**Change Type:** modified\n**Context:** PR #23929: Optimize some function with lasx.\n\n**Code Changes:**\n```diff\n@@ -53,6 +53,7 @@ list(APPEND CPU_ALL_OPTIMIZATIONS NEON VFPV3 FP16 NEON_DOTPROD)\n list(APPEND CPU_ALL_OPTIMIZATIONS MSA)\n list(APPEND CPU_ALL_OPTIMIZATIONS VSX VSX3)\n list(APPEND CPU_ALL_OPTIMIZATIONS RVV)\n+list(APPEND CPU_ALL_OPTIMIZATIONS LSX)\n list(APPEND CPU_ALL_OPTIMIZATIONS LASX)\n list(REMOVE_DUPLICATES CPU_ALL_OPTIMIZATIONS)\n \n@@ -390,10 +391,16 @@ elseif(RISCV)\n set(CPU_BASELINE \"DETECT\" CACHE STRING \"${HELP_CPU_BASELINE}\")\n```", + "output": "Thanks for your review. This is indeed a problem that will cause only lsx to be turned on and lasx to not be turned on. I will find a suitable way to handle it, and then modify it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_layers.cpp\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -764,11 +764,15 @@ TEST_F(Layer_RNN_Test, get_set_test)\n \n TEST_P(Test_Caffe_layers, Accum)\n {\n+#ifdef OPENCV_DNN_EXTERNAL_PROTOBUF\n+ throw SkipTestException(\"Requires patched protobuf\");\n+#else\n if (backend == DNN_BACKEND_OPENCV && target != DNN_TARGET_CPU)\n applyTestTag(CV_TEST_TAG_DNN_SKIP_OPENCL, CV_TEST_TAG_DNN_SKIP_OPENCL_FP16);\n \n```", + "output": "Thank you for review! I implemented a new patch to disable some layers, which is the same as the already implemented tests for `interp`. Is my understanding correct that the test for Interp should be modified in the same way? https://github.com/opencv/opencv/blob/590f150d5e032165e27d81294c9b7ac710b77f11/modules/dnn/test/test_layers.cpp#L1654-L1658" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindProtobuf.cmake\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -30,8 +30,14 @@ if(BUILD_PROTOBUF)\n set(Protobuf_LIBRARIES \"libprotobuf\")\n set(HAVE_PROTOBUF TRUE)\n else()\n+ # we still need this for command PROTOBUF_GENERATE_CPP.\n+ set(protobuf_MODULE_COMPATIBLE ON)\n+\n unset(Protobuf_VERSION CACHE)\n- find_package(Protobuf QUIET)\n+ find_package(Protobuf QUIET CONFIG)\n```", + "output": "When using protobuf v22 or later, the dnn module directly references the `abseil-cpp` library ( via *.pb.cc converted from *.pb with `protoc` ). Unfortunately, there is also a possibility that OpenCV users may use a combination of `protobuf` and `abseil-cpp` versions that are not expected. 
I think it is better to leave the abseil-cpp version information as a cmake log. It is helpful to investigate compile/runtime errors ```bash kmtr@kmtr-VMware-Virtual-Platform:~/work/build4-main_cxx17/modules/dnn/CMakeFiles/opencv_dnn.dir$ nm attr_value.pb.cc.o | c++filt | grep \"absl\" | head -10 U absl::lts_20230125::log_internal::LogMessage::OstreamView::stream() U absl::lts_20230125::log_internal::LogMessage::OstreamView::OstreamView(absl::lts_20230125::log_internal::LogMessage::LogMessageData&) U absl::lts_20230125::log_internal::LogMessage::OstreamView::~OstreamView() U absl::lts_20230125::log_internal::LogMessage::CopyToEncodedBuffer(std::basic_string_view >, absl::lts_20230125::log_internal::LogMessage::StringType) 0000000000000000 W absl::lts_20230125::log_internal::LogMessage& absl::lts_20230125::log_internal::LogMessage::operator<< <14>(char const (&) [14]) 0000000000000000 W absl::lts_20230125::log_internal::LogMessage& absl::lts_20230125::log_internal::LogMessage::operator<< <22>(char const (&) [22]) 0000000000000000 W absl::lts_20230125::log_internal::LogMessage& absl::lts_20230125::log_internal::LogMessage::operator<< <23>(char const (&) [23]) 0000000000000000 W absl::lts_20230125::log_internal::LogMessage& absl::lts_20230125::log_internal::LogMessage::operator<< <2>(char const (&) [2]) 0000000000000000 W absl::lts_20230125::log_internal::LogMessage& absl::lts_20230125::log_internal::LogMessage::operator<< <34>(char const (&) [34]) 0000000000000000 t absl::lts_20230125::log_internal::LogMessage& absl::lts_20230125::log_internal::LogMessage::operator<< <48>(char const (&) [48]) [clone .isra.0] ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindProtobuf.cmake\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -30,8 +30,14 @@ if(BUILD_PROTOBUF)\n set(Protobuf_LIBRARIES \"libprotobuf\")\n set(HAVE_PROTOBUF TRUE)\n else()\n+ # we still need this for command PROTOBUF_GENERATE_CPP.\n+ set(protobuf_MODULE_COMPATIBLE ON)\n+\n unset(Protobuf_VERSION CACHE)\n- find_package(Protobuf QUIET)\n+ find_package(Protobuf QUIET CONFIG)\n```", + "output": "I thought about it additionally. When a user builds a protobuf, it is fully expected that the appropriate abseil-cpp version will be applied in the git submodule. Therefore, I would like to delete the version log. thank you very much! e.g. https://github.com/protocolbuffers/protobuf/tree/main/third_party > abseil-cpp @ fb3621f Update to latest absl LTS patch 20230802.1 (#14145)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_layers.cpp\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -764,11 +764,15 @@ TEST_F(Layer_RNN_Test, get_set_test)\n \n TEST_P(Test_Caffe_layers, Accum)\n {\n+#ifdef OPENCV_DNN_EXTERNAL_PROTOBUF\n+ throw SkipTestException(\"Requires patched protobuf\");\n+#else\n if (backend == DNN_BACKEND_OPENCV && target != DNN_TARGET_CPU)\n applyTestTag(CV_TEST_TAG_DNN_SKIP_OPENCL, CV_TEST_TAG_DNN_SKIP_OPENCL_FP16);\n \n```", + "output": "I fixed it, thank you for your comment ! With external protobuf. 
``` [ OK ] Test_Caffe_layers.Conv_Elu/0 (4 ms) [ RUN ] Test_Caffe_layers.Accum/0, where GetParam() = OCV/CPU [ SKIP ] Requires patched protobuf [ OK ] Test_Caffe_layers.Accum/0 (0 ms) [ RUN ] Test_Caffe_layers.FlowWarp/0, where GetParam() = OCV/CPU [ OK ] Test_Caffe_layers.FlowWarp/0 (0 ms) [ RUN ] Test_Caffe_layers.ChannelNorm/0, where GetParam() = OCV/CPU [ OK ] Test_Caffe_layers.ChannelNorm/0 (3 ms) [ RUN ] Test_Caffe_layers.DataAugmentation/0, where GetParam() = OCV/CPU [ SKIP ] Requires patched protobuf [ OK ] Test_Caffe_layers.DataAugmentation/0 (0 ms) [ RUN ] Test_Caffe_layers.Resample/0, where GetParam() = OCV/CPU [ SKIP ] Requires patched protobuf [ OK ] Test_Caffe_layers.Resample/0 (0 ms) [ RUN ] Test_Caffe_layers.Correlation/0, where GetParam() = OCV/CPU [ SKIP ] Requires patched protobuf [ OK ] Test_Caffe_layers.Correlation/0 (0 ms) [ RUN ] Test_Caffe_layers.Convolution2Inputs/0, where GetParam() = OCV/CPU [ OK ] Test_Caffe_layers.Convolution2Inputs/0 (2 ms) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n**Review Line:** 247\n\n**Code Changes:**\n```diff\n \n+if(NOT BUILD_PROTOBUF)\n+ if(TARGET opencv_test_dnn)\n+ ocv_target_compile_definitions(opencv_test_dnn PRIVATE \"OPENCV_DNN_EXTERNAL_PROTOBUF=1\")\n+ endif()\n+endif()\n+\n set(perf_path \"${CMAKE_CURRENT_LIST_DIR}/perf\")\n file(GLOB_RECURSE perf_srcs \"${perf_path}/*.cpp\")\n file(GLOB_RECURSE perf_hdrs \"${perf_path}/*.hpp\" \"${perf_path}/*.h\")\n```", + "output": "Yes, `DISABLED_Interp` was always disabled. The `OPENCV_DNN_EXTERNAL_PROTOBUF` definition does not seem to be available for `opencv_test_dnn` in the current implementation. 
Old test result with external protobuf (not including this change) is following. If that definition is available, test for `Interp` is disabled. But the test is executed. So I think an additional fix is needed. Code https://github.com/opencv/opencv/blob/590f150d5e032165e27d81294c9b7ac710b77f11/modules/dnn/test/test_layers.cpp#L1654-L1658 Result ```plaintext [ OK ] Test_Caffe_layers.PriorBox_squares/0 (0 ms) [ RUN ] Test_Caffe_layers.Interp/0, where GetParam() = OCV/CPU E0000 00:00:1696641648.291055 46844 text_format.cc:433] Error parsing text-format opencv_caffe.NetParameter: 14:16: Message type \"opencv_caffe.LayerParameter\" has no field named \"interp_param\". unknown file: Failure C++ exception with description \"OpenCV(4.8.0-dev) /home/kmtr/work/opencv/modules/dnn/src/caffe/caffe_io.cpp:1162: error: (-2:Unspecified error) FAILED: ReadProtoFromTextFile(param_file, param). Failed to parse NetParameter file: /home/kmtr/work/opencv_extra/testdata/dnn/layers/layer_interp.prototxt in function 'ReadNetParamsFromTextFileOrDie' \" thrown in the test body. [ FAILED ] Test_Caffe_layers.Interp/0, where GetParam() = OCV/CPU (1 ms) [----------] 31 tests from Test_Caffe_layers (93 ms total) [----------] Global test environment tear-down [==========] 31 tests from 1 test case ran. (93 ms total) [ PASSED ] 26 tests. ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindProtobuf.cmake\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -30,8 +30,14 @@ if(BUILD_PROTOBUF)\n set(Protobuf_LIBRARIES \"libprotobuf\")\n set(HAVE_PROTOBUF TRUE)\n else()\n+ # we still need this for command PROTOBUF_GENERATE_CPP.\n+ set(protobuf_MODULE_COMPATIBLE ON)\n+\n unset(Protobuf_VERSION CACHE)\n- find_package(Protobuf QUIET)\n+ find_package(Protobuf QUIET CONFIG)\n```", + "output": "I removed it, thank you for your comment !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -240,6 +240,12 @@ ocv_create_module(${libs} ${dnn_runtime_libs})\n ocv_add_samples()\n ocv_add_accuracy_tests(${dnn_runtime_libs})\n \n+if(NOT BUILD_PROTOBUF)\n+ if(TARGET opencv_test_dnn)\n+ ocv_target_compile_definitions(opencv_test_dnn PRIVATE \"OPENCV_DNN_EXTERNAL_PROTOBUF=1\")\n+ endif()\n+endif()\n+\n```", + "output": "Hello, thank you for your comment, I agree with you. When `ENABLE_TESTS=OFF`, same problem is occured. I push additional patch to fix it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindProtobuf.cmake\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -30,8 +30,14 @@ if(BUILD_PROTOBUF)\n set(Protobuf_LIBRARIES \"libprotobuf\")\n set(HAVE_PROTOBUF TRUE)\n else()\n+ # we still need this for command PROTOBUF_GENERATE_CPP.\n+ set(protobuf_MODULE_COMPATIBLE ON)\n+\n unset(Protobuf_VERSION CACHE)\n- find_package(Protobuf QUIET)\n+ find_package(Protobuf QUIET CONFIG)\n```", + "output": "Usually OpenCV does not throw errors if an optional dependency can not be found or used - the dependency just gets turned off. @asmorkalov , @opencv-alalek , what do you think? Should we show a warning and disable protobuf in case of configuration incompatibility? Here and below." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindProtobuf.cmake\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -30,8 +30,14 @@ if(BUILD_PROTOBUF)\n set(Protobuf_LIBRARIES \"libprotobuf\")\n set(HAVE_PROTOBUF TRUE)\n else()\n+ # we still need this for command PROTOBUF_GENERATE_CPP.\n+ set(protobuf_MODULE_COMPATIBLE ON)\n+\n unset(Protobuf_VERSION CACHE)\n- find_package(Protobuf QUIET)\n+ find_package(Protobuf QUIET CONFIG)\n```", + "output": "Thank you for your review ! I would like to wait for comments on the direction of the correction. And unfortunately, there are some `message(FATAL_ERROR)` in this file like this block. If an error occurs, they will stop configuration. 
https://github.com/opencv/opencv/blob/5199850039ad23f1f0e6cccea5061a9fea5efca6/modules/dnn/CMakeLists.txt#L133-L135 https://github.com/opencv/opencv/blob/5199850039ad23f1f0e6cccea5061a9fea5efca6/modules/dnn/CMakeLists.txt#L166-L168 https://github.com/opencv/opencv/blob/5199850039ad23f1f0e6cccea5061a9fea5efca6/modules/dnn/CMakeLists.txt#L228-L230" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindProtobuf.cmake\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -30,8 +30,14 @@ if(BUILD_PROTOBUF)\n set(Protobuf_LIBRARIES \"libprotobuf\")\n set(HAVE_PROTOBUF TRUE)\n else()\n+ # we still need this for command PROTOBUF_GENERATE_CPP.\n+ set(protobuf_MODULE_COMPATIBLE ON)\n+\n unset(Protobuf_VERSION CACHE)\n- find_package(Protobuf QUIET)\n+ find_package(Protobuf QUIET CONFIG)\n```", + "output": "Thank you for your review! Yes, it is clear. This is about `protobuf`, so I set HAVE_PROTOBUF=OFF if old cxx standard is used. (And I forget C++98 ...)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindProtobuf.cmake\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -30,8 +30,14 @@ if(BUILD_PROTOBUF)\n set(Protobuf_LIBRARIES \"libprotobuf\")\n set(HAVE_PROTOBUF TRUE)\n else()\n+ # we still need this for command PROTOBUF_GENERATE_CPP.\n+ set(protobuf_MODULE_COMPATIBLE ON)\n+\n unset(Protobuf_VERSION CACHE)\n- find_package(Protobuf QUIET)\n+ find_package(Protobuf QUIET CONFIG)\n```", + "output": "Yes, It is better than my script. 
Thank you very much !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindProtobuf.cmake\n**Change Type:** modified\n**Context:** PR #24372: supporting protobuf v22 and later(with abseil-cpp/C++17)\n\n**Code Changes:**\n```diff\n@@ -30,8 +30,14 @@ if(BUILD_PROTOBUF)\n set(Protobuf_LIBRARIES \"libprotobuf\")\n set(HAVE_PROTOBUF TRUE)\n else()\n+ # we still need this for command PROTOBUF_GENERATE_CPP.\n+ set(protobuf_MODULE_COMPATIBLE ON)\n+\n unset(Protobuf_VERSION CACHE)\n- find_package(Protobuf QUIET)\n+ find_package(Protobuf QUIET CONFIG)\n```", + "output": "This comparing result is used with \"NOT\", so I use `LESS`. And I add to check C++98. Thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_detector.hpp\n**Change Type:** modified\n**Context:** PR #24355: add dynamic window in aruco cornerRefinement\n\n**Code Changes:**\n```diff\n@@ -36,6 +36,7 @@ struct CV_EXPORTS_W_SIMPLE DetectorParameters {\n minMarkerDistanceRate = 0.05;\n cornerRefinementMethod = (int)CORNER_REFINE_NONE;\n cornerRefinementWinSize = 5;\n+ relativeCornerRefinmentWinSize = 0.3f;\n cornerRefinementMaxIterations = 30;\n cornerRefinementMinAccuracy = 0.1;\n markerBorderBits = 1;\n@@ -108,9 +109,27 @@ struct CV_EXPORTS_W_SIMPLE DetectorParameters {\n /** @brief default value CORNER_REFINE_NONE */\n```", + "output": "renamed to `relativeCornerRefinmentWinSize`: ``` /** @brief minimum allowed window size for the corner refinement process (in pins of ArUco marker) (default 0.3). 
* * cornerRefinementWinSize is recalculated for each marker as: * cornerRefinementWinSize = min(cornerRefinementWinSize, averageArucoModuleSize*relativeCornerRefinmentWinSize), * where averageArucoModuleSize is average module size of ArUco marker in pixels. * (ArUco marker is composed of black and white modules) * In the case of markers located far from each other, it may be useful to increase the value of the parameter to 0.4-0.5. * In the case of markers located close to each other, it may be useful to decrease the parameter value to 0.1-0.2. */ CV_PROP_RW float relativeCornerRefinmentWinSize; ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/matrix_transform.cpp\n**Change Type:** modified\n**Context:** PR #24233: Add HAL implementation hooks to cv::flip() and cv::rotate() functions from core module\n**Review Line:** 1105\n\n**Code Changes:**\n```diff\n }\n \n+void rotate(InputArray _src, OutputArray _dst, int rotateMode)\n+{\n+ CV_Assert(_src.dims() <= 2);\n+ int angle;\n+\n+ if (_dst.isUMat())\n+ {\n+ rotateImpl(_src, _dst, rotateMode);\n+ return;\n```", + "output": "Check present in initial version of cv::rotate() is now present in the updated patch. Thanks" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/hal_replacement.hpp\n**Change Type:** modified\n**Context:** PR #24233: Add HAL implementation hooks to cv::flip() and cv::rotate() functions from core module\n\n**Code Changes:**\n```diff\n@@ -731,6 +731,47 @@ inline int hal_ni_minMaxIdx(const uchar* src_data, size_t src_step, int width, i\n #define cv_hal_minMaxIdx hal_ni_minMaxIdx\n //! 
@endcond\n \n+/**\n+ @brief hal_flip\n+ @param src_type source and destination image type\n+ @param src_data source image data\n+ @param src_step source image step\n+ @param src_width source and destination image width\n```", + "output": "Functions renamed accordingly in updated version of the patch: `hal_ni_rotate` => `hal_ni_rotate90` `cv_hal_rotate` => `cv_hal_rotate90` Thanks" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/hal_replacement.hpp\n**Change Type:** modified\n**Context:** PR #23109: Fixes for various mostly trivial warnings\n**Review Line:** 72\n\n**Code Changes:**\n```diff\n Sub: _dst[i] = src1[i] - src2[i]_\n-@param src1_data,src1_step first source image data and step\n-@param src2_data,src2_step second source image data and step\n-@param dst_data,dst_step destination image data and step\n-@param width,height dimensions of the images\n+@param src1_data first source image data\n+@param src1_step first source image step\n+@param src2_data second source image data\n+@param src2_step second source image step\n+@param dst_data destination image data\n+@param dst_step destination image step\n```", + "output": "This page says that it is allowed: > Note that you can also document multiple parameters with a single \\param command using a comma separated list. Here is an example: > ``` > /** Sets the position. > * @param x,y,z Coordinates of the position in 3D space. > */ > void setPosition(double x,double y,double z,double t) > { > } > ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/misc/java/test/CoreTest.java\n**Change Type:** modified\n**Context:** PR #24342: Fail Java test suite, execution, if one of test failed.\n**Review Line:** 967\n\n**Code Changes:**\n```diff\n- d = Core.Mahalanobis(line1, line2, covar);\n-\n- assertTrue(d > 0.0);\n+ // Bug: https://github.com/opencv/opencv/issues/24348\n+ // d = Core.Mahalanobis(line1, line2, covar);\n+ // assertTrue(d > 0.0);\n }\n \n public void testMax() {\n```", + "output": "Thanks a lot for the suggestion! I'm debugging the issue right now and hopefully propose a solution in a couple of days." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/parallel_impl.cpp\n**Change Type:** modified\n**Context:** PR #24280: optimization for parallelization when large core number\n**Review Line:** 586\n\n**Code Changes:**\n```diff\n CV_LOG_VERBOSE(NULL, 5, \"MainThread: wake worker threads...\");\n- for (size_t i = 0; i < threads.size(); ++i)\n+ size_t num_threads_to_wake = std::min(static_cast(range.size()), threads.size());\n+ for (size_t i = 0; i < num_threads_to_wake; ++i)\n {\n+ if (job->current_task >= job->range.size())\n+ break;\n WorkerThread& thread = *(threads[i].get());\n if (\n #if defined(__clang__) && defined(__has_feature)\n```", + "output": "/cc @vrabaud Could you check if your version of TSAN is happy with that change? (no data-race with `current_task`)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/einsum_layer.cpp\n**Change Type:** added\n**Context:** PR #24037: Add Support for Einsum Layer\n**Review Line:** 632\n\n**Code Changes:**\n```diff\n+ for (auto letter : rhs_eq)\n+ {\n+ if(letter == '.')\n+ {\n+ CV_Error(Error::StsNotImplemented, \"Ellipsis are not supported yet\");\n+ } else {\n+ CV_CheckEQ(middleOfEllipsis, false,\n+ \"Encountered '.' character that is not part of output subscript\");\n+\n+ auto letterIndex = letterToIndex(letter);\n+\n```", + "output": "@Abdurrahheem, still relevant" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/cv2.hpp\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -39,12 +39,20 @@\n \n class ArgInfo\n {\n+private:\n+ static const uint32_t arg_outputarg_flag = 0x1;\n+ static const uint32_t arg_arithm_op_src_flag = 0x2;\n+\n public:\n const char* name;\n```", + "output": "Thank you for your review, I agree with you. I fixed it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/test/test_misc.py\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -42,6 +42,93 @@ def get_conversion_error_msg(value, expected, actual):\n def get_no_exception_msg(value):\n return 'Exception is not risen for {} of type {}'.format(value, type(value).__name__)\n \n+\n+def rpad(src, dst_size, pad_value=0):\n+ \"\"\"Extend `src` up to `dst_size` with given value.\n+\n+ Args:\n+ src (np.ndarray | tuple | list): 1d array like object to pad.\n```", + "output": "I agree with you. I'll remove them. Thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/cv2_convert.cpp\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -63,20 +63,39 @@ bool pyopencv_to(PyObject* o, Mat& m, const ArgInfo& info)\n if( PyInt_Check(o) )\n {\n double v[] = {static_cast(PyInt_AsLong((PyObject*)o)), 0., 0., 0.};\n+ if ( info.arithm_op_src )\n+ {\n+ // Normally cv.XXX(x) means cv.XXX( (x, 0., 0., 0.) );\n+ // However cv.add(mat,x) means cv::add(mat, (x,x,x,x) ).\n+ v[1] = v[0];\n+ v[2] = v[0];\n```", + "output": "Thank you very much for your comment!! I feel your suggestion is better than mine," + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/cv2_convert.cpp\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -63,20 +63,39 @@ bool pyopencv_to(PyObject* o, Mat& m, const ArgInfo& info)\n if( PyInt_Check(o) )\n {\n double v[] = {static_cast(PyInt_AsLong((PyObject*)o)), 0., 0., 0.};\n+ if ( info.arithm_op_src )\n+ {\n+ // Normally cv.XXX(x) means cv.XXX( (x, 0., 0., 0.) );\n+ // However cv.add(mat,x) means cv::add(mat, (x,x,x,x) ).\n+ v[1] = v[0];\n+ v[2] = v[0];\n```", + "output": "Yes, currently m is initilized with zero, so `for ( i=sz; i(PyInt_AsLong((PyObject*)o)), 0., 0., 0.};\n+ if ( info.arithm_op_src )\n+ {\n+ // Normally cv.XXX(x) means cv.XXX( (x, 0., 0., 0.) );\n+ // However cv.add(mat,x) means cv::add(mat, (x,x,x,x) ).\n+ v[1] = v[0];\n+ v[2] = v[0];\n```", + "output": "I fixed it, thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/cv2_convert.cpp\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -63,20 +63,39 @@ bool pyopencv_to(PyObject* o, Mat& m, const ArgInfo& info)\n if( PyInt_Check(o) )\n {\n double v[] = {static_cast(PyInt_AsLong((PyObject*)o)), 0., 0., 0.};\n+ if ( info.arithm_op_src )\n+ {\n+ // Normally cv.XXX(x) means cv.XXX( (x, 0., 0., 0.) );\n+ // However cv.add(mat,x) means cv::add(mat, (x,x,x,x) ).\n+ v[1] = v[0];\n+ v[2] = v[0];\n```", + "output": "Same as the above fix, thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/cv2_convert.cpp\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -63,20 +63,39 @@ bool pyopencv_to(PyObject* o, Mat& m, const ArgInfo& info)\n if( PyInt_Check(o) )\n {\n double v[] = {static_cast(PyInt_AsLong((PyObject*)o)), 0., 0., 0.};\n+ if ( info.arithm_op_src )\n+ {\n+ // Normally cv.XXX(x) means cv.XXX( (x, 0., 0., 0.) );\n+ // However cv.add(mat,x) means cv::add(mat, (x,x,x,x) ).\n+ v[1] = v[0];\n+ v[2] = v[0];\n```", + "output": "Same as the above fix, thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/cv2_convert.cpp\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -63,20 +63,39 @@ bool pyopencv_to(PyObject* o, Mat& m, const ArgInfo& info)\n if( PyInt_Check(o) )\n {\n double v[] = {static_cast(PyInt_AsLong((PyObject*)o)), 0., 0., 0.};\n+ if ( info.arithm_op_src )\n+ {\n+ // Normally cv.XXX(x) means cv.XXX( (x, 0., 0., 0.) );\n+ // However cv.add(mat,x) means cv::add(mat, (x,x,x,x) ).\n+ v[1] = v[0];\n+ v[2] = v[0];\n```", + "output": "Same as the above fix, thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/test/test_misc.py\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -42,6 +42,93 @@ def get_conversion_error_msg(value, expected, actual):\n def get_no_exception_msg(value):\n return 'Exception is not risen for {} of type {}'.format(value, type(value).__name__)\n \n+\n+def rpad(src, dst_size, pad_value=0):\n+ \"\"\"Extend `src` up to `dst_size` with given value.\n+\n+ Args:\n+ src (np.ndarray | tuple | list): 1d array like object to pad.\n```", + "output": "Thank you very much for your test code !! I think the new test code is more extensible. However when I run test, there is a small warning. ```python @saturate def divide(x, y): if not isinstance(y, (int, float)): _, max_value = get_limits(x.dtype) y[y == 0] = max_value ``` When x = int64, and y = uint8, `y[y==0] = max_value` cannot work well. ``` x = int64 y= int64 x = int64 y= int64 x = int64 y= float64 x = int64 y= uint8 /home/kmtr/work/opencv/modules/python/test/test_misc.py:104: DeprecationWarning: NumPy will stop allowing conversion of out-of-bound Python integers to integer arrays. The conversion of 9223372036854775807 to uint8 will fail in the future. For the old behavior, usually: np.array(value).astype(dtype)` will give the desired result (the cast overflows). y[y == 0] = max_value x = int64 y= float64 .x = uint8 y= int64 x = uint8 y= int64 x = uint8 y= float64 x = uint8 y= uint8 x = uint8 y= float64 ``` I try replace from `x.dtype` to `y.dtype`. If it is incorrect, I'm sorry. ```diff - _, max_value = get_limits(x.dtype) + _, max_value = get_limits(y.dtype) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/test/test_misc.py\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -42,6 +42,93 @@ def get_conversion_error_msg(value, expected, actual):\n def get_no_exception_msg(value):\n return 'Exception is not risen for {} of type {}'.format(value, type(value).__name__)\n \n+\n+def rpad(src, dst_size, pad_value=0):\n+ \"\"\"Extend `src` up to `dst_size` with given value.\n+\n+ Args:\n+ src (np.ndarray | tuple | list): 1d array like object to pad.\n```", + "output": "Thank you for your comment, I'll try it!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/hdr_parser.py\n**Change Type:** modified\n**Context:** PR #24074: python: support tuple src for cv::add()/subtract()/...\n\n**Code Changes:**\n```diff\n@@ -535,6 +535,13 @@ def parse_func_decl(self, decl_str, mat=\"Mat\", docstring=\"\"):\n \n funcname = self.get_dotted_name(funcname)\n \n+ # see https://github.com/opencv/opencv/issues/24057\n+ is_arithm_op_func = funcname in {\"cv.add\",\n+ \"cv.subtract\",\n+ \"cv.absdiff\",\n+ \"cv.multiply\",\n+ \"cv.divide\"}\n```", + "output": "Thank you for your review, I don't think there's much of a problem without checking for arg_names. Due to future implications, I will remove the verification code." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/color_rgb.simd.hpp\n**Change Type:** modified\n**Context:** PR #24058: Rewrite Universal Intrinsic code by using new API: ImgProc module.\n**Review Line:** 145\n\n**Code Changes:**\n```diff\n- if(bi == 2)\n+ if(bi == 2) {\n+ #if CV_SIMD_SCALABLE\n+ auto t = a; a = c; c = t; // swap(a, c);\n+ #else\n swap(a, c);\n+ #endif\n+ }\n \n if(dcn == 4)\n {\n```", + "output": "@asmorkalov , apparently `std::swap` does not support RVV data types. ``` /opt/riscv64/gcc-10.2.0/riscv64-unknown-linux-gnu/lib/gcc/riscv64-unknown-linux-gnu/10.2.0/../../../../riscv64-unknown-linux-gnu/include/c++/10.2.0/type_traits:960:7: error: static assertion failed due to requirement 'std::__is_complete_or_unbounded(std::__type_identity<__rvv_int32m1_t>{})': template argument must be a complete class or an unbounded array static_assert(std::__is_complete_or_unbounded(__type_identity<_Tp>{}), ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/color_rgb.simd.hpp\n**Change Type:** modified\n**Context:** PR #24058: Rewrite Universal Intrinsic code by using new API: ImgProc module.\n**Review Line:** 145\n\n**Code Changes:**\n```diff\n- if(bi == 2)\n+ if(bi == 2) {\n+ #if CV_SIMD_SCALABLE\n+ auto t = a; a = c; c = t; // swap(a, c);\n+ #else\n swap(a, c);\n+ #endif\n+ }\n \n if(dcn == 4)\n {\n```", + "output": "`swap` exists in the original algorithm implementation, the modification here is that swap cannot used for the RVV type (as @mshabunin says above), so we use `auto t = a; a = c; c = t;` to replace it. I haven't read the algorithm itself in its entirety, so I can't say why there are `swap` in the original algorithm. 
But I tried to remove all three `swap` calls, and the result is wrong, therefore I think `swap` is necessary. ``` [ FAILED ] 1 test, listed below: [ FAILED ] Imgproc_ColorRGB.accuracy ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/convert.hpp\n**Change Type:** modified\n**Context:** PR #24254: (5.x) Merge 4.x - OpenCV Contrib: [#3559](https://github.com/opencv/opencv_contrib/pull/3559) OpenCV Extra: [#1093](https://github.com/open...\n\n**Code Changes:**\n```diff\n@@ -11,7 +11,7 @@\n namespace cv\n {\n \n-#if CV_SIMD\n+#if (CV_SIMD || CV_SIMD_SCALABLE)\n \n static inline void vx_load_as(const uchar* ptr, v_float32& a)\n { a = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(ptr))); }\n@@ -78,7 +78,7 @@ static inline void v_store_as(int64_t* ptr, const v_float32& a)\n```", + "output": "@vpisarev @mshabunin I extracted these 3 functions as they use a local buffer for data. The original implementation relies on fixed vector size and buffer size." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/convert.hpp\n**Change Type:** modified\n**Context:** PR #24254: (5.x) Merge 4.x - OpenCV Contrib: [#3559](https://github.com/opencv/opencv_contrib/pull/3559) OpenCV Extra: [#1093](https://github.com/open...\n\n**Code Changes:**\n```diff\n@@ -11,7 +11,7 @@\n namespace cv\n {\n \n-#if CV_SIMD\n+#if (CV_SIMD || CV_SIMD_SCALABLE)\n \n static inline void vx_load_as(const uchar* ptr, v_float32& a)\n { a = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(ptr))); }\n@@ -78,7 +78,7 @@ static inline void v_store_as(int64_t* ptr, const v_float32& a)\n```", + "output": "@vpisarev Are you sure in the type and buffer size here?"
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/convert.hpp\n**Change Type:** modified\n**Context:** PR #24254: (5.x) Merge 4.x - OpenCV Contrib: [#3559](https://github.com/opencv/opencv_contrib/pull/3559) OpenCV Extra: [#1093](https://github.com/open...\n\n**Code Changes:**\n```diff\n@@ -11,7 +11,7 @@\n namespace cv\n {\n \n-#if CV_SIMD\n+#if (CV_SIMD || CV_SIMD_SCALABLE)\n \n static inline void vx_load_as(const uchar* ptr, v_float32& a)\n { a = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(ptr))); }\n@@ -78,7 +78,7 @@ static inline void v_store_as(int64_t* ptr, const v_float32& a)\n```", + "output": "@asmorkalov , we can use `VTraits<_Tpvec>::max_nlanes` for buffer size in scalable case, BTW." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/convert.hpp\n**Change Type:** modified\n**Context:** PR #24254: (5.x) Merge 4.x - OpenCV Contrib: [#3559](https://github.com/opencv/opencv_contrib/pull/3559) OpenCV Extra: [#1093](https://github.com/open...\n**Review Line:** 599\n\n**Code Changes:**\n```diff\n static inline void vx_load_pair_as(const unsigned* ptr, v_float64& a, v_float64& b)\n {\n- const int nlanes = v_uint64::nlanes;\n- double buf[v_uint64::nlanes*2];\n+ const int nlanes = VTraits::vlanes();\n+ double buf[VTraits::max_nlanes*2];\n for (int i = 0; i < nlanes*2; i++)\n buf[i] = (double)ptr[i];\n a = vx_load(buf);\n@@ -607,7 +607,7 @@ static inline void v_store_as(double* ptr, const v_float32& a)\n {\n```", + "output": "@vpisarev Are you sure in the type and buffer size here?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/src/aruco/aruco_detector.cpp\n**Change Type:** modified\n**Context:** PR #24139: fix refineDetectedMarkers\n**Review Line:** 1003\n\n**Code Changes:**\n```diff\n@@ -1000,7 +1000,13 @@ static inline void _projectUndetectedMarkers(const Board &board, InputOutputArra\n OutputArray undetectedMarkersIds) {\n Mat rvec, tvec; // first estimate board pose with the current avaible markers\n Mat objPoints, imgPoints; // object and image points for the solvePnP function\n- board.matchImagePoints(detectedCorners, detectedIds, objPoints, imgPoints);\n+ // To refine corners of ArUco markers the function refineDetectedMarkers() find an aruco markers pose from 3D-2D point correspondences.\n+ // To find 3D-2D point correspondences uses matchImagePoints().\n+ // The method matchImagePoints() works with ArUco corners (in Board/GridBoard cases) or with ChArUco corners (in CharucoBoard case).\n+ // To refine corners of ArUco markers we need work with ArUco corners only in all boards.\n+ // To call matchImagePoints() with ArUco corners for all boards we need to call matchImagePoints() from base class Board.\n+ // The method matchImagePoints() implemented in Pimpl and we need to create temp Board object to call the base method.\n```", + "output": "@AleksandrPanov Please add this information to the code as comment. Current comment is not informative." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #24218: Added CMake configuration OPENCV_DNN_BACKEND_DEFAULT\n**Review Line:** 230\n\n**Code Changes:**\n```diff\n@@ -227,6 +227,9 @@ if(TARGET ocv.3rdparty.openvino AND OPENCV_DNN_OPENVINO)\n endif()\n endif()\n \n+set(OPENCV_DNN_BACKEND_DEFAULT \"DNN_BACKEND_OPENCV\" CACHE STRING \"Default backend used by the DNN module\")\n+ocv_append_source_file_compile_definitions(\"${CMAKE_CURRENT_LIST_DIR}/src/dnn_params.cpp\" \"OPENCV_DNN_BACKEND_DEFAULT=${OPENCV_DNN_BACKEND_DEFAULT}\")\n+\n \n ocv_install_used_external_targets(${libs} ${dnn_runtime_libs})\n \n```", + "output": "Thanks for pointing out 👍 Using this macro adds a line to the `version_string.inc` file. But what's the right way to nicely integrate with the remaining formattings inside it? And where should this flag be added?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #24218: Added CMake configuration OPENCV_DNN_BACKEND_DEFAULT\n**Review Line:** 230\n\n**Code Changes:**\n```diff\n@@ -227,6 +227,9 @@ if(TARGET ocv.3rdparty.openvino AND OPENCV_DNN_OPENVINO)\n endif()\n endif()\n \n+set(OPENCV_DNN_BACKEND_DEFAULT \"DNN_BACKEND_OPENCV\" CACHE STRING \"Default backend used by the DNN module\")\n+ocv_append_source_file_compile_definitions(\"${CMAKE_CURRENT_LIST_DIR}/src/dnn_params.cpp\" \"OPENCV_DNN_BACKEND_DEFAULT=${OPENCV_DNN_BACKEND_DEFAULT}\")\n+\n \n ocv_install_used_external_targets(${libs} ${dnn_runtime_libs})\n \n```", + "output": "Alright, thanks! The file is simply written line by line, we thought there might be a more complex logic. We added the status after the OpenVINO status line. 
Most likely, this is the most related option as bringing back the default-to-OpenVINO option was our main intention for this patch. Hopefully, now everything is ready for merging." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,234 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+#ifndef OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+#define OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+\n+#include \n+\n+namespace cv {\n```", + "output": "What dependencies are required on 5.x branch? Only `opencv_3d` is available for now. Do we need new dependencies? /cc @vpisarev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface.
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,234 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+#ifndef OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+#define OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+\n+#include \n+\n+namespace cv {\n```", + "output": "empty line is necessary after `@brief` (or `@details`)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,234 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+#ifndef OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+#define OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+\n+#include \n+\n+namespace cv {\n```", + "output": "@asmorkalov @vpisarev Please confirm that it is intentional to make this a part of `CharucoBoard` class." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,234 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+#ifndef OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+#define OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+\n+#include \n+\n+namespace cv {\n```", + "output": "@vpisarev @asmorkalov It is a blocker (below the same violations) Many other left comments are not fixed/not provided any feedback. I can't approve that in the current state. If you don't care, you could merge this under your responsibility." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_dictionary.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n**Review Line:** 42\n\n**Code Changes:**\n```diff\n+\n+\n+ /** @brief Read a new dictionary from FileNode.\n+ *\n+ * Dictionary format:\\n\n+ * nmarkers: 35\\n\n+ * markersize: 6\\n\n+ * maxCorrectionBits: 5\\n\n+ * marker_0: \"101011111011111001001001101100000000\"\\n\n+ * ...\\n\n+ * marker_34: \"011111010000111011111110110101100101\"\n```", + "output": "too long `@brief` section." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_dictionary.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n**Review Line:** 30\n\n**Code Changes:**\n```diff\n+class CV_EXPORTS_W_SIMPLE Dictionary {\n+\n+ public:\n+ CV_PROP_RW Mat bytesList; // marker code information\n+ CV_PROP_RW int markerSize; // number of bits per dimension\n+ CV_PROP_RW int maxCorrectionBits; // maximum number of bits that can be corrected\n+\n+\n+ CV_WRAP Dictionary();\n+\n+ CV_WRAP Dictionary(const Mat &bytesList, int _markerSize, int maxcorr = 0);\n```", + "output": "@asmorkalov @vpisarev Do you approve that in public API?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,234 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+#ifndef OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+#define OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+\n+#include \n+\n+namespace cv {\n```", + "output": "AFAIK, there is some exception in parameters order for \"drawing\" functions. /cc @vpisarev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_dictionary.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,151 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+#ifndef OPENCV_OBJDETECT_DICTIONARY_HPP\n+#define OPENCV_OBJDETECT_DICTIONARY_HPP\n+\n+#include \n+\n+namespace cv {\n```", + "output": "it's not description. Use `@override` if it's applicable." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_detector.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,396 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+#ifndef OPENCV_OBJDETECT_ARUCO_DETECTOR_HPP\n+#define OPENCV_OBJDETECT_ARUCO_DETECTOR_HPP\n+\n+#include \n+#include \n+\n```", + "output": "`@cite` should be used instead." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_detector.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,396 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+#ifndef OPENCV_OBJDETECT_ARUCO_DETECTOR_HPP\n+#define OPENCV_OBJDETECT_ARUCO_DETECTOR_HPP\n+\n+#include \n+#include \n+\n```", + "output": "added `@cite Aruco2014`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_detector.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n**Review Line:** 13\n\n**Code Changes:**\n```diff\n+#include \n+\n+namespace cv {\n+namespace aruco {\n+\n+/** @defgroup aruco ArUco Marker Detection\n+ * Square fiducial markers (also known as Augmented Reality Markers) are useful for easy,\n+ * fast and robust camera pose estimation.\n+ *\n+ * The main functionality of ArucoDetector class is detection of markers in an image. There are even more\n+ * functionalities implemented in the aruco contrib module (files aruco.hpp, charuco.hpp, aruco_calib.hpp):\n```", + "output": "move `@defgroup aruco ArUco Marker Detection` to `aruco` namespace" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** added\n**Context:** PR #22368: move aruco from contrib to objdetect in main repository\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,234 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+#ifndef OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+#define OPENCV_OBJDETECT_ARUCO_BOARD_HPP\n+\n+#include \n+\n+namespace cv {\n```", + "output": "added empty line: ``` /** @brief Planar board with grid arrangement of markers * * More common type of board. All markers are placed in the same plane in a grid arrangement. * The board can be drawn using draw() method. */ ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/ap3p.cpp\n**Change Type:** modified\n**Context:** PR #23607: Fix crash in ap3p\n\n**Code Changes:**\n```diff\n@@ -1,5 +1,6 @@\n #include \"precomp.hpp\"\n #include \"ap3p.h\"\n+#include \"polynom_solver.h\"\n \n #include \n #include \n@@ -8,63 +9,10 @@ static inline double cbrt(double x) { return (double)cv::cubeRoot((float)x); };\n #endif\n \n```", + "output": "Or, if rely on a public API: ```cpp std::vector R, t; solveP3P(modelPts, cameraPts, Mat::eye(3, 3, CV_64F), Mat(), R, t, SOLVEPNP_AP3P); ``` @alexander-varjo, can you please share your compiler version and target CPU architecture?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_halide_layers.cpp\n**Change Type:** modified\n**Context:** PR #24039: TFLite models on different backends (tests and improvements)\n**Review Line:** 366\n\n**Code Changes:**\n```diff\n- )\n- applyTestTag(CV_TEST_TAG_DNN_SKIP_IE_MYRIAD, CV_TEST_TAG_DNN_SKIP_IE_VERSION);\n-#endif\n+ // https://github.com/openvinotoolkit/openvino/issues/18731\n+ if (backendId == DNN_BACKEND_INFERENCE_ENGINE_NGRAPH && stride != Size(1, 1))\n+ applyTestTag(CV_TEST_TAG_DNN_SKIP_IE_NGRAPH);\n \n #if defined(INF_ENGINE_RELEASE) && INF_ENGINE_VER_MAJOR_GE(2019010000)\n if (backendId == DNN_BACKEND_INFERENCE_ENGINE_NN_BUILDER_2019 && targetId == DNN_TARGET_MYRIAD\n@@ -467,6 +454,11 @@ TEST_P(FullyConnected, Accuracy)\n {\n```", + "output": "@asmorkalov, please do not merge, I'm going to workaround it as suggested in the ticket." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/pooling_layer.cpp\n**Change Type:** modified\n**Context:** PR #24039: TFLite models on different backends (tests and improvements)\n**Review Line:** 619\n\n**Code Changes:**\n```diff\n- ngraph::Shape(pads_begin), ngraph::Shape(pads_end), ngraph::Shape(kernel_size),\n- rounding_type, pad_type);\n+ std::shared_ptr max_pool;\n+ if (computeMaxIdx) {\n+ std::vector dilations(kernel_size.size(), 1);\n+ max_pool = std::make_shared(ieInpNode, ngraph::Strides(strides), ngraph::Strides(dilations),\n+ ngraph::Shape(pads_begin), ngraph::Shape(pads_end), ngraph::Shape(kernel_size),\n+ rounding_type, pad_type);\n+ } else {\n+ max_pool = std::make_shared(ieInpNode, ngraph::Strides(strides),\n+ ngraph::Shape(pads_begin), ngraph::Shape(pads_end), ngraph::Shape(kernel_size),\n```", + "output": "Thanks, I will add a check" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #24101: Invalid memory access fix for ONNX split layer parser #24076\n\n**Code Changes:**\n```diff\n@@ -1385,13 +1385,19 @@ void ONNXImporter::parseSplit(LayerParams& layerParams, const opencv_onnx::NodeP\n CV_Assert(constBlobs.find(node_proto.input(1)) != constBlobs.end());\n Mat splitsBlob = getBlob(node_proto, 1);\n int splitSize = splitsBlob.total();\n-\n- std::vector slicePoints(splitSize - 1, splitsBlob.at(0));\n- for (int i = 1; i < splitSize - 1; ++i)\n+ if (splitSize == 1)\n+ {\n+ layerParams.set(\"num_split\", 1);\n```", + "output": "Please check `splitSize` earlier: ```cpp int splitSize = splitsBlob.total(); if (splitSize == 1) { layerParams.set(\"num_split\", 1); } else { std::vector slicePoints(splitSize - 1, splitsBlob.at(0)); for (int i = 1; i < splitSize - 1; ++i) { slicePoints[i] = slicePoints[i - 1] + splitsBlob.at(i); } layerParams.set(\"slice_point\", DictValue::arrayInt(&slicePoints[0], slicePoints.size())); } ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/package/setup.py\n**Change Type:** modified\n**Context:** PR #20370: Python typing stub generation\n**Review Line:** 5\n\n**Code Changes:**\n```diff\n-import platform\n import setuptools\n \n-SCRIPT_DIR=os.path.dirname(os.path.abspath(__file__))\n+\n+SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))\n+\n+\n+def collect_module_typing_stub_files(root_module_path):\n+ stub_files = []\n+ for module_path, _, files in os.walk(root_module_path):\n```", + "output": "Named arguments are unrolled as expected after PR #19156 is merged ```python # Functions @typing.overload def copyMatAndDumpNamedArguments(src: cv2.typing.MatLike, dst: cv2.typing.MatLike | None = ..., lambda_: int = ..., sigma: float = ...) -> tuple[str, cv2.typing.MatLike]: ... @typing.overload def copyMatAndDumpNamedArguments(src: cv2.UMat, dst: cv2.UMat | None = ..., lambda_: int = ..., sigma: float = ...) -> tuple[str, cv2.UMat]: ... ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/typing_stubs_generation/api_refinement.py\n**Change Type:** modified\n**Context:** PR #24066: Python typing refinement for dnn_registerLayer/dnn_unregisterLayer functions\n\n**Code Changes:**\n```diff\n@@ -7,15 +7,18 @@\n from .nodes import (NamespaceNode, FunctionNode, OptionalTypeNode, TypeNode,\n ClassProperty, PrimitiveTypeNode, ASTNodeTypeNode,\n AggregatedTypeNode, CallableTypeNode, AnyTypeNode,\n- TupleTypeNode, UnionTypeNode)\n+ TupleTypeNode, UnionTypeNode, ProtocolClassNode,\n+ DictTypeNode, ClassTypeNode)\n from .ast_utils import (find_function_node, SymbolName,\n for_each_function_overload)\n+from .types_conversion import create_type_node\n```", + "output": "Yep its typo, thanks" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/highgui/src/window_cocoa.mm\n**Change Type:** modified\n**Context:** PR #23394: add scrollWheel to Cocoa\n**Review Line:** 902\n\n**Code Changes:**\n```diff\n mp.y *= (imageSize.height / std::max(viewSize.height, 1.));\n mp.x *= (imageSize.width / std::max(viewSize.width, 1.));\n \n- if( mp.x >= 0 && mp.y >= 0 && mp.x < imageSize.width && mp.y < imageSize.height )\n- mouseCallback(type, mp.x, mp.y, flags, mouseParam);\n+ if( [event type] == NSEventTypeScrollWheel ) {\n+ if( event.hasPreciseScrollingDeltas ) {\n+ mp.x = int(event.scrollingDeltaX);\n+ mp.y = int(event.scrollingDeltaY);\n+ } else {\n+ mp.x = int(event.scrollingDeltaX / 0.100006);\n```", + "output": "@VadimLevin could you take a look?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/misc/python/package/mat_wrapper/__init__.py\n**Change Type:** modified\n**Context:** PR #23927: Fix partially unknown Mat\n**Review Line:** 8\n\n**Code Changes:**\n```diff\n import numpy as np\n import cv2 as cv\n+from typing import TYPE_CHECKING, Any\n+\n+# Type subscription is not possible in python 3.8\n+if TYPE_CHECKING:\n+ _NDArray = np.ndarray[Any, np.dtype[np.generic]]\n+else:\n+ _NDArray = np.ndarray\n \n # NumPy documentation: https://numpy.org/doc/stable/user/basics.subclassing.html\n```", + "output": "@Avasam, thanks for such detailed explanation. Can you update check from #23838 in this PR too, please?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ov/govbackend.cpp\n**Change Type:** modified\n**Context:** PR #23884: G-API: Fix async inference for OpenVINO backend\n\n**Code Changes:**\n```diff\n@@ -1403,8 +1403,10 @@ cv::gimpl::ov::GOVExecutable::GOVExecutable(const ade::Graph &g,\n case NodeType::OP:\n if (this_nh == nullptr) {\n this_nh = nh;\n- compiled = const_cast(ovm.metadata(this_nh).get()).compile();\n- m_reqPool.reset(new RequestPool(createInferRequests(compiled.compiled_model, 1)));\n+ const auto &unit = ovm.metadata(this_nh).get();\n+ compiled = const_cast(unit).compile();\n+ m_reqPool.reset(new RequestPool(createInferRequests(\n+ compiled.compiled_model, unit.params.nireq)));\n```", + "output": "Oh, got your point, let's just get it from `OVUnit`, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/mat.hpp\n**Change Type:** modified\n**Context:** PR #23832: Document parameters of multi-dimentional reshape\n**Review Line:** 1337\n\n**Code Changes:**\n```diff\n- /** @overload */\n+ /** @overload\n+ * @param cn New number of channels. If the parameter is 0, the number of channels remains the same.\n+ * @param newshape Vector with new matrix size by all dimentions. If some sizes are zero,\n+ * the original sizes in those dimensions are presumed.\n+ */\n Mat reshape(int cn, const std::vector& newshape) const;\n \n /** @brief Transposes a matrix.\n```", + "output": "Perhaps it would be good to have a small code example for these overloads too: ``` 3-channel 2x2 matrix reshaped to 1-channel 4x3 matrix, each column has values from one of original channels: @code Mat m(Size(2, 2), CV_8UC3, Scalar(1, 2, 3)); vector new_shape {4, 3}; m = m.reshape(1, new_shape); @endcode ``` For `int*`: ``` @code Mat m(Size(2, 2), CV_8UC3, Scalar(1, 2, 3)); const int new_shape[] = {4, 3}; m = m.reshape(1, 2, new_shape); @endcode ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/imgwarp.cpp\n**Change Type:** modified\n**Context:** PR #23754: Keep inliers for linear remap with BORDER_TRANSPARENT\n\n**Code Changes:**\n```diff\n@@ -757,13 +757,6 @@ static void remapBilinear( const Mat& _src, Mat& _dst, const Mat& _xy,\n }\n else\n {\n- if( borderType == BORDER_TRANSPARENT && cn != 3 )\n- {\n- D += (X1 - dx)*cn;\n- dx = X1;\n- continue;\n- }\n```", + "output": "BTW, we have strange code above: ``` #if CV_SIMD128 if( _src.type() == CV_8UC3 ) width1 = std::max(ssize.width-2, 0); #endif ``` Why do we guard processing of common variables in SIMD mode only? 
/cc @vpisarev That commit is very old: https://github.com/opencv/opencv/blame/fdd83e5027cab3dcbc6acb321ca9c294b90f17e1/modules/imgproc/src/imgwarp.cpp#L663 --- Code condition is strange too: `borderType == BORDER_TRANSPARENT && cn != 3`. Why is `cn` used here?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/matrix_transform.cpp\n**Change Type:** modified\n**Context:** PR #22898: dnn: support ONNX Slice with negative steps by adding and using cv::flipND\n\n**Code Changes:**\n```diff\n@@ -6,6 +6,8 @@\n #include \"opencl_kernels_core.hpp\"\n #include \"opencv2/core/detail/dispatch_helper.impl.hpp\"\n \n+#include // std::swap_ranges\n+\n namespace cv {\n \n ////////////////////////////////////// transpose /////////////////////////////////////////\n@@ -812,6 +814,49 @@ void flip( InputArray _src, OutputArray _dst, int flip_mode )\n```", + "output": "@fengyuentau, we could iterate over just one half of the axis. But maybe it's for future pr, it'll need performance testing." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/matrix_transform.cpp\n**Change Type:** modified\n**Context:** PR #22898: dnn: support ONNX Slice with negative steps by adding and using cv::flipND\n\n**Code Changes:**\n```diff\n@@ -6,6 +6,8 @@\n #include \"opencl_kernels_core.hpp\"\n #include \"opencv2/core/detail/dispatch_helper.impl.hpp\"\n \n+#include // std::swap_ranges\n+\n namespace cv {\n \n ////////////////////////////////////// transpose /////////////////////////////////////////\n@@ -812,6 +814,49 @@ void flip( InputArray _src, OutputArray _dst, int flip_mode )\n```", + "output": "std::swap_ranges is new to me. Thanks for your enlightened answer. 
But I found that std::swap_ranges with execution policy is only available since C++17." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/matrix_transform.cpp\n**Change Type:** modified\n**Context:** PR #22898: dnn: support ONNX Slice with negative steps by adding and using cv::flipND\n\n**Code Changes:**\n```diff\n@@ -6,6 +6,8 @@\n #include \"opencl_kernels_core.hpp\"\n #include \"opencv2/core/detail/dispatch_helper.impl.hpp\"\n \n+#include // std::swap_ranges\n+\n namespace cv {\n \n ////////////////////////////////////// transpose /////////////////////////////////////////\n@@ -812,6 +814,49 @@ void flip( InputArray _src, OutputArray _dst, int flip_mode )\n```", + "output": "Yes, btw in case of ROI we could just overlap not from the beginning. @alalek do we have a method that checks if mats are non-overlapping?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/matrix_transform.cpp\n**Change Type:** modified\n**Context:** PR #22898: dnn: support ONNX Slice with negative steps by adding and using cv::flipND\n**Review Line:** 852\n\n**Code Changes:**\n```diff\n+ // in-place flip\n+ _src.copyTo(_dst);\n+\n+ // return the src if it has only one element on the flip axis\n+ const auto shape = src.size.p;\n+ if (shape[axis] == 1)\n+ return ;\n+\n+ // call impl\n+ Mat dst = _dst.getMat();\n+ flipNDImpl(dst.ptr(), dst.size.p, dst.step.p, axis);\n```", + "output": "To be honest, I don't know what we are winning here - one function frame that will do exactly this? I think copying is much slower than calling a function. @alalek can you elaborate?"
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/matrix_transform.cpp\n**Change Type:** modified\n**Context:** PR #22898: dnn: support ONNX Slice with negative steps by adding and using cv::flipND\n\n**Code Changes:**\n```diff\n@@ -6,6 +6,8 @@\n #include \"opencl_kernels_core.hpp\"\n #include \"opencv2/core/detail/dispatch_helper.impl.hpp\"\n \n+#include // std::swap_ranges\n+\n namespace cv {\n \n ////////////////////////////////////// transpose /////////////////////////////////////////\n@@ -812,6 +814,49 @@ void flip( InputArray _src, OutputArray _dst, int flip_mode )\n```", + "output": "Thank you for discovering this. Changed `CV_CheckGT` to `CV_CheckGE` and added test cases for negative axes [-dim, dim)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/cv2_convert.hpp\n**Change Type:** modified\n**Context:** PR #23766: G-API: Refine Semantic Segmentation Demo\n**Review Line:** 274\n\n**Code Changes:**\n```diff\n bool pyopencv_to(PyObject *obj, std::map &map, const ArgInfo& info)\n {\n+ if (!obj || obj == Py_None)\n+ {\n+ return true;\n+ }\n+\n PyObject* py_key = nullptr;\n PyObject* py_value = nullptr;\n Py_ssize_t pos = 0;\n```", + "output": "@VadimLevin Is is relevant patch?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/has_non_zero.simd.hpp\n**Change Type:** added\n**Context:** PR #22947: added cv::hasNonZero()\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,327 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+#include \"precomp.hpp\"\n+\n+namespace cv {\n+\n+typedef bool (*HasNonZeroFunc)(const uchar*, size_t);\n```", + "output": "@chacha21 Friendly reminder." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_intelligent_scissors.cpp\n**Change Type:** modified\n**Context:** PR #23688: imgproc: add contour values check to IntelligentScissorsMB tests\n\n**Code Changes:**\n```diff\n@@ -147,27 +147,91 @@ void show(const Mat& img, const std::vector pts)\n }\n }\n \n+Size estimateContourSize(const std::vector& pts)\n+{\n+ Size s(0,0);\n+ for (size_t i = 0; i < pts.size(); i++)\n+ {\n+ if (s.width < pts[i].x)\n```", + "output": "Thanks for the `cv::norm(..., NORM_INF)` suggestion! I'm not sure if I understood the _\"... but do all steps in test body.\"_ suggestion correctly, did you mean something this? ``` double getContour(segmentation::IntelligentScissorsMB& tool, const Point& target_point, std::vector& pts, const bool backward = false) { ... return cv::norm(pts, reference_pts, cv::NORM_INF); } ... EXPECT_EQ(getContour(tool, target_point, pts), 0); ... EXPECT_LE(getContour(tool, target_point, pts), PTS_EPS); ... ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/filter.simd.hpp\n**Change Type:** modified\n**Context:** PR #23567: Reformulated some pointer arithmetic to avoid (unsigned) overflow - Although unsigned overflow is well-defined by the C++ standard, it is o...\n\n**Code Changes:**\n```diff\n@@ -44,6 +44,8 @@\n #include \"opencv2/core/hal/intrin.hpp\"\n #include \"filter.hpp\"\n \n+#include \n+\n #if defined(CV_CPU_BASELINE_MODE)\n #if IPP_VERSION_X100 >= 710\n #define USE_IPP_SEP_FILTERS 1\n@@ -304,7 +306,7 @@ void FilterEngine__apply(FilterEngine& this_, const Mat& src, Mat& dst, const Si\n```", + "output": "@opencv-alalek please see my new solution." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect.hpp\n**Change Type:** modified\n**Context:** PR #23264: Add detect qr with aruco\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@\n #define OPENCV_OBJDETECT_HPP\n \n #include \"opencv2/core.hpp\"\n+#include \"opencv2/objdetect/aruco_detector.hpp\"\n \n /**\n @defgroup objdetect Object Detection\n@@ -763,28 +764,15 @@ class CV_EXPORTS_W QRCodeEncoder {\n \n```", + "output": "added `@brief`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect.hpp\n**Change Type:** modified\n**Context:** PR #23264: Add detect qr with aruco\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@\n #define OPENCV_OBJDETECT_HPP\n \n #include \"opencv2/core.hpp\"\n+#include \"opencv2/objdetect/aruco_detector.hpp\"\n \n /**\n @defgroup objdetect Object Detection\n@@ -763,28 +764,15 @@ class CV_EXPORTS_W QRCodeEncoder {\n \n```", + "output": "added `@brief`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/include/opencv2/objdetect.hpp\n**Change Type:** modified\n**Context:** PR #23264: Add detect qr with aruco\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@\n #define OPENCV_OBJDETECT_HPP\n \n #include \"opencv2/core.hpp\"\n+#include \"opencv2/objdetect/aruco_detector.hpp\"\n \n /**\n @defgroup objdetect Object Detection\n@@ -763,28 +764,15 @@ class CV_EXPORTS_W QRCodeEncoder {\n \n```", + "output": "update docs, added this: ``` /** @brief Methods for detecting QR codes: BASE, ARUCO_BASED */ enum class QRDetectMethod : int { BASE, ///< old QR detect method ARUCO_BASED ///< QR detect method, with detect finder pattern as ArUco marker // TODO: add WECHAT method ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #23662: Fix truncated sentenced in boxPoints documentation #22975\n\n**Code Changes:**\n```diff\n@@ -4086,7 +4086,7 @@ The function finds the four vertices of a rotated rectangle. This function is us\n rectangle. 
In C++, instead of using this function, you can directly use RotatedRect::points method. Please\n visit the @ref tutorial_bounding_rotated_ellipses \"tutorial on Creating Bounding rotated boxes and ellipses for contours\" for more information.\n \n-@param box The input rotated rectangle. It may be the output of\n+@param box The input rotated rectangle. It may be the output of @ref minAreaRect.\n @param points The output array of four vertices of rectangles.\n */\n CV_EXPORTS_W void boxPoints(RotatedRect box, OutputArray points);\n```", + "output": "```suggestion @param box The input rotated rectangle. It may be the output of @ref minAreaRect. ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/test/test_cuda.py\n**Change Type:** modified\n**Context:** PR #23371: `cuda`: Add bindings to allow `GpuMat` and `Stream` objects to be initialized from memory initialized in other libraries\n**Review Line:** 46\n\n**Code Changes:**\n```diff\n cuMat.upload(npMat)\n self.assertTrue(cuMat.cudaPtr() != 0)\n+ cuMatFromPtrSz = cv.cuda.createGpuMatFromCudaMemory(cuMat.size(),cuMat.type(),cuMat.cudaPtr(), cuMat.step)\n+ self.assertTrue(cuMat.cudaPtr() == cuMatFromPtrSz.cudaPtr())\n+ cuMatFromPtrRc = cv.cuda.createGpuMatFromCudaMemory(cuMat.size()[1],cuMat.size()[0],cuMat.type(),cuMat.cudaPtr(), cuMat.step)\n+ self.assertTrue(cuMat.cudaPtr() == cuMatFromPtrRc.cudaPtr())\n stream = cv.cuda_Stream()\n self.assertTrue(stream.cudaPtr() != 0)\n+ streamFromPtr = cv.cuda.wrapStream(stream.cudaPtr())\n+ self.assertTrue(stream.cudaPtr() == streamFromPtr.cudaPtr())\n asyncstream = cv.cuda_Stream(1) # cudaStreamNonBlocking\n```", + "output": "@asmorkalov we don't have python tests (and build) on CI with CUDA: https://github.com/opencv/opencv/actions/runs/4753621211/jobs/8445779900?pr=23371 (there is dnn test only)" + }, + { 
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/aruco_dict_utils.cpp\n**Change Type:** added\n**Context:** PR #23363: added charuco board generation to gen_pattern.py\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,348 @@\n+#include \n+#include \n+\n+using namespace cv;\n+using namespace std;\n+\n+static int _getSelfDistance(const Mat &marker) {\n+\n+ Mat bytes = aruco::Dictionary::getByteListFromBits(marker);\n```", + "output": "`cv::hal::normHamming` is used. @vovka643 Could you replace it with `cv::norm`?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_layers.cpp\n**Change Type:** modified\n**Context:** PR #23557: fix nary elementwise bug in cpu\n\n**Code Changes:**\n```diff\n@@ -1937,6 +1937,117 @@ INSTANTIATE_TEST_CASE_P(/**/, Layer_Test_Eltwise_unequal, Combine(\n dnnBackendsAndTargets()\n ));\n \n+\n+struct Layer_Test_Eltwise_bcast : testing::TestWithParam>>\n+{\n+public:\n+ void test_bcast()\n+ {\n```", + "output": "Thanks, I will change it" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23061: DNN: make GEMM can be supported with transA and transB in CUDA\n**Review Line:** 2059\n\n**Code Changes:**\n```diff\n@@ -2056,6 +2056,7 @@ void ONNXImporter::parseGemm(LayerParams& layerParams, const opencv_onnx::NodePr\n }\n \n layerParams.set(\"bias_term\", node_proto.input_size() == 3);\n+ layerParams.set(\"is_matmul\", true);\n addLayer(layerParams, node_proto);\n }\n \n```", + "output": "Thanks! This is really useful! I will remove this parameter later." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #22775: DNN: Make MatMul layer support 3D or 4D operation with const input\n**Review Line:** 2036\n\n**Code Changes:**\n```diff\n }\n \n-void ONNXImporter::parseMatMul(LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto)\n+void ONNXImporter::parseMatMul(LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto_)\n {\n+ opencv_onnx::NodeProto node_proto = node_proto_;\n CV_Assert(node_proto.input_size() == 2);\n layerParams.type = \"InnerProduct\";\n layerParams.set(\"bias_term\", false);\n@@ -2044,8 +2045,24 @@ void ONNXImporter::parseMatMul(LayerParams& layerParams, const opencv_onnx::Node\n {\n```", + "output": "@asmorkalov Some handlers in original code tries to hijack passed protobuf nodes - this is a bug (legacy design bug then we have large `if/else if` code instead of dedicated handlers - see #20478). Handlers are not intended to modify protobuf nodes (nobody expects/needs that), so `const reference` is correct." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #22828: DNN: make MatMul support 3D or 4D with broadcast\n**Review Line:** 924\n\n**Code Changes:**\n```diff\n@@ -921,6 +921,7 @@ TEST_P(Test_ONNX_layers, MatMul_init)\n testONNXModels(\"matmul_4d_init\");\n \n testONNXModels(\"matmul_init_2\");\n+ testONNXModels(\"matmul_init_bcast\");\n }\n \n TEST_P(Test_ONNX_layers, MatMulAdd)\n```", + "output": "There is failed OpenCL FP16 test: ``` [ RUN ] Test_ONNX_layers.MatMul_init/1, where GetParam() = OCV/OCL_FP16 [ INFO:0@189.433] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_2d_init'. Number of nodes = 1, initializers = 1, inputs = 2, outputs = 1 [ INFO:0@189.433] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17 [ INFO:0@189.433] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!output) from domain='ai.onnx' [ INFO:0@189.434] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_3d_init'. Number of nodes = 1, initializers = 1, inputs = 2, outputs = 1 [ INFO:0@189.434] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17 [ INFO:0@189.434] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!output) from domain='ai.onnx' [ INFO:0@189.434] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_4d_init'. 
Number of nodes = 1, initializers = 1, inputs = 2, outputs = 1 [ INFO:0@189.434] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17 [ INFO:0@189.434] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!output) from domain='ai.onnx' [ INFO:0@189.434] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_init_2'. Number of nodes = 2, initializers = 2, inputs = 3, outputs = 1 [ INFO:0@189.434] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17 [ INFO:0@189.434] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!outputY) from domain='ai.onnx' [ INFO:0@189.434] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [Add]:(onnx_node_output_0!output) from domain='ai.onnx' [ INFO:0@189.435] global onnx_importer.cpp:822 populateNet DNN/ONNX: loading ONNX v8 model produced by 'matmul_init_bcast'. 
Number of nodes = 1, initializers = 1, inputs = 2, outputs = 1 [ INFO:0@189.435] global onnx_importer.cpp:724 parseOperatorSet DNN/ONNX: ONNX opset version = 17 [ INFO:0@189.435] global onnx_importer.cpp:991 handleNode DNN/ONNX: processing node with 2 inputs and 1 outputs: [MatMul]:(onnx_node_output_0!output) from domain='ai.onnx' /build/precommit_opencl_linux/4.x/opencv/modules/dnn/test/test_common.impl.hpp:74: Failure Expected: (normL1) <= (l1), actual: 1.22411 vs 0.004 |ref| = 6.9979562759399414 /build/precommit_opencl_linux/4.x/opencv/modules/dnn/test/test_common.impl.hpp:77: Failure Expected: (normInf) <= (lInf), actual: 6.99796 vs 0.02 |ref| = 6.9979562759399414 [ INFO:0@189.435] global ts.cpp:850 testTearDown Memory_usage (OpenCL): 3960 (base=0 current=0) [ FAILED ] Test_ONNX_layers.MatMul_init/1, where GetParam() = OCV/OCL_FP16 (2 ms) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap.cpp\n**Change Type:** modified\n**Context:** PR #23290: 4.x deprecated VideoCapture backends\n\n**Code Changes:**\n```diff\n@@ -212,6 +212,19 @@ bool VideoCapture::open(const String& filename, int apiPreference, const std::ve\n CV_Error_(Error::StsError, (\"could not open '%s'\", filename.c_str()));\n }\n \n+ if (cv::videoio_registry::checkDeprecatedBackend(apiPreference))\n+ {\n+ CV_LOG_DEBUG(NULL,\n+ cv::format(\"VIDEOIO(%s): backend is removed from OpenCV\",\n+ cv::videoio_registry::getBackendName((VideoCaptureAPIs) apiPreference).c_str()));\n+ }\n```", + "output": "Triple copy-paste of that code. `getDeprecatedBackends()` should be transformed to `bool checkDeprecatedBackend(apiPreference)`. @asmorkalov Why I should comment that?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/model.cpp\n**Change Type:** modified\n**Context:** PR #22614: DNN: More stable DB text detection API\n\n**Code Changes:**\n```diff\n@@ -21,7 +21,7 @@ struct Model::Impl\n \n Size size;\n Scalar mean;\n- double scale = 1.0;\n+ Scalar scale = Scalar::all(1.0);\n bool swapRB = false;\n bool crop = false;\n Mat blob;\n@@ -60,7 +60,7 @@ struct Model::Impl\n```", + "output": "Hi @alalek, thanks for the code review. The code has been updated. Please take a look." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/nary_eltwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #23485: DNN: add ONNX where node support\n**Review Line:** 506\n\n**Code Changes:**\n```diff\n@@ -499,6 +502,120 @@ class NaryEltwiseLayerImpl CV_FINAL : public NaryEltwiseLayer\n f, scale, ninputs, max_ndims, shapes[0], inp, out, (const size_t **) steps, ptrs);\n }\n \n+ template \n+ void trinary_forward(const Functor& f, const std::vector& inputs, std::vector& outputs)\n+ {\n+ const Mat& a = inputs[0];\n+ const Mat& b = inputs[1];\n+ const Mat& c = inputs[2];\n+ Mat& out = outputs[0];\n```", + "output": "Hi, @rogday. Thanks for the code review. I'm not very familiar with this part of the code, can you give some examples in more detail so I can complete it?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #23485: DNN: add ONNX where node support\n**Review Line:** 2498\n\n**Code Changes:**\n```diff\n }\n \n+TEST_P(Test_ONNX_layers, where_node)\n+{\n+ testONNXModels(\"where_layer\");\n+}\n+\n INSTANTIATE_TEST_CASE_P(/**/, Test_ONNX_nets, dnnBackendsAndTargets());\n \n }} // namespace\n```", + "output": "\"image\" Hi @dkurt. Thanks for your code review, and I have updated the test case." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/src/aruco/aruco_board.cpp\n**Change Type:** modified\n**Context:** PR #23138: fix charuco matchImagePoints\n\n**Code Changes:**\n```diff\n@@ -27,17 +27,17 @@ struct Board::Impl {\n Impl(const Impl&) = delete;\n Impl& operator=(const Impl&) = delete;\n \n- virtual void matchImagePoints(InputArray detectedCorners, InputArray detectedIds, OutputArray _objPoints,\n+ virtual void matchImagePoints(InputArrayOfArrays detectedCorners, InputArray detectedIds, OutputArray _objPoints,\n OutputArray imgPoints) const;\n \n virtual void generateImage(Size outSize, OutputArray img, int marginSize, int borderBits) const;\n };\n```", + "output": "I suggest @asmorkalov to create new `CharucoBoard::matchImagePoints` with **InputArray** signature: ``` CV_WRAP void matchImagePoints(InputArray detectedCharuco, InputArray detectedIds, OutputArray objPoints, OutputArray imgPoints) const; ``` But in this case, two Python bindings functions are generated: - matchImagePoints(**InputArray** detectedCharuco, ...) - matchImagePoints(**InputArrayOfArrays** detectedCharuco, ...) To avoid a potential problem @asmorkalov suggest use only default `Board::matchImagePoints` method I don't know which way is better." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/tflite/tflite_importer.cpp\n**Change Type:** modified\n**Context:** PR #23409: Import and inference INT8 quantized TFLite model\n\n**Code Changes:**\n```diff\n@@ -61,8 +61,17 @@ class TFLiteImporter {\n void parseConcat(const Operator& op, const std::string& opcode, LayerParams& layerParams);\n void parseResize(const Operator& op, const std::string& opcode, LayerParams& layerParams);\n void parseDeconvolution(const Operator& op, const std::string& opcode, LayerParams& layerParams);\n-\n- int addPermuteLayer(const std::vector& order, const std::string& permName, const std::pair& inpId);\n+ void parseQuantize(const Operator& op, const std::string& opcode, LayerParams& layerParams);\n+ void parseDequantize(const Operator& op, const std::string& opcode, LayerParams& layerParams);\n+ void parseDetectionPostProcess(const Operator& op, const std::string& opcode, LayerParams& layerParams);\n+ void parseActivation(const Operator& op, const std::string& opcode, LayerParams& layerParams);\n```", + "output": "Thanks! I reworked both fused activations logic and created an `addLayer` method. Here is a graph for EfficientDet I used to verify that activations were added correctly: [efficientdet.dot](https://github.com/opencv/opencv/files/11298016/efficientdet.dot.txt) ([efficientdet.pdf](https://github.com/opencv/opencv/files/11298014/efficientdet.pdf)). Unfortunately, for this model they work as identity function and even with unconnected activations, test passes." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #22750: DNN: Add New API blobFromImageParam\n\n**Code Changes:**\n```diff\n@@ -108,6 +108,21 @@ CV__DNN_INLINE_NS_BEGIN\n DNN_TARGET_NPU,\n };\n \n+ /**\n+ * @brief Enum of data layout for model inference.\n+ * @see Image2BlobParams\n+ */\n+ enum DataLayout\n+ {\n```", + "output": "> BTW, any thoughts about blob -> tensor migration? Seems blob was widely used in Caffe only. I just found there is `Tensor` in `5.x`. Maybe in this PR, we can change the `blob`->`tensor`. Hi @vpisarev, any suggestions? Update: As a result of the discussion with Vadim, the current `tensor` used in `5.x` is also temporary, and this PR still uses `blob`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/tflite/tflite_importer.cpp\n**Change Type:** modified\n**Context:** PR #22750: DNN: Add New API blobFromImageParam\n\n**Code Changes:**\n```diff\n@@ -22,18 +22,6 @@ CV__DNN_INLINE_NS_BEGIN\n \n using namespace opencv_tflite;\n \n-// This values are used to indicate layer output's data layout where it's possible.\n-// Approach is similar to TensorFlow importer but TFLite models do not have explicit\n-// layout field \"data_format\". So we consider that all 4D inputs are in NHWC data layout.\n-enum DataLayout\n-{\n- DATA_LAYOUT_NHWC,\n```", + "output": "Hi @vpisarev, the `DataLayout` has been reused." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #22750: DNN: Add New API blobFromImageParam\n\n**Code Changes:**\n```diff\n@@ -108,6 +108,21 @@ CV__DNN_INLINE_NS_BEGIN\n DNN_TARGET_NPU,\n };\n \n+ /**\n+ * @brief Enum of data layout for model inference.\n+ * @see Image2BlobParams\n+ */\n+ enum DataLayout\n+ {\n```", + "output": "You can just use `@overload` in doxygen for all overloads." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/dnn_utils.cpp\n**Change Type:** modified\n**Context:** PR #22750: DNN: Add New API blobFromImageParam\n\n**Code Changes:**\n```diff\n@@ -11,8 +11,17 @@ namespace cv {\n namespace dnn {\n CV__DNN_INLINE_NS_BEGIN\n \n+Image2BlobParams::Image2BlobParams():scalefactor(Scalar::all(1.0)), size(Size()), mean(Scalar()), swapRB(false), ddepth(CV_32F),\n+ datalayout(DNN_LAYOUT_NCHW), paddingmode(DNN_PMODE_NULL)\n+{}\n \n-Mat blobFromImage(InputArray image, double scalefactor, const Size& size,\n+Image2BlobParams::Image2BlobParams(const Scalar& scalefactor_, const Size& size_, const Scalar& mean_, bool swapRB_,\n```", + "output": "Hi @vpisarev and @asmorkalov, what's the final decision of this hack?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #22750: DNN: Add New API blobFromImageParam\n**Review Line:** 123\n\n**Code Changes:**\n```diff\n+ DNN_LAYOUT_ND = 1, //!< OpenCV data layout for 2D data.\n+ DNN_LAYOUT_NCHW = 2, //!< OpenCV data layout for 4D data.\n+ DNN_LAYOUT_NCDHW = 3, //!< OpenCV data layout for 5D data.\n+ DNN_LAYOUT_NHWC = 4, //!< Tensorflow-like data layout for 4D data.\n+ DNN_LAYOUT_NDHWC = 5, //!< Tensorflow-like data layout for 5D data.\n+ DNN_LAYOUT_PLANAR = 6, //!< Tensorflow-like data layout, it should only be used at tf or tflite model parsing.\n+ };\n+\n CV_EXPORTS std::vector< std::pair > getAvailableBackends();\n CV_EXPORTS_W std::vector getAvailableTargets(dnn::Backend be);\n \n```", + "output": "Hi @dkurt and @vpisarev. What's the final decision of the `ND` and `PLANAR`? If the `PLANAR` is only used for 2D data, I think we can keep one of them." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_tiff.cpp\n**Change Type:** modified\n**Context:** PR #23433: imgcodecs: tiff: Support to encode for CV_32S with compression params\n\n**Code Changes:**\n```diff\n@@ -14,6 +14,38 @@ namespace opencv_test { namespace {\n #define int64 int64_hack_\n #include \"tiff.h\"\n \n+// Re-define Mat type as enum for showing on Google Test.\n+enum CV_ddtCn{\n+ _CV_8UC1 = CV_8UC1, _CV_8UC3 = CV_8UC3, _CV_8UC4 = CV_8UC4,\n+ _CV_8SC1 = CV_8SC1, _CV_8SC3 = CV_8SC3, _CV_8SC4 = CV_8SC4,\n+ _CV_16UC1 = CV_16UC1, _CV_16UC3 = CV_16UC3, _CV_16UC4 = CV_16UC4,\n+ _CV_16SC1 = CV_16SC1, _CV_16SC3 = CV_16SC3, _CV_16SC4 = CV_16SC4,\n```", + "output": "Thank you for your comment ! I'll commit to fix it." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_tiff.cpp\n**Change Type:** modified\n**Context:** PR #23433: imgcodecs: tiff: Support to encode for CV_32S with compression params\n\n**Code Changes:**\n```diff\n@@ -63,6 +63,9 @@ using namespace tiff_dummy_namespace;\n namespace cv\n {\n \n+// to extend cvtColor() to support CV_8S, CV_16S, CV_32S and CV_64F.\n+static void extend_cvtColor( InputArray _src, OutputArray _dst, int code );\n+\n #define CV_TIFF_CHECK_CALL(call) \\\n if (0 == (call)) { \\\n CV_LOG_WARNING(NULL, \"OpenCV TIFF(line \" << __LINE__ << \"): failed \" #call); \\\n```", + "output": "Thank you for your comment! ` CV_Assert(0 && \"Meaningful message\" ) ` seems better. Current PR removes copying from SRC to DST. So this problem will not contains." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_tiff.cpp\n**Change Type:** modified\n**Context:** PR #23433: imgcodecs: tiff: Support to encode for CV_32S with compression params\n**Review Line:** 1434\n\n**Code Changes:**\n```diff\n \n@@ -1424,6 +1431,57 @@ bool TiffEncoder::write( const Mat& img, const std::vector& params)\n return writeLibTiff(img_vec, params);\n }\n \n+static void extend_cvtColor( InputArray _src, OutputArray _dst, int code )\n+{\n+ CV_Assert( !_src.empty() );\n+ CV_Assert( _src.dims() == 2 );\n+\n+ // This function extend_cvtColor reorders the src channels with only thg limited condition.\n```", + "output": "Thank you for your comment! I thought splitting from Src to 3 planes(B/G/R) with `cv::split` and merging 3 planes to Dst with `cv::merge` use more memory/computing resource. I referenced OpenCV manual again. 
I think cv::mixChannels is suitable to this purpose. I think this cost result is acceptable. (This results are tested under virtual machine) - src image is 1920x1080, a half of upper is random, a harf of lower is 0-filled. - Value is the medium value of 10 trials. - Time contains create src image, encode to buffer and decode to dst image. - unis is \"mill sec\" - If (b)/(a) is smaller, Copy from Src to Dst is faster than mixChannel. | Mat Type | (a)Copy | (b)mixChannel | (b)/(a) | | -- | --: | --: | --: | | CV_8SC1 | 91 | 91 | 99% | | CV_8SC3 | 222 | 231 | 96% | | CV_8SC4 | 285 | 294 | 97% | | CV_16SC1 | 96 | 97 | 99% | | CV_16SC3 | 272 | 279 | 97% | | CV_16SC4 | 353 | 367 | 96% | | CV_32SC1 | 125 | 125 | 100% | | CV_32SC3 | 357 | 367 | 97% | | CV_32SC4 | 469 | 494 | 95% | | CV_64FC1 | 129 | 131 | 98% | | CV_64FC3 | 526 | 546 | 96% | | CV_64FC4 | 684 | 707 | 97% |" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_tiff.cpp\n**Change Type:** modified\n**Context:** PR #23433: imgcodecs: tiff: Support to encode for CV_32S with compression params\n\n**Code Changes:**\n```diff\n@@ -14,6 +14,38 @@ namespace opencv_test { namespace {\n #define int64 int64_hack_\n #include \"tiff.h\"\n \n+// Re-define Mat type as enum for showing on Google Test.\n+enum CV_ddtCn{\n+ _CV_8UC1 = CV_8UC1, _CV_8UC3 = CV_8UC3, _CV_8UC4 = CV_8UC4,\n+ _CV_8SC1 = CV_8SC1, _CV_8SC3 = CV_8SC3, _CV_8SC4 = CV_8SC4,\n+ _CV_16UC1 = CV_16UC1, _CV_16UC3 = CV_16UC3, _CV_16UC4 = CV_16UC4,\n+ _CV_16SC1 = CV_16SC1, _CV_16SC3 = CV_16SC3, _CV_16SC4 = CV_16SC4,\n```", + "output": "Thank you, I'll remove it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #17499: Fix cuda11 - ``` force_builders=Custom buildworker:Custom=linux-4 build_image:Custom=ubuntu-cuda11:18.04 Xbuild_image:Custom=ubuntu-cuda11:...\n**Review Line:** 25\n\n**Code Changes:**\n```diff\n add_definitions(-DCV_OCL4DNN=1)\n endif()\n \n-ocv_option(OPENCV_DNN_CUDA \"Build with CUDA support\" HAVE_CUDA AND HAVE_CUBLAS AND HAVE_CUDNN)\n+if(NOT DEFINED OPENCV_DNN_CUDA AND HAVE_CUDNN AND CUDNN_VERSION VERSION_LESS 8.0)\n+ message(STATUS \"DNN: CUDNN 8.0 is not supported yes. Details: https://github.com/opencv/opencv/issues/17496\")\n+endif()\n+ocv_option(OPENCV_DNN_CUDA \"Build with CUDA support\"\n+ HAVE_CUDA\n+ AND HAVE_CUBLAS\n+ AND HAVE_CUDNN\n```", + "output": "Thanks for this , still i am getting error \"OpenCV/modules/cudacodec/src/precomp.hpp:59:14: fatal error: nvcuvid.h: No such file or directory 59 | #include \"" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/region_layer.cpp\n**Change Type:** modified\n**Context:** PR #22660: Fix objectness is not assigned in dnn::region_layer\n**Review Line:** 323\n\n**Code Changes:**\n```diff\n dstData[box_index + 2] = (srcData[box_index + 2]) * (srcData[box_index + 2]) * 4 * biasData[2 * a] / wNorm;\n dstData[box_index + 3] = (srcData[box_index + 3]) * (srcData[box_index + 3]) * 4 * biasData[2 * a + 1] / hNorm;\n+ dstData[box_index + 4] = srcData[p_index];\n \n scale = srcData[p_index];\n if (classfix == -1 && scale < thresh)\n```", + "output": "1. The `dstData[index + 4]` is uninitialized if `new_coords=1`, but initialized if `new_coords=0`. I think it is not good. 2. 
If you have `dstData[index + 4]` values, you can optimize NMS by filtering some boxes (`dstData[index + 4] < score_thresh`), no need to find the maximum of all classes probabilities and check if it greater than `score_thresh ` for each box. ping @AlexeyAB" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/highgui/src/roiSelector.cpp\n**Change Type:** modified\n**Context:** PR #23196: added argument to print notice in `roiSelector.cpp`\n\n**Code Changes:**\n```diff\n@@ -13,11 +13,14 @@ namespace\n class ROISelector\n {\n public:\n- Rect select(const String &windowName, Mat img, bool showCrossair = true, bool fromCenter = true)\n+ Rect select(const String &windowName, Mat img, bool showCrossair = true, bool fromCenter = true, bool printNotice = true)\n {\n- // show notice to user\n- printf(\"Select a ROI and then press SPACE or ENTER button!\\n\");\n- printf(\"Cancel the selection process by pressing c button!\\n\");\n```", + "output": "Thanks, I've made the required changes" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #17858: added depth-wise convolution; gives ~20-40% performance improvement on MobileSSD networks\n\n**Code Changes:**\n```diff\n@@ -833,6 +833,7 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n bool useAVX;\n bool useAVX2;\n bool useAVX512;\n+ int blk_size_cn;\n \n ParallelConv()\n : input_(0), weights_(0), output_(0), ngroups_(0), nstripes_(0),\n@@ -889,12 +890,17 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n p.useAVX2 = checkHardwareSupport(CPU_AVX2) && isConv2D;\n```", + "output": "@dkurt, thank you! I think, you are right. Since `inpCn` and `outCn` are computed as `input_->size[1]/ngroups` and `output->size[1]/ngroups`, respectively, the condition should be `ngroups > 1 && inpCn == 1 && outCn == 1`, where the latter two parts are already there. So we should just remove `batchSize == ngroups` check. I'm now improving the patch, will put in this change as well." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_mat.cpp\n**Change Type:** modified\n**Context:** PR #22346: Changes separated from Mat 1D support in core #18594\n\n**Code Changes:**\n```diff\n@@ -2354,6 +2354,96 @@ TEST(Mat, regression_18473)\n EXPECT_EQ((int)5, (int)m.at(19, 49, 99));\n }\n \n+// FITIT: remove DISABLE_ when 1D Mat is supported\n+TEST(Mat1D, DISABLED_basic)\n+{\n+ std::vector sizes { 100 };\n+ Mat m1(sizes, CV_8UC1, Scalar::all(5));\n+ m1.at(50) = 10;\n```", + "output": "Thank you! Let me have a try." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/minmax.cpp\n**Change Type:** modified\n**Context:** PR #22346: Changes separated from Mat 1D support in core #18594\n\n**Code Changes:**\n```diff\n@@ -1565,13 +1565,24 @@ void cv::minMaxLoc( InputArray _img, double* minVal, double* maxVal,\n {\n CV_INSTRUMENT_REGION();\n \n- CV_Assert(_img.dims() <= 2);\n+ int dims = _img.dims();\n+ CV_CheckLE(dims, 2, \"\");\n \n minMaxIdx(_img, minVal, maxVal, (int*)minLoc, (int*)maxLoc, mask);\n if( minLoc )\n```", + "output": "Done. Thank you." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/python/multiview_calibration.py\n**Change Type:** added\n**Context:** PR #22363: add multiview calibration [GSOC 2022]\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,736 @@\n+# This file is part of OpenCV project.\n+# It is subject to the license terms in the LICENSE file found in the top-level directory\n+# of this distribution and at http://opencv.org/license.html.\n+\n+import argparse\n+import glob\n+import json\n+import multiprocessing\n+import os\n```", + "output": "> @ivashmak Please create PR with test data to opencv_extra with the same branch name. XMLs with board points are missing. Here is PR: https://github.com/opencv/opencv_extra/pull/1001" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib/include/opencv2/calib.hpp\n**Change Type:** modified\n**Context:** PR #22363: add multiview calibration [GSOC 2022]\n\n**Code Changes:**\n```diff\n@@ -1122,6 +1122,47 @@ CV_EXPORTS_W double stereoCalibrate( InputArrayOfArrays objectPoints,\n OutputArray perViewErrors, int flags = CALIB_FIX_INTRINSIC,\n TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 1e-6) );\n \n+/** @brief Estimates intrinsics and extrinsics (camera pose) for multi-camera system a.k.a multiview calibraton.\n+\n+@param[in] objPoints Calibration pattern object points. Expected shape: NUM_FRAMES x NUM_POINTS x 3. Supported data type: CV_32F.\n+@param[in] imagePoints Detected pattern points on camera images. Expected shape: NUM_CAMERAS x NUM_FRAMES x NUM_POINTS x 2.\n+@param[in] imageSize Images resolution.\n+@param[in] detectionMask Pattern detection mask. Each value defines if i-camera observes calibration pattern in j moment of time.\n```", + "output": "Use `@return` for return value. It should be after parameters list." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/calib3d/camera_multiview_calibration/multiview_calibration.markdown\n**Change Type:** added\n**Context:** PR #22363: add multiview calibration [GSOC 2022]\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,252 @@\n+Multi-view Camera Calibration Tutorial {#tutorial_multiview_camera_calibration}\n+==========================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_interactive_calibration}\n+\n+| | |\n+| -: | :- |\n```", + "output": "> https://docs.opencv.org/ Links on docs.opencv.org should be avoided. Use `@ref` instead." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/scatterND_layer.cpp\n**Change Type:** added\n**Context:** PR #22529: DNN: supports Scatter and ScatterND from ONNX\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,202 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \"layers_common.hpp\"\n+\n+#include // for std::max & std::min\n+\n```", + "output": "Thank you for your very insightful suggestion! Now the time of ScatterND has decreased half." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_conformance.cpp\n**Change Type:** modified\n**Context:** PR #23343: DNN Test ONNX: Fix the logic of the test case\n\n**Code Changes:**\n```diff\n@@ -937,6 +937,7 @@ class Test_ONNX_conformance : public TestWithParam\n \n static std::set parser_deny_list;\n static std::set global_deny_list;\n+ static std::set opencv_deny_list;\n static std::set opencl_fp16_deny_list;\n static std::set opencl_deny_list;\n static std::set cpu_deny_list;\n@@ -1001,6 +1002,10 @@ class Test_ONNX_conformance : public TestWithParam\n #include \"test_onnx_conformance_layer_filter_opencv_all_denylist.inl.hpp\"\n```", + "output": "Hi @asmorkalov thanks for the code reviewing. To my knowledge, the `cpu_deny_list` is made for OpenCV backend. I'm not sure about it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/nary_eltwise_layers.cpp\n**Change Type:** added\n**Context:** PR #21865: Reimplementation of Element-wise layers with broadcasting support\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,664 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \"layers_common.hpp\"\n+#include \n+\n+#include \n```", + "output": "Yup, thanks, will do." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21865: Reimplementation of Element-wise layers with broadcasting support\n\n**Code Changes:**\n```diff\n@@ -63,10 +63,17 @@ class ONNXImporter\n LayerInfo(int _layerId = 0, int _outputId = 0) : layerId(_layerId), outputId(_outputId) {}\n };\n \n+ struct TensorInfo {\n+ int real_ndims;\n+ TensorInfo(int _real_ndims = 0) : real_ndims(_real_ndims) {}\n+ };\n+\n std::map getGraphTensors(\n```", + "output": "@rogday There are two such cases in ONNX conformance: `Test_ONNX_conformance.Layer_Test/test_div_bcast_OCV_CPU` and `Test_ONNX_conformance.Layer_Test/test_mul_bcast_OCV_CPU`. To be honest, we can disable these two tests and turn them on when we have 1d Mat support. The current implementation without custom div and mul layers can pass all other tests of layers and nets (check [this log](https://github.com/opencv/opencv/actions/runs/2589842789)). Practically speaking, I would call the two tests rare cases, since internals being 1d tensors are rare for conv nets."
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21865: Reimplementation of Element-wise layers with broadcasting support\n\n**Code Changes:**\n```diff\n@@ -63,10 +63,17 @@ class ONNXImporter\n LayerInfo(int _layerId = 0, int _outputId = 0) : layerId(_layerId), outputId(_outputId) {}\n };\n \n+ struct TensorInfo {\n+ int real_ndims;\n+ TensorInfo(int _real_ndims = 0) : real_ndims(_real_ndims) {}\n+ };\n+\n std::map getGraphTensors(\n```", + "output": "Tested on my side with 2 inputs case and 3 inputs case and worked great. Thanks for the code. By the way, should we add 3 inputs test case for NaryEltwise in test_layers?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21865: Reimplementation of Element-wise layers with broadcasting support\n\n**Code Changes:**\n```diff\n@@ -63,10 +63,17 @@ class ONNXImporter\n LayerInfo(int _layerId = 0, int _outputId = 0) : layerId(_layerId), outputId(_outputId) {}\n };\n \n+ struct TensorInfo {\n+ int real_ndims;\n+ TensorInfo(int _real_ndims = 0) : real_ndims(_real_ndims) {}\n+ };\n+\n std::map getGraphTensors(\n```", + "output": "Great, thank you! I don't think it's necessary at this point to be honest. If you disagree, I can create a PR to opencv_extra, no problem (so that our CI can properly merge and test everything)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/features2d/src/sift.dispatch.cpp\n**Change Type:** modified\n**Context:** PR #23124: different interpolation by double image\n**Review Line:** 193\n\n**Code Changes:**\n```diff\n CV_TRACE_FUNCTION();\n \n@@ -188,12 +191,22 @@ static Mat createInitialImage( const Mat& img, bool doubleImageSize, float sigma\n if( doubleImageSize )\n {\n sig_diff = sqrtf( std::max(sigma * sigma - SIFT_INIT_SIGMA * SIFT_INIT_SIGMA * 4, 0.01f) );\n+\n Mat dbl;\n+ if (enable_precise_upscale) {\n+ dbl.create(Size(gray_fpt.cols*2, gray_fpt.rows*2), gray_fpt.type());\n+ Mat H = Mat::zeros(2, 3, CV_32F);\n```", + "output": "> INTER_LINEAR_EXACT The main problem here is that we have already declared the \"bit-exact\" implementation for SIFT. Silently replacing such an implementation with a new one is a regression in terms of API interface stability. I believe we should have an option for enabling the fixed version and a warning about the deprecated behavior. /cc @vpisarev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/features2d/src/sift.dispatch.cpp\n**Change Type:** modified\n**Context:** PR #23124: different interpolation by double image\n**Review Line:** 202\n\n**Code Changes:**\n```diff\n+ dbl.create(Size(gray_fpt.cols*2, gray_fpt.rows*2), gray_fpt.type());\n+ Mat H = Mat::zeros(2, 3, CV_32F);\n+ H.at(0, 0) = 0.5f;\n+ H.at(1, 1) = 0.5f;\n+\n+ cv::warpAffine(gray_fpt, dbl, H, dbl.size(), INTER_LINEAR | WARP_INVERSE_MAP, BORDER_REFLECT);\n+ } else {\n #if DoG_TYPE_SHORT\n- resize(gray_fpt, dbl, Size(gray_fpt.cols*2, gray_fpt.rows*2), 0, 0, INTER_LINEAR_EXACT);\n+ resize(gray_fpt, dbl, Size(gray_fpt.cols*2, gray_fpt.rows*2), 0, 0, INTER_LINEAR_EXACT);\n #else\n```", + "output": "Discussed with @vpisarev : we still need LINEAR interpolation to preserve image gradients, so `INTER_NEAREST` can't be used here." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/features2d/test/test_descriptors_regression.impl.hpp\n**Change Type:** modified\n**Context:** PR #23124: different interpolation by double image\n\n**Code Changes:**\n```diff\n@@ -7,6 +7,34 @@ namespace opencv_test { namespace {\n /****************************************************************************************\\\n * Regression tests for descriptor extractors. *\n \\****************************************************************************************/\n+static void double_image(Mat& src, Mat& dst) {\n+\n+ dst.create(Size(src.cols*2, src.rows*2), src.type());\n+\n+ Mat H = Mat::zeros(2, 3, CV_32F);\n+ H.at(0, 0) = 0.5f;\n```", + "output": "@alalek, nearest neighbor resize is used here. It should be bit exact. 
So the suggested method to compare image and downsized_back image is correct" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/test/test_gstreamer.cpp\n**Change Type:** modified\n**Context:** PR #22919: Timeouts support for GStreamer backend\n\n**Code Changes:**\n```diff\n@@ -151,4 +151,31 @@ TEST(videoio_gstreamer, gray16_writing)\n EXPECT_EQ(0, remove(temp_file.c_str()));\n }\n \n+TEST(videoio_gstreamer, timeout_property)\n+{\n+ if (!videoio_registry::hasBackend(CAP_GSTREAMER))\n+ throw SkipTestException(\"GStreamer backend was not found\");\n+\n+ VideoCapture cap;\n```", + "output": "If `cap.get(CAP_PROP_READ_TIMEOUT_MSEC)` is zero then we should bailout or skip this tests (as GStreamer version is very old). Currently it fails: ``` [ RUN ] videoio_gstreamer.timeout_property [ WARN:0@10.679] global cap_gstreamer.cpp:1427 open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1 [ INFO:0@10.679] global cap_gstreamer.cpp:1126 open OpenCV | GStreamer: videotestsrc ! appsink [ INFO:0@10.679] global cap_gstreamer.cpp:1159 open OpenCV | GStreamer: mode - MANUAL /build/precommit_custom_linux/4.x/opencv/modules/videoio/test/test_gstreamer.cpp:165: Failure The difference between default_timeout and cap.get(CAP_PROP_READ_TIMEOUT_MSEC) is 30000, which exceeds 1e-3, where default_timeout evaluates to 30000, cap.get(CAP_PROP_READ_TIMEOUT_MSEC) evaluates to 0, and 1e-3 evaluates to 0.001. 
[ WARN:0@10.680] global cap_gstreamer.cpp:1772 setProperty OpenCV | GStreamer warning: GStreamer before 1.10 does not support read timeout /build/precommit_custom_linux/4.x/opencv/modules/videoio/test/test_gstreamer.cpp:169: Failure The difference between read_timeout and cap.get(CAP_PROP_READ_TIMEOUT_MSEC) is 1234, which exceeds 1e-3, where read_timeout evaluates to 1234, cap.get(CAP_PROP_READ_TIMEOUT_MSEC) evaluates to 0, and 1e-3 evaluates to 0.001. [ FAILED ] videoio_gstreamer.timeout_property (2 ms) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/ts/src/ts.cpp\n**Change Type:** modified\n**Context:** PR #16189: Enable 2-channel arrays in resize tests\n**Review Line:** 624\n\n**Code Changes:**\n```diff\n@@ -621,20 +621,27 @@ void TS::set_gtest_status()\n \n void TS::update_context( BaseTest* test, int test_case_idx, bool update_ts_context )\n {\n+ CV_UNUSED(update_ts_context);\n+\n if( current_test_info.test != test )\n {\n for( int i = 0; i <= CONSOLE_IDX; i++ )\n output_buf[i] = string();\n```", + "output": "@alalek , do you remember whether the changes in this file are necessary? It seems that some calib3d and color conversion tests fail due to this modification." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/features2d/include/opencv2/features2d.hpp\n**Change Type:** modified\n**Context:** PR #21942: added blob contours to blob detector\n\n**Code Changes:**\n```diff\n@@ -107,6 +107,10 @@ class CV_EXPORTS KeyPointsFilter\n * Remove keypoints from some image by mask for pixels of this image.\n */\n static void runByPixelsMask( std::vector& keypoints, const Mat& mask );\n+ /*\n+ * Remove objects from some image and a vector of points by mask for pixels of this image\n+ */\n+ static void runByPixelsMask2VectorPoint(std::vector &keypoints, std::vector > &removeFrom, const Mat &mask);\n /*\n * Remove duplicated keypoints.\n```", + "output": "@alalek I added a stub implementation with `CV_Error`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/features2d/homography/homography.markdown\n**Change Type:** modified\n**Context:** PR #22130: Update homography tutorial with a small exercise\n**Review Line:** 423\n\n**Code Changes:**\n```diff\n \n+#### Exercise\n+\n+This demo shows you how to compute the homography transformation from two camera poses. Try to perform the same operations, but by computing N inter homography this time. 
Instead of computing one homography to directly warp the source image to the desired camera viewpoint, perform N warping operations to the see the different transformations operating.\n+\n+You should get something similar to this video:\n+\n ### Demo 4: Decompose the homography matrix {#tutorial_homography_Demo4}\n \n OpenCV 3 contains the function @ref cv::decomposeHomographyMat which allows to decompose the homography matrix to a set of rotations, translations and plane normals.\n```", + "output": "To be updated with the YouTube ID: ```suggestion You should get something similar to this video: @youtube{ViPN810E0SU} ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #22128: [GSoC 2022] Multipage Image Decoder API\n**Review Line:** 1029\n\n**Code Changes:**\n```diff\n+\n+ImageCollection::Impl::Impl(std::string const& filename, int flags) {\n+ this->init(filename, flags);\n+}\n+\n+void ImageCollection::Impl::init(String const& filename, int flags) {\n+ m_filename = filename;\n+ m_flags = flags;\n+\n+#ifdef HAVE_GDAL\n+ if (m_flags != IMREAD_UNCHANGED && (m_flags & IMREAD_LOAD_GDAL) == IMREAD_LOAD_GDAL) {\n```", + "output": "Good point thanks, I missed that. Will fix it asap." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #22128: [GSoC 2022] Multipage Image Decoder API\n**Review Line:** 344\n\n**Code Changes:**\n```diff\n+\n+The performance of page decoding is O(1) if collection is increment sequentially. 
If the user wants to access random page,\n+then the time Complexity is O(n) because the collection has to be reinitialized every time in order to go to the correct page.\n+However, the intermediate pages are not decoded during the process, so typically it's quite fast.\n+This is required because multipage codecs does not support going backwards.\n+After decoding the one page, it is stored inside the collection cache. Hence, trying to get Mat object from already decoded page is O(1).\n+If you need memory, you can use .releaseCache() method to release cached index.\n+The space complexity is O(n) if all pages are decoded into memory. The user is able to decode and release images on demand.\n+*/\n+class CV_EXPORTS ImageCollection {\n+public:\n```", + "output": "@asmorkalov, this can be made optional very easily. Just call release/releaseCache() after you finished processing of each image. I think, it's quite simple and yet good enough workaround. I suggest not to make the API more complex at the moment" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #22128: [GSoC 2022] Multipage Image Decoder API\n\n**Code Changes:**\n```diff\n@@ -54,6 +54,8 @@\n #include \n #include \n #include \n+#include \n+\n \n \n /****************************************************************************************\\\n@@ -658,57 +660,14 @@ bool imreadmulti(const String& filename, std::vector& mats, int start, int\n```", + "output": "@asmorkalov you marked this as resolved. In the todays meeting, Vadim also agreed with you to rename the method name. I got confused why you marked this as resolved. I was going to rename it. Should I?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #22128: [GSoC 2022] Multipage Image Decoder API\n\n**Code Changes:**\n```diff\n@@ -54,6 +54,8 @@\n #include \n #include \n #include \n+#include \n+\n \n \n /****************************************************************************************\\\n@@ -658,57 +660,14 @@ bool imreadmulti(const String& filename, std::vector& mats, int start, int\n```", + "output": "My fault. I resolved wrong discussion. Renamed method looks better, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #22128: [GSoC 2022] Multipage Image Decoder API\n**Review Line:** 1223\n\n**Code Changes:**\n```diff\n+ return &m_pCollection->getImpl()->operator[](m_curr);\n+}\n+\n+ImageCollection::iterator& ImageCollection::iterator::operator++() {\n+ if(m_pCollection->pImpl->currentIndex() == m_curr) {\n+ m_pCollection->pImpl->advance();\n+ }\n+ m_curr++;\n+ return *this;\n+}\n+\n```", + "output": "@ocpalo The question is still open. I suspect, that it's a bug." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #22128: [GSoC 2022] Multipage Image Decoder API\n**Review Line:** 1223\n\n**Code Changes:**\n```diff\n+ return &m_pCollection->getImpl()->operator[](m_curr);\n+}\n+\n+ImageCollection::iterator& ImageCollection::iterator::operator++() {\n+ if(m_pCollection->pImpl->currentIndex() == m_curr) {\n+ m_pCollection->pImpl->advance();\n+ }\n+ m_curr++;\n+ return *this;\n+}\n+\n```", + "output": "Vadim also suspected that this is a bug, but I do not think so. He requested a test to break this; I implemented it and it seems to work fine. I did not include it in the comment because it was not a well-written test. @asmorkalov If you can think of a specific scenario that breaks this, please tell me, so I can include it in the unit tests and see whether it is broken or not. The idea is that only iterators can advance to the next page, and only when the requirement that the iterator's internal counter equals the current page index is satisfied. Since iterators should not pass `end()`, `nextPage()` is not called if this requirement is not satisfied, so the file status is protected. The iterator count is incremented in any case, which is required. I will write a detailed description of how that incrementing index works tonight." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #22128: [GSoC 2022] Multipage Image Decoder API\n\n**Code Changes:**\n```diff\n@@ -332,6 +332,51 @@ CV_EXPORTS_W bool haveImageReader( const String& filename );\n */\n CV_EXPORTS_W bool haveImageWriter( const String& filename );\n \n+/** @brief To read Multi Page images on demand\n+\n+The ImageCollection class provides iterator API to read multi page images on demand. Create iterator\n+to the collection of the images and iterate over the collection. Decode the necessary page with operator*.\n+\n+The performance of page decoding is O(1) if collection is increment sequentially. If the user wants to access random page,\n```", + "output": "In your example, `auto` is deduced to `Mat`. If user wants to get reference to Mat object, he needs `auto&`. Anyway, it is still safe. Thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_obsensor/obsensor_stream_channel_interface.hpp\n**Change Type:** added\n**Context:** PR #22196: videoio: add support for obsensor (Orbbec RGB-D Camera )\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,103 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+/*\n+* Copyright(C) 2022 by ORBBEC Technology., Inc.\n+* Authors:\n+* Huang Zhenchang \n+*\n```", + "output": "Namespaces do not need to be indented, here and in the file below. 
Please check the [Code Style of OpenCV](https://github.com/opencv/opencv/wiki/Coding_Style_Guide)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/include/opencv2/videoio.hpp\n**Change Type:** modified\n**Context:** PR #22196: videoio: add support for obsensor (Orbbec RGB-D Camera )\n**Review Line:** 664\n\n**Code Changes:**\n```diff\n+ @{\n+*/\n+//! OBSENSOR data given from image generator\n+enum VideoCaptureOBSensorDataType{ \n+ CAP_OBSENSOR_DEPTH_MAP = 0, //!< Depth values in mm (CV_16UC1)\n+ CAP_OBSENSOR_BGR_IMAGE = 1, //!< Data given from BGR stream generator\n+ CAP_OBSENSOR_IR_IMAGE = 2 //!< Data given from IR stream generator(CV_16UC1)\n+};\n+\n+//! OBSENSOR stream generator\n+enum VideoCaptureOBSensorGenerators{ \n```", + "output": "@vpisarev Vadim, could you take a look on this and put a comment?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/include/opencv2/videoio.hpp\n**Change Type:** modified\n**Context:** PR #22196: videoio: add support for obsensor (Orbbec RGB-D Camera )\n**Review Line:** 664\n\n**Code Changes:**\n```diff\n+ @{\n+*/\n+//! OBSENSOR data given from image generator\n+enum VideoCaptureOBSensorDataType{ \n+ CAP_OBSENSOR_DEPTH_MAP = 0, //!< Depth values in mm (CV_16UC1)\n+ CAP_OBSENSOR_BGR_IMAGE = 1, //!< Data given from BGR stream generator\n+ CAP_OBSENSOR_IR_IMAGE = 2 //!< Data given from IR stream generator(CV_16UC1)\n+};\n+\n+//! OBSENSOR stream generator\n+enum VideoCaptureOBSensorGenerators{ \n```", + "output": "@alalek, @hzcyf, I can confirm that for OpenNI and Intel RealSense (PerC) cameras depth map is the default stream. 
From this point of view the new Orbbec API follows this tradition. I think, it should be retained as-is." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/include/opencv2/videoio.hpp\n**Change Type:** modified\n**Context:** PR #22196: videoio: add support for obsensor (Orbbec RGB-D Camera )\n**Review Line:** 664\n\n**Code Changes:**\n```diff\n+ @{\n+*/\n+//! OBSENSOR data given from image generator\n+enum VideoCaptureOBSensorDataType{ \n+ CAP_OBSENSOR_DEPTH_MAP = 0, //!< Depth values in mm (CV_16UC1)\n+ CAP_OBSENSOR_BGR_IMAGE = 1, //!< Data given from BGR stream generator\n+ CAP_OBSENSOR_IR_IMAGE = 2 //!< Data given from IR stream generator(CV_16UC1)\n+};\n+\n+//! OBSENSOR stream generator\n+enum VideoCaptureOBSensorGenerators{ \n```", + "output": "@alalek, there is already such situation with OpenNI and PerC and so far it did not cause any major problems. Let me merge this PR" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_v4l.cpp\n**Change Type:** modified\n**Context:** PR #22236: V4L2: Add multi-planar capture support\n\n**Code Changes:**\n```diff\n@@ -286,6 +286,12 @@ typedef uint32_t __u32;\n #define MAX_V4L_BUFFERS 10\n #define DEFAULT_V4L_BUFFERS 4\n \n+// types of memory in 'special' buffer\n+enum {\n+ MEMORY_ORIG = 0, // Image data in original format.\n+ MEMORY_RGB = 1, // Image data converted to RGB format.\n+};\n+\n```", + "output": "The assert breaks behavior with existing cameras. 
Laplace example log for my Genius and Logitech webcam: ``` ./example_cpp_laplace -c=0 This program demonstrates Laplace point/edge detection using OpenCV function Laplacian() It captures from the camera of your choice: 0, 1, ... default 0 Call: ./example_cpp_laplace -c= -p= [ERROR:0@0.080] global /home/alexander/Projects/OpenCV/opencv-master/modules/videoio/src/cap.cpp (280) open VIDEOIO(V4L2): raised OpenCV exception: OpenCV(4.6.0-dev) /home/alexander/Projects/OpenCV/opencv-master/modules/videoio/src/cap_v4l.cpp:892: error: (-215:Assertion failed) 1 < num_planes in function 'createBuffers' Can't open camera/video stream: 0 ``` The assertion should be with `<=`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_v4l.cpp\n**Change Type:** modified\n**Context:** PR #22236: V4L2: Add multi-planar capture support\n\n**Code Changes:**\n```diff\n@@ -286,6 +286,12 @@ typedef uint32_t __u32;\n #define MAX_V4L_BUFFERS 10\n #define DEFAULT_V4L_BUFFERS 4\n \n+// types of memory in 'special' buffer\n+enum {\n+ MEMORY_ORIG = 0, // Image data in original format.\n+ MEMORY_RGB = 1, // Image data converted to RGB format.\n+};\n+\n```", + "output": "Thank you for your review. I fixed it and confirmed the operation with both i.MX 8M (multi-planar) and UVC camera (single-planar)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_graph_simplifier.cpp\n**Change Type:** modified\n**Context:** PR #22311: DNN: add another two Mish activation to onnx_graph_simplifier\n**Review Line:** 561\n\n**Code Changes:**\n```diff\n+ int addVal = addNodeToMatch(\"\");\n+ int add = addNodeToMatch(\"Add\", exp, addVal);\n+ addNodeToMatch(\"Log\", add);\n+ setFusedNode(\"Softplus\", input);\n+ }\n+};\n+\n class MulCastSubgraph : public Subgraph\n {\n public:\n@@ -734,6 +763,8 @@ void simplifySubgraphs(opencv_onnx::GraphProto& net)\n```", + "output": "Thanks for code reviewing! Actually, I have verified it. And the `SoftplusSubgraph` can work on [Yolov4](https://drive.google.com/file/d/1WBZO-YCqYTO73j8w_soMPD3ljp8fAD_-/view?usp=sharing), and the `SoftplusSubgraph2` can only work on [this test case](https://github.com/opencv/opencv_extra/pull/990). > You can make a small onnx model of both cases. I also tried to generate another case Mish case like Yolov4. But it fails. Probably due to this YoloV4 being converted from TensorFlow model." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_graph_simplifier.cpp\n**Change Type:** modified\n**Context:** PR #22337: DNN: load fp16 ONNX model as fp32\n\n**Code Changes:**\n```diff\n@@ -767,11 +767,67 @@ Mat getMatFromTensor(const opencv_onnx::TensorProto& tensor_proto)\n Mat(sizes, CV_32FC1, val).copyTo(blob);\n }\n }\n+ else if (datatype == opencv_onnx::TensorProto_DataType_FLOAT16)\n+ {\n+ // FIXME, for now, we only load FP16 Tensor as FP32 Mat, full support for FP16 is required in the future.\n+ CV_LOG_ONCE_WARNING(NULL, \"DNN: load FP16 model as FP32 model, and it takes twice the FP16 RAM requirement.\");\n+\n+ // ONNX saves float 16 data in two format: int32 and raw_data.\n```", + "output": "Hi @rogday. A model may output hundreds of lines of warning, which is too messy. I think one line of warning is enough to let the user know what's going on inside." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_graph_simplifier.cpp\n**Change Type:** modified\n**Context:** PR #22337: DNN: load fp16 ONNX model as fp32\n\n**Code Changes:**\n```diff\n@@ -767,11 +767,67 @@ Mat getMatFromTensor(const opencv_onnx::TensorProto& tensor_proto)\n Mat(sizes, CV_32FC1, val).copyTo(blob);\n }\n }\n+ else if (datatype == opencv_onnx::TensorProto_DataType_FLOAT16)\n+ {\n+ // FIXME, for now, we only load FP16 Tensor as FP32 Mat, full support for FP16 is required in the future.\n+ CV_LOG_ONCE_WARNING(NULL, \"DNN: load FP16 model as FP32 model, and it takes twice the FP16 RAM requirement.\");\n+\n+ // ONNX saves float 16 data in two format: int32 and raw_data.\n```", + "output": "Thanks for code reviewing. I will try to test this under the ARMv7 platform." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/dnn/src/onnx/onnx_graph_simplifier.cpp\n**Change Type:** modified\n**Context:** PR #22337: DNN: load fp16 ONNX model as fp32\n\n**Code Changes:**\n```diff\n@@ -767,11 +767,67 @@ Mat getMatFromTensor(const opencv_onnx::TensorProto& tensor_proto)\n Mat(sizes, CV_32FC1, val).copyTo(blob);\n }\n }\n+ else if (datatype == opencv_onnx::TensorProto_DataType_FLOAT16)\n+ {\n+ // FIXME, for now, we only load FP16 Tensor as FP32 Mat, full support for FP16 is required in the future.\n+ CV_LOG_ONCE_WARNING(NULL, \"DNN: load FP16 model as FP32 model, and it takes twice the FP16 RAM requirement.\");\n+\n+ // ONNX saves float 16 data in two format: int32 and raw_data.\n```", "output": "@zihaomu @alalek According to my research, the behavior is different for different ARM generations: - ARM v8 supports unaligned load and `vld1_s16` does not cause SIGBUS, but has some performance penalty. - ARM v7 has different behavior that depends on SoC, kernel and compiler flags and may throw SIGBUS. The behavior is managed by `/proc/cpu/alignment`. So, I propose to ensure element-size alignment to be safe." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality.
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_graph_simplifier.cpp\n**Change Type:** modified\n**Context:** PR #22337: DNN: load fp16 ONNX model as fp32\n\n**Code Changes:**\n```diff\n@@ -767,11 +767,67 @@ Mat getMatFromTensor(const opencv_onnx::TensorProto& tensor_proto)\n Mat(sizes, CV_32FC1, val).copyTo(blob);\n }\n }\n+ else if (datatype == opencv_onnx::TensorProto_DataType_FLOAT16)\n+ {\n+ // FIXME, for now, we only load FP16 Tensor as FP32 Mat, full support for FP16 is required in the future.\n+ CV_LOG_ONCE_WARNING(NULL, \"DNN: load FP16 model as FP32 model, and it takes twice the FP16 RAM requirement.\");\n+\n+ // ONNX saves float 16 data in two format: int32 and raw_data.\n```", + "output": "@asmorkalov Thanks for the detailed investigation. I will try to fix this." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #22531: Parsing quantized nodes does not rely on names\n\n**Code Changes:**\n```diff\n@@ -52,6 +52,13 @@ extern bool DNN_DIAGNOSTICS_RUN;\n \n class ONNXLayerHandler;\n \n+template \n+static T getScaleFromMat(Mat m)\n+{\n+ CV_Assert(m.total() == 1);\n+ return m.at(0);\n+}\n```", + "output": "Thanks for your code review. OpenCV DNN needs to allocate output blob according to depth type (for example the depth of `CV_8S` will be allocated as int8 Mat.) > someone could rename the layer or add a new one that doesn't follow that convention. It is possible. I agree with `create a set of all integer layers`. I think a better way is for us to have both `Int8` name endings and keep sets. The set only stores some special layers that do not follow the rules like `Quantize`. 
For other layer type ending with `Int8`, we can keep the current implementation." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #22531: Parsing quantized nodes does not rely on names\n\n**Code Changes:**\n```diff\n@@ -52,6 +52,13 @@ extern bool DNN_DIAGNOSTICS_RUN;\n \n class ONNXLayerHandler;\n \n+template \n+static T getScaleFromMat(Mat m)\n+{\n+ CV_Assert(m.total() == 1);\n+ return m.at(0);\n+}\n```", "output": "Hi @rogday, please take a look. I have refactored the code." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #22531: Parsing quantized nodes does not rely on names\n**Review Line:** 3707\n\n**Code Changes:**\n```diff\n dispatch[\"AveragePool\"] = &ONNXImporter::parseAveragePool;\n dispatch[\"GlobalAveragePool\"] = dispatch[\"GlobalMaxPool\"] = &ONNXImporter::parseGlobalPool;\n- dispatch[\"ReduceMax\"] = dispatch[\"ReduceMin\"] = dispatch[\"ReduceMean\"] = dispatch[\"ReduceSum\"] = dispatch[\"ReduceMax\"] =\n- dispatch[\"ReduceMin\"] = dispatch[\"ReduceSumSquare\"] = dispatch[\"ReduceProd\"] = dispatch[\"ReduceL1\"] =\n+ dispatch[\"ReduceMax\"] = dispatch[\"ReduceMin\"] = dispatch[\"ReduceMean\"] = dispatch[\"ReduceSum\"] =\n+ dispatch[\"ReduceSumSquare\"] = dispatch[\"ReduceProd\"] = dispatch[\"ReduceL1\"] =\n dispatch[\"ReduceL2\"] = dispatch[\"ReduceLogSum\"] = dispatch[\"ReduceLogSumExp\"] = &ONNXImporter::parseReduce;\n dispatch[\"Slice\"] = &ONNXImporter::parseSlice;\n dispatch[\"Split\"] = &ONNXImporter::parseSplit;\n```", "output": "Hi @rogday, I found that this code
is duplicated. There are two \"ReduceMax\" and \"ReduceMin\"." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ie/giebackend.cpp\n**Change Type:** modified\n**Context:** PR #22588: G-API: Add synchronous execution for IE backend\n\n**Code Changes:**\n```diff\n@@ -392,6 +392,12 @@ struct IEUnit {\n params.vpl_preproc_ctx.value());\n GAPI_LOG_INFO(nullptr, \"VPP preproc created successfuly\");\n }\n+\n+ if (params.mode == cv::gapi::ie::InferMode::Sync &&\n+ params.nireq != 1u) {\n+ throw std::logic_error(\n+ \"Failed: cv::gapi::ie::InferMode::Sync works only with nireq equal to 1.\");\n+ }\n```", + "output": "Here the question how to switch between `AsyncInferExecutor` and `SyncInferExecutor`. I see two options here: 1. Provide to user handle: `params.cfgInferenceAPI(ParamDesc::API api) // ParamDesc::API::ASYNC by default` 2. Calculate it based on number of infer requests. if nireq > 1 then it must be `AsyncInferRequest` since there is no sense to infer synchronously with multiple infer requests. I prefer the first option since it's more flexible and sometimes there is a difference between sync/async mode even with nireq == 1. @dmatveev do you mind `1)` option?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/include/opencv2/calib3d.hpp\n**Change Type:** modified\n**Context:** PR #22572: [3.4] Add information about the disparity-to-depth mapping matrix\n**Review Line:** 645\n\n**Code Changes:**\n```diff\n@@ -642,7 +642,7 @@ CV_EXPORTS Mat findHomography( InputArray srcPoints, InputArray dstPoints,\n @param Qz Optional output 3x3 rotation matrix around z-axis.\n \n The function computes a RQ decomposition using the given rotations. This function is used in\n-decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera\n+@ref decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera\n and a rotation matrix.\n \n It optionally returns three rotation matrices, one for each axis, and the three Euler angles in\n@@ -676,7 +676,7 @@ be used in OpenGL. Note, there is always more than one sequence of rotations abo\n principal axes that results in the same orientation of an object, e.g. see @cite Slabaugh . Returned\n```", + "output": "During the merge of this PR we need to define policy how to provide documentation links. And avoid mess between `#` and `@ref` (and related merge conflicts). 4.x branch has links already through `#`: - [page](https://docs.opencv.org/4.6.0/d9/d0c/group__calib3d.html#ga044b03d81f99370db05c52c3d0b46599) - [source](https://github.com/opencv/opencv/blame/4.6.0/modules/calib3d/include/opencv2/calib3d.hpp#L757)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectCUDA.cmake\n**Change Type:** modified\n**Context:** PR #22880: Prevent cudacodec libs locations being reset when cmake is re-run\n\n**Code Changes:**\n```diff\n@@ -10,6 +10,10 @@ endif()\n \n #set(OPENCV_CMAKE_CUDA_DEBUG 1)\n \n+if(CUDA_TOOLKIT_ROOT_DIR)\n+ set(CUDA_TOOLKIT_TARGET_DIR ${CUDA_TOOLKIT_ROOT_DIR})\n+endif()\n+\n if(((NOT CMAKE_VERSION VERSION_LESS \"3.9.0\") # requires https://gitlab.kitware.com/cmake/cmake/merge_requests/663\n OR OPENCV_CUDA_FORCE_EXTERNAL_CMAKE_MODULE)\n```", + "output": "@alalek I agree but I've just noticed this issue was fixed in CMake 6 months ago https://gitlab.kitware.com/cmake/cmake/-/commit/a7758394afc2717bb31a47826e62c1ba86a63c41. I suggest for compatibility with older versions of CMake we could apply the same fix as mentioned there." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/filter.dispatch.cpp\n**Change Type:** modified\n**Context:** PR #19156: Named arguments handling\n\n**Code Changes:**\n```diff\n@@ -1555,6 +1555,22 @@ void filter2D(InputArray _src, OutputArray _dst, int ddepth,\n delta, borderType, src.isSubmatrix());\n }\n \n+void filter2D( InputArray src, OutputArray dst, InputArray kernel,\n+ const Filter2DParams& params)\n+{\n+ Mat K = kernel.getMat(), tempK;\n+ if (params.scale != 1) {\n+ int kdepth = K.depth();\n```", + "output": "* `ddepth` check was removed, thanks! * `scale` check is retained, because it's extension to the existing API" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/objdetect/include/opencv2/objdetect/aruco_board.hpp\n**Change Type:** modified\n**Context:** PR #22986: Move contrib charuco to main objdetect\n\n**Code Changes:**\n```diff\n@@ -8,7 +8,7 @@\n \n namespace cv {\n namespace aruco {\n-//! @addtogroup aruco\n+//! @addtogroup objdetect_aruco\n //! @{\n \n class Dictionary;\n@@ -22,29 +22,15 @@ class Dictionary;\n```", "output": "@alalek, do you want me to remove the virtuality for the function `matchImagePoints()` and `generateImage()`? Do you want me to use virtuality into Pimpl implementation?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/features2d/include/opencv2/features2d.hpp\n**Change Type:** modified\n**Context:** PR #20367: Add read/write functions for features2d and normalize naming convention\n\n**Code Changes:**\n```diff\n@@ -316,6 +316,21 @@ class CV_EXPORTS_W SIFT : public Feature2D\n double sigma, int descriptorType);\n \n CV_WRAP virtual String getDefaultName() const CV_OVERRIDE;\n+\n+ CV_WRAP virtual void setNFeatures(int maxFeatures) = 0;\n+ CV_WRAP virtual int getNFeatures() const = 0;\n+\n+ CV_WRAP virtual void setNOctaveLayers(int nOctaveLayers) = 0;\n+ CV_WRAP virtual int getNOctaveLayers() const = 0;\n```", "output": "OK, I make them pure abstract methods, and let the derived class *SimpleBlobDetectorImpl* do the implementation in blobdetector.cpp => Concerning your suggestion (which is the same thing) about using exceptions in base class for new getters/setters, it could be for example for SIFT: ```cpp CV_WRAP virtual void setNFeatures(int maxFeatures) { CV_Error(Error::StsNotImplemented, \"The method is not implemented, this must be done by derived classes.\"); } CV_WRAP virtual int getNFeatures() const {
CV_Error(Error::StsNotImplemented, \"The method is not implemented, this must be done by derived classes.\"); } ``` However I'm not sure this is the good choice. Indeed, I initially added cores to the new setters/getters to workaround the **\"Compare ABI dumps\" fails** during build. But it should be pure abstract methods because adding an implementation with exception in base class will lead to runtime exceptions if derived class does not overload these methods... So, I would prefer to make them pure abstract to yield to compilation errors if derived class does not implement them, but this will trigger **\"Compare ABI dumps\" fails** during build. So for the same SIFT example, I would prefer to use: ```cpp CV_WRAP virtual void setNFeatures(int maxFeatures) = 0; CV_WRAP virtual int getNFeatures() const = 0; ``` @asmorkalov is it possible to keep the new getters/setters as abstract methods and ignore **\"Compare ABI dumps\" fails** when merging the PR?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/features2d/src/blobdetector.cpp\n**Change Type:** modified\n**Context:** PR #20367: Add read/write functions for features2d and normalize naming convention\n**Review Line:** 86\n\n**Code Changes:**\n```diff\n+ static void validateParameters(const SimpleBlobDetector::Params& p)\n+ {\n+ if (p.thresholdStep <= 0)\n+ CV_Error(Error::StsBadArg, \"thresholdStep>0\");\n+\n+ if (p.minThreshold > p.maxThreshold || p.minThreshold <= 0)\n+ CV_Error(Error::StsBadArg, \"00\");\n+\n```", "output": "@augustinmanecy Should that be `p.minThreshold < 0`? Some of my own internal tests are now failing against OpenCV master, and I think this is why." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation.
Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #22905: DNN: clean old convolution and optimize depth-wise Conv, Conv1D and Conv3D\n**Review Line:** 1267\n\n**Code Changes:**\n```diff\n@@ -259,7 +259,7 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n std::vector reluslope;\n Ptr activ;\n \n- Ptr fastConv2dImpl;\n+ Ptr fastConvImpl;\n \n #ifdef HAVE_OPENCL\n Ptr > convolutionOp;\n@@ -967,808 +967,6 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n }\n #endif // HAVE_WEBNN\n \n- class ParallelConv : public cv::ParallelLoopBody\n- {\n```", "output": "Hi @alalek. Thanks for your reminder, the purpose of this PR is to let Conv3D and Conv1D execute in the new branch. And it will not affect the speed and memory consumption of Conv2D. (Conv2D Related PR: https://github.com/opencv/opencv/pull/21910. ) I will update the speed performance test and the memory consumption of Conv3D and Conv1D. In theory, Conv3D and Conv1D require twice as much memory as before, since we need to repack the weight at the `fast_conv` initialization stage. It will not reach 7 times (because Conv3D and Conv1D do not support Winograd)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation.
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #22905: DNN: clean old convolution and optimize depth-wise Conv, Conv1D and Conv3D\n**Review Line:** 1267\n\n**Code Changes:**\n```diff\n@@ -259,7 +259,7 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n std::vector reluslope;\n Ptr activ;\n \n- Ptr fastConv2dImpl;\n+ Ptr fastConvImpl;\n \n #ifdef HAVE_OPENCL\n Ptr > convolutionOp;\n@@ -967,808 +967,6 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n }\n #endif // HAVE_WEBNN\n \n- class ParallelConv : public cv::ParallelLoopBody\n- {\n```", + "output": "> i7-12700k has P and E cores. Need to bind test process to selected CPUs. Thanks for your reminder. How about I add a new performance with a single thread result? I'm not sure if this is enough. > Resnet34_kinetics - no such test in opencv_perf_dnn `Resnet34_kinetics` is an accuracy test. And the backbone of Resnet34_Conv3d is a typical implementation of Conv3D." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #22905: DNN: clean old convolution and optimize depth-wise Conv, Conv1D and Conv3D\n**Review Line:** 1267\n\n**Code Changes:**\n```diff\n@@ -259,7 +259,7 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n std::vector reluslope;\n Ptr activ;\n \n- Ptr fastConv2dImpl;\n+ Ptr fastConvImpl;\n \n #ifdef HAVE_OPENCL\n Ptr > convolutionOp;\n@@ -967,808 +967,6 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n }\n #endif // HAVE_WEBNN\n \n- class ParallelConv : public cv::ParallelLoopBody\n- {\n```", + "output": "Performance report is not intended to show non-reproducible marketing **single** number. Performance report is required to track regressions and to avoid them. We need all numbers from performance tests. @vpisarev Used pipeline of optimization development has serious gaps. Ignoring existence of performance tests during optimization is not an acceptable flow. We need to fix that process." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/usac.hpp\n**Change Type:** modified\n**Context:** PR #22967: Fix maxIter parameter in usac findEssentialMat\n\n**Code Changes:**\n```diff\n@@ -801,7 +801,8 @@ bool solvePnPRansac( InputArray objectPoints, InputArray imagePoints,\n Mat findEssentialMat( InputArray points1, InputArray points2,\n InputArray cameraMatrix1,\n int method, double prob,\n- double threshold, OutputArray mask);\n+ double threshold, OutputArray mask,\n+ int maxIters);\n \n Mat estimateAffine2D(InputArray from, InputArray to, OutputArray inliers,\n int method, double ransacReprojThreshold, int maxIters,\n```", + "output": "It's internal function, not public API. Default value looks a bit suspicious. Please check if it's used in other places. Most probably `maxIters` should be promoted there too." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/core/src/parallel_impl.cpp\n**Change Type:** modified\n**Context:** PR #22966: Fix slower CV_PAUSE on SkyLake and above.\n**Review Line:** 62\n\n**Code Changes:**\n```diff\n # define _mm_pause cv_non_sse_mm_pause\n # endif\n-# define CV_PAUSE(v) do { for (int __delay = (v); __delay > 0; --__delay) { _mm_pause(); } } while (0)\n+// 5 * v is meants for backward compatibility: with pre-Skylake CPUs, _mm_pause took 4 or 5 cycles.\n+// With post-Skylake CPUs, _mm_pause takes 140 cycles.\n+# define CV_PAUSE(v) do { const uint64_t __delay = 5 * v; uint64_t __init = __rdtsc(); do { _mm_pause(); } while ((__rdtsc() - __init) < __delay); } while (0)\n # elif defined __GNUC__ && defined __aarch64__\n # define CV_PAUSE(v) do { for (int __delay = (v); __delay > 0; --__delay) { asm volatile(\"yield\" ::: \"memory\"); } } while (0)\n # elif defined __GNUC__ && defined __arm__\n```", "output": "RDTSC latency is about 25-35 cycles. @vrabaud An alternative is ignoring the `v` parameter and running `_mm_pause()` once. If it works for you, then we could do that. BTW, TBB code doesn't suggest anything new (still uses heuristic constants): https://github.com/search?q=repo%3Aoneapi-src%2FoneTBB%20machine_pause&type=code" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality.
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/video/src/tracking/tracker_nano.cpp\n**Change Type:** added\n**Context:** PR #22808: support Nanotrack in video module\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,359 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// This file is modified from the https://github.com/HonglinChu/NanoTrack/blob/master/ncnn_macos_nanotrack/nanotrack.cpp\n+// Author, HongLinChu, 1628464345@qq.com\n+// Adapt to OpenCV, ZihaoMu: zihaomu@outlook.com\n+\n+// Link to original inference code: https://github.com/HonglinChu/NanoTrack\n```", + "output": "Thanks for code reviewing, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/video/src/tracking/tracker_nano.cpp\n**Change Type:** added\n**Context:** PR #22808: support Nanotrack in video module\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,359 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// This file is modified from the https://github.com/HonglinChu/NanoTrack/blob/master/ncnn_macos_nanotrack/nanotrack.cpp\n+// Author, HongLinChu, 1628464345@qq.com\n+// Adapt to OpenCV, ZihaoMu: zihaomu@outlook.com\n+\n+// Link to original inference code: https://github.com/HonglinChu/NanoTrack\n```", + "output": "Thanks for code reviewing, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/video/src/tracking/tracker_nano.cpp\n**Change Type:** added\n**Context:** PR #22808: support Nanotrack in video module\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,359 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// This file is modified from the https://github.com/HonglinChu/NanoTrack/blob/master/ncnn_macos_nanotrack/nanotrack.cpp\n+// Author, HongLinChu, 1628464345@qq.com\n+// Adapt to OpenCV, ZihaoMu: zihaomu@outlook.com\n+\n+// Link to original inference code: https://github.com/HonglinChu/NanoTrack\n```", + "output": "Got it, thanks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/video/test/test_trackers.cpp\n**Change Type:** modified\n**Context:** PR #22808: support Nanotrack in video module\n\n**Code Changes:**\n```diff\n@@ -64,40 +64,67 @@ TEST_P(DistanceAndOverlap, GOTURN)\n \n INSTANTIATE_TEST_CASE_P(Tracking, DistanceAndOverlap, TESTSET_NAMES);\n \n-TEST(GOTURN, memory_usage)\n+static bool checkIOU(const Rect& r0, const Rect& r1, double threshold)\n {\n- cv::Rect roi(145, 70, 85, 85);\n+ int interArea = (r0 & r1).area();\n+ double iouVal = (interArea * 1.0 )/ (r0.area() + r1.area() - interArea);;\n```", + "output": "Thanks, I will fix it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/video/include/opencv2/video/tracking.hpp\n**Change Type:** modified\n**Context:** PR #22808: support Nanotrack in video module\n**Review Line:** 867\n\n**Code Changes:**\n```diff\n+protected:\n+ TrackerNano(); // use ::create()\n+public:\n+ virtual ~TrackerNano() CV_OVERRIDE;\n+\n+ struct CV_EXPORTS_W_SIMPLE Params\n+ {\n+ CV_WRAP Params();\n+ CV_PROP_RW std::string backbone;\n+ CV_PROP_RW std::string neckhead;\n+ CV_PROP_RW int backend;\n```", + "output": "@alalek @zihaomu I propose to expose both DNN parameters and Tracking parameters themself like this: ``` struct CV_EXPORTS_W_SIMPLE TrackingParams { int exemplarSize = 127; int instanceSize = 255; float windowInfluence = 0.455f; float lr = 0.37f; float contextAmount = 0.5; bool swapRB = true; int totalStride = 16; float penaltyK = 0.055f; }; struct CV_EXPORTS_W_SIMPLE DnnParams { CV_WRAP Params(); CV_PROP_RW std::string backbone; CV_PROP_RW std::string neckhead; CV_PROP_RW int backend; CV_PROP_RW int target; }; ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/python/tracker.py\n**Change Type:** modified\n**Context:** PR #22808: support Nanotrack in video module\n\n**Code Changes:**\n```diff\n@@ -9,6 +9,9 @@\n network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+For NanoTrack:\n+ nanotrack_backbone: https://github.com/HonglinChu/SiamTrackers/blob/master/NanoTrack/models/onnx/nanotrack_backbone_sim.onnx\n+ nanotrack_headneck: https://github.com/HonglinChu/SiamTrackers/blob/master/NanoTrack/models/onnx/nanotrack_head_sim.onnx\n \n USAGE:\n tracker.py [-h] [--input INPUT] [--tracker_algo TRACKER_ALGO]\n```", + "output": "Thanks for code reviewing, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_hdr.cpp\n**Change Type:** modified\n**Context:** PR #22830: imgcodecs: ensure parameters are key-value pairs, fix HDR encoder\n\n**Code Changes:**\n```diff\n@@ -141,14 +141,28 @@ bool HdrEncoder::write( const Mat& input_img, const std::vector& params )\n if(img.depth() != CV_32F) {\n img.convertTo(img, CV_32FC3, 1/255.0f);\n }\n- CV_Assert(params.empty() || params[0] == HDR_NONE || params[0] == HDR_RLE);\n+\n+ int compression = IMWRITE_HDR_COMPRESSION_RLE;\n+ for (size_t i = 0; i + 1 < params.size(); i += 2)\n+ {\n+ switch (params[i])\n```", + "output": "@Kumataro Thank you! Fixed. Also fixed `imencode()` and added tests. BTW, C API is on hold. We don't modify it anymore." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_hdr.cpp\n**Change Type:** modified\n**Context:** PR #22830: imgcodecs: ensure parameters are key-value pairs, fix HDR encoder\n\n**Code Changes:**\n```diff\n@@ -141,14 +141,28 @@ bool HdrEncoder::write( const Mat& input_img, const std::vector& params )\n if(img.depth() != CV_32F) {\n img.convertTo(img, CV_32FC3, 1/255.0f);\n }\n- CV_Assert(params.empty() || params[0] == HDR_NONE || params[0] == HDR_RLE);\n+\n+ int compression = IMWRITE_HDR_COMPRESSION_RLE;\n+ for (size_t i = 0; i + 1 < params.size(); i += 2)\n+ {\n+ switch (params[i])\n```", + "output": "Thank you for fixing it, and I agree with you. `CV_IMWRITE_HDR_COMPRESSION` is not necessary and will not be defined." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/tensorflow/tf_graph_simplifier.cpp\n**Change Type:** modified\n**Context:** PR #22725: DNN: report an error at the infinite loop of tensorflow importer\n\n**Code Changes:**\n```diff\n@@ -829,12 +829,19 @@ void RemoveIdentityOps(tensorflow::GraphDef& net)\n IdentityOpsMap::iterator it = identity_ops.find(input_op_name);\n \n if (it != identity_ops.end()) {\n+ std::set loopCheckSet;\n // In case of Identity after Identity\n while (true)\n {\n IdentityOpsMap::iterator nextIt = identity_ops.find(it->second);\n if (nextIt != identity_ops.end())\n```", + "output": "Thanks for your code review. Different models may have different `itVec` sizes. Since it just puts the items from `identity_ops` into `itVec`, the size of `identity_ops` is the maximum. From my point of view, the maximum size of `identity_ops` should be less than 50. I agree with you; the `std::set` would be more reasonable."
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_stackblur.cpp\n**Change Type:** added\n**Context:** PR #20379: Add StackBlur for imgproc\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,313 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+/*\n+StackBlur - a fast almost Gaussian Blur\n+Theory: http://underdestruction.com/2004/02/25/stackblur-2004\n+The code has been borrowed from (https://github.com/flozz/StackBlur).\n+\n```", + "output": "Hi @asmorkalov. Thanks for the code review. Fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/perf/perf_blur.cpp\n**Change Type:** modified\n**Context:** PR #20379: Add StackBlur for imgproc\n**Review Line:** 271\n\n**Code Changes:**\n```diff\n+ double eps = 1e-3;\n+\n+ eps = CV_MAT_DEPTH(type) <= CV_32S ? 1 : eps;\n+\n+ Mat src(size, type);\n+ Mat dst(size, type);\n+\n+ declare.in(src, WARMUP_RNG).out(dst);\n+\n+ TEST_CYCLE() stackBlur(src, dst, Size(3,3));\n+\n```", + "output": "Thanks for your code review; the code was borrowed from [here](https://github.com/opencv/opencv/blob/4.x/modules/imgproc/perf/perf_blur.cpp#L117-L137). Does this problem only happen on stackblur? Maybe I can fix them both. Also, I have no idea how to fix it; any suggestions?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #20379: Add StackBlur for imgproc\n**Review Line:** 1624\n\n**Code Changes:**\n```diff\n@@ -1620,6 +1620,22 @@ CV_EXPORTS_W void blur( InputArray src, OutputArray dst,\n Size ksize, Point anchor = Point(-1,-1),\n int borderType = BORDER_DEFAULT );\n \n+/** @brief Blurs an image using the StackBlur.\n+The function applies and StackBlur to an image.\n+StackBlur can generate similar results as Gaussian blur, and the time does not increase as the kernel size increases.\n+It creates a kind of moving stack of colors whilst scanning through the image. Thereby it just has to add one new block of color to the right side\n+of the stack and remove the leftmost color. The remaining colors on the topmost layer of the stack are either added on or reduced by one,\n+depending on if they are on the right or on the left side of the stack.\n+Described here: http://underdestruction.com/2004/02/25/stackblur-2004.\n```", + "output": "Documentation formatting is broken. New empty line is required after `@brief`. Or `@details`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #20379: Add StackBlur for imgproc\n**Review Line:** 1633\n\n**Code Changes:**\n```diff\n+depending on if they are on the right or on the left side of the stack.\n+Described here: http://underdestruction.com/2004/02/25/stackblur-2004.\n+Stack Blur Algorithm by Mario Klingemann \n+@param src input image. 
The number of channels can be arbitrary, but the depth should be one of\n+CV_8U, CV_16U, CV_16S or CV_32F.\n+@param dst output image of the same size and type as src.\n+@param ksize stack-blurring kernel size. The ksize.width and ksize.height can differ but they both must be\n+positive and odd.\n+*/\n+CV_EXPORTS_W void stackBlur(InputArray src, OutputArray dst, Size ksize);\n+\n```", + "output": "Thanks for the code review, @alalek. For now, we only use BORDER_REPLICATE as the border type, since that is what the original paper uses. More details of the paper can be found at http://underdestruction.com/2004/02/25/stackblur-2004/. Should we support other border types? Or just describe this detail in a new patch?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_ffmpeg_impl.hpp\n**Change Type:** modified\n**Context:** PR #22689: libav for jetson tk1 does not provide libavutil/display.h\n\n**Code Changes:**\n```diff\n@@ -82,7 +82,11 @@ extern \"C\" {\n \n #include \n #include \n+// https://github.com/FFmpeg/FFmpeg/blame/d79c240196f43b93bd204363f1facc270029f113/doc/APIchanges#L1689-L1695\n+#if LIBAVUTIL_BUILD >= (LIBAVUTIL_VERSION_MICRO >= 100 \\\n+ ? CALC_FFMPEG_VERSION(52, 85, 100) : CALC_FFMPEG_VERSION(53, 15, 0))\n #include \n+#endif\n \n```", + "output": "The condition should correlate with the `av_display_rotation_get` code below. Relates to #21754. /cc @mshabunin Correct version conditions are defined here: https://github.com/FFmpeg/FFmpeg/blame/d79c240196f43b93bd204363f1facc270029f113/doc/APIchanges#L1689-L1695 BTW, versions like .100/.101 must have a `&& LIBAVFORMAT_VERSION_MICRO >= 100` check as they are FFmpeg only. `libav` (from Ubuntu 14.04 as example for TK1) has .0/.1/etc patch versions."
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/system.cpp\n**Change Type:** modified\n**Context:** PR #21833: Add Loongson Advanced SIMD Extension support: -DCPU_BASELINE=LASX\n**Review Line:** 674\n\n**Code Changes:**\n```diff\n have[CV_CPU_RVV] = true;\n #endif\n \n+ #if defined __loongarch_asx\n+ have[CV_CPU_LASX] = true;\n+ #endif\n+\n bool skip_baseline_check = false;\n #ifndef NO_GETENV\n if (getenv(\"OPENCV_SKIP_CPU_BASELINE_CHECK\"))\n```", + "output": "@gititgo Do you have any news on the `Calib3d_StereoBM.regression` test failure?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/types.hpp\n**Change Type:** modified\n**Context:** PR #22526: core: expose rectangle intersection to bindings\n\n**Code Changes:**\n```diff\n@@ -2017,6 +2017,15 @@ double jaccardDistance(const Rect_<_Tp>& a, const Rect_<_Tp>& b) {\n return 1.0 - Aab / (Aa + Ab - Aab);\n }\n \n+/** @brief Finds out if there is any intersection between two rectangles\n+ *\n+ * mainly useful for language bindings\n+ * @param rect1 First rectangle\n+ * @param rect2 Second rectangle\n+ * @return the area of the intersection\n```", + "output": "@alalek the naming is the only outstanding issue here, right? Any suggestions?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/types.hpp\n**Change Type:** modified\n**Context:** PR #22526: core: expose rectangle intersection to bindings\n\n**Code Changes:**\n```diff\n@@ -2017,6 +2017,15 @@ double jaccardDistance(const Rect_<_Tp>& a, const Rect_<_Tp>& b) {\n return 1.0 - Aab / (Aa + Ab - Aab);\n }\n \n+/** @brief Finds out if there is any intersection between two rectangles\n+ *\n+ * mainly useful for language bindings\n+ * @param rect1 First rectangle\n+ * @param rect2 Second rectangle\n+ * @return the area of the intersection\n```", + "output": "@paroj I discussed the solution with alalek and mshabunin. Please add \"area\" to the function name to make the function's meaning clear. That's enough to merge the PR." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/infer/bindings_onnx.hpp\n**Change Type:** added\n**Context:** PR #22017: Add python bindings for G-API onnx\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,43 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level\n+// directory of this distribution and at http://opencv.org/license.html.\n+\n+#ifndef OPENCV_GAPI_INFER_BINDINGS_ONNX_HPP\n+#define OPENCV_GAPI_INFER_BINDINGS_ONNX_HPP\n+\n+#include // GKernelPackage\n+#include // Params\n```", + "output": "@xiong-jie-y Are you sure about the last copyright line? If you don't work at Intel (the email is not an Intel one), please remove this line (the first 3 lines are enough)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/misc/python/test/test_gapi_infer_onnx.py\n**Change Type:** added\n**Context:** PR #22017: Add python bindings for G-API onnx\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,74 @@\n+#!/usr/bin/env python\n+\n+import numpy as np\n+import cv2 as cv\n+import os\n+import sys\n+import unittest\n+\n+from tests_common import NewOpenCVTests\n```", + "output": "Thank you! Done! :slightly_smiling_face:" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/infer/onnx.hpp\n**Change Type:** modified\n**Context:** PR #22017: Add python bindings for G-API onnx\n**Review Line:** 301\n\n**Code Changes:**\n```diff\n+\n+ @param tag string tag of the network for which these parameters are intended.\n+ @param model_path path to model file (.onnx file).\n+ */\n+ Params(const std::string& tag, const std::string& model_path)\n+ : desc{model_path, 0u, 0u, {}, {}, {}, {}, {}, {}, {}, {}, {}, true}, m_tag(tag) {}\n+\n+ // BEGIN(G-API's network parametrization API)\n+ GBackend backend() const { return cv::gapi::onnx::backend(); }\n+ std::string tag() const { return m_tag; }\n+ cv::util::any params() const { return { desc }; }\n```", + "output": "@alalek here is where `is_generic` is set to `true` (in the tail)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/misc/python/test/test_gapi_infer_onnx.py\n**Change Type:** added\n**Context:** PR #22017: Add python bindings for G-API onnx\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,74 @@\n+#!/usr/bin/env python\n+\n+import numpy as np\n+import cv2 as cv\n+import os\n+import sys\n+import unittest\n+\n+from tests_common import NewOpenCVTests\n```", + "output": "@alalek, is the behavior of the gapi download .py script incorrect? What is expected? I’m out of context, sorry." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/onnx/gonnxbackend.cpp\n**Change Type:** modified\n**Context:** PR #22017: Add python bindings for G-API onnx\n**Review Line:** 1143\n\n**Code Changes:**\n```diff\n+ auto &pp = cv::util::any_cast(np.opaque);\n const auto &ki = cv::util::any_cast(ii.opaque);\n+\n+ GModel::Graph model(gr);\n+ auto& op = model.metadata(nh).get();\n+ if (pp.is_generic) {\n+ auto& info = cv::util::any_cast(op.params);\n+\n+ for (const auto& a : info.in_names)\n+ {\n+ pp.input_names.push_back(a);\n```", + "output": "@mpashchenkov Could you review this part, please?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/stitching/include/opencv2/stitching/detail/matchers.hpp\n**Change Type:** modified\n**Context:** PR #22329: Fix stitching Python bindings (and one stitching_detailed.cpp bug)\n**Review Line:** 206\n\n**Code Changes:**\n```diff\n@@ -138,7 +138,7 @@ class CV_EXPORTS_W FeaturesMatcher\n @sa detail::MatchesInfo\n */\n CV_WRAP_AS(apply2) void operator ()(const std::vector &features, CV_OUT std::vector &pairwise_matches,\n- const cv::UMat &mask = cv::UMat());\n+ const cv::UMat &mask = cv::UMat()) { match(features, pairwise_matches, mask); };\n \n /** @return True, if it's possible to use the same matcher instance in parallel, false otherwise\n */\n@@ -161,6 +161,16 @@ class CV_EXPORTS_W FeaturesMatcher\n virtual void match(const ImageFeatures &features1, const ImageFeatures &features2,\n MatchesInfo& matches_info) = 0;\n \n+ /** @brief This method implements logic to match features between arbitrary number of features.\n+ By default this checks every pair of inputs in the input, but the behaviour can be changed by subclasses.\n```", + "output": "@alalek I'm not familiar with the rules of the project and versioning, you may be fully aware of this, but unless I am missing something the method is now correctly inherited from its superclass and works as you can see from https://github.com/opencv/opencv/pull/22329/files/26a7647e0e7c665e48e058f71cc1ab952cc4303d#diff-db221159aa572ef04177249f99bc952a3c8e11f8d34029118db5bbbbf787bc1eR134. I do not think the API changes unless I missed something. (Sorry if you knew this already!)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/stitching/include/opencv2/stitching/detail/matchers.hpp\n**Change Type:** modified\n**Context:** PR #22329: Fix stitching Python bindings (and one stitching_detailed.cpp bug)\n**Review Line:** 206\n\n**Code Changes:**\n```diff\n@@ -138,7 +138,7 @@ class CV_EXPORTS_W FeaturesMatcher\n @sa detail::MatchesInfo\n */\n CV_WRAP_AS(apply2) void operator ()(const std::vector &features, CV_OUT std::vector &pairwise_matches,\n- const cv::UMat &mask = cv::UMat());\n+ const cv::UMat &mask = cv::UMat()) { match(features, pairwise_matches, mask); };\n \n /** @return True, if it's possible to use the same matcher instance in parallel, false otherwise\n */\n@@ -161,6 +161,16 @@ class CV_EXPORTS_W FeaturesMatcher\n virtual void match(const ImageFeatures &features1, const ImageFeatures &features2,\n MatchesInfo& matches_info) = 0;\n \n+ /** @brief This method implements logic to match features between arbitrary number of features.\n+ By default this checks every pair of inputs in the input, but the behaviour can be changed by subclasses.\n```", + "output": "@alalek The base class has the same method. It was an overload. The PR brings the same strategy pattern as it was before and calls the virtual method internally to resolve class-specific behavior. With the new approach, old code will work as designed. I do not expect changes for correct C++ code after a rebuild with the new OpenCV." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/gapi/oak_devices/oak_devices.markdown\n**Change Type:** added\n**Context:** PR #22501: OAK Information\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,26 @@\n+Using DepthAI Hardware / OAK depth sensors {#tutorial_gapi_oak_devices}\n+=======================================================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_gapi_face_beautification}\n+\n+![Oak-D and Oak-D-Light cameras](pics/oak.jpg)\n+\n```", + "output": "Normally we don't put code like this but use a `@snippet` macro instead -- so the code is taken from an actual compilable source." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/gapi/oak_devices/oak_devices.markdown\n**Change Type:** added\n**Context:** PR #22501: OAK Information\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,26 @@\n+Using DepthAI Hardware / OAK depth sensors {#tutorial_gapi_oak_devices}\n+=======================================================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_gapi_face_beautification}\n+\n+![Oak-D and Oak-D-Light cameras](pics/oak.jpg)\n+\n```", + "output": "Thank you @dmatveev, I will still become a good contributor who follows good practices. I'm learning." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/gapi/oak_devices/oak_devices.markdown\n**Change Type:** added\n**Context:** PR #22501: OAK Information\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,26 @@\n+Using DepthAI Hardware / OAK depth sensors {#tutorial_gapi_oak_devices}\n+=======================================================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_gapi_face_beautification}\n+\n+![Oak-D and Oak-D-Light cameras](pics/oak.jpg)\n+\n```", + "output": "Done on master, @smirnov-alexey and @dmatveev. I'm studying G-API with OAK and will go into detail in the next request." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/gapi/oak_devices/oak_devices.markdown\n**Change Type:** added\n**Context:** PR #22501: OAK Information\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,26 @@\n+Using DepthAI Hardware / OAK depth sensors {#tutorial_gapi_oak_devices}\n+=======================================================================\n+\n+@tableofcontents\n+\n+@prev_tutorial{tutorial_gapi_face_beautification}\n+\n+![Oak-D and Oak-D-Light cameras](pics/oak.jpg)\n+\n```", + "output": "@cabelo Intel -> Luxonis. Also it makes sense to use markdown notation for the link." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** 3rdparty/libjpeg-turbo/CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #22372: [GSoC 2022] nasm/simd support for libjpeg-turbo\n\n**Code Changes:**\n```diff\n@@ -15,8 +15,55 @@ endif()\n \n message(STATUS \"libjpeg-turbo: VERSION = ${VERSION}, BUILD = ${BUILD}\")\n \n+math(EXPR BITS \"${CMAKE_SIZEOF_VOID_P} * 8\")\n+string(TOLOWER ${CMAKE_SYSTEM_PROCESSOR} CMAKE_SYSTEM_PROCESSOR_LC)\n+\n+if(CMAKE_SYSTEM_PROCESSOR_LC MATCHES \"x86_64\" OR\n+ CMAKE_SYSTEM_PROCESSOR_LC MATCHES \"amd64\" OR\n+ CMAKE_SYSTEM_PROCESSOR_LC MATCHES \"i[0-9]86\" OR\n```", + "output": "If I understand your concern correctly, NEON and others are handled by libjpeg-turbo's simd/CMakeLists. For example, NEON support is handled [here](https://github.com/opencv/opencv/blob/79afd80817c8d84484052ae06e64d61db202d27a/3rdparty/libjpeg-turbo/src/simd/CMakeLists.txt#L218). I will investigate further. Thanks for the link." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_android_mediandk.cpp\n**Change Type:** modified\n**Context:** PR #22227: Fix issues: #22214 and #22198\n**Review Line:** 169\n\n**Code Changes:**\n```diff\n LOGE(\"Unsupported video format: %d\", colorFormat);\n return false;\n }\n+\n+ Mat croppedFrame = frame(Rect(0, 0, videoWidth, videoHeight));\n+ out.assign(croppedFrame);\n+\n+ if (videoOrientationAuto && -1 != videoRotationCode) {\n+ cv::rotate(out, out, videoRotationCode);\n+ }\n+\n```", + "output": "Got it, thanks for the clarification!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/tensorflow/tf_importer.cpp\n**Change Type:** modified\n**Context:** PR #22448: changed names of permutations if Reshpe is in NHWC\n**Review Line:** 1137\n\n**Code Changes:**\n```diff\n {\n int order[] = {0, 3, 1, 2}; // Transform back to OpenCV's NCHW.\n- addPermuteLayer(order, name + \"/nchw\", inpId);\n+\n+ setName = changedType ? name : name + \"/nchw\";\n+ addPermuteLayer(order, setName, inpId);\n inpLayout = DATA_LAYOUT_NCHW;\n }\n \n```", + "output": "@rogday It was introduced by you in a pull request a year ago: https://github.com/opencv/opencv/pull/20190/files#diff-f5872c76075aa1f06f357d1dc27448bf1a02a28b79e62d92e8b7aa47311b57dfR1007" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/tensorflow/tf_importer.cpp\n**Change Type:** modified\n**Context:** PR #22448: changed names of permutations if Reshpe is in NHWC\n**Review Line:** 1137\n\n**Code Changes:**\n```diff\n {\n int order[] = {0, 3, 1, 2}; // Transform back to OpenCV's NCHW.\n- addPermuteLayer(order, name + \"/nchw\", inpId);\n+\n+ setName = changedType ? name : name + \"/nchw\";\n+ addPermuteLayer(order, setName, inpId);\n inpLayout = DATA_LAYOUT_NCHW;\n }\n \n```", + "output": "@fengyuentau, it's a refactoring PR that broke history a little bit." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/tensorflow/tf_importer.cpp\n**Change Type:** modified\n**Context:** PR #22448: changed names of permutations if Reshpe is in NHWC\n**Review Line:** 1137\n\n**Code Changes:**\n```diff\n {\n int order[] = {0, 3, 1, 2}; // Transform back to OpenCV's NCHW.\n- addPermuteLayer(order, name + \"/nchw\", inpId);\n+\n+ setName = changedType ? name : name + \"/nchw\";\n+ addPermuteLayer(order, setName, inpId);\n inpLayout = DATA_LAYOUT_NCHW;\n }\n \n```", + "output": "@rogday I've investigated this. So, there's a sample in the tests where the following condition is not met: https://github.com/opencv/opencv/pull/22448/files#diff-f5872c76075aa1f06f357d1dc27448bf1a02a28b79e62d92e8b7aa47311b57dfR1105 It's not met because newShape has the values [1, 2, 4, 3]. So, there will be no permutation before the actual Reshape node. The test sample is reshape_layer with the dnn/tensorflow/reshape_layer_net.pb graph, which has only an input and a Reshape layer (see the screenshot). ![Screenshot from 2022-09-06 19-20-01](https://user-images.githubusercontent.com/22273578/188699917-b3eba7a1-c0dd-40a5-86db-d0c0ab7b5960.png) The reference output has shape (1, 3, 2, 4). So, the loaded graph in OpenCV will look like: input->Reshape->Reshape/nchw. Also, this behaviour can be observed in the not_implemented_layer_net.pb graph, which has 2 inputs: one goes to not_implemented_layer, the second goes to reshape, and then the outputs of these layers are multiplied. So, the problem that you've covered may be defined like this: 1). The input layer of the reshape has NHWC layout. 2). For some reason (the question here is why we don't reshape if the condition newShape.at(1) == 1 is not met) the permutation from NHWC to NCHW layout is not performed. 3). The next Reshape is performed only with NCHW data layout. 4). 
Data is always permuted from NCHW to NHWC data format at the end if the source of Reshape in the original graph is NHWC, even if it wasn't permuted from NHWC to NCHW at the input. I don't see any reason why there's no permutation at the start, because usually you have a graph where all layers have one data layout. I can investigate it later, but this is not what this PR is about. Also, changing the last permutation layer's name to the original node name breaks several tests." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #22306: DNN: Add qgemm and squeeze op13 supported on ONNXImporter\n\n**Code Changes:**\n```diff\n@@ -192,6 +192,7 @@ class ONNXImporter\n void parseQSigmoid (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseQAvgPool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseQConcat (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n+ void parseQGemm (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n \n // '???' domain or '???' layer type\n void parseCustomLayer (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n@@ -2183,17 +2184,39 @@ void ONNXImporter::parseTranspose(LayerParams& layerParams, const opencv_onnx::N\n \n```", + "output": "Thank you! I see that `C` and `y_scale` tensors are being handled correctly now. But what about `y_zero_point`? I think we should assert that either `ninputs == 8` or `node_proto.input(8).empty()`, or that the 8th blob contains a scalar zero." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #22306: DNN: Add qgemm and squeeze op13 supported on ONNXImporter\n\n**Code Changes:**\n```diff\n@@ -192,6 +192,7 @@ class ONNXImporter\n void parseQSigmoid (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseQAvgPool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseQConcat (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n+ void parseQGemm (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n \n // '???' domain or '???' layer type\n void parseCustomLayer (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n@@ -2183,17 +2184,39 @@ void ONNXImporter::parseTranspose(LayerParams& layerParams, const opencv_onnx::N\n \n```", + "output": "Hi @rogday, I have checked the detail of the quantized structure in DNN. It was my fault, we have uniformly set the `output_scale` and `output_zeropoint` [here](https://github.com/opencv/opencv/blob/4.x/modules/dnn/src/onnx/onnx_importer.cpp#L756-L799). So we don't need to set them again during the node parsing stage. And in some cases, we need to calculate the `outputMultiplier`, that's why we need to get the `output_scale`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_spng.cpp\n**Change Type:** added\n**Context:** PR #22226: [GSoC 2022] spng encoder/decoder added as optional png codec\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,754 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"precomp.hpp\"\n+\n+#ifdef HAVE_SPNG\n+\n+/****************************************************************************************\\\n```", + "output": "Please check whether calling `spng_ctx_free` with NULL is expected behaviour." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_png.cpp\n**Change Type:** modified\n**Context:** PR #22226: [GSoC 2022] spng encoder/decoder added as optional png codec\n\n**Code Changes:**\n```diff\n@@ -5,7 +5,7 @@\n \n namespace opencv_test { namespace {\n \n-#ifdef HAVE_PNG\n+#if defined(HAVE_PNG) || defined(HAVE_SPNG)\n \n TEST(Imgcodecs_Png, write_big)\n {\n@@ -186,6 +186,225 @@ const string exif_files[] =\n```", + "output": "@alalek, I already investigated this issue. Since it's many tiny files, one big file is probably preferable for now. I suggest merging as-is." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindLibsGrfmt.cmake\n**Change Type:** modified\n**Context:** PR #22226: [GSoC 2022] spng encoder/decoder added as optional png codec\n\n**Code Changes:**\n```diff\n@@ -221,8 +221,21 @@ if(WITH_JASPER AND NOT HAVE_OPENJPEG)\n endif()\n endif()\n \n+if(WITH_SPNG)\n+ set(SPNG_LIBRARY libspng CACHE INTERNAL \"\")\n+ set(SPNG_LIBRARIES ${SPNG_LIBRARY})\n+ add_subdirectory(\"${OpenCV_SOURCE_DIR}/3rdparty/libspng\")\n+ set(SPNG_INCLUDE_DIR \"${${SPNG_LIBRARY}_SOURCE_DIR}\" CACHE INTERNAL \"\")\n+ set(SPNG_DEFINITIONS \"\")\n```", + "output": "@alalek, instead of putting generic comments, could you please post the snippet you have in mind? I personally cannot reverse-engineer it from your comment." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** 3rdparty/libspng/CMakeLists.txt\n**Change Type:** added\n**Context:** PR #22226: [GSoC 2022] spng encoder/decoder added as optional png codec\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,47 @@\n+# ----------------------------------------------------------------------------\n+# CMake file for libspng. See root CMakeLists.txt\n+#\n+# ----------------------------------------------------------------------------\n+\n+project(${SPNG_LIBRARY})\n+\n+set(CURR_INCLUDE_DIR \"${CMAKE_CURRENT_LIST_DIR}\")\n+set_property(GLOBAL PROPERTY SPNG_INCLUDE_DIR ${CURR_INCLUDE_DIR})\n```", + "output": "Can you please provide more information about how to handle it? I am not that familiar with CMake and dependency handling. @alalek" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/test/test_normal.cpp\n**Change Type:** modified\n**Context:** PR #22241: More tests for normals\n**Review Line:** 334\n\n**Code Changes:**\n```diff\n+ runCase(false, 3, true, meanErr, maxErr, threshold3d1d);\n+ }\n+ else\n+ {\n+ throw SkipTestException(\"Not implemented for anything except LINEMOD with scale\");\n+ }\n+}\n+\n+INSTANTIATE_TEST_CASE_P(RGBD_Normals, NormalsRandomPlanes,\n+::testing::Combine(::testing::Values(\n+ // 3 normal computer params + 5 thresholds:\n```", + "output": "Added `SkipTestException`, please check" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/fast_convolution/fast_convolution.cpp\n**Change Type:** added\n**Context:** PR #21910: DNN: Accelerating convolution\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,694 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// This file is modified from the ficus (https://github.com/vpisarev/ficus/blob/master/lib/NN/OpConv.fx).\n+// Here is the original license:\n+/*\n+ This file is a part of ficus language project.\n+ See ficus/LICENSE for the licensing terms\n```", + "output": "Thanks for code reviewing, I will fix these issues in the next update." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/fast_convolution/depthwise_convolution.cpp\n**Change Type:** added\n**Context:** PR #21910: DNN: Accelerating convolution\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,385 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// This file is modified from the ficus (https://github.com/vpisarev/ficus/blob/master/lib/NN/OpConv.fx).\n+// Here is the original license:\n+/*\n+ This file is a part of ficus language project.\n+ See ficus/LICENSE for the licensing terms\n```", + "output": "@vpisarev Please verify, as this integration contradicts 3rdparty original files and/or 3rdparty adopted files." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/fast_convolution/depthwise_convolution.cpp\n**Change Type:** added\n**Context:** PR #21910: DNN: Accelerating convolution\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,385 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// This file is modified from the ficus (https://github.com/vpisarev/ficus/blob/master/lib/NN/OpConv.fx).\n+// Here is the original license:\n+/*\n+ This file is a part of ficus language project.\n+ See ficus/LICENSE for the licensing terms\n```", + "output": "Thanks for code reviewing. Any advice on this? I don't know how to modify it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/fast_convolution/depthwise_convolution.cpp\n**Change Type:** added\n**Context:** PR #21910: DNN: Accelerating convolution\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,385 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// This file is modified from the ficus (https://github.com/vpisarev/ficus/blob/master/lib/NN/OpConv.fx).\n+// Here is the original license:\n+/*\n+ This file is a part of ficus language project.\n+ See ficus/LICENSE for the licensing terms\n```", + "output": "@alalek, could you please explain your comment? What's the contradiction? I can confirm that the code has been borrowed from Ficus, licensed under the Apache 2 license." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/fast_convolution/fast_convolution.simd.hpp\n**Change Type:** added\n**Context:** PR #21910: DNN: Accelerating convolution\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,342 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#ifndef OPENCV_FAST_CONVOLUTION_SIMD_HPP\n+#define OPENCV_FAST_CONVOLUTION_SIMD_HPP\n+\n+#include \"opencv2/core/hal/intrin.hpp\"\n+#include \n```", + "output": "@zihaomu, it should be a compile-time error. If the user changes FAST_CONV_MR/FAST_CONV_NR, he/she should also modify the optimized loop or explicitly disable it and switch to the C implementation." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #21910: DNN: Accelerating convolution\n**Review Line:** 74\n\n**Code Changes:**\n```diff\n@@ -71,6 +71,8 @@ using namespace cv::dnn::ocl4dnn;\n using namespace cv::dnn::cuda4dnn;\n #endif\n \n+#include \"fast_convolution/fast_convolution.hpp\"\n+\n namespace cv\n {\n namespace dnn\n@@ -253,11 +255,14 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n```", + "output": "Need to take a look at: - test failures in Linux Debug configuration - test failures in Linux AVX2 configuration (`-DCPU_BASELINE=AVX2`) - looks like unconditional doubling of weights storage requires more memory and several Win32 tests started to fail with an OOM message. @vpisarev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #21910: DNN: Accelerating convolution\n**Review Line:** 74\n\n**Code Changes:**\n```diff\n@@ -71,6 +71,8 @@ using namespace cv::dnn::ocl4dnn;\n using namespace cv::dnn::cuda4dnn;\n #endif\n \n+#include \"fast_convolution/fast_convolution.hpp\"\n+\n namespace cv\n {\n namespace dnn\n@@ -253,11 +255,14 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n```", + "output": "Thanks for code reviewing. The failure in Linux Debug and Linux AVX2 only occurs in the quantized model. Since parameters of int8 layers rely on the output of fp32 models, we can modify the threshold to solve it in a short time. For Win32, I'm looking for a way around it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** apps/interactive-calibration/calibPipeline.cpp\n**Change Type:** modified\n**Context:** PR #22135: Add option to force reopen camera in interactive calibration tool\n**Review Line:** 83\n\n**Code Changes:**\n```diff\n+\n+ CV_LOG_INFO(NULL, \"VideoCapture error: trying to reopen...\");\n+ do\n+ {\n+ open_camera();\n+ } while (!mCapture.isOpened() || !mCapture.grab());\n+\n+ CV_LOG_INFO(NULL, \"VideoCapture error: reopened successfully.\");\n+ auto newSize = cv::Size((int)mCapture.get(cv::CAP_PROP_FRAME_WIDTH), (int)mCapture.get(cv::CAP_PROP_FRAME_HEIGHT));\n+ CV_CheckEQ(mImageSize, newSize, \"Camera image size changed after reopening.\");\n+ }\n```", + "output": "Thank you, I added the \"trying to reopen\" message only before the loop, since thousands of messages per second about which attempt it is aren't very informative, in my opinion." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/fast_convolution/winograd_3x3s1_f63.cpp\n**Change Type:** modified\n**Context:** PR #22183: DNN: ARMv7 compatible fastConv\n**Review Line:** 1095\n\n**Code Changes:**\n```diff\n // init 16 registers. 
FMA/load ratio = 32/12\n- float32x4_t r00 = vdupq_n_f32(0.0f), r01 = r00, r02 = r00, r03 = r00;\n- float32x4_t r04 = r00, r05 = r00, r06 = r00, r07 = r00;\n- float32x4_t r08 = r00, r09 = r00, r10 = r00, r11 = r00;\n- float32x4_t r12 = r00, r13 = r00, r14 = r00, r15 = r00;\n+ float32x2_t q00 = vdup_n_f32(0.0f), q01 = q00, q02 = q00, q03 = q00,\n+ q04 = q00, q05 = q00, q06 = q00, q07 = q00;\n+\n+ float32x4_t r04 = vdupq_n_f32(0.0f), r05 = r04, r06 = r04, r07 = r04;\n+ float32x4_t r08 = r04, r09 = r04, r10 = r04, r11 = r04;\n+ float32x4_t r12 = r04, r13 = r04, r14 = r04, r15 = r04;\n```", + "output": "Thanks for code reviewing. As @nihui commented, `vmlaq_lane_f32` is the best substitute for `vfmaq_laneq_f32` on the ARMv7 platform. Another option is `vmlaq_n_f32`, but it is compiled into two ARM instructions, `fmul.4s v24, v22, v20[0]` and `fadd.4s v3, v3, v24`, while `vmlaq_lane_f32` is compiled into a single ARM instruction, `vmla.f32 q8, q5, d0[1]`, on ARMv7. At the same time, two consecutive half-length register loads are converted into one 128-bit load when loading data into registers, so the data load time is the same." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #21372: Add per_tensor_quantize to int8 quantize\n\n**Code Changes:**\n```diff\n@@ -2174,26 +2174,36 @@ class ConvolutionLayerImpl CV_FINAL : public BaseConvolutionLayerImpl\n Mat weightsQuantized(weightsMat.rows, weightsMat.cols, CV_8S);\n Mat biasQuantized(1, numOutput, CV_32S);\n Mat outputMultiplier(1, numOutput, CV_32F);\n- double realMin, realMax, weightsScale;\n+ bool perChannel = params.get(\"per_channel\", true);\n \n- for( int i = 0; i < numOutput; i++ )\n+ if (perChannel) // per-Channel quantization.\n {\n```", + "output": "Thanks for code reviewing. I will try to improve the code based on your comment." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/fully_connected_layer.cpp\n**Change Type:** modified\n**Context:** PR #21372: Add per_tensor_quantize to int8 quantize\n\n**Code Changes:**\n```diff\n@@ -619,26 +619,36 @@ class FullyConnectedLayerImpl CV_FINAL : public InnerProductLayer\n Mat weightsQuantized(weightsMat.rows, weightsMat.cols, CV_8S);\n Mat biasQuantized(1, numOutput, CV_32S);\n Mat outputMultiplier(1, numOutput, CV_32F);\n+ bool perChannel = params.get(\"per_channel\", true);\n \n- double realMin, realMax, weightsScale;\n- for( int i = 0; i < numOutput; i++ )\n+ if (perChannel) // per-Channel quantization.\n {\n```", + "output": "Thanks for the code review, I will revise the code and submit it soon." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/net_quantization.cpp\n**Change Type:** modified\n**Context:** PR #21372: Add per_tensor_quantize to int8 quantize\n\n**Code Changes:**\n```diff\n@@ -33,7 +33,7 @@ void getQuantizationParams(const Mat& src, std::vector& scales, std::vect\n }\n \n // FIXIT drop from inference API\n-Net Net::Impl::quantize(InputArrayOfArrays calibData, int inputsDtype, int outputsDtype)\n+Net Net::Impl::quantize(InputArrayOfArrays calibData, int inputsDtype, int outputsDtype, bool perChannel)\n {\n // Net can be quantized only once.\n if (netWasQuantized)\n@@ -192,6 +192,10 @@ Net Net::Impl::quantize(InputArrayOfArrays calibData, int inputsDtype, int outpu\n```", + "output": "Thanks for code reviewing. How about removing `Net Net::Impl::quantize(InputArrayOfArrays calibData, int inputsDtype, int outputsDtype)` and just keeping `Net Net::Impl::quantize(InputArrayOfArrays calibData, int inputsDtype, int outputsDtype, bool perTensor = false)`?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/net.cpp\n**Change Type:** modified\n**Context:** PR #21372: Add per_tensor_quantize to int8 quantize\n\n**Code Changes:**\n```diff\n@@ -115,12 +115,12 @@ void Net::forward(std::vector>& outputBlobs,\n }\n \n // FIXIT drop from inference API\n-Net Net::quantize(InputArrayOfArrays calibData, int inputsDtype, int outputsDtype)\n+Net Net::quantize(InputArrayOfArrays calibData, int inputsDtype, int outputsDtype, bool perChannel)\n {\n CV_TRACE_FUNCTION();\n CV_Assert(impl);\n CV_Assert(!empty());\n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21818: Add prefixes to layer names and layer output names\n**Review Line:** 747\n\n**Code Changes:**\n```diff\n- return node_proto.name();\n+ if (useLegacyNames)\n+ return node_proto.name();\n+ return cv::format(\"onnx_node!%s\", node_proto.name().c_str());\n }\n for (int i = 0; i < node_proto.output_size(); ++i)\n {\n@@ -735,7 +752,9 @@ const std::string& extractNodeName(const opencv_onnx::NodeProto& node_proto)\n // the second method is to use an empty string in place of an input or output name.\n if (!name.empty())\n {\n```", + "output": "@rogday Could you please prepare a small test case for this problem? I will try to fix it" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21818: Add prefixes to layer names and layer output names\n**Review Line:** 747\n\n**Code Changes:**\n```diff\n- return node_proto.name();\n+ if (useLegacyNames)\n+ return node_proto.name();\n+ return cv::format(\"onnx_node!%s\", node_proto.name().c_str());\n }\n for (int i = 0; i < node_proto.output_size(); ++i)\n {\n@@ -735,7 +752,9 @@ const std::string& extractNodeName(const opencv_onnx::NodeProto& node_proto)\n // the second method is to use an empty string in place of an input or output name.\n if (!name.empty())\n {\n```", + "output": "@alalek, sure, you could use tests at https://github.com/opencv/opencv_extra/pull/968." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/mirrors/gitcode.cmake\n**Change Type:** modified\n**Context:** PR #21811: Support mirrors for TIM-VX and others 3rdparty components in opencv_contrib\n\n**Code Changes:**\n```diff\n@@ -1,14 +1,25 @@\n-# Tengine (Download via commit id)\n+######\n+# Download via commit id\n+######\n+# Tengine\n ocv_update(TENGINE_PKG_MD5_GITCODE 1b5908632b557275cd6e85b0c03f9690)\n ocv_update(TENGINE_PKG_MD5_ORIGINAL 23f61ebb1dd419f1207d8876496289c5) # same as tengine_md5sum for TENGINE commit of e89cf8870de2ff0a80cfe626c0b52b2a16fb302e\n+# NVIDIA_OPTICAL_FLOW\n+ocv_update(NVIDIA_OPTICAL_FLOW_PKG_MD5_GITCODE 8d5b7eeb24d6ca9c6bcfdff4196d5b47)\n```", + "output": "Thanks for the suggestion! Code has been updated to your suggestion." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #21036: dnn: TIM-VX NPU backend support\n**Review Line:** 99\n\n**Code Changes:**\n```diff\n@@ -95,7 +96,8 @@ CV__DNN_INLINE_NS_BEGIN\n DNN_TARGET_FPGA, //!< FPGA device with CPU fallbacks using Inference Engine's Heterogeneous plugin.\n DNN_TARGET_CUDA,\n DNN_TARGET_CUDA_FP16,\n- DNN_TARGET_HDDL\n+ DNN_TARGET_HDDL,\n+ DNN_TARGET_NPU,\n };\n \n CV_EXPORTS std::vector< std::pair > getAvailableBackends();\n@@ -321,6 +323,19 @@ CV__DNN_INLINE_NS_BEGIN\n```", + "output": "Hi, @alalek. Thank you for code review. In my test, if I move `DNN_BACKEND_TIMVX` to the end of `enum Backend`, I will get an error. If I do the following, the value of `DNN_BACKEND_TIMVX` in the OpenCV library is `1000001`, while the value of `DNN_BACKEND_TIMVX` when calling the OpenCV library is `6`. This may be caused by `#ifdef __OPENCV_BUILD` flag. 
In this case, I cannot use `DNN_BACKEND_TIMVX` to trigger the TIM-VX backend; I can only use `1000001` to do it. It happens on Ubuntu on ARM, but not on Ubuntu on x86. ```C++ enum Backend { //! DNN_BACKEND_DEFAULT equals to DNN_BACKEND_INFERENCE_ENGINE if //! OpenCV is built with Intel's Inference Engine library or //! DNN_BACKEND_OPENCV otherwise. DNN_BACKEND_DEFAULT = 0, DNN_BACKEND_HALIDE, DNN_BACKEND_INFERENCE_ENGINE, //!< Intel's Inference Engine computational backend //!< @sa setInferenceEngineBackendType DNN_BACKEND_OPENCV, DNN_BACKEND_VKCOM, DNN_BACKEND_CUDA, #ifdef __OPENCV_BUILD DNN_BACKEND_INFERENCE_ENGINE_NGRAPH = 1000000, // internal - use DNN_BACKEND_INFERENCE_ENGINE + setInferenceEngineBackendType() DNN_BACKEND_INFERENCE_ENGINE_NN_BUILDER_2019, // internal - use DNN_BACKEND_INFERENCE_ENGINE + setInferenceEngineBackendType() #endif DNN_BACKEND_TIMVX }; ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/op_timvx.cpp\n**Change Type:** added\n**Context:** PR #21036: dnn: TIM-VX NPU backend support\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,931 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// Copyright (C) 2019-2021, Shenzhen Institute of Artificial Intelligence and\n+// Robotics for Society, all rights reserved.\n+// Third party copyrights are property of their respective owners.\n+\n+#include \"precomp.hpp\"\n```", + "output": "Thanks, it was fixed in the latest update." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindTIMVX.cmake\n**Change Type:** added\n**Context:** PR #21036: dnn: TIM-VX NPU backend support\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,69 @@\n+set(TIMVX_INSTALL_DIR \"\" CACHE PATH \"Path to libtim-vx installation\")\n+set(VIVANTE_SDK_DIR \"\" CACHE PATH \"Path to VIVANTE SDK needed by TIM-VX.\")\n+set(VIVANTE_SDK_LIB_CANDIDATES \"OpenVX;VSC;GAL;ArchModelSw;NNArchPerf\" CACHE STRING \"VIVANTE SDK library candidates\")\n+\n+# Ensure VIVANTE SDK library candidates are present in given search path\n+function(find_vivante_sdk_libs _viv_notfound _viv_search_path)\n+ foreach(one ${VIVANTE_SDK_LIB_CANDIDATES})\n+ #NO_DEFAULT_PATH is used to ensure VIVANTE SDK libs are from one only source\n+ find_library(VIV_${one}_LIB ${one} PATHS \"${_viv_search_path}/lib\" NO_DEFAULT_PATH)\n```", + "output": "@alalek Our code needs a header (line 83) & some shared libraries to run. So the header is installed to `/opencv_install/include/CL`, and the libraries are installed to `/opencv_install/lib`. Based on the [CI make install output](https://pullrequest.opencv.org/buildbot/builders/precommit_linux64/builds/34886/steps/compile%20release/logs/stdio), I can see the header and the libraries are installed to the destination as I set. However, it turns out `opencv_test_dnn` cannot locate the header. Any suggestions?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** .github/workflows/PR-4.x-ARM64.yaml\n**Change Type:** modified\n**Context:** PR #21907: Workflow to build and test on ARM64 for 4.x branch\n\n**Code Changes:**\n```diff\n@@ -1,7 +1,9 @@\n name: PR:4.x ARM64\n \n-# TODO: enable pipeline after 4.x update\n-on: workflow_dispatch\n+on:\n+ pull_request:\n+ branches:\n+ - 4.x\n \n```", + "output": "To be honest I don't remember why they were disabled. @asmorkalov, do you remember? Btw, we can turn them on and check." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/perf/opencl/perf_matop.cpp\n**Change Type:** modified\n**Context:** PR #21937: Fix warnings for clang15\n**Review Line:** 236\n\n**Code Changes:**\n```diff\n@@ -233,6 +233,7 @@ PERF_TEST_P_(OpenCLBuffer, cpu_read)\n for (size_t x_bytes = 0; x_bytes < width_bytes; x_bytes++)\n counter += (unsigned)(ptr[x_bytes]);\n }\n+ (void)counter; // To avoid -Wunused-but-set-variable\n }\n \n SANITY_CHECK_NOTHING();\n```", + "output": "Thank you for your review! If all lines related with `counter` are removed, this performance test function will be lost all meaning. So I think this `counter` variable should not be removed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/trace.cpp\n**Change Type:** modified\n**Context:** PR #21937: Fix warnings for clang15\n\n**Code Changes:**\n```diff\n@@ -988,15 +988,13 @@ void parallelForFinalize(const Region& rootRegion)\n std::vector threads_ctx;\n getTraceManager().tls.gather(threads_ctx);\n RegionStatistics parallel_for_stat;\n- int threads = 0;\n for (size_t i = 0; i < threads_ctx.size(); i++)\n {\n TraceManagerThreadLocal* child_ctx = threads_ctx[i];\n \n if (child_ctx && child_ctx->stackTopRegion() == &rootRegion)\n```", + "output": "Thank you for your comment, I agree with you! I will remove the `threads` variable. And I'm sorry, committing the fixed code will take some time because I will test compiling with several versions of gcc and clang." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/src/pointcloud/load_point_cloud.cpp\n**Change Type:** added\n**Context:** PR #20471: GSoC module to save and load point cloud\n**Review Line:** 18\n\n**Code Changes:**\n```diff\n+\n+#include \n+\n+namespace cv {\n+\n+#if OPENCV_HAVE_FILESYSTEM_SUPPORT\n+\n+static PointCloudDecoder findDecoder(const String &filename)\n+{\n+ auto file_ext = getExtension(filename);\n+ if (file_ext == \"obj\" || file_ext == \"OBJ\")\n```", + "output": "According to discussion w/ @asmorkalov: > OpenCV can be built without FS support for iOS and some other bare-metal configurations. The behavior is managed by `OPENCV_HAVE_FILESYSTEM_SUPPORT` macro. > > * See filesystem support in core: https://github.com/opencv/opencv/blob/master/modules/core/src/utils/filesystem.cpp > * Please add alternative branch with \"not supported\" exception like here: https://github.com/opencv/opencv/blob/master/modules/core/src/glob.cpp#L286. 
API call should not disappear from interface. here: https://github.com/opencv/opencv/pull/20471#discussion_r678880179" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21608: DNN: add depth2space and space2depth layer for onnx importer\n\n**Code Changes:**\n```diff\n@@ -175,6 +175,7 @@ class ONNXImporter\n void parseSoftMax (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseDetectionOutput (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseCumSum (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n+ void parseDepthToSpace (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseSimpleLayers (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n \n // Domain: com.microsoft\n@@ -3183,6 +3184,85 @@ void ONNXImporter::parseCumSum(LayerParams& layerParams, const opencv_onnx::Node\n addLayer(layerParams, node_proto);\n```", + "output": "Thanks for the code review. All comments mentioned issues have been fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21608: DNN: add depth2space and space2depth layer for onnx importer\n\n**Code Changes:**\n```diff\n@@ -175,6 +175,7 @@ class ONNXImporter\n void parseSoftMax (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseDetectionOutput (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseCumSum (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n+ void parseDepthToSpace (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseSimpleLayers (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n \n // Domain: com.microsoft\n@@ -3183,6 +3184,85 @@ void ONNXImporter::parseCumSum(LayerParams& layerParams, const opencv_onnx::Node\n addLayer(layerParams, node_proto);\n```", + "output": "It's my fault. Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21608: DNN: add depth2space and space2depth layer for onnx importer\n\n**Code Changes:**\n```diff\n@@ -175,6 +175,7 @@ class ONNXImporter\n void parseSoftMax (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseDetectionOutput (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseCumSum (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n+ void parseDepthToSpace (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseSimpleLayers (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n \n // Domain: com.microsoft\n@@ -3183,6 +3184,85 @@ void ONNXImporter::parseCumSum(LayerParams& layerParams, const opencv_onnx::Node\n addLayer(layerParams, node_proto);\n```", + "output": "Thanks for code reviewing. All issues have been fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** .github/workflows/PR-3.4-U20.yaml\n**Change Type:** added\n**Context:** PR #21875: Added workflow for Github Actions to build and test OpenCV on Linux\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,167 @@\n+name: PR:3.4 U20\n+\n+on:\n+ pull_request:\n+ branches:\n+ - 3.4\n+\n+env:\n+ EXTRA_CMAKE_OPTIONS: '-DBUILD_DOCS=ON -DPYTHON_DEFAULT_EXECUTABLE=/usr/bin/python3 -DBUILD_EXAMPLES=ON -DOPENCV_ENABLE_NONFREE=ON -DENABLE_CCACHE=OFF'\n```", + "output": "Thank you for the feedback, I've split tests by steps for better readability." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDownload.cmake\n**Change Type:** modified\n**Context:** PR #21531: Support downloading 3rdparty resources from Gitcode & Gitlab-style mirrors\n\n**Code Changes:**\n```diff\n@@ -37,6 +37,50 @@ file(WRITE \"${OPENCV_DOWNLOAD_LOG}\" \"#use_cache \\\"${OPENCV_DOWNLOAD_PATH}\\\"\\n\")\n file(REMOVE \"${OPENCV_DOWNLOAD_WITH_CURL}\")\n file(REMOVE \"${OPENCV_DOWNLOAD_WITH_WGET}\")\n \n+ocv_check_environment_variables(OPENCV_DOWNLOAD_MIRROR_ID)\n+\n+function(ocv_init_download_mirror)\n+ if(NOT DEFINED OPENCV_DOWNLOAD_MIRROR_ID)\n+ # Run `git remote get-url origin` to get remote source\n+ execute_process(\n```", + "output": "> If user specifies this then we should not run git check. Agree. Will do. > I believe variable name should be `OPENCV_DOWNLOAD_MIRROR_ID`. A domain link (such as github.com, gitcode.net, etc.) should be assigned to `OPENCV_MIRROR_CUSTOM` if one wants to use a customized mirror. It does not make sense to me if the CMake variable name is renamed with `*_ID`. If we are using `OPENCV_DOWNLOAD_MIRROR_ID`, the initialization process can be changed as follows: ```cmake set(OPENCV_DOWNLOAD_MIRROR_ID \"m\" CACHE STRING \"\") if(\"m${OPENCV_DOWNLOAD_MIRROR_ID}\" STREQUAL \"mcustom\") set(__HOST \"custom\") elseif(\"m${OPENCV_DOWNLOAD_MIRROR_ID}\" STREQUAL \"mgitcode\") set(__HOST \"gitcode\") else() # detect source tree via `git remote get-url origin` endif() if((DEFINED __HOST) AND (EXISTS \"${CMAKE_SOURCE_DIR}/cmake/mirrors/${__HOST}.cmake\")) ocv_cmake_hook_append(OCV_DOWNLOAD_MIRROR_${__HOST} \"${CMAKE_CURRENT_SOURCE_DIR}/cmake/mirrors/${__HOST}.cmake\") else() # hint: not recognized host endif() ``` In this case, domain links, download URLs, MD5s and filenames can all be organized in each `${__HOST}.cmake`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21601: DNN: Support some reduce layers of ONNX on CPU backend\n\n**Code Changes:**\n```diff\n@@ -122,6 +122,7 @@ class ONNXImporter\n void parseMaxUnpool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseMaxPool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseAveragePool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n+ void parseGlobalPool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseReduce (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseSlice (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseSplit (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n@@ -1087,7 +1088,7 @@ void ONNXImporter::parseAveragePool(LayerParams& layerParams, const opencv_onnx:\n addLayer(layerParams, node_proto);\n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21601: DNN: Support some reduce layers of ONNX on CPU backend\n\n**Code Changes:**\n```diff\n@@ -122,6 +122,7 @@ class ONNXImporter\n void parseMaxUnpool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseMaxPool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseAveragePool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n+ void parseGlobalPool (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseReduce (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseSlice (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n void parseSplit (LayerParams& layerParams, const opencv_onnx::NodeProto& node_proto);\n@@ -1087,7 +1088,7 @@ void ONNXImporter::parseAveragePool(LayerParams& layerParams, const opencv_onnx:\n addLayer(layerParams, node_proto);\n```", + "output": "Thanks for the detailed code review, I learned a lot from your comments. All errors are fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/all_layers.hpp\n**Change Type:** modified\n**Context:** PR #21601: DNN: Support some reduce layers of ONNX on CPU backend\n\n**Code Changes:**\n```diff\n@@ -325,6 +325,20 @@ CV__DNN_INLINE_NS_BEGIN\n static Ptr create(const LayerParams& params);\n };\n \n+ class CV_EXPORTS ReduceLayer : public Layer\n+ {\n+ public:\n+ int reduceType;\n+ std::vector reduceDims;\n+ static Ptr create(const LayerParams& params);\n```", + "output": "Thanks for your code review, all comments have been fixed." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #21601: DNN: Support some reduce layers of ONNX on CPU backend\n**Review Line:** 1223\n\n**Code Changes:**\n```diff\n+ shouldDelete[i] = temp;\n \n- node_proto.set_output(0, poolLp.name);\n- addLayer(poolLp, node_proto);\n+ std::swap(perm[index], perm[i]);\n+ std::swap(inpShape[index], inpShape[i]);\n+ needPermuet = true;\n+ }\n+ else\n+ break;\n }\n```", + "output": "Thanks a lot, I will try to solve it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/int8layers/reduce_layer.cpp\n**Change Type:** added\n**Context:** PR #21601: DNN: Support some reduce layers of ONNX on CPU backend\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,213 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \"layers_common.hpp\"\n+\n+#include \n+#include \n```", + "output": "Thanks for the code review, I have removed all `virtual` stuff in the base class, both `layers/reduce_layer.cpp` and `int8_layers/reduce_layer.cpp`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/int8layers/reduce_layer.cpp\n**Change Type:** added\n**Context:** PR #21601: DNN: Support some reduce layers of ONNX on CPU backend\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,213 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \"layers_common.hpp\"\n+\n+#include \n+#include \n```", + "output": "@rogday Thanks for the code review; I have removed the base class and refactored the code with `struct` instead of `class`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #21692: add apply softmax option to ClassificationModel\n\n**Code Changes:**\n```diff\n@@ -1310,6 +1310,9 @@ CV__DNN_INLINE_NS_BEGIN\n class CV_EXPORTS_W_SIMPLE ClassificationModel : public Model\n {\n public:\n+ CV_DEPRECATED_EXTERNAL // avoid using in C++ code, will be moved to \"protected\" (need to fix bindings first)\n+ ClassificationModel();\n+\n /**\n * @brief Create classification model from network represented in one of the supported formats.\n * An order of @p model and @p config arguments does not matter.\n```", + "output": "Please add \"empty\" line after `@brief` statement. http://pullrequest.opencv.org/buildbot/export/pr/21692/docs/d0/dd5/classcv_1_1dnn_1_1ClassificationModel.html" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #21692: add apply softmax option to ClassificationModel\n\n**Code Changes:**\n```diff\n@@ -1310,6 +1310,9 @@ CV__DNN_INLINE_NS_BEGIN\n class CV_EXPORTS_W_SIMPLE ClassificationModel : public Model\n {\n public:\n+ CV_DEPRECATED_EXTERNAL // avoid using in C++ code, will be moved to \"protected\" (need to fix bindings first)\n+ ClassificationModel();\n+\n /**\n * @brief Create classification model from network represented in one of the supported formats.\n * An order of @p model and @p config arguments does not matter.\n```", + "output": "Overall, I have unified to `Softmax`. Thanks," + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/model.cpp\n**Change Type:** modified\n**Context:** PR #21692: add apply softmax option to ClassificationModel\n\n**Code Changes:**\n```diff\n@@ -197,28 +197,95 @@ void Model::predict(InputArray frame, OutputArrayOfArrays outs) const\n }\n \n \n+class ClassificationModel_Impl : public Model::Impl\n+{\n+public:\n+ virtual ~ClassificationModel_Impl() {}\n+ ClassificationModel_Impl() : Impl() {}\n+ ClassificationModel_Impl(const ClassificationModel_Impl&) = delete;\n```", + "output": "@alalek Is the same true for ClassificationModel::softmax()? ``` class ClassificationModel_Impl : public Model::Impl { public: static void softmax(InputArray inblob, OutputArray outblob) { } } void ClassificationModel::softmax(InputArray inblob, OutputArray outblob) { ClassificationModel_Impl::softmax(inblob, outblob); } ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/include/opencv2/3d.hpp\n**Change Type:** modified\n**Context:** PR #21018: New LevMarq implementation\n\n**Code Changes:**\n```diff\n@@ -468,77 +468,267 @@ can be found in:\n */\n CV_EXPORTS_W void Rodrigues( InputArray src, OutputArray dst, OutputArray jacobian = noArray() );\n \n-/** Levenberg-Marquardt solver. Starting with the specified vector of parameters it\n- optimizes the target vector criteria \"err\"\n- (finds local minima of each target vector component absolute value).\n \n- When needed, it calls user-provided callback.\n+/** @brief Type of matrix used in LevMarq solver\n```", + "output": "``` @brief one liner details ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/dualquaternion.inl.hpp\n**Change Type:** modified\n**Context:** PR #21018: New LevMarq implementation\n**Review Line:** 161\n\n**Code Changes:**\n```diff\n@@ -158,7 +158,7 @@ inline Quat DualQuat::getRotation(QuatAssumeType assumeUnit) const\n template \n inline Vec DualQuat::getTranslation(QuatAssumeType assumeUnit) const\n {\n- Quat trans = 2.0 * (getDualPart() * getRealPart().inv(assumeUnit));\n+ Quat trans = T(2.0) * (getDualPart() * getRealPart().inv(assumeUnit));\n return Vec{trans[1], trans[2], trans[3]};\n }\n \n```", + "output": "Backported to #21319 and [#3137@contrib](https://github.com/opencv/opencv_contrib/pull/3137)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/scene_text_recognition.cpp\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,144 @@\n+#include \n+#include \n+\n+#include \n+#include \n+#include \n+\n+using namespace cv;\n+using namespace cv::dnn;\n```", + "output": "Fixed it. I got the right result with index -1, so I thought a negative index may be supported now. :( Thank you." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_model.cpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -110,6 +110,155 @@ class Test_Model : public DNNTestLayer\n model.segment(frame, mask);\n normAssert(mask, exp, \"\", norm, norm);\n }\n+\n+ void testTextRecognitionModel(const std::string& weights, const std::string& cfg,\n+ const std::string& imgPath, const std::string& seq,\n+ const std::string& decodeType, const std::vector& vocabulary,\n+ const Size& size = {-1, -1}, Scalar mean = Scalar(),\n+ double scale = 1.0, bool swapRB = false, bool crop = false)\n```", + "output": "I have already put the data into opencv/opencv_extra, but it has not been merged yet. https://github.com/opencv/opencv_extra/pull/773 The name of the image has been changed to \"text_rec_test.png\", and I will push again when the data is ready. Thanks for your review." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/scene_text_detection.cpp\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n**Review Line:** 88\n\n**Code Changes:**\n```diff\n+ while (std::getline(testList, testImgPath)) {\n+ String imgPath = evalDataPath + \"/test_images/\" + testImgPath;\n+ std::cout << \"Image Path: \" << imgPath << std::endl;\n+\n+ Mat frame = imread(samples::findFile(imgPath), IMREAD_COLOR);\n+ CV_Assert(!frame.empty());\n+ Mat src = frame.clone();\n+\n+ // Inference\n+ std::vector> results;\n+ detector.detect(frame, results);\n```", + "output": "I feel confused that this file is only a sample for scene text detection, not an API. Is there anything related to the test? Why should I change CV_Assert into SkipTestException? Thank you." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_model.cpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -110,6 +110,155 @@ class Test_Model : public DNNTestLayer\n model.segment(frame, mask);\n normAssert(mask, exp, \"\", norm, norm);\n }\n+\n+ void testTextRecognitionModel(const std::string& weights, const std::string& cfg,\n+ const std::string& imgPath, const std::string& seq,\n+ const std::string& decodeType, const std::vector& vocabulary,\n+ const Size& size = {-1, -1}, Scalar mean = Scalar(),\n+ double scale = 1.0, bool swapRB = false, bool crop = false)\n```", + "output": "Thank you. Sadly, I still get an error in \"Linux x64 Debug\", but I find that the test_dnn in \"Linux x64\" passed. I am not sure about the difference between these two tests. 
Is there any detailed log showing which line throws the error? I can only get \"error: (-215: Assertion failed) dims <= 2 in function 'at' thrown in the test body.\" in https://pullrequest.opencv.org/buildbot/builders/precommit_linux64_no_opt/builds/24676/steps/test_dnn/logs/stdio I think I have tested the API successfully, and you can see more information in https://github.com/HannibalAPE/opencv/blob/text_det_recog_demo/doc/tutorials/dnn/dnn_scene_text_det_and_rec/scene_text_recognition.markdown" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_model.cpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -110,6 +110,155 @@ class Test_Model : public DNNTestLayer\n model.segment(frame, mask);\n normAssert(mask, exp, \"\", norm, norm);\n }\n+\n+ void testTextRecognitionModel(const std::string& weights, const std::string& cfg,\n+ const std::string& imgPath, const std::string& seq,\n+ const std::string& decodeType, const std::vector& vocabulary,\n+ const Size& size = {-1, -1}, Scalar mean = Scalar(),\n+ double scale = 1.0, bool swapRB = false, bool crop = false)\n```", + "output": "@alalek Thanks for your reminder." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -1309,6 +1309,255 @@ CV__DNN_INLINE_NS_BEGIN\n float confThreshold = 0.5f, float nmsThreshold = 0.0f);\n };\n \n+\n+/** @brief This class represents high-level API for text recognition networks.\n+ *\n+ * TextRecognitionModel allows to set params for preprocessing input image.\n+ * TextRecognitionModel creates net from file with trained weights and config,\n+ * sets preprocessing input, runs forward pass and return recognition result.\n```", + "output": "I have changed the private functions into static local functions in the dnn/src/model.cpp Thank you for your help, and please check whether it is ok." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -1309,6 +1309,255 @@ CV__DNN_INLINE_NS_BEGIN\n float confThreshold = 0.5f, float nmsThreshold = 0.0f);\n };\n \n+\n+/** @brief This class represents high-level API for text recognition networks.\n+ *\n+ * TextRecognitionModel allows to set params for preprocessing input image.\n+ * TextRecognitionModel creates net from file with trained weights and config,\n+ * sets preprocessing input, runs forward pass and return recognition result.\n```", + "output": "I have modified the `TextRecognitionModel` class, please check it again." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/dnn/dnn_text_spotting/dnn_text_spotting.markdown\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,316 @@\n+# High Level API: TextDetectionModel and TextRecognitionModel {#tutorial_dnn_text_spotting}\n+\n+@prev_tutorial{tutorial_dnn_OCR}\n+\n+## Introduction\n+In this tutorial, we will introduce the APIs for TextRecognitionModel and TextDetectionModel in detail.\n+\n+---\n+#### TextRecognitionModel:\n```", + "output": "Thank you. I have tested it and changed it to `Scalar(127.5, 127.5, 127.5)`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/scene_text_detection.cpp\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,151 @@\n+#include \n+#include \n+#include \n+\n+#include \n+#include \n+#include \n+\n+using namespace cv;\n```", + "output": "I will check all my code to avoid it, thank you." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/model.cpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -4,7 +4,6 @@\n \n #include \"precomp.hpp\"\n #include \n-#include \n #include \n #include \n \n@@ -37,9 +36,10 @@ struct Model::Impl\n virtual void setPreferableBackend(Backend backendId) { net.setPreferableBackend(backendId); }\n```", + "output": "I have updated it. 
Please check whether I have understood your idea correctly. Thank you." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -1309,6 +1309,255 @@ CV__DNN_INLINE_NS_BEGIN\n float confThreshold = 0.5f, float nmsThreshold = 0.0f);\n };\n \n+\n+/** @brief This class represents high-level API for text recognition networks.\n+ *\n+ * TextRecognitionModel allows to set params for preprocessing input image.\n+ * TextRecognitionModel creates net from file with trained weights and config,\n+ * sets preprocessing input, runs forward pass and return recognition result.\n```", + "output": "I have re-implemented my API in this way, please check it. Thank you for your help." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/model.cpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n**Review Line:** 512\n\n**Code Changes:**\n```diff\n+\n+ inline\n+ void setVocabulary(const std::vector& inputVoc)\n+ {\n+ vocabulary = inputVoc;\n+ }\n+\n+ inline\n+ void setDecodeType(const std::string& type)\n+ {\n+ decodeType = type;\n```", + "output": "We need to initialize the `vocabulary` with the input `voc`, so I think `assign` is a better choice than `insert`. Please check it and let me know if it is not good. Thank you for your help." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/scene_text_detection.cpp\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,151 @@\n+#include \n+#include \n+#include \n+\n+#include \n+#include \n+#include \n+\n+using namespace cv;\n```", + "output": "@vpisarev Please take a look on it. Thank you for your suggestions in advance." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/scene_text_detection.cpp\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,151 @@\n+#include \n+#include \n+#include \n+\n+#include \n+#include \n+#include \n+\n+using namespace cv;\n```", + "output": "@alalek I have talked with Vadim by email, and I have removed the patent information." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -1309,6 +1309,255 @@ CV__DNN_INLINE_NS_BEGIN\n float confThreshold = 0.5f, float nmsThreshold = 0.0f);\n };\n \n+\n+/** @brief This class represents high-level API for text recognition networks.\n+ *\n+ * TextRecognitionModel allows to set params for preprocessing input image.\n+ * TextRecognitionModel creates net from file with trained weights and config,\n+ * sets preprocessing input, runs forward pass and return recognition result.\n```", + "output": "> decodeType This is algorithm / model parameter. (it is not intended to be changed between multiple \"recognize\" calls for different frames). Must be moved into constructor / `::create()` methods and `.get` (may be `.set`) properties. --- /cc @dkurt @l-bat @vpisarev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/scene_text_spotting.cpp\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,169 @@\n+#include \n+#include \n+\n+#include \n+#include \n+#include \n+\n+using namespace cv;\n+using namespace cv::dnn;\n```", + "output": "@alalek I think this function can not sort the points into the right order (bottomLeft, topLeft, topRight, bottomRight) in some cases. > -i=path/to/evaluation_data_det/IC15/test_images/img_5.jpg -dmp=path/to/DB_IC15_resnet50.onnx -iw=1280 -ih=736 -rmp=path/to/crnn_cs.onnx -vp=path/to/alphabet_94.txt -rgb=1 By the above setting, you can reproduce the errors. 
You can find the test image [here](https://drive.google.com/uc?export=dowload&id=149tAhIcvfCYeyufRoZ9tmc2mZDKE_XrF)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/model.cpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -4,7 +4,6 @@\n \n #include \"precomp.hpp\"\n #include \n-#include \n #include \n #include \n \n@@ -37,9 +36,10 @@ struct Model::Impl\n virtual void setPreferableBackend(Backend backendId) { net.setPreferableBackend(backendId); }\n```", + "output": "> 3-j Please check this part of code too (do we really need to reorder points?)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/scene_text_spotting.cpp\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,169 @@\n+#include \n+#include \n+\n+#include \n+#include \n+#include \n+\n+using namespace cv;\n+using namespace cv::dnn;\n```", + "output": "`minAreaRect()` may return boxes with angle -90. Please check latest commit with workarounds." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/dnn/dnn_text_spotting/dnn_text_spotting.markdown\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n**Review Line:** 291\n\n**Code Changes:**\n```diff\n+\n+#### Test with an image\n+Examples:\n+```bash\n+example_dnn_scene_text_recognition -mp=path/to/crnn_cs.onnx -i=path/to/an/image -rgb=1 -vp=/path/to/alphabet_94.txt\n+example_dnn_scene_text_detection -mp=path/to/DB_TD500_resnet50.onnx -i=path/to/an/image -ih=736 -iw=736\n+example_dnn_scene_text_spotting -dmp=path/to/DB_IC15_resnet50.onnx -rmp=path/to/crnn_cs.onnx -i=path/to/an/image -iw=1280 -ih=736 -rgb=1 -vp=/path/to/alphabet_94.txt\n+example_dnn_text_detection -dmp=path/to/EAST.pb -rmp=path/to/crnn_cs.onnx -i=path/to/an/image -rgb=1 -vp=path/to/alphabet_94.txt\n+```\n+\n+#### Test on public datasets\n```", + "output": "> -mp=path/to/DB_TD500_resnet50.onnx > -ih=736 -iw=736 Please check model parameters here (and above near model download links). This set performs better: `-ih=736 -iw=1280` (on `IC15/test_images/img_5.jpg`) BTW, it makes sense to put some defaults into `TextDetectionModel_DB` ctor." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/model.cpp\n**Change Type:** modified\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n\n**Code Changes:**\n```diff\n@@ -4,7 +4,6 @@\n \n #include \"precomp.hpp\"\n #include \n-#include \n #include \n #include \n \n@@ -37,9 +36,10 @@ struct Model::Impl\n virtual void setPreferableBackend(Backend backendId) { net.setPreferableBackend(backendId); }\n```", + "output": "@HannibalAPE Could you please check preserving order of vertexes in `unclip()` call?"
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/dnn/dnn_text_spotting/dnn_text_spotting.markdown\n**Change Type:** added\n**Context:** PR #17570: [GSoC] High Level API and Samples for Scene Text Detection and Recognition\n**Review Line:** 291\n\n**Code Changes:**\n```diff\n+\n+#### Test with an image\n+Examples:\n+```bash\n+example_dnn_scene_text_recognition -mp=path/to/crnn_cs.onnx -i=path/to/an/image -rgb=1 -vp=/path/to/alphabet_94.txt\n+example_dnn_scene_text_detection -mp=path/to/DB_TD500_resnet50.onnx -i=path/to/an/image -ih=736 -iw=736\n+example_dnn_scene_text_spotting -dmp=path/to/DB_IC15_resnet50.onnx -rmp=path/to/crnn_cs.onnx -i=path/to/an/image -iw=1280 -ih=736 -rgb=1 -vp=/path/to/alphabet_94.txt\n+example_dnn_text_detection -dmp=path/to/EAST.pb -rmp=path/to/crnn_cs.onnx -i=path/to/an/image -rgb=1 -vp=path/to/alphabet_94.txt\n+```\n+\n+#### Test on public datasets\n```", + "output": "Thanks for explanation. That makes sense. > I can train a new model This can be an activity after this PR merge." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core.hpp\n**Change Type:** modified\n**Context:** PR #20733: Implement ArgMax and ArgMin\n\n**Code Changes:**\n```diff\n@@ -819,12 +819,45 @@ mixChannels , or split .\n @param minLoc pointer to the returned minimum location (in 2D case); NULL is used if not required.\n @param maxLoc pointer to the returned maximum location (in 2D case); NULL is used if not required.\n @param mask optional mask used to select a sub-array.\n-@sa max, min, compare, inRange, extractImageCOI, mixChannels, split, Mat::reshape\n+@sa max, min, reduceArgMin, reduceArgMax, compare, inRange, extractImageCOI, mixChannels, split, Mat::reshape\n */\n CV_EXPORTS_W void minMaxLoc(InputArray src, CV_OUT double* minVal,\n CV_OUT double* maxVal = 0, CV_OUT Point* minLoc = 0,\n CV_OUT Point* maxLoc = 0, InputArray mask = noArray());\n```", + "output": "@asmorkalov Lets discuss this on Friday" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/recurrent_layers.cpp\n**Change Type:** modified\n**Context:** PR #20658: dnn: LSTM optimisation\n**Review Line:** 362\n\n**Code Changes:**\n```diff\n@@ -343,6 +358,15 @@ class LSTMLayerImpl CV_FINAL : public LSTMLayer\n hOutTs = hOutTs.colRange(i * hOutTs.cols / numDirs, (i + 1) * hOutTs.cols / numDirs);\n Mat cOutTs = produceCellOutput ? 
output[1].reshape(1, numSamplesTotal) : Mat();\n \n+#if CV_TRY_AVX2 || CV_TRY_AVX\n+ bool canUseAvx = gates.isContinuous() && bias.isContinuous()\n+ && Wx.depth() == CV_32F && gates.depth() == CV_32F\n+ && bias.depth() == CV_32F && Wx.cols >= 8;\n+ bool canUseAvx_hInternal = hInternal.isContinuous() && gates.isContinuous() && bias.isContinuous()\n+ && Wh.depth() == CV_32F && hInternal.depth() == CV_32F && gates.depth() == CV_32F\n+ && Wh.cols >= 8;\n```", + "output": "looks good, thank you. please add the same for lines 414 and 429 below" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/pooling_layer.cpp\n**Change Type:** modified\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -46,6 +46,7 @@\n #include \"../op_cuda.hpp\"\n #include \"../op_halide.hpp\"\n #include \"../op_inf_engine.hpp\"\n+#include \"../op_webnn.hpp\"\n \n #ifdef HAVE_DNN_NGRAPH\n #include \"../ie_ngraph.hpp\"\n@@ -85,6 +86,7 @@ typedef int HALIDE_DIFF_T;\n #include \"../cuda4dnn/primitives/max_unpooling.hpp\"\n```", + "output": "The indent of above lines seem not correct. Please check the code format checker in the bot." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/batch_norm_layer.cpp\n**Change Type:** modified\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -15,6 +15,7 @@ Implementation of Batch Normalization layer.\n #include \"../op_halide.hpp\"\n #include \"../op_inf_engine.hpp\"\n #include \"../ie_ngraph.hpp\"\n+#include \"../op_webnn.hpp\"\n \n #include \n \n@@ -172,6 +173,7 @@ class BatchNormLayerImpl CV_FINAL : public BatchNormLayer\n return (backendId == DNN_BACKEND_OPENCV) ||\n```", + "output": "Thanks! I have pushed a new commit to improve this." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/op_webnn.hpp\n**Change Type:** added\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,171 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#ifndef __OPENCV_DNN_OP_WEBNN_HPP__\n+#define __OPENCV_DNN_OP_WEBNN_HPP__\n+\n+#include \"opencv2/core/cvdef.h\"\n+#include \"opencv2/core/cvstd.hpp\"\n```", + "output": "Thanks! I have pushed a new commit to improve this." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/dnn.cpp\n**Change Type:** modified\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@\n #include \"ie_ngraph.hpp\"\n #include \"op_vkcom.hpp\"\n #include \"op_cuda.hpp\"\n+#include \"op_webnn.hpp\"\n \n #ifdef HAVE_CUDA\n #include \"cuda4dnn/init.hpp\"\n@@ -224,6 +225,13 @@ class BackendRegistry\n #endif\n```", + "output": "It seems that this is a test version. I have fixed this. Thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/dnn.cpp\n**Change Type:** modified\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@\n #include \"ie_ngraph.hpp\"\n #include \"op_vkcom.hpp\"\n #include \"op_cuda.hpp\"\n+#include \"op_webnn.hpp\"\n \n #ifdef HAVE_CUDA\n #include \"cuda4dnn/init.hpp\"\n@@ -224,6 +225,13 @@ class BackendRegistry\n #endif\n```", + "output": "This has been fixed now. Some building errors are also fixed. Thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/op_webnn.hpp\n**Change Type:** added\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,171 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#ifndef __OPENCV_DNN_OP_WEBNN_HPP__\n+#define __OPENCV_DNN_OP_WEBNN_HPP__\n+\n+#include \"opencv2/core/cvdef.h\"\n+#include \"opencv2/core/cvstd.hpp\"\n```", + "output": "Thanks. This is fixed in the new commit." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/op_webnn.hpp\n**Change Type:** added\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,171 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#ifndef __OPENCV_DNN_OP_WEBNN_HPP__\n+#define __OPENCV_DNN_OP_WEBNN_HPP__\n+\n+#include \"opencv2/core/cvdef.h\"\n+#include \"opencv2/core/cvstd.hpp\"\n```", + "output": "Thanks! I have removed this from op_webnn.cpp & op_webnn.hpp." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectWebNN.cmake\n**Change Type:** added\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,49 @@\n+if(NOT EMSCRIPTEN)\n+ if(WITH_WEBNN)\n+ ocv_check_environment_variables(WEBNN_HEADER_DIRS)\n+ ocv_check_environment_variables(WEBNN_INCLUDE_DIRS)\n+ ocv_check_environment_variables(WEBNN_LIBRARIES)\n+ if(NOT DEFINED WEBNN_HEADER_DIRS)\n+ set(WEBNN_HEADER_DIRS \"$ENV{WEBNN_NATIVE_DIR}/gen/src/include\")\n+ endif()\n+ if(NOT DEFINED WEBNN_INCLUDE_DIRS)\n```", + "output": "Thanks for your comment. I have fixed it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectWebNN.cmake\n**Change Type:** added\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,49 @@\n+if(NOT EMSCRIPTEN)\n+ if(WITH_WEBNN)\n+ ocv_check_environment_variables(WEBNN_HEADER_DIRS)\n+ ocv_check_environment_variables(WEBNN_INCLUDE_DIRS)\n+ ocv_check_environment_variables(WEBNN_LIBRARIES)\n+ if(NOT DEFINED WEBNN_HEADER_DIRS)\n+ set(WEBNN_HEADER_DIRS \"$ENV{WEBNN_NATIVE_DIR}/gen/src/include\")\n+ endif()\n+ if(NOT DEFINED WEBNN_INCLUDE_DIRS)\n```", + "output": "Thanks for your suggestion. More logs have been added." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/webnn/README.md\n**Change Type:** added\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,11 @@\n+## Build Instructions\n+\n+### Build WebNN-native and set the environment variable\n+\n+Refer to [WebNN's build instructions](https://github.com/webmachinelearning/webnn-native) to complete the build of WebNN-native.\n+\n+Set environment variable `WEBNN_NATIVE_DIR` to enable native DNN_BACKEND_WEBNN build: `export WEBNN_NATIVE_DIR=${PATH_TO_WebNN}`. Please let `WEBNN_NATIVE_DIR` points the output directory of webnn-native build (e.g. webnn-native/out/Release).\n+\n+### Test native DNN_BACKEND_WEBNN backend\n```", + "output": "Hi @alalek, I changed this link but I'm not sure the new link is as your expectation, so please have a look. Thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/webnn/README.md\n**Change Type:** added\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,11 @@\n+## Build Instructions\n+\n+### Build WebNN-native and set the environment variable\n+\n+Refer to [WebNN's build instructions](https://github.com/webmachinelearning/webnn-native) to complete the build of WebNN-native.\n+\n+Set environment variable `WEBNN_NATIVE_DIR` to enable native DNN_BACKEND_WEBNN build: `export WEBNN_NATIVE_DIR=${PATH_TO_WebNN}`. Please let `WEBNN_NATIVE_DIR` points the output directory of webnn-native build (e.g. 
webnn-native/out/Release).\n+\n+### Test native DNN_BACKEND_WEBNN backend\n```", + "output": "Please use `@ref tutorial_linux_install`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/webnn/README.md\n**Change Type:** added\n**Context:** PR #20406: [GSoC] OpenCV.js: Accelerate OpenCV.js DNN via WebNN - ### Overview Proposal: OpenCV.js: Accelerate OpenCV.js DNN via WebNN Mentor: Ningxin...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,11 @@\n+## Build Instructions\n+\n+### Build WebNN-native and set the environment variable\n+\n+Refer to [WebNN's build instructions](https://github.com/webmachinelearning/webnn-native) to complete the build of WebNN-native.\n+\n+Set environment variable `WEBNN_NATIVE_DIR` to enable native DNN_BACKEND_WEBNN build: `export WEBNN_NATIVE_DIR=${PATH_TO_WebNN}`. Please let `WEBNN_NATIVE_DIR` points the output directory of webnn-native build (e.g. webnn-native/out/Release).\n+\n+### Test native DNN_BACKEND_WEBNN backend\n```", + "output": "Thanks! I have modified this in the new commit." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/misc/python/test/test_dnn.py\n**Change Type:** modified\n**Context:** PR #20480: Add Python's test for LSTM layer\n**Review Line:** 331\n\n**Code Changes:**\n```diff\n+ net = cv.dnn.readNet(model)\n+ input = np.load(input_file)\n+ # we have to expand the shape of input tensor because Python bindings cut 3D tensors to 2D\n+ # it should be fixed in future. 
see : https://github.com/opencv/opencv/issues/19091\n+ # please remove `expand_dims` after that\n+ input = np.expand_dims(input, axis=3)\n+ gold_output = np.load(output_file)\n+ net.setInput(input)\n+\n+ for backend, target in self.dnnBackendsAndTargets:\n+ printParams(backend, target)\n```", + "output": "Thanks. Updated." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/system.cpp\n**Change Type:** modified\n**Context:** PR #20900: fix: correctly check neon flags for QNX platform\n\n**Code Changes:**\n```diff\n@@ -129,7 +129,7 @@ void* allocSingletonNewBuffer(size_t size) { return malloc(size); }\n #if defined __ANDROID__ || defined __unix__ || defined __FreeBSD__ || defined __OpenBSD__ || defined __HAIKU__\n # include \n # include \n-#if defined __QNXNTO__\n+#if defined __QNX__\n # include \n #else\n # include \n@@ -545,7 +545,7 @@ struct HWFeatures\n```", + "output": "@asmaloney Thank you for pointing this out. I saw there is `__QNXNTO__` before I make this PR (in system.cpp, line 132), and I didn't know what does `NTO` mean until you mentioned, thus I followed that. Now I change the two `__QNXNTO__` to the more generic one `__QNX__`, hope this works." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/five-point.cpp\n**Change Type:** modified\n**Context:** PR #20636: Recover pose from different cameras (version 2)\n\n**Code Changes:**\n```diff\n@@ -401,6 +401,29 @@ class EMEstimatorCallback CV_FINAL : public PointSetRegistrator::Callback\n }\n };\n \n+// Find essential matrix given undistorted points and two cameras.\n+static Mat findEssentialMat_( InputArray _points1, InputArray _points2,\n+ InputArray cameraMatrix1, InputArray cameraMatrix2,\n+ int method, double prob, double threshold, OutputArray _mask)\n+{\n+ // Scale the points back. We use \"arithmetic mean\" between the supplied two camera matrices.\n```", + "output": "Thanks for pointing this out. I created a new private api for findEssentialMat, which takes undistorted points and two cameras. The new findEssentialMat api is used by the existing findEssentialMat and new recoverPose apis. Please let me know of any concerns with this approach." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/src/face_detect.cpp\n**Change Type:** added\n**Context:** PR #20422: Add DNN-based face detection and face recognition into modules/objdetect\n**Review Line:** 174\n\n**Code Changes:**\n```diff\n+ }\n+ }\n+ }\n+ }\n+\n+ Mat postProcess(const std::vector& output_blobs)\n+ {\n+ // Extract from output_blobs\n+ Mat loc = output_blobs[0];\n+ Mat conf = output_blobs[1];\n+ Mat iou = output_blobs[2];\n```", + "output": "Well, after thinking about it for some time I tend to agree with @alalek; I suggest to give prof. Yu the credit and rename the class in .hpp to `FaceDetectorYN` (short for YuNet). We could still use another, user-provided model, but we will leave space for some further hypothetical `FaceDetector*` classes." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/batch_norm_layer.cpp\n**Change Type:** modified\n**Context:** PR #20228: 8-bit quantization in dnn module and int8 layers\n\n**Code Changes:**\n```diff\n@@ -409,6 +409,18 @@ class BatchNormLayerImpl CV_FINAL : public BatchNormLayer\n }\n #endif // HAVE_DNN_NGRAPH\n \n+ virtual bool tryQuantize(const std::vector > &scales,\n+ const std::vector > &zeropoints, LayerParams& params) CV_OVERRIDE\n+ {\n+ params.set(\"input_scale\", scales[0][0]);\n+ params.set(\"input_zeropoint\", zeropoints[0][0]);\n+\n```", + "output": "@SamFC10, the compatibility is the first priority, so we need to find some method to provide the information that you need while preserving compatibility. For example, prior to copying blobs to weights_ and bias_ we can save the previous state to some other internal arrays." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/misc/java/src/java/core+Mat.java\n**Change Type:** modified\n**Context:** PR #20731: Add matrix multiplication to Mat for iOS/Android\n\n**Code Changes:**\n```diff\n@@ -466,16 +466,32 @@ public void locateROI(Size wholeSize, Point ofs) {\n // C++: Mat Mat::mul(Mat m, double scale = 1)\n //\n \n- // javadoc: Mat::mul(m, scale)\n+ /**\n+ * Element-wise multiplication with scale factor\n+ * @param m operand with with which to perform element-wise multiplication\n+ * @param scale scale factor\n+ */\n```", + "output": "\"GEMM\" name comes from [BLAS](https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms). 
> Also the name gemm makes it difficult to find BTW, - numpy uses [matmul](https://numpy.org/doc/stable/reference/generated/numpy.matmul.html) name - MATLAB uses [mtimes](https://www.mathworks.com/help/matlab/ref/mtimes.html) Perhaps I would suggest to replace `matrixMul` -> `matMul` as OpenCV already have `cv::Mat` instead of `cv::Matrix`. /cc @vpisarev @asmorkalov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/filter.simd.hpp\n**Change Type:** modified\n**Context:** PR #20712: Add RowVec_8u32f\n\n**Code Changes:**\n```diff\n@@ -465,6 +465,49 @@ struct RowVec_8u32s\n bool smallValues;\n };\n \n+struct RowVec_8u32f\n+{\n+ RowVec_8u32f() {}\n+ RowVec_8u32f( const Mat& _kernel ) : kernel(_kernel) {}\n+\n+ int operator()(const uchar* _src, uchar* _dst, int width, int cn) const\n```", + "output": "Thank you for looking at my code and giving me suggestion! Code was compiled with g++ 7.5.0 with -O3 flag. Regarding the use of `v_expand()`, in my new commit I have replaced it with `vx_load_expand_q(...)` just as you suggested, so there shouldn't be the need to use `v_expand()` anymore. Please let me know if that looks good to you." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/filter.simd.hpp\n**Change Type:** modified\n**Context:** PR #20712: Add RowVec_8u32f\n**Review Line:** 495\n\n**Code Changes:**\n```diff\n+ v_float32 f = vx_setall_f32(_kx[k]);\n+ const uchar* src = (const uchar*)_src + i + k * cn;\n+ v_float32 vs_ll = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src)));\n+ v_float32 vs_lh = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src + v_float32::nlanes)));\n+ v_float32 vs_hl = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src + 2*v_float32::nlanes)));\n+ v_float32 vs_hh = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src + 3*v_float32::nlanes)));\n+ s0 = v_muladd(vs_ll, f, s0);\n+ s1 = v_muladd(vs_lh, f, s1);\n+ s2 = v_muladd(vs_hl, f, s2);\n+ s3 = v_muladd(vs_hh, f, s3);\n+ }\n```", + "output": "@Nicholas-Ho-arm - @jondea noticed that this change might not be optimal. Have you re-run the benchmark before replacing the single load + 6 expand with 4 loads + 4 expands? I guess it should not matter much, because the 4 sequences (load + expand) can be issued in parallel, while the previous version needs to do 1 load , than 2 parallel expands, and finally 4 parallel expands. What we could do maybe, to increase the level of parallelism of the original version, is to load 2 (or maybe 4) consecutive `v_uint8` from `src` to increase the data being processed by the loop? @jondea / @alalek / @asmorkalov - any opinion here?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/filter.simd.hpp\n**Change Type:** modified\n**Context:** PR #20712: Add RowVec_8u32f\n**Review Line:** 495\n\n**Code Changes:**\n```diff\n+ v_float32 f = vx_setall_f32(_kx[k]);\n+ const uchar* src = (const uchar*)_src + i + k * cn;\n+ v_float32 vs_ll = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src)));\n+ v_float32 vs_lh = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src + v_float32::nlanes)));\n+ v_float32 vs_hl = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src + 2*v_float32::nlanes)));\n+ v_float32 vs_hh = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src + 3*v_float32::nlanes)));\n+ s0 = v_muladd(vs_ll, f, s0);\n+ s1 = v_muladd(vs_lh, f, s1);\n+ s2 = v_muladd(vs_hl, f, s2);\n+ s3 = v_muladd(vs_hh, f, s3);\n+ }\n```", + "output": "@fpetrogalli Thank you for your comment. I ran the same perf tests after and noticed that there was practically no difference in the numbers so performance wise, I think they are equivalent.
Perf tests results with `vx_load_expand_q`: |Name of Test|PR time|Reference time|x-factor| |---|:-:|:-:|:-:| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (0, 1), BORDER_CONSTANT)|0.027|0.027|0.99| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (0, 1), BORDER_REPLICATE)|0.027|0.027|1.00| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (0, 2), BORDER_CONSTANT)|0.029|0.029|0.99| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (0, 2), BORDER_REPLICATE)|0.028|0.028|0.99| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (1, 0), BORDER_CONSTANT)|0.028|0.028|0.99| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (1, 0), BORDER_REPLICATE)|0.027|0.027|1.00| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (1, 1), BORDER_CONSTANT)|0.026|0.026|1.00| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (1, 1), BORDER_REPLICATE)|0.025|0.025|1.00| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (2, 0), BORDER_CONSTANT)|0.029|0.029|1.00| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (2, 0), BORDER_REPLICATE)|0.028|0.028|1.01| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (2, 2), BORDER_CONSTANT)|0.029|0.029|0.99| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 16SC1, (2, 2), BORDER_REPLICATE)|0.028|0.028|1.00| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (0, 1), BORDER_CONSTANT)|0.035|0.044|0.79| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (0, 1), BORDER_REPLICATE)|0.034|0.043|0.80| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (0, 2), BORDER_CONSTANT)|0.036|0.045|0.80| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (0, 2), BORDER_REPLICATE)|0.035|0.044|0.80| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (1, 0), BORDER_CONSTANT)|0.037|0.045|0.82| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (1, 0), BORDER_REPLICATE)|0.036|0.045|0.80| 
|sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (1, 1), BORDER_CONSTANT)|0.035|0.044|0.79| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (1, 1), BORDER_REPLICATE)|0.034|0.043|0.80| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (2, 0), BORDER_CONSTANT)|0.036|0.045|0.80| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (2, 0), BORDER_REPLICATE)|0.035|0.044|0.80| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (2, 2), BORDER_CONSTANT)|0.036|0.045|0.81| |sobelFilter::Size_MatType_dx_dy_Border3x3::(127x61, 32FC1, (2, 2), BORDER_REPLICATE)|0.036|0.044|0.80| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (0, 1), BORDER_CONSTANT)|0.117|0.116|1.01| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (0, 1), BORDER_REPLICATE)|0.116|0.114|1.02| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (0, 2), BORDER_CONSTANT)|0.138|0.137|1.01| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (0, 2), BORDER_REPLICATE)|0.136|0.133|1.02| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (1, 0), BORDER_CONSTANT)|0.126|0.124|1.02| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (1, 0), BORDER_REPLICATE)|0.124|0.123|1.01| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (1, 1), BORDER_CONSTANT)|0.108|0.106|1.02| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (1, 1), BORDER_REPLICATE)|0.107|0.105|1.02| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (2, 0), BORDER_CONSTANT)|0.139|0.137|1.01| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (2, 0), BORDER_REPLICATE)|0.138|0.134|1.03| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (2, 2), BORDER_CONSTANT)|0.140|0.136|1.03| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 16SC1, (2, 2), BORDER_REPLICATE)|0.139|0.135|1.04| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (0, 1), BORDER_CONSTANT)|0.220|0.322|0.68| 
|sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (0, 1), BORDER_REPLICATE)|0.219|0.320|0.68| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (0, 2), BORDER_CONSTANT)|0.241|0.347|0.69| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (0, 2), BORDER_REPLICATE)|0.237|0.346|0.69| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (1, 0), BORDER_CONSTANT)|0.239|0.341|0.70| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (1, 0), BORDER_REPLICATE)|0.239|0.340|0.70| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (1, 1), BORDER_CONSTANT)|0.219|0.326|0.67| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (1, 1), BORDER_REPLICATE)|0.219|0.320|0.69| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (2, 0), BORDER_CONSTANT)|0.240|0.348|0.69| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (2, 0), BORDER_REPLICATE)|0.238|0.345|0.69| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (2, 2), BORDER_CONSTANT)|0.246|0.348|0.71| |sobelFilter::Size_MatType_dx_dy_Border3x3::(320x240, 32FC1, (2, 2), BORDER_REPLICATE)|0.240|0.346|0.69| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (0, 1), BORDER_CONSTANT)|0.403|0.410|0.99| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (0, 1), BORDER_REPLICATE)|0.416|0.404|1.03| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (0, 2), BORDER_CONSTANT)|0.481|0.475|1.01| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (0, 2), BORDER_REPLICATE)|0.482|0.473|1.02| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (1, 0), BORDER_CONSTANT)|0.446|0.439|1.02| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (1, 0), BORDER_REPLICATE)|0.444|0.442|1.00| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (1, 1), BORDER_CONSTANT)|0.382|0.365|1.05| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (1, 1), BORDER_REPLICATE)|0.365|0.368|0.99| 
|sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (2, 0), BORDER_CONSTANT)|0.479|0.473|1.01| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (2, 0), BORDER_REPLICATE)|0.487|0.470|1.04| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (2, 2), BORDER_CONSTANT)|0.481|0.476|1.01| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 16SC1, (2, 2), BORDER_REPLICATE)|0.477|0.475|1.00| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (0, 1), BORDER_CONSTANT)|0.823|1.237|0.67| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (0, 1), BORDER_REPLICATE)|0.828|1.232|0.67| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (0, 2), BORDER_CONSTANT)|0.909|1.324|0.69| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (0, 2), BORDER_REPLICATE)|0.896|1.324|0.68| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (1, 0), BORDER_CONSTANT)|0.899|1.327|0.68| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (1, 0), BORDER_REPLICATE)|0.906|1.320|0.69| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (1, 1), BORDER_CONSTANT)|0.821|1.236|0.66| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (1, 1), BORDER_REPLICATE)|0.817|1.237|0.66| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (2, 0), BORDER_CONSTANT)|0.908|1.325|0.69| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (2, 0), BORDER_REPLICATE)|0.904|1.326|0.68| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (2, 2), BORDER_CONSTANT)|0.907|1.325|0.68| |sobelFilter::Size_MatType_dx_dy_Border3x3::(640x480, 32FC1, (2, 2), BORDER_REPLICATE)|0.906|1.321|0.69|
" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/filter.simd.hpp\n**Change Type:** modified\n**Context:** PR #20712: Add RowVec_8u32f\n**Review Line:** 495\n\n**Code Changes:**\n```diff\n+ v_float32 f = vx_setall_f32(_kx[k]);\n+ const uchar* src = (const uchar*)_src + i + k * cn;\n+ v_float32 vs_ll = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src)));\n+ v_float32 vs_lh = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src + v_float32::nlanes)));\n+ v_float32 vs_hl = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src + 2*v_float32::nlanes)));\n+ v_float32 vs_hh = v_cvt_f32(v_reinterpret_as_s32(vx_load_expand_q(src + 3*v_float32::nlanes)));\n+ s0 = v_muladd(vs_ll, f, s0);\n+ s1 = v_muladd(vs_lh, f, s1);\n+ s2 = v_muladd(vs_hl, f, s2);\n+ s3 = v_muladd(vs_hh, f, s3);\n+ }\n```", + "output": "@Nicholas-Ho-arm - thank you for confirming! :)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dnn_model_runner/dnn_conversion/paddlepaddle/README.md\n**Change Type:** modified\n**Context:** PR #20290: Add paddle humanseg demo\n**Review Line:** 1\n\n**Code Changes:**\n```diff\n@@ -1,6 +1,6 @@\n-# Run PaddlePaddle model by OpenCV\n+# Run PaddlePaddle model using OpenCV\n \n-This tutorial shows how to run PaddlePaddle model by opencv.\n+These two demonstrations show how to inference PaddlePaddle model using OpenCV.\n \n ## Environment Setup\n```", + "output": "/cc @jiangjiajun As previous contributor please take a quick look on updates of PaddlePaddle samples." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dnn_model_runner/dnn_conversion/paddlepaddle/README.md\n**Change Type:** modified\n**Context:** PR #20290: Add paddle humanseg demo\n**Review Line:** 1\n\n**Code Changes:**\n```diff\n@@ -1,6 +1,6 @@\n-# Run PaddlePaddle model by OpenCV\n+# Run PaddlePaddle model using OpenCV\n \n-This tutorial shows how to run PaddlePaddle model by opencv.\n+These two demonstrations show how to inference PaddlePaddle model using OpenCV.\n \n ## Environment Setup\n```", + "output": "I have reviewed this change before, @wjj19950828 fix some grammer problems for my pull request" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_ffmpeg_impl.hpp\n**Change Type:** modified\n**Context:** PR #20591: Add CAP_PROP_STREAM_OPEN_TIME - I added CAP_PROP_STREAM_OPEN_TIME to the capture properties enumeration and support for it in the `getPrope...\n\n**Code Changes:**\n```diff\n@@ -1591,6 +1591,9 @@ double CvCapture_FFMPEG::getProperty( int property_id ) const\n case CAP_PROP_HW_ACCELERATION_USE_OPENCL:\n return static_cast(use_opencl);\n #endif // USE_AV_HW_CODECS\n+ case CAP_PROP_STREAM_OPEN_TIME_USEC:\n+ //ic->start_time_realtime is in microseconds\n+ return ((double)ic->start_time_realtime);\n default:\n break;\n }\n```", + "output": "> do I need to do something on this PR No. This PR is merged. Thank you for contribution! See here about automatic local whitespace checks: https://github.com/opencv/opencv/wiki/How_to_contribute#q3-i-was-asked-to-remove-whitespace-issues-how-can-i-do-that" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/color_yuv.simd.hpp\n**Change Type:** modified\n**Context:** PR #19554: different paddings in cvtColorTwoPlane() for biplane YUV420\n**Review Line:** 1403\n\n**Code Changes:**\n```diff\n+ const uchar* _y1, size_t _y1_step, const uchar* _uv, size_t _uv_step)\n {\n- YUV420sp2RGB8Invoker converter(dst_data, dst_step, dst_width, _stride, _y1, _uv);\n+ YUV420sp2RGB8Invoker converter(dst_data, dst_step, dst_width, _y1, _y1_step, _uv, _uv_step);\n if (dst_width * dst_height >= MIN_SIZE_FOR_PARALLEL_YUV420_CONVERSION)\n parallel_for_(Range(0, dst_height/2), converter);\n else\n@@ -1817,26 +1818,16 @@ void cvtYUVtoBGR(const uchar * src_data, size_t src_step,\n CvtColorLoop(src_data, src_step, dst_data, dst_step, width, height, YCrCb2RGB_f(dcn, blueIdx, isCbCr));\n }\n \n```", + "output": "> `Y` plane must have even width and height values (see 420 format). Noted with thanks :)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** 3rdparty/carotene/hal/tegra_hal.hpp\n**Change Type:** modified\n**Context:** PR #19554: different paddings in cvtColorTwoPlane() for biplane YUV420\n\n**Code Changes:**\n```diff\n@@ -1778,60 +1778,63 @@ TegraCvtColor_Invoker(bgrx2hsvf, bgrx2hsv, src_data + static_cast(range.\n : CV_HAL_ERROR_NOT_IMPLEMENTED \\\n )\n \n-#define TEGRA_CVT2PYUVTOBGR(src_data, src_step, dst_data, dst_step, dst_width, dst_height, dcn, swapBlue, uIdx) \\\n+#define TEGRA_CVT2PYUVTOBGR_EX(y_data, y_step, uv_data, uv_step, dst_data, dst_step, dst_width, dst_height, dcn, swapBlue, uIdx) \\\n ( \\\n CAROTENE_NS::isSupportedConfiguration() ? \\\n dcn == 3 ? \\\n uIdx == 0 ? \\\n```", + "output": "HAL interface functions should not have overloads. Because used processor macros don't support overloading. 
So new name should be added for implementation with new parameters/behavior. (Check is following: code should compile and work without \"carotene\" changes) /cc @mshabunin @vpisarev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/include/opencv2/videoio.hpp\n**Change Type:** modified\n**Context:** PR #16766: feature: Extend VideoWriter to accept vector of parameters\n\n**Code Changes:**\n```diff\n@@ -189,7 +189,9 @@ enum VideoCaptureProperties {\n enum VideoWriterProperties {\n VIDEOWRITER_PROP_QUALITY = 1, //!< Current quality (0..100%) of the encoded videostream. Can be adjusted dynamically in some codecs.\n VIDEOWRITER_PROP_FRAMEBYTES = 2, //!< (Read-only): Size of just encoded video frame. Note that the encoding order may be different from representation order.\n- VIDEOWRITER_PROP_NSTRIPES = 3 //!< Number of stripes for parallel encoding. -1 for auto detection.\n+ VIDEOWRITER_PROP_NSTRIPES = 3, //!< Number of stripes for parallel encoding. -1 for auto detection.\n+ VIDEOWRITER_PROP_IS_COLOR = 4 //!< If it is not zero, the encoder will expect and encode color frames, otherwise it\n+ //!< will work with grayscale frames.\n };\n \n```", + "output": "* @param params Encoder parameters encoded as pairs (paramId_1, paramValue_1, paramId_2, paramValue_2,paramId_3, paramValue_3,... )" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/include/opencv2/videoio.hpp\n**Change Type:** modified\n**Context:** PR #16766: feature: Extend VideoWriter to accept vector of parameters\n\n**Code Changes:**\n```diff\n@@ -189,7 +189,9 @@ enum VideoCaptureProperties {\n enum VideoWriterProperties {\n VIDEOWRITER_PROP_QUALITY = 1, //!< Current quality (0..100%) of the encoded videostream. Can be adjusted dynamically in some codecs.\n VIDEOWRITER_PROP_FRAMEBYTES = 2, //!< (Read-only): Size of just encoded video frame. Note that the encoding order may be different from representation order.\n- VIDEOWRITER_PROP_NSTRIPES = 3 //!< Number of stripes for parallel encoding. -1 for auto detection.\n+ VIDEOWRITER_PROP_NSTRIPES = 3, //!< Number of stripes for parallel encoding. -1 for auto detection.\n+ VIDEOWRITER_PROP_IS_COLOR = 4 //!< If it is not zero, the encoder will expect and encode color frames, otherwise it\n+ //!< will work with grayscale frames.\n };\n \n```", + "output": "Thank you, got it!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #9114: add libdnn acceleration to dnn module - libdnn provide ocl acceleration to current dnn module. Currently it provide ocl kernels for 5 layers...\n\n**Code Changes:**\n```diff\n@@ -47,6 +47,10 @@\n #include \"opencv2/core/hal/intrin.hpp\"\n #include \n \n+#ifdef HAVE_OPENCL\n+using namespace cv::dnn::ocl4dnn;\n+#endif\n+\n namespace cv\n {\n```", + "output": "it's another piece of C++ 11 code. @alalek, do we check anywhere that the code is C++ 98 compliant?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/ocl.cpp\n**Change Type:** modified\n**Context:** PR #9114: add libdnn acceleration to dnn module - libdnn provide ocl acceleration to current dnn module. Currently it provide ocl kernels for 5 layers...\n\n**Code Changes:**\n```diff\n@@ -51,7 +51,10 @@\n #include \n #endif\n \n+#include \n+\n #include \"opencv2/core/ocl_genbase.hpp\"\n+#include \"opencl_kernels_core.hpp\"\n \n #define CV_OPENCL_ALWAYS_SHOW_BUILD_LOG 0\n```", + "output": "Nice catch! Thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/concat_layer.cpp\n**Change Type:** modified\n**Context:** PR #9114: add libdnn acceleration to dnn module - libdnn provide ocl acceleration to current dnn module. Currently it provide ocl kernels for 5 layers...\n**Review Line:** 179\n\n**Code Changes:**\n```diff\n@@ -174,11 +175,62 @@ class ConcatLayerImpl : public ConcatLayer\n }\n };\n \n+#ifdef HAVE_OPENCL\n+ bool forward_ocl(std::vector &inputs, std::vector &outputs, std::vector &internals)\n+ {\n+ CV_TRACE_FUNCTION();\n+ CV_TRACE_ARG_VALUE(name, \"name\", name.c_str());\n+\n+ int cAxis = clamp(axis, inputs[0]->dims);\n```", + "output": "Using `cv::Mat` in OpenCL code path is not a good practice. But probably this is a problem of dnn module interfaces itself. /cc @vpisarev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/ocl4dnn/src/ocl4dnn_conv_spatial.cpp\n**Change Type:** added\n**Context:** PR #9114: add libdnn acceleration to dnn module - libdnn provide ocl acceleration to current dnn module. 
Currently it provide ocl kernels for 5 layers...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1568 @@\n+/*M///////////////////////////////////////////////////////////////////////////////////////\n+//\n+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.\n+//\n+// By downloading, copying, installing or using the software you agree to this license.\n+// If you do not agree to this license, do not download, install,\n+// copy or use the software.\n+//\n+//\n```", + "output": "@alalek , I don't understand this comment, could you please elaborate more ?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/ocl4dnn/src/ocl4dnn_conv_spatial.cpp\n**Change Type:** added\n**Context:** PR #9114: add libdnn acceleration to dnn module - libdnn provide ocl acceleration to current dnn module. Currently it provide ocl kernels for 5 layers...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1568 @@\n+/*M///////////////////////////////////////////////////////////////////////////////////////\n+//\n+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.\n+//\n+// By downloading, copying, installing or using the software you agree to this license.\n+// If you do not agree to this license, do not download, install,\n+// copy or use the software.\n+//\n+//\n```", + "output": "Thanks for the info. we have fixed this in the update patch." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/ocl.cpp\n**Change Type:** modified\n**Context:** PR #9114: add libdnn acceleration to dnn module - libdnn provide ocl acceleration to current dnn module. 
Currently it provide ocl kernels for 5 layers...\n**Review Line:** 5263\n\n**Code Changes:**\n```diff\n+ clReleaseEvent(start_gpu_cl_);\n+ ocl::Kernel kernel(\"null_kernel_float\", ocl::core::benchmark_oclsrc);\n+ float arg = 0;\n+ clSetKernelArg((cl_kernel)kernel.ptr(), 0, sizeof(arg), &arg);\n+ clEnqueueTask((cl_command_queue)queue.ptr(), (cl_kernel)kernel.ptr(), 0,\n+ NULL, &start_gpu_cl_);\n+ clFinish((cl_command_queue)queue.ptr());\n+ running_ = true;\n+ has_run_at_least_once_ = true;\n+ }\n+#endif\n```", + "output": "Thanks for your advice, I will try this in next commit" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/semihosting/histogram/histogram.cpp\n**Change Type:** added\n**Context:** PR #20392: AArch64 semihosting\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,43 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+#include \n+#include \n+\n+#include \n+#include \n```", + "output": "@asmorkalov - I need some input here to be able to address your comment. Thank you! Francesco" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/utility.hpp\n**Change Type:** modified\n**Context:** PR #19985: [build][option] Introduce `OPENCV_DISABLE_THREAD_SUPPORT` option.\n**Review Line:** 717\n\n**Code Changes:**\n```diff\n@@ -714,9 +714,27 @@ void Mat::forEach_impl(const Functor& operation) {\n /////////////////////////// Synchronization Primitives ///////////////////////////////\n \n #if !defined(_M_CEE)\n+#ifndef OPENCV_DISABLE_THREAD_SUPPORT\n typedef std::recursive_mutex Mutex;\n typedef std::lock_guard AutoLock;\n-#endif\n+#else // OPENCV_DISABLE_THREAD_SUPPORT\n+// Custom (failing) implementation of `std::recursive_mutex`.\n```", + "output": "@alalek - I was wondering, should I turn these in `#if !OPENCV_DISABLE_THREAD_SUPPORT` and set the macro to 0 by default, like for the filesystem on? Francesco" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/utility.hpp\n**Change Type:** modified\n**Context:** PR #19985: [build][option] Introduce `OPENCV_DISABLE_THREAD_SUPPORT` option.\n**Review Line:** 717\n\n**Code Changes:**\n```diff\n@@ -714,9 +714,27 @@ void Mat::forEach_impl(const Functor& operation) {\n /////////////////////////// Synchronization Primitives ///////////////////////////////\n \n #if !defined(_M_CEE)\n+#ifndef OPENCV_DISABLE_THREAD_SUPPORT\n typedef std::recursive_mutex Mutex;\n typedef std::lock_guard AutoLock;\n-#endif\n+#else // OPENCV_DISABLE_THREAD_SUPPORT\n+// Custom (failing) implementation of `std::recursive_mutex`.\n```", + "output": "I though it was a requirement. If it is not required, I'll leave it as it is. Let me know if you think otherwise. Thank you! 
Francesco" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** CMakeLists.txt\n**Change Type:** modified\n**Context:** PR #19985: [build][option] Introduce `OPENCV_DISABLE_THREAD_SUPPORT` option.\n\n**Code Changes:**\n```diff\n@@ -512,6 +512,7 @@ OCV_OPTION(OPENCV_GENERATE_SETUPVARS \"Generate setup_vars* scripts\" ON IF (NOT\n OCV_OPTION(ENABLE_CONFIG_VERIFICATION \"Fail build if actual configuration doesn't match requested (WITH_XXX != HAVE_XXX)\" OFF)\n OCV_OPTION(OPENCV_ENABLE_MEMALIGN \"Enable posix_memalign or memalign usage\" ON)\n OCV_OPTION(OPENCV_DISABLE_FILESYSTEM_SUPPORT \"Disable filesystem support\" OFF)\n+OCV_OPTION(OPENCV_DISABLE_THREAD_SUPPORT \"Build the library without multi-threaded code.\" OFF)\n \n OCV_OPTION(ENABLE_PYLINT \"Add target with Pylint checks\" (BUILD_DOCS OR BUILD_EXAMPLES) IF (NOT CMAKE_CROSSCOMPILING AND NOT APPLE_FRAMEWORK) )\n OCV_OPTION(ENABLE_FLAKE8 \"Add target with Python flake8 checker\" (BUILD_DOCS OR BUILD_EXAMPLES) IF (NOT CMAKE_CROSSCOMPILING AND NOT APPLE_FRAMEWORK) )\n@@ -666,6 +667,11 @@ if(UNIX)\n set(HAVE_PTHREAD 1)\n```", + "output": "> It should be handled in cmake/vars/OPENCV_DISABLE_THREAD_SUPPORT.cmake I agree, sorry, I keep forgetting about this file and I end up running in the same issue everytime I was recompiling after changing branch because... > Ensure that you have clean CMake cache (after code update) ... I was not aware of this trick! Thank you. I am going to remove the last commit with this change." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/test/test_octree.cpp\n**Change Type:** added\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,113 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"test_precomp.hpp\"\n+\n+namespace opencv_test { namespace {\n+\n+using namespace cv;\n```", + "output": "@zihaomu, I suggest to use integer pmin & pmax and then inside the loop scale it by a power-of-two, then we can guarantee bit-exactness between platforms: ``` int scale; Point3i pmin, pmax; .... scale = 1<<20; pmin = Point3i(-scale, -scale, -scale); pmax = Point3i(scale, scale, scale); ... RNG rng_Point(12345); ``` ``` for(int i = 0; i < pointCloudSize; i++) { float _x = (float)rng_Point.uniform(pmin.x, pmax.x)/scale; float _y = (float)rng_Point.uniform(pmin.y, pmax.y)/scale; float _z = (float)rng_Point.uniform(pmin.z, pmax.z)/scale; pointcloud.push_back(Point3f(_x, _y, _z)); } ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Before the point cloud initialization loop simply put ``` RNG& rng_Point = theRNG(); ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/include/opencv2/3d.hpp\n**Change Type:** modified\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -2374,6 +2374,151 @@ void undistortPoints(InputArray src, OutputArray dst,\n InputArray R = noArray(), InputArray P = noArray(),\n TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER, 5, 0.01));\n \n+\n+/** @brief Octree for 3D vision.\n+ *\n+ * In 3D vision filed, the Octree is used to process and accelerate the pointcloud data. The class Octree represents\n+ * the Octree data structure. Each Octree will have a fixed depth. The depth of Octree refers to the distance from\n+ * the root node to the leaf node.All OctreeNodes will not exceed this depth.Increasing the depth will increase\n```", + "output": "Thank you for your code review. Currently, Octree to Python has not been completed, so \"CV_EXPORTS\" is not used." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/include/opencv2/3d.hpp\n**Change Type:** modified\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -2374,6 +2374,151 @@ void undistortPoints(InputArray src, OutputArray dst,\n InputArray R = noArray(), InputArray P = noArray(),\n TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER, 5, 0.01));\n \n+\n+/** @brief Octree for 3D vision.\n+ *\n+ * In 3D vision filed, the Octree is used to process and accelerate the pointcloud data. The class Octree represents\n+ * the Octree data structure. 
Each Octree will have a fixed depth. The depth of Octree refers to the distance from\n+ * the root node to the leaf node.All OctreeNodes will not exceed this depth.Increasing the depth will increase\n```", + "output": "Thanks, this will be fixed in the new version." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/include/opencv2/3d.hpp\n**Change Type:** modified\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -2374,6 +2374,151 @@ void undistortPoints(InputArray src, OutputArray dst,\n InputArray R = noArray(), InputArray P = noArray(),\n TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER, 5, 0.01));\n \n+\n+/** @brief Octree for 3D vision.\n+ *\n+ * In 3D vision filed, the Octree is used to process and accelerate the pointcloud data. The class Octree represents\n+ * the Octree data structure. Each Octree will have a fixed depth. The depth of Octree refers to the distance from\n+ * the root node to the leaf node.All OctreeNodes will not exceed this depth.Increasing the depth will increase\n```", + "output": "@alalek Thank you for your reminder, I have added the \"CV_EXPORT\" flag in the new version." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/src/octree.hpp\n**Change Type:** added\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,99 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// Copyright (C) 2021, Huawei Technologies Co., Ltd. All rights reserved.\n+// Third party copyrights are property of their respective owners.\n+//\n+// Author: Zihao Mu \n+// Liangqian Kong \n```", + "output": "Please align license headers in both files from `src/` directory (they should have the same content). Consider using this form: ``` // This file is part of OpenCV project. // It is subject to the license terms in the LICENSE file found in the top-level directory // of this distribution and at http://opencv.org/license.html. ``` then add this if required: ``` // // Copyright (C) 2021, Huawei Technologies Co., Ltd. All rights reserved. // Third party copyrights are property of their respective owners. ``` And optionally add this: ``` // // Author: Zihao Mu // Liangqian Kong // Longbu Wang ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/include/opencv2/3d.hpp\n**Change Type:** modified\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -2374,6 +2374,151 @@ void undistortPoints(InputArray src, OutputArray dst,\n InputArray R = noArray(), InputArray P = noArray(),\n TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER, 5, 0.01));\n \n+\n+/** @brief Octree for 3D vision.\n+ *\n+ * In 3D vision filed, the Octree is used to process and accelerate the pointcloud data. The class Octree represents\n+ * the Octree data structure. Each Octree will have a fixed depth. The depth of Octree refers to the distance from\n+ * the root node to the leaf node.All OctreeNodes will not exceed this depth.Increasing the depth will increase\n```", + "output": "Empty line should be added between `@brief` and details comments (or `@details` should be used): http://pullrequest.opencv.org/buildbot/export/pr/19684/docs/d3/da6/classcv_1_1Octree.html See `insertPoint()` documentation as a reference. Similar note for other methods." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/include/opencv2/3d.hpp\n**Change Type:** modified\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -2374,6 +2374,151 @@ void undistortPoints(InputArray src, OutputArray dst,\n InputArray R = noArray(), InputArray P = noArray(),\n TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER, 5, 0.01));\n \n+\n+/** @brief Octree for 3D vision.\n+ *\n+ * In 3D vision filed, the Octree is used to process and accelerate the pointcloud data. The class Octree represents\n+ * the Octree data structure. Each Octree will have a fixed depth. 
The depth of Octree refers to the distance from\n+ * the root node to the leaf node.All OctreeNodes will not exceed this depth.Increasing the depth will increase\n```", + "output": "empty line after `@brief` note: http://pullrequest.opencv.org/buildbot/export/pr/19684/docs/d3/da6/classcv_1_1Octree.html" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/src/octree.hpp\n**Change Type:** added\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,99 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// Copyright (C) 2021, Huawei Technologies Co., Ltd. All rights reserved.\n+// Third party copyrights are property of their respective owners.\n+//\n+// Author: Zihao Mu \n+// Liangqian Kong \n```", + "output": "Thanks for reviewing my code. In the next version, I will put these functions directly into `.cpp`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/src/octree.cpp\n**Change Type:** added\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,546 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+#include \"precomp.hpp\"\n+#include \"octree.hpp\"\n+\n+#define OCTREE_CHILD_NUM 8\n+\n```", + "output": "Thanks, fixed." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/src/octree.cpp\n**Change Type:** added\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,546 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+#include \"precomp.hpp\"\n+#include \"octree.hpp\"\n+\n+#define OCTREE_CHILD_NUM 8\n+\n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/src/octree.cpp\n**Change Type:** added\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,546 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+#include \"precomp.hpp\"\n+#include \"octree.hpp\"\n+\n+#define OCTREE_CHILD_NUM 8\n+\n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/src/octree.cpp\n**Change Type:** added\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,546 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+#include \"precomp.hpp\"\n+#include \"octree.hpp\"\n+\n+#define OCTREE_CHILD_NUM 8\n+\n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/3d/src/octree.cpp\n**Change Type:** added\n**Context:** PR #19684: Add Octree to 3D module in next branch - in progress\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,546 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html\n+\n+#include \"precomp.hpp\"\n+#include \"octree.hpp\"\n+\n+#define OCTREE_CHILD_NUM 8\n+\n```", + "output": "Thank you for your carefully reviewing. I have added ``static`` prefix in these functions." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/photo/test/test_hdr.cpp\n**Change Type:** modified\n**Context:** PR #20054: Fix Robertson Calibration NaN Bug\n\n**Code Changes:**\n```diff\n@@ -249,4 +249,21 @@ TEST(Photo_CalibrateRobertson, regression)\n checkEqual(expected, response, 1e-1f, \"CalibrateRobertson\");\n }\n \n+TEST(Photo_CalibrateRobertson, bug_18180)\n+{\n+ vector images;\n+ vector fn;\n+ string test_path = cvtest::TS::ptr()->get_data_path() + \"hdr/exposures/bug_18180/\";\n+ for(int i = 1; i <= 4; ++i)\n```", + "output": "That's smarter - let me do that. Thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/photo/test/test_hdr.cpp\n**Change Type:** modified\n**Context:** PR #20054: Fix Robertson Calibration NaN Bug\n\n**Code Changes:**\n```diff\n@@ -249,4 +249,21 @@ TEST(Photo_CalibrateRobertson, regression)\n checkEqual(expected, response, 1e-1f, \"CalibrateRobertson\");\n }\n \n+TEST(Photo_CalibrateRobertson, bug_18180)\n+{\n+ vector images;\n+ vector fn;\n+ string test_path = cvtest::TS::ptr()->get_data_path() + \"hdr/exposures/bug_18180/\";\n+ for(int i = 1; i <= 4; ++i)\n```", + "output": "Ah I figured out the issue, I had to move the epsilon back. Addressed. Thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/photo/src/merge.cpp\n**Change Type:** modified\n**Context:** PR #20054: Fix Robertson Calibration NaN Bug\n\n**Code Changes:**\n```diff\n@@ -344,7 +344,7 @@ class MergeRobertsonImpl CV_FINAL : public MergeRobertson\n result += times.at((int)i) * w.mul(im);\n wsum += times.at((int)i) * times.at((int)i) * w;\n }\n- result = result.mul(1 / wsum);\n+ result = result.mul(1 / (wsum + Scalar::all(DBL_EPSILON)));\n }\n \n void process(InputArrayOfArrays src, OutputArray dst, InputArray times) CV_OVERRIDE\n```", + "output": "@alalek Ah good catch. It technically works, but a warning pops up: ``` assign OpenCV/MatExpr: processing of multi-channel arrays might be changed in the future: https://github.com/opencv/opencv/issues/16739 ``` Addressed the issue and actually moved the epsilon to where the division occurs. Thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/color_lab.cpp\n**Change Type:** modified\n**Context:** PR #19883: Add Neon optimised RGB2Lab conversion\n**Review Line:** 1576\n\n**Code Changes:**\n```diff\n typedef uchar channel_type;\n@@ -1571,6 +1573,69 @@ struct RGB2Lab_b\n }\n }\n \n+#if CV_NEON\n+ template \n+ inline void rgb2lab_batch(const ushort* tab,\n+ const v_uint8 vRi, const v_uint8 vGi, const v_uint8 vBi,\n+ v_int32& vL, v_int32& va, v_int32& vb) const\n+ {\n```", + "output": "Hi @asmorkalov - this code is specific to 128-bit. If we replace `CV_NEON` with `CV_SIMD`, the version supporting wider vectors at line 1692 will stop running. Would replacing `CV_NEON` with `CV_SIMD128` a better option?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/color_lab.cpp\n**Change Type:** modified\n**Context:** PR #19883: Add Neon optimised RGB2Lab conversion\n**Review Line:** 1576\n\n**Code Changes:**\n```diff\n typedef uchar channel_type;\n@@ -1571,6 +1573,69 @@ struct RGB2Lab_b\n }\n }\n \n+#if CV_NEON\n+ template \n+ inline void rgb2lab_batch(const ushort* tab,\n+ const v_uint8 vRi, const v_uint8 vGi, const v_uint8 vBi,\n+ v_int32& vL, v_int32& va, v_int32& vb) const\n+ {\n```", + "output": "@asmorkalov `CV_SIMD` is below. This is dedicated branch for NEON only (existed `v_lut()`-based approach is not very fast with NEON)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/ocl.hpp\n**Change Type:** modified\n**Context:** PR #19755: cv::UMat output/input in VideoCapture/VideoWriter (data stays in GPU memory)\n\n**Code Changes:**\n```diff\n@@ -43,6 +43,8 @@\n #define OPENCV_OPENCL_HPP\n \n #include \"opencv2/core.hpp\"\n+#include \n+#include \n \n namespace cv { namespace ocl {\n \n@@ -277,6 +279,12 @@ class CV_EXPORTS Context\n```", + "output": "@alalek Should be another function, as we can't change signature of existent function?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/ocl.cpp\n**Change Type:** modified\n**Context:** PR #19755: cv::UMat output/input in VideoCapture/VideoWriter (data stays in GPU memory)\n\n**Code Changes:**\n```diff\n@@ -113,10 +113,6 @@\n \n #include \"opencv2/core/opencl/runtime/opencl_core.hpp\"\n \n-#ifdef HAVE_DIRECTX\n-#include \"directx.hpp\"\n-#endif\n-\n #ifdef HAVE_OPENCL_SVM\n #include \"opencv2/core/opencl/runtime/opencl_svm_20.hpp\"\n```", + "output": "Thanks for information. Caching support here is low priority for now." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/directx.cpp\n**Change Type:** modified\n**Context:** PR #19755: cv::UMat output/input in VideoCapture/VideoWriter (data stays in GPU memory)\n**Review Line:** 260\n\n**Code Changes:**\n```diff\n- CV_Error(cv::Error::OpenCLInitError, \"OpenCL: No supported extensions\");\n-#else\n- if (!is_support_cl_khr_d3d11_sharing)\n- CV_Error(cv::Error::OpenCLInitError, \"OpenCL: No supported extensions\");\n+ ~OpenCL_D3D11_NV() {\n+ device->Release();\n+ }\n+ ID3D11Device* device;\n+ clCreateFromD3D11Texture2DNV_fn clCreateFromD3D11Texture2DNV;\n+ clEnqueueAcquireD3D11ObjectsNV_fn clEnqueueAcquireD3D11ObjectsNV;\n+ clEnqueueReleaseD3D11ObjectsNV_fn clEnqueueReleaseD3D11ObjectsNV;\n```", + "output": "On CI we observing crash of the test application [on Windows](http://pullrequest.opencv.org/buildbot/builders/precommit_linux64/builds/31400) (exit code 2170) on this point (after ExitProcess call). @mikhail-nikolskiy Could you please confirm? Do you have any ideas how to avoid that? In general, we probably can skip this cleanup (it is not critical per OpenCL context or on application termination stage). I will add this if there are no other good known options.
Stack trace ``` 0:000> k # Child-SP RetAddr Call Site 00 000000eb`8eaff140 00007ffa`14f47240 KERNELBASE!RaiseException+0x69 01 000000eb`8eaff220 00007ffa`14f1de30 dxgi!CDXGIFactory::FinalRelease+0x29158 02 000000eb`8eaff2e0 00007ffa`14f1dd14 dxgi!ATL::CComObject::~CComObject+0xbc 03 000000eb`8eaff320 00007ffa`14f19703 dxgi!ATL::CComObject::`vector deleting destructor'+0x14 04 000000eb`8eaff350 00007ffa`14f213f4 dxgi!ATL::CComObject::Release+0x63 05 000000eb`8eaff390 00007ffa`14f20356 dxgi!CDXGIBaseAdapter::FinalRelease+0xc0 06 000000eb`8eaff3f0 00007ffa`14f18255 dxgi!ATL::CComObject::`scalar deleting destructor'+0x76 07 000000eb`8eaff430 00007ffa`11ed27ba dxgi!ATL::CComObject::Release+0x65 08 000000eb`8eaff470 00007ffa`11ed270e d3d11!NDXGI::CUMDAdapter::~CUMDAdapter+0x92 09 000000eb`8eaff4a0 00007ffa`11ed12e8 d3d11!NDXGI::CUMDAdapter::`scalar deleting destructor'+0xe 0a 000000eb`8eaff4d0 00007ffa`11ed2844 d3d11!NDXGI::CDevice::~CDevice+0x3c 0b 000000eb`8eaff500 00007ffa`11ed287b d3d11!CLayeredObject::`scalar deleting destructor'+0x14 0c 000000eb`8eaff530 00007ffa`11ee1015 d3d11!CLayeredObject::Release+0xb 0d 000000eb`8eaff560 00007ffa`11ed26d2 d3d11!ATL::AtlComPtrAssign+0x51 0e 000000eb`8eaff590 00007ffa`11ed2674 d3d11!TComObject::~TComObject+0x3e 0f 000000eb`8eaff5c0 00007ffa`11ed6899 d3d11!TComObject::`scalar deleting destructor'+0x14 10 000000eb`8eaff5f0 00007ff9`ad2beaa8 d3d11!TComObject::Release+0x39 11 (Inline Function) --------`-------- opencv_core452!cv::directx::OpenCL_D3D11::{dtor}+0x14 [C:\\build\\precommit_custom_windows\\opencv\\modules\\core\\src\\directx.cpp @ 287] 12 000000eb`8eaff620 00007ff9`ad39659e opencv_core452!cv::directx::OpenCL_D3D11::`scalar deleting destructor'+0x28 13 (Inline Function) --------`-------- opencv_core452!std::_Ref_count_base::_Decref+0x17 [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\memory @ 541] 14 (Inline Function) --------`-------- 
opencv_core452!std::_Ptr_base::_Decref+0x20 [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\memory @ 774] 15 (Inline Function) --------`-------- opencv_core452!std::shared_ptr::{dtor}+0x20 [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\memory @ 1033] 16 (Inline Function) --------`-------- opencv_core452!std::_Default_allocator_traits >,void *> > >::destroy+0x20 [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\xmemory @ 699] 17 (Inline Function) --------`-------- opencv_core452!std::_Tree_node >,void *>::_Freenode+0x29 [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\xtree @ 379] 18 000000eb`8eaff650 00007ff9`ad39ec95 opencv_core452!std::_Tree_val > > >::_Erase_tree >,void *> > >+0x6e [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\xtree @ 745] 19 (Inline Function) --------`-------- opencv_core452!std::_Tree,std::less,std::allocator > >,0> >::clear+0x12 [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\xtree @ 1448] 1a 000000eb`8eaff690 00007ff9`ad3a1209 opencv_core452!cv::ocl::Context::Impl::~Impl+0xd5 [C:\\build\\precommit_custom_windows\\opencv\\modules\\core\\src\\ocl.cpp @ 2395] 1b 000000eb`8eaff730 00007ff9`ad4084c7 opencv_core452!std::_Ref_count_obj2::_Destroy+0x49 [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\memory @ 1445] 1c (Inline Function) --------`-------- opencv_core452!std::_Ref_count_base::_Decref+0x19 [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\memory @ 541] 1d (Inline Function) --------`-------- opencv_core452!std::_Ptr_base::_Decref+0x22 [C:\\Program Files (x86)\\Microsoft 
Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\memory @ 774] 1e (Inline Function) --------`-------- opencv_core452!std::shared_ptr::{dtor}+0x22 [C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.27.29110\\include\\memory @ 1033] 1f (Inline Function) --------`-------- opencv_core452!cv::ocl::OpenCLExecutionContext::{dtor}+0x22 [C:\\build\\precommit_custom_windows\\opencv\\modules\\core\\include\\opencv2\\core\\ocl.hpp @ 857] 20 000000eb`8eaff760 00007ff9`ad40b690 opencv_core452!cv::TLSData::deleteDataInstance+0x47 [C:\\build\\precommit_custom_windows\\opencv\\modules\\core\\include\\opencv2\\core\\utils\\tls.hpp @ 82] 21 000000eb`8eaff790 00007ffa`18eeddd5 opencv_core452!cv::details::TlsStorage::releaseThread+0x130 [C:\\build\\precommit_custom_windows\\opencv\\modules\\core\\src\\system.cpp @ 1571] ```
" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_ffmpeg_hw.hpp\n**Change Type:** modified\n**Context:** PR #19755: cv::UMat output/input in VideoCapture/VideoWriter (data stays in GPU memory)\n**Review Line:** 116\n\n**Code Changes:**\n```diff\n@@ -80,7 +113,7 @@ std::string getDecoderConfiguration(VideoAccelerationType va_type, AVDictionary\n case VIDEO_ACCELERATION_ANY: return \"vaapi.iHD\";\n case VIDEO_ACCELERATION_D3D11: return \"\";\n case VIDEO_ACCELERATION_VAAPI: return \"vaapi.iHD\";\n- case VIDEO_ACCELERATION_MFX: return \"\";\n+ case VIDEO_ACCELERATION_MFX: return \"qsv.iHD\";\n }\n return \"\";\n #endif\n@@ -125,7 +158,6 @@ std::string getEncoderConfiguration(VideoAccelerationType va_type, AVDictionary\n #endif\n```", + "output": "There is a regression on Linux machine for the test `sample_322x242_15frames.yuv420p.libvpx-vp9.mp4, FFMPEG, MFX, false` Test logs are below (OPENCV_FFMPEG_DEBUG=1 OPENCV_LOG_LEVEL=DEBUG):
With patch ``` [ RUN ] videoio/videocapture_acceleration.read/84, where GetParam() = (sample_322x242_15frames.yuv420p.libvpx-vp9.mp4, FFMPEG, MFX, false) [OPENCV:FFMPEG:48] Opening '/home/alalek/projects/opencv/extra/testdata/highgui/video/sample_322x242_15frames.yuv420p.libvpx-vp9.mp4' for reading [OPENCV:FFMPEG:48] Setting default whitelist 'file,crypto,data' [OPENCV:FFMPEG:56] Probing mov,mp4,m4a,3gp,3g2,mj2 score:100 size:2048 [OPENCV:FFMPEG:56] Probing mp3 score:1 size:2048 [OPENCV:FFMPEG:48] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100 [OPENCV:FFMPEG:56] type:'ftyp' parent:'root' sz: 28 8 31057 [OPENCV:FFMPEG:48] ISO: File Type Major Brand: isom [OPENCV:FFMPEG:56] type:'free' parent:'root' sz: 8 36 31057 [OPENCV:FFMPEG:56] type:'mdat' parent:'root' sz: 30210 44 31057 [OPENCV:FFMPEG:56] type:'moov' parent:'root' sz: 811 30254 31057 [OPENCV:FFMPEG:56] type:'mvhd' parent:'moov' sz: 108 8 803 [OPENCV:FFMPEG:56] time scale = 1000 [OPENCV:FFMPEG:56] type:'trak' parent:'moov' sz: 597 116 803 [OPENCV:FFMPEG:56] type:'tkhd' parent:'trak' sz: 92 8 589 [OPENCV:FFMPEG:56] type:'edts' parent:'trak' sz: 36 100 589 [OPENCV:FFMPEG:56] type:'elst' parent:'edts' sz: 28 8 28 [OPENCV:FFMPEG:56] track[0].edit_count = 1 [OPENCV:FFMPEG:56] duration=600 time=0 rate=1.000000 [OPENCV:FFMPEG:56] type:'mdia' parent:'trak' sz: 461 136 589 [OPENCV:FFMPEG:56] type:'mdhd' parent:'mdia' sz: 32 8 453 [OPENCV:FFMPEG:56] type:'hdlr' parent:'mdia' sz: 45 40 453 [OPENCV:FFMPEG:56] ctype=[0][0][0][0] [OPENCV:FFMPEG:56] stype=vide [OPENCV:FFMPEG:56] type:'minf' parent:'mdia' sz: 376 85 453 [OPENCV:FFMPEG:56] type:'vmhd' parent:'minf' sz: 20 8 368 [OPENCV:FFMPEG:56] type:'dinf' parent:'minf' sz: 36 28 368 [OPENCV:FFMPEG:56] type:'dref' parent:'dinf' sz: 28 8 28 [OPENCV:FFMPEG:48] Unknown dref type 0x206c7275 size 12 [OPENCV:FFMPEG:56] type:'stbl' parent:'minf' sz: 312 64 368 [OPENCV:FFMPEG:56] type:'stsd' parent:'stbl' sz: 132 8 304 [OPENCV:FFMPEG:56] size=116 4CC=vp09 
codec_type=0 [OPENCV:FFMPEG:56] type:'vpcC' parent:'stsd' sz: 20 8 30 [OPENCV:FFMPEG:56] type:'fiel' parent:'stsd' sz: 10 28 30 [OPENCV:FFMPEG:56] type:'stts' parent:'stbl' sz: 24 140 304 [OPENCV:FFMPEG:56] track[0].stts.entries = 1 [OPENCV:FFMPEG:56] sample_count=15, sample_duration=512 [OPENCV:FFMPEG:56] type:'stss' parent:'stbl' sz: 20 164 304 [OPENCV:FFMPEG:56] keyframe_count = 1 [OPENCV:FFMPEG:56] type:'stsc' parent:'stbl' sz: 28 184 304 [OPENCV:FFMPEG:56] track[0].stsc.entries = 1 [OPENCV:FFMPEG:56] type:'stsz' parent:'stbl' sz: 80 212 304 [OPENCV:FFMPEG:56] sample_size = 0 sample_count = 15 [OPENCV:FFMPEG:56] type:'stco' parent:'stbl' sz: 20 292 304 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 0, offset 2c, dts 0, size 5068, distance 0, keyframe 1 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 1, offset 13f8, dts 512, size 1308, distance 1, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 2, offset 1914, dts 1024, size 1265, distance 2, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 3, offset 1e05, dts 1536, size 1507, distance 3, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 4, offset 23e8, dts 2048, size 2089, distance 4, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 5, offset 2c11, dts 2560, size 1956, distance 5, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 6, offset 33b5, dts 3072, size 1795, distance 6, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 7, offset 3ab8, dts 3584, size 2204, distance 7, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 8, offset 4354, dts 4096, size 2267, distance 8, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 9, offset 4c2f, dts 4608, size 1315, distance 9, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 10, offset 5152, dts 5120, size 5251, distance 10, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 11, offset 65d5, dts 5632, size 1670, distance 11, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 12, offset 6c5b, dts 6144, size 632, 
distance 12, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 13, offset 6ed3, dts 6656, size 612, distance 13, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 14, offset 7137, dts 7168, size 1263, distance 14, keyframe 0 [OPENCV:FFMPEG:48] Processing st: 0, edit list 0 - media time: 0, duration: 7680 [OPENCV:FFMPEG:56] type:'udta' parent:'moov' sz: 98 713 803 [OPENCV:FFMPEG:56] type:'meta' parent:'udta' sz: 90 8 90 [OPENCV:FFMPEG:56] type:'hdlr' parent:'meta' sz: 33 8 78 [OPENCV:FFMPEG:56] ctype=[0][0][0][0] [OPENCV:FFMPEG:56] stype=mdir [OPENCV:FFMPEG:56] type:'ilst' parent:'meta' sz: 45 41 78 [OPENCV:FFMPEG:56] type:'[169]too' parent:'ilst' sz: 37 8 37 [OPENCV:FFMPEG:56] on_parse_exit_offset=31057 [OPENCV:FFMPEG:48] Before avformat_find_stream_info() pos: 31057 bytes read:31057 seeks:0 nb_streams:1 [OPENCV:FFMPEG:56] stream 0, sample 0, dts 0 [OPENCV:FFMPEG:48] Format yuv420p chosen by get_format(). [OPENCV:FFMPEG:48] All info found [OPENCV:FFMPEG:56] stream 0: start_time: 0 duration: 0.6 [OPENCV:FFMPEG:56] format: start_time: 0 duration: 0.6 (estimate from stream) bitrate=414 kb/s [OPENCV:FFMPEG:48] After avformat_find_stream_info() pos: 5112 bytes read:31057 seeks:0 frames:1 [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_impl.hpp (1022) open FFMPEG: stream[0] is video stream with codecID=167 width=322 height=242 [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (927) HWAccelIterator FFMPEG: allowed acceleration types (mfx): 'qsv.iHD' [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (945) HWAccelIterator FFMPEG: disabled codecs: 'av1.vaapi,av1_qsv,vp8.vaapi,vp8_qsv' [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_impl.hpp (1054) open FFMPEG: trying to configure H/W acceleration: 'qsv.iHD' [OPENCV:FFMPEG:40] Trying to use DRM render node for device 0. 
[OPENCV:FFMPEG:40] libva: VA-API version 1.9.0 libva: Trying to open /usr/lib64/dri/iHD_drv_video.so libva: Found init function __vaDriverInit_1_9 libva: va_openDriver() returns 0 Initialised VAAPI connection: version 1.9 [OPENCV:FFMPEG:48] Format 0x41524742 -> bgra. [OPENCV:FFMPEG:48] Format 0x42475241 -> argb. [OPENCV:FFMPEG:48] Format 0x41424752 -> rgba. [OPENCV:FFMPEG:48] Format 0x52474241 -> abgr. [OPENCV:FFMPEG:48] Format 0x58524742 -> bgr0. [OPENCV:FFMPEG:48] Format 0x42475258 -> 0rgb. [OPENCV:FFMPEG:48] Format 0x58424752 -> rgb0. [OPENCV:FFMPEG:48] Format 0x52474258 -> 0bgr. [OPENCV:FFMPEG:48] Format 0x30335241 -> unknown. [OPENCV:FFMPEG:48] Format 0x30334241 -> unknown. [OPENCV:FFMPEG:48] Format 0x30335258 -> unknown. [OPENCV:FFMPEG:48] Format 0x30334258 -> unknown. [OPENCV:FFMPEG:48] Format 0x36314752 -> unknown. [OPENCV:FFMPEG:48] Format 0x50424752 -> unknown. [OPENCV:FFMPEG:48] Format 0x50524742 -> unknown. [OPENCV:FFMPEG:48] Format 0x56555941 -> unknown. [OPENCV:FFMPEG:48] Format 0x30303859 -> gray. [OPENCV:FFMPEG:48] Format 0x3231564e -> nv12. [OPENCV:FFMPEG:48] Format 0x3132564e -> unknown. [OPENCV:FFMPEG:48] Format 0x32595559 -> yuyv422. [OPENCV:FFMPEG:48] Format 0x59565955 -> uyvy422. [OPENCV:FFMPEG:48] Format 0x32315659 -> yuv420p. [OPENCV:FFMPEG:48] Format 0x30323449 -> yuv420p. [OPENCV:FFMPEG:48] Format 0x50313134 -> yuv411p. [OPENCV:FFMPEG:48] Format 0x48323234 -> yuv422p. [OPENCV:FFMPEG:48] Format 0x56323234 -> yuv440p. [OPENCV:FFMPEG:48] Format 0x50343434 -> yuv444p. [OPENCV:FFMPEG:48] Format 0x33434d49 -> unknown. [OPENCV:FFMPEG:48] Format 0x30313050 -> p010le. [OPENCV:FFMPEG:40] VAAPI driver: Intel iHD driver for Intel(R) Gen Graphics - 20.3.0 (). [OPENCV:FFMPEG:40] Driver not found in known nonstandard list, using standard behaviour. 
[ INFO:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (272) hw_check_device FFMPEG: Using MFX video acceleration on device: Intel iHD driver for Intel(R) Gen Graphics - 20.3.0 () [ INFO:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (562) hw_create_device FFMPEG: Created video acceleration context (av_hwdevice_ctx_create) for vaapi on device 'default' [ INFO:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (579) hw_create_device FFMPEG: Can't bind vaapi video acceleration context to already created OpenCL context [OPENCV:FFMPEG:40] Initialize MFX session: API version is 1.34, implementation version is 1.34 [ INFO:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (305) hw_create_derived_context FFMPEG: Created derived video acceleration context (av_hwdevice_ctx_create_derived) for qsv VideoCapture FFMPEG:MFX [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (618) hw_create_frames FFMPEG: avcodec_get_hw_frames_parameters() call failed: -2 [OPENCV:FFMPEG:48] Created surface 0. [OPENCV:FFMPEG:48] Direct mapping possible. [OPENCV:FFMPEG:48] Created surface 0x1. [OPENCV:FFMPEG:48] Created surface 0x2. [OPENCV:FFMPEG:48] Created surface 0x3. [OPENCV:FFMPEG:48] Created surface 0x4. [OPENCV:FFMPEG:48] Created surface 0x5. [OPENCV:FFMPEG:48] Created surface 0x6. [OPENCV:FFMPEG:48] Created surface 0x7. [OPENCV:FFMPEG:48] Created surface 0x8. [OPENCV:FFMPEG:48] Created surface 0x9. [OPENCV:FFMPEG:48] Created surface 0xa. [OPENCV:FFMPEG:48] Created surface 0xb. [OPENCV:FFMPEG:48] Created surface 0xc. [OPENCV:FFMPEG:48] Created surface 0xd. [OPENCV:FFMPEG:48] Created surface 0xe. [OPENCV:FFMPEG:48] Created surface 0xf. [OPENCV:FFMPEG:48] Created surface 0x10. [OPENCV:FFMPEG:48] Created surface 0x11. [OPENCV:FFMPEG:48] Created surface 0x12. [OPENCV:FFMPEG:48] Created surface 0x13. 
[OPENCV:FFMPEG:48] Created surface 0x14. [OPENCV:FFMPEG:48] Created surface 0x15. [OPENCV:FFMPEG:48] Created surface 0x16. [OPENCV:FFMPEG:48] Created surface 0x17. [OPENCV:FFMPEG:48] Created surface 0x18. [OPENCV:FFMPEG:48] Created surface 0x19. [OPENCV:FFMPEG:48] Created surface 0x1a. [OPENCV:FFMPEG:48] Created surface 0x1b. [OPENCV:FFMPEG:48] Created surface 0x1c. [OPENCV:FFMPEG:48] Created surface 0x1d. [OPENCV:FFMPEG:48] Created surface 0x1e. [OPENCV:FFMPEG:48] Created surface 0x1f. [OPENCV:FFMPEG:48] Format qsv chosen by get_format(). [OPENCV:FFMPEG:40] Decoder: output is video memory surface [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (618) hw_create_frames FFMPEG: avcodec_get_hw_frames_parameters() call failed: -2 [OPENCV:FFMPEG:48] Created surface 0x20. [OPENCV:FFMPEG:48] Direct mapping possible. [OPENCV:FFMPEG:48] Created surface 0x21. [OPENCV:FFMPEG:48] Created surface 0x22. [OPENCV:FFMPEG:48] Created surface 0x23. [OPENCV:FFMPEG:48] Created surface 0x24. [OPENCV:FFMPEG:48] Created surface 0x25. [OPENCV:FFMPEG:48] Created surface 0x26. [OPENCV:FFMPEG:48] Created surface 0x27. [OPENCV:FFMPEG:48] Created surface 0x28. [OPENCV:FFMPEG:48] Created surface 0x29. [OPENCV:FFMPEG:48] Created surface 0x2a. [OPENCV:FFMPEG:48] Created surface 0x2b. [OPENCV:FFMPEG:48] Created surface 0x2c. [OPENCV:FFMPEG:48] Created surface 0x2d. [OPENCV:FFMPEG:48] Created surface 0x2e. [OPENCV:FFMPEG:48] Created surface 0x2f. [OPENCV:FFMPEG:48] Created surface 0x30. [OPENCV:FFMPEG:48] Created surface 0x31. [OPENCV:FFMPEG:48] Created surface 0x32. [OPENCV:FFMPEG:48] Created surface 0x33. [OPENCV:FFMPEG:48] Created surface 0x34. [OPENCV:FFMPEG:48] Created surface 0x35. [OPENCV:FFMPEG:48] Created surface 0x36. [OPENCV:FFMPEG:48] Created surface 0x37. [OPENCV:FFMPEG:48] Created surface 0x38. [OPENCV:FFMPEG:48] Created surface 0x39. [OPENCV:FFMPEG:48] Created surface 0x3a. [OPENCV:FFMPEG:48] Created surface 0x3b. 
[OPENCV:FFMPEG:48] Created surface 0x3c. [OPENCV:FFMPEG:48] Created surface 0x3d. [OPENCV:FFMPEG:48] Created surface 0x3e. [OPENCV:FFMPEG:48] Created surface 0x3f. [OPENCV:FFMPEG:48] Format qsv chosen by get_format(). [OPENCV:FFMPEG:40] Decoder: output is video memory surface [OPENCV:FFMPEG:16] Error initializing the MFX video decoder: unsupported (-3) /home/alalek/projects/opencv/dev/modules/videoio/test/test_video_io.cpp:743: Failure Value of: hw_reader.read(frame) Actual: false Expected: true Google Test trace: /home/alalek/projects/opencv/dev/modules/videoio/test/test_video_io.cpp:732: frame=0 /home/alalek/projects/opencv/dev/modules/videoio/test/test_video_io.cpp:745: Failure Value of: frame.empty() Actual: true Expected: false Google Test trace: /home/alalek/projects/opencv/dev/modules/videoio/test/test_video_io.cpp:732: frame=0 [OPENCV:FFMPEG:40] Statistics: 31057 bytes read, 0 seeks [ FAILED ] videoio/videocapture_acceleration.read/84, where GetParam() = (sample_322x242_15frames.yuv420p.libvpx-vp9.mp4, FFMPEG, MFX, false) (6 ms) ```
Master branch ``` [ RUN ] videoio/videocapture_acceleration.read/84, where GetParam() = (sample_322x242_15frames.yuv420p.libvpx-vp9.mp4, FFMPEG, MFX, false) [OPENCV:FFMPEG:48] Opening '/home/alalek/projects/opencv/extra/testdata/highgui/video/sample_322x242_15frames.yuv420p.libvpx-vp9.mp4' for reading [OPENCV:FFMPEG:48] Setting default whitelist 'file,crypto,data' [OPENCV:FFMPEG:56] Probing mov,mp4,m4a,3gp,3g2,mj2 score:100 size:2048 [OPENCV:FFMPEG:56] Probing mp3 score:1 size:2048 [OPENCV:FFMPEG:48] Format mov,mp4,m4a,3gp,3g2,mj2 probed with size=2048 and score=100 [OPENCV:FFMPEG:56] type:'ftyp' parent:'root' sz: 28 8 31057 [OPENCV:FFMPEG:48] ISO: File Type Major Brand: isom [OPENCV:FFMPEG:56] type:'free' parent:'root' sz: 8 36 31057 [OPENCV:FFMPEG:56] type:'mdat' parent:'root' sz: 30210 44 31057 [OPENCV:FFMPEG:56] type:'moov' parent:'root' sz: 811 30254 31057 [OPENCV:FFMPEG:56] type:'mvhd' parent:'moov' sz: 108 8 803 [OPENCV:FFMPEG:56] time scale = 1000 [OPENCV:FFMPEG:56] type:'trak' parent:'moov' sz: 597 116 803 [OPENCV:FFMPEG:56] type:'tkhd' parent:'trak' sz: 92 8 589 [OPENCV:FFMPEG:56] type:'edts' parent:'trak' sz: 36 100 589 [OPENCV:FFMPEG:56] type:'elst' parent:'edts' sz: 28 8 28 [OPENCV:FFMPEG:56] track[0].edit_count = 1 [OPENCV:FFMPEG:56] duration=600 time=0 rate=1.000000 [OPENCV:FFMPEG:56] type:'mdia' parent:'trak' sz: 461 136 589 [OPENCV:FFMPEG:56] type:'mdhd' parent:'mdia' sz: 32 8 453 [OPENCV:FFMPEG:56] type:'hdlr' parent:'mdia' sz: 45 40 453 [OPENCV:FFMPEG:56] ctype=[0][0][0][0] [OPENCV:FFMPEG:56] stype=vide [OPENCV:FFMPEG:56] type:'minf' parent:'mdia' sz: 376 85 453 [OPENCV:FFMPEG:56] type:'vmhd' parent:'minf' sz: 20 8 368 [OPENCV:FFMPEG:56] type:'dinf' parent:'minf' sz: 36 28 368 [OPENCV:FFMPEG:56] type:'dref' parent:'dinf' sz: 28 8 28 [OPENCV:FFMPEG:48] Unknown dref type 0x206c7275 size 12 [OPENCV:FFMPEG:56] type:'stbl' parent:'minf' sz: 312 64 368 [OPENCV:FFMPEG:56] type:'stsd' parent:'stbl' sz: 132 8 304 [OPENCV:FFMPEG:56] size=116 4CC=vp09 
codec_type=0 [OPENCV:FFMPEG:56] type:'vpcC' parent:'stsd' sz: 20 8 30 [OPENCV:FFMPEG:56] type:'fiel' parent:'stsd' sz: 10 28 30 [OPENCV:FFMPEG:56] type:'stts' parent:'stbl' sz: 24 140 304 [OPENCV:FFMPEG:56] track[0].stts.entries = 1 [OPENCV:FFMPEG:56] sample_count=15, sample_duration=512 [OPENCV:FFMPEG:56] type:'stss' parent:'stbl' sz: 20 164 304 [OPENCV:FFMPEG:56] keyframe_count = 1 [OPENCV:FFMPEG:56] type:'stsc' parent:'stbl' sz: 28 184 304 [OPENCV:FFMPEG:56] track[0].stsc.entries = 1 [OPENCV:FFMPEG:56] type:'stsz' parent:'stbl' sz: 80 212 304 [OPENCV:FFMPEG:56] sample_size = 0 sample_count = 15 [OPENCV:FFMPEG:56] type:'stco' parent:'stbl' sz: 20 292 304 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 0, offset 2c, dts 0, size 5068, distance 0, keyframe 1 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 1, offset 13f8, dts 512, size 1308, distance 1, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 2, offset 1914, dts 1024, size 1265, distance 2, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 3, offset 1e05, dts 1536, size 1507, distance 3, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 4, offset 23e8, dts 2048, size 2089, distance 4, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 5, offset 2c11, dts 2560, size 1956, distance 5, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 6, offset 33b5, dts 3072, size 1795, distance 6, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 7, offset 3ab8, dts 3584, size 2204, distance 7, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 8, offset 4354, dts 4096, size 2267, distance 8, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 9, offset 4c2f, dts 4608, size 1315, distance 9, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 10, offset 5152, dts 5120, size 5251, distance 10, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 11, offset 65d5, dts 5632, size 1670, distance 11, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 12, offset 6c5b, dts 6144, size 632, 
distance 12, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 13, offset 6ed3, dts 6656, size 612, distance 13, keyframe 0 [OPENCV:FFMPEG:56] AVIndex stream 0, sample 14, offset 7137, dts 7168, size 1263, distance 14, keyframe 0 [OPENCV:FFMPEG:48] Processing st: 0, edit list 0 - media time: 0, duration: 7680 [OPENCV:FFMPEG:56] type:'udta' parent:'moov' sz: 98 713 803 [OPENCV:FFMPEG:56] type:'meta' parent:'udta' sz: 90 8 90 [OPENCV:FFMPEG:56] type:'hdlr' parent:'meta' sz: 33 8 78 [OPENCV:FFMPEG:56] ctype=[0][0][0][0] [OPENCV:FFMPEG:56] stype=mdir [OPENCV:FFMPEG:56] type:'ilst' parent:'meta' sz: 45 41 78 [OPENCV:FFMPEG:56] type:'[169]too' parent:'ilst' sz: 37 8 37 [OPENCV:FFMPEG:56] on_parse_exit_offset=31057 [OPENCV:FFMPEG:48] Before avformat_find_stream_info() pos: 31057 bytes read:31057 seeks:0 nb_streams:1 [OPENCV:FFMPEG:56] stream 0, sample 0, dts 0 [OPENCV:FFMPEG:48] Format yuv420p chosen by get_format(). [OPENCV:FFMPEG:48] All info found [OPENCV:FFMPEG:56] stream 0: start_time: 0 duration: 0.6 [OPENCV:FFMPEG:56] format: start_time: 0 duration: 0.6 (estimate from stream) bitrate=414 kb/s [OPENCV:FFMPEG:48] After avformat_find_stream_info() pos: 5112 bytes read:31057 seeks:0 frames:1 [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_impl.hpp (1022) open FFMPEG: stream[0] is video stream with codecID=167 width=322 height=242 [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (927) HWAccelIterator FFMPEG: allowed acceleration types (mfx): 'qsv.iHD' [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (945) HWAccelIterator FFMPEG: disabled codecs: 'av1.vaapi,av1_qsv,vp8.vaapi,vp8_qsv' [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_impl.hpp (1054) open FFMPEG: trying to configure H/W acceleration: 'qsv.iHD' [OPENCV:FFMPEG:40] Trying to use DRM render node for device 0. 
[OPENCV:FFMPEG:40] libva: VA-API version 1.9.0 libva: Trying to open /usr/lib64/dri/iHD_drv_video.so libva: Found init function __vaDriverInit_1_9 libva: va_openDriver() returns 0 Initialised VAAPI connection: version 1.9 [OPENCV:FFMPEG:48] Format 0x41524742 -> bgra. [OPENCV:FFMPEG:48] Format 0x42475241 -> argb. [OPENCV:FFMPEG:48] Format 0x41424752 -> rgba. [OPENCV:FFMPEG:48] Format 0x52474241 -> abgr. [OPENCV:FFMPEG:48] Format 0x58524742 -> bgr0. [OPENCV:FFMPEG:48] Format 0x42475258 -> 0rgb. [OPENCV:FFMPEG:48] Format 0x58424752 -> rgb0. [OPENCV:FFMPEG:48] Format 0x52474258 -> 0bgr. [OPENCV:FFMPEG:48] Format 0x30335241 -> unknown. [OPENCV:FFMPEG:48] Format 0x30334241 -> unknown. [OPENCV:FFMPEG:48] Format 0x30335258 -> unknown. [OPENCV:FFMPEG:48] Format 0x30334258 -> unknown. [OPENCV:FFMPEG:48] Format 0x36314752 -> unknown. [OPENCV:FFMPEG:48] Format 0x50424752 -> unknown. [OPENCV:FFMPEG:48] Format 0x50524742 -> unknown. [OPENCV:FFMPEG:48] Format 0x56555941 -> unknown. [OPENCV:FFMPEG:48] Format 0x30303859 -> gray. [OPENCV:FFMPEG:48] Format 0x3231564e -> nv12. [OPENCV:FFMPEG:48] Format 0x3132564e -> unknown. [OPENCV:FFMPEG:48] Format 0x32595559 -> yuyv422. [OPENCV:FFMPEG:48] Format 0x59565955 -> uyvy422. [OPENCV:FFMPEG:48] Format 0x32315659 -> yuv420p. [OPENCV:FFMPEG:48] Format 0x30323449 -> yuv420p. [OPENCV:FFMPEG:48] Format 0x50313134 -> yuv411p. [OPENCV:FFMPEG:48] Format 0x48323234 -> yuv422p. [OPENCV:FFMPEG:48] Format 0x56323234 -> yuv440p. [OPENCV:FFMPEG:48] Format 0x50343434 -> yuv444p. [OPENCV:FFMPEG:48] Format 0x33434d49 -> unknown. [OPENCV:FFMPEG:48] Format 0x30313050 -> p010le. [OPENCV:FFMPEG:40] VAAPI driver: Intel iHD driver for Intel(R) Gen Graphics - 20.3.0 (). [OPENCV:FFMPEG:40] Driver not found in known nonstandard list, using standard behaviour. 
[ INFO:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (272) hw_check_device FFMPEG: Using MFX video acceleration on device: Intel iHD driver for Intel(R) Gen Graphics - 20.3.0 () [ INFO:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (562) hw_create_device FFMPEG: Created video acceleration context (av_hwdevice_ctx_create) for vaapi on device 'default' [ INFO:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (579) hw_create_device FFMPEG: Can't bind vaapi video acceleration context to already created OpenCL context [OPENCV:FFMPEG:40] Initialize MFX session: API version is 1.34, implementation version is 1.34 [ INFO:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (305) hw_create_derived_context FFMPEG: Created derived video acceleration context (av_hwdevice_ctx_create_derived) for qsv VideoCapture FFMPEG:MFX [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (618) hw_create_frames FFMPEG: avcodec_get_hw_frames_parameters() call failed: -2 [OPENCV:FFMPEG:48] Created surface 0. [OPENCV:FFMPEG:48] Direct mapping possible. [OPENCV:FFMPEG:48] Created surface 0x1. [OPENCV:FFMPEG:48] Created surface 0x2. [OPENCV:FFMPEG:48] Created surface 0x3. [OPENCV:FFMPEG:48] Created surface 0x4. [OPENCV:FFMPEG:48] Created surface 0x5. [OPENCV:FFMPEG:48] Created surface 0x6. [OPENCV:FFMPEG:48] Created surface 0x7. [OPENCV:FFMPEG:48] Created surface 0x8. [OPENCV:FFMPEG:48] Created surface 0x9. [OPENCV:FFMPEG:48] Created surface 0xa. [OPENCV:FFMPEG:48] Created surface 0xb. [OPENCV:FFMPEG:48] Created surface 0xc. [OPENCV:FFMPEG:48] Created surface 0xd. [OPENCV:FFMPEG:48] Created surface 0xe. [OPENCV:FFMPEG:48] Created surface 0xf. [OPENCV:FFMPEG:48] Created surface 0x10. [OPENCV:FFMPEG:48] Created surface 0x11. [OPENCV:FFMPEG:48] Created surface 0x12. [OPENCV:FFMPEG:48] Created surface 0x13. 
[OPENCV:FFMPEG:48] Created surface 0x14. [OPENCV:FFMPEG:48] Created surface 0x15. [OPENCV:FFMPEG:48] Created surface 0x16. [OPENCV:FFMPEG:48] Created surface 0x17. [OPENCV:FFMPEG:48] Created surface 0x18. [OPENCV:FFMPEG:48] Created surface 0x19. [OPENCV:FFMPEG:48] Created surface 0x1a. [OPENCV:FFMPEG:48] Created surface 0x1b. [OPENCV:FFMPEG:48] Created surface 0x1c. [OPENCV:FFMPEG:48] Created surface 0x1d. [OPENCV:FFMPEG:48] Created surface 0x1e. [OPENCV:FFMPEG:48] Created surface 0x1f. [OPENCV:FFMPEG:48] Format qsv chosen by get_format(). [OPENCV:FFMPEG:40] Decoder: output is video memory surface [DEBUG:0] global /home/alalek/projects/opencv/dev/modules/videoio/src/cap_ffmpeg_hw.hpp (618) hw_create_frames FFMPEG: avcodec_get_hw_frames_parameters() call failed: -2 [OPENCV:FFMPEG:48] Created surface 0x20. [OPENCV:FFMPEG:48] Direct mapping possible. [OPENCV:FFMPEG:48] Created surface 0x21. [OPENCV:FFMPEG:48] Created surface 0x22. [OPENCV:FFMPEG:48] Created surface 0x23. [OPENCV:FFMPEG:48] Created surface 0x24. [OPENCV:FFMPEG:48] Created surface 0x25. [OPENCV:FFMPEG:48] Created surface 0x26. [OPENCV:FFMPEG:48] Created surface 0x27. [OPENCV:FFMPEG:48] Created surface 0x28. [OPENCV:FFMPEG:48] Created surface 0x29. [OPENCV:FFMPEG:48] Created surface 0x2a. [OPENCV:FFMPEG:48] Created surface 0x2b. [OPENCV:FFMPEG:48] Created surface 0x2c. [OPENCV:FFMPEG:48] Created surface 0x2d. [OPENCV:FFMPEG:48] Created surface 0x2e. [OPENCV:FFMPEG:48] Created surface 0x2f. [OPENCV:FFMPEG:48] Created surface 0x30. [OPENCV:FFMPEG:48] Created surface 0x31. [OPENCV:FFMPEG:48] Created surface 0x32. [OPENCV:FFMPEG:48] Created surface 0x33. [OPENCV:FFMPEG:48] Created surface 0x34. [OPENCV:FFMPEG:48] Created surface 0x35. [OPENCV:FFMPEG:48] Created surface 0x36. [OPENCV:FFMPEG:48] Created surface 0x37. [OPENCV:FFMPEG:48] Created surface 0x38. [OPENCV:FFMPEG:48] Created surface 0x39. [OPENCV:FFMPEG:48] Created surface 0x3a. [OPENCV:FFMPEG:48] Created surface 0x3b. 
[OPENCV:FFMPEG:48] Created surface 0x3c. [OPENCV:FFMPEG:48] Created surface 0x3d. [OPENCV:FFMPEG:48] Created surface 0x3e. [OPENCV:FFMPEG:48] Created surface 0x3f. [OPENCV:FFMPEG:48] Format qsv chosen by get_format(). [OPENCV:FFMPEG:40] Decoder: output is video memory surface [OPENCV:FFMPEG:16] Error initializing the MFX video decoder: unsupported (-3) /home/alalek/projects/opencv/dev/modules/videoio/test/test_video_io.cpp:743: Failure Value of: hw_reader.read(frame) Actual: false Expected: true Google Test trace: /home/alalek/projects/opencv/dev/modules/videoio/test/test_video_io.cpp:732: frame=0 /home/alalek/projects/opencv/dev/modules/videoio/test/test_video_io.cpp:745: Failure Value of: frame.empty() Actual: true Expected: false Google Test trace: /home/alalek/projects/opencv/dev/modules/videoio/test/test_video_io.cpp:732: frame=0 [OPENCV:FFMPEG:40] Statistics: 31057 bytes read, 0 seeks [ FAILED ] videoio/videocapture_acceleration.read/84, where GetParam() = (sample_322x242_15frames.yuv420p.libvpx-vp9.mp4, FFMPEG, MFX, false) (6 ms) ```
vainfo ``` $ vainfo libva info: VA-API version 1.9.0 libva info: Trying to open /usr/lib64/dri/iHD_drv_video.so libva info: Found init function __vaDriverInit_1_9 libva info: va_openDriver() returns 0 vainfo: VA-API version: 1.9 (libva 2.9.0) vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 20.3.0 () vainfo: Supported profile and entrypoints VAProfileNone : VAEntrypointVideoProc VAProfileNone : VAEntrypointStats VAProfileMPEG2Simple : VAEntrypointVLD VAProfileMPEG2Simple : VAEntrypointEncSlice VAProfileMPEG2Main : VAEntrypointVLD VAProfileMPEG2Main : VAEntrypointEncSlice VAProfileH264Main : VAEntrypointVLD VAProfileH264Main : VAEntrypointEncSlice VAProfileH264Main : VAEntrypointFEI VAProfileH264Main : VAEntrypointEncSliceLP VAProfileH264High : VAEntrypointVLD VAProfileH264High : VAEntrypointEncSlice VAProfileH264High : VAEntrypointFEI VAProfileH264High : VAEntrypointEncSliceLP VAProfileVC1Simple : VAEntrypointVLD VAProfileVC1Main : VAEntrypointVLD VAProfileVC1Advanced : VAEntrypointVLD VAProfileJPEGBaseline : VAEntrypointVLD VAProfileJPEGBaseline : VAEntrypointEncPicture VAProfileH264ConstrainedBaseline: VAEntrypointVLD VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice VAProfileH264ConstrainedBaseline: VAEntrypointFEI VAProfileH264ConstrainedBaseline: VAEntrypointEncSliceLP VAProfileVP8Version0_3 : VAEntrypointVLD VAProfileHEVCMain : VAEntrypointVLD VAProfileHEVCMain : VAEntrypointEncSlice VAProfileHEVCMain : VAEntrypointFEI ```
Looks like there is no VP9 entry in vainfo: - Fedora 33 - `intel-media-driver.x86_64 20.3.0-2.fc33 @rpmfusion-nonfree-updates` - CPU: i5-6600 (Skylake) Similar problem: https://bugs.launchpad.net/ubuntu/+source/intel-vaapi-driver/+bug/1591714 @mikhail-nikolskiy Can we handle/workaround this somehow?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/src/cap_ffmpeg_hw.hpp\n**Change Type:** modified\n**Context:** PR #19755: cv::UMat output/input in VideoCapture/VideoWriter (data stays in GPU memory)\n**Review Line:** 116\n\n**Code Changes:**\n```diff\n@@ -80,7 +113,7 @@ std::string getDecoderConfiguration(VideoAccelerationType va_type, AVDictionary\n case VIDEO_ACCELERATION_ANY: return \"vaapi.iHD\";\n case VIDEO_ACCELERATION_D3D11: return \"\";\n case VIDEO_ACCELERATION_VAAPI: return \"vaapi.iHD\";\n- case VIDEO_ACCELERATION_MFX: return \"\";\n+ case VIDEO_ACCELERATION_MFX: return \"qsv.iHD\";\n }\n return \"\";\n #endif\n@@ -125,7 +158,6 @@ std::string getEncoderConfiguration(VideoAccelerationType va_type, AVDictionary\n #endif\n```", + "output": "@mikhail-nikolskiy Thank you for the information! Looks like we need to add the first frame grabbing as part of `.open()` implementation to handle/workaround this case (I believe we can implement this later)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/videoio/cmake/detect_ffmpeg.cmake\n**Change Type:** modified\n**Context:** PR #19755: cv::UMat output/input in VideoCapture/VideoWriter (data stays in GPU memory)\n**Review Line:** 115\n\n**Code Changes:**\n```diff\n+ if(OPENCL_INCLUDE_DIRS)\n+ set(__opencl_dirs \"${OPENCL_INCLUDE_DIRS}\")\n+ elseif(OPENCL_INCLUDE_DIR)\n+ set(__opencl_dirs \"${OPENCL_INCLUDE_DIR}\")\n+ else()\n+ set(__opencl_dirs \"${OpenCV_SOURCE_DIR}/3rdparty/include/opencl/1.2\")\n+ endif()\n+ # extra dependencies for buildin code (OpenCL dir is required for extensions like cl_d3d11.h)\n+ # buildin HAVE_OPENCL is already defined through cvconfig.h\n+ list(APPEND __builtin_include_dirs \"${__opencl_dirs}\")\n+\n```", + "output": "Thanks, this problem is caused by `-DBUILD_SHARED_LIBS=OFF` option." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18466: GAPI: SIMD optimization for AddWeighted kernel. - SIMD optimization for AddWeighted kernel via universal intrinsics. ``` force_build...\n\n**Code Changes:**\n```diff\n@@ -97,6 +97,130 @@ static inline DST divr(SRC1 x, SRC2 y, float scale=1)\n // Fluid kernels: addWeighted\n //\n //---------------------------\n+#if CV_SSE2\n+CV_ALWAYS_INLINE v_float32 v_load_f32(const ushort* in)\n+{\n+ return v_cvt_f32(v_reinterpret_as_s32(vx_load_expand(in)));\n+}\n+\n```", + "output": "@alalek This snippet is on internal review. So it's refactoring now. I'll let you know when it becomes ready for your final review. Sorry for inconvenient." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18466: GAPI: SIMD optimization for AddWeighted kernel. - SIMD optimization for AddWeighted kernel via universal intrinsics. ``` force_build...\n\n**Code Changes:**\n```diff\n@@ -97,6 +97,130 @@ static inline DST divr(SRC1 x, SRC2 y, float scale=1)\n // Fluid kernels: addWeighted\n //\n //---------------------------\n+#if CV_SSE2\n+CV_ALWAYS_INLINE v_float32 v_load_f32(const ushort* in)\n+{\n+ return v_cvt_f32(v_reinterpret_as_s32(vx_load_expand(in)));\n+}\n+\n```", + "output": "@alalek Yes it is very strange for me too. Orest tested this snippet on VPU with enabled NEON, but don't get speedup. There is no branch with enabled NEON but I can create if it's necessary." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18466: GAPI: SIMD optimization for AddWeighted kernel. - SIMD optimization for AddWeighted kernel via universal intrinsics. ``` force_build...\n\n**Code Changes:**\n```diff\n@@ -97,6 +97,130 @@ static inline DST divr(SRC1 x, SRC2 y, float scale=1)\n // Fluid kernels: addWeighted\n //\n //---------------------------\n+#if CV_SSE2\n+CV_ALWAYS_INLINE v_float32 v_load_f32(const ushort* in)\n+{\n+ return v_cvt_f32(v_reinterpret_as_s32(vx_load_expand(in)));\n+}\n+\n```", + "output": "@alalek @terfendail could you please advice another way to disable this snippet for the NEON case?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18466: GAPI: SIMD optimization for AddWeighted kernel. - SIMD optimization for AddWeighted kernel via universal intrinsics. ``` force_build...\n\n**Code Changes:**\n```diff\n@@ -97,6 +97,130 @@ static inline DST divr(SRC1 x, SRC2 y, float scale=1)\n // Fluid kernels: addWeighted\n //\n //---------------------------\n+#if CV_SSE2\n+CV_ALWAYS_INLINE v_float32 v_load_f32(const ushort* in)\n+{\n+ return v_cvt_f32(v_reinterpret_as_s32(vx_load_expand(in)));\n+}\n+\n```", + "output": "Can't reproduce performance change observations on TX1 AARCH64 platform (Ubuntu 16.04 + GCC 5.4.0): 'PATCH\" as commit 61e82884944c1e6d05c6bf30fa8c9ace483d35d2 \"NO_SIMD\" patch looks like this: ```.diff -#if CV_SIMD +#if 0 x = addw_simd(in1, in2, out, _alpha, _beta, _gamma, length); #endif ``` Results for affected \"1080\" tests: |Name of Test|NO_SIMD|PATCH|(x-factor)| |---|:-:|:-:|:-:| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 8UC1, 0, { gapi.kernel_package })|13.698|9.106|1.50| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 8UC1, 5, { gapi.kernel_package })|3.504|3.490|1.00| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 8UC1, -1, { gapi.kernel_package })|13.715|9.091|1.51| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 16UC1, 0, { gapi.kernel_package })|15.818|5.398|2.93| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 16UC1, 5, { gapi.kernel_package })|3.726|3.744|1.00| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 16UC1, -1, { gapi.kernel_package })|16.571|5.423|3.06| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 
16SC1, 0, { gapi.kernel_package })|15.776|5.380|2.93| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 16SC1, 5, { gapi.kernel_package })|3.739|3.741|1.00| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 16SC1, -1, { gapi.kernel_package })|18.858|5.429|3.47| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 8UC3, 0, { gapi.kernel_package })|40.477|26.670|1.52| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 8UC3, 5, { gapi.kernel_package })|9.827|9.823|1.00| |TestPerformance::AddWeightedPerfTestFluid/AddWeightedPerfTest::(compare_f, 1920x1080, 8UC3, -1, { gapi.kernel_package })|40.471|26.656|1.52| |TestPerformance::AddWeightedPerfTestFluid_short/AddWeightedPerfTest::(compare_f, 1920x1080, 16UC1, -1, { gapi.kernel_package })|16.534|5.439|3.04| |TestPerformance::AddWeightedPerfTestFluid_short/AddWeightedPerfTest::(compare_f, 1920x1080, 16SC1, -1, { gapi.kernel_package })|18.934|5.442|3.48| TX1 CPU is AARCH64-based ARM Cortex-A57 (quad-core) @ 1.73GHz TX1 spec is here: https://elinux.org/Jetson_TX1 --- @terfendail Please try to collect results for this patch on the target platform." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18466: GAPI: SIMD optimization for AddWeighted kernel. - SIMD optimization for AddWeighted kernel via universal intrinsics. 
``` force_build...\n\n**Code Changes:**\n```diff\n@@ -97,6 +97,130 @@ static inline DST divr(SRC1 x, SRC2 y, float scale=1)\n // Fluid kernels: addWeighted\n //\n //---------------------------\n+#if CV_SSE2\n+CV_ALWAYS_INLINE v_float32 v_load_f32(const ushort* in)\n+{\n+ return v_cvt_f32(v_reinterpret_as_s32(vx_load_expand(in)));\n+}\n+\n```", + "output": "I think it would be better and easier to add static assert. @alalek could you please comment?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18466: GAPI: SIMD optimization for AddWeighted kernel. - SIMD optimization for AddWeighted kernel via universal intrinsics. ``` force_build...\n\n**Code Changes:**\n```diff\n@@ -97,6 +97,130 @@ static inline DST divr(SRC1 x, SRC2 y, float scale=1)\n // Fluid kernels: addWeighted\n //\n //---------------------------\n+#if CV_SSE2\n+CV_ALWAYS_INLINE v_float32 v_load_f32(const ushort* in)\n+{\n+ return v_cvt_f32(v_reinterpret_as_s32(vx_load_expand(in)));\n+}\n+\n```", + "output": "@alalek It seems to me that Vitaly is on vacation. So he can't collect results for this patch on the target platform. Could you please continue review without this step. Moreover, I have already provided the results for the target platform." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #19546: Added Steps support in DNN Slice layer\n\n**Code Changes:**\n```diff\n@@ -641,20 +641,11 @@ void ONNXImporter::handleNode(const opencv_onnx::NodeProto& node_proto_)\n int axis = 0;\n std::vector begin;\n std::vector end;\n+ std::vector steps;\n int inp_size = node_proto.input_size();\n \n if (inp_size == 1)\n {\n- if (layerParams.has(\"steps\"))\n```", + "output": "@alalek, this case could be useful for opset9, which doesn't support step!=1, however, it means that initially onnx model could not be generated with step value != 1, hence, this check is useless. Just in case, I've checked the model from the [issue](https://github.com/opencv/opencv/issues/14672) (which relates to the PR, where this code block was added), there is no step parameter (in opset9 slice layer \"no step parameter\" => step=1). Removed this code block." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #19546: Added Steps support in DNN Slice layer\n\n**Code Changes:**\n```diff\n@@ -627,6 +627,26 @@ TEST_P(Test_ONNX_layers, Slice)\n #endif\n }\n \n+TEST_P(Test_ONNX_layers, Slice_Steps_2DInput)\n+{\n+ testONNXModels(\"slice_opset_11_steps_2d\");\n+}\n+\n+TEST_P(Test_ONNX_layers, Slice_Steps_3DInput)\n```", + "output": "@alalek, corrected." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/slice_layer.cpp\n**Change Type:** modified\n**Context:** PR #19546: Added Steps support in DNN Slice layer\n**Review Line:** 169\n\n**Code Changes:**\n```diff\n {\n if (shapesInitialized || inpShape[j] > 0)\n outputs[i][j] = normalize_axis_range(sliceRanges[i][j], inpShape[j]).size();\n+\n+ if (!sliceSteps.empty() && (i < sliceSteps.size()) && (j < sliceSteps[i].size()) && (sliceSteps[i][j] > 1))\n+ outputs[i][j] = (outputs[i][j] + sliceSteps[i][j] - 1) / sliceSteps[i][j];\n }\n }\n }\n@@ -181,6 +201,7 @@ class SliceLayerImpl : public SliceLayer\n const MatSize& inpShape = inputs[0].size;\n```", + "output": "@alalek, checks were added." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/eltwise_layer.cpp\n**Change Type:** modified\n**Context:** PR #19477: Added TF add behaviour alignment\n**Review Line:** 692\n\n**Code Changes:**\n```diff\n+ {\n+ outIdx[2] = x;\n+ inputs[i].at(outIdx.data()) = tmpInput.at(idx.data());\n+ }\n+ }\n+ }\n+ inputs[i] = inputs[i].reshape(0, outShape);\n+ }\n+ }\n+ }\n+\n```", + "output": "@dkurt, this block represents tensor expansion. The initial problem problem is incorrect processing of the case, when, for example, there are two tensors: `a` (shape: [1,1,1,n] ) and `b` (shape: [1,,,n] ). [ `tf.math.add`](https://www.tensorflow.org/api_docs/python/tf/math/add) returns the result `c` (shape: [1,,,n] ), whereas OpenCV compares `1` vs ` ` and `1` vs` `, throws exception in ``getMemoryShapes()`` and that's all. The solution for such cases (for example, for tensor shape ``1x10x1x1 (NxCxHxW)`` +``1x10x5x5 (NxCxHxW)``) was: 1. choose the correct output resultant shape: ``1x10x5x5 (NxCxHxW)`` 2. 
expand ``1x10x1x1 (NxCxHxW)`` for further correct summation with ``1x10x5x5 (NxCxHxW)``." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/eltwise_layer.cpp\n**Change Type:** modified\n**Context:** PR #19477: Added TF add behaviour alignment\n**Review Line:** 213\n\n**Code Changes:**\n```diff\n outputChannels = numChannels;\n \n outputs.assign(1, inputs[0]);\n outputs[0][1] = numChannels;\n+\n+ if (dims > 2)\n+ {\n+ size_t vecIdx = 0;\n+ bool isVecFound = false;\n+ for (size_t i = 0; i < inputs.size(); i++)\n+ {\n```", + "output": "@alalek, could you, please, specify which exactly check was lost? If ``dims <= 2``, then the loop will be a no-op." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/eltwise_layer.cpp\n**Change Type:** modified\n**Context:** PR #19477: Added TF add behaviour alignment\n**Review Line:** 689\n\n**Code Changes:**\n```diff\n+ {\n+ outIdx[1] = idx[1] = k;\n+ for (size_t x = 0; x < xSize; x++)\n+ {\n+ outIdx[2] = x;\n+ inputs[i].at(outIdx.data()) = tmpInput.at(idx.data());\n+ }\n+ }\n+ }\n+ inputs[i] = inputs[i].reshape(0, outShape);\n+ }\n```", + "output": "@alalek, there are two options to resolve the issue: 1. support Mat (shape: ``[1, m, k, n]`` ) + Vec (shape: ``[1, 1, 1, n]``) operation in multiple implementations (ex.: OCL), which is quite complex and error-prone 2. expand Vec (shape: ``[1, 1, 1, n]``) to Mat to support a new case using an existing implementation at the expense of extra memory usage" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/eltwise_layer.cpp\n**Change Type:** modified\n**Context:** PR #19477: Added TF add behaviour alignment\n**Review Line:** 223\n\n**Code Changes:**\n```diff\n+ {\n+ bool allOnes = isAllOnes(inputs[i], 2, dims);\n+ if (!allOnes && !isVecFound)\n+ {\n+ vecIdx = i;\n+ isVecFound = true;\n+ }\n+\n+ if (!allOnes && i != vecIdx)\n+ {\n+ for (size_t j = 2; j < dims; j++)\n```", + "output": "@alalek, it seems that ''explicit input parameter'' is an extra option. This PR adds support Mat (shape: [1, m, k, n] ) + Vec (shape: [1, 1, 1, n]). This support was implemented with Vec (shape: [1, 1, 1, n]) to Mat expansion to avoid possible reconstruction of ``eltwise_layer.cpp`` summation core. Thus, the current approach should not lead to further corruptions: >We need to ensure that \"layers fusing\" is still properly handled (fused code may not call the \".forward()\" method)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/eltwise_layer.cpp\n**Change Type:** modified\n**Context:** PR #19477: Added TF add behaviour alignment\n**Review Line:** 223\n\n**Code Changes:**\n```diff\n+ {\n+ bool allOnes = isAllOnes(inputs[i], 2, dims);\n+ if (!allOnes && !isVecFound)\n+ {\n+ vecIdx = i;\n+ isVecFound = true;\n+ }\n+\n+ if (!allOnes && i != vecIdx)\n+ {\n+ for (size_t j = 2; j < dims; j++)\n```", + "output": "@alalek, could you, please, clarify the following: if we don't change the computation core (``forward(...)`` and ``run(...)`` functions were not changed) and modify only one of the inputs' shape, expanding it in ``getMemoryShapes(...)``, we should not impact on the chosen backend. Thus, do we really need ``supportBackend()`` callback corrections here?" 
}, { "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/dnn/src/layers/eltwise_layer.cpp\n**Change Type:** modified\n**Context:** PR #19477: Added TF add behaviour alignment\n**Review Line:** 223\n\n**Code Changes:**\n```diff\n+ {\n+ bool allOnes = isAllOnes(inputs[i], 2, dims);\n+ if (!allOnes && !isVecFound)\n+ {\n+ vecIdx = i;\n+ isVecFound = true;\n+ }\n+\n+ if (!allOnes && i != vecIdx)\n+ {\n+ for (size_t j = 2; j < dims; j++)\n```", "output": "@alalek, thank you for the review, added corrections for ``supportBackend()``." }, { "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/dnn/src/layers/eltwise_layer.cpp\n**Change Type:** modified\n**Context:** PR #19477: Added TF add behaviour alignment\n**Review Line:** 658\n\n**Code Changes:**\n```diff\n@@ -602,6 +654,47 @@ class EltwiseLayerImpl CV_FINAL : public EltwiseLayer\n \n CV_Assert(outputs.size() == 1);\n const int nstripes = getNumThreads();\n+\n+ if (channelsModeInput == ELTWISE_CHANNNELS_SAME && inputs[0].dims > 2)\n+ {\n+ for (size_t i = 0; i < inputs.size(); i++)\n+ {\n+ MatShape inpShape = shape(inputs[i].size);\n+ bool allOnes = isAllOnes(inpShape, 2, inputs[i].dims);\n```", "output": "@alalek, ``forward_ocl()`` was corrected." }, { "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/misc/objc/common/CVObjcUtil.h\n**Change Type:** modified\n**Context:** PR #17818: Documentation fixes/improvements\n\n**Code Changes:**\n```diff\n@@ -6,12 +6,6 @@\n \n #pragma once\n \n-typedef union { double d; int64_t l; } V64;\n-typedef union { float f; int32_t i; } V32;\n-\n-#define DOUBLE_TO_BITS(x) ((V64){ .d = x }).l\n-#define FLOAT_TO_BITS(x) ((V32){ .f = x }).i\n-\n```", + "output": "@vpisarev thanks for pointing out about Cv32suf and Cv64suf - I have modified the PR to use them and removed the redundant V64 and V32 unions. I also moved the DOUBLE_TO_BITS and FLOAT_TO_BITS macros out to just the files that need them." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/js/generator/embindgen.py\n**Change Type:** modified\n**Context:** PR #19284: Add QRCodeDetector to JavaScript Build\n\n**Code Changes:**\n```diff\n@@ -119,6 +119,7 @@ def makeWhiteList(module_list):\n 'InputOutputArray': 'cv::Mat&',\n 'InputArrayOfArrays': 'const std::vector&',\n 'OutputArrayOfArrays': 'std::vector&',\n+ 'string': 'std::string',\n 'String': 'std::string',\n 'const String&':'const std::string&'\n }\n@@ -462,8 +463,7 @@ def gen_function_binding_with_wrapper(self, func, class_info):\n ret_type = type_dict[ptr_type]\n```", + "output": "Thank you, i will remove the comment and fix the coding style. If you swap the order of the entry `'string': 'std::string'` (line 122) with `'String': 'std::string'` (line 123) a double replacement happens and the code doesn't work anymore. Should i add a comment there with a warning to not swap the order of the entries?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/js/generator/embindgen.py\n**Change Type:** modified\n**Context:** PR #19284: Add QRCodeDetector to JavaScript Build\n\n**Code Changes:**\n```diff\n@@ -119,6 +119,7 @@ def makeWhiteList(module_list):\n 'InputOutputArray': 'cv::Mat&',\n 'InputArrayOfArrays': 'const std::vector&',\n 'OutputArrayOfArrays': 'std::vector&',\n+ 'string': 'std::string',\n 'String': 'std::string',\n 'const String&':'const std::string&'\n }\n@@ -462,8 +463,7 @@ def gen_function_binding_with_wrapper(self, func, class_info):\n ret_type = type_dict[ptr_type]\n```", + "output": "Thank you for information! I will take a look on this behavior" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/infer/ie.hpp\n**Change Type:** modified\n**Context:** PR #18240: [G-API]: Adding reshape for CNN input.\n**Review Line:** 187\n\n**Code Changes:**\n```diff\n+ }\n+\n+ Params& cfgInputReshape(const std::unordered_set& layer_names) {\n+ desc.layer_names_to_reshape = layer_names;\n+ return *this;\n+ }\n+\n // BEGIN(G-API's network parametrization API)\n GBackend backend() const { return cv::gapi::ie::backend(); }\n std::string tag() const { return Net::tag(); }\n@@ -165,13 +203,13 @@ class Params {\n```", + "output": "@mpashchenkov what about this?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/ie/giebackend.cpp\n**Change Type:** modified\n**Context:** PR #18240: [G-API]: Adding reshape for CNN input.\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@\n #include \"backends/ie/giebackend/giewrapper.hpp\"\n \n #include \"api/gbackend_priv.hpp\" // FIXME: Make it part of Backend SDK!\n+#include \"logger.hpp\"\n \n #if INF_ENGINE_RELEASE < 2021010000\n #include \"ie_compound_blob.h\"\n@@ -224,6 +225,9 @@ struct IEUnit {\n // but ExecutableNetwork returns ConstInputsDataMap/ConstOutputsDataMap\n```", + "output": "@TolyaTalamanov, main question about this const_cast for net reshape, other const_cast i can remove." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/include/opencv2/calib3d.hpp\n**Change Type:** modified\n**Context:** PR #19539: Added CALIB_FIX_FOCAL_LENGTH to fisheye calibration\n**Review Line:** 3777\n\n**Code Changes:**\n```diff\n CALIB_FIX_K4 = 1 << 7,\n CALIB_FIX_INTRINSIC = 1 << 8,\n CALIB_FIX_PRINCIPAL_POINT = 1 << 9,\n- CALIB_ZERO_DISPARITY = 1 << 10\n+ CALIB_ZERO_DISPARITY = 1 << 10,\n+ CALIB_FIX_FOCAL_LENGTH = 1 << 11\n };\n \n /** @brief Projects points using fisheye model\n@@ -3927,6 +3928,8 @@ namespace fisheye\n are set to zeros and stay zero.\n```", + "output": "@alalek It's already on place: https://github.com/opencv/opencv/blob/master/modules/calib3d/misc/java/test/Calib3dTest.java#L634." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/ccl_bolelli_forest.inc.hpp\n**Change Type:** added\n**Context:** PR #19631: Sota ccl\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1746 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// 2021 Federico Bolelli \n+// 2021 Stefano Allegretti \n+// 2021 Costantino Grana \n+//\n+// This file has been automatically generated using GRAPHGEN (https://github.com/prittt/GRAPHGEN)\n```", + "output": "Please use short license header: https://github.com/opencv/opencv/wiki/Coding_Style_Guide#file-structure ``` // This file is part of OpenCV project. // It is subject to the license terms in the LICENSE file found in the top-level directory // of this distribution and at http://opencv.org/license.html. // // 2021 Federico Bolelli // 2021 Stefano Allegretti // 2021 Costantino Grana ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/ccl_bolelli_forest.inc.hpp\n**Change Type:** added\n**Context:** PR #19631: Sota ccl\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1746 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// 2021 Federico Bolelli \n+// 2021 Stefano Allegretti \n+// 2021 Costantino Grana \n+//\n+// This file has been automatically generated using GRAPHGEN (https://github.com/prittt/GRAPHGEN)\n```", + "output": "@alalek, sorry I've missed it. 
Do you want me to replace also the [connectedcomponents.cpp](https://github.com/prittt/opencv/blob/sota-ccl/modules/imgproc/src/connectedcomponents.cpp) license header or just the ones in the hpp files?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/misc/java/test/MatTest.java\n**Change Type:** modified\n**Context:** PR #19503: Fix bug in put/get of non-continuous Mat\n**Review Line:** 471\n\n**Code Changes:**\n```diff\n+ Mat matContinuous = matNonContinuous.clone();\n+ short[] outNonContinuous = new short[(int)matNonContinuous.total()];\n+ matNonContinuous.get(new int[] { 0, 0, 0 }, outNonContinuous);\n+ short[] outContinuous = new short[(int)matNonContinuous.total()];\n+ matContinuous.get(new int[] { 0, 0, 0 }, outContinuous);\n+ assertArrayEquals(outNonContinuous, outContinuous);\n+ Mat subMat2 = m2.submat(new Range[]{new Range(1,4), new Range(1,5), new Range(0,8)});\n+ Mat subMatClone2 = subMat2.clone();\n+ short[] outNonContinuous2 = new short[(int)subMat2.total()];\n+ subMat2.get(new int[] { 0, 1, 1 }, outNonContinuous2);\n+ short[] outContinuous2 = new short[(int)subMat2.total()];\n```", + "output": "@asmorkalov not sure what you mean. The bug this PR is fixing is in the output from the `get` function which is an array. So we need to compare the arrays not the Mats." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/java/test/pure_test/src/org/opencv/test/OpenCVTestCase.java\n**Change Type:** modified\n**Context:** PR #19503: Fix bug in put/get of non-continuous Mat\n\n**Code Changes:**\n```diff\n@@ -305,19 +305,22 @@ public static void assertListEquals(List list1, List li\n }\n \n public static void assertArrayEquals(E[] ar1, E[] ar2, double epsilon) {\n- if (ar1.length != ar2.length) {\n- fail(\"Arrays have different sizes.\");\n- }\n+ assertEquals(ar1.length, ar2.length);\n \n for (int i = 0; i < ar1.length; i++)\n```", + "output": "@asmorkalov this was a copy/paste from another `assertArrayEquals` function - I'll change all of them to be like this" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/elementwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #19620: Fixed several issues found by static analysis\n\n**Code Changes:**\n```diff\n@@ -1408,29 +1408,24 @@ struct ExpFunctor : public BaseFunctor\n \n ExpFunctor(float base_ = -1.f, float scale_ = 1.f, float shift_ = 0.f)\n : base(base_), scale(scale_), shift(shift_)\n- {\n- CV_Check(base, base == -1.f || base > 0.f, \"Unsupported 'base' value\");\n- }\n-\n- bool supportBackend(int backendId, int targetId)\n- {\n```", + "output": "I'm not sure, for some reason functors were designed to implement layer's `finalize` method and all data members of this class are public, so they can be changed externally and then _finalized_ to calculate norm parameters. **cc** @SamFC10 , @l-bat" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/elementwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #19620: Fixed several issues found by static analysis\n\n**Code Changes:**\n```diff\n@@ -1408,29 +1408,24 @@ struct ExpFunctor : public BaseFunctor\n \n ExpFunctor(float base_ = -1.f, float scale_ = 1.f, float shift_ = 0.f)\n : base(base_), scale(scale_), shift(shift_)\n- {\n- CV_Check(base, base == -1.f || base > 0.f, \"Unsupported 'base' value\");\n- }\n-\n- bool supportBackend(int backendId, int targetId)\n- {\n```", + "output": "My initial thought was to initialize the norm fields here and not have `finalize()` at all like @alalek suggests. I added it to `finalize()` instead as the PowerLayer above has a similar `finalize()` method and thought initializing norm fields in finalize() instead of the constructor \"shouldn't\" really affect the layer's implementation as finalize() is called before forward() anyway. Now I think its better to shift finalize() body here and remove it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_darknet_importer.cpp\n**Change Type:** modified\n**Context:** PR #19474: TanH darknet and test\n**Review Line:** 707\n\n**Code Changes:**\n```diff\n testDarknetLayer(\"mish\", true);\n }\n \n+TEST_P(Test_Darknet_layers, tanh)\n+{\n+ testDarknetLayer(\"tanh\");\n+}\n+\n TEST_P(Test_Darknet_layers, avgpool_softmax)\n {\n testDarknetLayer(\"avgpool_softmax\");\n```", + "output": "@aryanshomray Could you please update test data to have non-trivial different values in `testdata/dnn/darknet/tanh_out.npy` file? Current all 8 values are equal to `1`. 
--- BTW, GitHub shows \"Empty file.\" in opencv_extra PR, but it is not correct (GitHub bug)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/calibinit.cpp\n**Change Type:** modified\n**Context:** PR #19498: Properly handle empty centers in findCirclesGrid\n\n**Code Changes:**\n```diff\n@@ -2274,7 +2274,7 @@ bool findCirclesGrid2(InputArray _image, Size patternSize,\n }\n }\n \n- if (!H.empty()) // undone rectification\n+ if (!centers.empty() && !H.empty()) // undone rectification\n {\n Mat orgPointsMat;\n transform(centers, orgPointsMat, H.inv());\n```", + "output": "Hi @alalek, Yep #19079 introduced the issue. Since the rectification code was removed from the `isFound` conditional in the main attempts for loop, we need to add back in that check before attempting to rectify the centers. The error occurs when the attempts loop reaches its last attempt. If no centers are found on the last attempt, but centers were found on one of the previous attempts, `H` will be valid but `centers` will be empty. The final `!H.empty()` conditional will then attempt to transform a empty container of centers which will trigger the following assertion: ``` opencv/modules/core/src/matmul.dispatch.cpp:439: error: (-215:Assertion failed) scn == m.cols || scn + 1 == m.cols in function 'transform' ``` I attempted to preserve the original function interface, and figured checking if the result was valid before rectifying the centers was the best way to do that." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_goodfeaturetotrack.cpp\n**Change Type:** modified\n**Context:** PR #19392: goodFeaturesToTrack returns also corner value\n\n**Code Changes:**\n```diff\n@@ -88,14 +88,13 @@ test_cornerEigenValsVecs( const Mat& src, Mat& eigenv, int block_size,\n cvtest::filter2D( src, dy2, ftype, kernel*kernel_scale, anchor, 0, borderType,borderValue );\n \n double denom = (1 << (aperture_size-1))*block_size;\n- denom = denom * denom;\n \n if( _aperture_size < 0 )\n- denom *= 4;\n+ denom *= 2.;\n if(type != ftype )\n```", + "output": "@amirtu Please comment changes in reference code." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #19392: goodFeaturesToTrack returns also corner value\n\n**Code Changes:**\n```diff\n@@ -1999,6 +1999,38 @@ CV_EXPORTS_W void goodFeaturesToTrack( InputArray image, OutputArray corners,\n InputArray mask, int blockSize,\n int gradientSize, bool useHarrisDetector = false,\n double k = 0.04 );\n+\n+/** @brief Same as above, but returns also quality measure of the detected corners.\n+\n+@param image Input 8-bit or floating-point 32-bit, single-channel image.\n+@param corners Output vector of detected corners.\n+@param maxCorners Maximum number of corners to return. If there are more corners than are found,\n```", + "output": "I have missed this, thanks for notice." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_goodfeaturetotrack.cpp\n**Change Type:** modified\n**Context:** PR #19392: goodFeaturesToTrack returns also corner value\n\n**Code Changes:**\n```diff\n@@ -88,14 +88,13 @@ test_cornerEigenValsVecs( const Mat& src, Mat& eigenv, int block_size,\n cvtest::filter2D( src, dy2, ftype, kernel*kernel_scale, anchor, 0, borderType,borderValue );\n \n double denom = (1 << (aperture_size-1))*block_size;\n- denom = denom * denom;\n \n if( _aperture_size < 0 )\n- denom *= 4;\n+ denom *= 2.;\n if(type != ftype )\n```", + "output": "@alalek @asmorkalov Sorry for delayed answer, it has taken me some time to understand the context. Here are approximations of partial derivatives being computed with Sobel operator, the scaling is needed to preserve the energy after this filter: power of two due to averaging and differentiation parts of the filter, block size is considered in advance due to windowing of the image for every point, CV_32FC1 is (probably) normalized, CV_8U is not. First derivatives in both horizontal and vertical direction then should be scaled, as can be seen in cornerEigenValsVecs() (/imgproc/.../corner.cpp). Previously this test reference function was relying only on the relative order of the corner metrics, so scaling was not an issue." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_neon.hpp\n**Change Type:** modified\n**Context:** PR #19486: [hal][neon] Optimize the v_dotprod_fast intrinsics for aarch64.\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,22 @@ CV_CPU_OPTIMIZATION_HAL_NAMESPACE_BEGIN\n #define CV_SIMD128_64F 0\n #endif\n \n+// The following macro checks if the code is being compiled for the\n+// AArch64 execution state of Armv8, to enable the 128-bit\n+// intrinsics. The macro `__ARM_64BIT_STATE` is the one recommended by\n+// the Arm C Language Extension (ACLE) specifications [1] to check the\n+// availability of 128-bit intrinsics, and it is supporrted by clang\n+// and gcc. The macro `_M_ARM64` is the equivalent one for Microsoft\n```", + "output": "Hello @alalek , @tomoaki0705 - thank you for the review! My initial downstream code was using `__aarch64__`, but than I thought to use `CV_SIMD128_64F` to catch both clang and gcc, and also visual studio: ``` if defined(__aarch64__) || defined(_M_ARM64) #define CV_SIMD128_64F 1 #else #define CV_SIMD128_64F 0 #endif ``` `_M_ARM64` is equivalent to `__aarch64__` when targeting Windows. I agree the name `CV_SIMD128_64F` is misleading in this case. We are not conditionally compiling for checking if there is support for `double` types, we are checking if we are in `aarch64` execution mode on `armv8`. I would like to capture as many compiler as possible with one flag, so I would like to propose the following macro to be used for the code in this patch. ``` if defined(__aarch64__) || defined(_M_ARM64) #define CV_NEON_ADVSIMD 1 #else #define CV_NEON_ADVSIMD 0 #endif ``` The name `Advanced SIMD` is the name of NEON on Armv8: https://developer.arm.com/documentation/101028/0012/13--Advanced-SIMD--Neon--intrinsics?lang=en Would that be OK?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_neon.hpp\n**Change Type:** modified\n**Context:** PR #19486: [hal][neon] Optimize the v_dotprod_fast intrinsics for aarch64.\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,22 @@ CV_CPU_OPTIMIZATION_HAL_NAMESPACE_BEGIN\n #define CV_SIMD128_64F 0\n #endif\n \n+// The following macro checks if the code is being compiled for the\n+// AArch64 execution state of Armv8, to enable the 128-bit\n+// intrinsics. The macro `__ARM_64BIT_STATE` is the one recommended by\n+// the Arm C Language Extension (ACLE) specifications [1] to check the\n+// availability of 128-bit intrinsics, and it is supporrted by clang\n+// and gcc. The macro `_M_ARM64` is the equivalent one for Microsoft\n```", + "output": "Sure - thanks. BTW, I didn't mean to rename the old one, I meant to add a new one. :) But leaving just one macro is easier and simpler! So I will leave it as it is. Thank you again! I'll push the other fixes in a minute." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_neon.hpp\n**Change Type:** modified\n**Context:** PR #19486: [hal][neon] Optimize the v_dotprod_fast intrinsics for aarch64.\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,22 @@ CV_CPU_OPTIMIZATION_HAL_NAMESPACE_BEGIN\n #define CV_SIMD128_64F 0\n #endif\n \n+// The following macro checks if the code is being compiled for the\n+// AArch64 execution state of Armv8, to enable the 128-bit\n+// intrinsics. The macro `__ARM_64BIT_STATE` is the one recommended by\n+// the Arm C Language Extension (ACLE) specifications [1] to check the\n+// availability of 128-bit intrinsics, and it is supporrted by clang\n+// and gcc. 
The macro `_M_ARM64` is the equivalent one for Microsoft\n```", + "output": "Well, please wait the comment from @alalek or someone else who can merge this. I'm not authorized for merging" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_neon.hpp\n**Change Type:** modified\n**Context:** PR #19486: [hal][neon] Optimize the v_dotprod_fast intrinsics for aarch64.\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,22 @@ CV_CPU_OPTIMIZATION_HAL_NAMESPACE_BEGIN\n #define CV_SIMD128_64F 0\n #endif\n \n+// The following macro checks if the code is being compiled for the\n+// AArch64 execution state of Armv8, to enable the 128-bit\n+// intrinsics. The macro `__ARM_64BIT_STATE` is the one recommended by\n+// the Arm C Language Extension (ACLE) specifications [1] to check the\n+// availability of 128-bit intrinsics, and it is supporrted by clang\n+// and gcc. The macro `_M_ARM64` is the equivalent one for Microsoft\n```", + "output": "Of course, @tomoaki0705 . Thank you again for your review. :)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_base.hpp\n**Change Type:** modified\n**Context:** PR #19439: Added Exif parsing for PNG Issue 16579\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@\n \n #include \"utils.hpp\"\n #include \"bitstrm.hpp\"\n+#include \"exif.hpp\"\n \n namespace cv\n {\n@@ -65,6 +66,7 @@ class BaseImageDecoder\n int height() const { return m_height; }\n```", + "output": "Thank you @alalek. My bad. I changed that code slightly to return an ExifEntry_t copy instead, as returning an exif reference that could be altered seemed poor design. 
The decoder should be the only class able to edit the ExifReader. The current PR contains squashed fix, and appears to be failing the build now only due to missing the png exif orientation test image files to execute the new test cases. The test files are in a PR on opencv_extra: https://github.com/opencv/opencv_extra/pull/843" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/highgui/include/opencv2/highgui.hpp\n**Change Type:** modified\n**Context:** PR #19411: pollKey implementation for w32 backend of highgui\n\n**Code Changes:**\n```diff\n@@ -347,23 +347,34 @@ The function waitKey waits for a key event infinitely (when \\f$\\texttt{delay}\\le\n milliseconds, when it is positive. Since the OS has a minimum time between switching threads, the\n function will not wait exactly delay ms, it will wait at least delay ms, depending on what else is\n running on your computer at that time. It returns the code of the pressed key or -1 if no key was\n-pressed before the specified time had elapsed.\n+pressed before the specified time had elapsed. To check for a key press but not wait for it, use\n+#pollKey.\n \n-@note\n-\n```", + "output": "I don't quite understand. remove Line 373 which is a blank line between `@note` and its body? or reformat paragraph to use 72-100 characters of width consistently?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/cuda.hpp\n**Change Type:** modified\n**Context:** PR #19259: Minimal implementation of GpuMatND\n\n**Code Changes:**\n```diff\n@@ -340,6 +340,201 @@ class CV_EXPORTS_W GpuMat\n Allocator* allocator;\n };\n \n+struct CV_EXPORTS_W GpuData\n+{\n+ explicit GpuData(size_t _size);\n+ ~GpuData();\n+\n+ GpuData(const GpuData&) = delete;\n```", + "output": "@asmorkalov Does this ctor has some intersection with similar `cv::Mat` ctor? If so, then it make sense to \"wrap\" (or move) `cv::Mat` instead of `void*` (perhaps out of the scope of this PR)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/cuda.hpp\n**Change Type:** modified\n**Context:** PR #19286: add cuda::Stream constructor with cuda stream flags\n\n**Code Changes:**\n```diff\n@@ -656,6 +656,18 @@ class CV_EXPORTS_W Stream\n //! creates a new asynchronous stream with custom allocator\n CV_WRAP Stream(const Ptr& allocator);\n \n+ /** @brief creates a new Stream using the cudaFlags argument to determine the behaviors of the stream\n+\n+ @note The cudaFlags parameter is passed to the underlying api cudaStreamCreateWithFlags() and\n+ supports the same parameter values.\n+ @code\n+ // creates an OpenCV cuda::Stream that manages an asynchronous, non-blocking,\n```", + "output": "/cc @nglee @cudawarped @tomoaki0705 Any through about this?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/cuda.hpp\n**Change Type:** modified\n**Context:** PR #19286: add cuda::Stream constructor with cuda stream flags\n\n**Code Changes:**\n```diff\n@@ -656,6 +656,18 @@ class CV_EXPORTS_W Stream\n //! creates a new asynchronous stream with custom allocator\n CV_WRAP Stream(const Ptr& allocator);\n \n+ /** @brief creates a new Stream using the cudaFlags argument to determine the behaviors of the stream\n+\n+ @note The cudaFlags parameter is passed to the underlying api cudaStreamCreateWithFlags() and\n+ supports the same parameter values.\n+ @code\n+ // creates an OpenCV cuda::Stream that manages an asynchronous, non-blocking,\n```", + "output": "Snap! I have an alternative. Thanks @nglee you gave the hint at the std::move doesn't actually do what I though it did. More research and I now see that std::move is very much only a cast and that it is the responsibility of the constructor/assignment operator to manually change memory. For example STL leaves the var \"placed in a valid but unspecified state\". I could change my PR to set the `cudaStream_t` to null. Unfortunately, a null cudaStream_t (the pointer itself rather than to which it points) means the default CUDA stream. This would prevent double cudaStreamDestroy() because the destroy api only works on async streams and would likely return an error or silent ignore (I could test it). Is this any better? Ehhh...marginally. It does transfer ownership of `cudaStream_t`. But that means the previously owning var now has a value of null which a programmer could continue to use in errant code to do things without realizing it is now with the default CUDA stream. The legacy of the `cudaStream_t` might be the death of this. I do not see any CUDA define for an invalid stream. (They have them for devices and cpus...but not streams). 
I have no value that I can set the previously owning var so that it can not be misused. My main use case is to create a Stream with a specific type of cudaStream_t. I need unique per-thread non-default async streams. These can be created with `cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking)`. OpenCV aready has code with this approach at `cv::dnn::cuda4dnn::csl::UniqueStream()` The prototype for that nvidia cuda API is: `cudaStreamCreateWithFlags(cudaStream_t *pStream, unsigned int flags)` ### Alternative Have a Stream constructor like this ```c++ CV_WRAP cv::cuda::Stream::Stream(unsigned int cudaFlags); // example use cv::cuda::Stream mystream(cudaStreamNonBlocking); ``` Benefits are: * It is a constructor with all its benefits * No CUDA headers are needed in the public opencv API * Ownership is always contained within the Stream() * It can be used from Python" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #19057: Preparations for enabling 1D support for Conv and Pool operations on CUDA - relates: https://github.com/opencv/opencv/pull/18862 https://gi...\n\n**Code Changes:**\n```diff\n@@ -117,6 +117,9 @@ class BaseConvolutionLayerImpl : public ConvolutionLayer\n {\n kernel_size.assign(1, kernel_size[0]);\n strides.assign(1, strides[0]);\n+ dilations.assign(1, dilations[0]);\n+ pads_begin.assign(1, pads_begin[0]);\n+ pads_end.assign(1, pads_end[0]);\n }\n CV_Assert(weightShape.dims() == kernel_size.size() + 2);\n for (int i = 0; i < kernel_size.size(); i++) {\n```", + "output": "@sl-sergei Any feedback?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #19057: Preparations for enabling 1D support for Conv and Pool operations on CUDA - relates: https://github.com/opencv/opencv/pull/18862 https://gi...\n\n**Code Changes:**\n```diff\n@@ -117,6 +117,9 @@ class BaseConvolutionLayerImpl : public ConvolutionLayer\n {\n kernel_size.assign(1, kernel_size[0]);\n strides.assign(1, strides[0]);\n+ dilations.assign(1, dilations[0]);\n+ pads_begin.assign(1, pads_begin[0]);\n+ pads_end.assign(1, pads_end[0]);\n }\n CV_Assert(weightShape.dims() == kernel_size.size() + 2);\n for (int i = 0; i < kernel_size.size(); i++) {\n```", + "output": "> @sl-sergei Any feedback? @alalek I updated it, now everything should be ok" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/onnx/gonnxbackend.cpp\n**Change Type:** modified\n**Context:** PR #18943: G-API: ONNX. Support for networks with three dimensional input.\n**Review Line:** 583\n\n**Code Changes:**\n```diff\n this_session = Ort::Session(this_env, params.model_path.data(), session_options);\n+#else\n+ std::wstring_convert, wchar_t> converter;\n+ std::wstring w_model_path = converter.from_bytes(params.model_path.data());\n+ this_session = Ort::Session(this_env, w_model_path.data(), session_options);\n+#endif\n this_memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);\n \n in_tensor_info = getTensorInfo(INPUT);\n```", + "output": "@dmatveev, It is a fix for successful build on Windows." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/onnx/gonnxbackend.cpp\n**Change Type:** modified\n**Context:** PR #18943: G-API: ONNX. Support for networks with three dimensional input.\n**Review Line:** 583\n\n**Code Changes:**\n```diff\n this_session = Ort::Session(this_env, params.model_path.data(), session_options);\n+#else\n+ std::wstring_convert, wchar_t> converter;\n+ std::wstring w_model_path = converter.from_bytes(params.model_path.data());\n+ this_session = Ort::Session(this_env, w_model_path.data(), session_options);\n+#endif\n this_memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);\n \n in_tensor_info = getTensorInfo(INPUT);\n```", + "output": "Isn't it more complex than that? Internets say something about ```c++ std::wstring_convert>().from_bytes(input_str); ``` cc: @alalek" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/onnx/gonnxbackend.cpp\n**Change Type:** modified\n**Context:** PR #18943: G-API: ONNX. Support for networks with three dimensional input.\n**Review Line:** 583\n\n**Code Changes:**\n```diff\n this_session = Ort::Session(this_env, params.model_path.data(), session_options);\n+#else\n+ std::wstring_convert, wchar_t> converter;\n+ std::wstring w_model_path = converter.from_bytes(params.model_path.data());\n+ this_session = Ort::Session(this_env, w_model_path.data(), session_options);\n+#endif\n this_memory_info = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);\n \n in_tensor_info = getTensorInfo(INPUT);\n```", + "output": "@TolyaTalamanov can you please point to `FText{}` implementation here?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/infer/onnx.hpp\n**Change Type:** modified\n**Context:** PR #18943: G-API: ONNX. Support for networks with three dimensional input.\n\n**Code Changes:**\n```diff\n@@ -86,7 +86,7 @@ template class Params {\n };\n \n // BEGIN(G-API's network parametrization API)\n- GBackend backend() const { return cv::gapi::onnx::backend(); }\n+ GBackend backend() const { return cv::gapi::onnx::backend(); }\n std::string tag() const { return Net::tag(); }\n cv::util::any params() const { return { desc }; }\n // END(G-API's network parametrization API)\n```", + "output": "@alalek, Moved `#include ` from onnx.hpp" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/dnn/dnn_pytorch_tf_classification/pytorch_cls_model_conversion_c_tutorial.md\n**Change Type:** added\n**Context:** PR #17604: [GSoC] Added TF and PyTorch classification conversion cases\n**Review Line:** 19\n\n**Code Changes:**\n```diff\n+* provide model inference\n+\n+We will explore the above-listed points by the example of ResNet-50 architecture.\n+\n+## Introduction\n+Let's briefly view the key concepts involved in the pipeline of PyTorch models transition with OpenCV API. The initial step in conversion of PyTorch models into cv::dnn::Net\n+is model transferring into [ONNX](https://onnx.ai/about.html) format. ONNX aims at the interchangeability of the neural networks between various frameworks. 
There is a built-in function in PyTorch for ONNX conversion: [``torch.onnx.export``](https://pytorch.org/docs/stable/onnx.html#torch.onnx.export).\n+Further the obtained ``.onnx`` model is passed into cv::dnn::readNetFromONNX or cv::dnn::readNet.\n+\n+## Requirements\n+To be able to experiment with the below code you will need to install a set of libraries. We will use a virtual environment with python3.7+ for this:\n```", + "output": "Please use doxygen @ref to add link to the class documentation, or at least \\`\\`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/dnn/dnn_pytorch_tf_classification/pytorch_cls_model_conversion_c_tutorial.md\n**Change Type:** added\n**Context:** PR #17604: [GSoC] Added TF and PyTorch classification conversion cases\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,220 @@\n+# Conversion of PyTorch Classification Models and Launch with OpenCV C++ {#pytorch_cls_c_tutorial_dnn_conversion}\n+\n+@prev_tutorial{pytorch_cls_tutorial_dnn_conversion}\n+\n+| | |\n+| -: | :- |\n+| Original author | Anastasia Murzova |\n+| Compatibility | OpenCV >= 4.5 |\n+\n```", + "output": "> https://docs.opencv.org/master Please use doxygen links instead (see `@ref`)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/dnn/dnn_pytorch_tf_classification/tf_cls_model_conversion_tutorial.md\n**Change Type:** added\n**Context:** PR #17604: [GSoC] Added TF and PyTorch classification conversion cases\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,360 @@\n+# Conversion of TensorFlow Classification Models and Launch with OpenCV Python {#tf_cls_tutorial_dnn_conversion}\n+\n+| | |\n+| -: | :- |\n+| Original author | Anastasia Murzova |\n+| Compatibility | OpenCV >= 4.5 |\n+\n+## Goals\n+In this tutorial you will learn how to:\n```", + "output": "Please use Doxygen @ref here instead of direct link." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/yolo_object_detection.cpp\n**Change Type:** added\n**Context:** PR #9705: Added DNN Darknet Yolo v2 for object detection - ``` opencv_extra=dnn_model_darknet_yolo_v2 ``` ### This pullrequest changes Added neural n...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,117 @@\n+#include \n+#include \n+#include \n+#include \n+using namespace cv;\n+using namespace cv::dnn;\n+\n+#include \n+#include \n```", + "output": "@AlexeyAB, thanks, but I meant that `cv::imread` can read images with alpha into 24bit, http://docs.opencv.org/master/d4/da8/group__imgcodecs.html#gga61d9b0126a3e57d9277ac48327799c80af660544735200cbe942eea09232eb822." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/yolo_object_detection.cpp\n**Change Type:** added\n**Context:** PR #9705: Added DNN Darknet Yolo v2 for object detection - ``` opencv_extra=dnn_model_darknet_yolo_v2 ``` ### This pullrequest changes Added neural n...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,117 @@\n+#include \n+#include \n+#include \n+#include \n+using namespace cv;\n+using namespace cv::dnn;\n+\n+#include \n+#include \n```", + "output": "@dkurt I fixed it. Initially I did it as in `ssd_object_detection.cpp` and I thought that this has some hidden meaning :)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/darknet/darknet_io.cpp\n**Change Type:** added\n**Context:** PR #9705: Added DNN Darknet Yolo v2 for object detection - ``` opencv_extra=dnn_model_darknet_yolo_v2 ``` ### This pullrequest changes Added neural n...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,624 @@\n+/*M///////////////////////////////////////////////////////////////////////////////////////\n+//\n+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.\n+//\n+// By downloading, copying, installing or using the software you agree to this license.\n+// If you do not agree to this license, do not download, install,\n+// copy or use the software.\n+//\n+//\n```", + "output": "Thanks! Anyway I suggest to replace it as a single layer or think how we can do the same transformations using existing ones (Permute, Reshape). Reshape layer doesn't change the data by definition neither in one of the frameworks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/darknet/darknet_io.cpp\n**Change Type:** added\n**Context:** PR #9705: Added DNN Darknet Yolo v2 for object detection - ``` opencv_extra=dnn_model_darknet_yolo_v2 ``` ### This pullrequest changes Added neural n...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,624 @@\n+/*M///////////////////////////////////////////////////////////////////////////////////////\n+//\n+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.\n+//\n+// By downloading, copying, installing or using the software you agree to this license.\n+// If you do not agree to this license, do not download, install,\n+// copy or use the software.\n+//\n+//\n```", + "output": "@dkurt I added reorg as separate layer `reorg_layer.cpp`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/darknet/darknet_io.cpp\n**Change Type:** added\n**Context:** PR #9705: Added DNN Darknet Yolo v2 for object detection - ``` opencv_extra=dnn_model_darknet_yolo_v2 ``` ### This pullrequest changes Added neural n...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,624 @@\n+/*M///////////////////////////////////////////////////////////////////////////////////////\n+//\n+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.\n+//\n+// By downloading, copying, installing or using the software you agree to this license.\n+// If you do not agree to this license, do not download, install,\n+// copy or use the software.\n+//\n+//\n```", + "output": "@AlexeyAB, May be we can remove at least width/height sizes tracking? As I can see only `current_shape.input_channels` is used to read convolutional layer weights." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/darknet/darknet_io.hpp\n**Change Type:** added\n**Context:** PR #9705: Added DNN Darknet Yolo v2 for object detection - ``` opencv_extra=dnn_model_darknet_yolo_v2 ``` ### This pullrequest changes Added neural n...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,116 @@\n+/*M///////////////////////////////////////////////////////////////////////////////////////\n+//\n+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.\n+//\n+// By downloading, copying, installing or using the software you agree to this license.\n+// If you do not agree to this license, do not download, install,\n+// copy or use the software.\n+//\n+//\n```", + "output": "@AlexeyAB, on the other hand, let's keep it as is now. I'll just install darknet and compare it with PR and we can merge it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/darknet/darknet_importer.cpp\n**Change Type:** added\n**Context:** PR #9705: Added DNN Darknet Yolo v2 for object detection - ``` opencv_extra=dnn_model_darknet_yolo_v2 ``` ### This pullrequest changes Added neural n...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,195 @@\n+/*M///////////////////////////////////////////////////////////////////////////////////////\n+//\n+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.\n+//\n+// By downloading, copying, installing or using the software you agree to this license.\n+// If you do not agree to this license, do not download, install,\n+// copy or use the software.\n+//\n+//\n```", + "output": "Can we remove it? There is only one entry of `toString` that might be replaced to `cv::format`. Thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/darknet/darknet_importer.cpp\n**Change Type:** added\n**Context:** PR #9705: Added DNN Darknet Yolo v2 for object detection - ``` opencv_extra=dnn_model_darknet_yolo_v2 ``` ### This pullrequest changes Added neural n...\n**Review Line:** 1\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,195 @@\n+/*M///////////////////////////////////////////////////////////////////////////////////////\n+//\n+// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.\n+//\n+// By downloading, copying, installing or using the software you agree to this license.\n+// If you do not agree to this license, do not download, install,\n```", + "output": "@AlexeyAB, I don't think that we should place past years copyrights to newly added source. @vpisarev, please tell us if the following header is enough? ``` // This file is part of OpenCV project. // It is subject to the license terms in the LICENSE file found in the top-level directory // of this distribution and at http://opencv.org/license.html. // // Copyright (C) 2017, Intel Corporation, all rights reserved. // Third party copyrights are property of their respective owners. ``` Or the same as above but with a current date." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thank you for pointing this out, the new code has added the mouse to select the initial box." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks, I have fixed this." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thank you for point out this. Fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks for reviewing my code. I have used `selectROI()` in next version." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks for reviewing my code. I have used the Rect2f in the new version." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks. In new version, I have used `samples::findFile(net)`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks. In the new version, I have used `samples::findFileOrKeep()`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks. Fixed" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks. In new version, I directly use the `\\` for divide." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thank you for your careful review. I have used your comment in new version." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks, fixed" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thanks, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thank you for reviewing my code, Should I change this in .cpp file, @alalek @asmorkalov ?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/dasiamrpn_tracker.cpp\n**Change Type:** added\n**Context:** PR #19078: Add DaSiamRPN tracker sample of c++ version\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,518 @@\n+// DaSiamRPN tracker.\n+// Original paper: https://arxiv.org/abs/1808.06048\n+// Link to original repo: https://github.com/foolwood/DaSiamRPN\n+// Links to onnx models:\n+// - network: https://www.dropbox.com/s/rr1lk9355vzolqv/dasiamrpn_model.onnx?dl=0\n+// - kernel_r1: https://www.dropbox.com/s/999cqx5zrfi7w4p/dasiamrpn_kernel_r1.onnx?dl=0\n+// - kernel_cls1: https://www.dropbox.com/s/qvmtszx5h339a0w/dasiamrpn_kernel_cls1.onnx?dl=0\n+\n+#include \n```", + "output": "Thank you for pointing out this. I will fix it soon." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/include/opencv2/calib3d.hpp\n**Change Type:** modified\n**Context:** PR #19089: Added '@ref' to 3.4 Camera Calibration and 3D Reconstruction documentation\n**Review Line:** 3193\n\n**Code Changes:**\n```diff\n@@ -3189,7 +3190,7 @@ optimization. It stays at the center or at a different location specified when C\n @param P2 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second\n camera.\n @param Q Output \\f$4 \\times 4\\f$ disparity-to-depth mapping matrix (see reprojectImageTo3D ).\n- @param flags Operation flags that may be zero or CALIB_ZERO_DISPARITY . If the flag is set,\n+ @param flags Operation flags that may be zero or @ref fisheye::CALIB_ZERO_DISPARITY . If the flag is set,\n the function makes the principal points of each camera have the same pixel coordinates in the\n rectified views. 
And if the flag is not set, the function may still shift the images in the\n horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the\n@@ -3215,7 +3216,7 @@ optimization. It stays at the center or at a different location specified when C\n observed by the second camera.\n```", + "output": "@alalek I propose to add constant to fisheye enum with the same value to presume compatibility and consistence. Whta do you think?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/features2d/src/orb.cpp\n**Change Type:** modified\n**Context:** PR #10690: Fixed issue in ORB detection if firstLevel property is set above 0 - resolves #10561 ### This pullrequest changes Fixed pyramid buffer size...\n**Review Line:** 676\n\n**Code Changes:**\n```diff\n@@ -673,7 +673,7 @@ class ORB_Impl : public ORB\n void setEdgeThreshold(int edgeThreshold_) { edgeThreshold = edgeThreshold_; }\n int getEdgeThreshold() const { return edgeThreshold; }\n \n- void setFirstLevel(int firstLevel_) { firstLevel = firstLevel_; }\n+ void setFirstLevel(int firstLevel_) { CV_Assert(firstLevel >= 0); firstLevel = firstLevel_; }\n int getFirstLevel() const { return firstLevel; }\n \n void setWTA_K(int wta_k_) { wta_k = wta_k_; }\n@@ -1014,7 +1014,7 @@ void ORB_Impl::detectAndCompute( InputArray _image, InputArray _mask,\n \n```", + "output": "@vpisarev This definitely should be fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/misc/python/test/test_gapi_streaming.py\n**Change Type:** modified\n**Context:** PR #18762: [G-API] Wrap GArray\n**Review Line:** 176\n\n**Code Changes:**\n```diff\n+ ccomp.start()\n+\n+ # Assert\n+ max_num_frames = 10\n+ proc_num_frames = 0\n+ while cap.isOpened():\n+ has_expected, frame = cap.read()\n+ has_actual, actual = ccomp.pull()\n+\n+ self.assertEqual(has_expected, has_actual)\n+\n```", + "output": "Please limit the number of processed frames. We really don't need to process all 100 frames here. /cc @dmatveev This note is actual for other tests too, including C++" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/api/gcomputation.cpp\n**Change Type:** modified\n**Context:** PR #18762: [G-API] Wrap GArray\n**Review Line:** 202\n\n**Code Changes:**\n```diff\n {\n run_args.emplace_back(cv::Scalar{});\n outs.emplace_back(&cv::util::get(run_args.back()));\n break;\n }\n+ case cv::GShape::GARRAY:\n+ {\n+ switch (info.kind)\n+ {\n+ case cv::detail::OpaqueKind::CV_POINT2F:\n+ run_args.emplace_back(cv::detail::VectorRef{std::vector{}});\n```", + "output": "@TolyaTalamanov could you explain this part a little (also the same code is in gstreaming.cpp)? From where it's called? Won't any other GArray template type come here? I believe we can extract pretty much everything from GArray in C++ version, yet these changes seems to affect C++ code and not python related features." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #18285: cuda4dnn: enable fusion tests, update thresholds, fix eltwise fusion regression\n**Review Line:** 225\n\n**Code Changes:**\n```diff\n testONNXModels(\"two_deconvolution\", npy, 0, 0, false, false);\n testONNXModels(\"deconvolution_group\", npy, 0, 0, false, false);\n testONNXModels(\"deconvolution_output_shape\", npy, 0, 0, false, false);\n- testONNXModels(\"deconv_adjpad_2d\", npy, 0, 0, false, false);\n+ if (target != DNN_TARGET_CUDA_FP16) // bug\n+ testONNXModels(\"deconv_adjpad_2d\", npy, 0, 0, false, false);\n }\n \n TEST_P(Test_ONNX_layers, Deconvolution3D)\n@@ -675,6 +676,8 @@ TEST_P(Test_ONNX_layers, LinearWithConstant)\n #if defined(INF_ENGINE_RELEASE) && INF_ENGINE_VER_MAJOR_LT(2020040000)\n```", + "output": "> split this tests into multiple Agreed. /cc @l-bat @dkurt @sl-sergei" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #18845: Fix for index error for Reduce Mean\n\n**Code Changes:**\n```diff\n@@ -494,14 +494,17 @@ void ONNXImporter::handleNode(const opencv_onnx::NodeProto& node_proto_)\n MatShape inpShape = outShapes[node_proto.input(0)];\n DictValue axes = layerParams.get(\"axes\");\n bool keepdims = layerParams.get(\"keepdims\");\n- MatShape targetShape = inpShape;\n+ MatShape targetShape;\n+ std::vector shouldDelete(inpShape.size(), false);\n for (int i = 0; i < axes.size(); i++) {\n int axis = clamp(axes.get(i), inpShape.size());\n- if (keepdims) {\n```", + "output": "@sl-sergei Would you like me to make the changes, test, and then commit them to this pull request?" 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #18845: Fix for index error for Reduce Mean\n\n**Code Changes:**\n```diff\n@@ -494,14 +494,17 @@ void ONNXImporter::handleNode(const opencv_onnx::NodeProto& node_proto_)\n MatShape inpShape = outShapes[node_proto.input(0)];\n DictValue axes = layerParams.get(\"axes\");\n bool keepdims = layerParams.get(\"keepdims\");\n- MatShape targetShape = inpShape;\n+ MatShape targetShape;\n+ std::vector shouldDelete(inpShape.size(), false);\n for (int i = 0; i < axes.size(); i++) {\n int axis = clamp(axes.get(i), inpShape.size());\n- if (keepdims) {\n```", + "output": "> @sl-sergei Would you like me to make the changes, test, and then commit them to this pull request? @joegeisbauer It was your initiative to make this contribution, and you did great work highlighting the problem, reproducing and proposing a fix. I think you should finish it:)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/video.hpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,73 @@ G_TYPED_KERNEL(GCalcOptFlowLKForPyr,\n return std::make_tuple(empty_array_desc(), empty_array_desc(), empty_array_desc());\n }\n };\n+\n+enum BackgroundSubtractorType\n+{\n+ TYPE_BS_MOG2,\n+ TYPE_BS_KNN\n+};\n```", + "output": "OpenCV implementation of this algorithm has several parameters. Why do we lose them? 
``` createBackgroundSubtractorMOG2(int history=500, double varThreshold=16, bool detectShadows=true); ``` --- > BackSubMOG2 Do we really want to obfuscate function name? --- > `@sa BackSubMOG2` Points on that? on this function again?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/video.hpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,73 @@ G_TYPED_KERNEL(GCalcOptFlowLKForPyr,\n return std::make_tuple(empty_array_desc(), empty_array_desc(), empty_array_desc());\n }\n };\n+\n+enum BackgroundSubtractorType\n+{\n+ TYPE_BS_MOG2,\n+ TYPE_BS_KNN\n+};\n```", + "output": "Suggest to use `@return` for this note if you wish. However, a lot of kernels use this note without `@return`, so it's up to you" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/video.hpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,73 @@ G_TYPED_KERNEL(GCalcOptFlowLKForPyr,\n return std::make_tuple(empty_array_desc(), empty_array_desc(), empty_array_desc());\n }\n };\n+\n+enum BackgroundSubtractorType\n+{\n+ TYPE_BS_MOG2,\n+ TYPE_BS_KNN\n+};\n```", + "output": "typo: `@param`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/video.hpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,73 @@ G_TYPED_KERNEL(GCalcOptFlowLKForPyr,\n return std::make_tuple(empty_array_desc(), empty_array_desc(), empty_array_desc());\n }\n };\n+\n+enum BackgroundSubtractorType\n+{\n+ TYPE_BS_MOG2,\n+ TYPE_BS_KNN\n+};\n```", + "output": "This `@ref` is not in place. What is it for?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/test/common/gapi_video_tests_common.hpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -321,6 +321,35 @@ inline GComputation runOCVnGAPIOptFlowPipeline(TestFunctional& testInst,\n return c;\n }\n \n+inline void testBackgroundSubtractorStreaming(cv::GStreamingCompiled& gapiBackSub,\n+ const cv::Ptr& pOCVBackSub,\n+ const int diffPercent, const int tolerance,\n+ const double lRate, const std::size_t testNumFrames)\n+{\n+ cv::Mat frame, gapiForeground, ocvForeground;\n```", + "output": "``` @OrestChura OrestChura 6 hours ago Member please, const reference parameters where applicable @anna-khakimova anna-khakimova 35 minutes ago Author Contributor This comment for gapiBackSub? ``` Yes, for `gapiBackSub` for sure, but now for `pOCVBackSub`, too, I'm afraid. If it is applicable. Const is probably not, but reference - maybe. I'm not really sure, please check if `cv::Ptr` can be given as a reference. 
All in all, It is a _structure_ which saves a pointer inside." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/test/cpu/gapi_video_tests_cpu.cpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -97,4 +97,18 @@ INSTANTIATE_TEST_CASE_MACRO_P(WITH_VIDEO(BuildPyr_CalcOptFlow_PipelineInternalTe\n Values(15),\n Values(3),\n Values(true)));\n+\n+\n+INSTANTIATE_TEST_CASE_MACRO_P(WITH_VIDEO(BackgroundSubtractorTestCPU),\n+ BackgroundSubtractorTest,\n+ Combine(Values(VIDEO_CPU),\n+ Values(std::make_tuple(cv::gapi::video::TYPE_BS_MOG2, 16),\n```", + "output": "> [----------] 256 tests from BackgroundSubtractorTestCPU/BackgroundSubtractorTest (413881 ms total) This one [test case](http://pullrequest.opencv.org/buildbot/builders/precommit_opencl_linux/builds/27140/steps/test_gapi-ippicv-opencl/logs/stdio) consumes the 2x times more than other 30+k G-API tests. Keep tests fast. Do not enable tests if they don't increase code coverage. TODO: - please reduce amount of enabled test cases (you may want to create \"full\" case for development disabled by default). - reduce amount of processed frames (no need to process all 100 frames of video, 3-5 frames should be enough) /cc @dmatveev" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/video.hpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,73 @@ G_TYPED_KERNEL(GCalcOptFlowLKForPyr,\n return std::make_tuple(empty_array_desc(), empty_array_desc(), empty_array_desc());\n }\n };\n+\n+enum BackgroundSubtractorType\n+{\n+ TYPE_BS_MOG2,\n+ TYPE_BS_KNN\n+};\n```", + "output": "I did as you said on the previous step, but doxygen misinterpreted the comments in this arrangement. @OrestChura can confirm my words. He also observed this behavior. I can prove it by rolling back, but I don't want to waste time on it. So I would rather leave it as it is now." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/test/common/gapi_video_tests_inl.hpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -8,6 +8,7 @@\n #define OPENCV_GAPI_VIDEO_TESTS_INL_HPP\n \n #include \"gapi_video_tests.hpp\"\n+#include \n \n namespace opencv_test\n {\n@@ -88,6 +89,49 @@ TEST_P(BuildPyr_CalcOptFlow_PipelineTest, AccuracyTest)\n compareOutputsOptFlow(outOCV, outGAPI);\n```", + "output": "Thanks. Applied." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/video.hpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,73 @@ G_TYPED_KERNEL(GCalcOptFlowLKForPyr,\n return std::make_tuple(empty_array_desc(), empty_array_desc(), empty_array_desc());\n }\n };\n+\n+enum BackgroundSubtractorType\n+{\n+ TYPE_BS_MOG2,\n+ TYPE_BS_KNN\n+};\n```", + "output": "Ok. Thanks for prompt. Changed" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/test/common/gapi_video_tests_inl.hpp\n**Change Type:** modified\n**Context:** PR #18674: GAPI: New BackgroundSubtractor stateful kernel - New BackgroundSubtractor kernel represents 2 types of operation such as: - Background Subt...\n\n**Code Changes:**\n```diff\n@@ -8,6 +8,7 @@\n #define OPENCV_GAPI_VIDEO_TESTS_INL_HPP\n \n #include \"gapi_video_tests.hpp\"\n+#include \n \n namespace opencv_test\n {\n@@ -88,6 +89,49 @@ TEST_P(BuildPyr_CalcOptFlow_PipelineTest, AccuracyTest)\n compareOutputsOptFlow(outOCV, outGAPI);\n```", + "output": "Please move out `findDataFile` call from try-catch block. We don't want to suppress its exceptions. > catch (...) it make sense to capture `const cv::Exception&` only. /cc @mpashchenkov" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/imgproc/erosion_dilatation/erosion_dilatation.markdown\n**Change Type:** modified\n**Context:** PR #17839: Update tutorial for erosion / dilation\n\n**Code Changes:**\n```diff\n@@ -84,57 +84,198 @@ This tutorial's code is shown below. You can also download it\n Explanation\n -----------\n \n--# Most of the material shown here is trivial (if you have any doubt, please refer to the tutorials in\n- previous sections). Let's check the general structure of the C++ program:\n+@add_toggle_cpp\n+Most of the material shown here is trivial (if you have any doubt, please refer to the tutorials in\n+previous sections). Let's check the general structure of the C++ program:\n+\n```", + "output": "@alalek I know, that was my first thought too. However, I confronted a lot of issues while installing OpenCV for python under Ubuntu. I also have trouble for the java installation. The sites and tutorials found in docs and online are very error prune or are for older versions. Some tutorials aren't working at all or install OpenCV and the version is shown but nothing more (executing example code throws errors). That's why I though it would be useful to bring some clarification as a \"troubleshooting\" section. But of course I can remove the section. What's your opinion about the other content (except the troubleshooting section now)? P.S. I want to start also a discussion somewhere about simplifying the installation process by collecting some opinions and discuss various installation options and the problems beginners have while starting learning OpenCV. That way the troubleshooting section won't be necessary at all and the beginners can get started much faster. My experience for example wasn't that great and the installation process was too complicated for \"just trying it out\". Finding the right resources and explanations was quite difficult." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/imgproc/erosion_dilatation/erosion_dilatation.markdown\n**Change Type:** modified\n**Context:** PR #17839: Update tutorial for erosion / dilation\n\n**Code Changes:**\n```diff\n@@ -84,57 +84,198 @@ This tutorial's code is shown below. You can also download it\n Explanation\n -----------\n \n--# Most of the material shown here is trivial (if you have any doubt, please refer to the tutorials in\n- previous sections). Let's check the general structure of the C++ program:\n+@add_toggle_cpp\n+Most of the material shown here is trivial (if you have any doubt, please refer to the tutorials in\n+previous sections). Let's check the general structure of the C++ program:\n+\n```", + "output": "I believe this content should be contributed into this document instead: https://github.com/opencv/opencv/wiki/FAQ (maintained by @asmorkalov ) Please open a new issue with proposals for Wiki updates (GitHub doesn't support PRs for Wiki) There is PR with updating of installation process: #18195 Feel free to discuss it there." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/imgproc/erosion_dilatation/erosion_dilatation.markdown\n**Change Type:** modified\n**Context:** PR #17839: Update tutorial for erosion / dilation\n\n**Code Changes:**\n```diff\n@@ -84,57 +84,198 @@ This tutorial's code is shown below. You can also download it\n Explanation\n -----------\n \n--# Most of the material shown here is trivial (if you have any doubt, please refer to the tutorials in\n- previous sections). 
Let's check the general structure of the C++ program:\n+@add_toggle_cpp\n+Most of the material shown here is trivial (if you have any doubt, please refer to the tutorials in\n+previous sections). Let's check the general structure of the C++ program:\n+\n```", + "output": "@asmorkalov do you think we can fit somehow this kind of content in the FAQ? And about the mentioned PR, I'll check it out, thanks. Looks like a lot of improvements are being made." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/executor/gtbbexecutor.cpp\n**Change Type:** added\n**Context:** PR #17851: Sole tbb executor\n**Review Line:** 287\n\n**Code Changes:**\n```diff\n+ //\n+ struct task_body {\n+ exec_ctx& ctx;\n+\n+ std::size_t push_ready_dependants(tile_node* node) const {\n+ return graph::push_ready_dependants(ctx.q, node);\n+ }\n+\n+ void spawn_clones(std::size_t items) const {\n+ tasking::batch_spawn(items, ctx.root.get(), *this);\n+ }\n```", + "output": "Can't really see, why does creating new functions make logic clearer; however, it does seem that it just depends on the taste. So, thanks for the explanation!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/executor/gtbbexecutor.cpp\n**Change Type:** added\n**Context:** PR #17851: Sole tbb executor\n**Review Line:** 16\n\n**Code Changes:**\n```diff\n+\n+#include \n+#include \n+#include \"logger.hpp\" // GAPI_LOG\n+\n+#include \n+#include // unique_ptr\n+\n+#include \n+#include \n+\n```", + "output": "thanks, I know this" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "```suggestion // Liangqian Kong ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "Thank you for your code review. I would like to consult where the file generated by doxygen to place?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "I'm sorry I may misunderstand what you meant until I see the doxygen warnings. Do you mean that I have to change the comment from latex format to the doxygen format and download the doxygen to check whether my comments are all correct? Thanks, I'll submit the new pull request after I've modified my code." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "You can wrap Latex formula with the following Doxygen `\\f$` or `\\f[` tags to render Latex formula in the generated html docs. 
See for instance how it is done [here](https://github.com/opencv/opencv/blob/f52a2cf5e16c5545d8e6cc7bcb1571e4d3c2cddf/modules/calib3d/include/opencv2/calib3d.hpp#L64) and how the documentation looks like [here](https://docs.opencv.org/4.4.0/d9/d0c/group__calib3d.html#details). For now, the generated documentation looks like [this](https://pullrequest.opencv.org/buildbot/export/pr/18335/docs/da/d4a/classcv_1_1Quat.html#details). --- I think it is better to ask the core OpenCV dev (**/cc** @alalek ) if it is better to submit a new PR or keep the development in this one." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "Thanks very much for your help on doxygen -------------------------------------------------------------------------------------------------------- maybe I could submit the code here first to the subsequent discussion." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "Ok, thanks. It has been dropped in my latest code." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "Thanks again for your help on doxygen In this commit, we only fix the warning of the doxygen and first put out the code . we will improve the comment later." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "Thanks very much for your code review. @savuor @alalek I think it's natural to construct a quaternion by a Vec4. Mat to quaternion is a bit more complicated which feels more suitable for implementation in createFrom* function, so we want to remove `explicit Quat(const cv::Mat &R);` and Vec3 rvec, and add createFromMat and createFromRvec. can this be OK?@savuor @alalek" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "Yes. I will check all description before next commit. Thanks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "/cc @vpisarev There are several options: 1. `derived` vs `separate` class 2. template parameter vs just regular parameter. 3. left \"as is\" P.S. Mainly, strict typing helps to avoid shooting legs by misused parameter value." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "OK. Thanks for your suggestions" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "Thanks very much!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "Thanks very much!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/test/test_quaternion.cpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n**Review Line:** 163\n\n**Code Changes:**\n```diff\n+ EXPECT_EQ(cos(acos(q1)), q1);\n+ EXPECT_EQ(atan(tan(q3)), q3);\n+ EXPECT_EQ(tan(atan(q1)), q1);\n+}\n+\n+TEST_F(QuatTest, opeartor){\n+ Quatd minusQ{-1, -2, -3, -4};\n+ Quatd qAdd{3.5, 0, 6.5, 8};\n+ Quatd qMinus{-1.5, 4, -0.5, 0};\n+ Quatd qMultq{-20, 1, -5, 27};\n+ Quatd qMults{2.5, 5.0, 7.5, 10.0};\n```", + "output": "Thanks very much!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "A solution proposed by @alalek: get rid of constructors like `Quat(Vec3)`, `Quat(Mat)`,` Quat(Vec4)`, make functions like createFromAxisAngle, createFromRodrigues, createFromRotMatrix,.. instead. When passing a piece of data as an argument to constructor, it's hard to understand its meaning, named functions are less confusing. I agree with that." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "I think I should ask @savuor for help. Can we use static_assert here ?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. 
All rights reserved.\n```", + "output": "@chargerKong there was a meeting between me, @alalek and @vpisarev We settled on a previous solution (a lot of functions with default `assumeUnit` argument) but with the following additions: * the `assumeUnit` argument should be enum of two values (like ASSUME_UNIT and ASSUME_NON_UNIT for example), not bool - so that it'd be easier to search it in the code or in the docs * the set of functions with `assumeUnit` should be revised: sometimes this assumption will not give significant performance improvement (for example, I'm not sure if normalization takes a lot in `slerp`, `squad` or `spline` functions)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "Thanks all for the suggestions! I totally agree the first suggestions. The second one can also be done, but here are my thoughs to surpport it may be better to keep `assumeUnit` argument in these functions. - Adding this param could ensure the uniformity of the code style. So I think users may also expect these functions to have the assumeUnit parameter. - It does not improve much performance in some functions, but it actually saves some time, even though the improvement rate is low." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "We have added the `std::` before these trigonometric functions during implementation, but some other files that also in `cv` namespace are not, like affine.hpp and warpers_inl.hpp. They just use it directly. So we finally add `using std::*` to solve the error(see below) raising from compilation with the entire OpenCV. error is something like this ``` /home/kong/opencv/modules/core/include/opencv2/core/affine.hpp:502:26: error: no matching function for call to ‘acos(double&)’ double theta = acos(c); ... /home/kong/opencv/modules/core/include/opencv2/core/quaternion.inl.hpp:494:16: note: template argument deduction/substitution failed: /home/kong/opencv/modules/stitching/include/opencv2/stitching/detail/warpers_inl.hpp:359:27: note: mismatched types ‘const cv::Quat’ and ‘float’ v = scale * r * sin(u_); ``` P.S. This error will not occur without adding enum @savuor @alalek" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,1194 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+//\n+// License Agreement\n+// For Open Source Computer Vision Library\n+//\n+// Copyright (C) 2020, Huawei Technologies Co., Ltd. All rights reserved.\n```", + "output": "I've changed it in affine.hpp and warpers_inl.hpp, please have a look. Thanks. @alalek" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/quaternion.hpp\n**Change Type:** added\n**Context:** PR #18335: Ordinary quaternion\n**Review Line:** 33\n\n**Code Changes:**\n```diff\n+\n+#include \n+#include \n+namespace cv\n+{\n+//! @addtogroup core\n+//! @{\n+\n+//! Unit quaternion flag\n+enum QuatAssumeType\n+{\n```", + "output": "@vpisarev Which documentation group should be used for this?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #18783: Add support for Conv1D on OpenCV backend - resolves https://github.com/opencv/opencv/issues/18205 **merge with extra:** https://github.com/...\n**Review Line:** 721\n\n**Code Changes:**\n```diff\n+ CV_Check(weights.step1(), weights.step1() % VEC_ALIGN == 0, \"\");\n+ CV_CheckType(weights.type(), CV_32FC1, \"\");\n ParallelConv p;\n \n p.input_ = &input;\n p.weights_ = &weights;\n p.output_ = &output;\n- for( int i = 0; i < 4; i++ ) p.outShape[i] = output.size[i];\n+ int max_ind = isConv1D? 3: 4;\n+ for( int i = 0; i < max_ind; i++ ) p.outShape[i] = output.size[i];\n p.outShape[1] /= ngroups;\n```", + "output": "Can you please check `kernel_size.size()` for 2D convolution? It's strange why we define `kernel_d` for Conv2D" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #18783: Add support for Conv1D on OpenCV backend - resolves https://github.com/opencv/opencv/issues/18205 **merge with extra:** https://github.com/...\n\n**Code Changes:**\n```diff\n@@ -200,12 +200,12 @@ LayerParams ONNXImporter::getLayerParams(const opencv_onnx::NodeProto& node_prot\n \n if(attribute_name == \"kernel_shape\")\n {\n- CV_Assert(attribute_proto.ints_size() == 2 || attribute_proto.ints_size() == 3);\n+ CV_Assert(attribute_proto.ints_size() == 1 || attribute_proto.ints_size() == 2 || attribute_proto.ints_size() == 3);\n lp.set(\"kernel_size\", parse(attribute_proto.ints()));\n }\n else if(attribute_name == \"strides\")\n {\n```", + "output": "It's interesting if we can avoid this flag. 
Can you please check if it's critical to have 2 parameters for 1D convolution? In example, we have kernel_size = (3, 3). Because we can determine 1D convolution not by `kernel_size.size() == 1` but by `inp.dims == 3`" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #18783: Add support for Conv1D on OpenCV backend - resolves https://github.com/opencv/opencv/issues/18205 **merge with extra:** https://github.com/...\n\n**Code Changes:**\n```diff\n@@ -200,12 +200,12 @@ LayerParams ONNXImporter::getLayerParams(const opencv_onnx::NodeProto& node_prot\n \n if(attribute_name == \"kernel_shape\")\n {\n- CV_Assert(attribute_proto.ints_size() == 2 || attribute_proto.ints_size() == 3);\n+ CV_Assert(attribute_proto.ints_size() == 1 || attribute_proto.ints_size() == 2 || attribute_proto.ints_size() == 3);\n lp.set(\"kernel_size\", parse(attribute_proto.ints()));\n }\n else if(attribute_name == \"strides\")\n {\n```", + "output": "> It's interesting if we can avoid this flag. Can you please check if it's critical to have 2 parameters for 1D convolution? In example, we have kernel_size = (3, 3). Because we can determine 1D convolution not by `kernel_size.size() == 1` but by `inp.dims == 3` I removed the flag and changed the logic for isConv*D variables. Please, check" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/convolution_layer.cpp\n**Change Type:** modified\n**Context:** PR #18783: Add support for Conv1D on OpenCV backend - resolves https://github.com/opencv/opencv/issues/18205 **merge with extra:** https://github.com/...\n\n**Code Changes:**\n```diff\n@@ -113,17 +113,22 @@ class BaseConvolutionLayerImpl : public ConvolutionLayer\n MatSize weightShape = blobs.empty() ? inputs[1].size : blobs[0].size;\n \n CV_Assert(inputs[0].dims == outputs[0].dims);\n+ if (weightShape.dims() == 3)\n+ {\n+ kernel_size.assign(1, kernel_size[0]);\n+ strides.assign(1, strides[0]);\n+ }\n CV_Assert(weightShape.dims() == kernel_size.size() + 2);\n```", + "output": "Can you please keep origin formatting so this part of code can be unchanged? Thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18338: Univ Intrinsics implementation of Add, Sub, Absdiff kernels - Performance report: [AbsDiff_Add_Sub_perf_report.xlsx](https://github.com/ope...\n\n**Code Changes:**\n```diff\n@@ -151,6 +151,348 @@ GAPI_FLUID_KERNEL(GFluidAddW, cv::gapi::core::GAddW, false)\n \n enum Arithm { ARITHM_ABSDIFF, ARITHM_ADD, ARITHM_SUBTRACT, ARITHM_MULTIPLY, ARITHM_DIVIDE };\n \n+#if CV_SIMD\n+CV_ALWAYS_INLINE void absdiff_store(short out[], const v_int16& a, const v_int16& b, int x)\n+{\n+ vx_store(&out[x], v_absdiffs(a, b));\n+}\n+\n```", + "output": "removed, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18338: Univ Intrinsics implementation of Add, Sub, Absdiff kernels - Performance report: [AbsDiff_Add_Sub_perf_report.xlsx](https://github.com/ope...\n**Review Line:** 195\n\n**Code Changes:**\n```diff\n+ absdiff_store(out, a, b, x);\n+ }\n+\n+ if (x < length && (in1 != out) && (in2 != out))\n+ {\n+ x = length - nlanes;\n+ continue; // process one more time (unaligned tail)\n+ }\n+ break;\n+ }\n+\n```", + "output": "@dmatveev Do we really need these checks in G-API code? Does G-API support inplace processing? If no, then it makes sense to add `CV_DbgAssert()` instead. @anna-khakimova There are 7 similar loops in this patch. Commit contains 3 updates only. Why?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18338: Univ Intrinsics implementation of Add, Sub, Absdiff kernels - Performance report: [AbsDiff_Add_Sub_perf_report.xlsx](https://github.com/ope...\n**Review Line:** 195\n\n**Code Changes:**\n```diff\n+ absdiff_store(out, a, b, x);\n+ }\n+\n+ if (x < length && (in1 != out) && (in2 != out))\n+ {\n+ x = length - nlanes;\n+ continue; // process one more time (unaligned tail)\n+ }\n+ break;\n+ }\n+\n```", + "output": "@alalek Answer for first question: This checks necessary to process last several elements of input array (tail) via univ intrinsics since their number is less than `nlanes` . Answer for second question: As I've already mentioned in previous comment, for inplace implementation it's necessary that input and output array types should be the same. 
So, this check is needed only in 3 cases (in 3 functions which calls when input and output types are the same). Note: Please read my comments more attentively." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18338: Univ Intrinsics implementation of Add, Sub, Absdiff kernels - Performance report: [AbsDiff_Add_Sub_perf_report.xlsx](https://github.com/ope...\n**Review Line:** 193\n\n**Code Changes:**\n```diff\n+ VT a = vx_load(&in1[x]);\n+ VT b = vx_load(&in2[x]);\n+ absdiff_store(out, a, b, x);\n+ }\n+\n+ if (x < length && (in1 != out) && (in2 != out))\n+ {\n+ x = length - nlanes;\n+ continue; // process one more time (unaligned tail)\n+ }\n+ break;\n```", + "output": "@dmatveev for example check for inplace case is here." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18338: Univ Intrinsics implementation of Add, Sub, Absdiff kernels - Performance report: [AbsDiff_Add_Sub_perf_report.xlsx](https://github.com/ope...\n**Review Line:** 195\n\n**Code Changes:**\n```diff\n+ absdiff_store(out, a, b, x);\n+ }\n+\n+ if (x < length && (in1 != out) && (in2 != out))\n+ {\n+ x = length - nlanes;\n+ continue; // process one more time (unaligned tail)\n+ }\n+ break;\n+ }\n+\n```", + "output": "Thanks. Resolved =)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18338: Univ Intrinsics implementation of Add, Sub, Absdiff kernels - Performance report: [AbsDiff_Add_Sub_perf_report.xlsx](https://github.com/ope...\n**Review Line:** 195\n\n**Code Changes:**\n```diff\n+ absdiff_store(out, a, b, x);\n+ }\n+\n+ if (x < length && (in1 != out) && (in2 != out))\n+ {\n+ x = length - nlanes;\n+ continue; // process one more time (unaligned tail)\n+ }\n+ break;\n+ }\n+\n```", + "output": "@alalek There are no such cases in GAPI. For **AbsDiff** there are cases such as: 1. uchar inputs and uchar output 2. short int inputs and short int output. 3. ushort inputs and ushort output. 4. float inputs and float outputs For all cases mentioned above there is check to detect inplace. For **Add** and **Sub** there are cases such as: 1. **uchar** inputs and **uchar** output <--- For this case there are checks for inplace. 2. **short int** inputs and **uchar** output <---- There is no sense to check for inplace. 3. **float** inputs and **uchar** output <------ There is no sense to check for inplace. 4. **short int** inputs and **short int** output <------- For this case there are check for inplace. 5. **float** inputs and **float** output <-------- For this case there are checks for inplace. 6. **uchar** inputs and **float** output <------- There is no sense to check for inplace. 7. **short int** inputs and **float** output <------- There is no sense to check for inplace. As you can see, there are no cases with **int** at all and with **int** and **float** in particular. **These GAPI kernels don't support **int** type at all.** Inplace implementation is absent for these kernels. 
**And case when user gives one the same **int** matrix for inputs, then cast this matrix to float and pass it to output is difficult to imagine in reality.** What's the point in adding checks for all occasions?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18338: Univ Intrinsics implementation of Add, Sub, Absdiff kernels - Performance report: [AbsDiff_Add_Sub_perf_report.xlsx](https://github.com/ope...\n**Review Line:** 195\n\n**Code Changes:**\n```diff\n+ absdiff_store(out, a, b, x);\n+ }\n+\n+ if (x < length && (in1 != out) && (in2 != out))\n+ {\n+ x = length - nlanes;\n+ continue; // process one more time (unaligned tail)\n+ }\n+ break;\n+ }\n+\n```", + "output": "> ... for now. > > Assertions verify assumptions which are required by related code below. > It is a really powerful tool. > This helps with investigations in the future through emitting error messages. This reduces annoying debugging process of related problems. @alalek Ok. Done." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/src/backends/fluid/gfluidcore.cpp\n**Change Type:** modified\n**Context:** PR #18338: Univ Intrinsics implementation of Add, Sub, Absdiff kernels - Performance report: [AbsDiff_Add_Sub_perf_report.xlsx](https://github.com/ope...\n**Review Line:** 190\n\n**Code Changes:**\n```diff\n+ {\n+ for (; x <= length - nlanes; x += nlanes)\n+ {\n+ VT a = vx_load(&in1[x]);\n+ VT b = vx_load(&in2[x]);\n+ absdiff_store(out, a, b, x);\n+ }\n+\n+ if (x < length && (in1 != out) && (in2 != out))\n+ {\n+ x = length - nlanes;\n```", + "output": "@anton-potapov I forced to add these overloads because I realized that new universal intrinsic `absdiffs()` (added by me for `v_uint8`, `v_uint16`, `v_float32` types earlier) work the same as already exist `absdiff()` for argument types mentioned above. So now I have to use only one `absdiffs()` for v_int16 type only. For the rest types (`v_uint8`, `v_uint16`, `v_float32`) I use `absdiff()`. And so these overloads are need here." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/misc/objc/common/IntVectorExt.swift\n**Change Type:** added\n**Context:** PR #17165: Objc binding\n**Review Line:** 18\n\n**Code Changes:**\n```diff\n+ }\n+\n+ subscript(index: Int) -> Int32 {\n+ get {\n+ return self.get(index)\n+ }\n+ }\n+\n+ var array: [Int32] {\n+ get {\n+ var ret = Array(repeating: 0, count: data.count/MemoryLayout.stride)\n```", + "output": "@alalek Implicit returns were introduced in Swift 5.1: https://github.com/apple/swift-evolution/blob/master/proposals/0255-omit-return.md You'll need Xcode 11 or higher for Swift 5.1 support. All apps submitted to the App Store require Xcode 11 as of April 2020, so I don't think it's worth supporting earlier versions." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/misc/objc/common/IntVectorExt.swift\n**Change Type:** added\n**Context:** PR #17165: Objc binding\n**Review Line:** 18\n\n**Code Changes:**\n```diff\n+ }\n+\n+ subscript(index: Int) -> Int32 {\n+ get {\n+ return self.get(index)\n+ }\n+ }\n+\n+ var array: [Int32] {\n+ get {\n+ var ret = Array(repeating: 0, count: data.count/MemoryLayout.stride)\n```", + "output": "@chrisballinger thanks for jumping in. I hadn't even realised this was a new feature. While the point about App Store development is a good one - a lot of people starting to dabble with OpenCV may be running older OS versions and Xcode versions so I think it makes sense to fix this" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/perf/common/gapi_render_perf_tests.cpp\n**Change Type:** added\n**Context:** PR #16772: Added g-api render performance tests\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,9 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// Copyright (C) 2020 Intel Corporation\n+\n+\n+#include \"../perf_precomp.hpp\"\n+#include \"gapi_render_perf_tests_inl.hpp\"\n```", + "output": "@TolyaTalamanov I thought there workarounds are not needed for the Mosiac anymore, is it correct?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/perf/common/gapi_render_perf_tests.cpp\n**Change Type:** added\n**Context:** PR #16772: Added g-api render performance tests\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,9 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// Copyright (C) 2020 Intel Corporation\n+\n+\n+#include \"../perf_precomp.hpp\"\n+#include \"gapi_render_perf_tests_inl.hpp\"\n```", + "output": "@dmatveev Let me explain, in case `render` we implement tests for every backend separately, because there is no chance to implement the same reference versions" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/perf/common/gapi_render_perf_tests.cpp\n**Change Type:** added\n**Context:** PR #16772: Added g-api render performance tests\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,9 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// Copyright (C) 2020 Intel Corporation\n+\n+\n+#include \"../perf_precomp.hpp\"\n+#include \"gapi_render_perf_tests_inl.hpp\"\n```", + "output": "@TolyaTalamanov then it shouldn't be common." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/usac.hpp\n**Change Type:** added\n**Context:** PR #17683: [GSoC] New RANSAC. 
Homography part\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,800 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#ifndef OPENCV_USAC_USAC_HPP\n+#define OPENCV_USAC_USAC_HPP\n+\n+namespace cv { namespace usac {\n+enum EstimationMethod { Homography, Fundamental, Fundamental8, Essential, Affine, P3P, P6P};\n```", + "output": "@ivashmak, please, rename the method for better consistency" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/usac.hpp\n**Change Type:** added\n**Context:** PR #17683: [GSoC] New RANSAC. Homography part\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,800 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#ifndef OPENCV_USAC_USAC_HPP\n+#define OPENCV_USAC_USAC_HPP\n+\n+namespace cv { namespace usac {\n+enum EstimationMethod { Homography, Fundamental, Fundamental8, Essential, Affine, P3P, P6P};\n```", + "output": "agree. @ivashmak, please, replace each pair of methods width/height with one \"getImage1Size\", \"getImage2Size\"." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/usac/homography_solver.cpp\n**Change Type:** added\n**Context:** PR #17683: [GSoC] New RANSAC. 
Homography part\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,445 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \"../usac.hpp\"\n+#ifdef HAVE_EIGEN\n+#include \n+#endif\n```", + "output": "agree. @ivashmak, it would be better to move full-scale copyright notices into the header files. In .cpp files it's up to you. You can just reference the header files." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/usac/termination.cpp\n**Change Type:** added\n**Context:** PR #17683: [GSoC] New RANSAC. Homography part\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,378 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \"../usac.hpp\"\n+\n+namespace cv { namespace usac {\n+////////////////////////////////// STANDARD TERMINATION ///////////////////////////////////////////\n```", + "output": "agree. @ivashmak, in some cases it's difficult to control it, e.g. when the computations are done in a cloud. Please, use the maximum number of iterations as hard limit." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/test/test_usac.cpp\n**Change Type:** added\n**Context:** PR #17683: [GSoC] New RANSAC. 
Homography part\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,408 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"test_precomp.hpp\"\n+\n+namespace opencv_test {\n+enum TestSolver { Homogr, Fundam, Essen, PnP, Affine};\n+/*\n```", + "output": "@alalek, in this case the default constructor is used, so the test is deterministic. But I forgot that before each test OpenCV test system resets \"theRNG()\" to some fixed state. @ivashmak, if so, please, just put \"cv::RNG& rng = cv::theRNG();\" and then the rest of the test will remain the same" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/data/essential_mat_data.txt\n**Change Type:** added\n**Context:** PR #17683: [GSoC] New RANSAC. Homography part\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,5 @@\n+leuvenA.jpg\n+leuvenB.jpg\n+651.4462353114224 0 376.27522319223914\n+0 653.7348054191838 280.1106539526218\n+0 0 1\n\\ No newline at end of file\n```", + "output": "@vpisarev @ivashmak Please add some clarification about license of images (add link on original sources)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/usac/fundamental_solver.cpp\n**Change Type:** added\n**Context:** PR #17683: [GSoC] New RANSAC. 
Homography part\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,335 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \"../usac.hpp\"\n+#ifdef HAVE_EIGEN\n+#include \n+#endif\n```", + "output": "Sorry, I don't really see a typo. Where is it? However, thanks for point it out, I will get rid of cnt++ and use pointer++ instead." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/usac/fundamental_solver.cpp\n**Change Type:** added\n**Context:** PR #17683: [GSoC] New RANSAC. Homography part\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,335 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \"../usac.hpp\"\n+#ifdef HAVE_EIGEN\n+#include \n+#endif\n```", + "output": "Oh, I see, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/calib3d/src/usac/utils.cpp\n**Change Type:** added\n**Context:** PR #17683: [GSoC] New RANSAC. Homography part\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,526 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#include \"../precomp.hpp\"\n+#include \"../usac.hpp\"\n+#include \"opencv2/flann/miniflann.hpp\"\n+#include \n+\n```", + "output": "Thanks for the explanation." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/cudaarithm/test/test_core.cpp\n**Change Type:** modified\n**Context:** PR #18348: cudaarithm: use only even number for inplace flip\n**Review Line:** 285\n\n**Code Changes:**\n```diff\n@@ -281,6 +281,8 @@ CUDA_TEST_P(Flip, Accuracy)\n \n CUDA_TEST_P(Flip, AccuracyInplace)\n {\n+ size.width = (size.width >> 1) << 1; // in-place version only accepts even number\n+ size.height = (size.height >> 1) << 1; // in-place version only accepts even number\n cv::Mat src = randomMat(size, type);\n \n cv::cuda::GpuMat srcDst = loadMat(src, useRoi);\n```", + "output": "@asmorkalov , do you mean create `DIFFERENT_EVEN_SIZES` list [here](https://github.com/opencv/opencv/blob/b39cd06249213220e802bb64260727711d9fc98c/modules/ts/include/opencv2/ts/cuda_test.hpp#L212) and instantiate inplace tests separately? I think it would make this test unnecessary complex." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/cudaarithm/test/test_core.cpp\n**Change Type:** modified\n**Context:** PR #18348: cudaarithm: use only even number for inplace flip\n**Review Line:** 285\n\n**Code Changes:**\n```diff\n@@ -281,6 +281,8 @@ CUDA_TEST_P(Flip, Accuracy)\n \n CUDA_TEST_P(Flip, AccuracyInplace)\n {\n+ size.width = (size.width >> 1) << 1; // in-place version only accepts even number\n+ size.height = (size.height >> 1) << 1; // in-place version only accepts even number\n cv::Mat src = randomMat(size, type);\n \n cv::cuda::GpuMat srcDst = loadMat(src, useRoi);\n```", + "output": "That's something debatable. I first had the same feeling with @asmorkalov , since the log shows `113x113` which is not correct. 
Then I saw the [the line](https://github.com/opencv/opencv/blob/b39cd06249213220e802bb64260727711d9fc98c/modules/ts/include/opencv2/ts/cuda_test.hpp#L212) that @mshabunin pointed, and thought the same thing. \"It's only a single test which uses `DIFFERENT_EVEN_SIZES` \" Merging the idea will lead me to add single define in the `test_core.cpp` rather in `cuda_test.hpp`, but I need someone as an arbitor." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.cpp\n**Change Type:** modified\n**Context:** PR #17675: [GSoC] Add digit and text recognition samples.\n\n**Code Changes:**\n```diff\n@@ -2,12 +2,16 @@\n Text detection model: https://github.com/argman/EAST\n Download link: https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1\n \n- Text recognition model taken from here: https://github.com/meijieru/crnn.pytorch\n+ CRNN Text recognition model taken from here: https://github.com/meijieru/crnn.pytorch\n How to convert from pb to onnx:\n Using classes from here: https://github.com/meijieru/crnn.pytorch/blob/master/models/crnn.py\n \n+ More converted onnx text recognition models can be downloaded directly here:\n```", + "output": "@dkurt Thank you. I just tested it, these two preprocessing is indeed the same, I will remove the ```ocrFlag``` in the code." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.py\n**Change Type:** modified\n**Context:** PR #17675: [GSoC] Add digit and text recognition samples.\n**Review Line:** 16\n\n**Code Changes:**\n```diff\n+\n import torch\n- import models.crnn as CRNN\n+ from models.crnn import CRNN\n+\n model = CRNN(32, 1, 37, 256)\n model.load_state_dict(torch.load('crnn.pth'))\n dummy_input = torch.randn(1, 1, 32, 100)\n@@ -23,7 +30,8 @@\n parser = argparse.ArgumentParser(\n description=\"Use this script to run TensorFlow implementation (https://github.com/argman/EAST) of \"\n```", + "output": "Thank you for pointing out this, fixed." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/dnn/dnn_OCR/dnn_OCR.markdown\n**Change Type:** added\n**Context:** PR #17675: [GSoC] Add digit and text recognition samples.\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,46 @@\n+# How to run custom OCR model {#tutorial_dnn_OCR}\n+\n+@prev_tutorial{tutorial_dnn_custom_layers}\n+\n+## Introduction\n+\n+In this tutorial, we first introduce how to obtain the custom OCR model, then how to transform your own OCR models so that they can be run correctly by the opencv_dnn module. and finally we will provide some pre-trained models.\n+\n+## Train your own OCR model\n```", + "output": "Thank you reply. Because the performance of different GPUs will cause different FPS." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/dnn/dnn_OCR/dnn_OCR.markdown\n**Change Type:** added\n**Context:** PR #17675: [GSoC] Add digit and text recognition samples.\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,46 @@\n+# How to run custom OCR model {#tutorial_dnn_OCR}\n+\n+@prev_tutorial{tutorial_dnn_custom_layers}\n+\n+## Introduction\n+\n+In this tutorial, we first introduce how to obtain the custom OCR model, then how to transform your own OCR models so that they can be run correctly by the opencv_dnn module. and finally we will provide some pre-trained models.\n+\n+## Train your own OCR model\n```", + "output": "Thank you for your comments, I will change this." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #17978: Fix ONNX loading in issues opencv#17516, opencv#17531 - **Merge with extra:** opencv/opencv_extra#793 resolves #17516 resolves #17531 Also ...\n**Review Line:** 948\n\n**Code Changes:**\n```diff\n+ constParams.blobs.push_back(inputBuf);\n+\n+ opencv_onnx::NodeProto proto;\n+ proto.add_output(constParams.name);\n+ addLayer(dstNet, constParams, proto, layer_id, outShapes);\n+ }\n \n layerParams.set(\"num_output\", layerParams.blobs[0].size[ind_num_out]);\n layerParams.set(\"bias_term\", node_proto.input_size() == 3);\n```", + "output": "Please add a test which covers this code. Thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_onnx_importer.cpp\n**Change Type:** modified\n**Context:** PR #17978: Fix ONNX loading in issues opencv#17516, opencv#17531 - **Merge with extra:** opencv/opencv_extra#793 resolves #17516 resolves #17531 Also ...\n**Review Line:** 627\n\n**Code Changes:**\n```diff\n+ applyTestTag(CV_TEST_TAG_DNN_SKIP_OPENCL_FP16);\n+#if defined(INF_ENGINE_RELEASE) && INF_ENGINE_VER_MAJOR_LT(2020040000)\n+ applyTestTag(CV_TEST_TAG_DNN_SKIP_IE);\n+#endif\n+ testONNXModels(\"matmul_with_two_inputs\");\n+}\n+\n INSTANTIATE_TEST_CASE_P(/*nothing*/, Test_ONNX_layers, dnnBackendsAndTargets());\n \n class Test_ONNX_nets : public Test_ONNX_layers\n```", + "output": "> Please skip the tests for all the versions lower 2020.4: > 2020.3 - failed https://pullrequest.opencv.org/buildbot/builders/precommit_custom_mac/builds/1053 > 2020.4 - passed https://pullrequest.opencv.org/buildbot/builders/precommit_custom_linux/builds/4279 Thank you, I tried to reproduce it locally" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/scale_layer.cpp\n**Change Type:** modified\n**Context:** PR #17919: Flownet2 with anysize\n**Review Line:** 355\n\n**Code Changes:**\n```diff\n Mat data_mean_cpu = blobs[1].clone();\n+ Mat mean_resize = Mat(inputs[0].size[3], inputs[0].size[2], CV_32FC3);\n+ Mat mean_3d = Mat(data_mean_cpu.size[3], data_mean_cpu.size[2], CV_32FC3, data_mean_cpu.ptr(0));\n+ resize(mean_3d, mean_resize, Size(inputs[0].size[3], inputs[0].size[2]));\n+ int new_size[] = {1, mean_resize.channels(), mean_resize.cols, mean_resize.rows};\n+ Mat data_mean_cpu_resize = mean_resize.reshape(1, *new_size);\n Mat data_mean_per_channel_cpu = blobs[2].clone();\n \n- const int numWeights = data_mean_cpu.total();\n+ const int numWeights = data_mean_cpu_resize.total();\n CV_Assert(numWeights != 0);\n```", + "output": "Hi @l-bat , I am most finishing the testing. However, there is one thing that I may have to confirm with you first. I spotted a minor difference between the opencv's reimpl and the Caffe's code. To be specific, in https://github.com/lmb-freiburg/flownet2/blob/master/src/caffe/layers/data_augmentation_layer.cu#L353, the `num_iter` is 1 at the very beginning and is increased every iter once the network is built. However, in https://github.com/opencv/opencv/blob/master/modules/dnn/src/layers/scale_layer.cpp#L409, `num_iter` is set to be 0 for each image pair. There are basically two differences: (1) for Caffe, the `num_iter` starts from 1 while for OpenCV, the `num_iter` starts from 0. Since the param `recompute_mean: 1` in the test proto. The original Caffe is simply skipping the recomputation stage while OpenCV is doing the recomputation (akin to the \"self-normalization\", which is basically calculating the stats of the input image itself.) 
(2) Caffe supports `recompute_mean = N > 1`, which is basically moving-averaging the stats of the testing images when there are more than one pair. However, it seems that OpenCV cannot achieve this. What do you think? I think the current approach of OpenCV makes sense as well (i.e. \"self-normalization\"). If we agree on this, I can modify the Caffe code to fit our style and obtain the result in order to pass the test." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/scale_layer.cpp\n**Change Type:** modified\n**Context:** PR #17919: Flownet2 with anysize\n**Review Line:** 355\n\n**Code Changes:**\n```diff\n Mat data_mean_cpu = blobs[1].clone();\n+ Mat mean_resize = Mat(inputs[0].size[3], inputs[0].size[2], CV_32FC3);\n+ Mat mean_3d = Mat(data_mean_cpu.size[3], data_mean_cpu.size[2], CV_32FC3, data_mean_cpu.ptr(0));\n+ resize(mean_3d, mean_resize, Size(inputs[0].size[3], inputs[0].size[2]));\n+ int new_size[] = {1, mean_resize.channels(), mean_resize.cols, mean_resize.rows};\n+ Mat data_mean_cpu_resize = mean_resize.reshape(1, *new_size);\n Mat data_mean_per_channel_cpu = blobs[2].clone();\n \n- const int numWeights = data_mean_cpu.total();\n+ const int numWeights = data_mean_cpu_resize.total();\n CV_Assert(numWeights != 0);\n```", + "output": "@l-bat A friendly reminder." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/scale_layer.cpp\n**Change Type:** modified\n**Context:** PR #17919: Flownet2 with anysize\n**Review Line:** 355\n\n**Code Changes:**\n```diff\n Mat data_mean_cpu = blobs[1].clone();\n+ Mat mean_resize = Mat(inputs[0].size[3], inputs[0].size[2], CV_32FC3);\n+ Mat mean_3d = Mat(data_mean_cpu.size[3], data_mean_cpu.size[2], CV_32FC3, data_mean_cpu.ptr(0));\n+ resize(mean_3d, mean_resize, Size(inputs[0].size[3], inputs[0].size[2]));\n+ int new_size[] = {1, mean_resize.channels(), mean_resize.cols, mean_resize.rows};\n+ Mat data_mean_cpu_resize = mean_resize.reshape(1, *new_size);\n Mat data_mean_per_channel_cpu = blobs[2].clone();\n \n- const int numWeights = data_mean_cpu.total();\n+ const int numWeights = data_mean_cpu_resize.total();\n CV_Assert(numWeights != 0);\n```", + "output": "Thanks for the detailed explanation. > In our experiments in Forward_gpu num_iter will always reinitialized and set to 1. We think this is bug in Сaffe implementation. We try to fix this, but when we increment num_iter visual results has become worse. Yes, indeed. Based on https://github.com/opencv/opencv/blob/master/modules/dnn/src/layers/scale_layer.cpp#L409 and https://github.com/opencv/opencv/blob/master/modules/dnn/src/layers/scale_layer.cpp#L420, `num_iter=1`. However in Caffe, from https://github.com/lmb-freiburg/flownet2/blob/master/src/caffe/layers/data_augmentation_layer.cu#L353, I can see `num_iter=1`. Because of an additional line `num_iter = ((int)num_iter+1);` (in https://github.com/lmb-freiburg/flownet2/blob/master/src/caffe/layers/data_augmentation_layer.cu#L354). I think this means `num_iter` is actually 2, which is why I am saying that \"Caffe doesn't support `recompute_mean = 1`.\" since Caffe is always skipping the recomputation (i.e. re-using the stats in the training phase). 
See: https://github.com/lmb-freiburg/flownet2/blob/master/src/caffe/layers/data_augmentation_layer.cu#L600 I think your current re-implementation is correct :)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/test/common/gapi_compoundkernel_tests.cpp\n**Change Type:** modified\n**Context:** PR #17163: Fixed cooperation of Compound kernel and GMatP type\n\n**Code Changes:**\n```diff\n@@ -227,6 +227,73 @@ namespace\n }\n };\n \n+ G_TYPED_KERNEL(GToInterleaved, , \"org.opencv.test.to_interleaved\")\n+ {\n+ static GMatDesc outMeta(GMatDesc in)\n+ {\n+ GAPI_Assert(in.planar == true);\n+ GAPI_Assert(in.chan == 3);\n```", + "output": "Thanks a lot! Fixed!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** .github/workflows/arm64-build-checks.yml\n**Change Type:** added\n**Context:** PR #17989: Add arm64-build-checks github action\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,47 @@\n+name: arm64 build checks\n+\n+on: [pull_request]\n+\n+jobs:\n+ build:\n+\n+ runs-on: ubuntu-18.04\n+\n```", + "output": "Please check if `WITH_CUDA`, `WITH_OPENCL` and other flags already have these values by default" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** .github/workflows/arm64-build-checks.yml\n**Change Type:** added\n**Context:** PR #17989: Add arm64-build-checks github action\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,47 @@\n+name: arm64 build checks\n+\n+on: [pull_request]\n+\n+jobs:\n+ build:\n+\n+ runs-on: ubuntu-18.04\n+\n```", + "output": "@dkurt Thanks! 
it works: ``` apt-get install -y libpython-dev:arm64 libpython3-dev:arm64 python-numpy python3-numpy cmake -DPYTHON2_INCLUDE_PATH=/usr/include/python2.7/ \\ -DPYTHON2_NUMPY_INCLUDE_DIRS=/usr/lib/python2.7/dist-packages/numpy/core/include \\ -DPYTHON3_INCLUDE_PATH=/usr/include/python3.6m/ \\ -DPYTHON3_CVPY_SUFFIX=.cpython-36m-aarch64-linux-gnu.so \\ -DPYTHON3_NUMPY_INCLUDE_DIRS=/usr/lib/python3/dist-packages/numpy/core/include \\ -DCMAKE_TOOLCHAIN_FILE=../platforms/linux/aarch64-gnu.toolchain.cmake \\ -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules ../ ``` and python info as below: ``` -- Python 2: -- Interpreter: /usr/bin/python2.7 (ver 2.7.17) -- Libraries: -- numpy: /usr/lib/python2.7/dist-packages/numpy/core/include (ver undefined - cannot be probed because of the cross-compilation) -- install path: lib/python2.7/dist-packages/cv2/python-2.7 -- -- Python 3: -- Interpreter: /usr/bin/python3 (ver 3.6.9) -- Libraries: -- numpy: /usr/lib/python3/dist-packages/numpy/core/include (ver undefined - cannot be probed because of the cross-compilation) -- install path: lib/python3.6/dist-packages/cv2/python-3.6 -- -- Python (for build): /usr/bin/python2.7 ``` Looks like we still need the `PYTHON2_LIBRARIES=/usr/lib/aarch64-linux-gnu/libpython2.7.so` and `PYTHON3_LIBRARIES=/usr/lib/aarch64-linux-gnu/libpython3.6m.so`, right?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** .github/workflows/arm64-build-checks.yml\n**Change Type:** added\n**Context:** PR #17989: Add arm64-build-checks github action\n**Review Line:** 42\n\n**Code Changes:**\n```diff\n+ -DPYTHON3_INCLUDE_PATH=/usr/include/python3.6m/ \\\n+ -DPYTHON3_LIBRARIES=/usr/lib/aarch64-linux-gnu/libpython3.6m.so \\\n+ -DPYTHON3_NUMPY_INCLUDE_DIRS=/usr/lib/python3/dist-packages/numpy/core/include \\\n+ -DCMAKE_TOOLCHAIN_FILE=../platforms/linux/aarch64-gnu.toolchain.cmake \\\n+ -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \\\n+ ../\n+\n+ - name: Build\n+ run: |\n+ cd build\n+ make -j$(nproc --all)\n```", + "output": "@alalek, @mshabunin, does it make sense to enable Actions to test this PR?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** .github/workflows/arm64-build-checks.yml\n**Change Type:** added\n**Context:** PR #17989: Add arm64-build-checks github action\n**Review Line:** 42\n\n**Code Changes:**\n```diff\n+ -DPYTHON3_INCLUDE_PATH=/usr/include/python3.6m/ \\\n+ -DPYTHON3_LIBRARIES=/usr/lib/aarch64-linux-gnu/libpython3.6m.so \\\n+ -DPYTHON3_NUMPY_INCLUDE_DIRS=/usr/lib/python3/dist-packages/numpy/core/include \\\n+ -DCMAKE_TOOLCHAIN_FILE=../platforms/linux/aarch64-gnu.toolchain.cmake \\\n+ -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \\\n+ ../\n+\n+ - name: Build\n+ run: |\n+ cd build\n+ make -j$(nproc --all)\n```", + "output": "Testing of PR scripts was performed by @Yikun here: https://github.com/Yikun/opencv/actions (Actions can be enabled in repository forks separately) --- There are no plans to enable Actions for OpenCV repository or maintain them due to limited resources." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** .github/workflows/arm64-build-checks.yml\n**Change Type:** added\n**Context:** PR #17989: Add arm64-build-checks github action\n**Review Line:** 42\n\n**Code Changes:**\n```diff\n+ -DPYTHON3_INCLUDE_PATH=/usr/include/python3.6m/ \\\n+ -DPYTHON3_LIBRARIES=/usr/lib/aarch64-linux-gnu/libpython3.6m.so \\\n+ -DPYTHON3_NUMPY_INCLUDE_DIRS=/usr/lib/python3/dist-packages/numpy/core/include \\\n+ -DCMAKE_TOOLCHAIN_FILE=../platforms/linux/aarch64-gnu.toolchain.cmake \\\n+ -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \\\n+ ../\n+\n+ - name: Build\n+ run: |\n+ cd build\n+ make -j$(nproc --all)\n```", + "output": "@alalek Thanks for your reply. : ) > There are no plans to enable Actions for OpenCV repository or maintain them due to limited resources. If the \"limited resources\" refers to the CI resources (build machines), actually, Github provides free resources (container, x86, ubuntu) for the projects which are using github action service. So, we did not need to provide additional resources." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/opencv-logo.md\n**Change Type:** added\n**Context:** PR #18135: Added a note about OpenCV logo\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,9 @@\n+OpenCV logo has been originally designed and contributed to OpenCV by Adi Shavit in 2006. The graphical part consists of three stylized letters O, C, V, colored in the primary R, G, B color components, used by humans and computers to perceive the world. 
It is shaped in a way to mimic the famous [Kanizsa's triangle](https://en.wikipedia.org/wiki/Illusory_contours) to emphasize that the prior knowledge and internal processing are at least as important as the actually acquired \"raw\" data.\n+\n+The restyled version of the logo has been designed and contributed by [xperience.ai](https://xperience.ai/) in July 2020 for the [20th anniversary](https://opencv.org/anniversary/) of OpenCV.\n+\n+The logo uses [Exo 2](https://fonts.google.com/specimen/Exo+2#about) font by Natanael Gama distributed under OFL license.\n+\n+Higher-resolution version of the logo, as well as SVG version of it, can be obtained at OpenCV [Media Kit](https://opencv.org/resources/media-kit/).\n+\n+![](./opencv-logo2.png)\n```", + "output": "Hi @dkurt! You mean full link like https://xperience.ai/?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/all_layers.hpp\n**Change Type:** modified\n**Context:** PR #17742: Adding comment from source code to DetectionOutputLayer class documentation - Adds a comment written in the source code for bool getMemoryS...\n\n**Code Changes:**\n```diff\n@@ -598,6 +598,14 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n static Ptr create(const LayerParams& params);\n };\n \n+ /**\n+ * @brief Detection output layer.\n+ *\n+ * The layer size is: @f$ (1 \\times 1 \\times N \\times 7) @f$\n+ * where N is [keep_top_k] parameter multiplied by batch size. Each row is:\n+ * [image_id, label, confidence, xmin, ymin, xmax, ymax]\n```", + "output": "Is it better like this: /** * @brief detection output layer. 
* * The layer size is: @f$ (1 \\times 1 \\times N \\times 7) @f$ * where N is the number of detections after nms, and each row is: * [image_id, label, confidence, xmin, ymin, xmax, ymax] */" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** platforms/linux/riscv64-clang.toolchain.cmake\n**Change Type:** added\n**Context:** PR #17922: Build riscv with c++ intrinsics\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,27 @@\n+set(CMAKE_SYSTEM_NAME Linux)\n+set(CMAKE_SYSTEM_PROCESSOR riscv64)\n+\n+set(RISCV_CLANG_BUILD_ROOT /opt/rvv-llvm CACHE PATH \"Path to CLANG for RISC-V cross compiler build directory\")\n+set(RISCV_GCC_INSTALL_ROOT /opt/RISCV CACHE PATH \"Path to GCC for RISC-V cross compiler installation directory\")\n+set(CMAKE_SYSROOT ${RISCV_GCC_INSTALL_ROOT}/sysroot CACHE PATH \"RISC-V sysroot\")\n+\n+set(CLANG_TARGET_TRIPLE riscv64-unknown-linux-gnu)\n+\n```", + "output": "they are not hardcoded. It seems that my understanding of Linux path is not in place. Thanks for the guidance. I'll edit the default path here." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/cvdef.h\n**Change Type:** modified\n**Context:** PR #17922: Build riscv with c++ intrinsics\n\n**Code Changes:**\n```diff\n@@ -260,6 +260,8 @@ namespace cv {\n #define CV_CPU_VSX 200\n #define CV_CPU_VSX3 201\n \n+#define CV_CPU_RVV 210\n+\n // CPU features groups\n #define CV_CPU_AVX512_SKX 256\n #define CV_CPU_AVX512_COMMON 257\n@@ -312,6 +314,8 @@ enum CpuFeatures {\n```", + "output": "Fixed. Thanks a lot for your review!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/detection_output_layer.cpp\n**Change Type:** modified\n**Context:** PR #17301: cuda4dnn(DetectionOutput): add fast approximate DetectionOutputOp\n**Review Line:** 203\n\n**Code Changes:**\n```diff\n@@ -195,7 +200,7 @@ class DetectionOutputLayerImpl CV_FINAL : public DetectionOutputLayer\n _locPredTransposed = getParameter(params, \"loc_pred_transposed\", 0, false, false);\n _bboxesNormalized = getParameter(params, \"normalized_bbox\", 0, false, true);\n _clip = getParameter(params, \"clip\", 0, false, false);\n- _groupByClasses = getParameter(params, \"group_by_classes\", 0, false, true);\n+ _groupByClasses = getParameter(params, \"group_by_classes\", 0, false, false);\n \n getCodeType(params);\n \n@@ -209,6 +214,7 @@ class DetectionOutputLayerImpl CV_FINAL : public DetectionOutputLayer\n virtual bool supportBackend(int backendId) CV_OVERRIDE\n```", + "output": "@dkurt Could you please take a look on this line more carefully? This change will diverge code agains 3.4 branch." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/detection_output_layer.cpp\n**Change Type:** modified\n**Context:** PR #17301: cuda4dnn(DetectionOutput): add fast approximate DetectionOutputOp\n**Review Line:** 203\n\n**Code Changes:**\n```diff\n@@ -195,7 +200,7 @@ class DetectionOutputLayerImpl CV_FINAL : public DetectionOutputLayer\n _locPredTransposed = getParameter(params, \"loc_pred_transposed\", 0, false, false);\n _bboxesNormalized = getParameter(params, \"normalized_bbox\", 0, false, true);\n _clip = getParameter(params, \"clip\", 0, false, false);\n- _groupByClasses = getParameter(params, \"group_by_classes\", 0, false, true);\n+ _groupByClasses = getParameter(params, \"group_by_classes\", 0, false, false);\n \n getCodeType(params);\n \n@@ -209,6 +214,7 @@ class DetectionOutputLayerImpl CV_FINAL : public DetectionOutputLayer\n virtual bool supportBackend(int backendId) CV_OVERRIDE\n```", + "output": "@YashasSamaga checked that this flag is not used in our models. So We can freely switch it to `false` by default." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/test/common/gapi_video_tests_common.hpp\n**Change Type:** modified\n**Context:** PR #17896: GAPI: KW fix - video_tests\n\n**Code Changes:**\n```diff\n@@ -43,7 +43,7 @@ inline void initTrackingPointsArray(std::vector& points, int width,\n \n points.clear();\n GAPI_Assert((nPointsX >= 0) && (nPointsY) >= 0);\n- points.reserve(static_cast(nPointsX * nPointsY));\n+ points.reserve(nPointsX * nPointsY);\n \n for (int x = stepX / 2; x < width; x += stepX)\n {\n@@ -80,9 +80,7 @@ struct OptFlowLKTestOutput\n```", + "output": "@anton-potapov Actually I thought it would have caused a warning if I hadn't cast to size_t. 
But, according to CI, it is OK :) Thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/smooth.dispatch.cpp\n**Change Type:** modified\n**Context:** PR #15855: imgproc: fix bit-exact GaussianBlur() / sepFilter2D() - **Merge with contrib**: https://github.com/opencv/opencv_contrib/pull/2342 **Merge ...\n**Review Line:** 593\n\n**Code Changes:**\n```diff\n+static bool validateGaussianBlurKernel(std::vector& kernel)\n+{\n+ softdouble validation_sum = softdouble::zero();\n+ for (size_t i = 0; i < kernel.size(); i++)\n+ {\n+ validation_sum += softdouble((double)kernel[i]);\n+ }\n+\n+ bool isValid = validation_sum == softdouble::one();\n+ return isValid;\n+}\n```", + "output": "@netbrain , what is your platform/compiler?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.cpp\n**Change Type:** modified\n**Context:** PR #16941: Add text recognition to examples_dnn_text_decoder\n**Review Line:** 257\n\n**Code Changes:**\n```diff\n+ if (elementIndex > 0 && elements[elementIndex] != '-' &&\n+ elements[elementIndex - 1] != elements[elementIndex])\n+ {\n+ text += elements[elementIndex];\n+ }\n+ }\n+}\n\\ No newline at end of file\n```", + "output": "@themechanicalcoder, thanks for example. @rngtna, please check if current implementation will return \"aaaa\" for \"a--aa--aaa----a\"." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.cpp\n**Change Type:** modified\n**Context:** PR #16941: Add text recognition to examples_dnn_text_decoder\n\n**Code Changes:**\n```diff\n@@ -1,3 +1,20 @@\n+/*\n+ Text detection model: https://github.com/argman/EAST\n+ Download link: https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1\n+\n+ Text recognition model taken from here: https://github.com/meijieru/crnn.pytorch\n+ How to convert from pb to onnx:\n+ Using classes from here: https://github.com/meijieru/crnn.pytorch/blob/master/models/crnn.py\n+\n+ import torch\n```", + "output": "@rngtna, we do `warpPerspective` and then resize by `blobFromImage`. Why not to warp to the final 100x32 size?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/cuda4dnn/kernels/activations.hpp\n**Change Type:** modified\n**Context:** PR #15808: Added Swish and Mish activations - I have added the Swish and Mish activation functions. This resolves https://github.com/opencv/opencv/iss...\n**Review Line:** 25\n\n**Code Changes:**\n```diff\n \n+ template \n+ void swish(const csl::Stream& stream, csl::Span output, csl::View input);\n+\n+ template \n+ void mish(const csl::Stream& stream, csl::Span output, csl::View input);\n+\n template \n void sigmoid(const csl::Stream& stream, csl::Span output, csl::View input);\n \n```", + "output": "@thebhatman The CUDA kernels are called by a function which selects the correct vectorized kernel based on the alignment. I am not able to find that part of the code in your PR. You can have a look at [sigmoid](https://github.com/opencv/opencv/blob/85e848a0f73b69d99ae2e068f4987e2fc09c16fd/modules/dnn/src/cuda/activations.cu#L281-L302) example. 
The `sigmoid()` calls `launch_vectorized_sigmoid` which actually launches the kernel. 1. `SwishOp` calls `swish(stream, output, input)` 2. `swish()` checks the alignment of `output` and `input` and calls `launch_vectorized_swish` 3. `launch_vectorized_swish` selects the correct kernel and launches it (which is `raw::swish_vec)()`)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/cuda4dnn/kernels/activations.hpp\n**Change Type:** modified\n**Context:** PR #15808: Added Swish and Mish activations - I have added the Swish and Mish activation functions. This resolves https://github.com/opencv/opencv/iss...\n**Review Line:** 25\n\n**Code Changes:**\n```diff\n \n+ template \n+ void swish(const csl::Stream& stream, csl::Span output, csl::View input);\n+\n+ template \n+ void mish(const csl::Stream& stream, csl::Span output, csl::View input);\n+\n template \n void sigmoid(const csl::Stream& stream, csl::Span output, csl::View input);\n \n```", + "output": "@YashasSamaga Thank you! I have now added the function for launching the kernel." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/cuda/activations.cu\n**Change Type:** modified\n**Context:** PR #15808: Added Swish and Mish activations - I have added the Swish and Mish activation functions. 
This resolves https://github.com/opencv/opencv/iss...\n\n**Code Changes:**\n```diff\n@@ -62,6 +62,43 @@ namespace cv { namespace dnn { namespace cuda4dnn { namespace kernels {\n }\n }\n \n+ template \n+ __global__ void swish_vec(Span output, View input) {\n+ using vector_type = get_vector_type_t;\n+\n+ auto output_vPtr = vector_type::get_pointer(output.data());\n+ auto input_vPtr = vector_type::get_pointer(input.data());\n```", + "output": "Yes. That makes sense. Thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_halide_layers.cpp\n**Change Type:** modified\n**Context:** PR #15808: Added Swish and Mish activations - I have added the Swish and Mish activation functions. This resolves https://github.com/opencv/opencv/iss...\n**Review Line:** 586\n\n**Code Changes:**\n```diff\n@@ -583,7 +583,7 @@ TEST_P(NoParamActivation, Accuracy)\n testInPlaceActivation(lp, backendId, targetId);\n }\n INSTANTIATE_TEST_CASE_P(Layer_Test_Halide, NoParamActivation, Combine(\n-/*type*/ Values(\"TanH\", \"Sigmoid\", \"AbsVal\", \"BNLL\"),\n+/*type*/ Values(\"TanH\", \"Sigmoid\", \"AbsVal\", \"BNLL\", \"Swish\", \"Mish\"),\n dnnBackendsAndTargetsWithHalide()\n ));\n \n```", + "output": "I think you should add new parameters (`l1` and `lInf`) to `test` and `testInPlaceActivation`. Their default values would be `0.0`. Inside the `test` function: - if `l1` and `lInf` arguments are zero, set them to the values given by `getDefaultThresholds` - if `l1` and `lInf` arguments are not zero, use them @dkurt is this ok?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/elementwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #15808: Added Swish and Mish activations - I have added the Swish and Mish activation functions. This resolves https://github.com/opencv/opencv/iss...\n\n**Code Changes:**\n```diff\n@@ -613,6 +613,184 @@ struct TanHFunctor\n int64 getFLOPSPerElement() const { return 1; }\n };\n \n+struct SwishFunctor\n+{\n+ typedef SwishLayer Layer;\n+\n+ bool supportBackend(int backendId, int)\n+ {\n```", + "output": "I have removed the InfEngine backend support now. Thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/elementwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #15808: Added Swish and Mish activations - I have added the Swish and Mish activation functions. This resolves https://github.com/opencv/opencv/iss...\n\n**Code Changes:**\n```diff\n@@ -613,6 +613,184 @@ struct TanHFunctor\n int64 getFLOPSPerElement() const { return 1; }\n };\n \n+struct SwishFunctor\n+{\n+ typedef SwishLayer Layer;\n+\n+ bool supportBackend(int backendId, int)\n+ {\n```", + "output": "Thanks! Sorry, I forgot to highlight that you need to implement it in the following way (see `BNLLFunctor`): ```cpp #ifdef HAVE_INF_ENGINE InferenceEngine::Builder::Layer initInfEngineBuilderAPI() { CV_Error(Error::StsNotImplemented, \"\"); } #endif // HAVE_INF_ENGINE ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.py\n**Change Type:** modified\n**Context:** PR #16955: add text python recognition sample\n\n**Code Changes:**\n```diff\n@@ -1,25 +1,81 @@\n+'''\n+ Text detection model: https://github.com/argman/EAST\n+ Download link: https://www.dropbox.com/s/r2ingd0l3zt8hxs/frozen_east_text_detection.tar.gz?dl=1\n+ Text recognition model taken from here: https://github.com/meijieru/crnn.pytorch\n+ How to convert from pb to onnx:\n+ Using classes from here: https://github.com/meijieru/crnn.pytorch/blob/master/models/crnn.py\n+ import torch\n+ import models.crnn as CRNN\n+ model = CRNN(32, 1, 37, 256)\n```", + "output": "@dkurt do you mean I should replace it to ``` blob = cv.dnn.blobFromImage(cropped, size=(100, 32), mean=127.5, scalefactor=2 / 255.0) ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/python/stitching_detailed.py\n**Change Type:** modified\n**Context:** PR #16257: Make stitching script more pythonic - ### This pullrequest changes Improve the overall readability and maintainability of the Python exampl...\n\n**Code Changes:**\n```diff\n@@ -8,114 +8,305 @@\n # Python 2/3 compatibility\n from __future__ import print_function\n \n-import numpy as np\n+import argparse\n+from collections import OrderedDict\n+\n import cv2 as cv\n+import numpy as np\n```", + "output": "Thanks, force pushed some fixes." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/ocl.cpp\n**Change Type:** modified\n**Context:** PR #16608: fixed several problems when running tests on Mac - * failures in ocl_pyrUp * failures in ocl_flip when processing CV_16UC3 or CV_16SC3 data...\n\n**Code Changes:**\n```diff\n@@ -4705,6 +4705,8 @@ class OpenCLAllocator CV_FINAL : public MatAllocator\n int createFlags = 0, flags0 = 0;\n getBestFlags(ctx, accessFlags, usageFlags, createFlags, flags0);\n \n+ bool copyOnMap = (flags0 & UMatData::COPY_ON_MAP) != 0;\n+\n cl_context ctx_handle = (cl_context)ctx.ptr();\n int allocatorFlags = 0;\n int tempUMatFlags = 0;\n@@ -4764,8 +4766,15 @@ class OpenCLAllocator CV_FINAL : public MatAllocator\n```", + "output": "@alalek, the test is \"UMat.SyncTemp\". When you put some condition and nothing is failed, it does not mean that there is no bug in the logic. Before the patch the code attempts to do Map/Unmap even on machines without unified (read \"shared\") GPU-CPU memory. Somehow modern drivers on Windows/Linux implement this map unmap by doing implicit copying. On Mac it does not work well. On your macs everything works well probably because there is just iGPU there, no dGPU. I have 2 macs with dGPU, and they just freeze on \"UMat.SyncTemp\" test for infinite amount of time. After the patch everything works well." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/scale_layer.cpp\n**Change Type:** modified\n**Context:** PR #16575: Support FlowNet2 model - **Merge with extra:** opencv/opencv_extra#708 relates #16319 Supported layers: - [x] DataAugmentation (1 input, 3 ...\n\n**Code Changes:**\n```diff\n@@ -296,5 +296,118 @@ Ptr ShiftLayer::create(const LayerParams& params)\n return Ptr(new ScaleLayerImpl(scaleParams));\n }\n \n+class DataAugmentationLayerImpl CV_FINAL : public DataAugmentationLayer\n+{\n+public:\n+ DataAugmentationLayerImpl(const LayerParams& params)\n+ {\n+ setParamsFrom(params);\n```", + "output": "Please check initial values of `blobs[1]`. We cannot change them so data_mean_cpu should be a copy" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/include/opencv2/gapi/video.hpp\n**Change Type:** modified\n**Context:** PR #17082: G-API: gapi::buildOpticalFlowPyramid() Implementation\n**Review Line:** 27\n\n**Code Changes:**\n```diff\n+\n+using GOptFlowLKOutput = std::tuple,\n+ cv::GArray,\n+ cv::GArray>;\n+\n+G_TYPED_KERNEL(GBuildOptFlowPyramid, ,\n+ \"org.opencv.video.buildOpticalFlowPyramid\")\n+{\n+ static std::tuple\n+ outMeta(GMatDesc,const Size&,GScalarDesc,bool,int,int,bool)\n+ {\n```", + "output": "@dmatveev should I use GOpaque instead of GScalar to cover the `int` variable \"maxLevel\"? the same question in calcOpticalFlow All the previous kernels use GScalar, but I think only because GOpaque didn't exist that time" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/python/qrcode.py\n**Change Type:** added\n**Context:** PR #16795: Added a sample file for qrcode detection in python\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,248 @@\n+#!/usr/bin/env python\n+\n+'''\n+This program detects the QR-codes using OpenCV Library.\n+\n+Usage:\n+ qrcode.py\n+'''\n+\n```", + "output": "I have updated the changes please check" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #3554: [imgproc]Add template match with mask. - Signed-off-by: Yan Wang yan.wang@linux.intel.com\n\n**Code Changes:**\n```diff\n@@ -3330,9 +3330,11 @@ data type.\n @param result Map of comparison results. It must be single-channel 32-bit floating-point. If image\n is \\f$W \\times H\\f$ and templ is \\f$w \\times h\\f$ , then result is \\f$(W-w+1) \\times (H-h+1)\\f$ .\n @param method Parameter specifying the comparison method, see cv::TemplateMatchModes\n+@param mask Mask of searched template. It must have the same datatype and size with templ. It is\n+not set by default.\n */\n CV_EXPORTS_W void matchTemplate( InputArray image, InputArray templ,\n- OutputArray result, int method );\n+ OutputArray result, int method, InputArray mask = noArray() );\n```", + "output": "@ilya-lavrenov Thanks for your review. I also want implement it like this initially. But I found it may have no share code with matchTemplate. If you think the mode is better, I could change it. matchTemplateMask could be internal function which is called in matchTemplate(). Do you agree it?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #3554: [imgproc]Add template match with mask. - Signed-off-by: Yan Wang yan.wang@linux.intel.com\n\n**Code Changes:**\n```diff\n@@ -3330,9 +3330,11 @@ data type.\n @param result Map of comparison results. It must be single-channel 32-bit floating-point. If image\n is \\f$W \\times H\\f$ and templ is \\f$w \\times h\\f$ , then result is \\f$(W-w+1) \\times (H-h+1)\\f$ .\n @param method Parameter specifying the comparison method, see cv::TemplateMatchModes\n+@param mask Mask of searched template. It must have the same datatype and size with templ. It is\n+not set by default.\n */\n CV_EXPORTS_W void matchTemplate( InputArray image, InputArray templ,\n- OutputArray result, int method );\n+ OutputArray result, int method, InputArray mask = noArray() );\n```",
+ "output": "Sure. I could change it as your comments. It need some time and I think I could finish it tomorrow. Another issue is that I found I need add invArea into mask function. It will be added in the next commit too. Thanks again."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/imgproc/src/templmatch.cpp\n**Change Type:** modified\n**Context:** PR #3554: [imgproc]Add template match with mask. - Signed-off-by: Yan Wang yan.wang@linux.intel.com\n**Review Line:** 847\n\n**Code Changes:**\n```diff\n+ if (mdepth == CV_8U)\n+ {\n+ mdepth = CV_32F;\n+ mtype = CV_MAKETYPE(CV_32F, mcn);\n+ compare(mask, Scalar::all(0), mask, CMP_NE);\n+ mask.convertTo(mask, mtype, 1.0 / 255);\n+ }\n+\n+ Size corrSize(img.cols - templ.cols + 1, img.rows - templ.rows + 1);\n+ _result.create(corrSize, CV_32F);\n+ Mat result = _result.getMat();\n```",
+ "output": "@vpisarev In matchTemplate(), only !mask.empty() will call matchTemplateMask(). So it may be unnecessary to check mask != 0 here?"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/imgproc/src/templmatch.cpp\n**Change Type:** modified\n**Context:** PR #3554: [imgproc]Add template match with mask. - Signed-off-by: Yan Wang yan.wang@linux.intel.com\n\n**Code Changes:**\n```diff\n@@ -814,12 +814,97 @@ void crossCorr( const Mat& img, const Mat& _templ, Mat& corr,\n }\n }\n }\n+\n+static void matchTemplateMask( InputArray _img, InputArray _templ, OutputArray _result, int method, InputArray _mask )\n+{\n+ int type = _img.type(), depth = CV_MAT_DEPTH(type), cn = CV_MAT_CN(type);\n+ CV_Assert( CV_TM_SQDIFF <= method && method <= CV_TM_CCOEFF_NORMED );\n+ CV_Assert( (depth == CV_8U || depth == CV_32F) && type == _templ.type() && _img.dims() <= 2 );\n```",
+ "output": "Sure. I will use CV_Error(Error::StsNotImplemented, \"\"). Thanks for your review."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/imgcodecs/src/grfmt_jpeg2000_openjpeg.cpp\n**Change Type:** added\n**Context:** PR #16494: RFC: Jpeg2000 OpenJPEG port - resolves #5849 relates #14058 Currently missing/incomplete: - [x] Write support - [ ] Reading of \"signed\" com...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,762 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+//\n+// Copyright (C) 2020, Stefan Brüns \n+\n+#include \"precomp.hpp\"\n+\n+#ifdef HAVE_OPENJPEG\n```",
+ "output": "Could you please use [short license header](https://github.com/opencv/opencv/wiki/Coding_Style_Guide#file-structure)? ``` // This file is part of OpenCV project. // It is subject to the license terms in the LICENSE file found in the top-level directory // of this distribution and at http://opencv.org/license.html. // // Copyright (C) 2020, Stefan Brüns ```"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/videoio/src/cap_ffmpeg_impl.hpp\n**Change Type:** modified\n**Context:** PR #9292: Using environment variable to store capture options - String is parsed by av_dict_parse_string(ENV{OPENCV_FFMPEG_CAPTURE_OPTIONS}, \";\", \"|\")...\n\n**Code Changes:**\n```diff\n@@ -781,7 +781,23 @@ bool CvCapture_FFMPEG::open( const char* _filename )\n #endif\n \n #if LIBAVFORMAT_BUILD >= CALC_FFMPEG_VERSION(52, 111, 0)\n+#ifndef NO_GETENV\n+ char* options = getenv(\"OPENCV_FFMPEG_CAPTURE_OPTIONS\");\n+ if(options == NULL)\n+ {\n+ av_dict_set(&dict, \"rtsp_transport\", \"tcp\", 0);\n+ }\n```",
+ "output": "Thanks, I am unfamiliar with the ffmpeg versioning. Fixes incoming."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/dnn/src/cuda/atomics.hpp\n**Change Type:** modified\n**Context:** PR #16218: Enable cuda4dnn on hardware without support for __half - ie. hardware with compute capability < 5.3 It compiles/link fine and I was able to...\n\n**Code Changes:**\n```diff\n@@ -8,7 +8,12 @@\n #include \n #include \n \n+// The 16-bit __half floating-point version of atomicAdd() is only supported by devices of compute capability 7.x and higher.\n+// https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#atomicadd\n #if !defined(__CUDA_ARCH__) || __CUDA_ARCH__ >= 700\n+// And half-precision floating-point operations are not supported by devices of compute capability strictly lower than 5.3\n+// https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#features-and-technical-specifications\n+#elif __CUDA_ARCH__ < 530\n```",
+ "output": "@JulienMaille friendly reminder."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/gapi/test/own/mat_tests.cpp\n**Change Type:** modified\n**Context:** PR #16577: G-API: Fix. own::Mat successfully creates a {-1, -1} mat. - ### own::Mat successfully created a mat with negative dimensions. Method `creat...\n\n**Code Changes:**\n```diff\n@@ -588,4 +588,23 @@ TEST(OwnMat, ROIView)\n << to_ocv(roi_view) << std::endl\n << expected_cv_mat << std::endl;\n }\n+\n+TEST(OwnMat, CreateWithNegativeDims)\n+{\n+ Mat own_mat;\n+ ASSERT_ANY_THROW(own_mat.create(cv::Size{-1, -1}, CV_8U));\n+}\n```",
+ "output": "Put space before `EXPECT_ANY_THROW`. BTW, try to use this pattern: https://medium.com/@pjbgf/title-testing-code-ocd-and-the-aaa-pattern-df453975ab80"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/gapi/test/own/mat_tests.cpp\n**Change Type:** modified\n**Context:** PR #16577: G-API: Fix. own::Mat successfully creates a {-1, -1} mat. - ### own::Mat successfully created a mat with negative dimensions. Method `creat...\n\n**Code Changes:**\n```diff\n@@ -588,4 +588,23 @@ TEST(OwnMat, ROIView)\n << to_ocv(roi_view) << std::endl\n << expected_cv_mat << std::endl;\n }\n+\n+TEST(OwnMat, CreateWithNegativeDims)\n+{\n+ Mat own_mat;\n+ ASSERT_ANY_THROW(own_mat.create(cv::Size{-1, -1}, CV_8U));\n+}\n```",
+ "output": "@TolyaTalamanov I made this tests easier. Is split necessary?"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/core/src/parallel.cpp\n**Change Type:** modified\n**Context:** PR #16457: Fixed getCPUCount | Issue #16268 - #### This pullrequest changes Fixed getCPUCount to account for cgroups. Added common code to parse CPU ...\n\n**Code Changes:**\n```diff\n@@ -58,13 +58,20 @@\n #include \n #include \n #include \n+ #include \n #if defined __ANDROID__\n #include \n+ #include \n+ #include \n #elif defined __APPLE__\n```",
+ "output": "Thanks for the review @mshabunin, As per [this](https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html#mounting), they are backward compatible and we get the info we need from v1 itself. Open to change if required as I do not know much about it. > Does HAIKU and EMSCRIPTEN have cgroups at all? I am not sure about this, but if the file does not exist, we handle that case by returning 0. Again open to removing it if needed, did not find much data on this."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/core/src/parallel.cpp\n**Change Type:** modified\n**Context:** PR #16457: Fixed getCPUCount | Issue #16268 - #### This pullrequest changes Fixed getCPUCount to account for cgroups. Added common code to parse CPU ...\n\n**Code Changes:**\n```diff\n@@ -58,13 +58,20 @@\n #include \n #include \n #include \n+ #include \n #if defined __ANDROID__\n #include \n+ #include \n+ #include \n #elif defined __APPLE__\n```",
+ "output": "Yeah, I noticed two types: 1. Just `3` or `4`, etc, it can be interpreted as CPU 4 or 4 CPUs. If one number it means 4 CPUs 2. Next is `1-2,5`. here I just realized I look only for `-`, I'll look for `,` also, thanks"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/core/src/parallel.cpp\n**Change Type:** modified\n**Context:** PR #16457: Fixed getCPUCount | Issue #16268 - #### This pullrequest changes Fixed getCPUCount to account for cgroups. Added common code to parse CPU ...\n\n**Code Changes:**\n```diff\n@@ -58,13 +58,20 @@\n #include \n #include \n #include \n+ #include \n #if defined __ANDROID__\n #include \n+ #include \n+ #include \n #elif defined __APPLE__\n```",
+ "output": "Ohh I get what you are saying, ok I thought that was number of CPUs, thanks for the explanation, I'll fix that and the documentation error."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/core/src/parallel.cpp\n**Change Type:** modified\n**Context:** PR #16457: Fixed getCPUCount | Issue #16268 - #### This pullrequest changes Fixed getCPUCount to account for cgroups. Added common code to parse CPU ...\n\n**Code Changes:**\n```diff\n@@ -58,13 +58,20 @@\n #include \n #include \n #include \n+ #include \n #if defined __ANDROID__\n #include \n+ #include \n+ #include \n #elif defined __APPLE__\n```",
+ "output": "Oh ok that makes sense, 0.5 makes 1 the valid count, since we take min_non_zero, we might end up taking a bigger value like 8 from other methods cause this is 0. Thanks for explaining."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/core/src/parallel.cpp\n**Change Type:** modified\n**Context:** PR #16457: Fixed getCPUCount | Issue #16268 - #### This pullrequest changes Fixed getCPUCount to account for cgroups. Added common code to parse CPU ...\n\n**Code Changes:**\n```diff\n@@ -58,13 +58,20 @@\n #include \n #include \n #include \n+ #include \n #if defined __ANDROID__\n #include \n+ #include \n+ #include \n #elif defined __APPLE__\n```",
+ "output": "@alalek, can I check cgroups for android as well?"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/core/src/parallel.cpp\n**Change Type:** modified\n**Context:** PR #16457: Fixed getCPUCount | Issue #16268 - #### This pullrequest changes Fixed getCPUCount to account for cgroups. Added common code to parse CPU ...\n\n**Code Changes:**\n```diff\n@@ -58,13 +58,20 @@\n #include \n #include \n #include \n+ #include \n #if defined __ANDROID__\n #include \n+ #include \n+ #include \n #elif defined __APPLE__\n```",
+ "output": "Thanks for the info, I'll read up more on it."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** samples/python/laplace.py\n**Change Type:** added\n**Context:** PR #14107: Adding python codes to samples/python - There are some files in samples/cpp whose python samples are not available. So, this PR is just add...\n**Review Line:** 12\n\n**Code Changes:**\n```diff\n+ Usage:\n+ python laplace.py \n+ If no arguments given default arguments will be used.\n+\n+ Keyboard Shortcuts:\n+ Press space bar to exit the program.\n+ '''\n+\n+# Python 2/3 compatibility\n+from __future__ import print_function\n+\n```",
+ "output": "@adityak2920 Friendly reminder."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** apps/interactive-calibration/defaultConfig.xml\n**Change Type:** modified\n**Context:** PR #16445: fixed typo - # This pullrequest changes I fixed the following typo in opencv. - `lenght` -> `length` - `Lenght` -> `Length`\n**Review Line:** 4\n\n**Code Changes:**\n```diff\n@@ -1,7 +1,7 @@\n \n \n 0\n-200\n+200\n 100\n 1\n 30\n```",
+ "output": "@alalek @dkurt Thank you for your advice. > I believe this compatibility code can handle migration I'll fix this point."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** apps/interactive-calibration/defaultConfig.xml\n**Change Type:** modified\n**Context:** PR #16445: fixed typo - # This pullrequest changes I fixed the following typo in opencv. - `lenght` -> `length` - `Lenght` -> `Length`\n**Review Line:** 4\n\n**Code Changes:**\n```diff\n@@ -1,7 +1,7 @@\n \n \n 0\n-200\n+200\n 100\n 1\n 30\n```",
+ "output": "@atinfinity do you have progress on this?"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** apps/interactive-calibration/defaultConfig.xml\n**Change Type:** modified\n**Context:** PR #16445: fixed typo - # This pullrequest changes I fixed the following typo in opencv. - `lenght` -> `length` - `Lenght` -> `Length`\n**Review Line:** 4\n\n**Code Changes:**\n```diff\n@@ -1,7 +1,7 @@\n \n \n 0\n-200\n+200\n 100\n 1\n 30\n```",
+ "output": "@asmorkalov Sorry for the late reply. I'll try to do this weekend."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** apps/interactive-calibration/defaultConfig.xml\n**Change Type:** modified\n**Context:** PR #16445: fixed typo - # This pullrequest changes I fixed the following typo in opencv. - `lenght` -> `length` - `Lenght` -> `Length`\n**Review Line:** 4\n\n**Code Changes:**\n```diff\n@@ -1,7 +1,7 @@\n \n \n 0\n-200\n+200\n 100\n 1\n 30\n```",
+ "output": "@asmorkalov I fixed this point. https://github.com/opencv/opencv/pull/16445/commits/db87044f4ec7d43aeb442bf0d48e032c1c25f814"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/calib3d/include/opencv2/calib3d.hpp\n**Change Type:** modified\n**Context:** PR #15993: This is a correction of the previously missleading documentation and a warning related to a common calibration failure described in issue #...\n\n**Code Changes:**\n```diff\n@@ -118,7 +118,18 @@ v = f_y \\times y'' + c_y\n tangential distortion coefficients. \\f$s_1\\f$, \\f$s_2\\f$, \\f$s_3\\f$, and \\f$s_4\\f$, are the thin prism distortion\n coefficients. Higher-order coefficients are not considered in OpenCV.\n \n-The next figures show two common types of radial distortion: barrel distortion (typically \\f$ k_1 < 0 \\f$) and pincushion distortion (typically \\f$ k_1 > 0 \\f$).\n+The next figures show two common types of radial distortion: barrel distortion\n+(\\f$ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \\f$ monotonically decreasing)\n+and pincushion distortion (\\f$ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \\f$ monotonically increasing).\n+Radial distortion is always monotonic for real lenses,\n+and if the estimator produces a non monotonic result,\n```",
+ "output": "@catree Could you please take a look on this update? Thank you!"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in performance optimization. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/calib3d/include/opencv2/calib3d.hpp\n**Change Type:** modified\n**Context:** PR #15993: This is a correction of the previously missleading documentation and a warning related to a common calibration failure described in issue #...\n\n**Code Changes:**\n```diff\n@@ -118,7 +118,18 @@ v = f_y \\times y'' + c_y\n tangential distortion coefficients. \\f$s_1\\f$, \\f$s_2\\f$, \\f$s_3\\f$, and \\f$s_4\\f$, are the thin prism distortion\n coefficients. Higher-order coefficients are not considered in OpenCV.\n \n-The next figures show two common types of radial distortion: barrel distortion (typically \\f$ k_1 < 0 \\f$) and pincushion distortion (typically \\f$ k_1 > 0 \\f$).\n+The next figures show two common types of radial distortion: barrel distortion\n+(\\f$ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \\f$ monotonically decreasing)\n+and pincushion distortion (\\f$ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \\f$ monotonically increasing).\n+Radial distortion is always monotonic for real lenses,\n+and if the estimator produces a non monotonic result,\n```",
+ "output": "Yes, I think it is correct. Most likely since `k1` is an order of magnitude greater than the other terms, it is \"sufficient\" (or at least this is what can be read elsewhere) to look at the sign of `k1` to determine quickly if the distortion is barrel or pincushion? --- @midjji Looking at the equations above, is it not if `(1 + k1.r^2 + k2.r^4 + k3.r^6)/(1 + k4.r^2 + k5.r^4 + k6.r^6)` (ignoring the tangential distortion coefficients) is monotonically decreasing it produces a barrel distortion? Or it is more complicated when taking into account the denominator?"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/calib3d/include/opencv2/calib3d.hpp\n**Change Type:** modified\n**Context:** PR #15993: This is a correction of the previously missleading documentation and a warning related to a common calibration failure described in issue #...\n\n**Code Changes:**\n```diff\n@@ -118,7 +118,18 @@ v = f_y \\times y'' + c_y\n tangential distortion coefficients. \\f$s_1\\f$, \\f$s_2\\f$, \\f$s_3\\f$, and \\f$s_4\\f$, are the thin prism distortion\n coefficients. Higher-order coefficients are not considered in OpenCV.\n \n-The next figures show two common types of radial distortion: barrel distortion (typically \\f$ k_1 < 0 \\f$) and pincushion distortion (typically \\f$ k_1 > 0 \\f$).\n+The next figures show two common types of radial distortion: barrel distortion\n+(\\f$ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \\f$ monotonically decreasing)\n+and pincushion distortion (\\f$ 1 + k_1 r^2 + k_2 r^4 + k_3 r^6 \\f$ monotonically increasing).\n+Radial distortion is always monotonic for real lenses,\n+and if the estimator produces a non monotonic result,\n```",
+ "output": "@midjji > Regarding the suggestion, there isn't a opt framework in opencv which supports it, so technically correct as is. Though it's unimportant. Shall I fix it? What do you mean? Do you want to change the `framework` word in the second part of the sentence? If you feel that the suggestion is not relevant, feel free to ignore it. At the end, it is just a minor suggestion."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/dnn/src/layers/pooling_layer.cpp\n**Change Type:** modified\n**Context:** PR #16223: Add human parsing demo - **Merge with extra:** opencv/opencv_extra#695 ``` force_builders=Custom,Custom Win,Custom Mac build_image:Custom=u...\n\n**Code Changes:**\n```diff\n@@ -79,6 +79,7 @@ class PoolingLayerImpl CV_FINAL : public PoolingLayer\n {\n computeMaxIdx = true;\n globalPooling = false;\n+ isGlobalPooling = std::vector(3, false);\n stride = Size(1, 1);\n pad_t = pad_l = pad_b = pad_r = 0;\n \n@@ -95,7 +96,8 @@ class PoolingLayerImpl CV_FINAL : public PoolingLayer\n else\n```",
+ "output": "please check that we still need this change"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/dnn/src/layers/slice_layer.cpp\n**Change Type:** modified\n**Context:** PR #16223: Add human parsing demo - **Merge with extra:** opencv/opencv_extra#695 ``` force_builders=Custom,Custom Win,Custom Mac build_image:Custom=u...\n**Review Line:** 117\n\n**Code Changes:**\n```diff\n@@ -114,7 +114,8 @@ class SliceLayerImpl : public SliceLayer\n virtual bool supportBackend(int backendId) CV_OVERRIDE\n {\n return backendId == DNN_BACKEND_OPENCV ||\n- ((backendId == DNN_BACKEND_INFERENCE_ENGINE_NN_BUILDER_2019 || backendId == DNN_BACKEND_INFERENCE_ENGINE_NGRAPH) &&\n+ (backendId == DNN_BACKEND_INFERENCE_ENGINE_NGRAPH && sliceRanges.size() == 1) ||\n+ (backendId == DNN_BACKEND_INFERENCE_ENGINE_NN_BUILDER_2019 &&\n #ifdef HAVE_INF_ENGINE\n INF_ENGINE_VER_MAJOR_GE(INF_ENGINE_RELEASE_2019R1) &&\n #endif\n```",
+ "output": "Oh I got it (just need to open an entire file to see). Thanks!"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** samples/dnn/human_parsing.py\n**Change Type:** added\n**Context:** PR #16223: Add human parsing demo - **Merge with extra:** opencv/opencv_extra#695 ``` force_builders=Custom,Custom Win,Custom Mac build_image:Custom=u...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,178 @@\n+#!/usr/bin/env python\n+'''\n+You can download the converted pb model from https://www.dropbox.com/s/qag9vzambhhkvxr/lip_jppnet_384.pb?dl=0\n+or convert the model yourself.\n+\n+Follow these steps if you want to convert the original model yourself:\n+ To get original .meta pre-trained model download https://drive.google.com/file/d/1BFVXgeln-bek8TCbRjN6utPAgRE0LJZg/view\n+ For correct convert .meta to .pb model download original repository https://github.com/Engineering-Course/LIP_JPPNet\n+ Change script evaluate_parsing_JPPNet-s2.py for human parsing\n```",
+ "output": "Please check that script still running. It seems to me that this string is invalid. You may also replace the link to this one: https://www.dropbox.com/s/qag9vzambhhkvxr/lip_jppnet_384.pb?dl=0"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/src/qrcode.cpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -22,6 +22,48 @@ namespace cv\n {\n using std::vector;\n \n+static bool checkQRInputImage(InputArray img, Mat& gray)\n+{\n+ CV_Assert(!img.empty());\n+ CV_CheckDepthEQ(img.depth(), CV_8U, \"\");\n+\n+ if (img.cols() <= 20 || img.rows() <= 20)\n```",
+ "output": "@rayonnant14, I don't see answer."
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/src/qrcode.cpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -22,6 +22,48 @@ namespace cv\n {\n using std::vector;\n \n+static bool checkQRInputImage(InputArray img, Mat& gray)\n+{\n+ CV_Assert(!img.empty());\n+ CV_CheckDepthEQ(img.depth(), CV_8U, \"\");\n+\n+ if (img.cols() <= 20 || img.rows() <= 20)\n```",
+ "output": "@allnes @dkurt for parameters CV_RETR_TREE and CV_CHAIN_APPROX_SIMPLE in function findContours"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/include/opencv2/objdetect.hpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -694,8 +694,8 @@ class CV_EXPORTS_W QRCodeDetector\n CV_WRAP bool detect(InputArray img, OutputArray points) const;\n \n /** @brief Decodes QR code in image once it's found by the detect() method.\n- Returns UTF8-encoded output string or empty string if the code cannot be decoded.\n \n+ Returns UTF8-encoded output string or empty string if the code cannot be decoded.\n @param img grayscale or color (BGR) image containing QR code.\n @param points Quadrangle vertices found by detect() method (or some other algorithm).\n @param straight_qrcode The optional output image containing rectified and binarized QR code\n```",
+ "output": "@dkurt Mat contains 4 points of qr-code bounding box. I use vector of Mat by the reason more than one qr-code on the image"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/include/opencv2/objdetect.hpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -694,8 +694,8 @@ class CV_EXPORTS_W QRCodeDetector\n CV_WRAP bool detect(InputArray img, OutputArray points) const;\n \n /** @brief Decodes QR code in image once it's found by the detect() method.\n- Returns UTF8-encoded output string or empty string if the code cannot be decoded.\n \n+ Returns UTF8-encoded output string or empty string if the code cannot be decoded.\n @param img grayscale or color (BGR) image containing QR code.\n @param points Quadrangle vertices found by detect() method (or some other algorithm).\n @param straight_qrcode The optional output image containing rectified and binarized QR code\n```",
+ "output": "@dkurt fixed it"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/src/qrcode.cpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -22,6 +22,48 @@ namespace cv\n {\n using std::vector;\n \n+static bool checkQRInputImage(InputArray img, Mat& gray)\n+{\n+ CV_Assert(!img.empty());\n+ CV_CheckDepthEQ(img.depth(), CV_8U, \"\");\n+\n+ if (img.cols() <= 20 || img.rows() <= 20)\n```",
+ "output": "@allnes fixed it in new commit"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/perf/perf_qrcode_pipeline.cpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -53,6 +53,56 @@ PERF_TEST_P_(Perf_Objdetect_QRCode, decode)\n }\n #endif\n \n+typedef ::perf::TestBaseWithParam< std::string > Perf_Objdetect_QRCode_Multi;\n+\n+PERF_TEST_P_(Perf_Objdetect_QRCode_Multi, detectMulti)\n+{\n+ const std::string name_current_image = GetParam();\n+ const std::string root = \"cv/qrcode/multiple/\";\n```",
+ "output": "@allnes corrected it"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/src/qrcode.cpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -22,6 +22,48 @@ namespace cv\n {\n using std::vector;\n \n+static bool checkQRInputImage(InputArray img, Mat& gray)\n+{\n+ CV_Assert(!img.empty());\n+ CV_CheckDepthEQ(img.depth(), CV_8U, \"\");\n+\n+ if (img.cols() <= 20 || img.rows() <= 20)\n```",
+ "output": "@allnes corrected it"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/src/qrcode.cpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -22,6 +22,48 @@ namespace cv\n {\n using std::vector;\n \n+static bool checkQRInputImage(InputArray img, Mat& gray)\n+{\n+ CV_Assert(!img.empty());\n+ CV_CheckDepthEQ(img.depth(), CV_8U, \"\");\n+\n+ if (img.cols() <= 20 || img.rows() <= 20)\n```",
+ "output": "@allness done it"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/src/qrcode.cpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -22,6 +22,48 @@ namespace cv\n {\n using std::vector;\n \n+static bool checkQRInputImage(InputArray img, Mat& gray)\n+{\n+ CV_Assert(!img.empty());\n+ CV_CheckDepthEQ(img.depth(), CV_8U, \"\");\n+\n+ if (img.cols() <= 20 || img.rows() <= 20)\n```",
+ "output": "@allnes correct it"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/include/opencv2/objdetect.hpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -694,8 +694,8 @@ class CV_EXPORTS_W QRCodeDetector\n CV_WRAP bool detect(InputArray img, OutputArray points) const;\n \n /** @brief Decodes QR code in image once it's found by the detect() method.\n- Returns UTF8-encoded output string or empty string if the code cannot be decoded.\n \n+ Returns UTF8-encoded output string or empty string if the code cannot be decoded.\n @param img grayscale or color (BGR) image containing QR code.\n @param points Quadrangle vertices found by detect() method (or some other algorithm).\n @param straight_qrcode The optional output image containing rectified and binarized QR code\n```",
+ "output": "@alalek std::string doesn't work fine in python and java Maybe, it is better to use cv::String to avoid problems?"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/src/qrcode.cpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -22,6 +22,48 @@ namespace cv\n {\n using std::vector;\n \n+static bool checkQRInputImage(InputArray img, Mat& gray)\n+{\n+ CV_Assert(!img.empty());\n+ CV_CheckDepthEQ(img.depth(), CV_8U, \"\");\n+\n+ if (img.cols() <= 20 || img.rows() <= 20)\n```",
+ "output": "@alalek QRDetect::separateVerticalLines() finds the best coeff_epsilon by comparison of compactness, which kmeans returns (supposed that number of clusters is 3(1 qr-code). If there is more than one qr-code on image this method doesn't work correctly. By this reason there is some changes in code in QRDetectMulti::separateVerticalLines() Maybe, it should be renamed to avoid the mess"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/src/qrcode.cpp\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -22,6 +22,48 @@ namespace cv\n {\n using std::vector;\n \n+static bool checkQRInputImage(InputArray img, Mat& gray)\n+{\n+ CV_Assert(!img.empty());\n+ CV_CheckDepthEQ(img.depth(), CV_8U, \"\");\n+\n+ if (img.cols() <= 20 || img.rows() <= 20)\n```",
+ "output": "This part must be reworked - using of **drawing** functions in algorithms should be avoided. /cc @allnes"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/objdetect/misc/python/test/test_qrcode_detect.py\n**Change Type:** modified\n**Context:** PR #15338: QR-Code detector : multiple detection - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/684\n\n**Code Changes:**\n```diff\n@@ -11,8 +11,42 @@\n from tests_common import NewOpenCVTests\n \n class qrcode_detector_test(NewOpenCVTests):\n+\n+ def test_detect(self):\n+ img = cv.imread(os.path.join(self.extraTestDataPath, 'cv/qrcode/link_ocv.jpg'))\n+ self.assertFalse(img is None)\n+ detector = cv.QRCodeDetector()\n+ retval, points = detector.detect(img)\n```",
+ "output": "@alalek Which way is better img is None or np.shape(img) is () ?"
+ },
+ {
+ "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.",
+ "input": "**File:** modules/videoio/src/cap_ffmpeg_impl.hpp\n**Change Type:** modified\n**Context:** PR #16190: Videocapture skip frames - ### This pullrequest changes **Add the possibility to skip frames** in VideoCapture by using the existing `OPENC...\n**Review Line:** 988\n\n**Code Changes:**\n```diff\n+ enc->skip_frame = AVDISCARD_NONINTRA;\n+#endif\n+ else if (strcmp(avdiscard_entry->value, \"nonkey\") == 0)\n+ enc->skip_frame = AVDISCARD_NONKEY;\n+ else if (strcmp(avdiscard_entry->value, \"nonref\") == 0)\n+ enc->skip_frame = AVDISCARD_NONREF;\n+ }\n+#endif\n+\n #if LIBAVFORMAT_BUILD < CALC_FFMPEG_VERSION(53, 2, 0)\n #define AVMEDIA_TYPE_VIDEO CODEC_TYPE_VIDEO\n```",
+ "output": "@alalek All options (except `AVDISCARD_NONINTRA`) are in use since ffmpeg 0.7 (libavcodec 52.123.0) First use of `AVDISCARD_NONINTRA` is in version 2.3 (libavcodec 55.69.100). I added some checks on LIBAVCODEC_BUILD version in my last commit, and squashed previous ones."
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/python/text_skewness_correction.py\n**Change Type:** added\n**Context:** PR #16170: added tutorial for text skewness correction - ### This pull request changes Given an image containing a rotated block of text at an unknow...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,58 @@\n+'''\n+Text skewness correction\n+This tutorial demonstrates how to correct the skewness in a text.\n+The program takes as input a skewed source image and shows non skewed text.\n+\n+Usage:\n+ python text_skewness_correction.py --image \"Image path\"\n+'''\n+\n```", + "output": "@asmorkalov when I changed the branch from master to 3.4 this pr suddenly started showing 903 commits and files changed = 1828 is there something wrong How to keep only the commits that I have made and earlier all builds had passed now it is showing that the builds are failed can you please tell how to do it correctly as I am new to open source contribution and Github" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #16309: Added resize flag handlers for imread and imdecode | Issue #16203 - # This pullrequest changes 1. 
Added flag handler for imdecode which was...\n\n**Code Changes:**\n```diff\n@@ -423,12 +423,12 @@ imread_( const String& filename, int flags, int hdrtype, Mat* mat=0 )\n int scale_denom = 1;\n if( flags > IMREAD_LOAD_GDAL )\n {\n- if( flags & IMREAD_REDUCED_GRAYSCALE_2 )\n- scale_denom = 2;\n- else if( flags & IMREAD_REDUCED_GRAYSCALE_4 )\n- scale_denom = 4;\n- else if( flags & IMREAD_REDUCED_GRAYSCALE_8 )\n- scale_denom = 8;\n```", + "output": "Thanks for the review @alalek, do I add the if conditions for greyscale and color, then it'll be readable, also I add these conditions to imread as well right?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #16309: Added resize flag handlers for imread and imdecode | Issue #16203 - # This pullrequest changes 1. Added flag handler for imdecode which was...\n\n**Code Changes:**\n```diff\n@@ -423,12 +423,12 @@ imread_( const String& filename, int flags, int hdrtype, Mat* mat=0 )\n int scale_denom = 1;\n if( flags > IMREAD_LOAD_GDAL )\n {\n- if( flags & IMREAD_REDUCED_GRAYSCALE_2 )\n- scale_denom = 2;\n- else if( flags & IMREAD_REDUCED_GRAYSCALE_4 )\n- scale_denom = 4;\n- else if( flags & IMREAD_REDUCED_GRAYSCALE_8 )\n- scale_denom = 8;\n```", + "output": "Sure @sturkmen72, I'll remove it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #16309: Added resize flag handlers for imread and imdecode | Issue #16203 - # This pullrequest changes 1. 
Added flag handler for imdecode which was...\n\n**Code Changes:**\n```diff\n@@ -423,12 +423,12 @@ imread_( const String& filename, int flags, int hdrtype, Mat* mat=0 )\n int scale_denom = 1;\n if( flags > IMREAD_LOAD_GDAL )\n {\n- if( flags & IMREAD_REDUCED_GRAYSCALE_2 )\n- scale_denom = 2;\n- else if( flags & IMREAD_REDUCED_GRAYSCALE_4 )\n- scale_denom = 4;\n- else if( flags & IMREAD_REDUCED_GRAYSCALE_8 )\n- scale_denom = 8;\n```", + "output": "Oh sorry, my bad, skipped this, thanks." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/test/test_read_write.cpp\n**Change Type:** modified\n**Context:** PR #16309: Added resize flag handlers for imread and imdecode | Issue #16203 - # This pullrequest changes 1. Added flag handler for imdecode which was...\n\n**Code Changes:**\n```diff\n@@ -5,6 +5,95 @@\n \n namespace opencv_test { namespace {\n \n+/* < , > */\n+typedef tuple< tuple, tuple > Imgcodecs_Resize_t;\n+\n+typedef testing::TestWithParam< Imgcodecs_Resize_t > Imgcodecs_Resize;\n+\n+/* resize_flag_and_dims = */\n```", + "output": "Sure @alalek, will do" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/cv2.cpp\n**Change Type:** modified\n**Context:** PR #15915: Fix implicit conversion from array to scalar in python bindings - ### This pullrequest changes numpy array was implicitly converted by `PyA...\n\n**Code Changes:**\n```diff\n@@ -13,11 +13,14 @@\n # define Py_LIMITED_API 0x03030000\n #endif\n \n-#include \n+#include \n #include \n+#include \n \n #if PY_MAJOR_VERSION < 3\n```", + "output": "Thanks for catch it up. I've changed check for appropriate type. To be removed." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/python/src2/cv2.cpp\n**Change Type:** modified\n**Context:** PR #15915: Fix implicit conversion from array to scalar in python bindings - ### This pullrequest changes numpy array was implicitly converted by `PyA...\n**Review Line:** 284\n\n**Code Changes:**\n```diff\n+ {\n+ // According to the numpy documentation:\n+ // There are 21 statically-defined PyArray_Descr objects for the built-in data-types\n+ // So descriptor pointer is not owning.\n+ PyArray_Descr* to = getNumpyTypeDescriptor();\n+ if (canBeSafelyCasted(obj, to))\n+ {\n+ PyArray_CastScalarToCtype(obj, &value, to);\n+ return true;\n+ }\n+ }\n```", + "output": "@alalek Added some test to illustrate current problem with sequential types parsing (such as `vector` or `Size` (sequence of size 2)): https://github.com/VadimLevin/opencv/commit/07bdc4ad6c6d86627d20cdb203c27d03e7ac8652 For `vector` skipped test either fails with some inner error or perform narrowing conversion (`double` -> `int`) which we want to eliminate with strong type check. For `cv::Size` it simply rejects everything except tuple. So the main goal of the underlying idea, that I tried to describe in the previous comment, is elimination of confusing and non-intuitive conversion behavior. Implementation with templates may be a bit complicated, but in my opinion has more advantages rather disadvantages." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/js/test/run_puppeteer.js\n**Change Type:** added\n**Context:** PR #15503: Js test puppeteer - * js - tests and docs only * adds support to run js tests with puppeteer (headless chrome - with a single command) * di...\n**Review Line:** 168\n\n**Code Changes:**\n```diff\n+\n+By default will run a headless browser silently printing a small report in the terminal.\n+But it could used to debug the tests in the browser, take screenshots, global tool or\n+targeting external servers exposing the tests.\n+\n+TIP: you could install the tool globally (npm install --global build_js/bin) to execute it from any local folder.\n+\n+# Options\n+\n+ * port?: number. Default 8080\n+ * buildFolder?: string. Default __dirname (this folder)\n```", + "output": "I will be reviewing this these days and come back to you, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/opencl/opencl_info.hpp\n**Change Type:** modified\n**Context:** PR #15883: Refactor opencl_info - This pullrequest refactors parts of opencl information dumping: - Remove code duplication - Clarify functionality of...\n**Review Line:** 66\n\n**Code Changes:**\n```diff\n+ return \"dGPU\";\n+ }\n+ }\n+\n+ return \"unkown\";\n+}\n } // namespace\n \n static void dumpOpenCLInformation()\n@@ -80,12 +97,11 @@ static void dumpOpenCLInformation()\n for (int j = 0; j < platform->deviceNumber(); j++)\n```", + "output": "Good catch, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/opencl/opencl_info.hpp\n**Change Type:** modified\n**Context:** PR #15883: Refactor opencl_info - This pullrequest refactors parts of opencl information dumping: - Remove code duplication - Clarify functionality of...\n\n**Code Changes:**\n```diff\n@@ -47,6 +47,23 @@ static std::string bytesToStringRepr(size_t value)\n s = s.substr(0, s.size() - 1);\n return s;\n }\n+\n+static String getDeviceTypeString(const cv::ocl::Device& device)\n+{\n+ if (device.type() == cv::ocl::Device::TYPE_CPU) {\n+ return \"CPU\";\n+ }\n```", + "output": "Got it, thanks a lot for the info!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindOpenEXR.cmake\n**Change Type:** modified\n**Context:** PR #15159: fix FindOpenEXR to respect OPENEXR_ROOT - resolves #13403 ### This pullrequest changes - if CMake variable `OPENEXR_ROOT` is set, it has th...\n**Review Line:** 43\n\n**Code Changes:**\n```diff\n-FIND_LIBRARY(OPENEXR_HALF_LIBRARY\n- NAMES Half\n- PATH_SUFFIXES ${OPENEXR_LIBSEARCH_SUFFIXES}\n- PATHS ${LIBRARY_PATHS})\n+FOREACH(SEARCH_PATH ${SEARCH_PATHS})\n+ FIND_PATH(OPENEXR_INCLUDE_PATH ImfRgbaFile.h\n+ PATH_SUFFIXES OpenEXR\n+ NO_DEFAULT_PATH\n+ PATHS\n+ \"${SEARCH_PATH}/include\")\n \n```", + "output": "@SSE4 I found the issue. It's not related to CMake version. `FIND_PATH` like other `FIND_` functions set cache variable with path and does not search again, if something was found for the first time. I have some incomplete OpenEXR installation on my host or something that was not detected by your script in system directories. `FIND_PATH` finds headers in system path beforehand and always returns the same value ignoring `PATH` option. 
`NO_DEFAULT_PATH` option for `FIND_PATH` fixes the problem." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVFindOpenEXR.cmake\n**Change Type:** modified\n**Context:** PR #15159: fix FindOpenEXR to respect OPENEXR_ROOT - resolves #13403 ### This pullrequest changes - if CMake variable `OPENEXR_ROOT` is set, it has th...\n**Review Line:** 43\n\n**Code Changes:**\n```diff\n-FIND_LIBRARY(OPENEXR_HALF_LIBRARY\n- NAMES Half\n- PATH_SUFFIXES ${OPENEXR_LIBSEARCH_SUFFIXES}\n- PATHS ${LIBRARY_PATHS})\n+FOREACH(SEARCH_PATH ${SEARCH_PATHS})\n+ FIND_PATH(OPENEXR_INCLUDE_PATH ImfRgbaFile.h\n+ PATH_SUFFIXES OpenEXR\n+ NO_DEFAULT_PATH\n+ PATHS\n+ \"${SEARCH_PATH}/include\")\n \n```", + "output": "great news, thank you very much" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/src/opengl.cpp\n**Change Type:** modified\n**Context:** PR #15658: Cuda + OpenGL on ARM - ### This pull request changes There might be multiple ways of getting OpenCV compile on Tegra (NVIDIA Jetson) platfo...\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,13 @@\n #ifdef HAVE_OPENGL\n # include \"gl_core_3_1.hpp\"\n # ifdef HAVE_CUDA\n+# if (defined(__arm__) || defined(__aarch64__)) \\\n+ && !defined(OPENCV_SKIP_CUDA_OPENGL_ARM_WORKAROUND)\n+# include \n+# ifndef GL_VERSION\n+# define GL_VERSION 0x1F02\n+# endif\n```", + "output": "Thank you for update! I don't see information about fixed CUDA versions in linked forum thread, so this workaround should be enabled by default (but probably we should eliminate `#warning` from here or disable that message under another \"*_MESSAGE\" definition flag). Please change \"default\" value of `CL_VERSION` to `0x1F02`. 
I see this on my machine (Fedora): ``` /usr/include/GL/gl.h:#define GL_VERSION 0x1F02 /usr/include/GL/glcorearb.h:#define GL_VERSION 0x1F02 ``` I believe CMake changes are not necessary (option is very specific, lets keep it hidden). Flag can be added via CMake for future fixed versions (if necessary - workaround is harmless without warning message and proper default value): ``` cmake ... -DOPENCV_EXTRA_FLAGS=-DOPENCV_SKIP_CUDA_OPENGL_ARM_WORKAROUND=1 ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/cuda.hpp\n**Change Type:** modified\n**Context:** PR #12673: Fix build warnings - Builds: [1](http://pullrequest.opencv.org/buildbot/builders/master_etc-fedora-tbb-lin64/builds/11) [2](http://pullrequ...\n**Review Line:** 138\n\n**Code Changes:**\n```diff\n \n //! constructor for GpuMat headers pointing to user-allocated data\n- CV_WRAP GpuMat(int rows, int cols, int type, void* data, size_t step = Mat::AUTO_STEP);\n- CV_WRAP GpuMat(Size size, int type, void* data, size_t step = Mat::AUTO_STEP);\n+ GpuMat(int rows, int cols, int type, void* data, size_t step = Mat::AUTO_STEP);\n+ GpuMat(Size size, int type, void* data, size_t step = Mat::AUTO_STEP);\n \n //! creates a GpuMat header for a part of the bigger matrix\n CV_WRAP GpuMat(const GpuMat& m, Range rowRange, Range colRange);\n```", + "output": "It is shame I caused such mess! Thanks for the cleanup. BTW: It makes sense to disable args, while taking in consideration that we need to return `void` in order to support UMat wrapper." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_jpeg2000.cpp\n**Change Type:** modified\n**Context:** PR #13847: Add JPEG2000 compression flag. - I believe the title says it all. Since this is my first PR to the OpenCV repository, I might have missed s...\n**Review Line:** 482\n\n**Code Changes:**\n```diff\n if( channels > 3 || channels < 1 )\n return false;\n \n+ CV_Assert(params.size() % 2 == 0);\n+ double target_compression_rate = 1.0;\n+ for( size_t i = 0; i < params.size(); i += 2 )\n+ {\n+ switch(params[i])\n+ {\n+ case cv::IMWRITE_JPEG2000_COMPRESSION_X1000:\n+ target_compression_rate = std::min(std::max(params[i+1], 0), 1000) / 1000.0;\n```", + "output": "Thank you for your comments! Indeed a valid point! I will add a check, if that's still required after resolving https://github.com/opencv/opencv/pull/13847#discussion_r258039971." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/include/opencv2/imgcodecs.hpp\n**Change Type:** modified\n**Context:** PR #13847: Add JPEG2000 compression flag. - I believe the title says it all. 
Since this is my first PR to the OpenCV repository, I might have missed s...\n\n**Code Changes:**\n```diff\n@@ -94,7 +94,8 @@ enum ImwriteFlags {\n IMWRITE_PAM_TUPLETYPE = 128,//!< For PAM, sets the TUPLETYPE field to the corresponding string value that is defined for the format\n IMWRITE_TIFF_RESUNIT = 256,//!< For TIFF, use to specify which DPI resolution unit to set; see libtiff documentation for valid values\n IMWRITE_TIFF_XDPI = 257,//!< For TIFF, use to specify the X direction DPI\n- IMWRITE_TIFF_YDPI = 258 //!< For TIFF, use to specify the Y direction DPI\n+ IMWRITE_TIFF_YDPI = 258, //!< For TIFF, use to specify the Y direction DPI\n+ IMWRITE_JPEG2000_COMPRESSION_X1000 = 272 //!< For JPEG2000, use to specify the target compression rate (multiplied by 1000). The value can be from 0 to 1000. Default is 1000.\n };\n \n enum ImwriteEXRTypeFlags {\n```", + "output": "Thank you for your feedback! Concerning the increment: I was just guessing that the increment is a power of 2 because the previous values were 64,128, and 256. I can adjust this to 272, if you prefer. Concerning an OpenJPEG-based implementation: I have not used this library so far, i.e. I cannot directly answer this question. If you like, I can do some research and propose a OpenJPEG-based implementation. If you anyway want to drop JasPer, there is not really a point in merging this PR, right? What do you think?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/grfmt_jpeg2000.cpp\n**Change Type:** modified\n**Context:** PR #13847: Add JPEG2000 compression flag. - I believe the title says it all. 
Since this is my first PR to the OpenCV repository, I might have missed s...\n\n**Code Changes:**\n```diff\n@@ -43,6 +43,7 @@\n #include \"precomp.hpp\"\n \n #ifdef HAVE_JASPER\n+#include \n \n #include \"grfmt_jpeg2000.hpp\"\n #include \"opencv2/imgproc.hpp\"\n@@ -467,7 +468,7 @@ bool Jpeg2KEncoder::isFormatSupported( int depth ) const\n }\n```", + "output": "Thanks for your further investigation! I scanned again the JasPer documentation and it seems that there's no quality parameter. Therefore, my proposal would be the following: - introduce the parameter `IMWRITE_JPEG2000_COMPRESSION_X1000` proposed by you - do not introduce `IMWRITE_JPEG2000_QUALITY` now Once JasPer is removed in favour of OpenJPEG, one can maintain `IMWRITE_JPEG2000_COMPRESSION_X1000` with the same meaning as with JasPer and add a new parameter `IMWRITE_JPEG2000_QUALITY` which would have a similar meaning as the JPEG quality parameter. I believe this procedure would be the most future-proof version discussed so far. What do you think?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/vkcom/src/vkcom.cpp\n**Change Type:** added\n**Context:** PR #12703: dnn: Add a Vulkan based backend - This commit adds a new backend \"DNN_BACKEND_VKCOM\" and a new target \"DNN_TARGET_VULKAN\". VKCOM means vulk...\n**Review Line:** 95\n\n**Code Changes:**\n```diff\n+\n+ VKCOM_CHECK_BOOL_RET_VAL(init(), false);\n+ Context* ctx = getContext();\n+ if (ctx)\n+ {\n+ ctx->ref++;\n+ return true;\n+ }\n+\n+ // create device, queue, command pool\n+ VkDeviceQueueCreateInfo queueCreateInfo = {};\n```", + "output": "Yes, it is thread local. It is for multiple callings on initPerThread(). e.g., multiple Net objects existing on a thread simultaneously. I am looking into the issues you and @dkurt mentioned. 
But I have short time on coding in these days and next week for I will be in biz travel. Responses and fixes may be a little bit late. Sorry." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/doc/00-root.markdown\n**Change Type:** added\n**Context:** PR #12857: G-API Documentation: first submission - This PR introduces a number of new OpenCV documentation chapters for Graph API module. In particula...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,109 @@\n+# Graph API {#gapi}\n+\n+# Introduction {#gapi_root_intro}\n+\n+OpenCV Graph API (or G-API) is a new OpenCV module targeted to make\n+regular image processing fast and portable. These two goals are\n+achieved by introducing a new graph-based model of execution.\n+\n+G-API is a special module in OpenCV -- in contrast with the majority\n```", + "output": "It is better to extract tutorials code into separate files: - create gapi folder here: https://github.com/opencv/opencv/tree/4.0.0-beta/samples/cpp/tutorial_code (or here: `modules/gapi/docs/code` - for better compatibility with standalone gapi) - this code is checked at least for compilation - use `@include` or `@snippet` to reference from documentation part (`@example` as examples for functions only)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gapi/doc/00-root.markdown\n**Change Type:** added\n**Context:** PR #12857: G-API Documentation: first submission - This PR introduces a number of new OpenCV documentation chapters for Graph API module. 
In particula...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,109 @@\n+# Graph API {#gapi}\n+\n+# Introduction {#gapi_root_intro}\n+\n+OpenCV Graph API (or G-API) is a new OpenCV module targeted to make\n+regular image processing fast and portable. These two goals are\n+achieved by introducing a new graph-based model of execution.\n+\n+G-API is a special module in OpenCV -- in contrast with the majority\n```", + "output": "Ah got it, will export the code into a distinct location, thanks!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/base.hpp\n**Change Type:** modified\n**Context:** PR #12310: Introduce enum interface for type-safety - - Merge #12601 first to avoid breaking JS bindings - Merge with opencv/opencv_contrib#1744 ### T...\n**Review Line:** 445\n\n**Code Changes:**\n```diff\n-#define CV_Assert_10( expr1, expr2, expr3, expr4, expr5, expr6, expr7, expr8, expr9, expr10 ) CV_Assert_9(expr1, expr2, expr3, expr4, expr5, expr6, expr7, expr8, expr9 ); CV_Assert_1(expr10)\n-\n-#define CV_Assert_N(...) do { __CV_CAT(CV_Assert_, __CV_VA_NUM_ARGS(__VA_ARGS__)) (__VA_ARGS__); } while(0)\n+#define CV_Assert_2( expr, ... ) CV_Assert_1(expr); __CV_EXPAND(CV_Assert_1( __VA_ARGS__ ))\n+#define CV_Assert_3( expr, ... ) CV_Assert_1(expr); __CV_EXPAND(CV_Assert_2( __VA_ARGS__ ))\n+#define CV_Assert_4( expr, ... ) CV_Assert_1(expr); __CV_EXPAND(CV_Assert_3( __VA_ARGS__ ))\n+#define CV_Assert_5( expr, ... ) CV_Assert_1(expr); __CV_EXPAND(CV_Assert_4( __VA_ARGS__ ))\n+#define CV_Assert_6( expr, ... ) CV_Assert_1(expr); __CV_EXPAND(CV_Assert_5( __VA_ARGS__ ))\n+#define CV_Assert_7( expr, ... ) CV_Assert_1(expr); __CV_EXPAND(CV_Assert_6( __VA_ARGS__ ))\n+#define CV_Assert_8( expr, ... ) CV_Assert_1(expr); __CV_EXPAND(CV_Assert_7( __VA_ARGS__ ))\n+#define CV_Assert_9( expr, ... 
) CV_Assert_1(expr); __CV_EXPAND(CV_Assert_8( __VA_ARGS__ ))\n```", + "output": "@alalek Thanks for noticing that! I just fixed it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.inl.hpp\n**Change Type:** modified\n**Context:** PR #12310: Introduce enum interface for type-safety - - Merge #12601 first to avoid breaking JS bindings - Merge with opencv/opencv_contrib#1744 ### T...\n\n**Code Changes:**\n```diff\n@@ -199,6 +199,16 @@ inline void DictValue::release()\n case Param::REAL:\n delete pd;\n break;\n+ case Param::BOOLEAN:\n+ case Param::MAT:\n+ case Param::MAT_VECTOR:\n+ case Param::ALGORITHM:\n+ case Param::FLOAT:\n+ case Param::UNSIGNED_INT:\n```", + "output": "@alalek I thought you would mind if my patch exposes some hidden warnings, but if you do not mind, that is just perfect for me" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.inl.hpp\n**Change Type:** modified\n**Context:** PR #12310: Introduce enum interface for type-safety - - Merge #12601 first to avoid breaking JS bindings - Merge with opencv/opencv_contrib#1744 ### T...\n\n**Code Changes:**\n```diff\n@@ -199,6 +199,16 @@ inline void DictValue::release()\n case Param::REAL:\n delete pd;\n break;\n+ case Param::BOOLEAN:\n+ case Param::MAT:\n+ case Param::MAT_VECTOR:\n+ case Param::ALGORITHM:\n+ case Param::FLOAT:\n+ case Param::UNSIGNED_INT:\n```", + "output": "@alalek A lot of warning [are there](http://pullrequest.opencv.org/buildbot/builders/precommit-contrib_android/builds/11851) due to this change.. is that really OK?" 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.inl.hpp\n**Change Type:** modified\n**Context:** PR #12310: Introduce enum interface for type-safety - - Merge #12601 first to avoid breaking JS bindings - Merge with opencv/opencv_contrib#1744 ### T...\n\n**Code Changes:**\n```diff\n@@ -199,6 +199,16 @@ inline void DictValue::release()\n case Param::REAL:\n delete pd;\n break;\n+ case Param::BOOLEAN:\n+ case Param::MAT:\n+ case Param::MAT_VECTOR:\n+ case Param::ALGORITHM:\n+ case Param::FLOAT:\n+ case Param::UNSIGNED_INT:\n```", + "output": "@alalek All done!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/cvdef.h\n**Change Type:** modified\n**Context:** PR #12310: Introduce enum interface for type-safety - - Merge #12601 first to avoid breaking JS bindings - Merge with opencv/opencv_contrib#1744 ### T...\n**Review Line:** 467\n\n**Code Changes:**\n```diff\n+__CV_ENUM_FLAGS_BITWISE_AND (EnumType, EnumType, EnumType); \\\n+__CV_ENUM_FLAGS_BITWISE_XOR (EnumType, EnumType, EnumType); \\\n+ \\\n+__CV_ENUM_FLAGS_BITWISE_OR_EQ (EnumType, EnumType); \\\n+__CV_ENUM_FLAGS_BITWISE_AND_EQ (EnumType, EnumType); \\\n+__CV_ENUM_FLAGS_BITWISE_XOR_EQ (EnumType, EnumType); \\\n+\n /****************************************************************************************\\\n * static analysys *\n \\****************************************************************************************/\n```", + "output": "Thank you! I noticed that from the qt image warnings.. 
- If we remove the semicolons from the call, the editor would make wrong indentation (calls are made in several other places) - If we let the semicolons here, we will have warnings when `-Wpedantic` is enabled Therefore, the best solution is to add a semicolon to the macro header itself. I already made #12631 to do that." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/pooling_layer.cpp\n**Change Type:** modified\n**Context:** PR #12497: dnn: remove raw SSE2 implementation - ### This pullrequest changes - remove raw SSE2 implementation - Unfortunately, neither test_dnn nor ...\n\n**Code Changes:**\n```diff\n@@ -488,19 +488,16 @@ class PoolingLayerImpl CV_FINAL : public PoolingLayer\n max_val0 = v_max(max_val0, v0);\n max_val1 = v_max(max_val1, v1);\n }\n-#if CV_SSE2\n else if( stride_w == 2 )\n for (int k = 0; k < kernel_w*kernel_h; k++)\n {\n int index = ofsptr[k];\n- v_float32x4 v00 = v_load(srcData1 + index), v01 = v_load(srcData1 + index + 4);\n```", + "output": "Nice catch ! I just found ```v_int32x4``` and didn't really look for ```v_float32x4``` Thank you !" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/interface.h\n**Change Type:** modified\n**Context:** PR #12463: added basic support for CV_16F (the new datatype etc.) 
- CV_USRTYPE1 is now equal to CV_16F, which may break some [rarely used] functionali...\n**Review Line:** 79\n\n**Code Changes:**\n```diff\n@@ -76,6 +76,7 @@ typedef signed char schar;\n #define CV_32F 5\n #define CV_64F 6\n #define CV_USRTYPE1 7\n+#define CV_16F 7\n \n #define CV_MAT_DEPTH_MASK (CV_DEPTH_MAX - 1)\n #define CV_MAT_DEPTH(flags) ((flags) & CV_MAT_DEPTH_MASK)\n@@ -124,6 +125,12 @@ typedef signed char schar;\n #define CV_64FC3 CV_MAKETYPE(CV_64F,3)\n```", + "output": "New size of these elements are 2 bytes instead of previous 8. This will lead to memory corruption. /cc @dkurt" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/hal/intrin_avx.hpp\n**Change Type:** modified\n**Context:** PR #12063: further improvements in split & merge; started using non-temporary store instructions - 1. changed `static const __m128/256` to const `__m1...\n**Review Line:** 309\n\n**Code Changes:**\n```diff\n { _mm256_storeu_si256((__m256i*)ptr, a.val); } \\\n inline void v_store_aligned(_Tp* ptr, const _Tpvec& a) \\\n { _mm256_store_si256((__m256i*)ptr, a.val); } \\\n+ inline void v_store_aligned_nocache(_Tp* ptr, const _Tpvec& a) \\\n+ { _mm256_stream_si256((__m256i*)ptr, a.val); } \\\n+ inline void v_store(_Tp* ptr, const _Tpvec& a, hal::StoreMode mode) \\\n+ { \\\n+ if( mode == hal::STORE_UNALIGNED ) \\\n+ _mm256_storeu_si256((__m256i*)ptr, a.val); \\\n+ else if( mode == hal::STORE_ALIGNED_NOCACHE ) \\\n+ _mm256_stream_si256((__m256i*)ptr, a.val); \\\n```", + "output": "@alalek, I would say, the function parameter makes some things easier to debug, not harder. You just temporarily modify those `v_store(ptr, reg, mode)` implementations to always use unaligned stores and see whether the crashes are eliminated. 
Also, a debugger will stop and show you the disassembly at `movaps/movntps/...` instruction that attempted to write at unaligned address, regardless of whether this instruction is under condition or not. I would say, such function with embedded `if` makes some code more convenient to write and helps to avoid unnecessary code bloat." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #12071: Added ONNX parser with AlexNet support - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/499 related #11008 ``` force_bui...\n\n**Code Changes:**\n```diff\n@@ -800,6 +800,18 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n */\n CV_EXPORTS_W Net readNetFromModelOptimizer(const String &xml, const String &bin);\n \n+ /** @brief Reads a network model ONNX.\n+ * @param onnxFile path to the .onnx file with text description of the network architecture.\n+ * @returns Network object that ready to do forward, throw an exception in failure cases.\n+ */\n+ CV_EXPORTS_W Net readNetFromONNX(const String &onnxFile);\n+\n```", + "output": "Please fix doxygen warnings and keep a single `@returns` entry. http://pullrequest.opencv.org/buildbot/builders/precommit_docs/builds/16011" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** added\n**Context:** PR #12071: Added ONNX parser with AlexNet support - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/499 related #11008 ``` force_bui...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,585 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// Copyright (C) 2018, Intel Corporation, all rights reserved.\n+// Third party copyrights are property of their respective owners.\n+\n+#include \"../precomp.hpp\"\n+\n```", + "output": "Please check that the shape mat has type CV_32SC1." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/onnx/onnx_importer.cpp\n**Change Type:** added\n**Context:** PR #12071: Added ONNX parser with AlexNet support - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/499 related #11008 ``` force_bui...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,585 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// Copyright (C) 2018, Intel Corporation, all rights reserved.\n+// Third party copyrights are property of their respective owners.\n+\n+#include \"../precomp.hpp\"\n+\n```", + "output": "@l-bat, I'm still a bit confused about the `count` variable. Do we really need it? Am I right that we use it to register the network's inputs by pins? For example, (0, 0), (0, 1), etc. But at the same time we also push the corresponding name to `netInputs`. 
That is why I wanted to ask you to try the following changes: ```cpp if (constBlobs.find(name) == constBlobs.end()) { netInputs.push_back(name); layer_id.insert(std::make_pair(name, LayerInfo(0, netInputs.size() - 1))); } ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/src/smooth.cpp\n**Change Type:** modified\n**Context:** PR #12073: Bit-exact GaussianBlur reworked to use wide intrinsics - ### This pullrequest changes Bit-exact GaussianBlur reworked to use wide intrinsic...\n\n**Code Changes:**\n```diff\n@@ -1820,22 +1820,13 @@ template <>\n void hlineSmooth1N(const uint8_t* src, int cn, const ufixedpoint16* m, int, ufixedpoint16* dst, int len, int)\n {\n int lencn = len*cn;\n- v_uint16x8 v_mul = v_setall_u16(*((uint16_t*)m));\n int i = 0;\n- for (; i <= lencn - 16; i += 16)\n- {\n- v_uint8x16 v_src = v_load(src + i);\n- v_uint16x8 v_tmp0, v_tmp1;\n```", + "output": "@alalek, not needed; I would rather cast the `m` pointer to `uint16_t*` at the beginning" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_distancetransform.cpp\n**Change Type:** modified\n**Context:** PR #12278: Fix bug in distanceTransform\n\n**Code Changes:**\n```diff\n@@ -283,4 +283,23 @@ void CV_DisTransTest::prepare_to_validation( int /*test_case_idx*/ )\n \n TEST(Imgproc_DistanceTransform, accuracy) { CV_DisTransTest test; test.safe_run(); }\n \n+BIGDATA_TEST(Imgproc_DistanceTransform, large_image_12218)\n+{\n+ const int lls_maxcnt = 79992000; // labels's maximum count\n+ const int lls_mincnt = 1; // labels's minimum count\n+ int i, j, nz;\n+ Mat src(8000, 20000, CV_8UC1), dst, labels;\n```", + "output": "@alalek Thanks for your comments. 
This test is based on \"testDistanceTransformWithLabels()\". I created this test with reference to imgproc/misc/java/test/ImgprocTest.java." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_distancetransform.cpp\n**Change Type:** modified\n**Context:** PR #12278: Fix bug in distanceTransform\n\n**Code Changes:**\n```diff\n@@ -283,4 +283,23 @@ void CV_DisTransTest::prepare_to_validation( int /*test_case_idx*/ )\n \n TEST(Imgproc_DistanceTransform, accuracy) { CV_DisTransTest test; test.safe_run(); }\n \n+BIGDATA_TEST(Imgproc_DistanceTransform, large_image_12218)\n+{\n+ const int lls_maxcnt = 79992000; // labels's maximum count\n+ const int lls_mincnt = 1; // labels's minimum count\n+ int i, j, nz;\n+ Mat src(8000, 20000, CV_8UC1), dst, labels;\n```", + "output": "@alalek This test case is not suitable, so I will fix it later." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/test/test_distancetransform.cpp\n**Change Type:** modified\n**Context:** PR #12278: Fix bug in distanceTransform\n\n**Code Changes:**\n```diff\n@@ -283,4 +283,23 @@ void CV_DisTransTest::prepare_to_validation( int /*test_case_idx*/ )\n \n TEST(Imgproc_DistanceTransform, accuracy) { CV_DisTransTest test; test.safe_run(); }\n \n+BIGDATA_TEST(Imgproc_DistanceTransform, large_image_12218)\n+{\n+ const int lls_maxcnt = 79992000; // labels's maximum count\n+ const int lls_mincnt = 1; // labels's minimum count\n+ int i, j, nz;\n+ Mat src(8000, 20000, CV_8UC1), dst, labels;\n```", + "output": "@alalek I changed it to test with the theoretical value calculated based on the maximum and minimum values of labels." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/imgproc/out_of_focus_deblur_filter/out_of_focus_deblur_filter.markdown\n**Change Type:** modified\n**Context:** PR #12214: doc: fix English gramma in tutorial out-of-focus-deblur filter - doc: fix English grammar in the tutorial out-of-focus-deblur filter Thanks...\n**Review Line:** 12\n\n**Code Changes:**\n```diff\n-- what is PSF of out-of-focus image\n+- what a degradation image model is\n+- what the PSF of an out-of-focus image is\n - how to restore a blurred image\n-- what is Wiener filter\n+- what is a Wiener filter\n \n Theory\n ------\n \n-@note The explanation is based on the books @cite gonzalez and @cite gruzman. Also, you can refer to Matlab's tutorial [Image Deblurring in Matlab] and an article [SmartDeblur].\n```", + "output": "Oh, thank you. Yes, I agree, the same correction should be." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** doc/tutorials/imgproc/out_of_focus_deblur_filter/out_of_focus_deblur_filter.markdown\n**Change Type:** modified\n**Context:** PR #12214: doc: fix English gramma in tutorial out-of-focus-deblur filter - doc: fix English grammar in the tutorial out-of-focus-deblur filter Thanks...\n**Review Line:** 43\n\n**Code Changes:**\n```diff\n \\f[U' = H_w\\cdot S\\f]\n \n where\n-\\f$U'\\f$ is spectrum of estimation of original image \\f$U\\f$,\n-\\f$H_w\\f$ is restoration filter, for example, Wiener filter.\n+\\f$U'\\f$ is the spectrum of estimation of original image \\f$U\\f$, and\n+\\f$H_w\\f$ is the restoration filter, for example, the Wiener filter.\n \n-### What is Wiener filter?\n+### What is the Wiener filter?\n \n```", + "output": "Your variant sounds better. I agree. Thank you" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/crop_layer.cpp\n**Change Type:** modified\n**Context:** PR #12350: Replace Slice layer to Crop in Faster-RCNN networks from Caffe - ### This pullrequest changes Enable OCV/OCL_fp16 tests for Faster-RCNN ne...\n**Review Line:** 180\n\n**Code Changes:**\n```diff\n+ ieLayer->axis.push_back(2); // width\n+ ieLayer->offset.push_back(crop_ranges[3].start);\n+ ieLayer->dim.push_back(crop_ranges[3].end - crop_ranges[3].start);\n+\n+ return Ptr(new InfEngineBackendNode(ieLayer));\n+#endif // HAVE_INF_ENGINE\n+ return Ptr();\n+ }\n+\n std::vector crop_ranges;\n };\n```", + "output": "@vpisarev, Yes, it'd be better to add it here and in every other layer." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/nms.cpp\n**Change Type:** modified\n**Context:** PR #11390: Deep learning text detection sample - ### This pullrequest changes Added sample with deep learning text detection using EAST: An Efficient ...\n\n**Code Changes:**\n```diff\n@@ -8,6 +8,8 @@\n #include \"precomp.hpp\"\n #include \"nms.inl.hpp\"\n \n+#include \n+\n namespace cv\n {\n namespace dnn\n@@ -28,6 +30,27 @@ void NMSBoxes(const std::vector& bboxes, const std::vector& scores,\n```", + "output": "@vpisarev, that's right but now `rotatedRectangleIntersection` returns just intersection points without contour orientation. In example, ```cpp std::vector inter; RotatedRect r1(Point2f(110.48, 272.86), Size2f(34.1097, 12.6564), 0); RotatedRect r2(Point2f(109.976, 274.791), Size2f(36.023, 13.1669), 0); rotatedRectangleIntersection(r1, r2, inter); for (int i = 0; i < inter.size(); ++i) std::cout << inter[i].x << \" \" << inter[i].y << std::endl; ``` gives ``` 93.4252 268.208 127.535 268.208 93.4252 279.188 127.535 279.188 0 ---- 1 | | 2 ---- 3 ``` So it just orients them. Otherwise the following `contourArea` returns `0`." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/nms.cpp\n**Change Type:** modified\n**Context:** PR #11390: Deep learning text detection sample - ### This pullrequest changes Added sample with deep learning text detection using EAST: An Efficient ...\n\n**Code Changes:**\n```diff\n@@ -8,6 +8,8 @@\n #include \"precomp.hpp\"\n #include \"nms.inl.hpp\"\n \n+#include \n+\n namespace cv\n {\n namespace dnn\n@@ -28,6 +30,27 @@ void NMSBoxes(const std::vector& bboxes, const std::vector& scores,\n```", + "output": "@vpisarev, Done. `rotatedRectangleIntersection` returns an ordered set of points now." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/text_detection.cpp\n**Change Type:** added\n**Context:** PR #11390: Deep learning text detection sample - ### This pullrequest changes Added sample with deep learning text detection using EAST: An Efficient ...\n**Review Line:** 117\n\n**Code Changes:**\n```diff\n+ imshow(kWinName, frame);\n+ }\n+ return 0;\n+}\n+\n+void decode(const Mat& scores, const Mat& geometry, float scoreThresh,\n+ std::vector& detections, std::vector& confidences)\n+{\n+ detections.clear();\n+ CV_Assert(scores.dims == 4, geometry.dims == 4, scores.size[0] == 1,\n+ geometry.size[0] == 1, scores.size[1] == 1, geometry.size[1] == 5,\n```", + "output": "@vpisarev. The thing is that this decoding procedure is too specific for this network. For example, the x4 final feature map reduction compared to input images: ```cpp // Decode a prediction. // Multiple by 4 because feature maps are 4 time less than input image. float offsetX = x * 4.0f, offsetY = y * 4.0f; ... ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** cmake/OpenCVDetectInferenceEngine.cmake\n**Change Type:** modified\n**Context:** PR #11587: Update Intel's Inference Engine deep learning backend - ### This pullrequest changes * Update Intel's Inference Engine backend support ([O...\n\n**Code Changes:**\n```diff\n@@ -41,8 +41,7 @@ set(INF_ENGINE_INCLUDE_DIRS \"${INF_ENGINE_ROOT_DIR}/include\" CACHE PATH \"Path to\n \n if(NOT INF_ENGINE_ROOT_DIR\n OR NOT EXISTS \"${INF_ENGINE_ROOT_DIR}\"\n- OR NOT EXISTS \"${INF_ENGINE_INCLUDE_DIRS}\"\n- OR NOT EXISTS \"${INF_ENGINE_INCLUDE_DIRS}/inference_engine.hpp\"\n+ OR NOT EXISTS \"${INF_ENGINE_ROOT_DIR}/include/inference_engine.hpp\"\n )\n ie_fail()\n endif()\n```", + "output": "@mshabunin, Yes, we can emit it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_darknet_importer.cpp\n**Change Type:** modified\n**Context:** PR #11104: Reading Darknet from cv::FileStorage - ### This pullrequest changes I added new constructor for DarknetImporter to load from std::istream. ...\n\n**Code Changes:**\n```diff\n@@ -65,6 +65,34 @@ TEST(Test_Darknet, read_yolo_voc)\n ASSERT_FALSE(net.empty());\n }\n \n+TEST(Test_Darknet, read_yolo_voc_stream)\n+{\n+ Mat ref;\n+ Mat sample = imread(_tf(\"dog416.png\"));\n+ Mat inp = blobFromImage(sample, 1.0/255, Size(416, 416), Scalar(), true, false);\n+ const std::string cfgFile = findDataFile(\"dnn/yolo-voc.cfg\", false);\n```", + "output": "@asciian, please remind me why you decided to replace `std::istream` to `cv::FileNode`? The last one seems to be more complicated here." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_darknet_importer.cpp\n**Change Type:** modified\n**Context:** PR #11104: Reading Darknet from cv::FileStorage - ### This pullrequest changes I added new constructor for DarknetImporter to load from std::istream. ...\n\n**Code Changes:**\n```diff\n@@ -65,6 +65,34 @@ TEST(Test_Darknet, read_yolo_voc)\n ASSERT_FALSE(net.empty());\n }\n \n+TEST(Test_Darknet, read_yolo_voc_stream)\n+{\n+ Mat ref;\n+ Mat sample = imread(_tf(\"dog416.png\"));\n+ Mat inp = blobFromImage(sample, 1.0/255, Size(416, 416), Scalar(), true, false);\n+ const std::string cfgFile = findDataFile(\"dnn/yolo-voc.cfg\", false);\n```", + "output": "@dkurt Because `std::istream` couldn't be used as an argument of the exporting functions, and I knew no way to implement the topic of this issue without `cv::FileNode`. If you know one, please tell me." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/test/test_darknet_importer.cpp\n**Change Type:** modified\n**Context:** PR #11104: Reading Darknet from cv::FileStorage - ### This pullrequest changes I added new constructor for DarknetImporter to load from std::istream. ...\n\n**Code Changes:**\n```diff\n@@ -65,6 +65,34 @@ TEST(Test_Darknet, read_yolo_voc)\n ASSERT_FALSE(net.empty());\n }\n \n+TEST(Test_Darknet, read_yolo_voc_stream)\n+{\n+ Mat ref;\n+ Mat sample = imread(_tf(\"dog416.png\"));\n+ Mat inp = blobFromImage(sample, 1.0/255, Size(416, 416), Scalar(), true, false);\n+ const std::string cfgFile = findDataFile(\"dnn/yolo-voc.cfg\", false);\n```", + "output": "@dkurt Ah, I know another way. With a raw string, we can implement that. But that way requires adding two arguments to the function to tell the length of the content." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in test coverage and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/misc/java/test/DnnTensorFlowTest.java\n**Change Type:** modified\n**Context:** PR #11104: Reading Darknet from cv::FileStorage - ### This pullrequest changes I added new constructor for DarknetImporter to load from std::istream. ...\n\n**Code Changes:**\n```diff\n@@ -1,10 +1,14 @@\n package org.opencv.test.dnn;\n \n import java.io.File;\n+import java.io.FileInputStream;\n+import java.io.IOException;\n import java.util.ArrayList;\n import java.util.List;\n import org.opencv.core.Core;\n import org.opencv.core.Mat;\n```", + "output": "Thank you! Fixed that." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/objdetect/test/test_qrcode.cpp\n**Change Type:** modified\n**Context:** PR #11921: Detect QRCode (Update) - Along with this algorithm there are related data in opencv_extra: PR link: [https://github.com/opencv/opencv_extra...\n\n**Code Changes:**\n```diff\n@@ -1,74 +1,115 @@\n-/*M///////////////////////////////////////////////////////////////////////////////////////\n-//\n-// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.\n-//\n-// By downloading, copying, installing or using the software you agree to this license.\n-// If you do not agree to this license, do not download, install,\n-// copy or use the software.\n-//\n-//\n```", + "output": "@alalek @vpisarev done." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/ocl4dnn/src/ocl4dnn_conv_spatial.cpp\n**Change Type:** modified\n**Context:** PR #11762: update convolution opencl kernels in dnn module - update convolution kernel cache config, cleanup the idlf kernel and extend tuning paramet...\n\n**Code Changes:**\n```diff\n@@ -55,6 +55,7 @@\n #include \"../include/math_functions.hpp\"\n #include \"../include/default_kernel_config.hpp\"\n #include \"opencv2/dnn/shape_utils.hpp\"\n+#include \"opencv2/core/utils/logger.hpp\"\n \n #if defined WIN32 || defined _WIN32\n #include \n@@ -87,10 +88,13 @@ static void initializeGlobalBuiltinConfigurations(const std::string& cache_path)\n {\n```", + "output": "@alalek, PR is rebased on the latest 3.4 branch." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/all_layers.hpp\n**Change Type:** modified\n**Context:** PR #11050: Added ResizeBilinear op for tf - ### This pullrequest changes Added ResizeBilinear op for tensorflow importer.\n\n**Code Changes:**\n```diff\n@@ -565,14 +565,14 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n };\n \n /**\n- * @brief Resize input 4-dimensional blob by nearest neighbor strategy.\n+ * @brief Resize input 4-dimensional blob by nearest neighbor or bilinear strategy.\n *\n- * Layer is used to support TensorFlow's resize_nearest_neighbor op.\n+ * Layer is used to support TensorFlow's resize_nearest_neighbor and resize_bilinear ops.\n */\n```", + "output": "@dmonterom, what do you think about renaming `ResizeNearestNeighborLayer` to `ResizeLayer` and using the interpolation strategy as an additional flag? 
Please add a test similar to the nearest neighbor layer https://github.com/opencv/opencv/blob/5b868ccd829975da5372bf330994553e176aee09/modules/dnn/test/test_tf_importer.cpp#L325-L328 using the script from opencv/opencv_extra: https://github.com/opencv/opencv_extra/blob/b467369489529d8eb2e847c13b117ec4e253a638/testdata/dnn/tensorflow/generate_tf_models.py#L258-L262" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/all_layers.hpp\n**Change Type:** modified\n**Context:** PR #11050: Added ResizeBilinear op for tf - ### This pullrequest changes Added ResizeBilinear op for tensorflow importer.\n\n**Code Changes:**\n```diff\n@@ -565,14 +565,14 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n };\n \n /**\n- * @brief Resize input 4-dimensional blob by nearest neighbor strategy.\n+ * @brief Resize input 4-dimensional blob by nearest neighbor or bilinear strategy.\n *\n- * Layer is used to support TensorFlow's resize_nearest_neighbor op.\n+ * Layer is used to support TensorFlow's resize_nearest_neighbor and resize_bilinear ops.\n */\n```", + "output": "Hi @dkurt, that's a good idea, I will change it as soon as possible. I will also add the new test soon." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/all_layers.hpp\n**Change Type:** modified\n**Context:** PR #11050: Added ResizeBilinear op for tf - ### This pullrequest changes Added ResizeBilinear op for tensorflow importer.\n\n**Code Changes:**\n```diff\n@@ -565,14 +565,14 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n };\n \n /**\n- * @brief Resize input 4-dimensional blob by nearest neighbor strategy.\n+ * @brief Resize input 4-dimensional blob by nearest neighbor or bilinear strategy.\n *\n- * Layer is used to support TensorFlow's resize_nearest_neighbor op.\n+ * Layer is used to support TensorFlow's resize_nearest_neighbor and resize_bilinear ops.\n */\n```", + "output": "@dmonterom, thank you! It can also help us to resolve an issue https://github.com/opencv/opencv/issues/10985 with the `Interp` layer from some of Caffe's forks more smoothly." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/all_layers.hpp\n**Change Type:** modified\n**Context:** PR #11050: Added ResizeBilinear op for tf - ### This pullrequest changes Added ResizeBilinear op for tensorflow importer.\n\n**Code Changes:**\n```diff\n@@ -565,14 +565,14 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n };\n \n /**\n- * @brief Resize input 4-dimensional blob by nearest neighbor strategy.\n+ * @brief Resize input 4-dimensional blob by nearest neighbor or bilinear strategy.\n *\n- * Layer is used to support TensorFlow's resize_nearest_neighbor op.\n+ * Layer is used to support TensorFlow's resize_nearest_neighbor and resize_bilinear ops.\n */\n```", + "output": "Hey @dkurt, I made the changes (I also tested the two layers and everything seems ok). If you agree with them I will replace the resize nearest neighbor test with a new one." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/all_layers.hpp\n**Change Type:** modified\n**Context:** PR #11050: Added ResizeBilinear op for tf - ### This pullrequest changes Added ResizeBilinear op for tensorflow importer.\n\n**Code Changes:**\n```diff\n@@ -565,14 +565,14 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n };\n \n /**\n- * @brief Resize input 4-dimensional blob by nearest neighbor strategy.\n+ * @brief Resize input 4-dimensional blob by nearest neighbor or bilinear strategy.\n *\n- * Layer is used to support TensorFlow's resize_nearest_neighbor op.\n+ * Layer is used to support TensorFlow's resize_nearest_neighbor and resize_bilinear ops.\n */\n```", + "output": "@dmonterom please add `*_net`, `*_in` and `*_out` files generated by script as a PR to opencv_extra with modified version of the script." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/resize_layer.cpp\n**Change Type:** added\n**Context:** PR #11050: Added ResizeBilinear op for tf - ### This pullrequest changes Added ResizeBilinear op for tensorflow importer.\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,176 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+// Copyright (C) 2017, Intel Corporation, all rights reserved.\n+// Third party copyrights are property of their respective owners.\n+#include \"../precomp.hpp\"\n+#include \"layers_common.hpp\"\n+#include \"../op_inf_engine.hpp\"\n```", + "output": "Thanks for a review! Replaced it." 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/resize_layer.cpp\n**Change Type:** added\n**Context:** PR #11050: Added ResizeBilinear op for tf - ### This pullrequest changes Added ResizeBilinear op for tensorflow importer.\n**Review Line:** 131\n\n**Code Changes:**\n```diff\n+ for (int c = 0; c < numPlanes; ++c)\n+ {\n+ *outData = inpData_row0_c[x0] +\n+ (input_y - y0) * (inpData_row1_c[x0] - inpData_row0_c[x0]) +\n+ (input_x - x0) * (inpData_row0_c[x1] - inpData_row0_c[x0] +\n+ (input_y - y0) * (inpData_row1_c[x1] - inpData_row0_c[x1] - inpData_row1_c[x0] + inpData_row0_c[x0]));\n+\n+ inpData_row0_c += inpSpatialSize;\n+ inpData_row1_c += inpSpatialSize;\n+ outData += outSpatialSize;\n+ }\n```", + "output": "@alalek, There are several aspects in which TensorFlow's resize_bilinear doesn't match OpenCV's one. Replaced the constants with string values." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/resize_layer.cpp\n**Change Type:** added\n**Context:** PR #11050: Added ResizeBilinear op for tf - ### This pullrequest changes Added ResizeBilinear op for tensorflow importer.\n**Review Line:** 131\n\n**Code Changes:**\n```diff\n+ for (int c = 0; c < numPlanes; ++c)\n+ {\n+ *outData = inpData_row0_c[x0] +\n+ (input_y - y0) * (inpData_row1_c[x0] - inpData_row0_c[x0]) +\n+ (input_x - x0) * (inpData_row0_c[x1] - inpData_row0_c[x0] +\n+ (input_y - y0) * (inpData_row1_c[x1] - inpData_row0_c[x1] - inpData_row1_c[x0] + inpData_row0_c[x0]));\n+\n+ inpData_row0_c += inpSpatialSize;\n+ inpData_row1_c += inpSpatialSize;\n+ outData += outSpatialSize;\n+ }\n```", + "output": "@alalek, if we extend cv::resize in the master branch and add the optional floating-point shift, will that make cv::resize suitable for implementation of this layer? alternatively, can we use cv::warpAffine with the transformation matrix `[fx, 0, shift_x; 0, fy, shift_y]` to emulate the proper resize + shift?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/resize_layer.cpp\n**Change Type:** added\n**Context:** PR #11050: Added ResizeBilinear op for tf - ### This pullrequest changes Added ResizeBilinear op for tensorflow importer.\n**Review Line:** 131\n\n**Code Changes:**\n```diff\n+ for (int c = 0; c < numPlanes; ++c)\n+ {\n+ *outData = inpData_row0_c[x0] +\n+ (input_y - y0) * (inpData_row1_c[x0] - inpData_row0_c[x0]) +\n+ (input_x - x0) * (inpData_row0_c[x1] - inpData_row0_c[x0] +\n+ (input_y - y0) * (inpData_row1_c[x1] - inpData_row0_c[x1] - inpData_row1_c[x0] + inpData_row0_c[x0]));\n+\n+ inpData_row0_c += inpSpatialSize;\n+ inpData_row1_c += inpSpatialSize;\n+ outData += outSpatialSize;\n+ }\n```", + "output": "@alalek, it means that cv::warpAffine can do that?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgcodecs/src/loadsave.cpp\n**Change Type:** modified\n**Context:** PR #11417: imgcodecs cmake: the option to customize supported formats list - (WITH_HDR, WITH_SUNRASTER, WITH_PXM) There were (and may be still exist) ...\n\n**Code Changes:**\n```diff\n@@ -131,8 +131,10 @@ struct ImageCodecInitializer\n decoders.push_back( makePtr() );\n encoders.push_back( makePtr() );\n \n+ #ifdef HAVE_IMGCODEC_HDR\n decoders.push_back( makePtr() );\n encoders.push_back( makePtr() );\n+ #endif\n #ifdef HAVE_JPEG\n decoders.push_back( makePtr() );\n```", + "output": "I beg your pardon. (from what I see - was fixed already, thanks!)" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/gpu/src/pyrlk.cpp\n**Change Type:** modified\n**Context:** PR #11060: Thread-safe version of sparse function in cv::gpu::PyrLKOpticalFlow - Issue: The function sparse of cv::gpu::PyrLKOpticalFlow class isn't t...\n\n**Code Changes:**\n```diff\n@@ -42,6 +42,11 @@\n \n #include \"precomp.hpp\"\n \n+#ifdef HAVE_TBB\n+#include \n+#include \n+#endif\n+\n using namespace std;\n```", + "output": "I understand, so I can use TBB (tbb::mutex and condition_variable) and make the thread-safe version compilable only if there is TBB support (\"#ifdef HAVE_TBB\"). What do you think? Thank you for your help" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/gstreamer_pipeline.cpp\n**Change Type:** modified\n**Context:** PR #11482: Remove command line argument related to container - ``` docker_image:Linux x64 Debug=gstreamer:16.04 buildworker:Win64 OpenCL=windows-2 doc...\n\n**Code Changes:**\n```diff\n@@ -18,7 +18,6 @@ class GStreamerPipeline\n \"{h help usage ? | | print help messages }\"\n \"{m mode | | coding mode (supported: encode, decode) }\"\n \"{p pipeline |default | pipeline name (supported: 'default', 'gst-basic', 'gst-vaapi', 'gst-libav', 'ffmpeg') }\"\n- \"{ct container |mp4 | container name (supported: 'mp4', 'mov', 'avi', 'mkv') }\"\n \"{cd codec |h264 | codec name (supported: 'h264', 'h265', 'mpeg2', 'mpeg4', 'mjpeg', 'vp8') }\"\n \"{f file path | | path to file }\"\n \"{vr resolution |720p | video resolution for encoding (supported: '720p', '1080p', '4k') }\"\n@@ -30,32 +29,36 @@ class GStreamerPipeline\n if (cmd_parser->has(\"help\"))\n```", + "output": "@alalek corrected." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/gstreamer_pipeline.cpp\n**Change Type:** modified\n**Context:** PR #11482: Remove command line argument related to container - ``` docker_image:Linux x64 Debug=gstreamer:16.04 buildworker:Win64 OpenCL=windows-2 doc...\n\n**Code Changes:**\n```diff\n@@ -18,7 +18,6 @@ class GStreamerPipeline\n \"{h help usage ? | | print help messages }\"\n \"{m mode | | coding mode (supported: encode, decode) }\"\n \"{p pipeline |default | pipeline name (supported: 'default', 'gst-basic', 'gst-vaapi', 'gst-libav', 'ffmpeg') }\"\n- \"{ct container |mp4 | container name (supported: 'mp4', 'mov', 'avi', 'mkv') }\"\n \"{cd codec |h264 | codec name (supported: 'h264', 'h265', 'mpeg2', 'mpeg4', 'mjpeg', 'vp8') }\"\n \"{f file path | | path to file }\"\n \"{vr resolution |720p | video resolution for encoding (supported: '720p', '1080p', '4k') }\"\n@@ -30,32 +29,36 @@ class GStreamerPipeline\n if (cmd_parser->has(\"help\"))\n```", + "output": "@alalek replaced exit_code with std::exception" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/gstreamer_pipeline.cpp\n**Change Type:** modified\n**Context:** PR #11482: Remove command line argument related to container - ``` docker_image:Linux x64 Debug=gstreamer:16.04 buildworker:Win64 OpenCL=windows-2 doc...\n\n**Code Changes:**\n```diff\n@@ -18,7 +18,6 @@ class GStreamerPipeline\n \"{h help usage ? 
| | print help messages }\"\n \"{m mode | | coding mode (supported: encode, decode) }\"\n \"{p pipeline |default | pipeline name (supported: 'default', 'gst-basic', 'gst-vaapi', 'gst-libav', 'ffmpeg') }\"\n- \"{ct container |mp4 | container name (supported: 'mp4', 'mov', 'avi', 'mkv') }\"\n \"{cd codec |h264 | codec name (supported: 'h264', 'h265', 'mpeg2', 'mpeg4', 'mjpeg', 'vp8') }\"\n \"{f file path | | path to file }\"\n \"{vr resolution |720p | video resolution for encoding (supported: '720p', '1080p', '4k') }\"\n@@ -30,32 +29,36 @@ class GStreamerPipeline\n if (cmd_parser->has(\"help\"))\n```", + "output": "@alalek replaced" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/core/include/opencv2/core/check.hpp\n**Change Type:** added\n**Context:** PR #10978: core: add \"check\" macros - Proposal basics only without massive code changes.
Proposal details * Name: C...\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,135 @@\n+// This file is part of OpenCV project.\n+// It is subject to the license terms in the LICENSE file found in the top-level directory\n+// of this distribution and at http://opencv.org/license.html.\n+\n+#ifndef OPENCV_CORE_CHECK_HPP\n+#define OPENCV_CORE_CHECK_HPP\n+\n+#include \n+\n```", + "output": "Thanks! Fixed. Tests with integers are added." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/ml/include/opencv2/ml.hpp\n**Change Type:** modified\n**Context:** PR #10213: Simulated Annealing for ANN_MLP training method - **Merge with extra**: https://github.com/opencv/opencv_extra/pull/412 Add a trainning met...\n\n**Code Changes:**\n```diff\n@@ -1406,13 +1406,14 @@ class CV_EXPORTS_W ANN_MLP : public StatModel\n /** Available training methods */\n enum TrainingMethods {\n BACKPROP=0, //!< The back-propagation algorithm.\n- RPROP=1 //!< The RPROP algorithm. See @cite RPROP93 for details.\n+ RPROP = 1, //!< The RPROP algorithm. See @cite RPROP93 for details.\n+ ANNEAL = 2 //!< The simulated annealing algorithm. 
See @cite Kirkpatrick83 for details.\n };\n \n /** Sets training method and common parameters.\n```", + "output": "In ann_mlp.cpp I insert: ``` class ANN_MLP_ANNEALImpl : public ANN_MLP_ANNEAL { public: ANN_MLP_ANNEALImpl() { clear(); setActivationFunction(SIGMOID_SYM, 0, 0); setLayerSizes(Mat()); setTrainMethod(ANN_MLP::ANNEAL, 0.1, FLT_EPSILON, 0, 0); } virtual ~ANN_MLP_ANNEALImpl() {} CV_IMPL_PROPERTY(double, AnnealInitialT, params.initialT) CV_IMPL_PROPERTY(double, AnnealFinalT, params.finalT) CV_IMPL_PROPERTY(double, AnnealCoolingRatio, params.coolingRatio) CV_IMPL_PROPERTY(int, AnnealItePerStep, params.itePerStep) AnnParams params; }; Ptr ANN_MLP_ANNEAL::create() { return makePtr(); <---- ERROR 2259 } ``` >G:\\Lib\\opencv\\modules\\core\\include\\opencv2/core/ptr.inl.hpp(301): error C2259: 'cv::ml::ANN_MLP_ANNEALImpl' : class' : cannot instantiate abstract class see G:\\Lib\\opencv\\modules\\core\\include\\opencv2/core/ptr.inl.hpp(301): note: 'int cv::ml::StatModel::getVarCount(void) const' : is abstract and all methods are enumerated My anneal class is ``` class CV_EXPORTS_W ANN_MLP_ANNEAL : public ANN_MLP { public: /** ANNEAL: Update initial temperature. It must be \\>=0. Default value is 10.*/ /** @see setAnnealInitialT */ CV_WRAP virtual double getAnnealInitialT() const; /** @copybrief getAnnealInitialT @see getAnnealInitialT */ CV_WRAP virtual void setAnnealInitialT(double val) const; /** ANNEAL: Update final temperature. It must be \\>=0 and less than initialT. Default value is 0.1.*/ /** @see setAnnealFinalT */ CV_WRAP virtual double getAnnealFinalT() const; /** @copybrief getAnnealFinalT @see getAnnealFinalT */ CV_WRAP virtual void setAnnealFinalT(double val) ; /** ANNEAL: Update cooling ratio. It must be \\>0 and less than 1. 
Default value is 0.95.*/ /** @see setAnnealCoolingRatio */ CV_WRAP virtual double getAnnealCoolingRatio() const ; /** @copybrief getAnnealCoolingRatio @see getAnnealCoolingRatio */ CV_WRAP virtual void setAnnealCoolingRatio(double val) const; /** ANNEAL: Update iteration per step. It must be \\>0 . Default value is 10.*/ /** @see setAnnealItePerStep */ CV_WRAP virtual int getAnnealItePerStep() const; /** @copybrief getAnnealItePerStep @see getAnnealItePerStep */ CV_WRAP virtual void setAnnealItePerStep(int val) const; /** @brief Creates empty model Use StatModel::train to train the model, Algorithm::load\\(filename) to load the pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags. */ CV_WRAP static Ptr create(); /** @brief Loads and creates a serialized ANN from a file * * Use ANN::save to serialize and store an ANN to disk. * Load the ANN from this file again, by calling this function with the path to the file. * * @param filepath path to serialized ANN */ CV_WRAP static Ptr load(const String& filepath); .... ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/photo/src/contrast_preserve.hpp\n**Change Type:** modified\n**Context:** PR #10997: Optimize decolor in photo - ### This pullrequest changes - Replaced `vector.push_back` into `vector.resize()` and `vector[i] = foo;` to imp...\n\n**Code Changes:**\n```diff\n@@ -58,19 +58,19 @@ class Decolor\n public:\n float sigma;\n void init();\n- vector product(vector < vector > &comb, vector &initRGB);\n+ vector product(vector &comb, const double initRGB[3]);\n double energyCalcu(vector &Cg, vector < vector > &polyGrad, vector &wei);\n void singleChannelGradx(const Mat &img, Mat& dest);\n void singleChannelGrady(const Mat &img, Mat& dest);\n void gradvector(const Mat &img, vector &grad);\n```", + "output": "Thanks! 
I will try this change in the next refactoring patch." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/elementwise_layers.cpp\n**Change Type:** modified\n**Context:** PR #9517: Layers for MobileNet from TensorFlow - ### This pullrequest changes resolves https://github.com/opencv/opencv/issues/9462 (waiting for feed...\n\n**Code Changes:**\n```diff\n@@ -248,6 +248,62 @@ struct ReLUFunctor\n int64 getFLOPSPerElement() const { return 1; }\n };\n \n+struct ReLU6Functor\n+{\n+ typedef ReLU6Layer Layer;\n+ float minValue, maxValue;\n+\n+ ReLU6Functor(float minValue_ = 0.0f, float maxValue_ = 6.0f)\n```", + "output": "Sure, thank you!" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/imgproc/include/opencv2/imgproc.hpp\n**Change Type:** modified\n**Context:** PR #10575: Added getFontScaleFromHeight() - Adding a function to compute font scale for a desired font height in pixels\n**Review Line:** 4734\n\n**Code Changes:**\n```diff\n@@ -4730,6 +4730,20 @@ CV_EXPORTS_W Size getTextSize(const String& text, int fontFace,\n double fontScale, int thickness,\n CV_OUT int* baseLine);\n \n+\n+/** @brief Calculates the font-specific size to use to achieve a given height in pixels.\n+\n+@param fontFace Font to use, see cv::HersheyFonts.\n+@param pixelHeight Pixel height to compute the fontScale for\n+@param thickness Thickness of lines used to render the text.See putText for details.\n+@return The fontSize to use for cv::putText\n```", + "output": "Thanks for reviewing. Yes, I can. Should I push to this branch or open a new one as it is already merged?" 
+ }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/prior_box_layer.cpp\n**Change Type:** modified\n**Context:** PR #10874: Use only absolute prior boxes explicit sizes. Remove scales attributes. - ### This pullrequest changes **Merge with extra**: https://githu...\n\n**Code Changes:**\n```diff\n@@ -179,44 +179,62 @@ class PriorBoxLayerImpl : public PriorBoxLayer\n }\n \n PriorBoxLayerImpl(const LayerParams ¶ms)\n- : _boxWidth(0), _boxHeight(0)\n {\n setParamsFrom(params);\n _minSize = getParameter(params, \"min_size\", 0, false, 0);\n _flip = getParameter(params, \"flip\", 0, false, true);\n _clip = getParameter(params, \"clip\", 0, false, true);\n```", + "output": "@alalek, thanks! I'll manage it." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/dnn.cpp\n**Change Type:** modified\n**Context:** PR #10607: dnn : Added an imagesFromBlob method to the dnn module - Added an std::vector < cv::Mat > imagesFromBlob( cv::Mat blob ) method to the dn...\n\n**Code Changes:**\n```diff\n@@ -192,6 +192,31 @@ void blobFromImages(InputArrayOfArrays images_, OutputArray blob_, double scalef\n }\n }\n \n+void imagesFromBlob(const cv::Mat& blob_, OutputArrayOfArrays images_)\n+{\n+ CV_TRACE_FUNCTION();\n+\n+ //A blob is a 4 dimensional matrix in floating point precision\n+ //blob_[0] = batchSize = nbOfImages\n```", + "output": "@Pandinosaurus, Thanks, I see. You're completely right. 
We need to use `getPlane`: ```cpp std::vector blobSize(4); blobSize[0] = 2; // batch size blobSize[1] = 3; // channels blobSize[2] = 4; // height blobSize[3] = 5; // width Mat blob(blobSize, CV_32F); std::vector imgs(blob.size[0]); for (int n = 0; n < blob.size[0]; ++n) { std::vector channels(blob.size[1]); for (int c = 0; c < blob.size[1]; ++c) { channels[c] = getPlane(blob, n, c); } cv::merge(channels, imgs[n]); } ```" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #10607: dnn : Added an imagesFromBlob method to the dnn module - Added an std::vector < cv::Mat > imagesFromBlob( cv::Mat blob ) method to the dn...\n\n**Code Changes:**\n```diff\n@@ -735,6 +735,15 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n double scalefactor=1.0, Size size = Size(),\n const Scalar& mean = Scalar(), bool swapRB=true, bool crop=true);\n \n+ /** @brief Parse a 4D blob and output the images it contains as 2D arrays through a simpler data structure\n+ * (std::vector).\n+ * @param[in] blob_ 4 dimensional array (images, channels, height, width) in floating point precision (CV_32F) from\n+ * which you would like to extract the images.\n+ * @param[out] images_ array of 2D Mat containing the images extracted from the blob in floating point precision\n+ * (CV_32F). They are non normalized neither mean added. The number of returned images equals the first dimension\n```", + "output": "Please keep things simpler. There are extra information about `cv::split`. I think it's enough to say that this function returns number of images equals first dimension of blob (it's convenient name is a batch size). An every image has number of channels equals to the second dimension of blob (we usually call it channels/features/depth dimension). 
All the information about output could be placed at `@return` tag but not into `@brief` one." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/features2d/src/fast.cpp\n**Change Type:** modified\n**Context:** PR #10362: add HAL for FAST - ### This pullrequest changes Add HAL interface for FAST algorithm for corner detection. Call only for FAST_9_16 and thr...\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@ The references are:\n #include \"fast.hpp\"\n #include \"fast_score.hpp\"\n #include \"opencl_kernels_features2d.hpp\"\n+#include \"hal_replacement.hpp\"\n #include \"opencv2/core/hal/intrin.hpp\"\n \n #include \"opencv2/core/openvx/ovx_defs.hpp\"\n@@ -414,6 +415,62 @@ static bool openvx_FAST(InputArray _img, std::vector& keypoints,\n \n```", + "output": "@mshabunin, maybe we need to add `FAST_5_8` and `FAST_7_12` or FAST with parameter `type`, and check it inside HAL. What do you think?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/features2d/src/fast.cpp\n**Change Type:** modified\n**Context:** PR #10362: add HAL for FAST - ### This pullrequest changes Add HAL interface for FAST algorithm for corner detection. 
Call only for FAST_9_16 and thr...\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@ The references are:\n #include \"fast.hpp\"\n #include \"fast_score.hpp\"\n #include \"opencl_kernels_features2d.hpp\"\n+#include \"hal_replacement.hpp\"\n #include \"opencv2/core/hal/intrin.hpp\"\n \n #include \"opencv2/core/openvx/ovx_defs.hpp\"\n@@ -414,6 +415,62 @@ static bool openvx_FAST(InputArray _img, std::vector& keypoints,\n \n```", + "output": "@ElenaGvozdeva , yes it will be better to define available types in `features2d/hal/interface.h` file (see [imgproc/hal/interface.h](https://github.com/opencv/opencv/blob/master/modules/imgproc/include/opencv2/imgproc/hal/interface.h) for example)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in API design and compatibility. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/features2d/src/fast.cpp\n**Change Type:** modified\n**Context:** PR #10362: add HAL for FAST - ### This pullrequest changes Add HAL interface for FAST algorithm for corner detection. Call only for FAST_9_16 and thr...\n\n**Code Changes:**\n```diff\n@@ -45,6 +45,7 @@ The references are:\n #include \"fast.hpp\"\n #include \"fast_score.hpp\"\n #include \"opencl_kernels_features2d.hpp\"\n+#include \"hal_replacement.hpp\"\n #include \"opencv2/core/hal/intrin.hpp\"\n \n #include \"opencv2/core/openvx/ovx_defs.hpp\"\n@@ -414,6 +415,62 @@ static bool openvx_FAST(InputArray _img, std::vector& keypoints,\n \n```", + "output": "No, we can only do something like this: * add one more interface for FAST ```cpp inline int hal_ni_FAST(const uchar* src_data, size_t src_step, int width, int height, uchar* keypoints_data, size_t* keypoints_size, int threshold, bool nonmax_suppression, int type) { return CV_HAL_ERROR_NOT_IMPLEMENTED; } //! @cond IGNORED #define cv_hal_FAST hal_ni_FAST //! 
@endcond ``` * try to call this interface one by one ```cpp cv::Mat img = _img.getMat(); CALL_HAL(fast_dense, hal::FAST, img, keypoints, threshold, nonmax_suppression, type); size_t keypoints_size; CALL_HAL(fast, cv_hal_FAST, img.data, img.step, img.cols, img.rows, (uchar*)(keypoints[0]), &keypoints_size, threshold, nonmax_suppression, type); ``` @vpisarev, if it isn't acceptable, could we discuss it one more time?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/recurrent_layers.cpp\n**Change Type:** modified\n**Context:** PR #9491: LSTM layer for TensorFlow importer - ### This pullrequest changes LSTM layer from TensorFlow. Slice layer extension resolves https://githu...\n**Review Line:** 124\n\n**Code Changes:**\n```diff\n+ if (blobs.size() > 3)\n+ {\n+ CV_Assert(blobs.size() == 6);\n+ for (int i = 3; i < 6; ++i)\n+ {\n+ CV_Assert(blobs[i].rows == Wh.cols && blobs[i].cols == Wh.cols);\n+ CV_Assert(blobs[i].type() == bias.type());\n+ }\n+ }\n+ }\n+ useTimestampDim = params.get(\"use_timestamp_dim\", true);\n```", + "output": "@dkurt , shouldn't it be `blobs[i].rows == Wh.rows && ...`?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in memory management. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/recurrent_layers.cpp\n**Change Type:** modified\n**Context:** PR #9491: LSTM layer for TensorFlow importer - ### This pullrequest changes LSTM layer from TensorFlow. 
Slice layer extension resolves https://githu...\n**Review Line:** 124\n\n**Code Changes:**\n```diff\n+ if (blobs.size() > 3)\n+ {\n+ CV_Assert(blobs.size() == 6);\n+ for (int i = 3; i < 6; ++i)\n+ {\n+ CV_Assert(blobs[i].rows == Wh.cols && blobs[i].cols == Wh.cols);\n+ CV_Assert(blobs[i].type() == bias.type());\n+ }\n+ }\n+ }\n+ useTimestampDim = params.get(\"use_timestamp_dim\", true);\n```", + "output": "@mshabunin, looks like these blobs are squad. Actually, they are diagonal matrices: https://github.com/opencv/opencv/pull/9491/files#diff-c6e561028becbce9ec212c49506ce60dR1150. Are there any memory problems?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in testing and validation. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/src/layers/recurrent_layers.cpp\n**Change Type:** modified\n**Context:** PR #9491: LSTM layer for TensorFlow importer - ### This pullrequest changes LSTM layer from TensorFlow. Slice layer extension resolves https://githu...\n**Review Line:** 124\n\n**Code Changes:**\n```diff\n+ if (blobs.size() > 3)\n+ {\n+ CV_Assert(blobs.size() == 6);\n+ for (int i = 3; i < 6; ++i)\n+ {\n+ CV_Assert(blobs[i].rows == Wh.cols && blobs[i].cols == Wh.cols);\n+ CV_Assert(blobs[i].type() == bias.type());\n+ }\n+ }\n+ }\n+ useTimestampDim = params.get(\"use_timestamp_dim\", true);\n```", + "output": "Just a warning from the static analyzer: ``` >>> CID 1420444: Incorrect expression (COPY_PASTE_ERROR) >>> \"cols\" in \"this->blobs[i].rows == Wh.cols\" looks like a copy-paste error. 124 CV_Assert(blobs[i].rows == Wh.cols && blobs[i].cols == Wh.cols); ``` Probably it can be suppressed by rewriting the condition: `CV_Assert(blobs[i].rows == blobs[i].cols && blobs[i].cols == Wh.cols);` I will look into it, thank you." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in implementation quality. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/cpp/tutorial_code/features2D/Homography/decompose_homography.cpp\n**Change Type:** renamed\n**Context:** PR #10333: Update samples - resolves #10332 some small improvements on samples\n**Review Line:** 170\n\n**Code Changes:**\n```diff\n int main(int argc, char *argv[])\n {\n CommandLineParser parser(argc, argv, params);\n \n- if (parser.get(\"help\"))\n+ if ( parser.has(\"help\") )\n {\n- cout << about << endl;\n+ parser.about( \"Code for homography tutorial.\\n\"\n+ \"Example 4: decompose the homography matrix.\\n\" );\n parser.printMessage();\n```", + "output": "@sturkmen72 Maybe for consistency you can also update the other tutorial codes in the homography folder with `parser.has(\"help\") [...]`?" + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #10331: Changed wrapping mode for dnn::Net::forward - ### This pullrequest changes Changed wrapping mode for cv::dnn::Net::forward to avoid confli...\n\n**Code Changes:**\n```diff\n@@ -423,8 +423,8 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n * @param outputBlobs contains all output blobs for each layer specified in @p outBlobNames.\n * @param outBlobNames names for layers which outputs are needed to get\n */\n- void forward(std::vector >& outputBlobs,\n- const std::vector& outBlobNames);\n+ CV_WRAP_AS(forwardAndRetrieve) void forward(CV_OUT std::vector >& outputBlobs,\n+ const std::vector& outBlobNames);\n \n //TODO:\n```", + "output": "@arrybn, thank you! 
Python bindings generator can do it by `OutputArrayOfArrays`: https://docs.opencv.org/master/da/d49/tutorial_py_bindings_basics.html > Header parser can understand the input and output arguments from keywords like InputArray, OutputArray etc. But sometimes, we may need to hardcode inputs and outputs. For that, macros like CV_OUT, CV_IN_OUT etc. are used." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in header file design and interface. Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #10331: Changed wrapping mode for dnn::Net::forward - ### This pullrequest changes Changed wrapping mode for cv::dnn::Net::forward to avoid confli...\n\n**Code Changes:**\n```diff\n@@ -423,8 +423,8 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n * @param outputBlobs contains all output blobs for each layer specified in @p outBlobNames.\n * @param outBlobNames names for layers which outputs are needed to get\n */\n- void forward(std::vector >& outputBlobs,\n- const std::vector& outBlobNames);\n+ CV_WRAP_AS(forwardAndRetrieve) void forward(CV_OUT std::vector >& outputBlobs,\n+ const std::vector& outBlobNames);\n \n //TODO:\n```", + "output": "@arrybn Hi! Perhaps there is missing `CV_WRAP` keyword." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in documentation and code clarity. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** modules/dnn/include/opencv2/dnn/dnn.hpp\n**Change Type:** modified\n**Context:** PR #10331: Changed wrapping mode for dnn::Net::forward - ### This pullrequest changes Changed wrapping mode for cv::dnn::Net::forward to avoid confli...\n\n**Code Changes:**\n```diff\n@@ -423,8 +423,8 @@ CV__DNN_EXPERIMENTAL_NS_BEGIN\n * @param outputBlobs contains all output blobs for each layer specified in @p outBlobNames.\n * @param outBlobNames names for layers which outputs are needed to get\n */\n- void forward(std::vector >& outputBlobs,\n- const std::vector& outBlobNames);\n+ CV_WRAP_AS(forwardAndRetrieve) void forward(CV_OUT std::vector >& outputBlobs,\n+ const std::vector& outBlobNames);\n \n //TODO:\n```", + "output": "Agree with @dkurt . CV_OUT is not necessary here. Perhaps there is some bug/issue with bindings generator itself. Looks like nothing was changed by this patch in Python signatures: - [master](https://docs.opencv.org/master/db/d30/classcv_1_1dnn_1_1Net.html#a8d248f50efec70dffa785de18a4b2170) - [patch](http://pullrequest.opencv.org/buildbot/export/pr/10331/docs/db/d30/classcv_1_1dnn_1_1Net.html#a8d248f50efec70dffa785de18a4b2170) Workaround is to pass None: ``` outputBlobs = new.forward(outputBlobs=None, outputName='name') ``` BTW, There is this method: `CV_WRAP Mat forward(const String& outputName = String());`. Python binding of this method has strong conflict with method above (can't be resolved properly without renaming like `CV_WRAP_AS`)." + }, + { + "instruction": "You are an expert OpenCV code reviewer specializing in code quality and best practices. 
Review the provided code changes and provide specific, actionable feedback.", + "input": "**File:** samples/dnn/js_face_recognition.html\n**Change Type:** added\n**Context:** PR #10215: JavaScript bindings for dnn module - ``` force_builders=Custom docker_image:Docs=docs-js docker_image:Custom=javascript ```\n\n**Code Changes:**\n```diff\n@@ -0,0 +1,205 @@\n+\n+\n+\n+\n+\n+ \n+ \n+\n+\n+ \n+\n+\n+ \n+\n+