cve_id: stringclasses, 265 values
cve_published: timestamp[ns]
cve_descriptions: stringclasses, 265 values
cve_metrics: dict
cve_references: listlengths, 1 to 27
cve_configurations: listlengths, 1 to 3
url: stringclasses, 266 values
cve_tags: listlengths, 1 to 4
domain: stringclasses, 1 value
issue_owner_repo: listlengths, 2 to 2
issue_body: stringlengths, 2 to 8.04k
issue_title: stringlengths, 3 to 346
issue_comments_url: stringlengths, 59 to 81
issue_comments_count: int64, 0 to 66
issue_created_at: timestamp[ns]
issue_updated_at: stringlengths, 20 to 20
issue_html_url: stringlengths, 40 to 62
issue_github_id: int64, 1.01B to 2.39B
issue_number: int64, 21 to 125k
label: bool, 2 classes
issue_msg: stringlengths, 120 to 8.18k
issue_msg_n_tokens: int64, 120 to 8.18k
issue_embedding: listlengths, 3.07k to 3.07k
null
null
null
null
null
null
null
null
null
[ "xuxueli", "xxl-job" ]
Please answer some questions before submitting your issue. Thanks! ### Which version of XXL-JOB are you using? Version 2.3.0 ### Expected behavior XxlJobHelper.handleSuccess("handle success"); is expected to execute, and this log line should be visible when querying the dispatch logs in the UI; ### Actual behavior No callback is received; the dispatch logs contain no such callback entry ### Steps to reproduce the behavior After removing the async annotation, behavior returns to normal ### Other information
Version 2.3.0: callbacks are not received when using the asynchronous @Async annotation
https://api.github.com/repos/xuxueli/xxl-job/issues/3416/comments
10
2024-04-02T09:30:56
2024-04-08T00:58:05Z
https://github.com/xuxueli/xxl-job/issues/3416
2,220,015,928
3,416
false
This is a GitHub Issue repo:xxl-job owner:xuxueli Title : Version 2.3.0: callbacks are not received when using the asynchronous @Async annotation Issue date: --- start body --- Please answer some questions before submitting your issue. Thanks! ### Which version of XXL-JOB are you using? Version 2.3.0 ### Expected behavior XxlJobHelper.handleSuccess("handle success"); is expected to execute, and this log line should be visible when querying the dispatch logs in the UI; ### Actual behavior No callback is received; the dispatch logs contain no such callback entry ### Steps to reproduce the behavior After removing the async annotation, behavior returns to normal ### Other information --- end body ---
469
[ 0.016067083925008774, 0.01040063239634037, -0.005331416614353657, 0.02199573442339897, 0.010954167693853378, -0.005047365557402372, 0.004380938597023487, 0.0592573843896389, 0.007407173048704863, -0.009278995916247368, 0.0027057668194174767, 0.017829654738307, -0.022534703835844994, 0.0071...
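A plausible mechanism for the @Async report above, sketched as a minimal Python model (hypothetical — this is not XXL-JOB's actual implementation): if the helper keeps the job context in a thread-local, a handler body moved onto another thread by @Async sees an empty context, so the success callback is silently dropped.

```python
import threading

_context = threading.local()  # per-thread job context (assumed design)

def set_job_context(job_id):
    _context.job_id = job_id

def handle_success(msg):
    # Returns False when no job context is bound to the calling thread,
    # modeling a callback that is silently dropped.
    return getattr(_context, "job_id", None) is not None

def run_sync():
    # Handler body runs on the thread that bound the context.
    set_job_context(42)
    return handle_success("handle success")

def run_async():
    # Context bound on the scheduler thread, but the handler body
    # runs on a different thread, as @Async would arrange.
    set_job_context(42)
    result = {}

    def body():
        result["ok"] = handle_success("handle success")

    t = threading.Thread(target=body)
    t.start()
    t.join()
    return result["ok"]
```

This would explain why removing the async annotation restores the callback: the context and the handler body are back on the same thread.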
null
null
null
null
null
null
null
null
null
[ "ImageMagick", "ImageMagick" ]
### ImageMagick version 7 ### Operating system MacOS ### Operating system, version and so on MacOS Ventura ### Description I installed ImageMagick using Homebrew. Homebrew installed the third-party dependencies in the Cellar directory. I then copied the static libraries from all the third-party lib folders to my application directory and included the respective headers. I also included the three ImageMagick libraries (magick++, magickwand, magickcore) in my xcode project. The project builds successfully, but when I try to run it, it fails with: libc++abi: terminating with uncaught exception of type Magick::ErrorMissingDelegate: im_project: no decode delegate for this image format `JPG' @ error/constitute.c/ReadImage/746 terminating with uncaught exception of type Magick::ErrorMissingDelegate: im_project: no decode delegate for this image format `JPG' @ error/constitute.c/ReadImage/746 ### Steps to Reproduce I installed ImageMagick using Homebrew. Homebrew installed the third-party dependencies in the Cellar directory. I then copied the static libraries from all the third-party lib folders to my application directory and included the respective headers. I also included the three ImageMagick libraries (magick++, magickwand, magickcore) in my xcode project. ### Images _No response_
Binaries not available for macOS
https://api.github.com/repos/ImageMagick/ImageMagick/issues/6593/comments
2
2023-08-28T13:45:21
2023-08-29T06:32:27Z
https://github.com/ImageMagick/ImageMagick/issues/6593
1,869,800,338
6,593
false
This is a GitHub Issue repo:ImageMagick owner:ImageMagick Title : Binaries not available for macOS Issue date: --- start body --- ### ImageMagick version 7 ### Operating system MacOS ### Operating system, version and so on MacOS Ventura ### Description I installed ImageMagick using Homebrew. Homebrew installed the third-party dependencies in the Cellar directory. I then copied the static libraries from all the third-party lib folders to my application directory and included the respective headers. I also included the three ImageMagick libraries (magick++, magickwand, magickcore) in my xcode project. The project builds successfully, but when I try to run it, it fails with: libc++abi: terminating with uncaught exception of type Magick::ErrorMissingDelegate: im_project: no decode delegate for this image format `JPG' @ error/constitute.c/ReadImage/746 terminating with uncaught exception of type Magick::ErrorMissingDelegate: im_project: no decode delegate for this image format `JPG' @ error/constitute.c/ReadImage/746 ### Steps to Reproduce I installed ImageMagick using Homebrew. Homebrew installed the third-party dependencies in the Cellar directory. I then copied the static libraries from all the third-party lib folders to my application directory and included the respective headers. I also included the three ImageMagick libraries (magick++, magickwand, magickcore) in my xcode project. ### Images _No response_ --- end body ---
1,516
[ -0.010914307087659836, -0.026869894936680794, -0.019606387242674828, 0.031441330909729004, 0.005933345295488834, 0.018514322116971016, -0.007834936492145061, 0.04195563867688179, -0.0014095265651121736, 0.050235021859407425, -0.0017777812900021672, -0.030679425224661827, -0.02088893018662929...
null
null
null
null
null
null
null
null
null
[ "Piwigo", "Piwigo" ]
hello ![maint](https://github.com/Piwigo/Piwigo/assets/36087587/42f36c35-6f13-45cc-acd3-1a3be4399902) PHP Warning: Undefined array key "Optimal" in .maintenance_actions.tpl.php on line 334
Make compatible Maintenance
https://api.github.com/repos/Piwigo/Piwigo/issues/2038/comments
4
2023-11-10T17:05:36
2023-11-29T10:19:39Z
https://github.com/Piwigo/Piwigo/issues/2038
1,988,010,078
2,038
false
This is a GitHub Issue repo:Piwigo owner:Piwigo Title : Make compatible Maintenance Issue date: --- start body --- hello ![maint](https://github.com/Piwigo/Piwigo/assets/36087587/42f36c35-6f13-45cc-acd3-1a3be4399902) PHP Warning: Undefined array key "Optimal" in .maintenance_actions.tpl.php on line 334 --- end body ---
325
[ -0.054141104221343994, -0.0010934912133961916, -0.002370051108300686, 0.0483458936214447, 0.03140120953321457, 0.01492108590900898, -0.015850208699703217, 0.03747987747192383, -0.0022440683096647263, 0.00929910410195589, 0.021779274567961693, -0.02066117711365223, -0.01565336063504219, -0....
null
null
null
null
null
null
null
null
null
[ "LibreDWG", "libredwg" ]
When I give dwgadd this input: ``` text "baz" (1 2 3) 8 text.oblique_angle = 45.0 ``` parsing of the entity field does not work. This code: ``` #define SET_ENT(var, name) \ if (SSCANF_S (p, #var "." FMT_NAME " = %d.%d.%X", &s1[0] SZ, &i1, &i2, &u)) \ { \ BITCODE_H hdl; \ if (!ent.u.var || ent.type != DWG_TYPE_##name) \ fn_error ("Invalid type " #var ". Empty or wrong type\n"); \ hdl = dwg_add_handleref (dwg, i1, u, NULL); \ dwg_dynapi_entity_set_value (ent.u.var, #name, s1, hdl, 0); \ } ... ``` which is meant to parse a handle, is used instead of the correct float parse.
dwgadd issue with parsing of entity field (entity.field)
https://api.github.com/repos/LibreDWG/libredwg/issues/610/comments
2
2023-01-28T06:19:59
2023-02-02T08:33:07Z
https://github.com/LibreDWG/libredwg/issues/610
1,560,716,065
610
false
This is a GitHub Issue repo:libredwg owner:LibreDWG Title : dwgadd issue with parsing of entity field (entity.field) Issue date: --- start body --- When I give dwgadd this input: ``` text "baz" (1 2 3) 8 text.oblique_angle = 45.0 ``` parsing of the entity field does not work. This code: ``` #define SET_ENT(var, name) \ if (SSCANF_S (p, #var "." FMT_NAME " = %d.%d.%X", &s1[0] SZ, &i1, &i2, &u)) \ { \ BITCODE_H hdl; \ if (!ent.u.var || ent.type != DWG_TYPE_##name) \ fn_error ("Invalid type " #var ". Empty or wrong type\n"); \ hdl = dwg_add_handleref (dwg, i1, u, NULL); \ dwg_dynapi_entity_set_value (ent.u.var, #name, s1, hdl, 0); \ } ... ``` which is meant to parse a handle, is used instead of the correct float parse. --- end body ---
1,051
[ -0.017645366489887238, 0.020459497347474098, -0.008381548337638378, 0.015835193917155266, 0.038150496780872345, 0.03446931019425392, -0.015759136527776718, 0.042257606983184814, -0.01951638236641884, -0.009431143291294575, 0.013051485642790794, 0.018375519663095474, 0.01129455491900444, -0...
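The dwgadd report above is a branch-ordering problem: the handle pattern (`%d.%d.%X`) is tried before the float pattern, and `45.0` partially satisfies its `%d.%d` prefix. A hypothetical Python model of this ordering (regexes are illustrative, not the real parser):

```python
import re

# The handle branch is tried first, as in the SET_ENT macro; its "%d.%d"
# prefix also matches the integer and fractional parts of a float literal.
HANDLE = re.compile(r"(\w+)\.(\w+) = (\d+)\.(\d+)")
FLOAT = re.compile(r"(\w+)\.(\w+) = (\d+\.\d+)$")

def parse(line):
    m = HANDLE.match(line)
    if m:  # handle branch wins before the float branch is ever tried
        return ("handle", m.group(3), m.group(4))
    m = FLOAT.match(line)
    if m:
        return ("float", float(m.group(3)))
    return None
```

Under this model, `text.oblique_angle = 45.0` is consumed by the handle branch, so the float branch is unreachable — matching the reported behavior.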
null
null
null
null
null
null
null
null
null
[ "weidai11", "cryptopp" ]
VS2022 with the current head > cryptopp-master\validat2.cpp(1177,69): warning C4244: 'argument': conversion from 'CryptoPP::word' to 'long', possible loss of data Should `m` be declared as `Integer`?
Warning in validat2.cpp
https://api.github.com/repos/weidai11/cryptopp/issues/1238/comments
0
2023-10-01T13:05:50
2023-10-01T13:05:50Z
https://github.com/weidai11/cryptopp/issues/1238
1,920,757,942
1,238
false
This is a GitHub Issue repo:cryptopp owner:weidai11 Title : Warning in validat2.cpp Issue date: --- start body --- VS2022 with the current head > cryptopp-master\validat2.cpp(1177,69): warning C4244: 'argument': conversion from 'CryptoPP::word' to 'long', possible loss of data Should `m` be declared as `Integer`? --- end body ---
339
[ -0.029369154945015907, 0.008569530211389065, -0.014134434051811695, 0.028974179178476334, 0.03870747238397598, 0.011320243589580059, 0.0018514416879042983, 0.048186853528022766, -0.021046483889222145, 0.03929993510246277, -0.0006722496473230422, -0.004027326591312885, 0.009832036681473255, ...
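The C4244 warning above flags a 64-bit `CryptoPP::word` being passed where MSVC's 32-bit `long` is expected. A small Python illustration of the truncation risk (values are hypothetical, chosen only to show the wrap-around):

```python
def to_msvc_long(value):
    # MSVC's long is 32 bits; model the narrowing of a 64-bit word.
    truncated = value & 0xFFFFFFFF
    # Reinterpret the low 32 bits as a signed value.
    return truncated - 0x100000000 if truncated >= 0x80000000 else truncated

word = (1 << 40) + 7  # a 64-bit value that does not fit in 32 bits
```

Values that fit in 32 bits survive unchanged; anything wider loses its high bits, which is exactly the "possible loss of data" the compiler warns about.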
CVE-2022-28463
2022-05-08T23:15:17.820000
ImageMagick 7.1.0-27 is vulnerable to Buffer Overflow.
{ "cvssMetricV2": [ { "acInsufInfo": false, "baseSeverity": "MEDIUM", "cvssData": { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "availabilityImpact": "PARTIAL", "baseScore": 6.8, "confidentialityImpact": "PARTIAL", "integrityImpact": "PARTIAL", "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P", "version": "2.0" }, "exploitabilityScore": 8.6, "impactScore": 6.4, "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "source": "nvd@nist.gov", "type": "Primary", "userInteractionRequired": true } ], "cvssMetricV30": null, "cvssMetricV31": [ { "cvssData": { "attackComplexity": "LOW", "attackVector": "LOCAL", "availabilityImpact": "HIGH", "baseScore": 7.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H", "version": "3.1" }, "exploitabilityScore": 1.8, "impactScore": 5.9, "source": "nvd@nist.gov", "type": "Primary" } ] }
[ { "source": "cve@mitre.org", "tags": [ "Patch", "Third Party Advisory" ], "url": "https://github.com/ImageMagick/ImageMagick/commit/ca3654ebf7a439dc736f56f083c9aa98e4464b7f" }, { "source": "cve@mitre.org", "tags": [ "Exploit", "Issue Tracking", "Third Party ...
[ { "nodes": [ { "cpeMatch": [ { "criteria": "cpe:2.3:a:imagemagick:imagemagick:7.1.0-27:*:*:*:*:*:*:*", "matchCriteriaId": "0B494258-E7BF-4584-800D-D2D893003E17", "versionEndExcluding": null, "versionEndIncluding": null, "version...
https://github.com/ImageMagick/ImageMagick/issues/4988
[ "Exploit", "Issue Tracking", "Third Party Advisory" ]
github.com
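The cve_metrics column above stores NVD-style CVSS JSON. A sketch (assuming the key layout shown in the column) of pulling the highest CVSS v3.1 base score from such a record:

```python
def v31_base_score(metrics):
    # metrics follows the NVD layout seen in the cve_metrics column:
    # {"cvssMetricV31": [{"cvssData": {"baseScore": ...}}, ...], ...}
    entries = metrics.get("cvssMetricV31") or []
    scores = [e["cvssData"]["baseScore"] for e in entries]
    return max(scores) if scores else None

# Abbreviated sample mirroring the CVE-2022-28463 record above.
sample = {
    "cvssMetricV2": [{"cvssData": {"baseScore": 6.8}}],
    "cvssMetricV30": None,
    "cvssMetricV31": [{"cvssData": {"baseScore": 7.8}}],
}
```

The `or []` guards against the `"cvssMetricV30": null` pattern visible in the records, where a metric family is present but null rather than absent.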
[ "ImageMagick", "ImageMagick" ]
### ImageMagick version 7.1.0-27 ### Operating system Linux ### Operating system, version and so on Linux d477f3580ae9 5.4.0-105-generic #119~18.04.1-Ubuntu SMP Tue Mar 8 11:21:24 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ### Description Hello, We are currently working on fuzz testing feature, and we found a heap-use-after-free on ImageMagick. ### Steps to Reproduce ``` ➜ oss-fuzz git:(master) ✗ python infra/helper.py reproduce imagemagick encoder_cin_fuzzer ./build/out/imagemagick/crash-772bceeffddfb027f3363fb5be34fa55195a6e1a INFO:root:Running: docker run --rm --privileged -i -v /work/fuzz/oss-fuzz/build/out/imagemagick:/out -v /work/fuzz/oss-fuzz/build/out/imagemagick/crash-772bceeffddfb027f3363fb5be34fa55195a6e1a:/testcase -t gcr.io/oss-fuzz-base/base-runner reproduce encoder_cin_fuzzer -runs=100. + FUZZER=encoder_cin_fuzzer + shift + '[' '!' -v TESTCASE ']' + TESTCASE=/testcase + '[' '!' -f /testcase ']' + export RUN_FUZZER_MODE=interactive + RUN_FUZZER_MODE=interactive + export FUZZING_ENGINE=libfuzzer + FUZZING_ENGINE=libfuzzer + export SKIP_SEED_CORPUS=1 + SKIP_SEED_CORPUS=1 + run_fuzzer encoder_cin_fuzzer -runs=100 /testcase /out/encoder_cin_fuzzer -rss_limit_mb=2560 -timeout=25 -runs=100 /testcase -close_fd_mask=3 < /dev/null INFO: Running with entropic power schedule (0xFF, 100). INFO: Seed: 543797506 INFO: Loaded 1 modules (228899 inline 8-bit counters): 228899 [0x1f6a8b0, 0x1fa26d3), INFO: Loaded 1 PC tables (228899 PCs): 228899 [0x1fa26d8,0x2320908), /out/encoder_cin_fuzzer: Running 1 inputs 100 time(s) each. 
Running: /testcase ================================================================= ==18==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61b000001408 at pc 0x000000c77cfc bp 0x7ffd2026fd90 sp 0x7ffd2026fd88 READ of size 1 at 0x61b000001408 thread T0 SCARINESS: 12 (1-byte-read-heap-buffer-overflow) #0 0xc77cfb in PushLongPixel /src/imagemagick/./MagickCore/quantum-private.h:256:27 #1 0xc77cfb in ImportRGBQuantum /src/imagemagick/MagickCore/quantum-import.c:4061:15 #2 0xc77cfb in ImportQuantumPixels /src/imagemagick/MagickCore/quantum-import.c:4774:7 #3 0xd8a7e0 in ReadCINImage /src/imagemagick/coders/cin.c:774:12 #4 0x9cfca1 in ReadImage /src/imagemagick/MagickCore/constitute.c:728:15 #5 0x94d996 in BlobToImage /src/imagemagick/MagickCore/blob.c:475:13 #6 0x81e2b1 in Magick::Image::read(Magick::Blob const&) /src/imagemagick/Magick++/lib/Image.cpp:4043:12 #7 0x7ea865 in LLVMFuzzerTestOneInput /src/imagemagick/Magick++/fuzz/encoder_fuzzer.cc:66:11 #8 0x6e0502 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15 #9 0x6cb462 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6 #10 0x6d0ccc in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9 #11 0x6fa2b2 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10 #12 0x7f40139740b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x240b2) #13 0x6a9bad in _start (/out/encoder_cin_fuzzer+0x6a9bad) DEDUP_TOKEN: PushLongPixel--ImportRGBQuantum--ImportQuantumPixels 0x61b000001408 is located 0 bytes to the right of 1416-byte region [0x61b000000e80,0x61b000001408) allocated by thread T0 here: #0 0x7e678d in operator new[](unsigned long) /src/llvm-project/compiler-rt/lib/asan/asan_new_delete.cpp:98:3 #1 0x810ed0 in Magick::BlobRef::BlobRef(void 
const*, unsigned long) /src/imagemagick/Magick++/lib/BlobRef.cpp:30:12 #2 0x80ff7d in Magick::Blob::Blob(void const*, unsigned long) /src/imagemagick/Magick++/lib/Blob.cpp:27:18 #3 0x7ea859 in LLVMFuzzerTestOneInput /src/imagemagick/Magick++/fuzz/encoder_fuzzer.cc:64:22 #4 0x6e0502 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15 #5 0x6cb462 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6 #6 0x6d0ccc in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9 #7 0x6fa2b2 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10 #8 0x7f40139740b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x240b2) DEDUP_TOKEN: operator new[](unsigned long)--Magick::BlobRef::BlobRef(void const*, unsigned long)--Magick::Blob::Blob(void const*, unsigned long) SUMMARY: AddressSanitizer: heap-buffer-overflow /src/imagemagick/./MagickCore/quantum-private.h:256:27 in PushLongPixel Shadow bytes around the buggy address: 0x0c367fff8230: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c367fff8240: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c367fff8250: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c367fff8260: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c367fff8270: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 =>0x0c367fff8280: 00[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff8290: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff82a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff82b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff82c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff82d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially 
addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==18==ABORTING ``` ### Images [poc.zip](https://github.com/ImageMagick/ImageMagick/files/8347686/poc.zip)
AddressSanitizer: heap-buffer-overflow /src/imagemagick/./MagickCore/quantum-private.h:256:27 in PushLongPixel
https://api.github.com/repos/ImageMagick/ImageMagick/issues/4988/comments
3
2022-03-25T05:27:23
2022-04-30T09:27:47Z
https://github.com/ImageMagick/ImageMagick/issues/4988
1,180,362,220
4,988
true
This is a GitHub Issue repo:ImageMagick owner:ImageMagick Title : AddressSanitizer: heap-buffer-overflow /src/imagemagick/./MagickCore/quantum-private.h:256:27 in PushLongPixel Issue date: --- start body --- ### ImageMagick version 7.1.0-27 ### Operating system Linux ### Operating system, version and so on Linux d477f3580ae9 5.4.0-105-generic #119~18.04.1-Ubuntu SMP Tue Mar 8 11:21:24 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ### Description Hello, We are currently working on fuzz testing feature, and we found a heap-use-after-free on ImageMagick. ### Steps to Reproduce ``` ➜ oss-fuzz git:(master) ✗ python infra/helper.py reproduce imagemagick encoder_cin_fuzzer ./build/out/imagemagick/crash-772bceeffddfb027f3363fb5be34fa55195a6e1a INFO:root:Running: docker run --rm --privileged -i -v /work/fuzz/oss-fuzz/build/out/imagemagick:/out -v /work/fuzz/oss-fuzz/build/out/imagemagick/crash-772bceeffddfb027f3363fb5be34fa55195a6e1a:/testcase -t gcr.io/oss-fuzz-base/base-runner reproduce encoder_cin_fuzzer -runs=100. + FUZZER=encoder_cin_fuzzer + shift + '[' '!' -v TESTCASE ']' + TESTCASE=/testcase + '[' '!' -f /testcase ']' + export RUN_FUZZER_MODE=interactive + RUN_FUZZER_MODE=interactive + export FUZZING_ENGINE=libfuzzer + FUZZING_ENGINE=libfuzzer + export SKIP_SEED_CORPUS=1 + SKIP_SEED_CORPUS=1 + run_fuzzer encoder_cin_fuzzer -runs=100 /testcase /out/encoder_cin_fuzzer -rss_limit_mb=2560 -timeout=25 -runs=100 /testcase -close_fd_mask=3 < /dev/null INFO: Running with entropic power schedule (0xFF, 100). INFO: Seed: 543797506 INFO: Loaded 1 modules (228899 inline 8-bit counters): 228899 [0x1f6a8b0, 0x1fa26d3), INFO: Loaded 1 PC tables (228899 PCs): 228899 [0x1fa26d8,0x2320908), /out/encoder_cin_fuzzer: Running 1 inputs 100 time(s) each. 
Running: /testcase ================================================================= ==18==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x61b000001408 at pc 0x000000c77cfc bp 0x7ffd2026fd90 sp 0x7ffd2026fd88 READ of size 1 at 0x61b000001408 thread T0 SCARINESS: 12 (1-byte-read-heap-buffer-overflow) #0 0xc77cfb in PushLongPixel /src/imagemagick/./MagickCore/quantum-private.h:256:27 #1 0xc77cfb in ImportRGBQuantum /src/imagemagick/MagickCore/quantum-import.c:4061:15 #2 0xc77cfb in ImportQuantumPixels /src/imagemagick/MagickCore/quantum-import.c:4774:7 #3 0xd8a7e0 in ReadCINImage /src/imagemagick/coders/cin.c:774:12 #4 0x9cfca1 in ReadImage /src/imagemagick/MagickCore/constitute.c:728:15 #5 0x94d996 in BlobToImage /src/imagemagick/MagickCore/blob.c:475:13 #6 0x81e2b1 in Magick::Image::read(Magick::Blob const&) /src/imagemagick/Magick++/lib/Image.cpp:4043:12 #7 0x7ea865 in LLVMFuzzerTestOneInput /src/imagemagick/Magick++/fuzz/encoder_fuzzer.cc:66:11 #8 0x6e0502 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15 #9 0x6cb462 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6 #10 0x6d0ccc in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9 #11 0x6fa2b2 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10 #12 0x7f40139740b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x240b2) #13 0x6a9bad in _start (/out/encoder_cin_fuzzer+0x6a9bad) DEDUP_TOKEN: PushLongPixel--ImportRGBQuantum--ImportQuantumPixels 0x61b000001408 is located 0 bytes to the right of 1416-byte region [0x61b000000e80,0x61b000001408) allocated by thread T0 here: #0 0x7e678d in operator new[](unsigned long) /src/llvm-project/compiler-rt/lib/asan/asan_new_delete.cpp:98:3 #1 0x810ed0 in Magick::BlobRef::BlobRef(void 
const*, unsigned long) /src/imagemagick/Magick++/lib/BlobRef.cpp:30:12 #2 0x80ff7d in Magick::Blob::Blob(void const*, unsigned long) /src/imagemagick/Magick++/lib/Blob.cpp:27:18 #3 0x7ea859 in LLVMFuzzerTestOneInput /src/imagemagick/Magick++/fuzz/encoder_fuzzer.cc:64:22 #4 0x6e0502 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15 #5 0x6cb462 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6 #6 0x6d0ccc in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9 #7 0x6fa2b2 in main /src/llvm-project/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10 #8 0x7f40139740b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x240b2) DEDUP_TOKEN: operator new[](unsigned long)--Magick::BlobRef::BlobRef(void const*, unsigned long)--Magick::Blob::Blob(void const*, unsigned long) SUMMARY: AddressSanitizer: heap-buffer-overflow /src/imagemagick/./MagickCore/quantum-private.h:256:27 in PushLongPixel Shadow bytes around the buggy address: 0x0c367fff8230: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c367fff8240: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c367fff8250: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c367fff8260: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c367fff8270: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 =>0x0c367fff8280: 00[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff8290: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff82a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff82b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff82c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c367fff82d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially 
addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==18==ABORTING ``` ### Images [poc.zip](https://github.com/ImageMagick/ImageMagick/files/8347686/poc.zip) --- end body ---
6,634
[ -0.03356558457016945, 0.005888080224394798, -0.004389616660773754, 0.014025619253516197, 0.04422759264707565, 0.0021507360506802797, -0.024624163284897804, 0.02871408872306347, -0.015048099681735039, 0.025611387565732002, -0.0017972748028114438, 0.001953291241079569, 0.008617047220468521, ...
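The ASan report above shows a 1-byte read exactly at the end of a 1416-byte allocation. A hypothetical Python model of the missing bounds check in a PushLongPixel-style reader (not the actual ImageMagick code):

```python
def push_long_pixel(buf, offset):
    # Reads 4 bytes big-endian; raises when the read would cross the end
    # of the allocation, where the unchecked C code reads neighboring memory.
    if offset + 4 > len(buf):
        raise IndexError("read past end of %d-byte region" % len(buf))
    return int.from_bytes(buf[offset:offset + 4], "big")

region = bytes(1416)  # same size as the region in the ASan report
```

With the check, an offset whose 4-byte read would end past byte 1416 fails loudly instead of overflowing, which is the class of fix the linked commit applies.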
null
null
null
null
null
null
null
null
null
[ "WebAssembly", "wabt" ]
The following wat program works fine in wat2wasm without function references enabled: ``` (module (func $example (export "example") (result funcref) ref.func $example)) ``` However, enabling the "function references" feature causes the following error: ``` Error: validate failed: test.wast:3:5: error: type mismatch in implicit return, expected [funcref] but got [(ref 0)] ref.func $example)) ^^^^^^^^ ``` It looks like `ref 0` is not marked as a subtype of `funcref`. Tested with the online demo: https://webassembly.github.io/wabt/demo/wat2wasm/
Incorrect type for ref.func when function references are enabled
https://api.github.com/repos/WebAssembly/wabt/issues/2241/comments
1
2023-05-28T19:28:30
2023-05-29T22:52:33Z
https://github.com/WebAssembly/wabt/issues/2241
1,729,631,247
2,241
false
This is a GitHub Issue repo:wabt owner:WebAssembly Title : Incorrect type for ref.func when function references are enabled Issue date: --- start body --- The following wat program works fine in wat2wasm without function references enabled: ``` (module (func $example (export "example") (result funcref) ref.func $example)) ``` However, enabling the "function references" feature causes the following error: ``` Error: validate failed: test.wast:3:5: error: type mismatch in implicit return, expected [funcref] but got [(ref 0)] ref.func $example)) ^^^^^^^^ ``` It looks like `ref 0` is not marked as a subtype of `funcref`. Tested with the online demo: https://webassembly.github.io/wabt/demo/wat2wasm/ --- end body ---
760
[ -0.0031104835215955973, 0.017278213053941727, -0.007815048098564148, 0.014643927104771137, 0.016548719257116318, -0.0021986153442412615, 0.008760688826441765, 0.03493468090891838, -0.014468307606875896, 0.0005049047758802772, 0.015359912067651749, 0.0784071534872055, -0.01007107738405466, ...
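The wat2wasm error above suggests the validator's subtype relation lacks an edge from `(ref 0)` to `funcref`. A toy Python model of the check (hypothetical — not wabt's actual type system):

```python
def is_subtype(actual, expected, edges):
    # A type is a subtype of itself, or of anything its edge set declares.
    if actual == expected:
        return True
    return expected in edges.get(actual, set())

buggy_edges = {}                          # no (ref 0) <: funcref edge: validation fails
fixed_edges = {("ref", 0): {"funcref"}}   # with the edge, ref.func validates
```

In the buggy relation the implicit return of `(ref 0)` cannot satisfy the declared `funcref` result, reproducing the "type mismatch in implicit return" error.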
CVE-2021-25746
2022-05-06T01:15:09.180000
A security issue was discovered in ingress-nginx where a user that can create or update ingress objects can use .metadata.annotations in an Ingress object (in the networking.k8s.io or extensions API group) to obtain the credentials of the ingress-nginx controller. In the default configuration, that credential has access to all secrets in the cluster.
{ "cvssMetricV2": [ { "acInsufInfo": false, "baseSeverity": "MEDIUM", "cvssData": { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "SINGLE", "availabilityImpact": "NONE", "baseScore": 5.5, "confidentialityImpact": "PARTIAL", "integrityImpact": "PARTIAL", "vectorString": "AV:N/AC:L/Au:S/C:P/I:P/A:N", "version": "2.0" }, "exploitabilityScore": 8, "impactScore": 4.9, "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "source": "nvd@nist.gov", "type": "Primary", "userInteractionRequired": false } ], "cvssMetricV30": null, "cvssMetricV31": [ { "cvssData": { "attackComplexity": "LOW", "attackVector": "NETWORK", "availabilityImpact": "NONE", "baseScore": 7.1, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "integrityImpact": "LOW", "privilegesRequired": "LOW", "scope": "UNCHANGED", "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:N", "version": "3.1" }, "exploitabilityScore": 2.8, "impactScore": 4.2, "source": "nvd@nist.gov", "type": "Primary" }, { "cvssData": { "attackComplexity": "LOW", "attackVector": "NETWORK", "availabilityImpact": "LOW", "baseScore": 7.6, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "integrityImpact": "LOW", "privilegesRequired": "LOW", "scope": "UNCHANGED", "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L", "version": "3.1" }, "exploitabilityScore": 2.8, "impactScore": 4.7, "source": "jordan@liggitt.net", "type": "Secondary" } ] }
[ { "source": "jordan@liggitt.net", "tags": [ "Issue Tracking", "Mitigation", "Third Party Advisory" ], "url": "https://github.com/kubernetes/ingress-nginx/issues/8503" }, { "source": "jordan@liggitt.net", "tags": [ "Issue Tracking", "Mitigation", "Third...
[ { "nodes": [ { "cpeMatch": [ { "criteria": "cpe:2.3:a:kubernetes:ingress-nginx:*:*:*:*:*:*:*:*", "matchCriteriaId": "7DD01B7D-743B-41AF-9D8F-D8C6038E6BD0", "versionEndExcluding": "1.2.0", "versionEndIncluding": null, "versionSta...
https://github.com/kubernetes/ingress-nginx/issues/8503
[ "Issue Tracking", "Mitigation", "Third Party Advisory" ]
github.com
[ "kubernetes", "ingress-nginx" ]
### Issue Details A security issue was discovered in [ingress-nginx](https://github.com/kubernetes/ingress-nginx) where a user that can create or update ingress objects can use `.metadata.annotations` in an Ingress object (in the `networking.k8s.io` or `extensions` API group) to obtain the credentials of the ingress-nginx controller. In the default configuration, that credential has access to all secrets in the cluster. This issue has been rated **High** ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L)), and assigned **CVE-2021-25746**. ### Affected Components and Configurations This bug affects ingress-nginx. If you do not have ingress-nginx installed on your cluster, you are not affected. You can check this by running `kubectl get po -n ingress-nginx`. Multitenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue. #### Affected Versions - <v1.2.0 #### Fixed Versions - v1.2.0-beta.0 - v1.2.0 ### Mitigation If you are unable to roll out the fix, this vulnerability can be mitigated by implementing an admission policy that restricts the `metadata.annotations` values to known safe (see the newly added [rules](https://github.com/kubernetes/ingress-nginx/blame/main/internal/ingress/inspector/rules.go), or the suggested value for [annotation-value-word-blocklist](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist)). ### Detection If you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io) ### Additional Details See ingress-nginx Issue [#8503](https://github.com/kubernetes/ingress-nginx/issues/8503) for more details. ### Acknowledgements This vulnerability was reported by Anthony Weems, and separately by jeffrey&oliver. 
Thank You, CJ Cullen on behalf of the Kubernetes Security Response Committee
CVE-2021-25746: Ingress-nginx directive injection via annotations
https://api.github.com/repos/kubernetes/ingress-nginx/issues/8503/comments
10
2022-04-22T16:18:27
2022-05-10T16:12:39Z
https://github.com/kubernetes/ingress-nginx/issues/8503
1,212,547,731
8,503
true
This is a GitHub Issue repo:ingress-nginx owner:kubernetes Title : CVE-2021-25746: Ingress-nginx directive injection via annotations Issue date: --- start body --- ### Issue Details A security issue was discovered in [ingress-nginx](https://github.com/kubernetes/ingress-nginx) where a user that can create or update ingress objects can use `.metadata.annotations` in an Ingress object (in the `networking.k8s.io` or `extensions` API group) to obtain the credentials of the ingress-nginx controller. In the default configuration, that credential has access to all secrets in the cluster. This issue has been rated **High** ([CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:L)), and assigned **CVE-2021-25746**. ### Affected Components and Configurations This bug affects ingress-nginx. If you do not have ingress-nginx installed on your cluster, you are not affected. You can check this by running `kubectl get po -n ingress-nginx`. Multitenant environments where non-admin users have permissions to create Ingress objects are most affected by this issue. #### Affected Versions - <v1.2.0 #### Fixed Versions - v1.2.0-beta.0 - v1.2.0 ### Mitigation If you are unable to roll out the fix, this vulnerability can be mitigated by implementing an admission policy that restricts the `metadata.annotations` values to known safe (see the newly added [rules](https://github.com/kubernetes/ingress-nginx/blame/main/internal/ingress/inspector/rules.go), or the suggested value for [annotation-value-word-blocklist](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist)). ### Detection If you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io) ### Additional Details See ingress-nginx Issue [#8503](https://github.com/kubernetes/ingress-nginx/issues/8503) for more details. 
### Acknowledgements This vulnerability was reported by Anthony Weems, and separately by jeffrey&oliver. Thank You, CJ Cullen on behalf of the Kubernetes Security Response Committee --- end body ---
2,229
[ -0.031391892582178116, -0.03868468105792999, -0.011590325273573399, -0.012111238203942776, 0.018999634310603142, -0.013475209474563599, -0.013324419036507607, 0.03262563422322273, -0.009534087963402271, 0.01468153577297926, -0.0004609398893080652, -0.0018317648209631443, -0.00992477312684059...
null
null
null
null
null
null
null
null
null
[ "emqx", "nanomq" ]
**Describe the bug** When setting the parallel value in the system config to 0, nanomq doesn't seem to properly determine the number of parallel tasks needed to reliably deliver messages over a bridge. **Expected behavior** When setting the system parallel value to 0, I would expect that we could send many messages very quickly through the broker and see that all of them are delivered over a bridge to EMQX broker. **Actual Behavior** I am noticing several dropped messages when setting the parallel value to 0. If I set the parallel configuration to 255 I notice only a couple of lost messages. If I set the configuration value to 1024 ([outside the documented value](https://nanomq.io/docs/en/latest/config-description/broker.html)), then I see no lost messages. We tried this value due to a suggestion found [here](https://askemq-com.translate.goog/t/topic/5872?_x_tr_sl=zh-CN&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=sc&_x_tr_hist=true) **To Reproduce** Send 100+ messages with a small interval of 10 milliseconds to nanoMQ forwarding messages to an EMQX broker **Environment Details** - NanoMQ version v0.21.8-8 - Operating system and version Debian 11 - Compiler and language used Go Client - testing scenario Sending 100 messages to a topic with very small intervals (10 milliseconds) **Client SDK** [Go Paho ](https://github.com/eclipse/paho.golang)
Parallel tasks are not being accurately determined
https://api.github.com/repos/nanomq/nanomq/issues/1739/comments
5
2024-04-04T19:06:03
2024-05-09T03:22:29Z
https://github.com/nanomq/nanomq/issues/1739
2,226,311,001
1,739
false
This is a GitHub Issue repo:nanomq owner:emqx Title : Parallel tasks are not being accurately determined Issue date: --- start body --- **Describe the bug** When setting the parallel value in the system config to 0, nanomq doesn't seem to properly determine the number of parallel tasks needed to reliably deliver messages over a bridge. **Expected behavior** When setting the system parallel value to 0, I would expect that we could send many messages very quickly through the broker and see that all of them are delivered over a bridge to EMQX broker. **Actual Behavior** I am noticing several dropped messages when setting the parallel value to 0. If I set the parallel configuration to 255 I notice only a couple of lost messages. If I set the configuration value to 1024 ([outside the documented value](https://nanomq.io/docs/en/latest/config-description/broker.html)), then I see no lost messages. We tried this value due to a suggestion found [here](https://askemq-com.translate.goog/t/topic/5872?_x_tr_sl=zh-CN&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=sc&_x_tr_hist=true) **To Reproduce** Send 100+ messages with a small interval of 10 milliseconds to nanoMQ forwarding messages to an EMQX broker **Environment Details** - NanoMQ version v0.21.8-8 - Operating system and version Debian 11 - Compiler and language used Go Client - testing scenario Sending 100 messages to a topic with very small intervals (10 milliseconds) **Client SDK** [Go Paho ](https://github.com/eclipse/paho.golang) --- end body ---
1,551
[ -0.029883207753300667, 0.028512418270111084, -0.0061533208936452866, -0.008544586598873138, 0.025405297055840492, 0.015223377384245396, 0.001103295013308525, 0.0532170906662941, -0.03332541137933731, 0.009656449779868126, 0.0009033882524818182, -0.014598906971514225, 0.006416055839508772, ...
null
null
null
null
null
null
null
null
null
[ "ImageMagick", "ImageMagick" ]
### ImageMagick version 7.0.10-60 ### Operating system Windows ### Operating system, version and so on Windows Server 2012 R2 ### Description Ghostscript version: 9.53.3 Recently upgraded ImageMagick from 6 to 7.0.10-60. After that, while converting an EPS image to JPEG, we noticed that two images are generated, and this is causing many issues in my app. Before the migration, only one image was generated. This issue started occurring after the ImageMagick upgrade. Do we have any fix for this, like passing some arguments to ignore the TIFF generation/avoid creating the TIFF? I could not find anything on the internet. With **identify** myImage.eps: myImage.eps[0] EPT <> myImage.eps[1] TIFF <> Using myImage.eps[0] does work, but I am more confused as to why I don't have to do this with the older version; it just ignores the TIFF preview. With ImageMagick 7.0.9-23 Q16 x86, it's producing only one image, and with ImageMagick 7.0.10-60 Q16, it's creating two images as explained above. ### Steps to Reproduce Always occurring while converting an EPS image to JPEG format. ### Images _No response_
Issue while converting EPS to JPEG : creating multiple images
https://api.github.com/repos/ImageMagick/ImageMagick/issues/7046/comments
30
2024-01-19T11:18:10
2024-01-23T04:24:33Z
https://github.com/ImageMagick/ImageMagick/issues/7046
2,090,312,859
7,046
false
This is a GitHub Issue repo:ImageMagick owner:ImageMagick Title : Issue while converting EPS to JPEG : creating multiple images Issue date: --- start body --- ### ImageMagick version 7.0.10-60 ### Operating system Windows ### Operating system, version and so on Windows Server 2012 R2 ### Description Ghostscript version: 9.53.3 Recently upgraded ImageMagick from 6 to 7.0.10-60. After that, while converting an EPS image to JPEG, we noticed that two images are generated, and this is causing many issues in my app. Before the migration, only one image was generated. This issue started occurring after the ImageMagick upgrade. Do we have any fix for this, like passing some arguments to ignore the TIFF generation/avoid creating the TIFF? I could not find anything on the internet. With **identify** myImage.eps: myImage.eps[0] EPT <> myImage.eps[1] TIFF <> Using myImage.eps[0] does work, but I am more confused as to why I don't have to do this with the older version; it just ignores the TIFF preview. With ImageMagick 7.0.9-23 Q16 x86, it's producing only one image, and with ImageMagick 7.0.10-60 Q16, it's creating two images as explained above. ### Steps to Reproduce Always occurring while converting an EPS image to JPEG format. ### Images _No response_ --- end body ---
1,358
[ -0.006965338718146086, 0.005666424985975027, -0.008432630449533463, 0.02596452832221985, 0.0022593538742512465, 0.028782278299331665, 0.023270485922694206, 0.05673984810709953, -0.026335647329688072, 0.035737309604883194, 0.016095533967018127, -0.052891217172145844, -0.016975222155451775, ...
null
null
null
null
null
null
null
null
null
[ "axiomatic-systems", "Bento4" ]
Can you please add binaries for the ARM architecture on the official downloads page https://www.bento4.com/downloads/
binary for arm architecture linux on downloads page
https://api.github.com/repos/axiomatic-systems/Bento4/issues/823/comments
0
2023-01-03T11:23:07
2023-05-29T02:37:55Z
https://github.com/axiomatic-systems/Bento4/issues/823
1,517,256,589
823
false
This is a GitHub Issue repo:Bento4 owner:axiomatic-systems Title : binary for arm architecture linux on downloads page Issue date: --- start body --- Can you please add binaries for the ARM architecture on the official downloads page https://www.bento4.com/downloads/ --- end body ---
284
[ -0.01956113986670971, 0.038667697459459305, -0.01944749429821968, -0.011932721361517906, 0.022871049121022224, 0.014859079383313656, 0.011556272394955158, 0.06159557029604912, 0.006225613411515951, 0.03991779312491417, -0.01222393661737442, 0.0011204683687537909, 0.007223558146506548, 0.03...
null
null
null
null
null
null
null
null
null
[ "LibreDWG", "libredwg" ]
Hello, I found a bug in dwg2dxf. ## environment - ubuntu 20.04, GCC 9.4.0, libredwg latest commit https://github.com/LibreDWG/libredwg/commit/76a574c2b7ecaab13a5ad7c9a9ba82074a5614fe - **not reproducible on the release 0.12.5** compile with ASAN ``` export CFLAGS="-fsanitize=address -g" export CXXFLAGS="-fsanitize=address -g" ./autogen.sh && ./configure --disable-shared && make -j$(nproc) ``` ## ASAN Log ``` root@535d9a1d505e:/# ./programs/dwg2dxf /dwg_poc1 Reading DWG file /dwg_poc1_trim ERROR: Header CRC mismatch 3030 <=> 9E42 Warning: Fixup illegal Header Length ERROR: bit_read_BD: unexpected 2-bit code: '11' ERROR: Invalid BD unit2_ratio Warning: Header Section[48] CRC mismatch 3030 <=> 5E4D ERROR: Invalid size 808464432, should be: 298, endpos: 12640 ERROR: Invalid object type 49344, only 0 classes ERROR: Invalid class index 48844 >= 0 ERROR: MS size overflow @18446744073668669582 ERROR: MS size overflow @18446744073668669582 ERROR: MS size overflow @18446744073668669600 ERROR: MS size overflow @18446744073668669770 ERROR: MS size overflow @18446744073668669818 ERROR: MS size overflow @18446744073668669866 ERROR: MS size overflow @18446744073668669914 ERROR: MS size overflow @18446744073668669962 ERROR: MS size overflow @18446744073668670010 ERROR: MS size overflow @18446744073668670058 ERROR: MS size overflow @18446744073668670106 ERROR: MS size overflow @18446744073668670154 ERROR: MS size overflow @18446744073668670202 ERROR: MS size overflow @18446744073668670250 ERROR: MS size overflow @18446744073668670298 ERROR: MS size overflow @18446744073668670346 ERROR: MS size overflow @18446744073668670394 ERROR: MS size overflow @18446744073668670442 ERROR: MS size overflow @18446744073668670490 ERROR: bit_read_RC buffer overflow at 21600.0 >= 21600 ERROR: MS size overflow @18446744073668670490 ERROR: bit_read_RC buffer overflow at 21600.0 >= 21600 ERROR: bit_read_RC buffer overflow at 21600.0 >= 21600 Warning: handleoff 0x0 looks wrong, max_handles 60 - 
last_handle 0 = 60 (@21600) ERROR: bit_read_RC buffer overflow at 21600.0 >= 21600 ERROR: bit_read_RS buffer overflow at 21600.0 >= 21600 ERROR: AddressSanitizer:DEADLYSIGNAL ================================================================= ==167486==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000020 (pc 0x55fb6333b255 bp 0x7ffe08a3e710 sp 0x7ffe08a3e4a0 T0) ==167486==The signal is caused by a READ memory access. ==167486==Hint: address points to the zero page. #0 0x55fb6333b254 in secondheader_private /benchmark_vuln/source/vuln/libredwg/src/2ndheader.spec:42 #1 0x55fb633182a0 in decode_R13_R2000 /benchmark_vuln/source/vuln/libredwg/src/decode.c:937 #2 0x55fb632fee27 in dwg_decode /benchmark_vuln/source/vuln/libredwg/src/decode.c:232 #3 0x55fb632c6369 in dwg_read_file /benchmark_vuln/source/vuln/libredwg/src/dwg.c:268 #4 0x55fb632c3ed8 in main /benchmark_vuln/source/vuln/libredwg/programs/dwg2dxf.c:261 #5 0x7fd5a1e00082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) #6 0x55fb632c2d6d in _start (/benchmark_vuln/source/vuln/libredwg/programs/dwg2dxf+0x25cd6d) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /benchmark_vuln/source/vuln/libredwg/src/2ndheader.spec:42 in secondheader_private ==167486==ABORTING ``` ## POC [poc.zip](https://github.com/LibreDWG/libredwg/files/13597214/poc.zip)
[FUZZ] SEGV read in dwg2dxf, secondheader_private
https://api.github.com/repos/LibreDWG/libredwg/issues/890/comments
1
2023-12-07T08:09:33
2023-12-07T15:30:36Z
https://github.com/LibreDWG/libredwg/issues/890
2,030,135,755
890
false
This is a GitHub Issue repo:libredwg owner:LibreDWG Title : [FUZZ] SEGV read in dwg2dxf, secondheader_private Issue date: --- start body --- Hello, I found a bug in dwg2dxf. ## environment - ubuntu 20.04, GCC 9.4.0, libredwg latest commit https://github.com/LibreDWG/libredwg/commit/76a574c2b7ecaab13a5ad7c9a9ba82074a5614fe - **not reproducible on the release 0.12.5** compile with ASAN ``` export CFLAGS="-fsanitize=address -g" export CXXFLAGS="-fsanitize=address -g" ./autogen.sh && ./configure --disable-shared && make -j$(nproc) ``` ## ASAN Log ``` root@535d9a1d505e:/# ./programs/dwg2dxf /dwg_poc1 Reading DWG file /dwg_poc1_trim ERROR: Header CRC mismatch 3030 <=> 9E42 Warning: Fixup illegal Header Length ERROR: bit_read_BD: unexpected 2-bit code: '11' ERROR: Invalid BD unit2_ratio Warning: Header Section[48] CRC mismatch 3030 <=> 5E4D ERROR: Invalid size 808464432, should be: 298, endpos: 12640 ERROR: Invalid object type 49344, only 0 classes ERROR: Invalid class index 48844 >= 0 ERROR: MS size overflow @18446744073668669582 ERROR: MS size overflow @18446744073668669582 ERROR: MS size overflow @18446744073668669600 ERROR: MS size overflow @18446744073668669770 ERROR: MS size overflow @18446744073668669818 ERROR: MS size overflow @18446744073668669866 ERROR: MS size overflow @18446744073668669914 ERROR: MS size overflow @18446744073668669962 ERROR: MS size overflow @18446744073668670010 ERROR: MS size overflow @18446744073668670058 ERROR: MS size overflow @18446744073668670106 ERROR: MS size overflow @18446744073668670154 ERROR: MS size overflow @18446744073668670202 ERROR: MS size overflow @18446744073668670250 ERROR: MS size overflow @18446744073668670298 ERROR: MS size overflow @18446744073668670346 ERROR: MS size overflow @18446744073668670394 ERROR: MS size overflow @18446744073668670442 ERROR: MS size overflow @18446744073668670490 ERROR: bit_read_RC buffer overflow at 21600.0 >= 21600 ERROR: MS size overflow @18446744073668670490 ERROR: bit_read_RC buffer 
overflow at 21600.0 >= 21600 ERROR: bit_read_RC buffer overflow at 21600.0 >= 21600 Warning: handleoff 0x0 looks wrong, max_handles 60 - last_handle 0 = 60 (@21600) ERROR: bit_read_RC buffer overflow at 21600.0 >= 21600 ERROR: bit_read_RS buffer overflow at 21600.0 >= 21600 ERROR: AddressSanitizer:DEADLYSIGNAL ================================================================= ==167486==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000020 (pc 0x55fb6333b255 bp 0x7ffe08a3e710 sp 0x7ffe08a3e4a0 T0) ==167486==The signal is caused by a READ memory access. ==167486==Hint: address points to the zero page. #0 0x55fb6333b254 in secondheader_private /benchmark_vuln/source/vuln/libredwg/src/2ndheader.spec:42 #1 0x55fb633182a0 in decode_R13_R2000 /benchmark_vuln/source/vuln/libredwg/src/decode.c:937 #2 0x55fb632fee27 in dwg_decode /benchmark_vuln/source/vuln/libredwg/src/decode.c:232 #3 0x55fb632c6369 in dwg_read_file /benchmark_vuln/source/vuln/libredwg/src/dwg.c:268 #4 0x55fb632c3ed8 in main /benchmark_vuln/source/vuln/libredwg/programs/dwg2dxf.c:261 #5 0x7fd5a1e00082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) #6 0x55fb632c2d6d in _start (/benchmark_vuln/source/vuln/libredwg/programs/dwg2dxf+0x25cd6d) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /benchmark_vuln/source/vuln/libredwg/src/2ndheader.spec:42 in secondheader_private ==167486==ABORTING ``` ## POC [poc.zip](https://github.com/LibreDWG/libredwg/files/13597214/poc.zip) --- end body ---
3,629
[ -0.01641518995165825, 0.016429847106337547, 0.00010551477316766977, 0.0051920367404818535, 0.07369247823953629, 0.026425525546073914, -0.013410624116659164, 0.040217217057943344, -0.04092072695493698, 0.023699430748820305, -0.022482948377728462, -0.0034387626219540834, 0.00752607174217701, ...
null
null
null
null
null
null
null
null
null
[ "jerryscript-project", "jerryscript" ]
## JerryScript commit hash 55acdf2048b390d0f56f12e64dbfb2559f0e70ad ## Build platform Ubuntu 20.04 LTS ## Build steps ``` ./tools/build.py --clean --debug --compile-flag=-fsanitize=address \ --compile-flag=-m32 --compile-flag=-fno-omit-frame-pointer \ --compile-flag=-fno-common --compile-flag=-g \ --strip=off --system-allocator=on --logging=on \ --linker-flag=-fuse-ld=gold --error-messages=on --line-info=ON \ --stack-limit=10 ``` ## poc ``` var ab = new Int8Array(20).map((v, i) => i).buffer; var ta = new Int8Array(ab, 0, 10); var seen_length = -1; ta.constructor = { [Symbol.species]: function (len) { seen_length = len; return new Int8Array(ab, 1, len); } }; print(-1, seen_length); print([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], ta); var tb = ta.slice(); print(10, seen_length); print([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], ta); print([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], tb); ``` ## assert log ``` ICE: Assertion 'new_typedarray_info.offset == 0' failed at /home/sakura/jerryscript/jerry-core/ecma/builtin-objects/typedarray/ecma-builtin-typedarray-prototype.c(ecma_builtin_typedarray_prototype_slice):1631. ```
Assertion 'new_typedarray_info.offset == 0' failed at jerry-core/ecma/builtin-objects/typedarray/ecma-builtin-typedarray-prototype.c(ecma_builtin_typedarray_prototype_slice):1631.
https://api.github.com/repos/jerryscript-project/jerryscript/issues/4888/comments
0
2021-12-09T14:18:26
2021-12-09T15:54:27Z
https://github.com/jerryscript-project/jerryscript/issues/4888
1,075,625,398
4,888
false
This is a GitHub Issue repo:jerryscript owner:jerryscript-project Title : Assertion 'new_typedarray_info.offset == 0' failed at jerry-core/ecma/builtin-objects/typedarray/ecma-builtin-typedarray-prototype.c(ecma_builtin_typedarray_prototype_slice):1631. Issue date: --- start body --- ## JerryScript commit hash 55acdf2048b390d0f56f12e64dbfb2559f0e70ad ## Build platform Ubuntu 20.04 LTS ## Build steps ``` ./tools/build.py --clean --debug --compile-flag=-fsanitize=address \ --compile-flag=-m32 --compile-flag=-fno-omit-frame-pointer \ --compile-flag=-fno-common --compile-flag=-g \ --strip=off --system-allocator=on --logging=on \ --linker-flag=-fuse-ld=gold --error-messages=on --line-info=ON \ --stack-limit=10 ``` ## poc ``` var ab = new Int8Array(20).map((v, i) => i).buffer; var ta = new Int8Array(ab, 0, 10); var seen_length = -1; ta.constructor = { [Symbol.species]: function (len) { seen_length = len; return new Int8Array(ab, 1, len); } }; print(-1, seen_length); print([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], ta); var tb = ta.slice(); print(10, seen_length); print([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], ta); print([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ], tb); ``` ## assert log ``` ICE: Assertion 'new_typedarray_info.offset == 0' failed at /home/sakura/jerryscript/jerry-core/ecma/builtin-objects/typedarray/ecma-builtin-typedarray-prototype.c(ecma_builtin_typedarray_prototype_slice):1631. ``` --- end body ---
1,630
[ 0.006611850578337908, -0.011169757694005966, -0.011808249168097973, -0.026616640388965607, 0.013962197117507458, -0.014062201604247093, -0.00011112303036497906, 0.038186416029930115, -0.015600736252963543, 0.016631552949547768, -0.018339326605200768, 0.0150391710922122, 0.01723158173263073, ...
null
null
null
null
null
null
null
null
null
[ "WebAssembly", "wabt" ]
Instead of merely emitting data segments as array initializers, it would be neat if we could (optionally) use `#embed` too. Too bad `offset(...)` isn't standard so we'll need to emit separate files for each data segment.
wasm2c: Optionally support #embed
https://api.github.com/repos/WebAssembly/wabt/issues/2325/comments
7
2023-11-11T12:36:28
2023-11-12T07:06:48Z
https://github.com/WebAssembly/wabt/issues/2325
1,988,929,525
2,325
false
This is a GitHub Issue repo:wabt owner:WebAssembly Title : wasm2c: Optionally support #embed Issue date: --- start body --- Instead of merely emitting data segments as array initializers, it would be neat if we could (optionally) use `#embed` too. Too bad `offset(...)` isn't standard so we'll need to emit separate files for each data segment. --- end body ---
366
[ -0.012270249426364899, 0.020557859912514687, -0.027591943740844727, -0.015543747693300247, 0.03203301504254341, 0.006428808439522982, 0.004527027253061533, 0.0504276417195797, -0.035070132464170456, 0.007528331596404314, 0.017864566296339035, 0.025815514847636223, 0.010737363249063492, 0.0...
null
null
null
null
null
null
null
null
null
[ "jerryscript-project", "jerryscript" ]
## JerryScript commit hash 55acdf2048b390d0f56f12e64dbfb2559f0e70ad ## Build platform Ubuntu 20.04 LTS ## Build steps ``` ./tools/build.py --clean --debug --compile-flag=-fsanitize=address --compile-flag=-m32 --lto=off --logging=on --line-info=on --error-message=on --system-allocator=on --profile=es2015-subset --stack-limit=20 ``` ## poc ``` function assertSyntaxError(code) { try { eval(code); throw new Error('Should throw SyntaxError, but executed code without throwing'); } catch (e) { if (!e instanceof SyntaxError) throw new Error('Should throw SyntaxError, but threw ' + e); } } assertSyntaxError('let C = class { #constructor() {} }'); assertSyntaxError('let C = class { static #constructor() {} }'); assertSyntaxError('class C { #constructor() {} }'); assertSyntaxError('class C { static #constructor() {} }'); assertSyntaxError('let C = class { get #constructor() {} }'); assertSyntaxError('let C = class { static get #constructor() {} }'); assertSyntaxError('class C { get #constructor() {} }'); assertSyntaxError('class C { static get #constructor() {} }'); assertSyntaxError('let C = class { set #constructor(v) {} }'); assertSyntaxError('let C = class { static set #constructor(v) {} }'); assertSyntaxError('class C { set #constructor(v) {} }'); assertSyntaxError('class C { static set #constructor(v) {} }'); assertSyntaxError('let C = class { #constructor; }'); assertSyntaxError('let C = class { static #constructor; }'); assertSyntaxError('class C { #constructor; }'); assertSyntaxError('class C { static #constructor; }'); ``` ## assert log ``` ICE: Assertion '!is_static' failed at /home/sakura/jerryscript/jerry-core/parser/js/js-parser-expr.c(parser_parse_class_body):729. Error: ERR_FAILED_INTERNAL_ASSERTION [1] 4022294 abort ~/jerryscript/build2/bin/jerry fuzz_output/fuzzer1/.cur_input ```
Assertion '!is_static' failed at jerry-core/parser/js/js-parser-expr.c(parser_parse_class_body):729
https://api.github.com/repos/jerryscript-project/jerryscript/issues/4874/comments
4
2021-12-09T08:38:09
2021-12-15T09:32:12Z
https://github.com/jerryscript-project/jerryscript/issues/4874
1,075,298,781
4,874
false
This is a GitHub Issue repo:jerryscript owner:jerryscript-project Title : Assertion '!is_static' failed at jerry-core/parser/js/js-parser-expr.c(parser_parse_class_body):729 Issue date: --- start body --- ## JerryScript commit hash 55acdf2048b390d0f56f12e64dbfb2559f0e70ad ## Build platform Ubuntu 20.04 LTS ## Build steps ``` ./tools/build.py --clean --debug --compile-flag=-fsanitize=address --compile-flag=-m32 --lto=off --logging=on --line-info=on --error-message=on --system-allocator=on --profile=es2015-subset --stack-limit=20 ``` ## poc ``` function assertSyntaxError(code) { try { eval(code); throw new Error('Should throw SyntaxError, but executed code without throwing'); } catch (e) { if (!e instanceof SyntaxError) throw new Error('Should throw SyntaxError, but threw ' + e); } } assertSyntaxError('let C = class { #constructor() {} }'); assertSyntaxError('let C = class { static #constructor() {} }'); assertSyntaxError('class C { #constructor() {} }'); assertSyntaxError('class C { static #constructor() {} }'); assertSyntaxError('let C = class { get #constructor() {} }'); assertSyntaxError('let C = class { static get #constructor() {} }'); assertSyntaxError('class C { get #constructor() {} }'); assertSyntaxError('class C { static get #constructor() {} }'); assertSyntaxError('let C = class { set #constructor(v) {} }'); assertSyntaxError('let C = class { static set #constructor(v) {} }'); assertSyntaxError('class C { set #constructor(v) {} }'); assertSyntaxError('class C { static set #constructor(v) {} }'); assertSyntaxError('let C = class { #constructor; }'); assertSyntaxError('let C = class { static #constructor; }'); assertSyntaxError('class C { #constructor; }'); assertSyntaxError('class C { static #constructor; }'); ``` ## assert log ``` ICE: Assertion '!is_static' failed at /home/sakura/jerryscript/jerry-core/parser/js/js-parser-expr.c(parser_parse_class_body):729. 
Error: ERR_FAILED_INTERNAL_ASSERTION [1] 4022294 abort ~/jerryscript/build2/bin/jerry fuzz_output/fuzzer1/.cur_input ``` --- end body ---
2,146
[ -0.005126710515469313, 0.01451354380697012, -0.007311388850212097, -0.003207835368812084, 0.03058549016714096, -0.0008511141058988869, -0.023667342960834503, 0.019632970914244652, -0.03277016803622246, 0.035391781479120255, 0.0007805671775713563, -0.0036047184839844704, 0.019924262538552284,...
null
null
null
null
null
null
null
null
null
[ "LibreDWG", "libredwg" ]
### system info Ubuntu x86_64, clang 6.0, dwg2dxf([0.12.4.4608](https://github.com/LibreDWG/libredwg/releases/tag/0.12.4.4608)) ### Command line ./programs/dwg2dxf -b -m @@ -o /dev/null ### AddressSanitizer output ==8999==ERROR: AddressSanitizer: attempting free on address which was not malloc()-ed: 0x61a0000000e0 in thread T0 #0 0x4d23a0 in __interceptor_cfree.localalias.0 /fuzzer/build/llvm_tools/llvm-4.0.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:55 #1 0x50d77a in dwg_read_file /testcase/libredwg/src/dwg.c:258:7 #2 0x50c454 in main /testcase/libredwg/programs/dwg2dxf.c:258:15 #3 0x7ffff6e22c86 in __libc_start_main /build/glibc-CVJwZb/glibc-2.27/csu/../csu/libc-start.c:310 #4 0x419ee9 in _start (/testcase/libredwg/programs/dwg2dxf+0x419ee9) 0x61a0000000e0 is located 96 bytes inside of 1309-byte region [0x61a000000080,0x61a00000059d) allocated by thread T0 here: #0 0x4d2750 in calloc /fuzzer/build/llvm_tools/llvm-4.0.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:74 #1 0x50cdd0 in dat_read_file /testcase/libredwg/src/dwg.c:91:33 #2 0x50d708 in dwg_read_file /testcase/libredwg/src/dwg.c:247:15 #3 0x50c454 in main /testcase/libredwg/programs/dwg2dxf.c:258:15 #4 0x7ffff6e22c86 in __libc_start_main /build/glibc-CVJwZb/glibc-2.27/csu/../csu/libc-start.c:310 SUMMARY: AddressSanitizer: bad-free /fuzzer/build/llvm_tools/llvm-4.0.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:55 in __interceptor_cfree.localalias.0 ==8999==ABORTING ### poc https://gitee.com/cxlzff/fuzz-poc/raw/master/libredwg/dwg_read_file_baffree
bad-free exists in the function dwg_read_file in dwg.c
https://api.github.com/repos/LibreDWG/libredwg/issues/491/comments
3
2022-06-07T01:41:57
2022-12-15T09:10:27Z
https://github.com/LibreDWG/libredwg/issues/491
1,262,615,064
491
false
This is a GitHub Issue repo:libredwg owner:LibreDWG Title : bad-free exists in the function dwg_read_file in dwg.c Issue date: --- start body --- ### system info Ubuntu x86_64, clang 6.0, dwg2dxf([0.12.4.4608](https://github.com/LibreDWG/libredwg/releases/tag/0.12.4.4608)) ### Command line ./programs/dwg2dxf -b -m @@ -o /dev/null ### AddressSanitizer output ==8999==ERROR: AddressSanitizer: attempting free on address which was not malloc()-ed: 0x61a0000000e0 in thread T0 #0 0x4d23a0 in __interceptor_cfree.localalias.0 /fuzzer/build/llvm_tools/llvm-4.0.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:55 #1 0x50d77a in dwg_read_file /testcase/libredwg/src/dwg.c:258:7 #2 0x50c454 in main /testcase/libredwg/programs/dwg2dxf.c:258:15 #3 0x7ffff6e22c86 in __libc_start_main /build/glibc-CVJwZb/glibc-2.27/csu/../csu/libc-start.c:310 #4 0x419ee9 in _start (/testcase/libredwg/programs/dwg2dxf+0x419ee9) 0x61a0000000e0 is located 96 bytes inside of 1309-byte region [0x61a000000080,0x61a00000059d) allocated by thread T0 here: #0 0x4d2750 in calloc /fuzzer/build/llvm_tools/llvm-4.0.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:74 #1 0x50cdd0 in dat_read_file /testcase/libredwg/src/dwg.c:91:33 #2 0x50d708 in dwg_read_file /testcase/libredwg/src/dwg.c:247:15 #3 0x50c454 in main /testcase/libredwg/programs/dwg2dxf.c:258:15 #4 0x7ffff6e22c86 in __libc_start_main /build/glibc-CVJwZb/glibc-2.27/csu/../csu/libc-start.c:310 SUMMARY: AddressSanitizer: bad-free /fuzzer/build/llvm_tools/llvm-4.0.0.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:55 in __interceptor_cfree.localalias.0 ==8999==ABORTING ### poc https://gitee.com/cxlzff/fuzz-poc/raw/master/libredwg/dwg_read_file_baffree --- end body ---
1,804
[ -0.04227988049387932, 0.012555411085486412, 0.009255865588784218, 0.008984474465250969, 0.06027739867568016, 0.032995447516441345, -0.009812932461500168, 0.023182516917586327, -0.035652223974466324, -0.0002461714029777795, -0.017111925408244133, 0.0029067418072372675, 0.021268494427204132, ...
null
null
null
null
null
null
null
null
null
[ "schollz", "croc" ]
<img width="383" alt="image" src="https://github.com/schollz/croc/assets/148059/c241f744-2eda-4946-9abe-161ca0fb5af0">
windows defender antivirus reports trojan in linux 64 bit binary download v9.6.11
https://api.github.com/repos/schollz/croc/issues/666/comments
3
2024-02-19T21:51:30
2024-04-16T07:03:56Z
https://github.com/schollz/croc/issues/666
2,143,197,005
666
false
This is a GitHub Issue repo:croc owner:schollz Title : windows defender antivirus reports trojan in linux 64 bit binary download v9.6.11 Issue date: --- start body --- <img width="383" alt="image" src="https://github.com/schollz/croc/assets/148059/c241f744-2eda-4946-9abe-161ca0fb5af0"> --- end body ---
423
[ -0.04505786672234535, -0.0131901940330863, -0.0073854043148458, 0.001649637008085847, 0.018442803993821144, 0.02020977810025215, 0.02105185203254223, 0.03594690188765526, 0.04806724563241005, 0.06548851728439331, -0.0014382556546479464, -0.025980057194828987, 0.012838179245591164, -0.01504...
null
null
null
null
null
null
null
null
null
[ "strukturag", "libde265" ]
##### libde265 commit hash ``` 45904e5667c5bf59c67fcdc586dfba110832894c ``` ##### Build platform Ubuntu 18.04.2 LTS x86_64 gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04) ##### Build steps ``` libde265/$ mkdir build &&cd build libde265/build$ cmake ../ libde265/build$ make -j8 ``` [poc](https://drive.google.com/file/d/1CMI5ZOqOgMNkbXVNcRAE08gxJlsW6_W8/view?usp=sharing) ##### assert log ``` dec265: /home/joe1sn/Desktop/libde265/libde265/sps.cc:931: de265_error read_scaling_list(bitreader*, const seq_parameter_set*, scaling_list_data*, bool): Assertion `scaling_list_pred_matrix_id_delta==3' failed. Aborted ``` ##### gdb output ``` dec265: /home/joe1sn/Desktop/libde265/libde265/sps.cc:931: de265_error read_scaling_list(bitreader*, const seq_parameter_set*, scaling_list_data*, bool): Assertion `scaling_list_pred_matrix_id_delta==3' failed. Program received signal SIGABRT, Aborted. __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. 
(gdb) bt #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 #1 0x00007ffff728f7f1 in __GI_abort () at abort.c:79 #2 0x00007ffff727f3fa in __assert_fail_base (fmt=0x7ffff74066c0 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x7ffff7b885a0 "scaling_list_pred_matrix_id_delta==3", file=file@entry=0x7ffff7b87b50 "/home/joe1sn/Desktop/libde265/libde265/sps.cc", line=line@entry=931, function=function@entry=0x7ffff7b88860 <read_scaling_list(bitreader*, seq_parameter_set const*, scaling_list_data*, bool)::__PRETTY_FUNCTION__> "de265_error read_scaling_list(bitreader*, const seq_parameter_set*, scaling_list_data*, bool)") at assert.c:92 #3 0x00007ffff727f472 in __GI___assert_fail (assertion=0x7ffff7b885a0 "scaling_list_pred_matrix_id_delta==3", file=0x7ffff7b87b50 "/home/joe1sn/Desktop/libde265/libde265/sps.cc", line=931, function=0x7ffff7b88860 <read_scaling_list(bitreader*, seq_parameter_set const*, scaling_list_data*, bool)::__PRETTY_FUNCTION__> "de265_error read_scaling_list(bitreader*, const seq_parameter_set*, scaling_list_data*, bool)") at assert.c:101 #4 0x00007ffff7aec72c in read_scaling_list(bitreader*, seq_parameter_set const*, scaling_list_data*, bool) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #5 0x00007ffff7aea7fd in seq_parameter_set::read(error_queue*, bitreader*) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #6 0x00007ffff7aa1c9e in decoder_context::read_sps_NAL(bitreader&) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #7 0x00007ffff7aa360c in decoder_context::decode_NAL(NAL_unit*) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #8 0x00007ffff7aa38ef in decoder_context::decode(int*) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #9 0x00007ffff7a9864c in de265_decode () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #10 0x0000555555556e46 in main () ```
Assertion `scaling_list_pred_matrix_id_delta==3' failed at 'dec265: src/libde265/sps.cc:931:'
https://api.github.com/repos/strukturag/libde265/issues/313/comments
1
2022-04-08T11:29:49
2023-01-29T11:36:21Z
https://github.com/strukturag/libde265/issues/313
1,197,203,073
313
false
This is a GitHub Issue repo:libde265 owner:strukturag Title : Assertion `scaling_list_pred_matrix_id_delta==3' failed at 'dec265: src/libde265/sps.cc:931:' Issue date: --- start body --- ##### libde265 commit hash ``` 45904e5667c5bf59c67fcdc586dfba110832894c ``` ##### Build platform Ubuntu 18.04.2 LTS x86_64 gcc version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04) ##### Build steps ``` libde265/$ mkdir build &&cd build libde265/build$ cmake ../ libde265/build$ make -j8 ``` [poc](https://drive.google.com/file/d/1CMI5ZOqOgMNkbXVNcRAE08gxJlsW6_W8/view?usp=sharing) ##### assert log ``` dec265: /home/joe1sn/Desktop/libde265/libde265/sps.cc:931: de265_error read_scaling_list(bitreader*, const seq_parameter_set*, scaling_list_data*, bool): Assertion `scaling_list_pred_matrix_id_delta==3' failed. Aborted ``` ##### gdb output ``` dec265: /home/joe1sn/Desktop/libde265/libde265/sps.cc:931: de265_error read_scaling_list(bitreader*, const seq_parameter_set*, scaling_list_data*, bool): Assertion `scaling_list_pred_matrix_id_delta==3' failed. Program received signal SIGABRT, Aborted. __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 51 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory. 
(gdb) bt #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 #1 0x00007ffff728f7f1 in __GI_abort () at abort.c:79 #2 0x00007ffff727f3fa in __assert_fail_base (fmt=0x7ffff74066c0 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x7ffff7b885a0 "scaling_list_pred_matrix_id_delta==3", file=file@entry=0x7ffff7b87b50 "/home/joe1sn/Desktop/libde265/libde265/sps.cc", line=line@entry=931, function=function@entry=0x7ffff7b88860 <read_scaling_list(bitreader*, seq_parameter_set const*, scaling_list_data*, bool)::__PRETTY_FUNCTION__> "de265_error read_scaling_list(bitreader*, const seq_parameter_set*, scaling_list_data*, bool)") at assert.c:92 #3 0x00007ffff727f472 in __GI___assert_fail (assertion=0x7ffff7b885a0 "scaling_list_pred_matrix_id_delta==3", file=0x7ffff7b87b50 "/home/joe1sn/Desktop/libde265/libde265/sps.cc", line=931, function=0x7ffff7b88860 <read_scaling_list(bitreader*, seq_parameter_set const*, scaling_list_data*, bool)::__PRETTY_FUNCTION__> "de265_error read_scaling_list(bitreader*, const seq_parameter_set*, scaling_list_data*, bool)") at assert.c:101 #4 0x00007ffff7aec72c in read_scaling_list(bitreader*, seq_parameter_set const*, scaling_list_data*, bool) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #5 0x00007ffff7aea7fd in seq_parameter_set::read(error_queue*, bitreader*) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #6 0x00007ffff7aa1c9e in decoder_context::read_sps_NAL(bitreader&) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #7 0x00007ffff7aa360c in decoder_context::decode_NAL(NAL_unit*) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #8 0x00007ffff7aa38ef in decoder_context::decode(int*) () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #9 0x00007ffff7a9864c in de265_decode () from /home/joe1sn/Desktop/libde265/build/libde265/liblibde265.so #10 0x0000555555556e46 in main () ``` --- end body ---
3,301
[ 0.001583218458108604, 0.04182342812418938, -0.012379677034914494, 0.009011203423142433, 0.06545280665159225, 0.004884644411504269, -0.00867507141083479, 0.05006224289536476, -0.01438931841403246, 0.027434108778834343, -0.014124704524874687, -0.016463326290249825, 0.03189679980278015, 0.011...
null
null
null
null
null
null
null
null
null
[ "axiomatic-systems", "Bento4" ]
Hello Bento4 Support, When I package a single resolution video it stores the output inside "output/video/avc1/<here>". But when I package a video with multiple resolutions, it stores the output inside "output/video/avc1/1/<here>; output/video/avc1/2/<here>; so on ..". Q. If I want to store Single resolution Packaging inside "**output/video/avc1/1**/<here>", then what to do ?? Please suggest. Thanks Tanmaya Patra
Dedicated directory in case Single resolution Dash Packaging
https://api.github.com/repos/axiomatic-systems/Bento4/issues/827/comments
2
2023-01-24T09:17:59
2023-02-05T05:51:44Z
https://github.com/axiomatic-systems/Bento4/issues/827
1,554,601,339
827
false
This is a GitHub Issue repo:Bento4 owner:axiomatic-systems Title : Dedicated directory in case Single resolution Dash Packaging Issue date: --- start body --- Hello Bento4 Support, When I package a single resolution video it stores the output inside "output/video/avc1/<here>". But when I package a video with multiple resolutions, it stores the output inside "output/video/avc1/1/<here>; output/video/avc1/2/<here>; so on ..". Q. If I want to store Single resolution Packaging inside "**output/video/avc1/1**/<here>", then what to do ?? Please suggest. Thanks Tanmaya Patra --- end body ---
605
[ -0.025491926819086075, 0.01792910508811474, -0.026355784386396408, 0.000131284847157076, 0.019526423886418343, 0.02053697407245636, 0.017374932765960693, 0.06669627130031586, -0.03201160207390785, 0.01646217703819275, 0.0282627884298563, -0.033576324582099915, 0.010545573197305202, 0.00406...
null
null
null
null
null
null
null
null
null
[ "ImageMagick", "ImageMagick" ]
### ImageMagick version 7.1.1-28 ### Operating system Windows ### Operating system, version and so on Windows 11 ### Description There seems to be a bug in `-define gradient:direction=South gradient: ...`. ### Steps to Reproduce ``` magick -size 200x300 -define gradient:direction=South gradient:Red-Blue x.png ``` The result, x.png, is shown below. The expected result is a smooth gradient from red at the top (y=0) to blue at the bottom (y=299). In fact, the image is blue at y=199. The problem seems to be in paint.c, function GradientImage(), which includes: ``` case SouthGravity: { gradient->gradient_vector.x1=0.0; gradient->gradient_vector.y1=0.0; gradient->gradient_vector.x2=0.0; gradient->gradient_vector.y2=(double) image->columns-1; break; } ``` The y2 line should be: ``` gradient->gradient_vector.y2=(double) image->rows-1; ``` A workaround is to use `-define gradient:direction=North` with the colours in the opposite direction. ### Images ![x](https://github.com/ImageMagick/ImageMagick/assets/33812368/d4627de6-9d4a-4f58-bb87-3e4c0df6d88c)
-define gradient:direction=South gradient: ...
https://api.github.com/repos/ImageMagick/ImageMagick/issues/7208/comments
1
2024-04-01T15:26:48
2024-04-01T19:42:25Z
https://github.com/ImageMagick/ImageMagick/issues/7208
2,218,499,308
7,208
false
This is a GitHub Issue repo:ImageMagick owner:ImageMagick Title : -define gradient:direction=South gradient: ... Issue date: --- start body --- ### ImageMagick version 7.1.1-28 ### Operating system Windows ### Operating system, version and so on Windows 11 ### Description There seems to be a bug in `-define gradient:direction=South gradient: ...`. ### Steps to Reproduce ``` magick -size 200x300 -define gradient:direction=South gradient:Red-Blue x.png ``` The result, x.png, is shown below. The expected result is a smooth gradient from red at the top (y=0) to blue at the bottom (y=299). In fact, the image is blue at y=199. The problem seems to be in paint.c, function GradientImage(), which includes: ``` case SouthGravity: { gradient->gradient_vector.x1=0.0; gradient->gradient_vector.y1=0.0; gradient->gradient_vector.x2=0.0; gradient->gradient_vector.y2=(double) image->columns-1; break; } ``` The y2 line should be: ``` gradient->gradient_vector.y2=(double) image->rows-1; ``` A workaround is to use `-define gradient:direction=North` with the colours in the opposite direction. ### Images ![x](https://github.com/ImageMagick/ImageMagick/assets/33812368/d4627de6-9d4a-4f58-bb87-3e4c0df6d88c) --- end body ---
1,373
[ -0.015863245353102684, 0.03025871515274048, -0.0019370376830920577, 0.029157886281609535, 0.0029231980443000793, 0.026843320578336716, -0.0005177782732062042, 0.03373056650161743, -0.04479531943798065, 0.007945735938847065, -0.005130150821059942, 0.0016256649978458881, 0.0539124496281147, ...
null
null
null
null
null
null
null
null
null
[ "Piwigo", "Piwigo" ]
Leaving this issue here as the Piwigo Android looks dead on here, but basically, when using the Piwigo NG app, it prompts for which folder to use, but doesn't download any images. Screenshots below. Select the photo, and press Download Photo. ![Screenshot_2024-01-25-18-39-58-48_5257cff2deadcc78e7726b89be5bcb32~2.jpg](https://github.com/Piwigo/Piwigo/assets/4142039/f5734aab-e9bb-4890-b20f-6f062a550622) Choose a folder, and press, use folder. ![Screenshot_2024-01-25-18-40-02-21_5734e8eb49b4234b62f913f831715b0f.jpg](https://github.com/Piwigo/Piwigo/assets/4142039/ace684a3-4f44-47bc-93b4-ef2ae90700ac) Accept the permissions to use the folder. ![Screenshot_2024-01-25-18-40-05-51_5734e8eb49b4234b62f913f831715b0f.jpg](https://github.com/Piwigo/Piwigo/assets/4142039/9cfb0814-bc92-4581-ab78-ad4ad676caaa) Then the prompt goes away, but when looking in that same folder using a file explorer, there is nothing available. ![Screenshot_2024-01-25-18-40-15-22_7879f62e5a8be886cd1ee2c2e1b78460.jpg](https://github.com/Piwigo/Piwigo/assets/4142039/aec67693-eeb6-4453-a4c3-ef02f60487a0)
Download photo on Android Piwigo NG does nothing
https://api.github.com/repos/Piwigo/Piwigo/issues/2098/comments
0
2024-01-25T18:51:20
2024-01-25T18:51:20Z
https://github.com/Piwigo/Piwigo/issues/2098
2,100,979,990
2,098
false
This is a GitHub Issue repo:Piwigo owner:Piwigo Title : Download photo on Android Piwigo NG does nothing Issue date: --- start body --- Leaving this issue here as the Piwigo Android looks dead on here, but basically, when using the Piwigo NG app, it prompts for which folder to use, but doesn't download any images. Screenshots below. Select the photo, and press Download Photo. ![Screenshot_2024-01-25-18-39-58-48_5257cff2deadcc78e7726b89be5bcb32~2.jpg](https://github.com/Piwigo/Piwigo/assets/4142039/f5734aab-e9bb-4890-b20f-6f062a550622) Choose a folder, and press, use folder. ![Screenshot_2024-01-25-18-40-02-21_5734e8eb49b4234b62f913f831715b0f.jpg](https://github.com/Piwigo/Piwigo/assets/4142039/ace684a3-4f44-47bc-93b4-ef2ae90700ac) Accept the permissions to use the folder. ![Screenshot_2024-01-25-18-40-05-51_5734e8eb49b4234b62f913f831715b0f.jpg](https://github.com/Piwigo/Piwigo/assets/4142039/9cfb0814-bc92-4581-ab78-ad4ad676caaa) Then the prompt goes away, but when looking in that same folder using a file explorer, there is nothing available. ![Screenshot_2024-01-25-18-40-15-22_7879f62e5a8be886cd1ee2c2e1b78460.jpg](https://github.com/Piwigo/Piwigo/assets/4142039/aec67693-eeb6-4453-a4c3-ef02f60487a0) --- end body ---
1,247
[ -0.03336535394191742, 0.010968414135277271, -0.012059318833053112, -0.011406260542571545, 0.01809268817305565, -0.00596286915242672, 0.04601094126701355, 0.0413207933306694, 0.00217995373532176, 0.03731338679790497, -0.0009220740757882595, -0.013246698305010796, -0.004860833287239075, 0.00...
null
null
null
null
null
null
null
null
null
[ "Piwigo", "Piwigo" ]
Piwigo should integrate routines for automatically recognising faces, animals, plants, etc. and for classifying photos with recognised objects. A user may enable automatic recognition and then name or confirm recognised objects linked to multiple images. The recognised objects could also be shared with other users. API methods should be provided to exploit/update recognised objects with third party apps (e.g. iOS and Android apps) but also to contribute to the recognition of new or known objects.
Automatic recognition and classification of faces, animals, objects
https://api.github.com/repos/Piwigo/Piwigo/issues/2059/comments
1
2023-12-10T11:09:14
2024-06-07T12:14:24Z
https://github.com/Piwigo/Piwigo/issues/2059
2,034,324,192
2,059
false
This is a GitHub Issue repo:Piwigo owner:Piwigo Title : Automatic recognition and classification of faces, animals, objects Issue date: --- start body --- Piwigo should integrate routines for automatically recognising faces, animals, plants, etc. and for classifying photos with recognised objects. A user may enable automatic recognition and then name or confirm recognised objects linked to multiple images. The recognised objects could also be shared with other users. API methods should be provided to exploit/update recognised objects with third party apps (e.g. iOS and Android apps) but also to contribute to the recognition of new or known objects. --- end body ---
678
[ -0.042857613414525986, -0.0020515036303550005, -0.008292016573250294, -0.0019350427901372313, 0.01393947470933199, -0.009223704226315022, 0.0321933776140213, 0.03546145185828209, 0.016412029042840004, 0.04122357815504074, -0.01489266287535429, -0.03494543954730034, 0.013903641141951084, 0....
null
null
null
null
null
null
null
null
null
[ "WebAssembly", "wabt" ]
The function-references and gc proposals are going to be merged into the spec soon (as well as the "v4" revision of the exception-handling proposal, which we don't support yet). It could be good to talk about our plan for supporting these features in WABT. I think there's a few options to put on the table: - We work on the current best-effort basis to update our exception-handling support and finish the implementation of function-references and gc (in the binary/text readers/writers, the interpreter, and wasm2c). I'm not sure who has a ton of spare cycles for this, but if there is continued interest in WABT, I and other contributors will probably eventually make it happen. Could take a long time though, and I'm a little nervous about losing community trust/interest if we allow WABT to fall behind the living spec for too long. - We try to (re-)increase investment of engineering resources in WABT by some of the companies who use it. Based on conversations at the CG meeting, this seems like a bit of a long shot, but maybe I misread the situation or there is something these companies would want from us that we could provide. (E.g. how to make a persuasive case for Google and Mozilla to devote more engineering resources or to find a way to pay Igalia or others to start contributing more again.) - We deprecate most of the WABT tools except for wasm2c, which we'd port to run in binaryen. An upside is that binaryen attracts a lot of investment/velocity and seems to be Google's focus right now in terms of non-browser Wasm tools, and wasm2c would get to ride its coattails; downside is that binaryen isn't really intended (as I understand it) as a spec-conforming implementation of a Wasm binary/text reader/writer + validator/interpreter, given its IR that is slightly different from Wasm. (And, smaller point, binaryen currently seems to be much slower than WABT at some operations like a wasm-merge.) So the community would lose that part of WABT's benefits. 
- We replace the internals of WABT with the Bytecode Alliance's Rust wasm-tools project, and port wasm2c to run in wasm-tools. Upside is that this is also a heavily depended-on building block from the BA folks and a newer Rust codebase that also passes the spec tests, and it might be worthwhile to combine our efforts (aiui there are minor differences, e.g. I think wasm-tools can't currently write the folded text format which seems minor and probably addressable); downside is that this would be a pretty avulsive change (moving to Rust among other things) and would really need to have buy-in, might lose a lot of the current contributors, and would take significant investment to pull off. (Edit: and I also don't know how performance compares with WABT on huge modules.) I don't think anything is forcing us to make a decision here, but tentatively I'd rather make one than sort of avoid it.
Plan for GC and beyond
https://api.github.com/repos/WebAssembly/wabt/issues/2348/comments
9
2023-12-05T02:03:52
2024-03-28T21:33:34Z
https://github.com/WebAssembly/wabt/issues/2348
2,025,130,969
2,348
false
This is a GitHub Issue repo:wabt owner:WebAssembly Title : Plan for GC and beyond Issue date: --- start body --- The function-references and gc proposals are going to be merged into the spec soon (as well as the "v4" revision of the exception-handling proposal, which we don't support yet). It could be good to talk about our plan for supporting these features in WABT. I think there's a few options to put on the table: - We work on the current best-effort basis to update our exception-handling support and finish the implementation of function-references and gc (in the binary/text readers/writers, the interpreter, and wasm2c). I'm not sure who has a ton of spare cycles for this, but if there is continued interest in WABT, I and other contributors will probably eventually make it happen. Could take a long time though, and I'm a little nervous about losing community trust/interest if we allow WABT to fall behind the living spec for too long. - We try to (re-)increase investment of engineering resources in WABT by some of the companies who use it. Based on conversations at the CG meeting, this seems like a bit of a long shot, but maybe I misread the situation or there is something these companies would want from us that we could provide. (E.g. how to make a persuasive case for Google and Mozilla to devote more engineering resources or to find a way to pay Igalia or others to start contributing more again.) - We deprecate most of the WABT tools except for wasm2c, which we'd port to run in binaryen. An upside is that binaryen attracts a lot of investment/velocity and seems to be Google's focus right now in terms of non-browser Wasm tools, and wasm2c would get to ride its coattails; downside is that binaryen isn't really intended (as I understand it) as a spec-conforming implementation of a Wasm binary/text reader/writer + validator/interpreter, given its IR that is slightly different from Wasm. 
(And, smaller point, binaryen currently seems to be much slower than WABT at some operations like a wasm-merge.) So the community would lose that part of WABT's benefits. - We replace the internals of WABT with the Bytecode Alliance's Rust wasm-tools project, and port wasm2c to run in wasm-tools. Upside is that this is also a heavily depended-on building block from the BA folks and a newer Rust codebase that also passes the spec tests, and it might be worthwhile to combine our efforts (aiui there are minor differences, e.g. I think wasm-tools can't currently write the folded text format which seems minor and probably addressable); downside is that this would be a pretty avulsive change (moving to Rust among other things) and would really need to have buy-in, might lose a lot of the current contributors, and would take significant investment to pull off. (Edit: and I also don't know how performance compares with WABT on huge modules.) I don't think anything is forcing us to make a decision here, but tentatively I'd rather make one than sort of avoid it. --- end body ---
3,016
[ -0.023830849677324295, -0.02298600971698761, -0.019399426877498627, -0.0013100989162921906, -0.0014615324325859547, 0.021567316725850105, -0.014091284945607185, 0.03921330347657204, -0.01960665173828602, 0.028612958267331123, 0.034718118607997894, 0.03905389830470085, 0.03318784385919571, ...
null
null
null
null
null
null
null
null
null
[ "libming", "libming" ]
Hi, i find allocation-size-too-big error in swftocxx . I saved my test files [here](https://github.com/WorldExecute/files/tree/main/libming/swftocxx/Allocation-size-too-big). ## Bug Description I apply ASan (Address Sanitizer ) to check for address errors and the error report is as follows. ``` test_1: header indicates a filesize of 117920368 but filesize is 880 CharacterEndFlag in DefineButton2 != 0parseSWF_BUTTONCONDACTION: expected actionEnd flag Stream out of sync after parse of blocktype 34 (SWF_DEFINEBUTTON2). 513 but expecting 55. ================================================================= ==229354==ERROR: AddressSanitizer: requested allocation size 0xffffffff8c01020f (0xffffffff8c011210 after adjustments for alignment, red zones etc.) exceeds maximum supported size of 0x10000000000 (thread T0) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: Failed to use and restart external symbolizer! #0 0x494bcd (./libming/install-asan/bin/swftocxx+0x494bcd) #1 0x4fec2c (./libming/install-asan/bin/swftocxx+0x4fec2c) ==229354==HINT: if you don't care about these errors you may set allocator_may_return_null=1 SUMMARY: AddressSanitizer: allocation-size-too-big (./libming/install-asan/bin/swftocxx+0x494bcd) ==229354==ABORTING ``` ## Steps to Reproduce 1. Download the libming source code with the official link and build it with ASan (-fsanitize=address) 2. Executing swftocxx with the provided input files
allocation-size-too-big in swftocxx (Version 0.4.9)
https://api.github.com/repos/libming/libming/issues/246/comments
0
2022-08-14T06:07:31
2022-09-04T07:46:02Z
https://github.com/libming/libming/issues/246
1,338,145,157
246
false
This is a GitHub Issue repo:libming owner:libming Title : allocation-size-too-big in swftocxx (Version 0.4.9) Issue date: --- start body --- Hi, i find allocation-size-too-big error in swftocxx . I saved my test files [here](https://github.com/WorldExecute/files/tree/main/libming/swftocxx/Allocation-size-too-big). ## Bug Description I apply ASan (Address Sanitizer ) to check for address errors and the error report is as follows. ``` test_1: header indicates a filesize of 117920368 but filesize is 880 CharacterEndFlag in DefineButton2 != 0parseSWF_BUTTONCONDACTION: expected actionEnd flag Stream out of sync after parse of blocktype 34 (SWF_DEFINEBUTTON2). 513 but expecting 55. ================================================================= ==229354==ERROR: AddressSanitizer: requested allocation size 0xffffffff8c01020f (0xffffffff8c011210 after adjustments for alignment, red zones etc.) exceeds maximum supported size of 0x10000000000 (thread T0) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: failed to fork (errno 12) ==229354==WARNING: Failed to use and restart external symbolizer! #0 0x494bcd (./libming/install-asan/bin/swftocxx+0x494bcd) #1 0x4fec2c (./libming/install-asan/bin/swftocxx+0x4fec2c) ==229354==HINT: if you don't care about these errors you may set allocator_may_return_null=1 SUMMARY: AddressSanitizer: allocation-size-too-big (./libming/install-asan/bin/swftocxx+0x494bcd) ==229354==ABORTING ``` ## Steps to Reproduce 1. Download the libming source code with the official link and build it with ASan (-fsanitize=address) 2. Executing swftocxx with the provided input files --- end body ---
1,819
[ -0.03759874030947685, 0.018814945593476295, -0.005268340464681387, 0.007468349300324917, 0.07507287710905075, 0.0194223802536726, -0.03298845514655113, 0.046788156032562256, -0.0392497181892395, 0.04143025726079941, 0.010123935528099537, 0.0016723963199183345, 0.007367109879851341, 0.01584...
null
null
null
null
null
null
null
null
null
[ "schollz", "croc" ]
https://github.com/schollz/croc/pull/660 by @qk-santi added a `--transfers` flag to specify the number of ports for croc send. Croc relay still uses manual ports though. It would be great if relay could use the --transfers flag as well. croc 10.0.2 relay -h, showing old --ports flag and no new --transfers flag: ![image](https://github.com/schollz/croc/assets/692914/f8f7cc90-6bef-4cef-8ab7-2c468f25925b)
Extend "--transfers" flag to relay in addition to send
https://api.github.com/repos/schollz/croc/issues/711/comments
2
2024-05-24T17:28:16
2024-07-28T15:18:52Z
https://github.com/schollz/croc/issues/711
2,315,893,491
711
false
This is a GitHub Issue repo:croc owner:schollz Title : Extend "--transfers" flag to relay in addition to send Issue date: --- start body --- https://github.com/schollz/croc/pull/660 by @qk-santi added a `--transfers` flag to specify the number of ports for croc send. Croc relay still uses manual ports though. It would be great if relay could use the --transfers flag as well. croc 10.0.2 relay -h, showing old --ports flag and no new --transfers flag: ![image](https://github.com/schollz/croc/assets/692914/f8f7cc90-6bef-4cef-8ab7-2c468f25925b) --- end body ---
571
[ -0.0328059196472168, 0.016175562515854836, -0.026302343234419823, 0.0066892849281430244, -0.010778655298054218, 0.013765146024525166, 0.013469528406858444, 0.06882572919130325, -0.05818351358175278, 0.05114934220910072, 0.0015472524100914598, 0.013765146024525166, -0.005647045094519854, 0....
null
null
null
null
null
null
null
null
null
[ "ChurchCRM", "CRM" ]
**Description** Changes made in Family Editor don't save. - ChurchCRM version: 5.7.0 - PHP version the server is running: PHP 8.1 - DB Server and Version the server is running: MySQL
Bug: Save doesn't work in family editor
https://api.github.com/repos/ChurchCRM/CRM/issues/7025/comments
18
2024-05-13T21:17:19
2024-08-08T02:48:11Z
https://github.com/ChurchCRM/CRM/issues/7025
2,293,862,564
7,025
false
This is a GitHub Issue repo:CRM owner:ChurchCRM Title : Bug: Save doesn't work in family editor Issue date: --- start body --- **Description** Changes made in Family Editor don't save. - ChurchCRM version: 5.7.0 - PHP version the server is running: PHP 8.1 - DB Server and Version the server is running: MySQL --- end body ---
341
[ -0.031665828078985214, 0.029801471158862114, -0.007831713184714317, 0.04268248379230499, 0.0033597273286432028, 0.0394904799759388, -0.016863960772752762, 0.017160562798380852, 0.007520987186580896, 0.02954724058508873, 0.03406689316034317, 0.011694605462253094, -0.044716328382492065, -0.0...
null
null
null
null
null
null
null
null
null
[ "jerryscript-project", "jerryscript" ]
###### JerryScript revision cefd391772529c8a9531d7b3c244d78d38be47c6 ###### Build platform Ubuntu 22.04.3 ###### Build steps ```sh python ./tools/build.py --builddir=xxx --clean --compile-flag=-fsanitize=address --compile-flag=-g --strip=off --lto=off --logging=on --line-info=on --error-message=on --stack-limit=20 ``` ###### Test case ```sh import{a as "\{{91406,456}" ``` ###### Execution steps ```sh ./xxx/bin/jerry poc.js ``` ###### Output ```sh Release: ================================================================= ==2144424==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60300000005c at pc 0x559a32bf48a3 bp 0x7ffedf4f4450 sp 0x7ffedf4f4448 READ of size 1 at 0x60300000005c thread T0 #0 0x559a32bf48a2 in lexer_convert_ident_to_cesu8 /jerryscript/jerry-core/parser/js/js-lexer.c:2083:9 #1 0x559a32bf4e08 in lexer_convert_literal_to_chars /jerryscript/jerry-core/parser/js/js-lexer.c:2133:5 #2 0x559a32bf5d19 in lexer_construct_literal_object /jerryscript/jerry-core/parser/js/js-lexer.c:2367:5 #3 0x559a32b707db in scanner_check_variables /jerryscript/jerry-core/parser/js/js-scanner-util.c:2279:5 #4 0x559a32b67485 in parser_parse_source /jerryscript/jerry-core/parser/js/js-parser.c:2274:9 #5 0x559a32b65924 in parser_parse_script /jerryscript/jerry-core/parser/js/js-parser.c:3332:38 #6 0x559a32ac2f38 in jerry_parse_common /jerryscript/jerry-core/api/jerryscript.c:418:21 #7 0x559a32ac2d34 in jerry_parse /jerryscript/jerry-core/api/jerryscript.c:486:10 #8 0x559a32c2876f in jerryx_source_parse_script /jerryscript/jerry-ext/util/sources.c:52:26 #9 0x559a32c2892f in jerryx_source_exec_script /jerryscript/jerry-ext/util/sources.c:63:26 #10 0x559a32abe5b2 in main /jerryscript/jerry-main/main-desktop.c:156:20 #11 0x7f10bf46dd8f in __libc_start_call_main csu/../sysdeps/nptl/libc_start_call_main.h:58:16 #12 0x7f10bf46de3f in __libc_start_main csu/../csu/libc-start.c:392:3 #13 0x559a329fe424 in _start (/jerryscript/0323re/bin/jerry+0x41424) (BuildId: 
efa40b4121fb9ed9276f89fc661eef85c730ab65) 0x60300000005c is located 0 bytes to the right of 28-byte region [0x603000000040,0x60300000005c) allocated by thread T0 here: #0 0x559a32a83e4e in __interceptor_malloc (/jerryscript/0323re/bin/jerry+0xc6e4e) (BuildId: efa40b4121fb9ed9276f89fc661eef85c730ab65) #1 0x559a32c297f6 in jerry_port_source_read /jerryscript/jerry-port/common/jerry-port-fs.c:72:45 #2 0x559a32c2866d in jerryx_source_parse_script /jerryscript/jerry-ext/util/sources.c:33:28 #3 0x559a32c2892f in jerryx_source_exec_script /jerryscript/jerry-ext/util/sources.c:63:26 SUMMARY: AddressSanitizer: heap-buffer-overflow /jerryscript/jerry-core/parser/js/js-lexer.c:2083:9 in lexer_convert_ident_to_cesu8 Shadow bytes around the buggy address: 0x0c067fff7fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c067fff7fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c067fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c067fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c067fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 =>0x0c067fff8000: fa fa 00 00 00 fa fa fa 00 00 00[04]fa fa fa fa 0x0c067fff8010: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c067fff8020: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c067fff8030: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c067fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c067fff8050: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==2144424==ABORTING ``` ```sh Debug: ICE: Assertion 
'(byte >= LIT_CHAR_LOWERCASE_A && byte <= LIT_CHAR_LOWERCASE_F) || (byte >= LIT_CHAR_UPPERCASE_A && byte <= LIT_CHAR_UPPERCASE_F)' failed at /jerryscript/jerry-core/parser/js/js-lexer.c(lexer_unchecked_hex_to_character):178. Error: JERRY_FATAL_FAILED_ASSERTION Aborted ```
Heap-Buffer-Overflow in lexer_convert_ident_to_cesu8 /jerryscript/jerry-core/parser/js/js-lexer.c:2083:9
https://api.github.com/repos/jerryscript-project/jerryscript/issues/5134/comments
0
2024-03-26T08:47:52
2024-03-26T08:47:52Z
https://github.com/jerryscript-project/jerryscript/issues/5134
2,207,604,318
5,134
false
This is a GitHub Issue repo:jerryscript owner:jerryscript-project Title : Heap-Buffer-Overflow in lexer_convert_ident_to_cesu8 /jerryscript/jerry-core/parser/js/js-lexer.c:2083:9 Issue date: --- start body --- ###### JerryScript revision cefd391772529c8a9531d7b3c244d78d38be47c6 ###### Build platform Ubuntu 22.04.3 ###### Build steps ```sh python ./tools/build.py --builddir=xxx --clean --compile-flag=-fsanitize=address --compile-flag=-g --strip=off --lto=off --logging=on --line-info=on --error-message=on --stack-limit=20 ``` ###### Test case ```sh import{a as "\{{91406,456}" ``` ###### Execution steps ```sh ./xxx/bin/jerry poc.js ``` ###### Output ```sh Release: ================================================================= ==2144424==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60300000005c at pc 0x559a32bf48a3 bp 0x7ffedf4f4450 sp 0x7ffedf4f4448 READ of size 1 at 0x60300000005c thread T0 #0 0x559a32bf48a2 in lexer_convert_ident_to_cesu8 /jerryscript/jerry-core/parser/js/js-lexer.c:2083:9 #1 0x559a32bf4e08 in lexer_convert_literal_to_chars /jerryscript/jerry-core/parser/js/js-lexer.c:2133:5 #2 0x559a32bf5d19 in lexer_construct_literal_object /jerryscript/jerry-core/parser/js/js-lexer.c:2367:5 #3 0x559a32b707db in scanner_check_variables /jerryscript/jerry-core/parser/js/js-scanner-util.c:2279:5 #4 0x559a32b67485 in parser_parse_source /jerryscript/jerry-core/parser/js/js-parser.c:2274:9 #5 0x559a32b65924 in parser_parse_script /jerryscript/jerry-core/parser/js/js-parser.c:3332:38 #6 0x559a32ac2f38 in jerry_parse_common /jerryscript/jerry-core/api/jerryscript.c:418:21 #7 0x559a32ac2d34 in jerry_parse /jerryscript/jerry-core/api/jerryscript.c:486:10 #8 0x559a32c2876f in jerryx_source_parse_script /jerryscript/jerry-ext/util/sources.c:52:26 #9 0x559a32c2892f in jerryx_source_exec_script /jerryscript/jerry-ext/util/sources.c:63:26 #10 0x559a32abe5b2 in main /jerryscript/jerry-main/main-desktop.c:156:20 #11 0x7f10bf46dd8f in __libc_start_call_main 
csu/../sysdeps/nptl/libc_start_call_main.h:58:16 #12 0x7f10bf46de3f in __libc_start_main csu/../csu/libc-start.c:392:3 #13 0x559a329fe424 in _start (/jerryscript/0323re/bin/jerry+0x41424) (BuildId: efa40b4121fb9ed9276f89fc661eef85c730ab65) 0x60300000005c is located 0 bytes to the right of 28-byte region [0x603000000040,0x60300000005c) allocated by thread T0 here: #0 0x559a32a83e4e in __interceptor_malloc (/jerryscript/0323re/bin/jerry+0xc6e4e) (BuildId: efa40b4121fb9ed9276f89fc661eef85c730ab65) #1 0x559a32c297f6 in jerry_port_source_read /jerryscript/jerry-port/common/jerry-port-fs.c:72:45 #2 0x559a32c2866d in jerryx_source_parse_script /jerryscript/jerry-ext/util/sources.c:33:28 #3 0x559a32c2892f in jerryx_source_exec_script /jerryscript/jerry-ext/util/sources.c:63:26 SUMMARY: AddressSanitizer: heap-buffer-overflow /jerryscript/jerry-core/parser/js/js-lexer.c:2083:9 in lexer_convert_ident_to_cesu8 Shadow bytes around the buggy address: 0x0c067fff7fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c067fff7fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c067fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c067fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0x0c067fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 =>0x0c067fff8000: fa fa 00 00 00 fa fa fa 00 00 00[04]fa fa fa fa 0x0c067fff8010: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c067fff8020: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c067fff8030: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c067fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c067fff8050: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned 
by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==2144424==ABORTING ``` ```sh Debug: ICE: Assertion '(byte >= LIT_CHAR_LOWERCASE_A && byte <= LIT_CHAR_LOWERCASE_F) || (byte >= LIT_CHAR_UPPERCASE_A && byte <= LIT_CHAR_UPPERCASE_F)' failed at /jerryscript/jerry-core/parser/js/js-lexer.c(lexer_unchecked_hex_to_character):178. Error: JERRY_FATAL_FAILED_ASSERTION Aborted ``` --- end body ---
4,812
[ -0.00622946722432971, 0.015999583527445793, -0.0045332154259085655, -0.01105156447738409, 0.0447099469602108, 0.010873790830373764, 0.0052183824591338634, 0.03235471248626709, -0.03863603249192238, 0.016606975346803665, -0.010318249464035034, 0.006696121767163277, 0.003735087811946869, 0.0...
null
null
null
null
null
null
null
null
null
[ "gpac", "gpac" ]
This is the original mkv. <img width="776" alt="mkv" src="https://user-images.githubusercontent.com/40682277/204077854-bf062da3-2060-4e1e-a234-c6d401b4f1d8.png"> This is the result of remuxing mkv to mp4 on Windows. As you can see the Dolby Vision metadata is preserved. <img width="607" alt="mp4_windows" src="https://user-images.githubusercontent.com/40682277/204077870-76ebc93f-f341-4f58-b05e-66104f2c5c95.png"> This is what I tried on Linux. As you can see the Dolby Vision metadata is lost. <img width="325" alt="mp4_rinux" src="https://user-images.githubusercontent.com/40682277/204077885-803af644-77b0-466d-a182-3ecb23734316.png"> Tried with nightly builds on both Windows and Linux.
When remuxing MKV to MP4 on Linux, Dolby Vision metadata is not maintained.
https://api.github.com/repos/gpac/gpac/issues/2325/comments
10
2022-11-26T07:39:33
2022-11-28T14:29:29Z
https://github.com/gpac/gpac/issues/2325
1,465,078,528
2,325
false
This is a GitHub Issue repo:gpac owner:gpac Title : When trying to mkv to mp4 remux on linux, dolby vision metadata is not maintained. Issue date: --- start body --- This is the original mkv. <img width="776" alt="mkv" src="https://user-images.githubusercontent.com/40682277/204077854-bf062da3-2060-4e1e-a234-c6d401b4f1d8.png"> This is the result of remuxing mkv to mp4 on Windows. As you can see the Dolby Vision metadata is preserved. <img width="607" alt="mp4_windows" src="https://user-images.githubusercontent.com/40682277/204077870-76ebc93f-f341-4f58-b05e-66104f2c5c95.png"> This is what I tried on Linux. As you can see the Dolby Vision metadata is lost. <img width="325" alt="mp4_rinux" src="https://user-images.githubusercontent.com/40682277/204077885-803af644-77b0-466d-a182-3ecb23734316.png"> Tried with nightly builds on both Windows and Linux. --- end body ---
887
[ -0.004296874161809683, 0.05036136507987976, -0.02502339333295822, 0.022764137014746666, -0.008786785416305065, 0.03340265527367592, -0.02020460180938244, 0.041066963225603104, -0.0036820133682340384, 0.0251091867685318, 0.00495463190600276, 0.011668050661683083, -0.008036083541810513, 0.01...
null
null
null
null
null
null
null
null
null
[ "Piwigo", "Piwigo" ]
In Piwigo 14 we have introduced the feature "search in this set", which can be accessed by either a button or an icon. <img width="504" alt="piwigo-14-search-in-this-set" src="https://github.com/Piwigo/Piwigo/assets/9326959/67ad8d88-118f-4549-b547-e6483166eea4"> With a few months of use, I love this feature BUT I think the button is too obtrusive. We'd better hide it by default. I know it's going to make the feature quite invisible but we have to choose between _"highly visible but obtrusive"_ and _"nearly invisible but slickly integrated"_ (with only the icon)
hide button "search in this set" by default
https://api.github.com/repos/Piwigo/Piwigo/issues/2089/comments
0
2024-01-18T11:16:01
2024-01-18T11:19:48Z
https://github.com/Piwigo/Piwigo/issues/2089
2,088,068,048
2,089
false
This is a GitHub Issue repo:Piwigo owner:Piwigo Title : hide button "search in this set" by default Issue date: --- start body --- In Piwigo 14 we have introduced the feature "search in this set", which can be accessed by either a button or an icon. <img width="504" alt="piwigo-14-search-in-this-set" src="https://github.com/Piwigo/Piwigo/assets/9326959/67ad8d88-118f-4549-b547-e6483166eea4"> With a few months of use, I love this feature BUT I think the button is too obtrusive. We'd better hide it by default. I know it's going to make the feature quite invisible but we have to choose between _"highly visible but obtrusive"_ and _"nearly invisible but slickly integrated"_ (with only the icon) --- end body ---
725
[ -0.02084949053823948, -0.011247166432440281, -0.006308934185653925, 0.0013438437599688768, 0.04445520415902138, -0.04089878872036934, 0.02840687520802021, 0.0655566081404686, -0.00904663372784853, -0.009009587578475475, -0.015233316458761692, 0.015092541463673115, 0.004126924555748701, -0....
null
null
null
null
null
null
null
null
null
[ "Piwigo", "Piwigo" ]
Some plugin might unset template variable `PAGE_BANNER`, resulting in a warning: ``` Undefined array key "PAGE_BANNER" ```
[PHP 8.2] Undefined array key "PAGE_BANNER"
https://api.github.com/repos/Piwigo/Piwigo/issues/2156/comments
0
2024-04-24T14:39:51
2024-04-24T14:43:19Z
https://github.com/Piwigo/Piwigo/issues/2156
2,261,468,377
2,156
false
This is a GitHub Issue repo:Piwigo owner:Piwigo Title : [PHP 8.2] Undefined array key "PAGE_BANNER" Issue date: --- start body --- Some plugin might unset template variable `PAGE_BANNER`, resulting in a warning: ``` Undefined array key "PAGE_BANNER" ``` --- end body ---
277
[ -0.038909912109375, -0.02725098468363285, -0.003971761092543602, 0.023093102499842644, 0.03036939725279808, -0.023205477744340897, 0.006661741994321346, 0.051945436745882034, -0.033263057470321655, -0.023289760574698448, -0.006977797485888004, -0.0006097232690081, 0.006819769740104675, 0.0...
null
null
null
null
null
null
null
null
null
[ "jerryscript-project", "jerryscript" ]
jerryScript version 3.0.0: commit 05dbbd134c3b9e2482998f267857dd3722001cd7 Build platform: Ubuntu 20.04 Build cmd: ``` python tools/build.py --debug --profile=es.next --lto=off --compile-flag=-D_POSIX_C_SOURCE=200809 --compile-flag=-Wno-strict-prototypes --stack-limit=15 ``` Test case ```js function foo() { class Bar { static { var x = ` for (let i = 0, j = 10; i < j;) { } function baz() { return arguments; } class Proto { } `; eval(x); } } return foo; } new Promise(foo); ``` Error message: SEGV on debug version ``` Segmentation fault (core dumped) ``` Error messages in ASAN version: ``` AddressSanitizer:DEADLYSIGNAL ================================================================= ==3670588==ERROR: AddressSanitizer: SEGV on unknown address 0x00000008 (pc 0x5671dd80 bp 0xffa2bf18 sp 0xffa2be60 T0) ==3670588==The signal is caused by a READ memory access. ==3670588==Hint: address points to the zero page. #0 0x5671dd7f in parser_parse_class jerry-core/parser/js/js-parser-expr.c:1107 #1 0x567421af in parser_parse_statements jerry-core/parser/js/js-parser-statm.c:2787 #2 0x56662c82 in parser_parse_source jerry-core/parser/js/js-parser.c:2280 #3 0x56668fb5 in parser_parse_script jerry-core/parser/js/js-parser.c:3326 #4 0x56614236 in ecma_op_eval_chars_buffer jerry-core/ecma/operations/ecma-eval.c:86 #5 0x5661415b in ecma_op_eval jerry-core/ecma/operations/ecma-eval.c:56 #6 0x566d8c14 in ecma_builtin_global_object_eval jerry-core/ecma/builtin-objects/ecma-builtin-global.c:109 #7 0x566da71e in ecma_builtin_global_dispatch_routine jerry-core/ecma/builtin-objects/ecma-builtin-global.c:594 #8 0x565f26de in ecma_builtin_dispatch_routine jerry-core/ecma/builtin-objects/ecma-builtins.c:1460 #9 0x565f28fb in ecma_builtin_dispatch_call jerry-core/ecma/builtin-objects/ecma-builtins.c:1489 #10 0x566184a8 in ecma_op_function_call_native_built_in jerry-core/ecma/operations/ecma-function-object.c:1217 #11 0x56618e8c in ecma_op_function_call 
jerry-core/ecma/operations/ecma-function-object.c:1411 #12 0x56618d91 in ecma_op_function_validated_call jerry-core/ecma/operations/ecma-function-object.c:1371 #13 0x566a03a5 in opfunc_call jerry-core/vm/vm.c:758 #14 0x566c08fe in vm_execute jerry-core/vm/vm.c:5217 #15 0x566c0f17 in vm_run jerry-core/vm/vm.c:5312 #16 0x566182ba in ecma_op_function_call_simple jerry-core/ecma/operations/ecma-function-object.c:1176 #17 0x56618e70 in ecma_op_function_call jerry-core/ecma/operations/ecma-function-object.c:1406 #18 0x566a7707 in vm_loop jerry-core/vm/vm.c:1794 #19 0x566c0899 in vm_execute jerry-core/vm/vm.c:5211 #20 0x566c0f17 in vm_run jerry-core/vm/vm.c:5312 #21 0x566967f4 in opfunc_init_static_class_fields jerry-core/vm/opcodes.c:1081 #22 0x566a9f4a in vm_loop jerry-core/vm/vm.c:2150 #23 0x566c0899 in vm_execute jerry-core/vm/vm.c:5211 #24 0x566c0f17 in vm_run jerry-core/vm/vm.c:5312 #25 0x566182ba in ecma_op_function_call_simple jerry-core/ecma/operations/ecma-function-object.c:1176 #26 0x56618e70 in ecma_op_function_call jerry-core/ecma/operations/ecma-function-object.c:1406 #27 0x5662f276 in ecma_promise_run_executor jerry-core/ecma/operations/ecma-promise-object.c:447 #28 0x5662f4dc in ecma_op_create_promise_object jerry-core/ecma/operations/ecma-promise-object.c:514 #29 0x566ec87f in ecma_builtin_promise_dispatch_construct jerry-core/ecma/builtin-objects/ecma-builtin-promise.c:476 #30 0x565f2ba9 in ecma_builtin_dispatch_construct jerry-core/ecma/builtin-objects/ecma-builtins.c:1518 #31 0x56619172 in ecma_op_function_construct_built_in jerry-core/ecma/operations/ecma-function-object.c:1537 #32 0x566196cf in ecma_op_function_construct jerry-core/ecma/operations/ecma-function-object.c:1717 #33 0x566a08f1 in opfunc_construct jerry-core/vm/vm.c:840 #34 0x566c093c in vm_execute jerry-core/vm/vm.c:5236 #35 0x566c0f17 in vm_run jerry-core/vm/vm.c:5312 #36 0x5669e5a3 in vm_run_global jerry-core/vm/vm.c:286 #37 0x565a2753 in jerry_run jerry-core/api/jerryscript.c:548 #38 
0x5674f754 in jerryx_source_exec_script jerry-ext/util/sources.c:68 #39 0x5659d688 in main jerry-main/main-desktop.c:156 #40 0xf7653ed4 in __libc_start_main (/lib/i386-linux-gnu/libc.so.6+0x1aed4) ```
Memory corruption in parser_parse_class
https://api.github.com/repos/jerryscript-project/jerryscript/issues/5117/comments
0
2023-12-01T08:32:30
2024-05-28T04:59:15Z
https://github.com/jerryscript-project/jerryscript/issues/5117
2,020,374,504
5,117
false
This is a GitHub Issue repo:jerryscript owner:jerryscript-project Title : Memory corruption in parser_parse_class Issue date: --- start body --- jerryScript version 3.0.0: commit 05dbbd134c3b9e2482998f267857dd3722001cd7 Build platform: Ubuntu 20.04 Build cmd: ``` python tools/build.py --debug --profile=es.next --lto=off --compile-flag=-D_POSIX_C_SOURCE=200809 --compile-flag=-Wno-strict-prototypes --stack-limit=15 ``` Test case ```js function foo() { class Bar { static { var x = ` for (let i = 0, j = 10; i < j;) { } function baz() { return arguments; } class Proto { } `; eval(x); } } return foo; } new Promise(foo); ``` Error message: SEGV on debug version ``` Segmentation fault (core dumped) ``` Error messages in ASAN version: ``` AddressSanitizer:DEADLYSIGNAL ================================================================= ==3670588==ERROR: AddressSanitizer: SEGV on unknown address 0x00000008 (pc 0x5671dd80 bp 0xffa2bf18 sp 0xffa2be60 T0) ==3670588==The signal is caused by a READ memory access. ==3670588==Hint: address points to the zero page. 
#0 0x5671dd7f in parser_parse_class jerry-core/parser/js/js-parser-expr.c:1107 #1 0x567421af in parser_parse_statements jerry-core/parser/js/js-parser-statm.c:2787 #2 0x56662c82 in parser_parse_source jerry-core/parser/js/js-parser.c:2280 #3 0x56668fb5 in parser_parse_script jerry-core/parser/js/js-parser.c:3326 #4 0x56614236 in ecma_op_eval_chars_buffer jerry-core/ecma/operations/ecma-eval.c:86 #5 0x5661415b in ecma_op_eval jerry-core/ecma/operations/ecma-eval.c:56 #6 0x566d8c14 in ecma_builtin_global_object_eval jerry-core/ecma/builtin-objects/ecma-builtin-global.c:109 #7 0x566da71e in ecma_builtin_global_dispatch_routine jerry-core/ecma/builtin-objects/ecma-builtin-global.c:594 #8 0x565f26de in ecma_builtin_dispatch_routine jerry-core/ecma/builtin-objects/ecma-builtins.c:1460 #9 0x565f28fb in ecma_builtin_dispatch_call jerry-core/ecma/builtin-objects/ecma-builtins.c:1489 #10 0x566184a8 in ecma_op_function_call_native_built_in jerry-core/ecma/operations/ecma-function-object.c:1217 #11 0x56618e8c in ecma_op_function_call jerry-core/ecma/operations/ecma-function-object.c:1411 #12 0x56618d91 in ecma_op_function_validated_call jerry-core/ecma/operations/ecma-function-object.c:1371 #13 0x566a03a5 in opfunc_call jerry-core/vm/vm.c:758 #14 0x566c08fe in vm_execute jerry-core/vm/vm.c:5217 #15 0x566c0f17 in vm_run jerry-core/vm/vm.c:5312 #16 0x566182ba in ecma_op_function_call_simple jerry-core/ecma/operations/ecma-function-object.c:1176 #17 0x56618e70 in ecma_op_function_call jerry-core/ecma/operations/ecma-function-object.c:1406 #18 0x566a7707 in vm_loop jerry-core/vm/vm.c:1794 #19 0x566c0899 in vm_execute jerry-core/vm/vm.c:5211 #20 0x566c0f17 in vm_run jerry-core/vm/vm.c:5312 #21 0x566967f4 in opfunc_init_static_class_fields jerry-core/vm/opcodes.c:1081 #22 0x566a9f4a in vm_loop jerry-core/vm/vm.c:2150 #23 0x566c0899 in vm_execute jerry-core/vm/vm.c:5211 #24 0x566c0f17 in vm_run jerry-core/vm/vm.c:5312 #25 0x566182ba in ecma_op_function_call_simple 
jerry-core/ecma/operations/ecma-function-object.c:1176 #26 0x56618e70 in ecma_op_function_call jerry-core/ecma/operations/ecma-function-object.c:1406 #27 0x5662f276 in ecma_promise_run_executor jerry-core/ecma/operations/ecma-promise-object.c:447 #28 0x5662f4dc in ecma_op_create_promise_object jerry-core/ecma/operations/ecma-promise-object.c:514 #29 0x566ec87f in ecma_builtin_promise_dispatch_construct jerry-core/ecma/builtin-objects/ecma-builtin-promise.c:476 #30 0x565f2ba9 in ecma_builtin_dispatch_construct jerry-core/ecma/builtin-objects/ecma-builtins.c:1518 #31 0x56619172 in ecma_op_function_construct_built_in jerry-core/ecma/operations/ecma-function-object.c:1537 #32 0x566196cf in ecma_op_function_construct jerry-core/ecma/operations/ecma-function-object.c:1717 #33 0x566a08f1 in opfunc_construct jerry-core/vm/vm.c:840 #34 0x566c093c in vm_execute jerry-core/vm/vm.c:5236 #35 0x566c0f17 in vm_run jerry-core/vm/vm.c:5312 #36 0x5669e5a3 in vm_run_global jerry-core/vm/vm.c:286 #37 0x565a2753 in jerry_run jerry-core/api/jerryscript.c:548 #38 0x5674f754 in jerryx_source_exec_script jerry-ext/util/sources.c:68 #39 0x5659d688 in main jerry-main/main-desktop.c:156 #40 0xf7653ed4 in __libc_start_main (/lib/i386-linux-gnu/libc.so.6+0x1aed4) ``` --- end body ---
4,760
[ -0.006350967567414045, -0.00275737838819623, -0.006961366161704063, 0.010429699905216694, 0.01953275315463543, 0.022694123908877373, -0.027323273941874504, 0.028170069679617882, -0.01662542298436165, 0.05018675699830055, -0.015002396889030933, 0.006605006288737059, 0.015199982561171055, 0....
null
null
null
null
null
null
null
null
null
[ "gpac", "gpac" ]
Thanks for reporting your issue. Please make sure these boxes are checked before submitting your issue - thank you! - [x] I looked for a similar issue and couldn't find any. - [x] I tried with the latest version of GPAC. Installers available at http://gpac.io/downloads/gpac-nightly-builds/ - [x] I give enough information for contributors to reproduce my issue (meaningful title, github labels, platform and compiler, command-line ...). I can share files anonymously with this dropbox: https://www.mediafire.com/filedrop/filedrop_hosted.php?drop=eec9e058a9486fe4e99c33021481d9e1826ca9dbc242a6cfaab0fe95da5e5d95 I have created a couple of test uncompressed files with Y Cb Cr components (no subsampling). Converting those to PNG, JPEG or YUV fails with unable to convert caps. A simple way to reproduce this is: ``` #gpac uncvg:c=Y,U,V:fps=0 -o gpac_yuv.heif $ ../../gpac/bin/gcc/gpac -i gpac_yuv.heif -o out.yuv No suitable filter to adapt caps between pid gpac_yuv.heif in filter uncvdec to filter writegen, disconnecting pid! session last connect error Filter not found for the desired type ``` In addition, trying with PNG will segfault: ``` $ ../../gpac/bin/gcc/gpac -i gpac_yuv.heif -o out.png No suitable filter to adapt caps between pid gpac_yuv.heif in filter uncvdec to filter encpng:1.6.37, disconnecting pid! Segmentation fault (core dumped) ``` This doesn't appear to happen with other components - mono, mono+alpha planar, RGB pixel interleave, RGBA pixel interleave, and RGB planar appear OK.
uncompressed YCbCr can't be decoded to a file
https://api.github.com/repos/gpac/gpac/issues/2467/comments
3
2023-05-12T10:45:52
2023-05-13T00:13:37Z
https://github.com/gpac/gpac/issues/2467
1,707,398,894
2,467
false
This is a GitHub Issue repo:gpac owner:gpac Title : uncompressed YCbCr can't be decoded to a file Issue date: --- start body --- Thanks for reporting your issue. Please make sure these boxes are checked before submitting your issue - thank you! - [x] I looked for a similar issue and couldn't find any. - [x] I tried with the latest version of GPAC. Installers available at http://gpac.io/downloads/gpac-nightly-builds/ - [x] I give enough information for contributors to reproduce my issue (meaningful title, github labels, platform and compiler, command-line ...). I can share files anonymously with this dropbox: https://www.mediafire.com/filedrop/filedrop_hosted.php?drop=eec9e058a9486fe4e99c33021481d9e1826ca9dbc242a6cfaab0fe95da5e5d95 I have created a couple of test uncompressed files with Y Cb Cr components (no subsampling). Converting those to PNG, JPEG or YUV fails with unable to convert caps. A simple way to reproduce this is: ``` #gpac uncvg:c=Y,U,V:fps=0 -o gpac_yuv.heif $ ../../gpac/bin/gcc/gpac -i gpac_yuv.heif -o out.yuv No suitable filter to adapt caps between pid gpac_yuv.heif in filter uncvdec to filter writegen, disconnecting pid! session last connect error Filter not found for the desired type ``` In addition, trying with PNG will segfault: ``` $ ../../gpac/bin/gcc/gpac -i gpac_yuv.heif -o out.png No suitable filter to adapt caps between pid gpac_yuv.heif in filter uncvdec to filter encpng:1.6.37, disconnecting pid! Segmentation fault (core dumped) ``` This doesn't appear to happen with other components - mono, mono+alpha planar, RGB pixel interleave, RGBA pixel interleave, and RGB planar appear OK. --- end body ---
1,685
[ -0.009747851639986038, 0.01474534161388874, -0.019083822146058083, 0.006336103659123182, -0.006634717341512442, 0.010200920514762402, -0.02855708636343479, 0.02752738445997238, -0.03198942914605141, 0.007757093291729689, -0.011628774926066399, -0.019784020259976387, 0.024946263059973717, 0...
null
null
null
null
null
null
null
null
null
[ "kubernetes", "kubernetes" ]
Hi, I am unsure exactly who this concerns, but I have been using the `ginkgo` conformance tests for my project [nodejs-k8s](https://github.com/Megapixel99/nodejs-k8s) and I have been running into some issues parsing the protobuf data which is sent from the tests. I am not entirely sure if I am doing something incorrectly (or if this is even the correct project for this issue); however, assuming I am doing everything properly, the protobuf data being sent from the tests seems to be corrupted. The command I am using to run the tests is: `kubetest2 noop --kubeconfig=./test-config --v 10 --test=ginkgo` and my `kubeconfig` file can be found here: https://github.com/Megapixel99/nodejs-k8s/blob/f2c0f14f17b16d3093aa33a45dfdcd01b373c460/test-config When running the tests I am logging the method, headers, and body of every request sent to the server [via this code here](https://github.com/Megapixel99/nodejs-k8s/blob/f2c0f14f17b16d3093aa33a45dfdcd01b373c460/index.js#L90-L95). Upon decoding the protobuf data; however, I am constantly getting errors. For example, the payload sent with this test: `[sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]` when converted using `NodeJS`s built-in [`Buffer.toString`](https://nodejs.org/api/buffer.html#buftostringencoding-start-end) method I get the following results: converted to base64 it is: ``` azhzAAoPCgJ2MRIJTmFtZXNwYWNlEvsBCvIBCgxzZWNyZXRzLTg0ODASABoAIgAqADIAOABCAFoYCg1lMmUtZnJhbWV3b3JrEgdzZWNyZXRzWi8KB2UyZS1ydW4SJDBkMTNmYmNlLWNmMzMtNDFkNS05YmJiLWQyZDY3YjEwMzBkOVosCiBwb2Qtc2VjdXJpdHkua3ViZXJuZXRlcy5pby9hdWRpdBIIYmFzZWxpbmVaLgoicG9kLXNlY3VyaXR5Lmt1YmVybmV0ZXMuaW8vZW5mb3JjZRIIYmFzZWxpbmVaKwofcG9kLXNlY3VyaXR5Lmt1YmVybmV0ZXMuaW8vd2FybhIIYmFzZWxpbmUSABoCCgAaACIA ``` and converted to UTF-8 it is: ``` k8s v1 Namespace� � secrets-8480"*28BZ e2e-frameworksecretsZ/ e2e-run$0d13fbce-cf33-41d5-9bbb-d2d67b1030d9Z, pod-security.kubernetes.io/audibaselineZ. 
"pod-security.kubernetes.io/enforcbaselineZ+ pod-security.kubernetes.io/warbaseline " ``` and converted to binary it is: ``` k8s v1 Namespaceû ò secrets-8480"*28BZ e2e-frameworksecretsZ/ e2e-run$0d13fbce-cf33-41d5-9bbb-d2d67b1030d9Z, pod-security.kubernetes.io/audibaselineZ. "pod-security.kubernetes.io/enforcbaselineZ+ pod-security.kubernetes.io/warbaseline " ``` When showing the hidden characters with [JSON.stringify](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify) (after converting the data to UTF-8) I see: ``` k8s\u0000\n\u000f\n\u0002v1\u0012\tNamespace\u0012�\u0002\n�\u0001\n\u000fsecrets-8480\u0012\u0000\u001a\u0000\"\u0000*\u00002\u00008\u0000B\u0000Z\u001b\n\re2e-framework\u0012\nnamespacesZ/\n\u0007e2e-run\u0012$0d13fbce-cf33-41d5-9bbb-d2d67b1030d9Z,\n pod-security.kubernetes.io/audit\u0012\bbaselineZ.\n\"pod-security.kubernetes.io/enforce\u0012\bbaselineZ+\n\u001fpod-security.kubernetes.io/warn\u0012\bbaseline\u0012\u0000\u001a\u0002\n\u0000\u001a\u0000\"\u0000 ``` Upon trying to decode the base64 data above in https://www.protobufpal.com/ using the proto below (which was taken from [this file](https://github.com/kubernetes/kubernetes/blob/1ebc3d2a6480f3679c28d0bc3d486ad93e95084f/staging/src/k8s.io/api/core/v1/generated.proto#L2376) and modified slightly to fix the references): ``` syntax = "proto2"; package openapitools; import "google/protobuf/struct.proto"; message Namespace { optional ObjectMeta metadata = 1; optional NamespaceSpec spec = 2; optional NamespaceStatus status = 3; } message NamespaceCondition { optional string type = 1; optional string status = 2; optional Time lastTransitionTime = 4; optional string reason = 5; optional string message = 6; } message NamespaceSpec { repeated string finalizers = 1; } message NamespaceStatus { optional string phase = 1; repeated NamespaceCondition conditions = 2; } message ObjectMeta { optional string name = 1; optional string generateName = 2; optional 
string namespace = 3; optional string selfLink = 4; optional string uid = 5; optional string resourceVersion = 6; optional int64 generation = 7; optional Time creationTimestamp = 8; optional Time deletionTimestamp = 9; optional int64 deletionGracePeriodSeconds = 10; map<string, string> labels = 11; map<string, string> annotations = 12; repeated OwnerReference ownerReferences = 13; repeated string finalizers = 14; repeated ManagedFieldsEntry managedFields = 17; } message OwnerReference { optional string apiVersion = 5; optional string kind = 1; optional string name = 3; optional string uid = 4; optional bool controller = 6; optional bool blockOwnerDeletion = 7; } message ManagedFieldsEntry { optional string manager = 1; optional string operation = 2; optional string apiVersion = 3; optional Time time = 4; optional string fieldsType = 6; optional FieldsV1 fieldsV1 = 7; optional string subresource = 8; } message FieldsV1 { optional bytes Raw = 1; } message Time { optional int64 seconds = 1; optional int32 nanos = 2; } ``` I receive the error `Unable to parse encoded message`, also when trying to parse the protobuf data using [protobufjs](https://www.npmjs.com/package/protobufjs) and the same proto as above I receive the error: ``` Error: invalid wire type 7 at offset 6 at Reader.skipType (/Users/seth/Desktop/temp/coding/k8s/node_modules/protobufjs/src/reader.js:382:19) at Reader.skipType (/Users/seth/Desktop/temp/coding/k8s/node_modules/protobufjs/src/reader.js:373:22) ``` And wire type 7 is an indicator of a corrupted buffer as mentioned here: https://github.com/protobufjs/protobuf.js/wiki/How-to-reverse-engineer-a-buffer-by-hand#so-what-are-those-wire-types. While it is possible that I am doing something wrong. Everything I have read about protocol buffers seems to indicate the protobuf data sent in the http request should (almost) match the spec defined in the proto file ie ``` { 1: value, 2: value, .... } ``` and not what I am currently seeing. 
Let me know if I need to provide anything else or if I am doing something incorrectly.
Conformance tests possibly sending corrupt protobuf (Protocol Buffer) data?
https://api.github.com/repos/kubernetes/kubernetes/issues/125201/comments
7
2024-05-29T19:36:33
2024-05-29T22:21:35Z
https://github.com/kubernetes/kubernetes/issues/125201
2,324,106,716
125,201
false
This is a GitHub Issue repo:kubernetes owner:kubernetes Title : Conformance tests possibly sending corrupt protobuf (Protocol Buffer) data? Issue date: --- start body --- Hi, I am unsure exactly who this concerns, but I have been using the `ginkgo` conformance tests for my project [nodejs-k8s](https://github.com/Megapixel99/nodejs-k8s) and I have been running into some issues parsing the protobuf data which is sent from the tests. I am not entirely sure if I am doing something incorrectly (or if this is even the correct project for this issue); however, assuming I am doing everything properly, the protobuf data being sent from the tests seems to be corrupted. The command I am using to run the tests is: `kubetest2 noop --kubeconfig=./test-config --v 10 --test=ginkgo` and my `kubeconfig` file can be found here: https://github.com/Megapixel99/nodejs-k8s/blob/f2c0f14f17b16d3093aa33a45dfdcd01b373c460/test-config When running the tests I am logging the method, headers, and body of every request sent to the server [via this code here](https://github.com/Megapixel99/nodejs-k8s/blob/f2c0f14f17b16d3093aa33a45dfdcd01b373c460/index.js#L90-L95). Upon decoding the protobuf data, however, I am constantly getting errors.
For example, the payload sent with this test: `[sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]` when converted using `NodeJS`s built-in [`Buffer.toString`](https://nodejs.org/api/buffer.html#buftostringencoding-start-end) method I get the following results: converted to base64 it is: ``` azhzAAoPCgJ2MRIJTmFtZXNwYWNlEvsBCvIBCgxzZWNyZXRzLTg0ODASABoAIgAqADIAOABCAFoYCg1lMmUtZnJhbWV3b3JrEgdzZWNyZXRzWi8KB2UyZS1ydW4SJDBkMTNmYmNlLWNmMzMtNDFkNS05YmJiLWQyZDY3YjEwMzBkOVosCiBwb2Qtc2VjdXJpdHkua3ViZXJuZXRlcy5pby9hdWRpdBIIYmFzZWxpbmVaLgoicG9kLXNlY3VyaXR5Lmt1YmVybmV0ZXMuaW8vZW5mb3JjZRIIYmFzZWxpbmVaKwofcG9kLXNlY3VyaXR5Lmt1YmVybmV0ZXMuaW8vd2FybhIIYmFzZWxpbmUSABoCCgAaACIA ``` and converted to UTF-8 it is: ``` k8s v1 Namespace� � secrets-8480"*28BZ e2e-frameworksecretsZ/ e2e-run$0d13fbce-cf33-41d5-9bbb-d2d67b1030d9Z, pod-security.kubernetes.io/audibaselineZ. "pod-security.kubernetes.io/enforcbaselineZ+ pod-security.kubernetes.io/warbaseline " ``` and converted to binary it is: ``` k8s v1 Namespaceû ò secrets-8480"*28BZ e2e-frameworksecretsZ/ e2e-run$0d13fbce-cf33-41d5-9bbb-d2d67b1030d9Z, pod-security.kubernetes.io/audibaselineZ. 
"pod-security.kubernetes.io/enforcbaselineZ+ pod-security.kubernetes.io/warbaseline " ``` When showing the hidden characters with [JSON.stringify](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify) (after converting the data to UTF-8) I see: ``` k8s\u0000\n\u000f\n\u0002v1\u0012\tNamespace\u0012�\u0002\n�\u0001\n\u000fsecrets-8480\u0012\u0000\u001a\u0000\"\u0000*\u00002\u00008\u0000B\u0000Z\u001b\n\re2e-framework\u0012\nnamespacesZ/\n\u0007e2e-run\u0012$0d13fbce-cf33-41d5-9bbb-d2d67b1030d9Z,\n pod-security.kubernetes.io/audit\u0012\bbaselineZ.\n\"pod-security.kubernetes.io/enforce\u0012\bbaselineZ+\n\u001fpod-security.kubernetes.io/warn\u0012\bbaseline\u0012\u0000\u001a\u0002\n\u0000\u001a\u0000\"\u0000 ``` Upon trying to decode the base64 data above in https://www.protobufpal.com/ using the proto below (which was taken from [this file](https://github.com/kubernetes/kubernetes/blob/1ebc3d2a6480f3679c28d0bc3d486ad93e95084f/staging/src/k8s.io/api/core/v1/generated.proto#L2376) and modified slightly to fix the references): ``` syntax = "proto2"; package openapitools; import "google/protobuf/struct.proto"; message Namespace { optional ObjectMeta metadata = 1; optional NamespaceSpec spec = 2; optional NamespaceStatus status = 3; } message NamespaceCondition { optional string type = 1; optional string status = 2; optional Time lastTransitionTime = 4; optional string reason = 5; optional string message = 6; } message NamespaceSpec { repeated string finalizers = 1; } message NamespaceStatus { optional string phase = 1; repeated NamespaceCondition conditions = 2; } message ObjectMeta { optional string name = 1; optional string generateName = 2; optional string namespace = 3; optional string selfLink = 4; optional string uid = 5; optional string resourceVersion = 6; optional int64 generation = 7; optional Time creationTimestamp = 8; optional Time deletionTimestamp = 9; optional int64 deletionGracePeriodSeconds = 10; map<string, 
string> labels = 11; map<string, string> annotations = 12; repeated OwnerReference ownerReferences = 13; repeated string finalizers = 14; repeated ManagedFieldsEntry managedFields = 17; } message OwnerReference { optional string apiVersion = 5; optional string kind = 1; optional string name = 3; optional string uid = 4; optional bool controller = 6; optional bool blockOwnerDeletion = 7; } message ManagedFieldsEntry { optional string manager = 1; optional string operation = 2; optional string apiVersion = 3; optional Time time = 4; optional string fieldsType = 6; optional FieldsV1 fieldsV1 = 7; optional string subresource = 8; } message FieldsV1 { optional bytes Raw = 1; } message Time { optional int64 seconds = 1; optional int32 nanos = 2; } ``` I receive the error `Unable to parse encoded message`, also when trying to parse the protobuf data using [protobufjs](https://www.npmjs.com/package/protobufjs) and the same proto as above I receive the error: ``` Error: invalid wire type 7 at offset 6 at Reader.skipType (/Users/seth/Desktop/temp/coding/k8s/node_modules/protobufjs/src/reader.js:382:19) at Reader.skipType (/Users/seth/Desktop/temp/coding/k8s/node_modules/protobufjs/src/reader.js:373:22) ``` And wire type 7 is an indicator of a corrupted buffer as mentioned here: https://github.com/protobufjs/protobuf.js/wiki/How-to-reverse-engineer-a-buffer-by-hand#so-what-are-those-wire-types. While it is possible that I am doing something wrong. Everything I have read about protocol buffers seems to indicate the protobuf data sent in the http request should (almost) match the spec defined in the proto file ie ``` { 1: value, 2: value, .... } ``` and not what I am currently seeing. Let me know if I need to provide anything else or if I am doing something incorrectly. --- end body ---
6,458
[ -0.004843174014240503, -0.0037250586319714785, -0.012030205689370632, -0.015976496040821075, 0.029146740213036537, -0.027791449800133705, -0.009638514369726181, -0.0006891059456393123, -0.005903490353375673, -0.003758941078558564, -0.010714775882661343, 0.0002558611158747226, 0.0051022740080...
null
null
null
null
null
null
null
null
null
[ "gpac", "gpac" ]
git log commit bbca869177585aaca8eb66d8541079e6f364798e (HEAD -> master, origin/master, origin/HEAD) Author: jeanlf <jeanlf@gpac.io> Date: Wed Jan 18 11:40:30 2023 +0100 compile setting: ./configure --enable-sanitizer ./MP4Box -info xxx ================================================================= ==2298535==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x606000000839 at pc 0x7ff4a18a8490 bp 0x7ffe6ddf6040 sp 0x7ffe6ddf57e8 READ of size 276 at 0x606000000839 thread T0 #0 0x7ff4a18a848f in __interceptor_memcpy ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:790 #1 0x7ff49f1ffc75 in memcpy /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34 #2 0x7ff49f1ffc75 in mp3_dmx_process filters/reframe_mp3.c:673 #3 0x7ff49eddf14d in gf_filter_process_task filter_core/filter.c:2828 #4 0x7ff49eda10e2 in gf_fs_thread_proc filter_core/filter_session.c:1859 #5 0x7ff49edad8b6 in gf_fs_run filter_core/filter_session.c:2120 #6 0x7ff49e7eb8a6 in gf_media_import media_tools/media_import.c:1228 #7 0x5560971a73b1 in convert_file_info /home/qianshuidewajueji/gpac/applications/mp4box/fileimport.c:130 #8 0x556097176db5 in mp4box_main /home/qianshuidewajueji/gpac/applications/mp4box/mp4box.c:6302 #9 0x7ff49ba83082 in __libc_start_main ../csu/libc-start.c:308 #10 0x55609714acfd in _start (/home/qianshuidewajueji/gpac/bin/gcc/MP4Box+0xa3cfd) 0x606000000839 is located 0 bytes to the right of 57-byte region [0x606000000800,0x606000000839) allocated by thread T0 here: #0 0x7ff4a191ac3e in __interceptor_realloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:163 #1 0x7ff49f2011d8 in mp3_dmx_process filters/reframe_mp3.c:547 #2 0x7ff49eddf14d in gf_filter_process_task filter_core/filter.c:2828 #3 0x7ff49eda10e2 in gf_fs_thread_proc filter_core/filter_session.c:1859 #4 0x7ff49edad8b6 in gf_fs_run filter_core/filter_session.c:2120 #5 0x7ff49e7eb8a6 in gf_media_import media_tools/media_import.c:1228 #6 0x5560971a73b1 in convert_file_info 
/home/qianshuidewajueji/gpac/applications/mp4box/fileimport.c:130 #7 0x556097176db5 in mp4box_main /home/qianshuidewajueji/gpac/applications/mp4box/mp4box.c:6302 #8 0x7ff49ba83082 in __libc_start_main ../csu/libc-start.c:308 SUMMARY: AddressSanitizer: heap-buffer-overflow ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:790 in __interceptor_memcpy Shadow bytes around the buggy address: 0x0c0c7fff80b0: 00 00 00 00 fa fa fa fa fd fd fd fd fd fd fd fa 0x0c0c7fff80c0: fa fa fa fa 00 00 00 00 00 00 00 00 fa fa fa fa 0x0c0c7fff80d0: 00 00 00 00 00 00 00 00 fa fa fa fa 00 00 00 00 0x0c0c7fff80e0: 00 00 00 00 fa fa fa fa 00 00 00 00 00 00 00 00 0x0c0c7fff80f0: fa fa fa fa 00 00 00 00 00 00 00 fa fa fa fa fa =>0x0c0c7fff8100: 00 00 00 00 00 00 00[01]fa fa fa fa 00 00 00 00 0x0c0c7fff8110: 00 00 00 00 fa fa fa fa 00 00 00 00 00 00 00 00 0x0c0c7fff8120: fa fa fa fa 00 00 00 00 00 00 00 00 fa fa fa fa 0x0c0c7fff8130: 00 00 00 00 00 00 00 00 fa fa fa fa 00 00 00 00 0x0c0c7fff8140: 00 00 00 00 fa fa fa fa 00 00 00 00 00 00 00 00 0x0c0c7fff8150: fa fa fa fa 00 00 00 00 00 00 00 00 fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==2298535==ABORTING This vulnerability is capable of crashing software, use unexpected value, or possible code execution. Occurrences poc: [xxx](https://github.com/qianshuidewajueji/poc/blob/main/gpac/xxx)
heap-buffer-overflow in function mp3_dmx_process
https://api.github.com/repos/gpac/gpac/issues/2391/comments
2
2023-02-05T15:13:20
2023-02-10T04:11:10Z
https://github.com/gpac/gpac/issues/2391
1,571,472,143
2,391
false
This is a GitHub Issue repo:gpac owner:gpac Title : heap-buffer-overflow in function mp3_dmx_process Issue date: --- start body --- git log commit bbca869177585aaca8eb66d8541079e6f364798e (HEAD -> master, origin/master, origin/HEAD) Author: jeanlf <jeanlf@gpac.io> Date: Wed Jan 18 11:40:30 2023 +0100 compile setting: ./configure --enable-sanitizer ./MP4Box -info xxx ================================================================= ==2298535==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x606000000839 at pc 0x7ff4a18a8490 bp 0x7ffe6ddf6040 sp 0x7ffe6ddf57e8 READ of size 276 at 0x606000000839 thread T0 #0 0x7ff4a18a848f in __interceptor_memcpy ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:790 #1 0x7ff49f1ffc75 in memcpy /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34 #2 0x7ff49f1ffc75 in mp3_dmx_process filters/reframe_mp3.c:673 #3 0x7ff49eddf14d in gf_filter_process_task filter_core/filter.c:2828 #4 0x7ff49eda10e2 in gf_fs_thread_proc filter_core/filter_session.c:1859 #5 0x7ff49edad8b6 in gf_fs_run filter_core/filter_session.c:2120 #6 0x7ff49e7eb8a6 in gf_media_import media_tools/media_import.c:1228 #7 0x5560971a73b1 in convert_file_info /home/qianshuidewajueji/gpac/applications/mp4box/fileimport.c:130 #8 0x556097176db5 in mp4box_main /home/qianshuidewajueji/gpac/applications/mp4box/mp4box.c:6302 #9 0x7ff49ba83082 in __libc_start_main ../csu/libc-start.c:308 #10 0x55609714acfd in _start (/home/qianshuidewajueji/gpac/bin/gcc/MP4Box+0xa3cfd) 0x606000000839 is located 0 bytes to the right of 57-byte region [0x606000000800,0x606000000839) allocated by thread T0 here: #0 0x7ff4a191ac3e in __interceptor_realloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:163 #1 0x7ff49f2011d8 in mp3_dmx_process filters/reframe_mp3.c:547 #2 0x7ff49eddf14d in gf_filter_process_task filter_core/filter.c:2828 #3 0x7ff49eda10e2 in gf_fs_thread_proc filter_core/filter_session.c:1859 #4 0x7ff49edad8b6 in gf_fs_run 
filter_core/filter_session.c:2120 #5 0x7ff49e7eb8a6 in gf_media_import media_tools/media_import.c:1228 #6 0x5560971a73b1 in convert_file_info /home/qianshuidewajueji/gpac/applications/mp4box/fileimport.c:130 #7 0x556097176db5 in mp4box_main /home/qianshuidewajueji/gpac/applications/mp4box/mp4box.c:6302 #8 0x7ff49ba83082 in __libc_start_main ../csu/libc-start.c:308 SUMMARY: AddressSanitizer: heap-buffer-overflow ../../../../src/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:790 in __interceptor_memcpy Shadow bytes around the buggy address: 0x0c0c7fff80b0: 00 00 00 00 fa fa fa fa fd fd fd fd fd fd fd fa 0x0c0c7fff80c0: fa fa fa fa 00 00 00 00 00 00 00 00 fa fa fa fa 0x0c0c7fff80d0: 00 00 00 00 00 00 00 00 fa fa fa fa 00 00 00 00 0x0c0c7fff80e0: 00 00 00 00 fa fa fa fa 00 00 00 00 00 00 00 00 0x0c0c7fff80f0: fa fa fa fa 00 00 00 00 00 00 00 fa fa fa fa fa =>0x0c0c7fff8100: 00 00 00 00 00 00 00[01]fa fa fa fa 00 00 00 00 0x0c0c7fff8110: 00 00 00 00 fa fa fa fa 00 00 00 00 00 00 00 00 0x0c0c7fff8120: fa fa fa fa 00 00 00 00 00 00 00 00 fa fa fa fa 0x0c0c7fff8130: 00 00 00 00 00 00 00 00 fa fa fa fa 00 00 00 00 0x0c0c7fff8140: 00 00 00 00 fa fa fa fa 00 00 00 00 00 00 00 00 0x0c0c7fff8150: fa fa fa fa 00 00 00 00 00 00 00 00 fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==2298535==ABORTING This vulnerability is capable of crashing software, use unexpected value, or possible code execution. Occurrences poc: [xxx](https://github.com/qianshuidewajueji/poc/blob/main/gpac/xxx) --- end body ---
4,314
[ -0.022006990388035774, 0.02279614470899105, -0.010914157144725323, 0.008576473221182823, 0.03915993496775627, 0.014614250510931015, -0.05241177976131439, 0.03546728938817978, -0.00007840363105060533, 0.04413310065865517, -0.03168530389666557, 0.021024269983172417, 0.01339329406619072, 0.00...
null
null
null
null
null
null
null
null
null
[ "gpac", "gpac" ]
[poc.zip](https://github.com/gpac/gpac/files/8311643/poc.zip) ``` Description Null Pointer Dereference in gpac Proof of Concept Version: ~/fuzzing/gpac/gpac/bin/gcc/MP4Box -version MP4Box - GPAC version 2.1-DEV-rev15-g6c0f4ff03-master (c) 2000-2022 Telecom Paris distributed under LGPL v2.1+ - http://gpac.io Please cite our work in your research: GPAC Filters: https://doi.org/10.1145/3339825.3394929 GPAC: https://doi.org/10.1145/1291233.1291452 GPAC Configuration: Features: GPAC_CONFIG_LINUX GPAC_64_BITS GPAC_HAS_IPV6 GPAC_HAS_SSL GPAC_HAS_SOCK_UN GPAC_MINIMAL_ODF GPAC_HAS_QJS GPAC_HAS_LINUX_DVB GPAC_DISABLE_3D System information Ubuntu 20.04 focal, AMD EPYC 7742 64-Core @ 16x 2.25GHz command: ./MP4Box -info poc Result ~/fuzzing/gpac/gpac/bin/gcc/MP4Box -info ./poc [Core] exp-golomb read failed, not enough bits in bitstream ! [HEVC] Warning: Error parsing NAL unit AddressSanitizer:DEADLYSIGNAL ================================================================= ==1172475==ERROR: AddressSanitizer: SEGV on unknown address 0x00010000000d (pc 0x7f031b4ec838 bp 0x000000000002 sp 0x7ffca1f029b8 T0) ==1172475==The signal is caused by a READ memory access. 
#0 0x7f031b4ec837 (/lib/x86_64-linux-gnu/libasan.so.5+0x12e837) #1 0x7f031b4ec9d1 (/lib/x86_64-linux-gnu/libasan.so.5+0x12e9d1) #2 0x7f031b4ec60b (/lib/x86_64-linux-gnu/libasan.so.5+0x12e60b) #3 0x7f031b3ea141 (/lib/x86_64-linux-gnu/libasan.so.5+0x2c141) #4 0x7f031b3e6e1f (/lib/x86_64-linux-gnu/libasan.so.5+0x28e1f) #5 0x7f031b4cc0b1 in __interceptor_realloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10e0b1) #6 0x7f031abc5655 in gf_list_add (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0xac655) #7 0x7f031b0ca909 in naludmx_process (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x5b1909) #8 0x7f031afa67ef in gf_filter_process_task (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x48d7ef) #9 0x7f031af944d3 in gf_fs_thread_proc (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x47b4d3) #10 0x7f031af9943a in gf_fs_run (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x48043a) #11 0x7f031ae07151 in gf_media_import (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x2ee151) #12 0x5613ea17fdc2 in convert_file_info (/home/aidai/fuzzing/gpac/gpac/bin/gcc/MP4Box+0x3ddc2) #13 0x5613ea16e6d2 in mp4boxMain (/home/aidai/fuzzing/gpac/gpac/bin/gcc/MP4Box+0x2c6d2) #14 0x7f031a94b0b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x240b2) #15 0x5613ea15b53d in _start (/home/aidai/fuzzing/gpac/gpac/bin/gcc/MP4Box+0x1953d) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV (/lib/x86_64-linux-gnu/libasan.so.5+0x12e837) ==1172475==ABORTING Occurrences list.c L99 ```
Untrusted Pointer Dereference
https://api.github.com/repos/gpac/gpac/issues/2149/comments
0
2022-03-20T16:24:25
2022-03-21T10:52:11Z
https://github.com/gpac/gpac/issues/2149
1,174,603,686
2,149
false
This is a GitHub Issue repo:gpac owner:gpac Title : Untrusted Pointer Dereference Issue date: --- start body --- [poc.zip](https://github.com/gpac/gpac/files/8311643/poc.zip) ``` Description Null Pointer Dereference in gpac Proof of Concept Version: ~/fuzzing/gpac/gpac/bin/gcc/MP4Box -version MP4Box - GPAC version 2.1-DEV-rev15-g6c0f4ff03-master (c) 2000-2022 Telecom Paris distributed under LGPL v2.1+ - http://gpac.io Please cite our work in your research: GPAC Filters: https://doi.org/10.1145/3339825.3394929 GPAC: https://doi.org/10.1145/1291233.1291452 GPAC Configuration: Features: GPAC_CONFIG_LINUX GPAC_64_BITS GPAC_HAS_IPV6 GPAC_HAS_SSL GPAC_HAS_SOCK_UN GPAC_MINIMAL_ODF GPAC_HAS_QJS GPAC_HAS_LINUX_DVB GPAC_DISABLE_3D System information Ubuntu 20.04 focal, AMD EPYC 7742 64-Core @ 16x 2.25GHz command: ./MP4Box -info poc Result ~/fuzzing/gpac/gpac/bin/gcc/MP4Box -info ./poc [Core] exp-golomb read failed, not enough bits in bitstream ! [HEVC] Warning: Error parsing NAL unit AddressSanitizer:DEADLYSIGNAL ================================================================= ==1172475==ERROR: AddressSanitizer: SEGV on unknown address 0x00010000000d (pc 0x7f031b4ec838 bp 0x000000000002 sp 0x7ffca1f029b8 T0) ==1172475==The signal is caused by a READ memory access. 
#0 0x7f031b4ec837 (/lib/x86_64-linux-gnu/libasan.so.5+0x12e837) #1 0x7f031b4ec9d1 (/lib/x86_64-linux-gnu/libasan.so.5+0x12e9d1) #2 0x7f031b4ec60b (/lib/x86_64-linux-gnu/libasan.so.5+0x12e60b) #3 0x7f031b3ea141 (/lib/x86_64-linux-gnu/libasan.so.5+0x2c141) #4 0x7f031b3e6e1f (/lib/x86_64-linux-gnu/libasan.so.5+0x28e1f) #5 0x7f031b4cc0b1 in __interceptor_realloc (/lib/x86_64-linux-gnu/libasan.so.5+0x10e0b1) #6 0x7f031abc5655 in gf_list_add (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0xac655) #7 0x7f031b0ca909 in naludmx_process (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x5b1909) #8 0x7f031afa67ef in gf_filter_process_task (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x48d7ef) #9 0x7f031af944d3 in gf_fs_thread_proc (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x47b4d3) #10 0x7f031af9943a in gf_fs_run (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x48043a) #11 0x7f031ae07151 in gf_media_import (/home/aidai/fuzzing/gpac/gpac/bin/gcc/libgpac.so.11+0x2ee151) #12 0x5613ea17fdc2 in convert_file_info (/home/aidai/fuzzing/gpac/gpac/bin/gcc/MP4Box+0x3ddc2) #13 0x5613ea16e6d2 in mp4boxMain (/home/aidai/fuzzing/gpac/gpac/bin/gcc/MP4Box+0x2c6d2) #14 0x7f031a94b0b2 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x240b2) #15 0x5613ea15b53d in _start (/home/aidai/fuzzing/gpac/gpac/bin/gcc/MP4Box+0x1953d) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV (/lib/x86_64-linux-gnu/libasan.so.5+0x12e837) ==1172475==ABORTING Occurrences list.c L99 ``` --- end body ---
2,985
[ -0.007193812169134617, 0.031201068311929703, -0.010804659686982632, 0.011201992630958557, 0.022306393831968307, 0.02054976485669613, -0.029081961140036583, 0.04338593780994415, -0.005635849665850401, 0.0026976794470101595, -0.031396251171827316, 0.030671291053295135, 0.013300187885761261, ...
null
null
null
null
null
null
null
null
null
[ "openlink", "virtuoso-opensource" ]
Virtuoso 7.2.12 crashes with the `XTE_NODEBLD_ACC` function. Environment: Ubuntu 20.04, docker image `openlink/virtuoso-opensource-7:7.2.12`. PoC: ```sql SELECT XTE_NODEBLD_ACC(); ``` Backtrace: ``` #0 0xc11b6f (bif_xte_nodebld_acc+0x1f) #1 0x755af4 (ins_call_bif+0xc4) #2 0x75d882 (code_vec_run_v+0x25f2) #3 0x7b9c8f (qn_input+0x38f) #4 0x7ba136 (qn_send_output+0x236) #5 0x82c91d (set_ctr_vec_input+0x99d) #6 0x7b9cce (qn_input+0x3ce) #7 0x7cb4bb (qr_exec+0x11db) #8 0x7d8e56 (sf_sql_execute+0x11a6) #9 0x7d995e (sf_sql_execute_w+0x17e) #10 0x7e261d (sf_sql_execute_wrapper+0x3d) #11 0xe29cec (future_wrapper+0x3fc) #12 0xe315ee (_thread_boot+0x11e) #13 0x7fc880d33609 (start_thread+0xd9) #14 0x7fc880b03353 (clone+0x43) ```
Virtuoso 7.2.12 crashes with the `XTE_NODEBLD_ACC` function
https://api.github.com/repos/openlink/virtuoso-opensource/issues/1295/comments
0
2024-05-03T19:30:40
2024-05-07T11:59:52Z
https://github.com/openlink/virtuoso-opensource/issues/1295
2,278,300,244
1,295
false
This is a GitHub Issue repo:virtuoso-opensource owner:openlink Title : Virtuoso 7.2.12 crashes with the `XTE_NODEBLD_ACC` function Issue date: --- start body --- Virtuoso 7.2.12 crashes with the `XTE_NODEBLD_ACC` function. Environment: Ubuntu 20.04, docker image `openlink/virtuoso-opensource-7:7.2.12`. PoC: ```sql SELECT XTE_NODEBLD_ACC(); ``` Backtrace: ``` #0 0xc11b6f (bif_xte_nodebld_acc+0x1f) #1 0x755af4 (ins_call_bif+0xc4) #2 0x75d882 (code_vec_run_v+0x25f2) #3 0x7b9c8f (qn_input+0x38f) #4 0x7ba136 (qn_send_output+0x236) #5 0x82c91d (set_ctr_vec_input+0x99d) #6 0x7b9cce (qn_input+0x3ce) #7 0x7cb4bb (qr_exec+0x11db) #8 0x7d8e56 (sf_sql_execute+0x11a6) #9 0x7d995e (sf_sql_execute_w+0x17e) #10 0x7e261d (sf_sql_execute_wrapper+0x3d) #11 0xe29cec (future_wrapper+0x3fc) #12 0xe315ee (_thread_boot+0x11e) #13 0x7fc880d33609 (start_thread+0xd9) #14 0x7fc880b03353 (clone+0x43) ``` --- end body ---
936
[ -0.02461540885269642, -0.011930587701499462, -0.009366197511553764, 0.001600172952748835, 0.040207453072071075, 0.005163064692169428, -0.006616676691919565, 0.042511291801929474, 0.006438403390347958, 0.029895035549998283, -0.013013940304517746, -0.002912366297096014, -0.001013928558677435, ...
null
null
null
null
null
null
null
null
null
[ "schollz", "croc" ]
Hi avira antivirus allert version 9.6.13, 9.6.12, 9.6.11, but program not give false positive 9.6.10 Please can you fix this ? Best regards Fatih Çakır
avrira antivirus virus alert after 9.6.10 version
https://api.github.com/repos/schollz/croc/issues/677/comments
1
2024-02-29T19:59:54
2024-05-08T12:30:09Z
https://github.com/schollz/croc/issues/677
2,162,002,903
677
false
This is a GitHub Issue repo:croc owner:schollz Title : avrira antivirus virus alert after 9.6.10 version Issue date: --- start body --- Hi avira antivirus allert version 9.6.13, 9.6.12, 9.6.11, but program not give false positive 9.6.10 Please can you fix this ? Best regards Fatih Çakır --- end body ---
317
[ -0.05972383916378021, -0.017367392778396606, -0.008925777859985828, 0.027753466740250587, 0.0355781652033329, 0.01683637499809265, -0.0010327507043257356, 0.03988877683877945, 0.018585609272122383, 0.06028609350323677, 0.026956941932439804, -0.03992001339793205, 0.011713618412613869, -0.01...
null
null
null
null
null
null
null
null
null
[ "LibreDWG", "libredwg" ]
e.g.: ``` $ LD_LIBRARY_PATH=../src/.libs valgrind .libs/dwggrep -i tekst ../../test/test-data/example_r13.dwg ==1418971== Memcheck, a memory error detector ==1418971== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. ==1418971== Using Valgrind-3.18.1 and LibVEX; rerun with -h for copyright info ==1418971== Command: .libs/dwggrep -i tekst ../../test/test-data/example_r13.dwg ==1418971== ==1418971== Invalid read of size 16 ==1418971== at 0x657ACD9: ??? ==1418971== by 0x6A9741F: ??? ==1418971== Address 0x6a9742f is 15 bytes inside a block of size 18 alloc'd ==1418971== at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so) ==1418971== by 0x48DDA5C: bit_read_TV (bits.c:1705) ==1418971== by 0x4A71D90: dwg_decode_DICTIONARY_private (dwg.spec:3111) ==1418971== by 0x4A713DD: dwg_decode_DICTIONARY (dwg.spec:3028) ==1418971== by 0x4EB6A58: dwg_decode_add_object (decode.c:5341) ==1418971== by 0x48ED654: decode_R13_R2000 (decode.c:799) ==1418971== by 0x48E411F: dwg_decode (decode.c:225) ==1418971== by 0x48CF18B: dwg_read_file (dwg.c:261) ==1418971== by 0x11C3BA: main (dwggrep.c:2013) ==1418971== Invalid read of size 16 ==1418971== at 0x657ACD9: ??? ==1418971== by 0x6A9747F: ??? ==1418971== Address 0x6a9748f is 15 bytes inside a block of size 28 alloc'd ==1418971== at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so) ==1418971== by 0x48DDA5C: bit_read_TV (bits.c:1705) ==1418971== by 0x4A71D90: dwg_decode_DICTIONARY_private (dwg.spec:3111) ==1418971== by 0x4A713DD: dwg_decode_DICTIONARY (dwg.spec:3028) ==1418971== by 0x4EB6A58: dwg_decode_add_object (decode.c:5341) ==1418971== by 0x48ED654: decode_R13_R2000 (decode.c:799) ==1418971== by 0x48E411F: dwg_decode (decode.c:225) ==1418971== by 0x48CF18B: dwg_read_file (dwg.c:261) ==1418971== by 0x11C3BA: main (dwggrep.c:2013) ``` but many more.
valgrind errors in decode
https://api.github.com/repos/LibreDWG/libredwg/issues/701/comments
1
2023-04-21T08:42:42
2023-04-24T10:21:52Z
https://github.com/LibreDWG/libredwg/issues/701
1,678,132,423
701
false
This is a GitHub Issue repo:libredwg owner:LibreDWG Title : valgrind errors in decode Issue date: --- start body --- e.g.: ``` $ LD_LIBRARY_PATH=../src/.libs valgrind .libs/dwggrep -i tekst ../../test/test-data/example_r13.dwg ==1418971== Memcheck, a memory error detector ==1418971== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. ==1418971== Using Valgrind-3.18.1 and LibVEX; rerun with -h for copyright info ==1418971== Command: .libs/dwggrep -i tekst ../../test/test-data/example_r13.dwg ==1418971== ==1418971== Invalid read of size 16 ==1418971== at 0x657ACD9: ??? ==1418971== by 0x6A9741F: ??? ==1418971== Address 0x6a9742f is 15 bytes inside a block of size 18 alloc'd ==1418971== at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so) ==1418971== by 0x48DDA5C: bit_read_TV (bits.c:1705) ==1418971== by 0x4A71D90: dwg_decode_DICTIONARY_private (dwg.spec:3111) ==1418971== by 0x4A713DD: dwg_decode_DICTIONARY (dwg.spec:3028) ==1418971== by 0x4EB6A58: dwg_decode_add_object (decode.c:5341) ==1418971== by 0x48ED654: decode_R13_R2000 (decode.c:799) ==1418971== by 0x48E411F: dwg_decode (decode.c:225) ==1418971== by 0x48CF18B: dwg_read_file (dwg.c:261) ==1418971== by 0x11C3BA: main (dwggrep.c:2013) ==1418971== Invalid read of size 16 ==1418971== at 0x657ACD9: ??? ==1418971== by 0x6A9747F: ??? ==1418971== Address 0x6a9748f is 15 bytes inside a block of size 28 alloc'd ==1418971== at 0x4848899: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so) ==1418971== by 0x48DDA5C: bit_read_TV (bits.c:1705) ==1418971== by 0x4A71D90: dwg_decode_DICTIONARY_private (dwg.spec:3111) ==1418971== by 0x4A713DD: dwg_decode_DICTIONARY (dwg.spec:3028) ==1418971== by 0x4EB6A58: dwg_decode_add_object (decode.c:5341) ==1418971== by 0x48ED654: decode_R13_R2000 (decode.c:799) ==1418971== by 0x48E411F: dwg_decode (decode.c:225) ==1418971== by 0x48CF18B: dwg_read_file (dwg.c:261) ==1418971== by 0x11C3BA: main (dwggrep.c:2013) ``` but many more. 
--- end body ---
2,120
[ -0.02852575108408928, 0.028313100337982178, -0.0068618254736065865, 0.01030602864921093, 0.021781643852591515, 0.020460164174437523, 0.010716143995523453, 0.028464993461966515, 0.0007433327846229076, 0.017634930089116096, -0.015432462096214294, -0.007461808156222105, 0.02817639522254467, -...
null
null
null
null
null
null
null
null
null
[ "jerryscript-project", "jerryscript" ]
###### JerryScript revision Commit: [a6ab5e9](https://github.com/jerryscript-project/jerryscript/commit/a6ab5e9abed70cdedf9f4e9c1dc379eb762ebf64) Version: v3.0.0 ###### Build platform Ubuntu 18.04.5 LTS (Linux 4.19.128-microsoft-standard x86_64) Ubuntu 18.04.5 LTS (Linux 5.4.0-44-generic x86_64) ###### Build steps ```bash python ./tools/build.py --clean --debug --compile-flag=-fsanitize=address --compile-flag=-m32 --compile-flag=-g --strip=off --lto=off --logging=on --line-info=on --error-message=on --system-allocator=on --stack-limit=20 ``` ###### Test case <details> <summary>poc.js</summary> <pre><code> ```javascript function JSEtset() { const v2 = String.fromCodePoint(1337); const v4 = v2.padEnd(1337, v2); const v6 = { {: 0, e: String }; const v7 = v6[v4]; } JSEtset(); ``` </code></pre> </details> ​ ###### Execution steps & Output ```bash $ ./jerryscript/build/bin/jerry poc.js ICE: Assertion 'scope_stack_p > context_p->scope_stack_p' failed at /jerryscript/jerry-core/parser/js/js-scanner-util.c(scanner_literal_is_created):3112. Error: ERR_FAILED_INTERNAL_ASSERTION [1] 22037 abort jerry poc.js ```
Assertion 'scope_stack_p > context_p->scope_stack_p' failed at jerry-core/parser/js/js-scanner-util.c(scanner_literal_is_created):3112.
https://api.github.com/repos/jerryscript-project/jerryscript/issues/4918/comments
1
2022-01-04T05:59:16
2022-01-04T10:39:03Z
https://github.com/jerryscript-project/jerryscript/issues/4918
1,093,036,586
4,918
false
This is a GitHub Issue repo:jerryscript owner:jerryscript-project Title : Assertion 'scope_stack_p > context_p->scope_stack_p' failed at jerry-core/parser/js/js-scanner-util.c(scanner_literal_is_created):3112. Issue date: --- start body --- ###### JerryScript revision Commit: [a6ab5e9](https://github.com/jerryscript-project/jerryscript/commit/a6ab5e9abed70cdedf9f4e9c1dc379eb762ebf64) Version: v3.0.0 ###### Build platform Ubuntu 18.04.5 LTS (Linux 4.19.128-microsoft-standard x86_64) Ubuntu 18.04.5 LTS (Linux 5.4.0-44-generic x86_64) ###### Build steps ```bash python ./tools/build.py --clean --debug --compile-flag=-fsanitize=address --compile-flag=-m32 --compile-flag=-g --strip=off --lto=off --logging=on --line-info=on --error-message=on --system-allocator=on --stack-limit=20 ``` ###### Test case <details> <summary>poc.js</summary> <pre><code> ```javascript function JSEtset() { const v2 = String.fromCodePoint(1337); const v4 = v2.padEnd(1337, v2); const v6 = { {: 0, e: String }; const v7 = v6[v4]; } JSEtset(); ``` </code></pre> </details> ​ ###### Execution steps & Output ```bash $ ./jerryscript/build/bin/jerry poc.js ICE: Assertion 'scope_stack_p > context_p->scope_stack_p' failed at /jerryscript/jerry-core/parser/js/js-scanner-util.c(scanner_literal_is_created):3112. Error: ERR_FAILED_INTERNAL_ASSERTION [1] 22037 abort jerry poc.js ``` --- end body ---
1,484
[ 0.002925133565440774, 0.005028787534683943, -0.010494217276573181, 0.008755048736929893, 0.012618223205208778, -0.005942776333540678, 0.0015902292216196656, 0.0430721752345562, -0.03771405667066574, 0.020825618878006935, 0.003724411129951477, 0.0028455760329961777, 0.01311407145112753, 0.0...
null
null
null
null
null
null
null
null
null
[ "slims", "slims9_bulian" ]
testing issue
Test Issue
https://api.github.com/repos/slims/slims9_bulian/issues/234/comments
0
2024-05-02T08:22:50
2024-05-02T08:39:40Z
https://github.com/slims/slims9_bulian/issues/234
2,274,906,931
234
false
This is a GitHub Issue repo:slims9_bulian owner:slims Title : Test Issue Issue date: --- start body --- testing issue --- end body ---
136
[ -0.007967821322381496, 0.007839241996407509, -0.021402373909950256, 0.008195947855710983, 0.03948654979467392, 0.009100156836211681, -0.007540603633970022, 0.05843345820903778, -0.008444813080132008, 0.016823261976242065, -0.03520607575774193, 0.011339940130710602, 0.006727645639330149, 0....
null
null
null
null
null
null
null
null
null
[ "kubernetes", "kubernetes" ]
### What would you like to be added? As of go 1.22, for string to bytes conversion, we can replace the usage of `unsafe.Slice(unsafe.StringData(s), len(s))` with type casting `[]bytes(str)`, without the worry of losing performance. As of go 1.22, string to bytes conversion `[]bytes(str)` is faster than using the `unsafe` package. Both methods have 0 memory allocation now. I saw at least two places in the codebase still using the `unsafe` way: + https://github.com/kubernetes/kubernetes/blob/e342ab05bb903519350cedac784898529eaef06b/staging/src/k8s.io/apiserver/pkg/authentication/token/cache/cached_token_authenticator.go#L277-L286 + https://github.com/kubernetes/kubernetes/blob/e342ab05bb903519350cedac784898529eaef06b/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/envelope.go#L511-L520 Here's my benchmark results, comparing two ways of conversion: ```bash ╰─○ go test -v -run=none -bench=. -benchmem=true goos: darwin goarch: amd64 pkg: example.com/m/v2 cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz BenchmarkStringToBytesUnsafe BenchmarkStringToBytesUnsafe-12 1000000000 0.5636 ns/op 0 B/op 0 allocs/op BenchmarkStringToBytesCasting BenchmarkStringToBytesCasting-12 1000000000 0.2548 ns/op 0 B/op 0 allocs/op PASS ok example.com/m/v2 1.448s ``` My test code: ``` package main import ( "testing" "unsafe" ) const ( str = "some random string" ) func toBytes(s string) []byte { // unsafe.StringData is unspecified for the empty string, so we provide a strict interpretation if len(s) == 0 { return nil } // Copied from go 1.20.1 os.File.WriteString // https://github.com/golang/go/blob/202a1a57064127c3f19d96df57b9f9586145e21c/src/os/file.go#L246 return unsafe.Slice(unsafe.StringData(s), len(s)) } func toBytesRaw(s string) []byte { return []byte(s) } func BenchmarkStringToBytesUnsafe(b *testing.B) { for i := 0; i < b.N; i++ { _ = toBytes(str) } } func BenchmarkStringToBytesCasting(b *testing.B) { for i := 0; i < b.N; i++ { _ = toBytesRaw(str) } } ``` ### Why is 
this needed? + Kubernetes as of now is built with go 1.22, in go 1.22 type casting is faster than unsafe, regarding **only** string to bytes conversion + We can make string to bytes conversion faster with type casting + We don't need to be 'unsafe' anymore
As of go 1.22, there's no need to use the unsafe package for string to bytes conversion
https://api.github.com/repos/kubernetes/kubernetes/issues/124656/comments
6
2024-05-01T15:47:09
2024-05-30T16:45:47Z
https://github.com/kubernetes/kubernetes/issues/124656
2,273,725,693
124,656
false
This is a GitHub Issue repo:kubernetes owner:kubernetes Title : As of go 1.22, there's no need to use the unsafe package for string to bytes conversion Issue date: --- start body --- ### What would you like to be added? As of go 1.22, for string to bytes conversion, we can replace the usage of `unsafe.Slice(unsafe.StringData(s), len(s))` with type casting `[]bytes(str)`, without the worry of losing performance. As of go 1.22, string to bytes conversion `[]bytes(str)` is faster than using the `unsafe` package. Both methods have 0 memory allocation now. I saw at least two places in the codebase still using the `unsafe` way: + https://github.com/kubernetes/kubernetes/blob/e342ab05bb903519350cedac784898529eaef06b/staging/src/k8s.io/apiserver/pkg/authentication/token/cache/cached_token_authenticator.go#L277-L286 + https://github.com/kubernetes/kubernetes/blob/e342ab05bb903519350cedac784898529eaef06b/staging/src/k8s.io/apiserver/pkg/storage/value/encrypt/envelope/kmsv2/envelope.go#L511-L520 Here's my benchmark results, comparing two ways of conversion: ```bash ╰─○ go test -v -run=none -bench=. 
-benchmem=true goos: darwin goarch: amd64 pkg: example.com/m/v2 cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz BenchmarkStringToBytesUnsafe BenchmarkStringToBytesUnsafe-12 1000000000 0.5636 ns/op 0 B/op 0 allocs/op BenchmarkStringToBytesCasting BenchmarkStringToBytesCasting-12 1000000000 0.2548 ns/op 0 B/op 0 allocs/op PASS ok example.com/m/v2 1.448s ``` My test code: ``` package main import ( "testing" "unsafe" ) const ( str = "some random string" ) func toBytes(s string) []byte { // unsafe.StringData is unspecified for the empty string, so we provide a strict interpretation if len(s) == 0 { return nil } // Copied from go 1.20.1 os.File.WriteString // https://github.com/golang/go/blob/202a1a57064127c3f19d96df57b9f9586145e21c/src/os/file.go#L246 return unsafe.Slice(unsafe.StringData(s), len(s)) } func toBytesRaw(s string) []byte { return []byte(s) } func BenchmarkStringToBytesUnsafe(b *testing.B) { for i := 0; i < b.N; i++ { _ = toBytes(str) } } func BenchmarkStringToBytesCasting(b *testing.B) { for i := 0; i < b.N; i++ { _ = toBytesRaw(str) } } ``` ### Why is this needed? + Kubernetes as of now is built with go 1.22, in go 1.22 type casting is faster than unsafe, regarding **only** string to bytes conversion + We can make string to bytes conversion faster with type casting + We don't need to be 'unsafe' anymore --- end body ---
2,657
[ 0.013339736498892307, -0.01756463758647442, -0.018894700333476067, 0.007315339520573616, 0.019403252750635147, -0.0018956639105454087, -0.04063208028674126, 0.04715199023485184, -0.012739904224872589, -0.04180566594004631, 0.0028312711510807276, 0.013730931095778942, 0.023849831894040108, ...
null
null
null
null
null
null
null
null
null
[ "ImageMagick", "ImageMagick" ]
### ImageMagick version 7 ### Operating system Windows ### Operating system, version and so on Windows 11 22H3, Windows Package Manager v1.6.3482 ### Description When searching for ImageMagick in winget there are no results. Looking at the winget pkgs github the manifest was removed by the bot about 2 weeks ago. Any plans to address this? I've searched closed issues and see no mention of this anywhere. ### Steps to Reproduce winget search ImageMagick.ImageMagick This returns no results. ### Images _No response_
ImageMagick.ImageMagick missing from winget
https://api.github.com/repos/ImageMagick/ImageMagick/issues/7029/comments
2
2024-01-15T02:00:25
2024-01-15T18:12:25Z
https://github.com/ImageMagick/ImageMagick/issues/7029
2,081,110,774
7,029
false
This is a GitHub Issue repo:ImageMagick owner:ImageMagick Title : ImageMagick.ImageMagick missing from winget Issue date: --- start body --- ### ImageMagick version 7 ### Operating system Windows ### Operating system, version and so on Windows 11 22H3, Windows Package Manager v1.6.3482 ### Description When searching for ImageMagick in winget there are no results. Looking at the winget pkgs github the manifest was removed by the bot about 2 weeks ago. Any plans to address this? I've searched closed issues and see no mention of this anywhere. ### Steps to Reproduce winget search ImageMagick.ImageMagick This returns no results. ### Images _No response_ --- end body ---
692
[ -0.024094685912132263, 0.016017412766814232, -0.016799084842205048, 0.0203371774405241, 0.003342675045132637, 0.01841728202998638, -0.01153994258493185, 0.04001610353589058, -0.01903439126908779, 0.031897690147161484, 0.010079450905323029, -0.02486264519393444, 0.012445036321878433, 0.0052...
CVE-2022-43245
2022-11-02T14:15:14.123000
Libde265 v1.0.8 was discovered to contain a segmentation violation via apply_sao_internal<unsigned short> in sao.cc. This vulnerability allows attackers to cause a Denial of Service (DoS) via a crafted video file.
{ "cvssMetricV2": null, "cvssMetricV30": null, "cvssMetricV31": [ { "cvssData": { "attackComplexity": "LOW", "attackVector": "NETWORK", "availabilityImpact": "HIGH", "baseScore": 6.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H", "version": "3.1" }, "exploitabilityScore": 2.8, "impactScore": 3.6, "source": "nvd@nist.gov", "type": "Primary" } ] }
[ { "source": "cve@mitre.org", "tags": [ "Exploit", "Issue Tracking", "Third Party Advisory" ], "url": "https://github.com/strukturag/libde265/issues/352" }, { "source": "cve@mitre.org", "tags": [ "Mailing List", "Third Party Advisory" ], "url": "https...
[ { "nodes": [ { "cpeMatch": [ { "criteria": "cpe:2.3:a:struktur:libde265:1.0.8:*:*:*:*:*:*:*", "matchCriteriaId": "E86A03B2-D0E9-4887-AD06-FBA3F3500FC3", "versionEndExcluding": null, "versionEndIncluding": null, "versionStartExcl...
https://github.com/strukturag/libde265/issues/352
[ "Exploit", "Issue Tracking", "Third Party Advisory" ]
github.com
[ "strukturag", "libde265" ]
### Description SEGV /libde265/libde265/sao.cc:231 in void apply_sao_internal<unsigned short>(de265_image*, int, int, slice_segment_header const*, int, int, int, unsigned short const*, int, unsigned short*, int) ### Version ```shell $ ./dec265 -h dec265 v1.0.8 -------------- usage: dec265 [options] videofile.bin The video file must be a raw bitstream, or a stream with NAL units (option -n). options: -q, --quiet do not show decoded image -t, --threads N set number of worker threads (0 - no threading) -c, --check-hash perform hash check -n, --nal input is a stream with 4-byte length prefixed NAL units -f, --frames N set number of frames to process -o, --output write YUV reconstruction -d, --dump dump headers -0, --noaccel do not use any accelerated code (SSE) -v, --verbose increase verbosity level (up to 3 times) -L, --no-logging disable logging -B, --write-bytestream FILENAME write raw bytestream (from NAL input) -m, --measure YUV compute PSNRs relative to reference YUV -T, --highest-TID select highest temporal sublayer to decode --disable-deblocking disable deblocking filter --disable-sao disable sample-adaptive offset filter -h, --help show help ``` ### Replay ```shell git clone https://github.com/strukturag/libde265.git cd libde265 mkdir build cd build cmake ../ -DCMAKE_CXX_FLAGS="-fsanitize=address" make -j$(nproc) ./dec265/dec265 poc18 ``` ### ASAN ```Shell WARNING: non-existing PPS referenced WARNING: non-existing PPS referenced WARNING: slice header invalid WARNING: slice header invalid WARNING: slice header invalid ASAN:DEADLYSIGNAL ================================================================= ==24487==ERROR: AddressSanitizer: SEGV on unknown address 0x61106a5b8d93 (pc 0x55dd23192a5c bp 0x0c2c0000008e sp 0x7fff32e6f1c0 T0) ==24487==The signal is caused by a READ memory access. 
#0 0x55dd23192a5b in void apply_sao_internal<unsigned short>(de265_image*, int, int, slice_segment_header const*, int, int, int, unsigned short const*, int, unsigned short*, int) /libde265/libde265/sao.cc:231 #1 0x55dd2318b477 in void apply_sao<unsigned char>(de265_image*, int, int, slice_segment_header const*, int, int, int, unsigned char const*, int, unsigned char*, int) /libde265/libde265/sao.cc:270 #2 0x55dd2318b477 in apply_sample_adaptive_offset_sequential(de265_image*) /libde265/libde265/sao.cc:362 #3 0x55dd230bd468 in decoder_context::run_postprocessing_filters_sequential(de265_image*) /libde265/libde265/decctx.cc:1898 #4 0x55dd230bd468 in decoder_context::decode_some(bool*) /libde265/libde265/decctx.cc:778 #5 0x55dd230ce78b in decoder_context::read_slice_NAL(bitreader&, NAL_unit*, nal_header&) /libde265/libde265/decctx.cc:697 #6 0x55dd230d0729 in decoder_context::decode_NAL(NAL_unit*) /libde265/libde265/decctx.cc:1239 #7 0x55dd230d15a9 in decoder_context::decode(int*) /libde265/libde265/decctx.cc:1327 #8 0x55dd23088be5 in main /libde265/dec265/dec265.cc:764 #9 0x7fed8173ac86 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21c86) #10 0x55dd2308b0f9 in _start (/libde265/dec265/dec265+0x1b0f9) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /libde265/libde265/sao.cc:231 in void apply_sao_internal<unsigned short>(de265_image*, int, int, slice_segment_header const*, int, int, int, unsigned short const*, int, unsigned short*, int) ==24487==ABORTING ``` ### POC https://github.com/FDU-Sec/poc/blob/main/libde265/poc18 ### Environment ```shell Ubuntu 18.04.5 LTS Clang 10.0.1 gcc 7.5.0 ``` ### Credit Peng Deng ([Fudan University](https://secsys.fudan.edu.cn))
SEGV sao.cc: in void apply_sao_internal<unsigned short>
https://api.github.com/repos/strukturag/libde265/issues/352/comments
2
2022-10-10T15:37:10
2023-01-24T16:03:22Z
https://github.com/strukturag/libde265/issues/352
1,403,369,863
352
true
This is a GitHub Issue repo:libde265 owner:strukturag Title : SEGV sao.cc: in void apply_sao_internal<unsigned short> Issue date: --- start body --- ### Description SEGV /libde265/libde265/sao.cc:231 in void apply_sao_internal<unsigned short>(de265_image*, int, int, slice_segment_header const*, int, int, int, unsigned short const*, int, unsigned short*, int) ### Version ```shell $ ./dec265 -h dec265 v1.0.8 -------------- usage: dec265 [options] videofile.bin The video file must be a raw bitstream, or a stream with NAL units (option -n). options: -q, --quiet do not show decoded image -t, --threads N set number of worker threads (0 - no threading) -c, --check-hash perform hash check -n, --nal input is a stream with 4-byte length prefixed NAL units -f, --frames N set number of frames to process -o, --output write YUV reconstruction -d, --dump dump headers -0, --noaccel do not use any accelerated code (SSE) -v, --verbose increase verbosity level (up to 3 times) -L, --no-logging disable logging -B, --write-bytestream FILENAME write raw bytestream (from NAL input) -m, --measure YUV compute PSNRs relative to reference YUV -T, --highest-TID select highest temporal sublayer to decode --disable-deblocking disable deblocking filter --disable-sao disable sample-adaptive offset filter -h, --help show help ``` ### Replay ```shell git clone https://github.com/strukturag/libde265.git cd libde265 mkdir build cd build cmake ../ -DCMAKE_CXX_FLAGS="-fsanitize=address" make -j$(nproc) ./dec265/dec265 poc18 ``` ### ASAN ```Shell WARNING: non-existing PPS referenced WARNING: non-existing PPS referenced WARNING: slice header invalid WARNING: slice header invalid WARNING: slice header invalid ASAN:DEADLYSIGNAL ================================================================= ==24487==ERROR: AddressSanitizer: SEGV on unknown address 0x61106a5b8d93 (pc 0x55dd23192a5c bp 0x0c2c0000008e sp 0x7fff32e6f1c0 T0) ==24487==The signal is caused by a READ memory access. 
#0 0x55dd23192a5b in void apply_sao_internal<unsigned short>(de265_image*, int, int, slice_segment_header const*, int, int, int, unsigned short const*, int, unsigned short*, int) /libde265/libde265/sao.cc:231 #1 0x55dd2318b477 in void apply_sao<unsigned char>(de265_image*, int, int, slice_segment_header const*, int, int, int, unsigned char const*, int, unsigned char*, int) /libde265/libde265/sao.cc:270 #2 0x55dd2318b477 in apply_sample_adaptive_offset_sequential(de265_image*) /libde265/libde265/sao.cc:362 #3 0x55dd230bd468 in decoder_context::run_postprocessing_filters_sequential(de265_image*) /libde265/libde265/decctx.cc:1898 #4 0x55dd230bd468 in decoder_context::decode_some(bool*) /libde265/libde265/decctx.cc:778 #5 0x55dd230ce78b in decoder_context::read_slice_NAL(bitreader&, NAL_unit*, nal_header&) /libde265/libde265/decctx.cc:697 #6 0x55dd230d0729 in decoder_context::decode_NAL(NAL_unit*) /libde265/libde265/decctx.cc:1239 #7 0x55dd230d15a9 in decoder_context::decode(int*) /libde265/libde265/decctx.cc:1327 #8 0x55dd23088be5 in main /libde265/dec265/dec265.cc:764 #9 0x7fed8173ac86 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21c86) #10 0x55dd2308b0f9 in _start (/libde265/dec265/dec265+0x1b0f9) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /libde265/libde265/sao.cc:231 in void apply_sao_internal<unsigned short>(de265_image*, int, int, slice_segment_header const*, int, int, int, unsigned short const*, int, unsigned short*, int) ==24487==ABORTING ``` ### POC https://github.com/FDU-Sec/poc/blob/main/libde265/poc18 ### Environment ```shell Ubuntu 18.04.5 LTS Clang 10.0.1 gcc 7.5.0 ``` ### Credit Peng Deng ([Fudan University](https://secsys.fudan.edu.cn)) --- end body ---
3,966
[ -0.015043802559375763, 0.02425888180732727, -0.003369497135281563, -0.0006616314058192074, 0.054001856595277786, -0.010069158859550953, -0.04111573100090027, 0.042044732719659805, -0.014317085035145283, 0.041055794805288315, -0.03557170182466507, 0.010166553780436516, 0.021636703982949257, ...
null
null
null
null
null
null
null
null
null
[ "kubernetes", "kubernetes" ]
### Failure cluster [ea6679417165f10786e6](https://go.k8s.io/triage#ea6679417165f10786e6) ##### Error text: ``` error during ./hack/e2e-internal/e2e-up.sh: exit status 1 ``` #### Recent failures: [4/23/2024, 5:00:35 AM ci-kubernetes-e2e-gce-device-plugin-gpu-1-29](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-device-plugin-gpu-1-29/1782696055729557504) [4/23/2024, 4:58:35 AM ci-kubernetes-gce-conformance-latest-1-29](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-gce-conformance-latest-1-29/1782695552371134464) [4/23/2024, 4:48:50 AM ci-kubernetes-e2e-gce-stable1-latest-gci-kubectl-skew](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-stable1-latest-gci-kubectl-skew/1782692532824576000) [4/23/2024, 4:45:45 AM ci-kubernetes-e2e-gce-new-master-gci-kubectl-skew](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-new-master-gci-kubectl-skew/1782692282072305664) [4/23/2024, 4:45:36 AM ci-kubernetes-e2e-gce-cos-k8sstable1-ingress](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable1-ingress/1782692281770315776) /kind failing-test <!-- If this is a flake, please add: /kind flake --> <!-- Please assign a SIG using: /sig SIG-NAME --> ![image](https://github.com/kubernetes/kubernetes/assets/23304/eb15bdc4-8a1c-441d-893d-98dd565fd11f)
Failure cluster [ea667941...] `cos-97-lts` AWOL from `cos-cloud` causing a lot of CI jobs to fail since 4/16
https://api.github.com/repos/kubernetes/kubernetes/issues/124478/comments
7
2024-04-23T13:21:43
2024-04-24T21:25:44Z
https://github.com/kubernetes/kubernetes/issues/124478
2,258,874,546
124,478
false
This is a GitHub Issue repo:kubernetes owner:kubernetes Title : Failure cluster [ea667941...] `cos-97-lts` AWOL from `cos-cloud` causing a lot of CI jobs to fail since 4/16 Issue date: --- start body --- ### Failure cluster [ea6679417165f10786e6](https://go.k8s.io/triage#ea6679417165f10786e6) ##### Error text: ``` error during ./hack/e2e-internal/e2e-up.sh: exit status 1 ``` #### Recent failures: [4/23/2024, 5:00:35 AM ci-kubernetes-e2e-gce-device-plugin-gpu-1-29](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-device-plugin-gpu-1-29/1782696055729557504) [4/23/2024, 4:58:35 AM ci-kubernetes-gce-conformance-latest-1-29](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-gce-conformance-latest-1-29/1782695552371134464) [4/23/2024, 4:48:50 AM ci-kubernetes-e2e-gce-stable1-latest-gci-kubectl-skew](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-stable1-latest-gci-kubectl-skew/1782692532824576000) [4/23/2024, 4:45:45 AM ci-kubernetes-e2e-gce-new-master-gci-kubectl-skew](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-new-master-gci-kubectl-skew/1782692282072305664) [4/23/2024, 4:45:36 AM ci-kubernetes-e2e-gce-cos-k8sstable1-ingress](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-cos-k8sstable1-ingress/1782692281770315776) /kind failing-test <!-- If this is a flake, please add: /kind flake --> <!-- Please assign a SIG using: /sig SIG-NAME --> ![image](https://github.com/kubernetes/kubernetes/assets/23304/eb15bdc4-8a1c-441d-893d-98dd565fd11f) --- end body ---
1,626
[ -0.031868454068899155, -0.030212754383683205, -0.002943047322332859, -0.017893727868795395, 0.03478492051362991, -0.02482033148407936, -0.004496216773986816, 0.03104819916188717, -0.0022329185158014297, -0.03997987508773804, -0.012296241708099842, -0.009835474193096161, -0.005483561661094427...
null
null
null
null
null
null
null
null
null
[ "LibreDWG", "libredwg" ]
Example file: [line.dwg.gz](https://github.com/LibreDWG/libredwg/files/12373403/line.dwg.gz) Decoding output: ``` =======> Thumbnail: 3165 ``` Detection of thumbnail: ``` 00000c5d 3080 DWG_SENTINEL_THUMBNAIL_BEGIN [..16] 000050f8 3081 DWG_SENTINEL_THUMBNAIL_END [..16] ``` Begin address is right, but searching for a second sentinel doesn't work.
Don't decode AC1014 thumbnail
https://api.github.com/repos/LibreDWG/libredwg/issues/805/comments
5
2023-08-17T20:40:10
2023-08-18T14:16:16Z
https://github.com/LibreDWG/libredwg/issues/805
1,855,675,681
805
false
This is a GitHub Issue repo:libredwg owner:LibreDWG Title : Don't decode AC1014 thumbnail Issue date: --- start body --- Example file: [line.dwg.gz](https://github.com/LibreDWG/libredwg/files/12373403/line.dwg.gz) Decoding output: ``` =======> Thumbnail: 3165 ``` Detection of thumbnail: ``` 00000c5d 3080 DWG_SENTINEL_THUMBNAIL_BEGIN [..16] 000050f8 3081 DWG_SENTINEL_THUMBNAIL_END [..16] ``` Begin address is right, but searching for a second sentinel doesn't work. --- end body ---
513
[ 0.005162316840142012, -0.00517304940149188, -0.008271154947578907, 0.02042030729353428, 0.026602208614349365, 0.028977662324905396, 0.031424663960933685, 0.06536788493394852, -0.00850726943463087, 0.0036132640670984983, -0.012857495807111263, -0.02810475416481495, 0.01740090735256672, -0.0...
null
null
null
null
null
null
null
null
null
[ "kubernetes", "ingress-nginx" ]
<!-- What do you want to happen? --> Sometimes, accessories to the installation may be manifests to be installed (for example, NetworkPolicies or Ingresses). It would be nice to be able to do this directly via the Chart using the approach that [Grafana](https://github.com/grafana/helm-charts/blob/grafana-7.3.10/charts/grafana/values.yaml#L1302-L1311), [Prometheus](https://github.com/prometheus-community/helm-charts/blob/prometheus-25.20.1/charts/prometheus/values.yaml#L1208-L1216) or [OAuth2 Proxy](https://github.com/oauth2-proxy/manifests/blob/oauth2-proxy-7.5.4/helm/oauth2-proxy/values.yaml#L439-L466) also uses. Ref: https://github.com/helm/helm/issues/12653
Allow the Chart to create extra manifest
https://api.github.com/repos/kubernetes/ingress-nginx/issues/11351/comments
4
2024-05-08T10:44:21
2024-05-23T10:40:29Z
https://github.com/kubernetes/ingress-nginx/issues/11351
2,285,284,526
11,351
false
This is a GitHub Issue repo:ingress-nginx owner:kubernetes Title : Allow the Chart to create extra manifest Issue date: --- start body --- <!-- What do you want to happen? --> Sometimes, accessories to the installation may be manifests to be installed (for example, NetworkPolicies or Ingresses). It would be nice to be able to do this directly via the Chart using the approach that [Grafana](https://github.com/grafana/helm-charts/blob/grafana-7.3.10/charts/grafana/values.yaml#L1302-L1311), [Prometheus](https://github.com/prometheus-community/helm-charts/blob/prometheus-25.20.1/charts/prometheus/values.yaml#L1208-L1216) or [OAuth2 Proxy](https://github.com/oauth2-proxy/manifests/blob/oauth2-proxy-7.5.4/helm/oauth2-proxy/values.yaml#L439-L466) also uses. Ref: https://github.com/helm/helm/issues/12653 --- end body ---
835
[ 0.005353325046598911, -0.011364640668034554, -0.02328714355826378, -0.011121470481157303, 0.022228635847568512, -0.03653277829289436, -0.03827788308262825, 0.03859257698059082, -0.029209058731794357, -0.013016769662499428, 0.006594209466129541, 0.0228723231703043, -0.03032478131353855, -0....
CVE-2023-49462
2023-12-07T20:15:38.190000
libheif v1.17.5 was discovered to contain a segmentation violation via the component /libheif/exif.cc.
{ "cvssMetricV2": null, "cvssMetricV30": null, "cvssMetricV31": [ { "cvssData": { "attackComplexity": "LOW", "attackVector": "NETWORK", "availabilityImpact": "HIGH", "baseScore": 8.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H", "version": "3.1" }, "exploitabilityScore": 2.8, "impactScore": 5.9, "source": "nvd@nist.gov", "type": "Primary" } ] }
[ { "source": "cve@mitre.org", "tags": [ "Exploit", "Issue Tracking", "Patch" ], "url": "https://github.com/strukturag/libheif/issues/1043" } ]
[ { "nodes": [ { "cpeMatch": [ { "criteria": "cpe:2.3:a:struktur:libheif:1.17.5:*:*:*:*:*:*:*", "matchCriteriaId": "8776EE2B-B4B8-4509-BC0C-3668329FF6C9", "versionEndExcluding": null, "versionEndIncluding": null, "versionStartExcl...
https://github.com/strukturag/libheif/issues/1043
[ "Exploit", "Issue Tracking", "Patch" ]
github.com
[ "strukturag", "libheif" ]
### Description SEGV `libheif/libheif/exif.cc:55` in `read16` ### Version ``` heif-convert libheif version: 1.17.5 ------------------------------------------- Usage: heif-convert [options] <input-image> [output-image] The program determines the output file format from the output filename suffix. These suffixes are recognized: jpg, jpeg, png, y4m. If no output filename is specified, 'jpg' is used. Options: -h, --help show help -v, --version show version -q, --quality quality (for JPEG output) -o, --output FILENAME write output to FILENAME (optional) -d, --decoder ID use a specific decoder (see --list-decoders) --with-aux also write auxiliary images (e.g. depth images) --with-xmp write XMP metadata to file (output filename with .xmp suffix) --with-exif write EXIF metadata to file (output filename with .exif suffix) --skip-exif-offset skip EXIF metadata offset bytes --no-colons replace ':' characters in auxiliary image filenames with '_' --list-decoders list all available decoders (built-in and plugins) --quiet do not output status messages to console -C, --chroma-upsampling ALGO Force chroma upsampling algorithm (nn = nearest-neighbor / bilinear) --png-compression-level # Set to integer between 0 (fastest) and 9 (best). Use -1 for default. ``` ### Replay ``` cd libheif mkdir build && cd build CC="gcc -fsanitize=address" CXX="g++ -fsanitize=address" cmake --preset=release .. make -j ./examples/heif-convert ./poc test.png ``` ### ASAN ``` ==1926429==ERROR: AddressSanitizer: SEGV on unknown address 0x60b080000729 (pc 0x55abe2b1012c bp 0x000000000000 sp 0x7ffe0b2df5a0 T0) ==1926429==The signal is caused by a READ memory access. 
#0 0x55abe2b1012c in read16 /eva/put/libheif/libheif/exif.cc:55 #1 0x55abe2b1012c in find_exif_tag /eva/put/libheif/libheif/exif.cc:103 #2 0x55abe2b1136b in modify_exif_tag_if_it_exists(unsigned char*, int, unsigned short, unsigned short) /eva/put/libheif/libheif/exif.cc:124 #3 0x55abe2b1136b in modify_exif_orientation_tag_if_it_exists(unsigned char*, int, unsigned short) /eva/put/libheif/libheif/exif.cc:140 #4 0x55abe2b16c75 in PngEncoder::Encode(heif_image_handle const*, heif_image const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /eva/put/libheif/examples/encoder_png.cc:126 #5 0x55abe2b00c99 in main /eva/put/libheif/examples/heif_convert.cc:509 #6 0x7fb15dc29d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 #7 0x7fb15dc29e3f in __libc_start_main_impl ../csu/libc-start.c:392 #8 0x55abe2b09254 in _start (/eva/asan-bin/NestFuzz/libheif/heif-convert+0x15254) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /eva/put/libheif/libheif/exif.cc:55 in read16 ==1926429==ABORTING ``` ### POC [poc](https://github.com/fdu-sec/poc/raw/main/libheif/poc2) ### Environment ``` Description: Ubuntu 22.04.2 LTS gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 ``` ### Credit Yuchuan Meng ([Fudan University](https://secsys.fudan.edu.cn/))
SEGV libheif/libheif/exif.cc:55 in read16
https://api.github.com/repos/strukturag/libheif/issues/1043/comments
2
2023-11-22T09:15:02
2023-12-14T15:09:47Z
https://github.com/strukturag/libheif/issues/1043
2,005,894,442
1,043
true
This is a GitHub Issue repo:libheif owner:strukturag Title : SEGV libheif/libheif/exif.cc:55 in read16 Issue date: --- start body --- ### Description SEGV `libheif/libheif/exif.cc:55` in `read16` ### Version ``` heif-convert libheif version: 1.17.5 ------------------------------------------- Usage: heif-convert [options] <input-image> [output-image] The program determines the output file format from the output filename suffix. These suffixes are recognized: jpg, jpeg, png, y4m. If no output filename is specified, 'jpg' is used. Options: -h, --help show help -v, --version show version -q, --quality quality (for JPEG output) -o, --output FILENAME write output to FILENAME (optional) -d, --decoder ID use a specific decoder (see --list-decoders) --with-aux also write auxiliary images (e.g. depth images) --with-xmp write XMP metadata to file (output filename with .xmp suffix) --with-exif write EXIF metadata to file (output filename with .exif suffix) --skip-exif-offset skip EXIF metadata offset bytes --no-colons replace ':' characters in auxiliary image filenames with '_' --list-decoders list all available decoders (built-in and plugins) --quiet do not output status messages to console -C, --chroma-upsampling ALGO Force chroma upsampling algorithm (nn = nearest-neighbor / bilinear) --png-compression-level # Set to integer between 0 (fastest) and 9 (best). Use -1 for default. ``` ### Replay ``` cd libheif mkdir build && cd build CC="gcc -fsanitize=address" CXX="g++ -fsanitize=address" cmake --preset=release .. make -j ./examples/heif-convert ./poc test.png ``` ### ASAN ``` ==1926429==ERROR: AddressSanitizer: SEGV on unknown address 0x60b080000729 (pc 0x55abe2b1012c bp 0x000000000000 sp 0x7ffe0b2df5a0 T0) ==1926429==The signal is caused by a READ memory access. 
#0 0x55abe2b1012c in read16 /eva/put/libheif/libheif/exif.cc:55 #1 0x55abe2b1012c in find_exif_tag /eva/put/libheif/libheif/exif.cc:103 #2 0x55abe2b1136b in modify_exif_tag_if_it_exists(unsigned char*, int, unsigned short, unsigned short) /eva/put/libheif/libheif/exif.cc:124 #3 0x55abe2b1136b in modify_exif_orientation_tag_if_it_exists(unsigned char*, int, unsigned short) /eva/put/libheif/libheif/exif.cc:140 #4 0x55abe2b16c75 in PngEncoder::Encode(heif_image_handle const*, heif_image const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) /eva/put/libheif/examples/encoder_png.cc:126 #5 0x55abe2b00c99 in main /eva/put/libheif/examples/heif_convert.cc:509 #6 0x7fb15dc29d8f in __libc_start_call_main ../sysdeps/nptl/libc_start_call_main.h:58 #7 0x7fb15dc29e3f in __libc_start_main_impl ../csu/libc-start.c:392 #8 0x55abe2b09254 in _start (/eva/asan-bin/NestFuzz/libheif/heif-convert+0x15254) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /eva/put/libheif/libheif/exif.cc:55 in read16 ==1926429==ABORTING ``` ### POC [poc](https://github.com/fdu-sec/poc/raw/main/libheif/poc2) ### Environment ``` Description: Ubuntu 22.04.2 LTS gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 ``` ### Credit Yuchuan Meng ([Fudan University](https://secsys.fudan.edu.cn/)) --- end body ---
3,502
[ -0.014185000211000443, 0.023216884583234787, 0.0017391771543771029, -0.016183573752641678, 0.02916385978460312, -0.00800125952810049, -0.02763185277581215, 0.039358675479888916, -0.01799413003027439, 0.01325883250683546, -0.026851920410990715, 0.012652992270886898, 0.01153184100985527, 0.0...
null
null
null
null
null
null
null
null
null
[ "ImageMagick", "ImageMagick" ]
### ImageMagick version 7.1.1.15 ### Operating system Linux ### Operating system, version and so on ALT Workstation K 10.1 ### Description File saved in RAW format has JPEG data and can't be opened by identify/display. ### Steps to Reproduce 1. `$ display file.jpg` 2. File > Save > Format > RAW > Save **Actual result:** The file has JPEG data and can't be opened by identify/display: ``` $ file file.raw file.raw: JPEG image data, JFIF standard 1.01, resolution (DPI), density 300x300, segment length 16, Exif Standard: [TIFF image data, big-endian, direntries=11, manufacturer=NIKON CORPORATION, model=NIKON D3100, orientation=upper-left, xresolution=176, yresolution=184, resolutionunit=2, software=GIMP 2.8.10, datetime=2014:10:06 12:13:58, GPS-Data], baseline, precision 8, 4608x3072, components 3 $ identify file.raw identify: Unsupported file format or not RAW file `file.raw' @ error/dng.c/ReadDNGImage/539. $ display file.raw display: Unsupported file format or not RAW file `file.raw' @ error/dng.c/ReadDNGImage/539 ``` **Expected result:** Image is saved in RAW format. ### Images _No response_
File saved in RAW format has JPEG data
https://api.github.com/repos/ImageMagick/ImageMagick/issues/6616/comments
2
2023-09-01T13:55:08
2023-09-03T00:37:43Z
https://github.com/ImageMagick/ImageMagick/issues/6616
1,877,474,087
6,616
false
This is a GitHub Issue repo:ImageMagick owner:ImageMagick Title : File saved in RAW format has JPEG data Issue date: --- start body --- ### ImageMagick version 7.1.1.15 ### Operating system Linux ### Operating system, version and so on ALT Workstation K 10.1 ### Description File saved in RAW format has JPEG data and can't be opened by identify/display. ### Steps to Reproduce 1. `$ display file.jpg` 2. File > Save > Format > RAW > Save **Actual result:** The file has JPEG data and can't be opened by identify/display: ``` $ file file.raw file.raw: JPEG image data, JFIF standard 1.01, resolution (DPI), density 300x300, segment length 16, Exif Standard: [TIFF image data, big-endian, direntries=11, manufacturer=NIKON CORPORATION, model=NIKON D3100, orientation=upper-left, xresolution=176, yresolution=184, resolutionunit=2, software=GIMP 2.8.10, datetime=2014:10:06 12:13:58, GPS-Data], baseline, precision 8, 4608x3072, components 3 $ identify file.raw identify: Unsupported file format or not RAW file `file.raw' @ error/dng.c/ReadDNGImage/539. $ display file.raw display: Unsupported file format or not RAW file `file.raw' @ error/dng.c/ReadDNGImage/539 ``` **Expected result:** Image is saved in RAW format. ### Images _No response_ --- end body ---
1,293
[ -0.017110204324126244, -0.010379400104284286, -0.003691404592245817, 0.049262635409832, 0.021575452759861946, 0.011979777365922928, -0.008423383347690105, 0.02900436334311962, -0.000056494791351724416, 0.04657558351755142, 0.010853585787117481, -0.01763707771897316, 0.02486841008067131, 0....
null
null
null
null
null
null
null
null
null
[ "llvm", "llvm-project" ]
Just compile ```c int main(int argc, char **argv) { int x = -1; int arr[x]; return 0; } ``` with `clang -fsanitize=vla-bound -ftrivial-auto-var-init=zero` or with `clang -fsanitize=vla-bound -ftrivial-auto-var-init=pattern` Run the executable and see that UBSan rt catches segfault: `UndefinedBehaviorSanitizer:DEADLYSIGNAL`. With `clang -fsanitize=vla-bound -ftrivial-auto-var-init=uninitialized` UBSan works fine and the process terminates with zero exit-code. Observed in clang-17.0.6 and clang-18.1.3. OSes: Ubuntu-24.04 and ArchLinux.
[clang][UBSan] Segfault with -fsanitize=vla-bound -ftrivial-auto-var-init=zero/pattern
https://api.github.com/repos/llvm/llvm-project/issues/93949/comments
0
2024-05-31T10:49:04
2024-05-31T15:09:33Z
https://github.com/llvm/llvm-project/issues/93949
2,327,537,374
93,949
false
This is a GitHub Issue repo:llvm-project owner:llvm Title : [clang][UBSan] Segfault with -fsanitize=vla-bound -ftrivial-auto-var-init=zero/pattern Issue date: --- start body --- Just compile ```c int main(int argc, char **argv) { int x = -1; int arr[x]; return 0; } ``` with `clang -fsanitize=vla-bound -ftrivial-auto-var-init=zero` or with `clang -fsanitize=vla-bound -ftrivial-auto-var-init=pattern` Run the executable and see that UBSan rt catches segfault: `UndefinedBehaviorSanitizer:DEADLYSIGNAL`. With `clang -fsanitize=vla-bound -ftrivial-auto-var-init=uninitialized` UBSan works fine and the process terminates with zero exit-code. Observed in clang-17.0.6 and clang-18.1.3. OSes: Ubuntu-24.04 and ArchLinux. --- end body ---
773
[ -0.007292601745575666, 0.012752440758049488, -0.01215006411075592, 0.008555029518902302, 0.05541864410042763, -0.025568963959813118, -0.03380998969078064, 0.042576488107442856, -0.02632513828575611, -0.02181372232735157, -0.02298002503812313, 0.01367523055523634, 0.00977900717407465, 0.018...
null
null
null
null
null
null
null
null
null
[ "Piwigo", "Piwigo" ]
When all the images in the piwigo_images table have an md5sum value and the pwg.images.setMd5sum API is called, a JSONDecodeError is thrown. Issue is with the call to get_photos_no_md5sum(): https://github.com/Piwigo/Piwigo/blob/562170528c63a2318d153a08ec4847684fa71784/include/ws_functions/pwg.images.php#L2596 I have a fix in my local repo that wraps it in an if statement similar to how this is handled on the admin UI that I'll submit after I figure out pull requests: https://github.com/Piwigo/Piwigo/blob/562170528c63a2318d153a08ec4847684fa71784/admin.php#L290-L295 Sample Code: ``` from piwigo import Piwigo mysite = Piwigo('http://localhost') mysite.pwg.session.login(username="______", password="__________") pwg_token = mysite.pwg.session.getStatus()['pwg_token'] response = mysite.pwg.images.setMd5sum(block_size="20", pwg_token=pwg_token) print(response) ``` Output: ``` nathanz@piwigo-01:~$ python3 piwigo_api_test.py Traceback (most recent call last): File "/home/nathanz/.local/lib/python3.10/site-packages/piwigo/ws.py", line 124, in __call__ result = r.json() File "/usr/lib/python3/dist-packages/requests/models.py", line 900, in json return complexjson.loads(self.text, **kwargs) File "/usr/lib/python3.10/json/__init__.py", line 346, in loads return _default_decoder.decode(s) File "/usr/lib/python3.10/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/nathanz/piwigo_api_test.py", line 8, in <module> response = mysite.pwg.images.setMd5sum(block_size="20", pwg_token=pwg_token) File "/home/nathanz/.local/lib/python3.10/site-packages/piwigo/ws.py", line 96, in checking return fn(self, **kw) File 
"/home/nathanz/.local/lib/python3.10/site-packages/piwigo/ws.py", line 135, in __call__ raise WsErrorException(r.text) piwigo.ws.WsErrorException: <br /> <b>Fatal error</b>: Uncaught mysqli_sql_exception: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 3 in /var/www/html/include/dblayer/functions_mysqli.inc.php:132 Stack trace: #0 /var/www/html/include/dblayer/functions_mysqli.inc.php(132): mysqli-&gt;query() #1 /var/www/html/include/dblayer/functions_mysqli.inc.php(888): pwg_query() #2 /var/www/html/admin/include/functions.php(3294): query2array() #3 /var/www/html/include/ws_functions/pwg.images.php(2602): add_md5sum() #4 /var/www/html/include/ws_core.inc.php(600): ws_images_setMd5sum() #5 /var/www/html/include/ws_protocols/rest_handler.php(41): PwgServer-&gt;invoke() #6 /var/www/html/include/ws_core.inc.php(281): PwgRestRequestHandler-&gt;handleRequest() #7 /var/www/html/ws.php(22): PwgServer-&gt;run() #8 {main} thrown in <b>/var/www/html/include/dblayer/functions_mysqli.inc.php</b> on line <b>132</b><br /> ```
JSONDecodeError when calling pwg.images.setMd5sum API
https://api.github.com/repos/Piwigo/Piwigo/issues/2114/comments
0
2024-02-10T02:48:57
2024-02-10T02:48:57Z
https://github.com/Piwigo/Piwigo/issues/2114
2,128,060,694
2,114
false
This is a GitHub Issue repo:Piwigo owner:Piwigo Title : JSONDecodeError when calling pwg.images.setMd5sum API Issue date: --- start body --- When all the images in the piwigo_images table have an md5sum value and the pwg.images.setMd5sum API is called, a JSONDecodeError is thrown. Issue is with the call to get_photos_no_md5sum(): https://github.com/Piwigo/Piwigo/blob/562170528c63a2318d153a08ec4847684fa71784/include/ws_functions/pwg.images.php#L2596 I have a fix in my local repo that wraps it in an if statement similar to how this is handled on the admin UI that I'll submit after I figure out pull requests: https://github.com/Piwigo/Piwigo/blob/562170528c63a2318d153a08ec4847684fa71784/admin.php#L290-L295 Sample Code: ``` from piwigo import Piwigo mysite = Piwigo('http://localhost') mysite.pwg.session.login(username="______", password="__________") pwg_token = mysite.pwg.session.getStatus()['pwg_token'] response = mysite.pwg.images.setMd5sum(block_size="20", pwg_token=pwg_token) print(response) ``` Output: ``` nathanz@piwigo-01:~$ python3 piwigo_api_test.py Traceback (most recent call last): File "/home/nathanz/.local/lib/python3.10/site-packages/piwigo/ws.py", line 124, in __call__ result = r.json() File "/usr/lib/python3/dist-packages/requests/models.py", line 900, in json return complexjson.loads(self.text, **kwargs) File "/usr/lib/python3.10/json/__init__.py", line 346, in loads return _default_decoder.decode(s) File "/usr/lib/python3.10/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/nathanz/piwigo_api_test.py", line 8, in <module> response = mysite.pwg.images.setMd5sum(block_size="20", pwg_token=pwg_token) 
File "/home/nathanz/.local/lib/python3.10/site-packages/piwigo/ws.py", line 96, in checking return fn(self, **kw) File "/home/nathanz/.local/lib/python3.10/site-packages/piwigo/ws.py", line 135, in __call__ raise WsErrorException(r.text) piwigo.ws.WsErrorException: <br /> <b>Fatal error</b>: Uncaught mysqli_sql_exception: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 3 in /var/www/html/include/dblayer/functions_mysqli.inc.php:132 Stack trace: #0 /var/www/html/include/dblayer/functions_mysqli.inc.php(132): mysqli-&gt;query() #1 /var/www/html/include/dblayer/functions_mysqli.inc.php(888): pwg_query() #2 /var/www/html/admin/include/functions.php(3294): query2array() #3 /var/www/html/include/ws_functions/pwg.images.php(2602): add_md5sum() #4 /var/www/html/include/ws_core.inc.php(600): ws_images_setMd5sum() #5 /var/www/html/include/ws_protocols/rest_handler.php(41): PwgServer-&gt;invoke() #6 /var/www/html/include/ws_core.inc.php(281): PwgRestRequestHandler-&gt;handleRequest() #7 /var/www/html/ws.php(22): PwgServer-&gt;run() #8 {main} thrown in <b>/var/www/html/include/dblayer/functions_mysqli.inc.php</b> on line <b>132</b><br /> ``` --- end body ---
3,381
[ -0.041666775941848755, 0.023267939686775208, -0.006083056330680847, 0.04552480950951576, 0.023228028789162636, -0.002096974989399314, 0.01999526284635067, 0.030491778627038002, -0.0066451323218643665, 0.0016962048830464482, -0.002617476973682642, -0.00578705221414566, 0.0028419746086001396, ...
null
null
null
null
null
null
null
null
null
[ "jerryscript-project", "jerryscript" ]
Do you have any plans for a new version?
Do you have any plans for a new version?
https://api.github.com/repos/jerryscript-project/jerryscript/issues/5043/comments
0
2023-02-28T03:28:39
2023-02-28T03:28:39Z
https://github.com/jerryscript-project/jerryscript/issues/5043
1,602,307,430
5,043
false
This is a GitHub Issue repo:jerryscript owner:jerryscript-project Title : Do you have any plans for a new version? Issue date: --- start body --- Do you have any plans for a new version? --- end body ---
205
[ -0.03394652530550957, 0.0047646164894104, -0.01042601652443409, 0.04021671786904335, 0.026699351146817207, -0.006098855286836624, 0.012977838516235352, 0.05176553502678871, -0.014450605027377605, 0.06538497656583786, 0.02632022276520729, -0.001469120499677956, 0.019437594339251518, 0.01620...
null
null
null
null
null
null
null
null
null
[ "axiomatic-systems", "Bento4" ]
Hi, developers of Bento4: When I tested the latest mp42aac, the following crash occurred. ## The problem The optput of mp42aac: ``` terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc ``` The output of mp42aac_asan: ``` ==114645==ERROR: AddressSanitizer failed to allocate 0xf262dd000 (65065046016) bytes of LargeMmapAllocator (errno: 12) ==114645==Process memory map follows: 0x000000400000-0x00000057c000 /home/xxzs/workdir/test/mp42aac/mp42aac_asan 0x00000077b000-0x00000077c000 /home/xxzs/workdir/test/mp42aac/mp42aac_asan 0x00000077c000-0x00000078b000 /home/xxzs/workdir/test/mp42aac/mp42aac_asan 0x00007fff7000-0x00008fff7000 0x00008fff7000-0x02008fff7000 0x02008fff7000-0x10007fff8000 0x600000000000-0x602000000000 0x602000000000-0x602000010000 0x602000010000-0x603000000000 0x603000000000-0x603000010000 0x603000010000-0x604000000000 0x604000000000-0x604000010000 0x604000010000-0x607000000000 0x607000000000-0x607000010000 0x607000010000-0x616000000000 0x616000000000-0x616000020000 0x616000020000-0x619000000000 0x619000000000-0x619000020000 0x619000020000-0x621000000000 0x621000000000-0x621000020000 0x621000020000-0x624000000000 0x624000000000-0x624000020000 0x624000020000-0x631000000000 0x631000000000-0x631000030000 0x631000030000-0x640000000000 0x640000000000-0x640000003000 0x7f6fa9700000-0x7f6fa9800000 0x7f6fa9900000-0x7f6fa9a00000 0x7f6fa9a15000-0x7f6fabd67000 0x7f6fabd67000-0x7f6fabd6a000 /lib/x86_64-linux-gnu/libdl-2.23.so 0x7f6fabd6a000-0x7f6fabf69000 /lib/x86_64-linux-gnu/libdl-2.23.so 0x7f6fabf69000-0x7f6fabf6a000 /lib/x86_64-linux-gnu/libdl-2.23.so 0x7f6fabf6a000-0x7f6fabf6b000 /lib/x86_64-linux-gnu/libdl-2.23.so 0x7f6fabf6b000-0x7f6fabf83000 /lib/x86_64-linux-gnu/libpthread-2.23.so 0x7f6fabf83000-0x7f6fac182000 /lib/x86_64-linux-gnu/libpthread-2.23.so 0x7f6fac182000-0x7f6fac183000 /lib/x86_64-linux-gnu/libpthread-2.23.so 0x7f6fac183000-0x7f6fac184000 /lib/x86_64-linux-gnu/libpthread-2.23.so 0x7f6fac184000-0x7f6fac188000 
0x7f6fac188000-0x7f6fac348000 /lib/x86_64-linux-gnu/libc-2.23.so 0x7f6fac348000-0x7f6fac548000 /lib/x86_64-linux-gnu/libc-2.23.so 0x7f6fac548000-0x7f6fac54c000 /lib/x86_64-linux-gnu/libc-2.23.so 0x7f6fac54c000-0x7f6fac54e000 /lib/x86_64-linux-gnu/libc-2.23.so 0x7f6fac54e000-0x7f6fac552000 0x7f6fac552000-0x7f6fac568000 /lib/x86_64-linux-gnu/libgcc_s.so.1 0x7f6fac568000-0x7f6fac767000 /lib/x86_64-linux-gnu/libgcc_s.so.1 0x7f6fac767000-0x7f6fac768000 /lib/x86_64-linux-gnu/libgcc_s.so.1 0x7f6fac768000-0x7f6fac870000 /lib/x86_64-linux-gnu/libm-2.23.so 0x7f6fac870000-0x7f6faca6f000 /lib/x86_64-linux-gnu/libm-2.23.so 0x7f6faca6f000-0x7f6faca70000 /lib/x86_64-linux-gnu/libm-2.23.so 0x7f6faca70000-0x7f6faca71000 /lib/x86_64-linux-gnu/libm-2.23.so 0x7f6faca71000-0x7f6facbe3000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21 0x7f6facbe3000-0x7f6facde3000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21 0x7f6facde3000-0x7f6facded000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21 0x7f6facded000-0x7f6facdef000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21 0x7f6facdef000-0x7f6facdf3000 0x7f6facdf3000-0x7f6facee7000 /usr/lib/x86_64-linux-gnu/libasan.so.2.0.0 0x7f6facee7000-0x7f6fad0e7000 /usr/lib/x86_64-linux-gnu/libasan.so.2.0.0 0x7f6fad0e7000-0x7f6fad0ea000 /usr/lib/x86_64-linux-gnu/libasan.so.2.0.0 0x7f6fad0ea000-0x7f6fad0eb000 /usr/lib/x86_64-linux-gnu/libasan.so.2.0.0 0x7f6fad0eb000-0x7f6fadd60000 0x7f6fadd60000-0x7f6fadd86000 /lib/x86_64-linux-gnu/ld-2.23.so 0x7f6fadf2b000-0x7f6fadf6b000 0x7f6fadf6c000-0x7f6fadf85000 0x7f6fadf85000-0x7f6fadf86000 /lib/x86_64-linux-gnu/ld-2.23.so 0x7f6fadf86000-0x7f6fadf87000 /lib/x86_64-linux-gnu/ld-2.23.so 0x7f6fadf87000-0x7f6fadf88000 0x7ffc3100b000-0x7ffc3102c000 [stack] 0x7ffc310c7000-0x7ffc310ca000 [vvar] 0x7ffc310ca000-0x7ffc310cc000 [vdso] 0xffffffffff600000-0xffffffffff601000 [vsyscall] ==114645==End of process memory map. 
==114645==AddressSanitizer CHECK failed: ../../../../src/libsanitizer/sanitizer_common/sanitizer_posix.cc:121 "(("unable to mmap" && 0)) != (0)" (0x0, 0x0) #0 0x7f6face93631 (/usr/lib/x86_64-linux-gnu/libasan.so.2+0xa0631) #1 0x7f6face985e3 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) (/usr/lib/x86_64-linux-gnu/libasan.so.2+0xa55e3) #2 0x7f6facea0611 (/usr/lib/x86_64-linux-gnu/libasan.so.2+0xad611) #3 0x7f6face15c0c (/usr/lib/x86_64-linux-gnu/libasan.so.2+0x22c0c) #4 0x7f6face8c4fe in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.2+0x994fe) #5 0x50f523 in AP4_Array<AP4_TfraAtom::Entry>::EnsureCapacity(unsigned int) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4Array.h:172 #6 0x50f523 in AP4_Array<AP4_TfraAtom::Entry>::SetItemCount(unsigned int) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4Array.h:210 #7 0x50f523 in AP4_TfraAtom::AP4_TfraAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4TfraAtom.cpp:88 #8 0x50fae7 in AP4_TfraAtom::Create(unsigned int, AP4_ByteStream&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4TfraAtom.cpp:53 #9 0x470906 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4AtomFactory.cpp:443 #10 0x472452 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4AtomFactory.cpp:234 #11 0x472452 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4AtomFactory.cpp:154 #12 0x40bd11 in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4File.cpp:104 #13 0x40bd11 in AP4_File::AP4_File(AP4_ByteStream&, bool) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4File.cpp:78 #14 0x402a40 
in main /home/xxzs/workdir/test/Bento4/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:250 #15 0x7f6fac1a883f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2083f) #16 0x4045d8 in _start (/home/xxzs/workdir/test/mp42aac/mp42aac_asan+0x4045d8) ``` ## Crash input [POC1.zip](https://github.com/axiomatic-systems/Bento4/files/10090406/POC1.zip) ## Validation steps 1. build the latest mp42aac 2. ./mp42aac ./POC1 /dev/null ## Environment * Host Operating System and version: Ubuntu 16.04 LTS * Host CPU architecture: 11th Gen Intel® Core™ i5-11500 @ 2.70GHz × 8 * gcc version: 5.4.0
std::bad_alloc in mp42aac
https://api.github.com/repos/axiomatic-systems/Bento4/issues/816/comments
0
2022-11-25T09:46:22
2023-05-29T02:41:03Z
https://github.com/axiomatic-systems/Bento4/issues/816
1,464,280,948
816
false
This is a GitHub Issue repo:Bento4 owner:axiomatic-systems Title : std::bad_alloc in mp42aac Issue date: --- start body --- Hi, developers of Bento4: When I tested the latest mp42aac, the following crash occurred. ## The problem The optput of mp42aac: ``` terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc ``` The output of mp42aac_asan: ``` ==114645==ERROR: AddressSanitizer failed to allocate 0xf262dd000 (65065046016) bytes of LargeMmapAllocator (errno: 12) ==114645==Process memory map follows: 0x000000400000-0x00000057c000 /home/xxzs/workdir/test/mp42aac/mp42aac_asan 0x00000077b000-0x00000077c000 /home/xxzs/workdir/test/mp42aac/mp42aac_asan 0x00000077c000-0x00000078b000 /home/xxzs/workdir/test/mp42aac/mp42aac_asan 0x00007fff7000-0x00008fff7000 0x00008fff7000-0x02008fff7000 0x02008fff7000-0x10007fff8000 0x600000000000-0x602000000000 0x602000000000-0x602000010000 0x602000010000-0x603000000000 0x603000000000-0x603000010000 0x603000010000-0x604000000000 0x604000000000-0x604000010000 0x604000010000-0x607000000000 0x607000000000-0x607000010000 0x607000010000-0x616000000000 0x616000000000-0x616000020000 0x616000020000-0x619000000000 0x619000000000-0x619000020000 0x619000020000-0x621000000000 0x621000000000-0x621000020000 0x621000020000-0x624000000000 0x624000000000-0x624000020000 0x624000020000-0x631000000000 0x631000000000-0x631000030000 0x631000030000-0x640000000000 0x640000000000-0x640000003000 0x7f6fa9700000-0x7f6fa9800000 0x7f6fa9900000-0x7f6fa9a00000 0x7f6fa9a15000-0x7f6fabd67000 0x7f6fabd67000-0x7f6fabd6a000 /lib/x86_64-linux-gnu/libdl-2.23.so 0x7f6fabd6a000-0x7f6fabf69000 /lib/x86_64-linux-gnu/libdl-2.23.so 0x7f6fabf69000-0x7f6fabf6a000 /lib/x86_64-linux-gnu/libdl-2.23.so 0x7f6fabf6a000-0x7f6fabf6b000 /lib/x86_64-linux-gnu/libdl-2.23.so 0x7f6fabf6b000-0x7f6fabf83000 /lib/x86_64-linux-gnu/libpthread-2.23.so 0x7f6fabf83000-0x7f6fac182000 /lib/x86_64-linux-gnu/libpthread-2.23.so 0x7f6fac182000-0x7f6fac183000 
/lib/x86_64-linux-gnu/libpthread-2.23.so 0x7f6fac183000-0x7f6fac184000 /lib/x86_64-linux-gnu/libpthread-2.23.so 0x7f6fac184000-0x7f6fac188000 0x7f6fac188000-0x7f6fac348000 /lib/x86_64-linux-gnu/libc-2.23.so 0x7f6fac348000-0x7f6fac548000 /lib/x86_64-linux-gnu/libc-2.23.so 0x7f6fac548000-0x7f6fac54c000 /lib/x86_64-linux-gnu/libc-2.23.so 0x7f6fac54c000-0x7f6fac54e000 /lib/x86_64-linux-gnu/libc-2.23.so 0x7f6fac54e000-0x7f6fac552000 0x7f6fac552000-0x7f6fac568000 /lib/x86_64-linux-gnu/libgcc_s.so.1 0x7f6fac568000-0x7f6fac767000 /lib/x86_64-linux-gnu/libgcc_s.so.1 0x7f6fac767000-0x7f6fac768000 /lib/x86_64-linux-gnu/libgcc_s.so.1 0x7f6fac768000-0x7f6fac870000 /lib/x86_64-linux-gnu/libm-2.23.so 0x7f6fac870000-0x7f6faca6f000 /lib/x86_64-linux-gnu/libm-2.23.so 0x7f6faca6f000-0x7f6faca70000 /lib/x86_64-linux-gnu/libm-2.23.so 0x7f6faca70000-0x7f6faca71000 /lib/x86_64-linux-gnu/libm-2.23.so 0x7f6faca71000-0x7f6facbe3000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21 0x7f6facbe3000-0x7f6facde3000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21 0x7f6facde3000-0x7f6facded000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21 0x7f6facded000-0x7f6facdef000 /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.21 0x7f6facdef000-0x7f6facdf3000 0x7f6facdf3000-0x7f6facee7000 /usr/lib/x86_64-linux-gnu/libasan.so.2.0.0 0x7f6facee7000-0x7f6fad0e7000 /usr/lib/x86_64-linux-gnu/libasan.so.2.0.0 0x7f6fad0e7000-0x7f6fad0ea000 /usr/lib/x86_64-linux-gnu/libasan.so.2.0.0 0x7f6fad0ea000-0x7f6fad0eb000 /usr/lib/x86_64-linux-gnu/libasan.so.2.0.0 0x7f6fad0eb000-0x7f6fadd60000 0x7f6fadd60000-0x7f6fadd86000 /lib/x86_64-linux-gnu/ld-2.23.so 0x7f6fadf2b000-0x7f6fadf6b000 0x7f6fadf6c000-0x7f6fadf85000 0x7f6fadf85000-0x7f6fadf86000 /lib/x86_64-linux-gnu/ld-2.23.so 0x7f6fadf86000-0x7f6fadf87000 /lib/x86_64-linux-gnu/ld-2.23.so 0x7f6fadf87000-0x7f6fadf88000 0x7ffc3100b000-0x7ffc3102c000 [stack] 0x7ffc310c7000-0x7ffc310ca000 [vvar] 0x7ffc310ca000-0x7ffc310cc000 [vdso] 0xffffffffff600000-0xffffffffff601000 [vsyscall] ==114645==End 
of process memory map. ==114645==AddressSanitizer CHECK failed: ../../../../src/libsanitizer/sanitizer_common/sanitizer_posix.cc:121 "(("unable to mmap" && 0)) != (0)" (0x0, 0x0) #0 0x7f6face93631 (/usr/lib/x86_64-linux-gnu/libasan.so.2+0xa0631) #1 0x7f6face985e3 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) (/usr/lib/x86_64-linux-gnu/libasan.so.2+0xa55e3) #2 0x7f6facea0611 (/usr/lib/x86_64-linux-gnu/libasan.so.2+0xad611) #3 0x7f6face15c0c (/usr/lib/x86_64-linux-gnu/libasan.so.2+0x22c0c) #4 0x7f6face8c4fe in operator new(unsigned long) (/usr/lib/x86_64-linux-gnu/libasan.so.2+0x994fe) #5 0x50f523 in AP4_Array<AP4_TfraAtom::Entry>::EnsureCapacity(unsigned int) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4Array.h:172 #6 0x50f523 in AP4_Array<AP4_TfraAtom::Entry>::SetItemCount(unsigned int) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4Array.h:210 #7 0x50f523 in AP4_TfraAtom::AP4_TfraAtom(unsigned int, unsigned char, unsigned int, AP4_ByteStream&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4TfraAtom.cpp:88 #8 0x50fae7 in AP4_TfraAtom::Create(unsigned int, AP4_ByteStream&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4TfraAtom.cpp:53 #9 0x470906 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned int, unsigned int, unsigned long long, AP4_Atom*&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4AtomFactory.cpp:443 #10 0x472452 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, unsigned long long&, AP4_Atom*&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4AtomFactory.cpp:234 #11 0x472452 in AP4_AtomFactory::CreateAtomFromStream(AP4_ByteStream&, AP4_Atom*&) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4AtomFactory.cpp:154 #12 0x40bd11 in AP4_File::ParseStream(AP4_ByteStream&, AP4_AtomFactory&, bool) /home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4File.cpp:104 #13 0x40bd11 in AP4_File::AP4_File(AP4_ByteStream&, bool) 
/home/xxzs/workdir/test/Bento4/Source/C++/Core/Ap4File.cpp:78 #14 0x402a40 in main /home/xxzs/workdir/test/Bento4/Source/C++/Apps/Mp42Aac/Mp42Aac.cpp:250 #15 0x7f6fac1a883f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2083f) #16 0x4045d8 in _start (/home/xxzs/workdir/test/mp42aac/mp42aac_asan+0x4045d8) ``` ## Crash input [POC1.zip](https://github.com/axiomatic-systems/Bento4/files/10090406/POC1.zip) ## Validation steps 1. build the latest mp42aac 2. ./mp42aac ./POC1 /dev/null ## Environment * Host Operating System and version: Ubuntu 16.04 LTS * Host CPU architecture: 11th Gen Intel® Core™ i5-11500 @ 2.70GHz × 8 * gcc version: 5.4.0 --- end body ---
6,899
[ -0.030295224860310555, 0.014362522400915623, -0.010598709806799889, -0.012122707441449165, 0.03291219100356102, -0.017318153753876686, -0.047813497483730316, 0.03913133218884468, -0.022736812010407448, 0.021089663729071617, -0.015386217273771763, 0.014270158484578133, 0.02084336057305336, ...
null
null
null
null
null
null
null
null
null
[ "ImageMagick", "ImageMagick" ]
### ImageMagick version 7.1.1-12 ### Operating system Linux ### Operating system, version and so on Ubuntu 20.04.6 LTS ### Description # Description Using ImageMagick 7.1.1-12 with the command below, I receive the following error message: ``` magick id:000000,3525685,sig:11,src:001744,op:havoc,rep:4 /dev/null > magick: insufficient image data in file `id:000000,3525685,sig:11,src:001744,op:havoc,rep:4' @ error/dib.c/ReadDIBImage/674. ``` # System Configuration - ImageMagick 7.1.1-12 - Ubuntu 20.04 LTS ### Steps to Reproduce # Steps to Reproduce Execute this command on the provided file: ``` magick id:000000,3525685,sig:11,src:001744,op:havoc,rep:4 /dev/null ``` ### Images [id_000000,3525685,sig_11,src_001744,op_havoc,rep_4.zip](https://github.com/ImageMagick/ImageMagick/files/12092985/id_000000.3525685.sig_11.src_001744.op_havoc.rep_4.zip)
Insufficient Image Data In File
https://api.github.com/repos/ImageMagick/ImageMagick/issues/6499/comments
3
2023-07-19T08:50:45
2023-07-24T04:26:59Z
https://github.com/ImageMagick/ImageMagick/issues/6499
1,811,479,469
6,499
false
This is a GitHub Issue repo:ImageMagick owner:ImageMagick Title : Insufficient Image Data In File Issue date: --- start body --- ### ImageMagick version 7.1.1-12 ### Operating system Linux ### Operating system, version and so on Ubuntu 20.04.6 LTS ### Description # Description Using ImageMagick 7.1.1-12 with the command below, I receive the following error message: ``` magick id:000000,3525685,sig:11,src:001744,op:havoc,rep:4 /dev/null > magick: insufficient image data in file `id:000000,3525685,sig:11,src:001744,op:havoc,rep:4' @ error/dib.c/ReadDIBImage/674. ``` # System Configuration - ImageMagick 7.1.1-12 - Ubuntu 20.04 LTS ### Steps to Reproduce # Steps to Reproduce Execute this command on the provided file: ``` magick id:000000,3525685,sig:11,src:001744,op:havoc,rep:4 /dev/null ``` ### Images [id_000000,3525685,sig_11,src_001744,op_havoc,rep_4.zip](https://github.com/ImageMagick/ImageMagick/files/12092985/id_000000.3525685.sig_11.src_001744.op_havoc.rep_4.zip) --- end body ---
1,012
[ -0.004312982317060232, 0.007045608013868332, 0.0015848159091547132, 0.016681145876646042, 0.033790379762649536, 0.019563602283596992, -0.006788755767047405, 0.040582701563835144, -0.01562519744038582, 0.01857900060713291, -0.007584284991025925, -0.012257574126124382, 0.0026862495578825474, ...
null
null
null
null
null
null
null
null
null
[ "ImageMagick", "ImageMagick" ]
### ImageMagick version 7.1.1-5 ### Operating system MacOS ### Operating system, version and so on Ventura 13.2.1 (Intel) ### Description While experimenting with [newline escapes](https://imagemagick.org/script/escape.php), I found I was not able to get the literal `\` to work. When I use the text `everybody\nobody`, `\n` is interpreted as a new line, which is to be expected, but trying to escape the backslash using `everybody\\nobody` results in an unwanted line break. Maybe I'm missing something? ### Steps to Reproduce Just run the following command with the attached image `convert img.png -gravity center -fill white -pointsize 120 -font montserrat-Thin -annotate +0+0 "everybody\\nobody" out.jpg` ![img](https://user-images.githubusercontent.com/6848225/228930798-2e7eacdf-1c9e-4536-9ab3-70bc9aba3394.png) ### Images ![out](https://user-images.githubusercontent.com/6848225/228930601-a154e375-fafd-475e-b589-cd05d932a0c2.jpg)
Cannot use literal backslash
https://api.github.com/repos/ImageMagick/ImageMagick/issues/6209/comments
10
2023-03-30T18:36:03
2023-03-30T20:16:38Z
https://github.com/ImageMagick/ImageMagick/issues/6209
1,648,123,232
6,209
false
This is a GitHub Issue repo:ImageMagick owner:ImageMagick Title : Cannot use literal backslash Issue date: --- start body --- ### ImageMagick version 7.1.1-5 ### Operating system MacOS ### Operating system, version and so on Ventura 13.2.1 (Intel) ### Description While experimenting with [newline escapes](https://imagemagick.org/script/escape.php), I found I was not able to get the literal `\` to work. When I use the text `everybody\nobody`, `\n` is interpreted as a new line, which is to be expected, but trying to escape the backslash using `everybody\\nobody` results in an unwanted line break. Maybe I'm missing something? ### Steps to Reproduce Just run the following command with the attached image `convert img.png -gravity center -fill white -pointsize 120 -font montserrat-Thin -annotate +0+0 "everybody\\nobody" out.jpg` ![img](https://user-images.githubusercontent.com/6848225/228930798-2e7eacdf-1c9e-4536-9ab3-70bc9aba3394.png) ### Images ![out](https://user-images.githubusercontent.com/6848225/228930601-a154e375-fafd-475e-b589-cd05d932a0c2.jpg) --- end body ---
1,099
[ 0.010460291989147663, 0.008732357993721962, -0.009811488911509514, 0.023555519059300423, 0.020748453214764595, 0.029024001210927963, -0.01694832183420658, 0.047534745186567307, -0.050844963639974594, -0.009659219533205032, 0.008129897527396679, 0.0008565191528759897, -0.006239762995392084, ...
null
null
null
null
null
null
null
null
null
[ "jerryscript-project", "jerryscript" ]
###### JerryScript revision ``` 05dbbd134c3b9e2482998f267857dd3722001cd7 ``` ###### Build platform ``` Linux-6.2.15-200.fc37.x86_64-x86_64-with-glibc2.34 clang version 14.0.6 (Red Hat 14.0.6-4.el9_1) ``` ###### Build steps ```sh CC=/usr/bin/clang python3 tools/build.py --clean \ --debug \ --strip=off \ --compile-flag=-fsanitize=address \ --lto=off \ --compile-flag=-g \ --error-messages=on \ --promise-callback=on \ --logging=on \ --line-info=on \ --stack-limit=128 ``` ###### Test case ```JavaScript class RegExp{ } async () => { Set; } await Symbol; class Set{ } ``` ###### Execution ```bash ./build/bin/jerry poc.js ``` ###### Output ``` AddressSanitizer:DEADLYSIGNAL ================================================================= ==4093==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000010 (pc 0x00000072bb10 bp 0x7fffd82d4130 sp 0x7fffd82d3dc0 T0) ==4093==The signal is caused by a READ memory access. ==4093==Hint: address points to the zero page. #0 0x72bb10 in parser_parse_class /home/rocky/jerryscript/jerry-core/parser/js/js-parser-expr.c:1107:38 #1 0x750031 in parser_parse_statements /home/rocky/jerryscript/jerry-core/parser/js/js-parser-statm.c:2787:9 #2 0x64411b in parser_parse_source /home/rocky/jerryscript/jerry-core/parser/js/js-parser.c:2280:5 #3 0x6408a2 in parser_parse_script /home/rocky/jerryscript/jerry-core/parser/js/js-parser.c:3326:38 #4 0x53ce99 in jerry_parse_common /home/rocky/jerryscript/jerry-core/api/jerryscript.c:412:21 #5 0x53ca07 in jerry_parse /home/rocky/jerryscript/jerry-core/api/jerryscript.c:480:10 #6 0x77038c in jerryx_source_parse_script /home/rocky/jerryscript/jerry-ext/util/sources.c:52:26 #7 0x7704b4 in jerryx_source_exec_script /home/rocky/jerryscript/jerry-ext/util/sources.c:63:26 #8 0x536b9f in main /home/rocky/jerryscript/jerry-main/main-desktop.c:156:20 #9 0x7fba61a87eaf in __libc_start_call_main (/lib64/libc.so.6+0x3feaf) (BuildId: 82f7ae28e16376aa97cc3bf50b40ab2d1043924a) #10 0x7fba61a87f5f in 
__libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x3ff5f) (BuildId: 82f7ae28e16376aa97cc3bf50b40ab2d1043924a) #11 0x43c604 in _start (/home/rocky/jerryscript/build/bin/jerry+0x43c604) (BuildId: 1da1efd61105afed74f3a1d623bc459cc93ece58) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /home/rocky/jerryscript/jerry-core/parser/js/js-parser-expr.c:1107:38 in parser_parse_class ==4093==ABORTING ```
Segmentation fault - js-parser-expr.c in parser_parse_class
https://api.github.com/repos/jerryscript-project/jerryscript/issues/5085/comments
0
2023-06-08T14:32:42
2024-01-31T23:34:38Z
https://github.com/jerryscript-project/jerryscript/issues/5085
1,748,030,281
5,085
false
This is a GitHub Issue repo:jerryscript owner:jerryscript-project Title : Segmentation fault - js-parser-expr.c in parser_parse_class Issue date: --- start body --- ###### JerryScript revision ``` 05dbbd134c3b9e2482998f267857dd3722001cd7 ``` ###### Build platform ``` Linux-6.2.15-200.fc37.x86_64-x86_64-with-glibc2.34 clang version 14.0.6 (Red Hat 14.0.6-4.el9_1) ``` ###### Build steps ```sh CC=/usr/bin/clang python3 tools/build.py --clean \ --debug \ --strip=off \ --compile-flag=-fsanitize=address \ --lto=off \ --compile-flag=-g \ --error-messages=on \ --promise-callback=on \ --logging=on \ --line-info=on \ --stack-limit=128 ``` ###### Test case ```JavaScript class RegExp{ } async () => { Set; } await Symbol; class Set{ } ``` ###### Execution ```bash ./build/bin/jerry poc.js ``` ###### Output ``` AddressSanitizer:DEADLYSIGNAL ================================================================= ==4093==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000010 (pc 0x00000072bb10 bp 0x7fffd82d4130 sp 0x7fffd82d3dc0 T0) ==4093==The signal is caused by a READ memory access. ==4093==Hint: address points to the zero page. 
#0 0x72bb10 in parser_parse_class /home/rocky/jerryscript/jerry-core/parser/js/js-parser-expr.c:1107:38 #1 0x750031 in parser_parse_statements /home/rocky/jerryscript/jerry-core/parser/js/js-parser-statm.c:2787:9 #2 0x64411b in parser_parse_source /home/rocky/jerryscript/jerry-core/parser/js/js-parser.c:2280:5 #3 0x6408a2 in parser_parse_script /home/rocky/jerryscript/jerry-core/parser/js/js-parser.c:3326:38 #4 0x53ce99 in jerry_parse_common /home/rocky/jerryscript/jerry-core/api/jerryscript.c:412:21 #5 0x53ca07 in jerry_parse /home/rocky/jerryscript/jerry-core/api/jerryscript.c:480:10 #6 0x77038c in jerryx_source_parse_script /home/rocky/jerryscript/jerry-ext/util/sources.c:52:26 #7 0x7704b4 in jerryx_source_exec_script /home/rocky/jerryscript/jerry-ext/util/sources.c:63:26 #8 0x536b9f in main /home/rocky/jerryscript/jerry-main/main-desktop.c:156:20 #9 0x7fba61a87eaf in __libc_start_call_main (/lib64/libc.so.6+0x3feaf) (BuildId: 82f7ae28e16376aa97cc3bf50b40ab2d1043924a) #10 0x7fba61a87f5f in __libc_start_main@GLIBC_2.2.5 (/lib64/libc.so.6+0x3ff5f) (BuildId: 82f7ae28e16376aa97cc3bf50b40ab2d1043924a) #11 0x43c604 in _start (/home/rocky/jerryscript/build/bin/jerry+0x43c604) (BuildId: 1da1efd61105afed74f3a1d623bc459cc93ece58) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV /home/rocky/jerryscript/jerry-core/parser/js/js-parser-expr.c:1107:38 in parser_parse_class ==4093==ABORTING ``` --- end body ---
2,764
[ -0.009594181552529335, 0.012947422452270985, 0.0012448147172108293, 0.009654904715716839, 0.021037030965089798, 0.0010719237616285682, -0.01655704714357853, 0.02051076851785183, -0.022710278630256653, 0.032115545123815536, -0.03138687461614609, 0.0021404740400612354, 0.010849117301404476, ...
null
null
null
null
null
null
null
null
null
[ "gpac", "gpac" ]
Hello, I have used Kvazaar to split a 360-degree video (YUV format) into 3x3 tiles and make a .hvc file. I then used the .hvc file to package within a container via MP4Box `$ MP4Box -add video_tiled.hvc:split_tiles -fps 30 -new video_tiled.mp4` Later I used the generated file to create an MPD file using the following command `$ MP4Box -dash 1000 -rap -frag-rap -profile live -out dash_tiled.mpd video_tiled.mp4` Although the manifest file is playing on Osmo4, the tiles have been split and I can observe the splitting in the video, but it seems to be still in YUV format. Also, I am not able to play the video on my Exoplayer application. The lines in the manifest file seem to be okay. (Attaching it here for reference). Can someone point out what I am missing? ![image](https://user-images.githubusercontent.com/73991812/167699440-ecb9c35c-e114-44aa-a33e-e97e361decc0.png) [dash_tiled.txt](https://github.com/gpac/gpac/files/8664164/dash_tiled.txt)
Issue with streaming manifest file on Exoplayer
https://api.github.com/repos/gpac/gpac/issues/2189/comments
3
2022-05-10T18:45:11
2022-10-10T16:43:31Z
https://github.com/gpac/gpac/issues/2189
1,231,555,661
2,189
false
This is a GitHub Issue repo:gpac owner:gpac Title : Issue with streaming manifest file on Exoplayer Issue date: --- start body --- Hello, I have used Kvazaar to split a 360-degree video (YUV format) into 3x3 tiles and make a .hvc file. I then used the .hvc file to package within a container via MP4Box `$ MP4Box -add video_tiled.hvc:split_tiles -fps 30 -new video_tiled.mp4` Later I used the generated file to create an MPD file using the following command `$ MP4Box -dash 1000 -rap -frag-rap -profile live -out dash_tiled.mpd video_tiled.mp4` Although the manifest file is playing on Osmo4, the tiles have been split and I can observe the splitting in the video, but it seems to be still in YUV format. Also, I am not able to play the video on my Exoplayer application. The lines in the manifest file seem to be okay. (Attaching it here for reference). Can someone point out what I am missing? ![image](https://user-images.githubusercontent.com/73991812/167699440-ecb9c35c-e114-44aa-a33e-e97e361decc0.png) [dash_tiled.txt](https://github.com/gpac/gpac/files/8664164/dash_tiled.txt) --- end body ---
1,111
[ -0.019390420988202095, 0.0067721083760261536, -0.02139526978135109, -0.006094897165894508, 0.0009082086035050452, 0.02347664162516594, -0.04009700194001198, 0.04857552796602249, 0.003047448582947254, 0.04184168204665184, -0.0028484940994530916, -0.013069786131381989, 0.013184567913413048, ...
null
null
null
null
null
null
null
null
null
[ "python", "cpython" ]
# Feature or enhancement ### Proposal: The issue https://github.com/python/cpython/issues/116017 explains already what the problem is with memory allocation used by the JIT. To give more data point, I decided to debug this a little bit further, put some debugging info in the `_PyJIT_Compile` and then ran a pyperformance run. The debugging info are around the memory allocated and the padding used to align it to the page size. The function has been called 1288249 times and this is the ratio between the actual memory allocated and the padding due to 16K (on MacOS) page size: - Total Padding size: 16,490,764,792 - Total Code/Data size: 6,737,241,608 71% of the memory allocated is wasted in padding whilst only 29% is being used by data. There is an indication that memory needed for these objects is *usually* much smaller than the page size. This is a brain dump from @brandtbucher to help out with the implementation: > for 3.14 we'll probably need to look into some sort of slab allocator that will let us share pages between executors. We can allocate by either batching the compiles or stopping the world to flip the permission bits, and then deallocate by maintaining refcounts of each page or something. [...] One benefit that could come with an arena allocator is the ability to JIT a bunch of guaranteed-in-range trampolines for long jumps to library/C-API calls, rather than needing to create a ton of redundant in-line trampolines inline in the trace (or using global offset table hacks). That should save us memory *and* speed things up, I think. ### Has this already been discussed elsewhere? I have already discussed this feature proposal on Discourse ### Links to previous discussion of this feature: This has been discussed with Brandt via email and in person at PyCon 2024.
JIT: improve memory allocation
https://api.github.com/repos/python/cpython/issues/119730/comments
6
2024-05-29T14:00:18
2024-06-03T15:16:09Z
https://github.com/python/cpython/issues/119730
2,323,414,127
119,730
false
This is a GitHub Issue repo:cpython owner:python Title : JIT: improve memory allocation Issue date: --- start body --- # Feature or enhancement ### Proposal: The issue https://github.com/python/cpython/issues/116017 explains already what the problem is with memory allocation used by the JIT. To give more data point, I decided to debug this a little bit further, put some debugging info in the `_PyJIT_Compile` and then ran a pyperformance run. The debugging info are around the memory allocated and the padding used to align it to the page size. The function has been called 1288249 times and this is the ratio between the actual memory allocated and the padding due to 16K (on MacOS) page size: - Total Padding size: 16,490,764,792 - Total Code/Data size: 6,737,241,608 71% of the memory allocated is wasted in padding whilst only 29% is being used by data. There is an indication that memory needed for these objects is *usually* much smaller than the page size. This is a brain dump from @brandtbucher to help out with the implementation: > for 3.14 we'll probably need to look into some sort of slab allocator that will let us share pages between executors. We can allocate by either batching the compiles or stopping the world to flip the permission bits, and then deallocate by maintaining refcounts of each page or something. [...] One benefit that could come with an arena allocator is the ability to JIT a bunch of guaranteed-in-range trampolines for long jumps to library/C-API calls, rather than needing to create a ton of redundant in-line trampolines inline in the trace (or using global offset table hacks). That should save us memory *and* speed things up, I think. ### Has this already been discussed elsewhere? I have already discussed this feature proposal on Discourse ### Links to previous discussion of this feature: This has been discussed with Brandt via email and in person at PyCon 2024. --- end body ---
1,944
[ -0.010257340036332607, -0.0046740989200770855, -0.019432038068771362, -0.009799298830330372, 0.008883217349648476, 0.018613116815686226, 0.026913372799754143, 0.048607856035232544, -0.013061105273663998, 0.018613116815686226, 0.007793635129928589, -0.008570916950702667, 0.041667841374874115,...
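The 71%/29% split quoted in the cpython record above follows directly from the reported totals; a short script (Python; the 16 KiB page size is the macOS value the reporter mentions) sanity-checks the arithmetic and shows how rounding each allocation up to a page boundary produces the waste:

```python
# Reproduce the padding-vs-payload ratio reported in the cpython JIT issue.
# PAGE_SIZE and the two totals come from the issue body; the rounding helper
# is the usual "round size up to the next page boundary" scheme.

PAGE_SIZE = 16 * 1024  # 16 KiB pages, as on macOS per the report

def padded(size: int, page: int = PAGE_SIZE) -> int:
    """Round an allocation size up to the next page boundary."""
    return -(-size // page) * page  # ceiling division, then rescale

total_padding = 16_490_764_792
total_payload = 6_737_241_608
ratio = total_padding / (total_padding + total_payload)
print(f"padding share: {ratio:.0%}")  # ~71%, matching the issue

# A tiny executor still consumes a whole page on its own:
assert padded(100) == 16_384
```

This is what motivates the slab/arena idea in the brain dump: many small executors sharing one page instead of each rounding up to a full page.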
null
null
null
null
null
null
null
null
null
[ "gpac", "gpac" ]
```gpac -i avgen:dur=2 @ c=h264 -o gpac.mp4``` works but ```gpac -i avgen:dur=2 @ c=h264 @ bsrw -o gpac.mp4``` loops forever.
Introduction of bsrw filter breaks execution
https://api.github.com/repos/gpac/gpac/issues/2269/comments
4
2022-09-28T18:52:38
2022-09-29T13:40:17Z
https://github.com/gpac/gpac/issues/2269
1,389,807,966
2,269
false
This is a GitHub Issue repo:gpac owner:gpac Title : Introduction of bsrw filter breaks execution Issue date: --- start body --- ```gpac -i avgen:dur=2 @ c=h264 -o gpac.mp4``` works but ```gpac -i avgen:dur=2 @ c=h264 @ bsrw -o gpac.mp4``` loops forever. --- end body ---
274
[ -0.011064874939620495, 0.013791870325803757, -0.013903938233852386, 0.022533196955919266, 0.011318896897137165, 0.0097499405965209, -0.019992981106042862, 0.04384111240506172, -0.009884422644972801, 0.035413578152656555, -0.019649306312203407, 0.019544709473848343, 0.03233543410897255, 0.0...
null
null
null
null
null
null
null
null
null
[ "LibreDWG", "libredwg" ]
I have JSON output like: ``` { "entity": "BLOCK", "index": 23, "type": 4, "handle": [0, 1, 33], "size": 29, "bitsize": 187, "_subclass": "AcDbEntity", "layer": [5, 1, 16, 16], "prev_entity": [4, 0, 0, 0], "next_entity": [4, 0, 0, 0], "preview_exists": 0, "entmode": 1, "isbylayerlt": 1, "nolinks": 0, "color": 256, "ltype_scale": 1.0, "invisible": 0, "_subclass": "AcDbBlockBegin", "name": "*PAPER_SPACE" }, ``` There are two '_subclass' keys. This is not possible in JSON.
JSON file generated by libredwg is not valid - duplicate keys
https://api.github.com/repos/LibreDWG/libredwg/issues/828/comments
2
2023-08-31T11:22:16
2023-08-31T14:52:23Z
https://github.com/LibreDWG/libredwg/issues/828
1,875,330,097
828
false
This is a GitHub Issue repo:libredwg owner:LibreDWG Title : JSON file generated by libredwg is not valid - duplicate keys Issue date: --- start body --- I have JSON output like: ``` { "entity": "BLOCK", "index": 23, "type": 4, "handle": [0, 1, 33], "size": 29, "bitsize": 187, "_subclass": "AcDbEntity", "layer": [5, 1, 16, 16], "prev_entity": [4, 0, 0, 0], "next_entity": [4, 0, 0, 0], "preview_exists": 0, "entmode": 1, "isbylayerlt": 1, "nolinks": 0, "color": 256, "ltype_scale": 1.0, "invisible": 0, "_subclass": "AcDbBlockBegin", "name": "*PAPER_SPACE" }, ``` There are two '_subclass' keys. This is not possible in JSON. --- end body ---
793
[ -0.01876240223646164, 0.015418177470564842, -0.013651963323354721, 0.023583296686410904, 0.02159992605447769, 0.008056540973484516, 0.01897955872118473, 0.04010173678398132, -0.009511495009064674, 0.004574783146381378, 0.017850341275334358, -0.017835862934589386, 0.00408979831263423, -0.00...
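As context for the libredwg record above (not part of the original report): most JSON parsers accept duplicated member names but keep only one value, so the first `_subclass` is silently lost on round-trip. Python's stdlib `json` illustrates both the default last-wins behaviour and one way to detect duplicates:

```python
import json

# A trimmed stand-in for the libredwg output with a repeated "_subclass" key.
doc = '{"_subclass": "AcDbEntity", "color": 256, "_subclass": "AcDbBlockBegin"}'

# Default decoding builds a dict, so the last occurrence wins:
print(json.loads(doc))  # {'_subclass': 'AcDbBlockBegin', 'color': 256}

# object_pairs_hook exposes every (key, value) pair, which makes the
# duplication detectable instead of silently dropped:
pairs = json.loads(doc, object_pairs_hook=list)
keys = [k for k, _ in pairs]
dupes = {k for k in keys if keys.count(k) > 1}
print(dupes)  # {'_subclass'}
```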
null
null
null
null
null
null
null
null
null
[ "gpac", "gpac" ]
null
mp4box.exe can not parse Restricted Video Sample Entry Box("resv")
https://api.github.com/repos/gpac/gpac/issues/2489/comments
3
2023-06-04T15:20:42
2023-06-05T15:05:15Z
https://github.com/gpac/gpac/issues/2489
1,740,429,905
2,489
false
This is a GitHub Issue repo:gpac owner:gpac Title : mp4box.exe can not parse Restricted Video Sample Entry Box("resv") Issue date: --- start body --- None --- end body ---
173
[ -0.03759848698973656, 0.042202383279800415, -0.010803809389472008, 0.033332210034132004, -0.0058124191127717495, 0.015499783679842949, -0.01907547563314438, 0.058622945100069046, -0.007431455887854099, 0.017464112490415573, -0.000589874223805964, 0.053251732140779495, 0.014272077940404415, ...
null
null
null
null
null
null
null
null
null
[ "openlink", "virtuoso-opensource" ]
Consider the test case below. It is unexpected that the second query returns `NULL, NULL`, as the first query returns `NULL, 1`, and the subsequent queries with `WHERE` filter shouldn't return any extra row. ```sql DROP TABLE t0; DROP TABLE t1; CREATE TABLE t0(c0 VARCHAR(500)); CREATE TABLE t1(c0 INTEGER, c1 INTEGER); INSERT INTO t1 (c0) VALUES (1); INSERT INTO t1 (c1) VALUES (2); INSERT INTO t0 (c0) VALUES ('a'); SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1; -- NULL, 1 SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1 WHERE (NOT NULL) UNION ALL SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1 WHERE ((NULL) IS NULL); -- Expected: NULL, 1 -- Actual: NULL, NULL -- This query works as expected SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1 WHERE ((NULL) IS NULL); -- NULL, 1 ``` I build docker image following this: https://github.com/openlink/vos-reference-docker. Kindly inform me if I did something wrong or I should provide more information. Here's the version: ``` [vos-reference/develop/7] This Docker image is using the following version of Virtuoso: Virtuoso Open Source Edition (Column Store) (multi threaded) Version 7.2.12-dev.3238-pthreads as of Jan 19 2024 (b361275) Compiled for Linux (x86_64-pc-linux-gnu) Copyright (C) 1998-2024 OpenLink Software ```
Unexpected results when using `LEFT JOIN` with `NULL` as predicate
https://api.github.com/repos/openlink/virtuoso-opensource/issues/1238/comments
1
2024-01-19T12:04:24
2024-02-06T18:19:46Z
https://github.com/openlink/virtuoso-opensource/issues/1238
2,090,415,653
1,238
false
This is a GitHub Issue repo:virtuoso-opensource owner:openlink Title : Unexpected results when using `LEFT JOIN` with `NULL` as predicate Issue date: --- start body --- Consider the test case below. It is unexpected that the second query returns `NULL, NULL`, as the first query returns `NULL, 1`, and the subsequent queries with `WHERE` filter shouldn't return any extra row. ```sql DROP TABLE t0; DROP TABLE t1; CREATE TABLE t0(c0 VARCHAR(500)); CREATE TABLE t1(c0 INTEGER, c1 INTEGER); INSERT INTO t1 (c0) VALUES (1); INSERT INTO t1 (c1) VALUES (2); INSERT INTO t0 (c0) VALUES ('a'); SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1; -- NULL, 1 SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1 WHERE (NOT NULL) UNION ALL SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1 WHERE ((NULL) IS NULL); -- Expected: NULL, 1 -- Actual: NULL, NULL -- This query works as expected SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1 WHERE ((NULL) IS NULL); -- NULL, 1 ``` I build docker image following this: https://github.com/openlink/vos-reference-docker. Kindly inform me if I did something wrong or I should provide more information. Here's the version: ``` [vos-reference/develop/7] This Docker image is using the following version of Virtuoso: Virtuoso Open Source Edition (Column Store) (multi threaded) Version 7.2.12-dev.3238-pthreads as of Jan 19 2024 (b361275) Compiled for Linux (x86_64-pc-linux-gnu) Copyright (C) 1998-2024 OpenLink Software ``` --- end body ---
1,495
[ 0.009459883905947208, -0.013933884911239147, -0.010098077356815338, 0.02925052121281624, 0.005707175470888615, 0.001811537891626358, -0.020595025271177292, 0.020169563591480255, -0.022549493238329887, -0.0004100889782421291, 0.010297512635588646, -0.013023129664361477, -0.0000506378855789080...
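For comparison (not part of the original report): the same schema run through SQLite via Python's stdlib `sqlite3` returns the rows `{NULL, 1}` for both the plain query and the always-true `WHERE ((NULL) IS NULL)` variant, which is the behaviour the reporter expects from Virtuoso:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
    CREATE TABLE t0(c0 VARCHAR(500));
    CREATE TABLE t1(c0 INTEGER, c1 INTEGER);
    INSERT INTO t1 (c0) VALUES (1);
    INSERT INTO t1 (c1) VALUES (2);
    INSERT INTO t0 (c0) VALUES ('a');
""")

plain = cur.execute("SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1").fetchall()
filtered = cur.execute(
    "SELECT t1.c0 FROM t1 LEFT JOIN t0 ON t1.c1 WHERE ((NULL) IS NULL)"
).fetchall()

# Row (1, NULL): the ON condition is NULL, so the left join NULL-extends t0.
# Row (NULL, 2): the ON condition is truthy, so it joins to 'a'.
# An always-true WHERE clause must not change the result set.
assert {r[0] for r in plain} == {1, None}
assert {r[0] for r in filtered} == {1, None}
```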
null
null
null
null
null
null
null
null
null
[ "MonetDB", "MonetDB" ]
**Describe the bug** One thread calling bm_commit (store manager applying the WAL) and making a bat persistent (calling BATmode), and another thread unloading the same bat (BBPmanager calling BBPtrim) at the same time may cause deadlock. BATmode has a reference to the heaps (bat_iterator) when it calls BBPretain, which calls incref, which waits until unloading is finished; and BBPtrim calls BBPfree which calls BATfree which waits until the heap reference count goes down. In other words, two threads waiting for each other.
possible deadlock when a bat is made persistent when it is also getting unloaded
https://api.github.com/repos/MonetDB/MonetDB/issues/7504/comments
0
2024-04-29T14:41:20
2024-06-27T13:20:06Z
https://github.com/MonetDB/MonetDB/issues/7504
2,269,231,495
7,504
false
This is a GitHub Issue repo:MonetDB owner:MonetDB Title : possible deadlock when a bat is made persistent when it is also getting unloaded Issue date: --- start body --- **Describe the bug** One thread calling bm_commit (store manager applying the WAL) and making a bat persistent (calling BATmode), and another thread unloading the same bat (BBPmanager calling BBPtrim) at the same time may cause deadlock. BATmode has a reference to the heaps (bat_iterator) when it calls BBPretain, which calls incref, which waits until unloading is finished; and BBPtrim calls BBPfree which calls BATfree which waits until the heap reference count goes down. In other words, two threads waiting for each other. --- end body ---
720
[ -0.02208988554775715, 0.0015961651224642992, -0.020781734958291054, 0.020261447876691818, -0.019146548584103584, 0.03258480876684189, 0.005124823655933142, 0.03817417472600937, -0.021926365792751312, 0.018477609381079674, 0.009766523726284504, 0.01657484658062458, 0.017570823431015015, -0....
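The MonetDB record above describes a classic circular wait: each thread holds a resource the other is blocked on. A minimal sketch (illustrative Python, unrelated to MonetDB's actual locking code) shows the standard remedy of always acquiring locks in one global order, which makes the cycle impossible:

```python
import threading

heap_ref = threading.Lock()   # stands in for the heap reference count
bat_state = threading.Lock()  # stands in for the load/unload state

def locked_in_order(*locks):
    """Acquire locks in a fixed global order (here: by id()) so that no
    two code paths can ever hold them in opposite orders."""
    ordered = sorted(locks, key=id)
    for lk in ordered:
        lk.acquire()
    return ordered

def release(locks):
    for lk in reversed(locks):
        lk.release()

results = []

def worker(name):
    # Both the "make persistent" and "unload" stand-ins take the same path,
    # so neither can wait on a lock the other holds.
    held = locked_in_order(heap_ref, bat_state)
    results.append(name)
    release(held)

t1 = threading.Thread(target=worker, args=("persist",))
t2 = threading.Thread(target=worker, args=("unload",))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # both threads finish; order may vary
```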
null
null
null
null
null
null
null
null
null
[ "gpac", "gpac" ]
Thanks for reporting your issue. Please make sure these boxes are checked before submitting your issue - thank you! - [x] I looked for a similar issue and couldn't find any. - [x] I tried with the latest version of GPAC. Installers available at http://gpac.io/downloads/gpac-nightly-builds/ - [x] I give enough information for contributors to reproduce my issue (meaningful title, github labels, platform and compiler, command-line ...). I can share files anonymously with this dropbox: https://www.mediafire.com/filedrop/filedrop_hosted.php?drop=eec9e058a9486fe4e99c33021481d9e1826ca9dbc242a6cfaab0fe95da5e5d95 Detailed guidelines: http://gpac.io/2013/07/16/how-to-file-a-bug-properly/ # Version ``` MP4Box - GPAC version 2.3-DEV-rev40-g3602a5ded-master (c) 2000-2023 Telecom Paris distributed under LGPL v2.1+ - http://gpac.io Please cite our work in your research: GPAC Filters: https://doi.org/10.1145/3339825.3394929 GPAC: https://doi.org/10.1145/1291233.1291452 GPAC Configuration: --enable-sanitizer --verbose Features: GPAC_CONFIG_LINUX GPAC_64_BITS GPAC_HAS_IPV6 GPAC_HAS_SSL GPAC_HAS_SOCK_UN GPAC_MINIMAL_ODF GPAC_HAS_QJS GPAC_HAS_PNG GPAC_HAS_LINUX_DVB GPAC_DISABLE_3D ``` # Reproduce complie and run ``` ./configure --enable-sanitizer --enable-debug make ./MP4Box -info gf_m2ts_process_tdt_tot ``` information reported by sanitizer ``` ================================================================= ==24800==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000001b51 at pc 0x7fa11638a599 bp 0x7fff33c01ff0 sp 0x7fff33c01fe0 READ of size 1 at 0x602000001b51 thread T0 #0 0x7fa11638a598 in gf_m2ts_process_tdt_tot media_tools/mpegts.c:952 #1 0x7fa11638a598 in gf_m2ts_process_tdt_tot media_tools/mpegts.c:905 #2 0x7fa11638b936 in gf_m2ts_section_complete media_tools/mpegts.c:623 #3 0x7fa11638d619 in gf_m2ts_gather_section media_tools/mpegts.c:760 #4 0x7fa116395c12 in gf_m2ts_process_packet media_tools/mpegts.c:2591 #5 0x7fa1163982b9 in gf_m2ts_process_data 
media_tools/mpegts.c:2817 #6 0x7fa1163a25c5 in gf_m2ts_probe_buffer media_tools/mpegts.c:3201 #7 0x7fa116aa5fa4 in m2tsdmx_probe_data filters/dmx_m2ts.c:1438 #8 0x7fa11696b778 in gf_filter_pid_raw_new filter_core/filter.c:4210 #9 0x7fa116b3a2db in filein_process filters/in_file.c:492 #10 0x7fa1169730ed in gf_filter_process_task filter_core/filter.c:2828 #11 0x7fa116935082 in gf_fs_thread_proc filter_core/filter_session.c:1859 #12 0x7fa116941856 in gf_fs_run filter_core/filter_session.c:2120 #13 0x7fa11637f806 in gf_media_import media_tools/media_import.c:1228 #14 0x562a5a4743b1 in convert_file_info /home/qianshuidewajueji/gpac/applications/mp4box/fileimport.c:130 #15 0x562a5a443db5 in mp4box_main /home/qianshuidewajueji/gpac/applications/mp4box/mp4box.c:6302 #16 0x7fa113617082 in __libc_start_main ../csu/libc-start.c:308 #17 0x562a5a417cfd in _start (/home/qianshuidewajueji/gpac/bin/gcc/MP4Box+0xa3cfd) 0x602000001b51 is located 0 bytes to the right of 1-byte region [0x602000001b50,0x602000001b51) allocated by thread T0 here: #0 0x7fa1194ae808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x7fa11638b5e9 in gf_m2ts_section_complete media_tools/mpegts.c:566 #2 0x7fa11638d619 in gf_m2ts_gather_section media_tools/mpegts.c:760 #3 0x7fa116395c12 in gf_m2ts_process_packet media_tools/mpegts.c:2591 #4 0x7fa1163982b9 in gf_m2ts_process_data media_tools/mpegts.c:2817 #5 0x7fa1163a25c5 in gf_m2ts_probe_buffer media_tools/mpegts.c:3201 #6 0x7fa116aa5fa4 in m2tsdmx_probe_data filters/dmx_m2ts.c:1438 #7 0x7fa11696b778 in gf_filter_pid_raw_new filter_core/filter.c:4210 #8 0x7fa116b3a2db in filein_process filters/in_file.c:492 #9 0x7fa1169730ed in gf_filter_process_task filter_core/filter.c:2828 #10 0x7fa116935082 in gf_fs_thread_proc filter_core/filter_session.c:1859 #11 0x7fa116941856 in gf_fs_run filter_core/filter_session.c:2120 #12 0x7fa11637f806 in gf_media_import media_tools/media_import.c:1228 #13 0x562a5a4743b1 in convert_file_info 
/home/qianshuidewajueji/gpac/applications/mp4box/fileimport.c:130 #14 0x562a5a443db5 in mp4box_main /home/qianshuidewajueji/gpac/applications/mp4box/mp4box.c:6302 #15 0x7fa113617082 in __libc_start_main ../csu/libc-start.c:308 SUMMARY: AddressSanitizer: heap-buffer-overflow media_tools/mpegts.c:952 in gf_m2ts_process_tdt_tot Shadow bytes around the buggy address: 0x0c047fff8310: fa fa 00 00 fa fa 04 fa fa fa 04 fa fa fa 04 fa 0x0c047fff8320: fa fa 06 fa fa fa 00 00 fa fa 00 00 fa fa 00 00 0x0c047fff8330: fa fa 00 00 fa fa 00 00 fa fa fd fa fa fa 00 00 0x0c047fff8340: fa fa 00 00 fa fa 04 fa fa fa 04 fa fa fa 04 fa 0x0c047fff8350: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa 03 fa =>0x0c047fff8360: fa fa 00 00 fa fa 00 00 fa fa[01]fa fa fa 00 fa 0x0c047fff8370: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c047fff8380: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c047fff8390: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c047fff83a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c047fff83b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==24800==ABORTING ``` # Poc https://github.com/qianshuidewajueji/poc/blob/main/gpac/gf_m2ts_process_tdt_tot
heap-buffer-overflow in function gf_m2ts_process_tdt_tot media_tools/mpegts.c:952
https://api.github.com/repos/gpac/gpac/issues/2395/comments
1
2023-02-09T14:03:00
2023-02-13T16:32:43Z
https://github.com/gpac/gpac/issues/2395
1,577,948,156
2,395
false
This is a GitHub Issue repo:gpac owner:gpac Title : heap-buffer-overflow in function gf_m2ts_process_tdt_tot media_tools/mpegts.c:952 Issue date: --- start body --- Thanks for reporting your issue. Please make sure these boxes are checked before submitting your issue - thank you! - [x] I looked for a similar issue and couldn't find any. - [x] I tried with the latest version of GPAC. Installers available at http://gpac.io/downloads/gpac-nightly-builds/ - [x] I give enough information for contributors to reproduce my issue (meaningful title, github labels, platform and compiler, command-line ...). I can share files anonymously with this dropbox: https://www.mediafire.com/filedrop/filedrop_hosted.php?drop=eec9e058a9486fe4e99c33021481d9e1826ca9dbc242a6cfaab0fe95da5e5d95 Detailed guidelines: http://gpac.io/2013/07/16/how-to-file-a-bug-properly/ # Version ``` MP4Box - GPAC version 2.3-DEV-rev40-g3602a5ded-master (c) 2000-2023 Telecom Paris distributed under LGPL v2.1+ - http://gpac.io Please cite our work in your research: GPAC Filters: https://doi.org/10.1145/3339825.3394929 GPAC: https://doi.org/10.1145/1291233.1291452 GPAC Configuration: --enable-sanitizer --verbose Features: GPAC_CONFIG_LINUX GPAC_64_BITS GPAC_HAS_IPV6 GPAC_HAS_SSL GPAC_HAS_SOCK_UN GPAC_MINIMAL_ODF GPAC_HAS_QJS GPAC_HAS_PNG GPAC_HAS_LINUX_DVB GPAC_DISABLE_3D ``` # Reproduce complie and run ``` ./configure --enable-sanitizer --enable-debug make ./MP4Box -info gf_m2ts_process_tdt_tot ``` information reported by sanitizer ``` ================================================================= ==24800==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000001b51 at pc 0x7fa11638a599 bp 0x7fff33c01ff0 sp 0x7fff33c01fe0 READ of size 1 at 0x602000001b51 thread T0 #0 0x7fa11638a598 in gf_m2ts_process_tdt_tot media_tools/mpegts.c:952 #1 0x7fa11638a598 in gf_m2ts_process_tdt_tot media_tools/mpegts.c:905 #2 0x7fa11638b936 in gf_m2ts_section_complete media_tools/mpegts.c:623 #3 0x7fa11638d619 in 
gf_m2ts_gather_section media_tools/mpegts.c:760 #4 0x7fa116395c12 in gf_m2ts_process_packet media_tools/mpegts.c:2591 #5 0x7fa1163982b9 in gf_m2ts_process_data media_tools/mpegts.c:2817 #6 0x7fa1163a25c5 in gf_m2ts_probe_buffer media_tools/mpegts.c:3201 #7 0x7fa116aa5fa4 in m2tsdmx_probe_data filters/dmx_m2ts.c:1438 #8 0x7fa11696b778 in gf_filter_pid_raw_new filter_core/filter.c:4210 #9 0x7fa116b3a2db in filein_process filters/in_file.c:492 #10 0x7fa1169730ed in gf_filter_process_task filter_core/filter.c:2828 #11 0x7fa116935082 in gf_fs_thread_proc filter_core/filter_session.c:1859 #12 0x7fa116941856 in gf_fs_run filter_core/filter_session.c:2120 #13 0x7fa11637f806 in gf_media_import media_tools/media_import.c:1228 #14 0x562a5a4743b1 in convert_file_info /home/qianshuidewajueji/gpac/applications/mp4box/fileimport.c:130 #15 0x562a5a443db5 in mp4box_main /home/qianshuidewajueji/gpac/applications/mp4box/mp4box.c:6302 #16 0x7fa113617082 in __libc_start_main ../csu/libc-start.c:308 #17 0x562a5a417cfd in _start (/home/qianshuidewajueji/gpac/bin/gcc/MP4Box+0xa3cfd) 0x602000001b51 is located 0 bytes to the right of 1-byte region [0x602000001b50,0x602000001b51) allocated by thread T0 here: #0 0x7fa1194ae808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144 #1 0x7fa11638b5e9 in gf_m2ts_section_complete media_tools/mpegts.c:566 #2 0x7fa11638d619 in gf_m2ts_gather_section media_tools/mpegts.c:760 #3 0x7fa116395c12 in gf_m2ts_process_packet media_tools/mpegts.c:2591 #4 0x7fa1163982b9 in gf_m2ts_process_data media_tools/mpegts.c:2817 #5 0x7fa1163a25c5 in gf_m2ts_probe_buffer media_tools/mpegts.c:3201 #6 0x7fa116aa5fa4 in m2tsdmx_probe_data filters/dmx_m2ts.c:1438 #7 0x7fa11696b778 in gf_filter_pid_raw_new filter_core/filter.c:4210 #8 0x7fa116b3a2db in filein_process filters/in_file.c:492 #9 0x7fa1169730ed in gf_filter_process_task filter_core/filter.c:2828 #10 0x7fa116935082 in gf_fs_thread_proc filter_core/filter_session.c:1859 #11 
0x7fa116941856 in gf_fs_run filter_core/filter_session.c:2120 #12 0x7fa11637f806 in gf_media_import media_tools/media_import.c:1228 #13 0x562a5a4743b1 in convert_file_info /home/qianshuidewajueji/gpac/applications/mp4box/fileimport.c:130 #14 0x562a5a443db5 in mp4box_main /home/qianshuidewajueji/gpac/applications/mp4box/mp4box.c:6302 #15 0x7fa113617082 in __libc_start_main ../csu/libc-start.c:308 SUMMARY: AddressSanitizer: heap-buffer-overflow media_tools/mpegts.c:952 in gf_m2ts_process_tdt_tot Shadow bytes around the buggy address: 0x0c047fff8310: fa fa 00 00 fa fa 04 fa fa fa 04 fa fa fa 04 fa 0x0c047fff8320: fa fa 06 fa fa fa 00 00 fa fa 00 00 fa fa 00 00 0x0c047fff8330: fa fa 00 00 fa fa 00 00 fa fa fd fa fa fa 00 00 0x0c047fff8340: fa fa 00 00 fa fa 04 fa fa fa 04 fa fa fa 04 fa 0x0c047fff8350: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa 03 fa =>0x0c047fff8360: fa fa 00 00 fa fa 00 00 fa fa[01]fa fa fa 00 fa 0x0c047fff8370: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c047fff8380: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c047fff8390: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c047fff83a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c047fff83b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb Shadow gap: cc ==24800==ABORTING ``` # Poc https://github.com/qianshuidewajueji/poc/blob/main/gpac/gf_m2ts_process_tdt_tot --- end body ---
6,307
[ -0.02537584863603115, 0.004249727353453636, -0.011026319116353989, -0.0034916677977889776, 0.03938846290111542, 0.009586771950125694, -0.03941909223794937, 0.046708714216947556, -0.01122540608048439, 0.026570366695523262, -0.014479701407253742, 0.00151133316103369, 0.03883714601397514, 0.0...
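The ASan trace in the record above shows a 1-byte read just past a 1-byte heap region — a fixed-width field read performed without first checking the remaining section length. A language-agnostic sketch of the usual defence (illustrative Python, not GPAC code) validates the available length before every read:

```python
def read_field(buf: bytes, pos: int, width: int):
    """Return `width` bytes starting at `pos` plus the new position,
    refusing to read past the end of the buffer.

    Parsers like the one in the ASan trace overflow precisely because
    this bounds check is missing before a fixed-width field read."""
    if pos + width > len(buf):
        raise ValueError(f"truncated section: need {width} byte(s) at {pos}, "
                         f"have {len(buf) - pos}")
    return buf[pos:pos + width], pos + width

section = b"\x73"                      # a 1-byte section, like the PoC region
field, pos = read_field(section, 0, 1)  # fine: exactly one byte available
try:
    read_field(section, pos, 1)         # the overflowing second read
except ValueError as e:
    print("rejected:", e)
```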
CVE-2022-43040
2022-10-19T14:15:10.183000
GPAC 2.1-DEV-rev368-gfd054169b-master was discovered to contain a heap buffer overflow via the function gf_isom_box_dump_start_ex at /isomedia/box_funcs.c.
{ "cvssMetricV2": null, "cvssMetricV30": null, "cvssMetricV31": [ { "cvssData": { "attackComplexity": "LOW", "attackVector": "LOCAL", "availabilityImpact": "HIGH", "baseScore": 7.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H", "version": "3.1" }, "exploitabilityScore": 1.8, "impactScore": 5.9, "source": "nvd@nist.gov", "type": "Primary" } ] }
[ { "source": "cve@mitre.org", "tags": [ "Exploit", "Issue Tracking", "Third Party Advisory" ], "url": "https://github.com/gpac/gpac/issues/2280" } ]
[ { "nodes": [ { "cpeMatch": [ { "criteria": "cpe:2.3:a:gpac:gpac:*:*:*:*:*:*:*:*", "matchCriteriaId": "B4D3D58A-C3C9-4441-A84A-FB91FD19985C", "versionEndExcluding": "2.2.0", "versionEndIncluding": null, "versionStartExcluding": n...
https://github.com/gpac/gpac/issues/2280
[ "Exploit", "Issue Tracking", "Third Party Advisory" ]
github.com
[ "gpac", "gpac" ]
### Description Heap-buffer-overflow in isomedia/box_funcs.c:2074 in gf_isom_box_dump_start_ex ### Version ``` $ ./MP4Box -version MP4Box - GPAC version 2.1-DEV-rev368-gfd054169b-master (c) 2000-2022 Telecom Paris distributed under LGPL v2.1+ - http://gpac.io Please cite our work in your research: GPAC Filters: https://doi.org/10.1145/3339825.3394929 GPAC: https://doi.org/10.1145/1291233.1291452 GPAC Configuration: --enable-sanitizer Features: GPAC_CONFIG_LINUX GPAC_64_BITS GPAC_HAS_IPV6 GPAC_HAS_SOCK_UN GPAC_MINIMAL_ODF GPAC_HAS_QJS GPAC_HAS_JPEG GPAC_HAS_PNG GPAC_HAS_LINUX_DVB GPAC_DISABLE_3D ``` ### Replay ``` git clone https://github.com/gpac/gpac.git cd gpac ./configure --enable-sanitizer make -j$(nproc) ./bin/gcc/MP4Box -diso mp4box-diso-heap-buffer-over-flow-1 ``` ### POC https://github.com/17ssDP/fuzzer_crashes/blob/main/gpac/mp4box-diso-heap-buffer-over-flow-1 ### ASAN ``` [iso file] Read Box type 04@0004 (0x04400004) at position 94 has size 0 but is not at root/file level. Forbidden, skipping end of parent box ! 
[iso file] Box "meta" (start 32) has 206 extra bytes [iso file] Box "uuid" (start 4061) has 58 extra bytes [iso file] Incomplete box mdat - start 4151 size 54847 [iso file] Incomplete file while reading for dump - aborting parsing ================================================================= ==18099==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x604000000540 at pc 0x7f54a04dd880 bp 0x7ffcec3ea7e0 sp 0x7ffcec3ea7d0 READ of size 1 at 0x604000000540 thread T0 #0 0x7f54a04dd87f in gf_isom_box_dump_start_ex isomedia/box_funcs.c:2074 #1 0x7f54a04dd87f in gf_isom_box_dump_start isomedia/box_funcs.c:2093 #2 0x7f54a04c0ae7 in trgt_box_dump isomedia/box_dump.c:5807 #3 0x7f54a04ddbb8 in gf_isom_box_dump isomedia/box_funcs.c:2108 #4 0x7f54a0470ffa in gf_isom_box_array_dump isomedia/box_dump.c:104 #5 0x7f54a04ddda8 in gf_isom_box_dump_done isomedia/box_funcs.c:2115 #6 0x7f54a04c09d5 in trgr_box_dump isomedia/box_dump.c:5799 #7 0x7f54a04ddbb8 in gf_isom_box_dump isomedia/box_funcs.c:2108 #8 0x7f54a04714d6 in gf_isom_dump isomedia/box_dump.c:138 #9 0x55e8639f1804 in dump_isom_xml /home/fuzz/dp/chunkfuzzer-evaluation/benchmark/gpac-asan/applications/mp4box/filedump.c:2067 #10 0x55e8639c1d79 in mp4box_main /home/fuzz/dp/chunkfuzzer-evaluation/benchmark/gpac-asan/applications/mp4box/mp4box.c:6364 #11 0x7f549f4e0c86 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21c86) #12 0x55e8639920a9 in _start (/home/fuzz/dp/chunkfuzzer-evaluation/benchmark/gpac-asan/bin/gcc/MP4Box+0x4e0a9) 0x604000000540 is located 0 bytes to the right of 48-byte region [0x604000000510,0x604000000540) allocated by thread T0 here: #0 0x7f54a2a4cb40 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb40) #1 0x7f54a041bd12 in trgt_box_new isomedia/box_code_base.c:10623 SUMMARY: AddressSanitizer: heap-buffer-overflow isomedia/box_funcs.c:2074 in gf_isom_box_dump_start_ex Shadow bytes around the buggy address: 0x0c087fff8050: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 00 
0x0c087fff8060: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 fa 0x0c087fff8070: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 fa 0x0c087fff8080: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 00 0x0c087fff8090: fa fa fd fd fd fd fd fd fa fa 00 00 00 00 00 00 =>0x0c087fff80a0: fa fa 00 00 00 00 00 00[fa]fa fa fa fa fa fa fa 0x0c087fff80b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c087fff80c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c087fff80d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c087fff80e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa 0x0c087fff80f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa Shadow byte legend (one shadow byte represents 8 application bytes): Addressable: 00 Partially addressable: 01 02 03 04 05 06 07 Heap left redzone: fa Freed heap region: fd Stack left redzone: f1 Stack mid redzone: f2 Stack right redzone: f3 Stack after return: f5 Stack use after scope: f8 Global redzone: f9 Global init order: f6 Poisoned by user: f7 Container overflow: fc Array cookie: ac Intra object redzone: bb ASan internal: fe Left alloca redzone: ca Right alloca redzone: cb ==18099==ABORTING ``` ### Environment ``` Ubuntu 16.04 Clang 10.0.1 gcc 5.5 ```
heap-buffer-overflow isomedia/box_funcs.c:2074 in gf_isom_box_dump_start_ex
https://api.github.com/repos/gpac/gpac/issues/2280/comments
0
2022-10-09T08:31:37
2022-10-10T15:44:28Z
https://github.com/gpac/gpac/issues/2280
1,402,198,804
2,280
true
This is a GitHub Issue repo:gpac owner:gpac
Title : heap-buffer-overflow isomedia/box_funcs.c:2074 in gf_isom_box_dump_start_ex
Issue date:
--- start body ---
### Description
Heap-buffer-overflow in isomedia/box_funcs.c:2074 in gf_isom_box_dump_start_ex

### Version
```
$ ./MP4Box -version
MP4Box - GPAC version 2.1-DEV-rev368-gfd054169b-master
(c) 2000-2022 Telecom Paris distributed under LGPL v2.1+ - http://gpac.io
Please cite our work in your research:
	GPAC Filters: https://doi.org/10.1145/3339825.3394929
	GPAC: https://doi.org/10.1145/1291233.1291452
GPAC Configuration: --enable-sanitizer
Features: GPAC_CONFIG_LINUX GPAC_64_BITS GPAC_HAS_IPV6 GPAC_HAS_SOCK_UN GPAC_MINIMAL_ODF GPAC_HAS_QJS GPAC_HAS_JPEG GPAC_HAS_PNG GPAC_HAS_LINUX_DVB GPAC_DISABLE_3D
```

### Replay
```
git clone https://github.com/gpac/gpac.git
cd gpac
./configure --enable-sanitizer
make -j$(nproc)
./bin/gcc/MP4Box -diso mp4box-diso-heap-buffer-over-flow-1
```

### POC
https://github.com/17ssDP/fuzzer_crashes/blob/main/gpac/mp4box-diso-heap-buffer-over-flow-1

### ASAN
```
[iso file] Read Box type 04@0004 (0x04400004) at position 94 has size 0 but is not at root/file level. Forbidden, skipping end of parent box !
[iso file] Box "meta" (start 32) has 206 extra bytes
[iso file] Box "uuid" (start 4061) has 58 extra bytes
[iso file] Incomplete box mdat - start 4151 size 54847
[iso file] Incomplete file while reading for dump - aborting parsing
=================================================================
==18099==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x604000000540 at pc 0x7f54a04dd880 bp 0x7ffcec3ea7e0 sp 0x7ffcec3ea7d0
READ of size 1 at 0x604000000540 thread T0
    #0 0x7f54a04dd87f in gf_isom_box_dump_start_ex isomedia/box_funcs.c:2074
    #1 0x7f54a04dd87f in gf_isom_box_dump_start isomedia/box_funcs.c:2093
    #2 0x7f54a04c0ae7 in trgt_box_dump isomedia/box_dump.c:5807
    #3 0x7f54a04ddbb8 in gf_isom_box_dump isomedia/box_funcs.c:2108
    #4 0x7f54a0470ffa in gf_isom_box_array_dump isomedia/box_dump.c:104
    #5 0x7f54a04ddda8 in gf_isom_box_dump_done isomedia/box_funcs.c:2115
    #6 0x7f54a04c09d5 in trgr_box_dump isomedia/box_dump.c:5799
    #7 0x7f54a04ddbb8 in gf_isom_box_dump isomedia/box_funcs.c:2108
    #8 0x7f54a04714d6 in gf_isom_dump isomedia/box_dump.c:138
    #9 0x55e8639f1804 in dump_isom_xml /home/fuzz/dp/chunkfuzzer-evaluation/benchmark/gpac-asan/applications/mp4box/filedump.c:2067
    #10 0x55e8639c1d79 in mp4box_main /home/fuzz/dp/chunkfuzzer-evaluation/benchmark/gpac-asan/applications/mp4box/mp4box.c:6364
    #11 0x7f549f4e0c86 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21c86)
    #12 0x55e8639920a9 in _start (/home/fuzz/dp/chunkfuzzer-evaluation/benchmark/gpac-asan/bin/gcc/MP4Box+0x4e0a9)

0x604000000540 is located 0 bytes to the right of 48-byte region [0x604000000510,0x604000000540)
allocated by thread T0 here:
    #0 0x7f54a2a4cb40 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb40)
    #1 0x7f54a041bd12 in trgt_box_new isomedia/box_code_base.c:10623

SUMMARY: AddressSanitizer: heap-buffer-overflow isomedia/box_funcs.c:2074 in gf_isom_box_dump_start_ex
Shadow bytes around the buggy address:
  0x0c087fff8050: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 00
  0x0c087fff8060: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 fa
  0x0c087fff8070: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 fa
  0x0c087fff8080: fa fa 00 00 00 00 00 00 fa fa 00 00 00 00 00 00
  0x0c087fff8090: fa fa fd fd fd fd fd fd fa fa 00 00 00 00 00 00
=>0x0c087fff80a0: fa fa 00 00 00 00 00 00[fa]fa fa fa fa fa fa fa
  0x0c087fff80b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c087fff80c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c087fff80d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c087fff80e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c087fff80f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==18099==ABORTING
```

### Environment
```
Ubuntu 16.04
Clang 10.0.1
gcc 5.5
```
--- end body ---
4,758
[ -0.044038865715265274, 0.03245703503489494, -0.011616716161370277, -0.0029756934382021427, 0.013277243822813034, 0.005267640110105276, -0.03641997650265694, 0.0417783185839653, -0.013828427530825138, 0.012230693362653255, -0.02361019141972065, 0.01342376135289669, 0.030922094359993935, 0.0...
null
null
null
null
null
null
null
null
null
[ "slims", "slims9_bulian" ]
**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
 - OS: [e.g. iOS]
 - Browser [e.g. chrome, safari]
 - Version [e.g. 22]

**Smartphone (please complete the following information):**
 - Device: [e.g. iPhone6]
 - OS: [e.g. iOS8.1]
 - Browser [e.g. stock browser, safari]
 - Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
Bulian9 Portable: showing following error how to solve it: All GMD/Media
https://api.github.com/repos/slims/slims9_bulian/issues/157/comments
1
2022-08-11T13:50:16
2022-11-12T02:43:23Z
https://github.com/slims/slims9_bulian/issues/157
1,335,991,587
157
false
This is a GitHub Issue repo:slims9_bulian owner:slims
Title : Bulian9 Portable: showing following error how to solve it: All GMD/Media
Issue date:
--- start body ---
**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
 - OS: [e.g. iOS]
 - Browser [e.g. chrome, safari]
 - Version [e.g. 22]

**Smartphone (please complete the following information):**
 - Device: [e.g. iPhone6]
 - OS: [e.g. iOS8.1]
 - Browser [e.g. stock browser, safari]
 - Version [e.g. 22]

**Additional context**
Add any other context about the problem here.
--- end body ---
944
[ -0.018829138949513435, 0.010632720775902271, -0.010667525231838226, -0.010067150928080082, 0.04392305761575699, -0.019734051078557968, 0.0020839087665081024, 0.049143705517053604, -0.0061777676455676556, 0.04075586423277855, -0.019890671595931053, -0.01531390193849802, -0.008970814757049084,...
null
null
null
null
null
null
null
null
null
[ "axiomatic-systems", "Bento4" ]
Hi there,

I am trying to convert an MP4 file containing Dolby Atmos audio into a TS file. I am able to get the TS file as output, but it does not have the proper audio descriptors for Atmos audio, as required in a TS file.

Samples are available at the link below.

https://drive.google.com/drive/folders/1wpcO7gUfEQd3ChquIUwZBxaMaRXZJ-dJ?usp=sharing
mp42ts tool not translating Dolby Atmos audio in output TS file i.e. missing Atmos descriptor in TS
https://api.github.com/repos/axiomatic-systems/Bento4/issues/669/comments
1
2022-02-02T14:25:36
2022-02-03T08:49:51Z
https://github.com/axiomatic-systems/Bento4/issues/669
1,121,990,816
669
false
This is a GitHub Issue repo:Bento4 owner:axiomatic-systems Title : mp42ts tool not translating Dolby Atmos audio in output TS file i.e. missing Atmos descriptor in TS Issue date: --- start body --- Hi There, I am trying to convert an mp4 file with Dolby Atmos audio in it, into an TS file. Although, I am able to get the TS file in the output, but it doesn't have proper audio descriptors for an Atmos audio, as required in TS file. Samples are there in the below shared link. https://drive.google.com/drive/folders/1wpcO7gUfEQd3ChquIUwZBxaMaRXZJ-dJ?usp=sharing --- end body ---
586
[ 0.011646337807178497, 0.009564479812979698, -0.021633224561810493, 0.0065322075970470905, -0.0071884458884596825, 0.02036600559949875, -0.018450092524290085, 0.0511111319065094, -0.026249518617987633, 0.02931196242570877, 0.014210945926606655, 0.0019460850162431598, 0.003350962186232209, 0...