cve_id stringclasses 497 values | cve_published stringclasses 497 values | cve_descriptions stringclasses 494 values | cve_metrics dict | cve_references listlengths 1 23 ⌀ | cve_configurations listlengths 1 5 ⌀ | url stringclasses 506 values | cve_tags listlengths 1 4 ⌀ | domain stringclasses 1 value | issue_owner_repo listlengths 2 2 | issue_body stringlengths 2 8.04k ⌀ | issue_title stringlengths 1 346 | issue_comments_url stringlengths 59 81 | issue_comments_count int64 0 146 | issue_created_at timestamp[ns] | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 40 62 | issue_github_id int64 128M 2.4B | issue_number int64 67 125k | label bool 2 classes | issue_msg stringlengths 120 8.18k | issue_msg_n_tokens int64 120 8.18k | issue_embedding listlengths 3.07k 3.07k | __index_level_0__ int64 2 4.69k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
null | null | null | null | null | null | null | null | null | [
"slims",
"slims9_bulian"
When opening the page [example.com//index.php?p=libinfo](http://example.com/slims9/index.php?p=visitor), the site redirects to http://example.com/slims9/index.php
It works locally, but not when deployed to the server
| Halaman index.php?p=visitor tidak dapat dibuka | https://api.github.com/repos/slims/slims9_bulian/issues/174/comments | 1 | 2022-12-22T00:55:12 | 2022-12-22T10:08:42Z | https://github.com/slims/slims9_bulian/issues/174 | 1,507,099,755 | 174 | false |
null | null | null | null | null | null | null | null | null | [
"gpac",
"gpac"
] | Thanks for reporting your issue. Please make sure these boxes are checked before submitting your issue - thank you!
- [x] I looked for a similar issue and couldn't find any.
- [x] I tried with the latest version of GPAC. Installers available at http://gpac.io/downloads/gpac-nightly-builds/
- [x] I give enough information for contributors to reproduce my issue (meaningful title, github labels, platform and compiler, command-line ...). I can share files anonymously with this dropbox: https://www.mediafire.com/filedrop/filedrop_hosted.php?drop=eec9e058a9486fe4e99c33021481d9e1826ca9dbc242a6cfaab0fe95da5e5d95
Detailed guidelines: http://gpac.io/2013/07/16/how-to-file-a-bug-properly/
I tried streaming a multi-track stream with GStreamer, packaging it to HLS with GPAC, and then serving the segments from an origin. I noticed that the option `max_cache_segs` causes the playlists to be deleted.
GStreamer command
```bash
gst-launch-1.0 -v \
videotestsrc pattern=snow name="decodebin_video_0" \
videotestsrc pattern=ball name="decodebin_video_1" \
videotestsrc pattern=smpte100 name="decodebin_video_2" \
audiotestsrc freq=200 name="decodebin_audio_1" \
decodebin_video_0. ! video/x-raw ! queue name="src_0_decodebin_video_queue" ! clockoverlay 'time-format="%T %Z"' ! tee name="src_0_video_raw_tee" \
decodebin_video_1. ! video/x-raw ! queue name="src_1_decodebin_video_queue" ! clockoverlay 'time-format="%T %Z"' ! tee name="src_1_video_raw_tee" \
decodebin_video_2. ! video/x-raw ! queue name="src_3_decodebin_video_queue" ! clockoverlay 'time-format="%T %Z"' ! tee name="src_3_video_raw_tee" \
decodebin_audio_1. ! audio/x-raw ! queue name="src_5_decodebin_audio_queue" ! tee name="src_5_audio_raw_tee" \
src_5_audio_raw_tee. ! queue name="src_5_audio_enc_queue" ! audioconvert ! audioresample ! audio/x-raw ! tee name="src_5_audio_resample_tee" \
src_5_audio_resample_tee. ! queue ! audioconvert ! "audio/x-raw,channels=2" ! avenc_aac name="src_5_audio_encoder" ! "audio/mpeg" ! aacparse ! tee name="src_5_audio_encoder_tee" \
src_0_video_raw_tee. ! queue name="src_0_video_scale_queue" ! videoscale ! "video/x-raw" ! x264enc ! "video/x-h264" ! h264parse ! tee name="src_0_video_transcoded_enc_tee" \
src_1_video_raw_tee. ! queue name="src_1_video_scale_queue" ! videoscale ! "video/x-raw" ! x264enc ! "video/x-h264" ! h264parse ! tee name="src_1_video_transcoded_enc_tee" \
src_3_video_raw_tee. ! queue name="src_3_video_scale_queue" ! videoscale ! "video/x-raw" ! x264enc ! "video/x-h264" ! h264parse ! tee name="src_3_video_transcoded_enc_tee" \
src_0_video_transcoded_enc_tee. ! queue name="src_0_video_transcoded_queue_mpegts" ! mpegtsmux m2ts-mode=false name="mpegts_mux_transcoded_0" ! tee name="mpegts_mux_transcoded_tee_0" \
src_1_video_transcoded_enc_tee. ! queue name="src_1_video_transcoded_queue_mpegts" ! mpegts_mux_transcoded_0. \
src_3_video_transcoded_enc_tee. ! queue name="src_3_video_transcoded_queue_mpegts" ! mpegts_mux_transcoded_0. \
src_5_audio_encoder_tee. ! queue ! mpegts_mux_transcoded_0. \
mpegts_mux_transcoded_tee_0. ! queue ! udpsink host="255.255.255.255" port=5000 name="udp_broadcast_0"
```
GPAC command
```bash
gpac \
-i udp://255.255.255.255:5000/:gpac:listen=true:tsprobe=true reframer:rt=on restamp \
-o /tmp/workdir/content/playlist.m3u8:gpac:dmode=dynamic:hmode=push:max_cache_segs=16
```
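A small helper (hypothetical, not part of this report) can make the reported behavior easier to observe: poll the GPAC output directory and log whenever a `.m3u8` playlist appears or disappears while the commands above run.

```python
# Hypothetical watcher: logs when .m3u8 playlists are created or deleted
# in the GPAC output directory while the commands above are running.
import glob
import time

def watch_playlists(directory="/tmp/workdir/content", interval=0.5, rounds=None):
    """Poll `directory`, print created/deleted .m3u8 files, return the last snapshot."""
    seen = set()
    i = 0
    while rounds is None or i < rounds:
        current = set(glob.glob(f"{directory}/*.m3u8"))
        for path in sorted(current - seen):
            print(f"created: {path}")
        for path in sorted(seen - current):
            print(f"deleted: {path}")
        seen = current
        time.sleep(interval)
        i += 1
    return seen
```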
These commands create an m3u8 master playlist with a sub-playlist for each track in the input, but as mentioned above the playlists get deleted, making it impossible to access the stream. Since the `max_cache_segs` option is meant to cap the number of cached segments, it shouldn't affect the playlists. This also happens only when the source has multiple tracks.
This GIF shows that, right after running those commands, all the playlists appear correctly but are deleted almost instantly.

| Issue with the option max_cache_segs deleting playlists | https://api.github.com/repos/gpac/gpac/issues/2532/comments | 1 | 2023-07-22T21:02:22 | 2023-07-24T09:35:56Z | https://github.com/gpac/gpac/issues/2532 | 1,816,894,218 | 2,532 | false |
null | null | null | null | null | null | null | null | null | [
"ImageMagick",
"ImageMagick"
] | ### ImageMagick version
7.1.1-26
### Operating system
MacOS
### Operating system, version and so on
macOS 14.1.1
### Description
Converting an sRGB image to Adobe98, DisplayP3 or ProPhoto using `-colorspace` results in pixel values multiplied by QuantumRange, and converting an Adobe98, DisplayP3 or ProPhoto image to sRGB divides the input values by QuantumRange.
### Steps to Reproduce
With the Q16 HDRI build from Homebrew:
```shellsession
$ magick -depth 8 canvas:'srgb(100%,0%,0%)' -colorspace display-p3 txt:-
# ImageMagick pixel enumeration: 1,1,0,255,displayp3
0,0: (15333601,3347367,2315759) #FFFFFF displayp3(6.01318e+06%,1.31269e+06%,908141%)
$ magick -depth 8 canvas:'display-p3(100%,0%,0%)' -colorspace srgb txt:-
# ImageMagick pixel enumeration: 1,1,0,255,srgb
0,0: (0,0,0) #000000 srgb(0.00186884%,-6.41763e-05%,-2.99722e-05%)
$ # These are the correct results
$ magick -depth 8 canvas:'srgb(100%,0%,0%)' -colorspace display-p3 -evaluate divide 65535 txt:-
# ImageMagick pixel enumeration: 1,1,0,255,displayp3
0,0: (234,51,35) #EA3323 displayp3(91.7552%,20.0304%,13.8573%)
$ magick -depth 8 canvas:'display-p3(100%,0%,0%)' -evaluate multiply 65535 -colorspace srgb txt:-
# ImageMagick pixel enumeration: 1,1,0,255,srgb
0,0: (279,0,0) #FF0000 srgb(109.299%,-54.3388%,-25.3778%)
```
With a Q32 HDRI build:
```shellsession
$ magick -depth 8 canvas:'srgb(100%,0%,0%)' -colorspace display-p3 txt:-
# ImageMagick pixel enumeration: 1,1,0,255,displayp3
0,0: (1004918212358,219376405103,151767922687) #FFFFFF displayp3(3.9408558e+11%,8.6029965e+10%,5.9516832e+10%)
$ magick -depth 8 canvas:'display-p3(100%,0%,0%)' -colorspace srgb txt:-
# ImageMagick pixel enumeration: 1,1,0,255,srgb
0,0: (0,0,0) #000000 srgb(2.8515823e-08%,-9.7923726e-10%,-4.5733248e-10%)
$ magick -depth 8 canvas:'srgb(100%,0%,0%)' -colorspace display-p3 -evaluate divide 4294967295 txt:-
# ImageMagick pixel enumeration: 1,1,0,255,displayp3
0,0: (234,51,35) #EA3323 displayp3(91.755199%,20.030412%,13.857342%)
$ magick -depth 8 canvas:'display-p3(100%,0%,0%)' -evaluate multiply 4294967295 -colorspace srgb txt:-
# ImageMagick pixel enumeration: 1,1,0,255,srgb
0,0: (279,0,0) #FF0000 srgb(109.29903%,-54.338837%,-25.377825%)
```
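What the `-evaluate` workaround above does can be sketched numerically (a hypothetical helper, not ImageMagick code): the buggy conversion leaves each channel scaled by QuantumRange (65535 on a Q16 build, 4294967295 on Q32), so dividing by it recovers the intended values.

```python
# Hypothetical illustration of the QuantumRange scaling described above.
Q16_RANGE = 65535        # QuantumRange for a Q16 build
Q32_RANGE = 4294967295   # QuantumRange for a Q32 build

def undo_quantum_scaling(pixel, quantum_range=Q16_RANGE):
    """Divide each channel by QuantumRange, as `-evaluate divide` does."""
    return tuple(round(c / quantum_range) for c in pixel)

# The buggy Q16 sRGB -> Display P3 result (15333601, 3347367, 2315759)
# divided by 65535 yields the expected (234, 51, 35).
print(undo_quantum_scaling((15333601, 3347367, 2315759)))  # -> (234, 51, 35)
```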
### Images
_No response_ | Conversion from/to Adobe98, DisplayP3 or ProPhoto yields incorrect results | https://api.github.com/repos/ImageMagick/ImageMagick/issues/7038/comments | 2 | 2024-01-17T14:29:12 | 2024-01-23T09:41:49Z | https://github.com/ImageMagick/ImageMagick/issues/7038 | 2,086,315,609 | 7,038 | false |
null | null | null | null | null | null | null | null | null | [
"gpac",
"gpac"
] | Thanks for reporting your issue. Please make sure these boxes are checked before submitting your issue - thank you!
- [x] I looked for a similar issue and couldn't find any.
- [x] I tried with the latest version of GPAC. Installers available at http://gpac.io/downloads/gpac-nightly-builds/
- [x] I give enough information for contributors to reproduce my issue (meaningful title, github labels, platform and compiler, command-line ...). I can share files anonymously with this dropbox: https://www.mediafire.com/filedrop/filedrop_hosted.php?drop=eec9e058a9486fe4e99c33021481d9e1826ca9dbc242a6cfaab0fe95da5e5d95
Detailed guidelines: http://gpac.io/2013/07/16/how-to-file-a-bug-properly/
When using MP4Box to segment MP4 files with inband parameter sets for bitstream switching, the generated manifest references non-existent initialization segments. Only one initialization segment is produced, which is fine, but the MPD file references it as `initialization="ib_$RepresentationID$_.mp4"`. The template string contains the `$RepresentationID$` token, which probably shouldn't be the case when only a single init segment is generated.
Invocation:
```
./bin/gcc/MP4Box -dash 4000 -rap -bs-switching inband -segment-name 'ib_$RepresentationID$_$Number%03d$' -out ./inband.mpd foreman_qp30.266 foreman_qp40.266
```
Output files:
```
ib_1_001.m4s
ib_1_002.m4s
ib_1_.mp4
ib_2_001.m4s
ib_2_002.m4s
inband.mpd
```
MPD-file:
```xml
<?xml version="1.0"?>
<!-- MPD file Generated with GPAC version 2.1-DEV-rev48-gf6d6225a9-master at 2022-03-15T12:40:30.266Z -->
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" minBufferTime="PT1.500S" type="static" mediaPresentationDuration="PT0H0M6.000S" maxSegmentDuration="PT0H0M5.120S" profiles="urn:mpeg:dash:profile:full:2011">
<ProgramInformation moreInformationURL="http://gpac.io">
<Title>inband.mpd generated by GPAC</Title>
</ProgramInformation>
<Period duration="PT0H0M6.000S">
<AdaptationSet segmentAlignment="true" maxWidth="352" maxHeight="288" maxFrameRate="50" par="11:9" lang="und" bitstreamSwitching="true" startWithSAP="1">
<SegmentTemplate media="ib_$RepresentationID$_$Number%03d$.m4s" initialization="ib_$RepresentationID$_.mp4" timescale="50" startNumber="1" duration="200"/>
<Representation id="1" mimeType="video/mp4" codecs="vvi1.1.L35.CQA" width="352" height="288" frameRate="50" sar="1:1" bandwidth="330000">
</Representation>
<Representation id="2" mimeType="video/mp4" codecs="vvi1.1.L35.CQA" width="352" height="288" frameRate="50" sar="1:1" bandwidth="87281">
</Representation>
</AdaptationSet>
</Period>
</MPD>
```
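As a workaround sketch (hypothetical, standard library only), the `initialization` attribute could be rewritten after the fact to point at the single init segment that MP4Box actually wrote (`ib_1_.mp4` in this case):

```python
# Hypothetical post-processing: point the initialization template at the
# one init segment that actually exists on disk.
import xml.etree.ElementTree as ET

MPD_NS = "urn:mpeg:dash:schema:mpd:2011"

def fix_init_template(mpd_text, init_name="ib_1_.mp4"):
    """Replace $RepresentationID$-based initialization names with init_name."""
    ET.register_namespace("", MPD_NS)
    root = ET.fromstring(mpd_text)
    for tmpl in root.iter(f"{{{MPD_NS}}}SegmentTemplate"):
        if "$RepresentationID$" in tmpl.get("initialization", ""):
            tmpl.set("initialization", init_name)
    return ET.tostring(root, encoding="unicode")
```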
I have not yet tested if this is a general problem or specific to working with VVC bitstreams. | MPD references non-existent initialization segment | https://api.github.com/repos/gpac/gpac/issues/2141/comments | 1 | 2022-03-15T12:53:49 | 2022-03-22T08:45:49Z | https://github.com/gpac/gpac/issues/2141 | 1,169,636,455 | 2,141 | false |
CVE-2020-22678 | 2021-10-12T21:15:07.540 | An issue was discovered in gpac 0.8.0. The gf_media_nalu_remove_emulation_bytes function in av_parsers.c has a heap-based buffer overflow which can lead to a denial of service (DOS) via a crafted input. | {
"cvssMetricV2": [
{
"acInsufInfo": false,
"baseSeverity": "MEDIUM",
"cvssData": {
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"availabilityImpact": "PARTIAL",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"integrityImpact": "NONE",
"vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
"exploitabilityScore": 8.6,
"impactScore": 2.9,
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"source": "nvd@nist.gov",
"type": "Primary",
"userInteractionRequired": true
}
],
"cvssMetricV30": null,
"cvssMetricV31": [
{
"cvssData": {
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"availabilityImpact": "HIGH",
"baseScore": 5.5,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H",
"version": "3.1"
},
"exploitabilityScore": 1.8,
"impactScore": 3.6,
"source": "nvd@nist.gov",
"type": "Primary"
}
]
} | [
{
"source": "cve@mitre.org",
"tags": [
"Exploit",
"Patch",
"Third Party Advisory"
],
"url": "https://github.com/gpac/gpac/issues/1339"
}
] | [
{
"nodes": [
{
"cpeMatch": [
{
"criteria": "cpe:2.3:a:gpac:gpac:0.8.0:*:*:*:*:*:*:*",
"matchCriteriaId": "93EEFCFD-7417-40E6-84BF-4EA630F2A8A1",
"versionEndExcluding": null,
"versionEndIncluding": null,
"versionStartExcluding": ... | https://github.com/gpac/gpac/issues/1339 | [
"Exploit",
"Patch",
"Third Party Advisory"
] | github.com | [
"gpac",
"gpac"
] | Thanks for reporting your issue. Please make sure these boxes are checked before submitting your issue - thank you!
- [x] I looked for a similar issue and couldn't find any.
- [x] I tried with the latest version of GPAC. Installers available at http://gpac.io/downloads/gpac-nightly-builds/
- [x] I give enough information for contributors to reproduce my issue (meaningful title, github labels, platform and compiler, command-line ...). I can share files anonymously with this dropbox: https://www.mediafire.com/filedrop/filedrop_hosted.php?drop=eec9e058a9486fe4e99c33021481d9e1826ca9dbc242a6cfaab0fe95da5e5d95
Detailed guidelines: http://gpac.io/2013/07/16/how-to-file-a-bug-properly/
A crafted input leads to a crash in av_parsers.c in GPAC 0.8.0.
Triggered by:
```
./MP4Box -diso POC -out /dev/null
```
PoC:
[001gf_media_nalu_remove_emulation_bytes](https://github.com/gutiniao/afltest/blob/master/001gf_media_nalu_remove_emulation_bytes)
The ASAN information is as follows:
```
./MP4Box -diso 001gf_media_nalu_remove_emulation_bytes -out /dev/null
[iso file] Media header timescale is 0 - defaulting to 90000
=================================================================
==23148==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020000002d1 at pc 0x5632845c98b0 bp 0x7ffdce21c4e0 sp 0x7ffdce21c4d0
READ of size 1 at 0x6020000002d1 thread T0
#0 0x5632845c98af in gf_media_nalu_remove_emulation_bytes media_tools/av_parsers.c:4722
#1 0x5632845c991b in gf_media_avc_read_sps media_tools/av_parsers.c:4737
#2 0x5632843ea9a9 in avcc_Read isomedia/avc_ext.c:2371
#3 0x5632844183d4 in gf_isom_box_read isomedia/box_funcs.c:1528
#4 0x5632844183d4 in gf_isom_box_parse_ex isomedia/box_funcs.c:208
#5 0x563284418e10 in gf_isom_box_array_read_ex isomedia/box_funcs.c:1419
#6 0x5632848afbf1 in video_sample_entry_Read isomedia/box_code_base.c:4405
#7 0x5632844183d4 in gf_isom_box_read isomedia/box_funcs.c:1528
#8 0x5632844183d4 in gf_isom_box_parse_ex isomedia/box_funcs.c:208
#9 0x563284418e10 in gf_isom_box_array_read_ex isomedia/box_funcs.c:1419
#10 0x5632844183d4 in gf_isom_box_read isomedia/box_funcs.c:1528
#11 0x5632844183d4 in gf_isom_box_parse_ex isomedia/box_funcs.c:208
#12 0x563284418e10 in gf_isom_box_array_read_ex isomedia/box_funcs.c:1419
#13 0x5632848b38a4 in stbl_Read isomedia/box_code_base.c:5381
#14 0x5632844183d4 in gf_isom_box_read isomedia/box_funcs.c:1528
#15 0x5632844183d4 in gf_isom_box_parse_ex isomedia/box_funcs.c:208
#16 0x563284418e10 in gf_isom_box_array_read_ex isomedia/box_funcs.c:1419
#17 0x5632848ad40b in minf_Read isomedia/box_code_base.c:3500
#18 0x5632844183d4 in gf_isom_box_read isomedia/box_funcs.c:1528
#19 0x5632844183d4 in gf_isom_box_parse_ex isomedia/box_funcs.c:208
#20 0x563284418e10 in gf_isom_box_array_read_ex isomedia/box_funcs.c:1419
#21 0x5632848ab73f in mdia_Read isomedia/box_code_base.c:3021
#22 0x5632844183d4 in gf_isom_box_read isomedia/box_funcs.c:1528
#23 0x5632844183d4 in gf_isom_box_parse_ex isomedia/box_funcs.c:208
#24 0x563284418e10 in gf_isom_box_array_read_ex isomedia/box_funcs.c:1419
#25 0x5632848ba906 in trak_Read isomedia/box_code_base.c:7129
#26 0x5632844183d4 in gf_isom_box_read isomedia/box_funcs.c:1528
#27 0x5632844183d4 in gf_isom_box_parse_ex isomedia/box_funcs.c:208
#28 0x563284418e10 in gf_isom_box_array_read_ex isomedia/box_funcs.c:1419
#29 0x5632848adf64 in moov_Read isomedia/box_code_base.c:3745
#30 0x563284419b35 in gf_isom_box_read isomedia/box_funcs.c:1528
#31 0x563284419b35 in gf_isom_box_parse_ex isomedia/box_funcs.c:208
#32 0x56328441a1e4 in gf_isom_parse_root_box isomedia/box_funcs.c:42
#33 0x563284430f44 in gf_isom_parse_movie_boxes isomedia/isom_intern.c:206
#34 0x563284433bca in gf_isom_open_file isomedia/isom_intern.c:615
#35 0x56328417c852 in mp4boxMain /home/liuz/gpac-master/applications/mp4box/main.c:4767
#36 0x7f0252bccb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
#37 0x56328416db19 in _start (/usr/local/gpac-asan3/bin/MP4Box+0x163b19)
0x6020000002d1 is located 0 bytes to the right of 1-byte region [0x6020000002d0,0x6020000002d1)
allocated by thread T0 here:
#0 0x7f0253855b50 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)
#1 0x5632843ea263 in avcc_Read isomedia/avc_ext.c:2343
SUMMARY: AddressSanitizer: heap-buffer-overflow media_tools/av_parsers.c:4722 in gf_media_nalu_remove_emulation_bytes
Shadow bytes around the buggy address:
0x0c047fff8000: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa 00 00
0x0c047fff8010: fa fa fd fd fa fa 00 00 fa fa 00 00 fa fa 00 00
0x0c047fff8020: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa 00 00
0x0c047fff8030: fa fa 00 00 fa fa 00 00 fa fa 00 05 fa fa 00 00
0x0c047fff8040: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa 00 00
=>0x0c047fff8050: fa fa 00 00 fa fa 00 00 fa fa[01]fa fa fa 01 fa
0x0c047fff8060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8070: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8080: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8090: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff80a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==23148==ABORTING
```
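For context, the function in question strips H.264/HEVC emulation-prevention bytes (the `0x03` inserted after two zero bytes). A bounds-checked sketch of that transformation (illustrative Python, not GPAC's C implementation):

```python
def remove_emulation_bytes(nal: bytes) -> bytes:
    """Strip 00 00 03 emulation-prevention bytes, never reading past the end."""
    out = bytearray()
    zeros = 0
    i = 0
    n = len(nal)
    while i < n:
        b = nal[i]
        # An emulation byte is 0x03 after two zeros, followed by 0x00..0x03.
        if zeros == 2 and b == 0x03 and i + 1 < n and nal[i + 1] <= 0x03:
            zeros = 0  # the 0x03 breaks the zero run in the input
            i += 1
            continue
        out.append(b)
        zeros = zeros + 1 if b == 0 else 0
        i += 1
    return bytes(out)

print(remove_emulation_bytes(bytes([0x00, 0x00, 0x03, 0x01])).hex())  # -> 000001
```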
| There is a heap-buffer-overflow in the gf_media_nalu_remove_emulation_bytes function of av_parsers.c:4722 | https://api.github.com/repos/gpac/gpac/issues/1339/comments | 1 | 2019-11-13T03:14:38 | 2020-01-09T17:59:39Z | https://github.com/gpac/gpac/issues/1339 | 521,931,348 | 1,339 | true |
0x6020000002d1 is located 0 bytes to the right of 1-byte region [0x6020000002d0,0x6020000002d1)
allocated by thread T0 here:
#0 0x7f0253855b50 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb50)
#1 0x5632843ea263 in avcc_Read isomedia/avc_ext.c:2343
SUMMARY: AddressSanitizer: heap-buffer-overflow media_tools/av_parsers.c:4722 in gf_media_nalu_remove_emulation_bytes
Shadow bytes around the buggy address:
0x0c047fff8000: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa 00 00
0x0c047fff8010: fa fa fd fd fa fa 00 00 fa fa 00 00 fa fa 00 00
0x0c047fff8020: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa 00 00
0x0c047fff8030: fa fa 00 00 fa fa 00 00 fa fa 00 05 fa fa 00 00
0x0c047fff8040: fa fa 00 00 fa fa 00 00 fa fa 00 00 fa fa 00 00
=>0x0c047fff8050: fa fa 00 00 fa fa 00 00 fa fa[01]fa fa fa 01 fa
0x0c047fff8060: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8070: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8080: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff8090: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff80a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==23148==ABORTING
```
--- end body ---
| 6,265 | [
-0.056500013917684555,
0.02562454156577587,
-0.013974976725876331,
-0.0013793068937957287,
0.027274833992123604,
0.028925126418471336,
-0.030065327882766724,
0.04056718945503235,
-0.004737089388072491,
0.022368963807821274,
-0.022443978115916252,
0.00022832171816844493,
0.015737788751721382,... | 353 |
CVE-2021-31254 | 2021-04-19T19:15:18.077 | Buffer overflow in the tenc_box_read function in MP4Box in GPAC 1.0.1 allows attackers to cause a denial of service or execute arbitrary code via a crafted file, related invalid IV sizes. | {
"cvssMetricV2": [
{
"acInsufInfo": false,
"baseSeverity": "MEDIUM",
"cvssData": {
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"availabilityImpact": "PARTIAL",
"baseScore": 6.8,
"confidentialityImpact": "PARTIAL",
"integrityImpact": "PARTIAL",
"vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P",
"version": "2.0"
},
"exploitabilityScore": 8.6,
"impactScore": 6.4,
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"source": "nvd@nist.gov",
"type": "Primary",
"userInteractionRequired": true
}
],
"cvssMetricV30": null,
"cvssMetricV31": [
{
"cvssData": {
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"integrityImpact": "HIGH",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H",
"version": "3.1"
},
"exploitabilityScore": 1.8,
"impactScore": 5.9,
"source": "nvd@nist.gov",
"type": "Primary"
}
]
} | [
{
"source": "cve@mitre.org",
"tags": [
"Patch",
"Third Party Advisory"
],
"url": "https://github.com/gpac/gpac/commit/8986422c21fbd9a7bf6561cae65aae42077447e8"
},
{
"source": "cve@mitre.org",
"tags": [
"Exploit",
"Third Party Advisory"
],
"url": "https://g... | [
{
"nodes": [
{
"cpeMatch": [
{
"criteria": "cpe:2.3:a:gpac:gpac:1.0.1:*:*:*:*:*:*:*",
"matchCriteriaId": "82DD2D40-0A05-48FD-940D-32B4D8B51AB3",
"versionEndExcluding": null,
"versionEndIncluding": null,
"versionStartExcluding": ... | https://github.com/gpac/gpac/issues/1703 | [
"Exploit",
"Third Party Advisory"
] | github.com | [
"gpac",
"gpac"
] | null | [Security]heap buffer overflow issue with gpac MP4Box | https://api.github.com/repos/gpac/gpac/issues/1703/comments | 0 | 2021-03-11T08:32:34 | 2023-09-22T06:12:25Z | https://github.com/gpac/gpac/issues/1703 | 828,921,799 | 1,703 | true | This is a GitHub Issue
repo:gpac
owner:gpac
Title : [Security]heap buffer overflow issue with gpac MP4Box
Issue date:
--- start body ---
None
--- end body ---
| 160 | [
-0.039044011384248734,
0.02426203526556492,
-0.003756725462153554,
0.01664052903652191,
0.03881349414587021,
0.02038644813001156,
-0.023368777707219124,
0.05183776840567589,
-0.015300641767680645,
0.03645068407058716,
0.002775222295895219,
0.020472893491387367,
0.021611075848340988,
0.0097... | 197 |
null | null | null | null | null | null | null | null | null | [
"gpac",
"gpac"
] | How can I measure the quality of 360 video?
I'm playing 360 video tiled streaming, which is created by the following command.
mp4box.exe -dash 1000 -frag 1000 -profile live 10M.mp4 20M.mp4 30M.mp4 40M.mp4 -out "10x10\dash.mpd"
Is it possible to measure the merged viewport?
I saw that other people measured quality with VMAF and PSNR. #1713
But they used a complete MP4.
Is it possible to save the MP4 when I use MP4Client to play the DASH stream?
| How can I measure the quality of 360 video? | https://api.github.com/repos/gpac/gpac/issues/1835/comments | 1 | 2021-06-30T11:02:01 | 2021-07-05T15:08:16Z | https://github.com/gpac/gpac/issues/1835 | 933,572,058 | 1,835 | false | This is a GitHub Issue
repo:gpac
owner:gpac
Title : How can I measure the quality of 360 video?
Issue date:
--- start body ---
How can I measure the quality of 360 video?
I'm playing 360 video tiled streaming, which is created by the following command.
mp4box.exe -dash 1000 -frag 1000 -profile live 10M.mp4 20M.mp4 30M.mp4 40M.mp4 -out "10x10\dash.mpd"
Is it possible to measure the merged viewport?
I saw that other people measured quality with VMAF and PSNR. #1713
But they used a complete MP4.
Is it possible to save the MP4 when I use MP4Client to play the DASH stream?
--- end body ---
| 597 | [
-0.01712396927177906,
0.004221862182021141,
-0.02831919677555561,
-0.00317526631988585,
0.012046695686876774,
-0.0038375190924853086,
-0.03155162185430527,
0.057048358023166656,
-0.02980138175189495,
0.01955222897231579,
0.019599532708525658,
-0.014167482033371925,
0.024408750236034393,
-0... | 1,700 |
null | null | null | null | null | null | null | null | null | [
"openlink",
"virtuoso-opensource"
] | Fuzzer: Virtuoso 7.2.12 crashed at `xte_replace_strings_with_unames`.
It can also be reproduced with Version 7.2.13-dev.3239-pthreads as of Mar 17 2024 (da40b02).
PoC:
```SQL
SELECT x FROM ( SELECT 3421 x ) x ORDER BY ( xmlagg ( json_parse ( '[[1]]' ) ) );
```
backtrace:
```c
#0 0xb45ea1 (xte_replace_strings_with_unames+0x81)
#1 0xc11d44 (bif_xte_nodebld_final_impl+0x364)
#2 0x755284 (ins_call_bif+0xc4)
#3 0x763149 (code_vec_run_1+0xdf9)
#4 0x7b9121 (qn_input+0x3c1)
#5 0x7cd35a (qr_subq_exec+0x94a)
#6 0x756241 (ins_call+0xc91)
#7 0x757c30 (ins_call_vec+0x320)
#8 0x75b445 (code_vec_run_v+0xa25)
#9 0x7b90ef (qn_input+0x38f)
#10 0x7b985f (qn_ts_send_output+0x23f)
#11 0x52e956 (memcache_read_input+0x866)
#12 0x4369b0 (chash_read_input+0x770)
#13 0x7b912e (qn_input+0x3ce)
#14 0x7b9596 (qn_send_output+0x236)
#15 0x7b912e (qn_input+0x3ce)
#16 0x7c68fb (fun_ref_node_input+0x36b)
#17 0x7b912e (qn_input+0x3ce)
#18 0x7b9596 (qn_send_output+0x236)
#19 0x82bd7d (set_ctr_vec_input+0x99d)
#20 0x7b912e (qn_input+0x3ce)
#21 0x7ca91b (qr_exec+0x11db)
#22 0x7d82b6 (sf_sql_execute+0x11a6)
#23 0x7d8dbe (sf_sql_execute_w+0x17e)
#24 0x7e1a7d (sf_sql_execute_wrapper+0x3d)
#25 0xe2a11c (future_wrapper+0x3fc)
#26 0xe31a1e (_thread_boot+0x11e)
#27 0x7f84026c3609 (start_thread+0xd9)
#28 0x7f8402493353 (clone+0x43)
```
Ways to reproduce (write the PoC to the file /tmp/test.sql first):
```bash
# remove the old one
docker container rm virtdb_test -f
# start virtuoso through docker
docker run --name virtdb_test -itd --env DBA_PASSWORD=dba pkleef/virtuoso-opensource-7
# wait the server starting
sleep 10
# check whether the simple query works
echo "SELECT 1;" | docker exec -i virtdb_test isql 1111 dba
# run the poc
cat /tmp/test.sql | docker exec -i virtdb_test isql 1111 dba
``` | Fuzzer: Virtuoso 7.2.12 crashed at `xte_replace_strings_with_unames` | https://api.github.com/repos/openlink/virtuoso-opensource/issues/1252/comments | 0 | 2024-03-18T11:46:44 | 2024-03-22T10:51:01Z | https://github.com/openlink/virtuoso-opensource/issues/1252 | 2,191,983,234 | 1,252 | false | This is a GitHub Issue
repo:virtuoso-opensource
owner:openlink
Title : Fuzzer: Virtuoso 7.2.12 crashed at `xte_replace_strings_with_unames`
Issue date:
--- start body ---
Fuzzer: Virtuoso 7.2.12 crashed at `xte_replace_strings_with_unames`.
It can also be reproduced with Version 7.2.13-dev.3239-pthreads as of Mar 17 2024 (da40b02).
PoC:
```SQL
SELECT x FROM ( SELECT 3421 x ) x ORDER BY ( xmlagg ( json_parse ( '[[1]]' ) ) );
```
backtrace:
```c
#0 0xb45ea1 (xte_replace_strings_with_unames+0x81)
#1 0xc11d44 (bif_xte_nodebld_final_impl+0x364)
#2 0x755284 (ins_call_bif+0xc4)
#3 0x763149 (code_vec_run_1+0xdf9)
#4 0x7b9121 (qn_input+0x3c1)
#5 0x7cd35a (qr_subq_exec+0x94a)
#6 0x756241 (ins_call+0xc91)
#7 0x757c30 (ins_call_vec+0x320)
#8 0x75b445 (code_vec_run_v+0xa25)
#9 0x7b90ef (qn_input+0x38f)
#10 0x7b985f (qn_ts_send_output+0x23f)
#11 0x52e956 (memcache_read_input+0x866)
#12 0x4369b0 (chash_read_input+0x770)
#13 0x7b912e (qn_input+0x3ce)
#14 0x7b9596 (qn_send_output+0x236)
#15 0x7b912e (qn_input+0x3ce)
#16 0x7c68fb (fun_ref_node_input+0x36b)
#17 0x7b912e (qn_input+0x3ce)
#18 0x7b9596 (qn_send_output+0x236)
#19 0x82bd7d (set_ctr_vec_input+0x99d)
#20 0x7b912e (qn_input+0x3ce)
#21 0x7ca91b (qr_exec+0x11db)
#22 0x7d82b6 (sf_sql_execute+0x11a6)
#23 0x7d8dbe (sf_sql_execute_w+0x17e)
#24 0x7e1a7d (sf_sql_execute_wrapper+0x3d)
#25 0xe2a11c (future_wrapper+0x3fc)
#26 0xe31a1e (_thread_boot+0x11e)
#27 0x7f84026c3609 (start_thread+0xd9)
#28 0x7f8402493353 (clone+0x43)
```
Ways to reproduce (write the PoC to the file /tmp/test.sql first):
```bash
# remove the old one
docker container rm virtdb_test -f
# start virtuoso through docker
docker run --name virtdb_test -itd --env DBA_PASSWORD=dba pkleef/virtuoso-opensource-7
# wait the server starting
sleep 10
# check whether the simple query works
echo "SELECT 1;" | docker exec -i virtdb_test isql 1111 dba
# run the poc
cat /tmp/test.sql | docker exec -i virtdb_test isql 1111 dba
```
--- end body ---
| 2,018 | [
-0.018607938662171364,
-0.0004576177161652595,
-0.012780015356838703,
-0.0052991206757724285,
0.0465938039124012,
0.035381708294153214,
-0.008091051131486893,
0.03570712357759476,
-0.007876571267843246,
0.013223765417933464,
-0.012602514587342739,
0.01832689717411995,
-0.013393869623541832,
... | 3,407 |
null | null | null | null | null | null | null | null | null | [
"slims",
"slims9_bulian"
  ] | Has anyone ever hit an error when installing the latest SLiMS: Access denied for user ''@'localhost' (using password: NO), even though on the page before page 1 the test connection to the DB already succeeded | SLiMS install error at step 2 of 2 | https://api.github.com/repos/slims/slims9_bulian/issues/227/comments | 1 | 2024-01-05T18:40:52 | 2024-01-10T07:16:27Z | https://github.com/slims/slims9_bulian/issues/227 | 2,067,866,694 | 227 | false | This is a GitHub Issue
repo:slims9_bulian
owner:slims
Title : SLiMS install error at step 2 of 2
Issue date:
--- start body ---
Has anyone ever hit an error when installing the latest SLiMS: Access denied for user ''@'localhost' (using password: NO), even though on the page before page 1 the test connection to the DB already succeeded
--- end body ---
| 330 | [
-0.004255372099578381,
0.033548757433891296,
-0.010705659165978432,
0.006464822683483362,
0.017864568158984184,
-0.011417916044592857,
0.024914460256695747,
0.04549723491072655,
-0.03506048768758774,
0.01933269016444683,
0.006588377524167299,
0.01944897696375847,
-0.014484982006251812,
-0.... | 4,041 |
null | null | null | null | null | null | null | null | null | [
"ImageMagick",
"ImageMagick"
] | ### ImageMagick version
7.1.1-24
### Operating system
Linux
### Operating system, version and so on
Debian 12
### Description
```
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/dng.c:42:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
../coders/dng.c: In function 'SetLibRawParams':
../coders/dng.c:337:13: error: 'libraw_data_t' has no member named 'rawparams'; did you mean 'params'?
337 | raw_info->rawparams.max_raw_memory_mb=8192;
| ^~~~~~~~~
| params
../coders/dng.c:340:15: error: 'libraw_data_t' has no member named 'rawparams'; did you mean 'params'?
340 | raw_info->rawparams.max_raw_memory_mb=(unsigned int)
| ^~~~~~~~~
| params
make[1]: *** [Makefile:10711: coders/dng_la-dng.lo] Error 1
make[1]: *** Waiting for unfinished jobs....
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/ora.c:45:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/otb.c:41:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/pango.c:42:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/pattern.c:42:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/palm.c:44:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
make: *** [Makefile:6239: all] Error 2
```
This starts happening in versions 7.1.1-24 & 7.1.1-25.
It does not happen in version 7.1.1-23 or below.
### Steps to Reproduce
Run make and this error occurs.
I build the OpenCL headers from the official GitHub repo and source them accordingly during the build.
### Images
_No response_ | make failure since version 7.1.1-24 | https://api.github.com/repos/ImageMagick/ImageMagick/issues/7002/comments | 1 | 2024-01-04T11:15:38 | 2024-01-04T12:37:39Z | https://github.com/ImageMagick/ImageMagick/issues/7002 | 2,065,463,903 | 7,002 | false | This is a GitHub Issue
repo:ImageMagick
owner:ImageMagick
Title : make failure since version 7.1.1-24
Issue date:
--- start body ---
### ImageMagick version
7.1.1-24
### Operating system
Linux
### Operating system, version and so on
Debian 12
### Description
```
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/dng.c:42:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
../coders/dng.c: In function 'SetLibRawParams':
../coders/dng.c:337:13: error: 'libraw_data_t' has no member named 'rawparams'; did you mean 'params'?
337 | raw_info->rawparams.max_raw_memory_mb=8192;
| ^~~~~~~~~
| params
../coders/dng.c:340:15: error: 'libraw_data_t' has no member named 'rawparams'; did you mean 'params'?
340 | raw_info->rawparams.max_raw_memory_mb=(unsigned int)
| ^~~~~~~~~
| params
make[1]: *** [Makefile:10711: coders/dng_la-dng.lo] Error 1
make[1]: *** Waiting for unfinished jobs....
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/ora.c:45:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/otb.c:41:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/pango.c:42:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/pattern.c:42:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
In file included from /home/jman/tmp/magick-build-script/workspace/include/CL/cl.h:20,
from ../MagickCore/studio.h:149,
from ../coders/palm.c:44:
/home/jman/tmp/magick-build-script/workspace/include/CL/cl_version.h:22:9: note: '#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)'
22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
| ^~~~~~~
make: *** [Makefile:6239: all] Error 2
```
This starts happening in versions 7.1.1-24 & 7.1.1-25.
It does not happen in version 7.1.1-23 or below.
### Steps to Reproduce
Run make and this error occurs.
I build the OpenCL headers from the official GitHub repo and source them accordingly during the build.
### Images
_No response_
--- end body ---
| 4,287 | [
-0.025027064606547356,
0.014783241786062717,
0.0008297997992485762,
0.0406271331012249,
0.021987393498420715,
0.008630252443253994,
-0.01264074258506298,
0.029941421002149582,
0.002658037468791008,
0.030959106981754303,
-0.0034380410797894,
-0.024330751970410347,
0.0006389835034497082,
0.0... | 2,443 |
null | null | null | null | null | null | null | null | null | [
"kubernetes",
"kubernetes"
] | ### What happened?
When calling the API to create PVC resources, the API call returns while the PVC is not yet in a bound state (still pending). When calling the API to delete PVC resources, the API returns success while the PVC is still in the terminating state and has not been completely removed.
### What did you expect to happen?
When calling the API to create PVC resources, the call should return only once the PVC is in a bound state. When calling the API to delete PVC resources, the call should return success only once the PVC has been completely removed.
### How can we reproduce it (as minimally and precisely as possible)?
calling the API interface to create or delete PVC resources
### Anything else we need to know?
_No response_
### Kubernetes version
v1.23
### Cloud provider
none
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| problem when calling the API interface to create or delete PVC resources | https://api.github.com/repos/kubernetes/kubernetes/issues/125155/comments | 9 | 2024-05-28T03:15:55 | 2024-06-12T09:50:24Z | https://github.com/kubernetes/kubernetes/issues/125155 | 2,320,040,734 | 125,155 | false | This is a GitHub Issue
repo:kubernetes
owner:kubernetes
Title : problem when calling the API interface to create or delete PVC resources
Issue date:
--- start body ---
### What happened?
When calling the API to create PVC resources, the API call returns while the PVC is not yet in a bound state (still pending). When calling the API to delete PVC resources, the API returns success while the PVC is still in the terminating state and has not been completely removed.
### What did you expect to happen?
When calling the API to create PVC resources, the call should return only once the PVC is in a bound state. When calling the API to delete PVC resources, the call should return success only once the PVC has been completely removed.
### How can we reproduce it (as minimally and precisely as possible)?
calling the API interface to create or delete PVC resources
### Anything else we need to know?
_No response_
### Kubernetes version
v1.23
### Cloud provider
none
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
--- end body ---
| 1,541 | [
-0.040087420493364334,
-0.007030968088656664,
-0.009340559132397175,
0.03662643954157829,
-0.0012450672220438719,
-0.025248805060982704,
-0.004493142943829298,
0.02200583927333355,
-0.004704344552010298,
-0.008809149265289307,
0.022523624822497368,
0.0019280659034848213,
-0.03684445098042488... | 3,186 |
null | null | null | null | null | null | null | null | null | [
"free5gc",
"free5gc"
] | ## Issue Description
### What's the version are you using
free5GC v3.3.0 (latest stable version) on Ubuntu Server 20.04.6 with BASH 5.0.17
### Is your feature request related to a problem? Please describe.
It isn't exactly a problem; however, it would be better if [those configurations](https://free5gc.org/guide/5-install-ueransim/?h=reboot#7-testing-ueransim-against-free5gc) mentioned in the installation section of the documentation could be applied easily.
### Describe the solution you'd like
To have a nice and easy solution, I'd suggest coding a script to execute the commands faster.
### Describe alternatives/workarounds you've considered
It might be possible to use other iptables and/or kernel configuration files to make those configurations permanent on the host, but for now I'd suggest the script as an easy method of solving the issue.
Maybe in the future I/we could change the installation instructions with those permanent instructions (I didn't do it already as I couldn't test it myself yet).
### Additional context
I already made a PR to the repo with the suggested script.
## Issue Template Checklist
### If willing to resolve an issue by submitting Pull request, please consider the points below.
- [x] Yes, I have the time and I know how to start.
- [ ] Yes, I have the time but I don't know how to start, I would need guidance.
- [ ] No, I don't have the time, although I believe I could do it if I had the time...
- [ ] No, I don't have the time and I wouldn't even know how to start.
| [Feat] Add a script to assist reloading the host configs after rebooting | https://api.github.com/repos/free5gc/free5gc/issues/517/comments | 0 | 2023-12-11T23:45:27 | 2023-12-26T12:19:02Z | https://github.com/free5gc/free5gc/issues/517 | 2,036,723,050 | 517 | false | This is a GitHub Issue
repo:free5gc
owner:free5gc
Title : [Feat] Add a script to assist reloading the host configs after rebooting
Issue date:
--- start body ---
## Issue Description
### What's the version are you using
free5GC v3.3.0 (latest stable version) on Ubuntu Server 20.04.6 with BASH 5.0.17
### Is your feature request related to a problem? Please describe.
It isn't exactly a problem; however, it would be better if [those configurations](https://free5gc.org/guide/5-install-ueransim/?h=reboot#7-testing-ueransim-against-free5gc) mentioned in the installation section of the documentation could be applied easily.
### Describe the solution you'd like
To have a nice and easy solution, I'd suggest coding a script to execute the commands faster.
### Describe alternatives/workarounds you've considered
It might be possible to use other iptables and/or kernel configuration files to make those configurations permanent on the host, but for now I'd suggest the script as an easy method of solving the issue.
Maybe in the future I/we could change the installation instructions with those permanent instructions (I didn't do it already as I couldn't test it myself yet).
### Additional context
I already made a PR to the repo with the suggested script.
## Issue Template Checklist
### If willing to resolve an issue by submitting Pull request, please consider the points below.
- [x] Yes, I have the time and I know how to start.
- [ ] Yes, I have the time but I don't know how to start, I would need guidance.
- [ ] No, I don't have the time, although I believe I could do it if I had the time...
- [ ] No, I don't have the time and I wouldn't even know how to start.
--- end body ---
| 1,731 | [
-0.009512671269476414,
0.02415403537452221,
-0.019334468990564346,
-0.01499264407902956,
0.00970236212015152,
0.03532474860548973,
-0.013735060580074787,
0.05687930807471275,
-0.03872514143586159,
0.031727638095617294,
-0.03448167443275452,
-0.01365777850151062,
0.011557122692465782,
0.011... | 4,381 |
null | null | null | null | null | null | null | null | null | [
"LibreDWG",
"libredwg"
  ] | We encode the thumbnail directly after the header, but for r13c3 we need/want? to encode it much later (at the very end, after MEASUREMENT), and then patch up the address/CRC in the header.
Currently we write the wrong header.thumbnail_address, mismatching the actual thumbnail start (fixed).
r13b1-r14: directly after header (88)
r13c3-r2000: after MEASUREMENT or AUXHEADER
as from the spec r13-r2000:
* HEADER
* FILE HEADER
* HEADER VARIABLES
* CLASS
* MEASUREMENT (R13 only, optional)
* PADDING (R13C3 and later)
* THUMBNAIL (Pre-R13C3)
* OBJECTS
* HANDLES
* OBJECT FREE SPACE (optional)
* MEASUREMENT (R14-R2000, optional)
* AuxHEADER (=SECOND HEADER, optional for R13C3 and later)
* THUMBNAIL (R13C3 and later)
| encode thumbnail_address r13c3 | https://api.github.com/repos/LibreDWG/libredwg/issues/853/comments | 4 | 2023-10-09T06:18:05 | 2023-11-24T07:43:41Z | https://github.com/LibreDWG/libredwg/issues/853 | 1,932,374,881 | 853 | false | This is a GitHub Issue
repo:libredwg
owner:LibreDWG
Title : encode thumbnail_address r13c3
Issue date:
--- start body ---
We encode the thumbnail directly after the header, but for r13c3 we need/want? to encode it much later (at the very end, after MEASUREMENT), and then patch up the address/CRC in the header.
Currently we write the wrong header.thumbnail_address, mismatching the actual thumbnail start (fixed).
r13b1-r14: directly after header (88)
r13c3-r2000: after MEASUREMENT or AUXHEADER
as from the spec r13-r2000:
* HEADER
* FILE HEADER
* HEADER VARIABLES
* CLASS
* MEASUREMENT (R13 only, optional)
* PADDING (R13C3 and later)
* THUMBNAIL (Pre-R13C3)
* OBJECTS
* HANDLES
* OBJECT FREE SPACE (optional)
* MEASUREMENT (R14-R2000, optional)
* AuxHEADER (=SECOND HEADER, optional for R13C3 and later)
* THUMBNAIL (R13C3 and later)
--- end body ---
| 887 | [
-0.008883495815098286,
0.006460022181272507,
-0.011029117740690708,
0.02878839150071144,
0.013730441220104694,
0.008327794261276722,
0.014293860644102097,
0.06779550015926361,
-0.016393175348639488,
0.028587721288204193,
0.02026764489710331,
-0.015490161255002022,
0.02184212952852249,
-0.0... | 2,779 |
null | null | null | null | null | null | null | null | null | [
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
`
!pip install langchain
!pip install langchain-community
!pip install pypdf
from langchain_community.document_loaders import PyPDFLoader
!wget https://www.jinji.go.jp/content/900035876.pdf
loader = PyPDFLoader("900035876.pdf")
pages = loader.load()
print(pages[0].page_content)
`
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to load a Japanese document through LangChain's document loaders. However, it doesn't seem to work for Japanese documents: the page content is an empty string.
### System Info
langchain 0.2.1
langchain-community 0.2.1
langchain-core 0.2.1
langchain-openai 0.1.7
langchain-text-splitters 0.2.0
platform: Mac
python version: 3.11.5 | PDF Loader Returns blank content for Japanese text | https://api.github.com/repos/langchain-ai/langchain/issues/22259/comments | 0 | 2024-05-29T05:21:40 | 2024-05-29T05:24:07Z | https://github.com/langchain-ai/langchain/issues/22259 | 2,322,380,482 | 22,259 | false | This is a GitHub Issue
repo:langchain
owner:hwchase17
Title : PDF Loader Returns blank content for Japanese text
Issue date:
--- start body ---
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
`
!pip install langchain
!pip install langchain-community
!pip install pypdf
from langchain_community.document_loaders import PyPDFLoader
!wget https://www.jinji.go.jp/content/900035876.pdf
loader = PyPDFLoader("900035876.pdf")
pages = loader.load()
print(pages[0].page_content)
`
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to load a Japanese document through LangChain's document loaders. However, it doesn't seem to work for Japanese documents: the page content is an empty string.
### System Info
langchain 0.2.1
langchain-community 0.2.1
langchain-core 0.2.1
langchain-openai 0.1.7
langchain-text-splitters 0.2.0
platform: Mac
python version: 3.11.5
--- end body ---
| 1,408 | [
-0.004707899410277605,
-0.014443391002714634,
-0.01741821877658367,
0.05198543891310692,
-0.021644895896315575,
-0.013655937276780605,
0.01433570496737957,
0.02560235746204853,
-0.035186417400836945,
0.025804270058870316,
0.011192618869245052,
0.023367872461676598,
-0.047408781945705414,
-... | 4,453 |
null | null | null | null | null | null | null | null | null | [
"axiomatic-systems",
"Bento4"
] | I’m trying to encrypt the attached segment with `mp4encrypt` using the `MPEG-CENC` method, but the resulting segment becomes unplayable:
```sh
mp4encrypt --method MPEG-CENC \
--key 1:4a13a4c5449b903ca558716341531799:random \
--property 1:KID:a167ef50633c27f314f36a038ef20d6e \
clear/154023527127780.mp4 cenc/154023527127780.mp4
```
Shaka Player throws `MediaError(3,,CHUNK_DEMUXER_ERROR_APPEND_FAILED: Incorrect CENC subsample size.)`.
Decrypting the encrypted segment with `mp4decrypt` and checking its integrity with FFmpeg results in:
```sh
mp4decrypt --key a167ef50633c27f314f36a038ef20d6e:4a13a4c5449b903ca558716341531799 \
cenc/154023527127780.mp4 clear/154023527127780.mp4
ffmpeg -v error -i clear/154023527127780.mp4 -f null -
```
```log
[h264 @ 0x14263a4c0] Invalid NAL unit size (337906356 > 1).
[h264 @ 0x14263a4c0] Error splitting the input into NAL units.
[vist#0:0/h264 @ 0x142612c60] Error submitting packet to decoder: Invalid data found when processing input
[h264 @ 0x14260a400] Invalid NAL unit size (1672766743 > 1).
[h264 @ 0x14260a400] Error splitting the input into NAL units.
[vist#0:0/h264 @ 0x142612c60] Error submitting packet to decoder: Invalid data found when processing input
[h264 @ 0x1426438d0] Invalid NAL unit size (-334245472 > 1).
[h264 @ 0x1426438d0] Error splitting the input into NAL units.
[vist#0:0/h264 @ 0x142612c60] Decoding error: Invalid data found when processing input
```
https://github.com/axiomatic-systems/Bento4/assets/1440785/ec6946b1-0825-4745-9c7c-d8e513c6a0a4 | Encrypted segment using MPEG-CENC method is not playable | https://api.github.com/repos/axiomatic-systems/Bento4/issues/950/comments | 3 | 2024-04-05T07:04:45 | 2024-06-05T09:32:07Z | https://github.com/axiomatic-systems/Bento4/issues/950 | 2,227,230,477 | 950 | false | This is a GitHub Issue
repo:Bento4
owner:axiomatic-systems
Title : Encrypted segment using MPEG-CENC method is not playable
Issue date:
--- start body ---
I’m trying to encrypt the attached segment with `mp4encrypt` using the `MPEG-CENC` method, but the resulting segment becomes unplayable:
```sh
mp4encrypt --method MPEG-CENC \
--key 1:4a13a4c5449b903ca558716341531799:random \
--property 1:KID:a167ef50633c27f314f36a038ef20d6e \
clear/154023527127780.mp4 cenc/154023527127780.mp4
```
Shaka Player throws `MediaError(3,,CHUNK_DEMUXER_ERROR_APPEND_FAILED: Incorrect CENC subsample size.)`.
Decrypting the encrypted segment with `mp4decrypt` and checking its integrity with FFmpeg results in:
```sh
mp4decrypt --key a167ef50633c27f314f36a038ef20d6e:4a13a4c5449b903ca558716341531799 \
cenc/154023527127780.mp4 clear/154023527127780.mp4
ffmpeg -v error -i clear/154023527127780.mp4 -f null -
```
```log
[h264 @ 0x14263a4c0] Invalid NAL unit size (337906356 > 1).
[h264 @ 0x14263a4c0] Error splitting the input into NAL units.
[vist#0:0/h264 @ 0x142612c60] Error submitting packet to decoder: Invalid data found when processing input
[h264 @ 0x14260a400] Invalid NAL unit size (1672766743 > 1).
[h264 @ 0x14260a400] Error splitting the input into NAL units.
[vist#0:0/h264 @ 0x142612c60] Error submitting packet to decoder: Invalid data found when processing input
[h264 @ 0x1426438d0] Invalid NAL unit size (-334245472 > 1).
[h264 @ 0x1426438d0] Error splitting the input into NAL units.
[vist#0:0/h264 @ 0x142612c60] Decoding error: Invalid data found when processing input
```
https://github.com/axiomatic-systems/Bento4/assets/1440785/ec6946b1-0825-4745-9c7c-d8e513c6a0a4
--- end body ---
| 1,739 | [
0.0015638088807463646,
0.006774052046239376,
-0.010067249648272991,
-0.00030793235055170953,
0.0034477387089282274,
-0.0033778271172195673,
-0.0444784052670002,
0.03143807873129845,
-0.02634558081626892,
0.059432096779346466,
0.01914837956428528,
-0.0057952916249632835,
-0.014070600271224976... | 1,828 |
null | null | null | null | null | null | null | null | null | [
"openlink",
"virtuoso-opensource"
] | Hi!
Our Virtuoso has been crashing for the last few months constantly (after 2-8 hours). The error is always segmentation fault. An extract of `/var/log/kern.log` is below
```
Nov 1 15:42:51 virtuoso kernel: [1295132.159079] virtuoso-t[6661]: segfault at 70 ip 00000000006891f9 sp 00007fb1c7923bb0 error 4 in virtuoso-t[400000+df3000]
Nov 1 20:00:31 virtuoso kernel: [1310591.247369] virtuoso-t[24645]: segfault at 70 ip 00000000006891f9 sp 00007f0f14836bb0 error 4 in virtuoso-t[400000+df3000]
Nov 2 19:23:31 virtuoso kernel: [1394769.846640] virtuoso-t[28091]: segfault at 70 ip 00000000006891f9 sp 00007f5ae4b78bb0 error 4 in virtuoso-t[400000+df3000]
Nov 2 20:23:28 virtuoso kernel: [1398366.479007] virtuoso-t[8756]: segfault at 70 ip 00000000006891f9 sp 00007fcfa4b98bb0 error 4 in virtuoso-t[400000+df3000]
Nov 2 23:12:51 virtuoso kernel: [1408528.783002] virtuoso-t[20164]: segfault at 70 ip 00000000006891f9 sp 00007f85f780dbb0 error 4 in virtuoso-t[400000+df3000]
Nov 3 07:04:01 virtuoso kernel: [1436798.537446] virtuoso-t[15692]: segfault at 70 ip 00000000006891f9 sp 00007fcc9d29abb0 error 4 in virtuoso-t[400000+df3000]
Nov 3 13:00:29 virtuoso kernel: [1458185.606298] virtuoso-t[20063]: segfault at 70 ip 00000000006891f9 sp 00007ff15e01fbb0 error 4 in virtuoso-t[400000+df3000]
```
* First I thought it was caused by some heavy queries we were running, so I enabled `HTTPLogFile` in the Virtuoso `ini`, and I am now getting logs. However, there is nothing in them relating the queries to the crashes.
* Next we thought it might be the file system, so we created a new virtual machine and copied Virtuoso and the database over. The result was the same: Virtuoso crashed once we started using it.
* Since we are running an old version (Virtuoso 7.2.6-dev), I thought it would be good to migrate to the latest Virtuoso (7.2.11), and that it would be a good opportunity to migrate the database into the new format supported by 7.2.7+, which uses 64-bit prefix IDs in `RDF_IRI`. To do that, I tried to follow migration [option 1](https://github.com/openlink/virtuoso-opensource/blob/develop/7/README.UPGRADE.md#upgrade-method-1), which means I need to dump the whole database using the `RDF_DUMP_NQUADS()` procedure. The problem, however, is that whenever I start the procedure, Virtuoso crashes somewhere in the middle, so I cannot complete the dump.
So, the question is: what should I do? Should I try copying the existing database into the new Virtuoso 7.2.11 and then attempt migration [option 2](https://github.com/openlink/virtuoso-opensource/blob/develop/7/README.UPGRADE.md#upgrade-method-2)?
Any other help you can provide?
Details of current Virtuoso
**endpoint**: https://www.foodie-cloud.org/sparql
**version**: `7.2.6-dev`
**database size**: 265 GB
**`Virtuoso.ini` attached** (renamed to `virtuoso.txt`)
[`virtuoso.txt`](https://github.com/openlink/virtuoso-opensource/files/13250049/virtuoso.txt)
| Virtuoso crashing - migration options | https://api.github.com/repos/openlink/virtuoso-opensource/issues/1165/comments | 14 | 2023-11-03T12:20:49 | 2023-11-20T02:28:14Z | https://github.com/openlink/virtuoso-opensource/issues/1165 | 1,976,072,784 | 1,165 | false | This is a GitHub Issue
repo:virtuoso-opensource
owner:openlink
Title : Virtuoso crashing - migration options
Issue date:
--- start body ---
Hi!
Our Virtuoso has been crashing for the last few months constantly (after 2-8 hours). The error is always segmentation fault. An extract of `/var/log/kern.log` is below
```
Nov 1 15:42:51 virtuoso kernel: [1295132.159079] virtuoso-t[6661]: segfault at 70 ip 00000000006891f9 sp 00007fb1c7923bb0 error 4 in virtuoso-t[400000+df3000]
Nov 1 20:00:31 virtuoso kernel: [1310591.247369] virtuoso-t[24645]: segfault at 70 ip 00000000006891f9 sp 00007f0f14836bb0 error 4 in virtuoso-t[400000+df3000]
Nov 2 19:23:31 virtuoso kernel: [1394769.846640] virtuoso-t[28091]: segfault at 70 ip 00000000006891f9 sp 00007f5ae4b78bb0 error 4 in virtuoso-t[400000+df3000]
Nov 2 20:23:28 virtuoso kernel: [1398366.479007] virtuoso-t[8756]: segfault at 70 ip 00000000006891f9 sp 00007fcfa4b98bb0 error 4 in virtuoso-t[400000+df3000]
Nov 2 23:12:51 virtuoso kernel: [1408528.783002] virtuoso-t[20164]: segfault at 70 ip 00000000006891f9 sp 00007f85f780dbb0 error 4 in virtuoso-t[400000+df3000]
Nov 3 07:04:01 virtuoso kernel: [1436798.537446] virtuoso-t[15692]: segfault at 70 ip 00000000006891f9 sp 00007fcc9d29abb0 error 4 in virtuoso-t[400000+df3000]
Nov 3 13:00:29 virtuoso kernel: [1458185.606298] virtuoso-t[20063]: segfault at 70 ip 00000000006891f9 sp 00007ff15e01fbb0 error 4 in virtuoso-t[400000+df3000]
```
* First I thought it was caused by some heavy queries we were running, so I enabled `HTTPLogFile` in the Virtuoso `ini`, and I am now getting logs. However, there is nothing in them relating the queries to the crashes.
* Next we thought it might be the file system, so we created a new virtual machine and copied Virtuoso and the database over. The result was the same: Virtuoso crashed once we started using it.
* Since we are running an old version (Virtuoso 7.2.6-dev), I thought it would be good to migrate to the latest Virtuoso (7.2.11), and that it would be a good opportunity to migrate the database into the new format supported by 7.2.7+, which uses 64-bit prefix IDs in `RDF_IRI`. To do that, I tried to follow migration [option 1](https://github.com/openlink/virtuoso-opensource/blob/develop/7/README.UPGRADE.md#upgrade-method-1), which means I need to dump the whole database using the `RDF_DUMP_NQUADS()` procedure. The problem, however, is that whenever I start the procedure, Virtuoso crashes somewhere in the middle, so I cannot complete the dump.
So, the question is: what should I do? Should I try copying the existing database into the new Virtuoso 7.2.11 and then attempt migration [option 2](https://github.com/openlink/virtuoso-opensource/blob/develop/7/README.UPGRADE.md#upgrade-method-2)?
Any other help you can provide?
Details of current Virtuoso
**endpoint**: https://www.foodie-cloud.org/sparql
**version**: `7.2.6-dev`
**database size**: 265 GB
**`Virtuoso.ini` attached** (renamed to `virtuoso.txt`)
[`virtuoso.txt`](https://github.com/openlink/virtuoso-opensource/files/13250049/virtuoso.txt)
--- end body ---
| 3,103 | [
-0.014446742832660675,
-0.03200872614979744,
-0.015118512324988842,
0.00536677660420537,
0.028760608285665512,
-0.007906214334070683,
-0.01993163302540779,
0.029572637751698494,
0.0005753413424827158,
0.051881302148103714,
0.026250697672367096,
-0.025689659640192986,
0.017141204327344894,
... | 3,485 |
null | null | null | null | null | null | null | null | null | [
"jerryscript-project",
"jerryscript"
] | ###### JerryScript revision
cefd391772529c8a9531d7b3c244d78d38be47c6
###### Build platform
Ubuntu 22.04.3
###### Build steps
```sh
python ./tools/build.py --builddir=xxx --clean --compile-flag=-fsanitize=address --compile-flag=-g --strip=off --lto=off --logging=on --line-info=on --error-message=on --stack-limit=20
```
###### Test case
```sh
function f(){return}
switch (class extends c { static { } ; }) {
case 1:
break}
while (false) {continue}
```
###### Execution steps
```sh
./xxx/bin/jerry poc.js
```
###### Output
```sh
Release:
Program received signal SIGSEGV, Segmentation fault.
AddressSanitizer:DEADLYSIGNAL
=================================================================
==1362976==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000008 (pc 0x55e5d2682005 bp 0x7ffe140aa670 sp 0x7ffe140aa540 T0)
==1362976==The signal is caused by a READ memory access.
==1362976==Hint: address points to the zero page.
#0 0x55e5d2682005 in scanner_seek /jerryscript/jerry-core/parser/js/js-scanner-util.c:372:17
#1 0x55e5d273667e in parser_parse_switch_statement_start /jerryscript/jerry-core/parser/js/js-parser-statm.c:1714:5
#2 0x55e5d272d2d1 in parser_parse_statements /jerryscript/jerry-core/parser/js/js-parser-statm.c:2821:9
#3 0x55e5d267fdfd in parser_parse_source /jerryscript/jerry-core/parser/js/js-parser.c:2280:5
#4 0x55e5d267e924 in parser_parse_script /jerryscript/jerry-core/parser/js/js-parser.c:3332:38
#5 0x55e5d25dbf38 in jerry_parse_common /jerryscript/jerry-core/api/jerryscript.c:418:21
#6 0x55e5d25dbd34 in jerry_parse /jerryscript/jerry-core/api/jerryscript.c:486:10
#7 0x55e5d274176f in jerryx_source_parse_script /jerryscript/jerry-ext/util/sources.c:52:26
#8 0x55e5d274192f in jerryx_source_exec_script /jerryscript/jerry-ext/util/sources.c:63:26
#9 0x55e5d25d75b2 in main /jerryscript/jerry-main/main-desktop.c:156:20
#10 0x7f39cdf6ed8f in __libc_start_call_main csu/../sysdeps/nptl/libc_start_call_main.h:58:16
#11 0x7f39cdf6ee3f in __libc_start_main csu/../csu/libc-start.c:392:3
#12 0x55e5d2517424 in _start (/jerryscript/0323re/bin/jerry+0x41424) (BuildId: efa40b4121fb9ed9276f89fc661eef85c730ab65)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /jerryscript/jerry-core/parser/js/js-scanner-util.c:372:17 in scanner_seek
==1362976==ABORTING
```
```sh
Debug:
ICE: Assertion 'context_p->next_scanner_info_p->source_p == context_p->source_p && context_p->next_scanner_info_p->type == SCANNER_TYPE_SWITCH' failed at /jerryscript/jerry-core/parser/js/js-parser-statm.c(parser_parse_switch_statement_start):1666.
Error: JERRY_FATAL_FAILED_ASSERTION
Program received signal SIGABRT, Aborted.
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140737350406336) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.
``` | SEGV in scanner_seek /jerryscript/jerry-core/parser/js/js-scanner-util.c:372:17 | https://api.github.com/repos/jerryscript-project/jerryscript/issues/5132/comments | 0 | 2024-03-26T07:50:12 | 2024-03-26T08:59:44Z | https://github.com/jerryscript-project/jerryscript/issues/5132 | 2,207,491,351 | 5,132 | false | This is a GitHub Issue
repo:jerryscript
owner:jerryscript-project
Title : SEGV in scanner_seek /jerryscript/jerry-core/parser/js/js-scanner-util.c:372:17
Issue date:
--- start body ---
###### JerryScript revision
cefd391772529c8a9531d7b3c244d78d38be47c6
###### Build platform
Ubuntu 22.04.3
###### Build steps
```sh
python ./tools/build.py --builddir=xxx --clean --compile-flag=-fsanitize=address --compile-flag=-g --strip=off --lto=off --logging=on --line-info=on --error-message=on --stack-limit=20
```
###### Test case
```sh
function f(){return}
switch (class extends c { static { } ; }) {
case 1:
break}
while (false) {continue}
```
###### Execution steps
```sh
./xxx/bin/jerry poc.js
```
###### Output
```sh
Release:
Program received signal SIGSEGV, Segmentation fault.
AddressSanitizer:DEADLYSIGNAL
=================================================================
==1362976==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000008 (pc 0x55e5d2682005 bp 0x7ffe140aa670 sp 0x7ffe140aa540 T0)
==1362976==The signal is caused by a READ memory access.
==1362976==Hint: address points to the zero page.
#0 0x55e5d2682005 in scanner_seek /jerryscript/jerry-core/parser/js/js-scanner-util.c:372:17
#1 0x55e5d273667e in parser_parse_switch_statement_start /jerryscript/jerry-core/parser/js/js-parser-statm.c:1714:5
#2 0x55e5d272d2d1 in parser_parse_statements /jerryscript/jerry-core/parser/js/js-parser-statm.c:2821:9
#3 0x55e5d267fdfd in parser_parse_source /jerryscript/jerry-core/parser/js/js-parser.c:2280:5
#4 0x55e5d267e924 in parser_parse_script /jerryscript/jerry-core/parser/js/js-parser.c:3332:38
#5 0x55e5d25dbf38 in jerry_parse_common /jerryscript/jerry-core/api/jerryscript.c:418:21
#6 0x55e5d25dbd34 in jerry_parse /jerryscript/jerry-core/api/jerryscript.c:486:10
#7 0x55e5d274176f in jerryx_source_parse_script /jerryscript/jerry-ext/util/sources.c:52:26
#8 0x55e5d274192f in jerryx_source_exec_script /jerryscript/jerry-ext/util/sources.c:63:26
#9 0x55e5d25d75b2 in main /jerryscript/jerry-main/main-desktop.c:156:20
#10 0x7f39cdf6ed8f in __libc_start_call_main csu/../sysdeps/nptl/libc_start_call_main.h:58:16
#11 0x7f39cdf6ee3f in __libc_start_main csu/../csu/libc-start.c:392:3
#12 0x55e5d2517424 in _start (/jerryscript/0323re/bin/jerry+0x41424) (BuildId: efa40b4121fb9ed9276f89fc661eef85c730ab65)
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV /jerryscript/jerry-core/parser/js/js-scanner-util.c:372:17 in scanner_seek
==1362976==ABORTING
```
```sh
Debug:
ICE: Assertion 'context_p->next_scanner_info_p->source_p == context_p->source_p && context_p->next_scanner_info_p->type == SCANNER_TYPE_SWITCH' failed at /jerryscript/jerry-core/parser/js/js-parser-statm.c(parser_parse_switch_statement_start):1666.
Error: JERRY_FATAL_FAILED_ASSERTION
Program received signal SIGABRT, Aborted.
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140737350406336) at ./nptl/pthread_kill.c:44
44 ./nptl/pthread_kill.c: No such file or directory.
```
--- end body ---
| 3,156 | [
0.007407117169350386,
0.017671579495072365,
-0.0021833046339452267,
0.01068940106779337,
0.00502965971827507,
0.012960623949766159,
-0.01664586551487446,
0.04061825945973396,
-0.031269609928131104,
0.03674985095858574,
-0.0306248776614666,
0.012088767252862453,
0.023532800376415253,
0.0221... | 2,109 |
null | null | null | null | null | null | null | null | null | [
"gpac",
"gpac"
] | Memory leak in function [gf_dash_resolve_url](https://github.com/gpac/gpac/blob/941a0891b989f5ad86b739cfb12c84b7e62e8ee4/src/media_tools/dash_client.c#L3360)
The memory leak is reported by a static analyzer tool developed at CAST (https://www.linkedin.com/company/cast-center).
Specifically, dynamic memory is allocated [here](https://github.com/gpac/gpac/blob/941a0891b989f5ad86b739cfb12c84b7e62e8ee4/src/media_tools/dash_client.c#L3405) and is never freed.
repo:gpac
owner:gpac
Title : Memory leak in function gf_dash_resolve_url
Issue date:
--- start body ---
Memory leak in function [gf_dash_resolve_url](https://github.com/gpac/gpac/blob/941a0891b989f5ad86b739cfb12c84b7e62e8ee4/src/media_tools/dash_client.c#L3360)
The memory leak is reported by a static analyzer tool developed at CAST (https://www.linkedin.com/company/cast-center).
Specifically, dynamic memory is allocated [here](https://github.com/gpac/gpac/blob/941a0891b989f5ad86b739cfb12c84b7e62e8ee4/src/media_tools/dash_client.c#L3405) and is never freed.
--- end body ---
| 611 | [
-0.04429485276341438,
0.010849548503756523,
-0.018530908972024918,
-0.009198205545544624,
0.0023219678550958633,
0.005159513093531132,
-0.00806991197168827,
0.05553295090794563,
-0.03239920362830162,
0.0030486334580928087,
-0.0027534838300198317,
0.00806991197168827,
-0.005170721560716629,
... | 1,178 |
CVE-2022-42919 | 2022-11-07T00:15:09.697 | Python 3.9.x before 3.9.16 and 3.10.x before 3.10.9 on Linux allows local privilege escalation in a non-default configuration. The Python multiprocessing library, when used with the forkserver start method on Linux, allows pickles to be deserialized from any user in the same machine local network namespace, which in many system configurations means any user on the same machine. Pickles can execute arbitrary code. Thus, this allows for local user privilege escalation to the user that any forkserver process is running as. Setting multiprocessing.util.abstract_sockets_supported to False is a workaround. The forkserver start method for multiprocessing is not the default start method. This issue is Linux specific because only Linux supports abstract namespace sockets. CPython before 3.9 does not make use of Linux abstract namespace sockets by default. Support for users manually specifying an abstract namespace socket was added as a bugfix in 3.7.8 and 3.8.3, but users would need to make specific uncommon API calls in order to do that in CPython before 3.9. | {
"cvssMetricV2": null,
"cvssMetricV30": null,
"cvssMetricV31": [
{
"cvssData": {
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"availabilityImpact": "HIGH",
"baseScore": 7.8,
"baseSeverity": "HIGH",
"confidentialityImpact": "HIGH",
"integrityImpact": "HIGH",
"privilegesRequired": "LOW",
"scope": "UNCHANGED",
"userInteraction": "NONE",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
"version": "3.1"
},
"exploitabilityScore": 1.8,
"impactScore": 5.9,
"source": "nvd@nist.gov",
"type": "Primary"
}
]
} | [
{
"source": "cve@mitre.org",
"tags": [
"Release Notes"
],
"url": "https://github.com/python/cpython/compare/v3.10.8...v3.10.9"
},
{
"source": "cve@mitre.org",
"tags": [
"Release Notes"
],
"url": "https://github.com/python/cpython/compare/v3.9.15...v3.9.16"
},
{
... | [
{
"nodes": [
{
"cpeMatch": [
{
"criteria": "cpe:2.3:a:python:python:*:*:*:*:*:*:*:*",
"matchCriteriaId": "D90DBA72-F261-408A-B8DD-5D2DD7E5985F",
"versionEndExcluding": null,
"versionEndIncluding": "3.7.15",
"versionStartExcludin... | https://github.com/python/cpython/issues/97514 | [
"Third Party Advisory"
] | github.com | [
"python",
"cpython"
] | ## TL;DR
Python 3.9, 3.10, and 3.11.0rc2 on Linux may allow for a local privilege escalation attack in a non-default configuration when code uses the `multiprocessing` module and configures `multiprocessing` to use the *forkserver* start method.
## Details
The Python `multiprocessing` library, when used with the *forkserver* start method on Linux, allows Python pickles to be deserialized from any user in the same machine local network namespace, which in many system configurations means any user on the same machine. Pickles can execute arbitrary code. Thus, this allows for local user privilege escalation to the user that any Python multiprocessing *forkserver* process is running as.
The forkserver start method for multiprocessing is not the default start method. This issue is Linux specific because only Linux supports abstract namespace sockets.
CPython before 3.9 does not make use of Linux abstract namespace sockets by default.
This issue has been assigned [CVE-2022-42919](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42919).
Credit: This issue was discovered by Devin Jeanpierre (@ssbr) of Google.
### Are Python 3.7 and 3.8 affected?
Not by default.
Support for users manually specifying an abstract namespace AF_UNIX socket was added [as a bugfix](https://github.com/python/cpython/issues/84031) in 3.7.8 and 3.8.3, but users would need to make specific uncommon `multiprocessing` API calls specifying their own *forkserver* control socket path in order to do that in CPython before 3.9.
### What about code that explicitly asks for an abstract socket?
Applications found to be making the uncommon `multiprocessing` API calls to explicitly use Linux abstract namespace sockets with a *forkserver* are believed to be rare and should have their own specific security issues filed.
## Workarounds
### From Python application or library code:
```python
import multiprocessing.util
multiprocessing.util.abstract_sockets_supported = False
```
This disables their use by default. You must execute that before anything else in your process has started making use of multiprocessing.
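A quick way to sanity-check that the flag took effect is to ask for a fresh listener address. Note that `multiprocessing.connection.arbitrary_address` is the internal, undocumented helper that generates these addresses (the one whose abstract-socket branch is shown in the patch below), so this sketch is for verification only:

```python
import multiprocessing.util as util
from multiprocessing import connection

# Advisory workaround: must run before multiprocessing hands out any
# listener addresses (e.g. before a forkserver is started).
util.abstract_sockets_supported = False

# With the flag cleared, AF_UNIX listener addresses fall back to a
# filesystem path instead of a "\0"-prefixed abstract socket name.
addr = connection.arbitrary_address("AF_UNIX")
print(addr.startswith("\0"))  # → False
```

A filesystem-path socket is protected by ordinary file permissions, which is what removes the same-machine exposure described above.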
### If you can patch your CPython runtime itself:
Remove these two lines from CPython's [`Lib/multiprocessing/connection.py`](https://github.com/python/cpython/blob/1699128c4891da3bbe23553d709261d88855b93f/Lib/multiprocessing/connection.py#L79-L80):
```diff
- if util.abstract_sockets_supported:
- return f"\0listener-{os.getpid()}-{next(_mmap_counter)}"
```
_(that is what our security bug fix commits do)._
Or, similar to the application level fix, edit [`Lib/multiprocessing/util.py`](https://github.com/python/cpython/blob/1699128c4891da3bbe23553d709261d88855b93f/Lib/multiprocessing/util.py#L126) to always set:
```diff
- abstract_sockets_supported = _platform_supports_abstract_sockets()
+ abstract_sockets_supported = False
```
### Alternatives to avoid the problem
If your Linux Python application can be switched from multiprocessing's `.set_start_method("forkserver")` to a start method such as `"spawn"`, that will also avoid this issue.
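The switch can be as small as requesting a different context object. A minimal sketch (the `work` function and pool size are illustrative, not part of the advisory):

```python
import multiprocessing as mp

def work(x):
    return x * x

if __name__ == "__main__":
    # "spawn" starts each worker from a fresh interpreter and keeps no
    # long-lived forkserver control socket around at all.
    ctx = mp.get_context("spawn")
    print(ctx.get_start_method())  # → spawn
    with ctx.Pool(processes=2) as pool:
        print(pool.map(work, [1, 2, 3]))  # → [1, 4, 9]
```

Using `get_context` scopes the choice to this pool, whereas `set_start_method` changes the process-wide default; either form avoids the forkserver path.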
## Scope of the bug fixes
We are changing the default in Python 3.9 and higher to not use the Linux abstract namespace sockets by default.
It would be ideal to add authentication to the *forkserver* control socket so that it isn't even relying on filesystem permissions. This is a more complicated change and is expected to be done as a feature in 3.12.
### Tasks
- [x] Cherry pick the 3.11 commit to the 3.11.0 release. https://github.com/python/cpython/commit/4686d77a04570a663164c03193d9def23c89b122
- [x] Merge the 3.9 PR. https://github.com/python/cpython/pull/98504
- [x] After 3.11.0 is out, make sure 3.11.1 won't have a duplicate news entry about this due to the branch vs 3.11.0 release branch
- [ ] Push @gpshead 's PR(s) for proper forkserver socket authentication in 3.12.
<!-- gh-pr-number: gh-99309 -->
### Linked PRs
* PR: gh-99309
<!-- /gh-pr-number -->
| Linux specific local privilege escalation via the multiprocessing forkserver start method - CVE-2022-42919 | https://api.github.com/repos/python/cpython/issues/97514/comments | 13 | 2022-09-23T19:24:04 | 2023-06-27T13:19:22Z | https://github.com/python/cpython/issues/97514 | 1,384,215,836 | 97,514 | true | This is a GitHub Issue
repo:cpython
owner:python
Title : Linux specific local privilege escalation via the multiprocessing forkserver start method - CVE-2022-42919
Issue date:
--- start body ---
## TL;DR
Python 3.9, 3.10, and 3.11.0rc2 on Linux may allow for a local privilege escalation attack in a non-default configuration when code uses the `multiprocessing` module and configures `multiprocessing` to use the *forkserver* start method.
## Details
The Python `multiprocessing` library, when used with the *forkserver* start method on Linux, allows Python pickles to be deserialized from any user in the same machine local network namespace, which in many system configurations means any user on the same machine. Pickles can execute arbitrary code. Thus, this allows for local user privilege escalation to the user that any Python multiprocessing *forkserver* process is running as.
The forkserver start method for multiprocessing is not the default start method. This issue is Linux specific because only Linux supports abstract namespace sockets.
CPython before 3.9 does not make use of Linux abstract namespace sockets by default.
This issue has been assigned [CVE-2022-42919](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42919).
Credit: This issue was discovered by Devin Jeanpierre (@ssbr) of Google.
### Are Python 3.7 and 3.8 affected?
Not by default.
Support for users manually specifying an abstract namespace AF_UNIX socket was added [as a bugfix](https://github.com/python/cpython/issues/84031) in 3.7.8 and 3.8.3, but users would need to make specific uncommon `multiprocessing` API calls specifying their own *forkserver* control socket path in order to do that in CPython before 3.9.
### What about code that explicitly asks for an abstract socket?
Applications found to be making the uncommon `multiprocessing` API calls to explicitly use Linux abstract namespace sockets with a *forkserver* are believed to be rare and should have their own specific security issues filed.
## Workarounds
### From Python application or library code:
```python
import multiprocessing.util
multiprocessing.util.abstract_sockets_supported = False
```
This disables their use by default. You must execute that before anything else in your process has started making use of multiprocessing.
### If you can patch your CPython runtime itself:
Remove these two lines from CPython's [`Lib/multiprocessing/connection.py`](https://github.com/python/cpython/blob/1699128c4891da3bbe23553d709261d88855b93f/Lib/multiprocessing/connection.py#L79-L80):
```diff
- if util.abstract_sockets_supported:
- return f"\0listener-{os.getpid()}-{next(_mmap_counter)}"
```
_(that is what our security bug fix commits do)._
Or, similar to the application level fix, edit [`Lib/multiprocessing/util.py`](https://github.com/python/cpython/blob/1699128c4891da3bbe23553d709261d88855b93f/Lib/multiprocessing/util.py#L126) to always set:
```diff
- abstract_sockets_supported = _platform_supports_abstract_sockets()
+ abstract_sockets_supported = False
```
### Alternatives to avoid the problem
If your Linux Python application can be switched from multiprocessing's `.set_start_method("forkserver")` to a start method such as `"spawn"`, that will also avoid this issue.
## Scope of the bug fixes
We are changing the default in Python 3.9 and higher to not use the Linux abstract namespace sockets by default.
It would be ideal to add authentication to the *forkserver* control socket so that it isn't even relying on filesystem permissions. This is a more complicated change and is expected to be done as a feature in 3.12.
### Tasks
- [x] Cherry pick the 3.11 commit to the 3.11.0 release. https://github.com/python/cpython/commit/4686d77a04570a663164c03193d9def23c89b122
- [x] Merge the 3.9 PR. https://github.com/python/cpython/pull/98504
- [x] After 3.11.0 is out, make sure 3.11.1 won't have a duplicate news entry about this due to the branch vs 3.11.0 release branch
- [ ] Push @gpshead 's PR(s) for proper forkserver socket authentication in 3.12.
<!-- gh-pr-number: gh-99309 -->
### Linked PRs
* PR: gh-99309
<!-- /gh-pr-number -->
--- end body ---
CVE-2021-32137 | 2021-09-13T14:15:09.640 | Heap buffer overflow in the URL_GetProtocolType function in MP4Box in GPAC 1.0.1 allows attackers to cause a denial of service or execute arbitrary code via a crafted file. | {
"cvssMetricV2": [
{
"acInsufInfo": false,
"baseSeverity": "MEDIUM",
"cvssData": {
"accessComplexity": "MEDIUM",
"accessVector": "NETWORK",
"authentication": "NONE",
"availabilityImpact": "PARTIAL",
"baseScore": 4.3,
"confidentialityImpact": "NONE",
"integrityImpact": "NONE",
"vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P",
"version": "2.0"
},
"exploitabilityScore": 8.6,
"impactScore": 2.9,
"obtainAllPrivilege": false,
"obtainOtherPrivilege": false,
"obtainUserPrivilege": false,
"source": "nvd@nist.gov",
"type": "Primary",
"userInteractionRequired": true
}
],
"cvssMetricV30": null,
"cvssMetricV31": [
{
"cvssData": {
"attackComplexity": "LOW",
"attackVector": "LOCAL",
"availabilityImpact": "HIGH",
"baseScore": 5.5,
"baseSeverity": "MEDIUM",
"confidentialityImpact": "NONE",
"integrityImpact": "NONE",
"privilegesRequired": "NONE",
"scope": "UNCHANGED",
"userInteraction": "REQUIRED",
"vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:H",
"version": "3.1"
},
"exploitabilityScore": 1.8,
"impactScore": 3.6,
"source": "nvd@nist.gov",
"type": "Primary"
}
]
} | [
{
"source": "cve@mitre.org",
"tags": [
"Patch",
"Third Party Advisory"
],
"url": "https://github.com/gpac/gpac/commit/328def7d3b93847d64ecb6e9e0399684e57c3eca"
},
{
"source": "cve@mitre.org",
"tags": [
"Exploit",
"Third Party Advisory"
],
"url": "https://g... | [
{
"nodes": [
{
"cpeMatch": [
{
"criteria": "cpe:2.3:a:gpac:gpac:1.0.1:*:*:*:*:*:*:*",
"matchCriteriaId": "82DD2D40-0A05-48FD-940D-32B4D8B51AB3",
"versionEndExcluding": null,
"versionEndIncluding": null,
"versionStartExcluding": ... | https://github.com/gpac/gpac/issues/1766 | [
"Exploit",
"Third Party Advisory"
] | github.com | [
"gpac",
"gpac"
] | null | [security]heap buffer overflow in MP4Box URL_GetProtocolType | https://api.github.com/repos/gpac/gpac/issues/1766/comments | 0 | 2021-04-30T00:43:52 | 2023-09-22T06:09:59Z | https://github.com/gpac/gpac/issues/1766 | 871,726,037 | 1,766 | true |
null | null | null | null | null | null | null | null | null | [
"ImageMagick",
"ImageMagick"
] | ### ImageMagick version
7.1.1
### Operating system
Windows
### Operating system, version and so on
Windows 11
### Description
& "magick.exe" -size 1400x600 -font "Comfortaa-Medium.ttf" -gravity center -fill black caption:"CONSPIRACY THEORY WITH JESSE VENTURA" -format "%[caption:pointsize]" info:
returns a size of 161
Which then when applied to the poster with:
"magick.exe" "Conspiracy Theory with Jesse Ventura (2009) _imdb-tt1572498_.jpg" -gravity center -background None -layers Flatten ( -font "Comfortaa-Medium.ttf" -pointsize "161" -fill "white" -size "1400x600" -background none caption:"CONSPIRACY THEORY WITH JESSE VENTURA" -trim -gravity south -extent "1400x600" ) -gravity south -geometry "+0+400" -quality 92% -composite "Conspiracy Theory with Jesse Ventura (2009) _imdb-tt1572498_-out.jpg"
gives you:

Here are some other values when we just adjust text string slightly and see how the value changed significantly:

Also adjusting the box by 1 or 2 pixels will also fix the issue.
As we are creating thousands of text overlay images, we encounter this problem. There is also a similar issue whereby the returned value is way too small (47) for when we want to perform the same caption command for the following text: "THE LORD OF THE RINGS: THE FELLOWSHIP OF THE RING". It should be around 106
& "magick" -size 1200x485 -font "Comfortaa-Medium.ttf" -gravity center -fill black caption:"THE LORD OF THE RINGS: THE FELLOWSHIP OF THE RING" -format "%[caption:pointsize]" info:

[Comfortaa-Medium.zip](https://github.com/ImageMagick/ImageMagick/files/15269633/Comfortaa-Medium.zip)
### Steps to Reproduce
Steps outlined above
### Images
images and font shown above | Using caption: to perform a "best fit" is not always working and causes text truncation | https://api.github.com/repos/ImageMagick/ImageMagick/issues/7304/comments | 4 | 2024-05-10T02:29:28 | 2024-05-12T22:11:05Z | https://github.com/ImageMagick/ImageMagick/issues/7304 | 2,288,795,584 | 7,304 | false |
null | null | null | null | null | null | null | null | null | [
"gpac",
"gpac"
] | Thanks for reporting your issue. Please make sure these boxes are checked before submitting your issue - thank you!
- [ x] I looked for a similar issue and couldn't find any.
- [ x] I tried with the latest version of GPAC. Installers available at http://gpac.io/downloads/gpac-nightly-builds/
- [ x] I give enough information for contributors to reproduce my issue (meaningful title, github labels, platform and compiler, command-line ...). I can share files anonymously with this dropbox: https://www.mediafire.com/filedrop/filedrop_hosted.php?drop=eec9e058a9486fe4e99c33021481d9e1826ca9dbc242a6cfaab0fe95da5e5d95
Detailed guidelines: http://gpac.io/2013/07/16/how-to-file-a-bug-properly/
I'm creating MPEG-DASH-SRD content using following guide.
https://github.com/gpac/gpac/wiki/HEVC-Tile-based-adaptation-guide
I have different versions available in DASH content and I'm creating a final video by selecting appropriate quality for particular tile by concatenating the tracks on the base.
MP4Client is playing the final video properly as per my requirement.
But codec information is missing in the metadata.
As per steps in the above mentioned guide codec information is present in the hvc file.
After performing following step codec information is not added in the mp4 video.
MP4Box -add video_tiled.hvc:split_tiles -new video_tiled.mp4
Do I need to follow some other steps which are not mentioned in the guide during the packaging of the video? | Codec information missing after packaging of HEVC video | DASH | 360 video | https://api.github.com/repos/gpac/gpac/issues/1683/comments | 33 | 2021-01-26T11:29:01 | 2021-03-25T06:18:57Z | https://github.com/gpac/gpac/issues/1683 | 794,156,530 | 1,683 | false |
null | null | null | null | null | null | null | null | null | [
"gpac",
"gpac"
] | Thanks for reporting your issue. Please make sure these boxes are checked before submitting your issue - thank you!
- [x] I looked for a similar issue and couldn't find any.
- [x] I tried with the latest version of GPAC. Installers available at https://gpac.io/downloads/gpac-nightly-builds/
- [x] I give enough information for contributors to reproduce my issue (meaningful title, github labels, platform and compiler, command-line ...). I can share files anonymously with this dropbox: https://www.mediafire.com/filedrop/filedrop_hosted.php?drop=eec9e058a9486fe4e99c33021481d9e1826ca9dbc242a6cfaab0fe95da5e5d95
Detailed guidelines: https://gpac.io/bug-reporting/
I tried to find a solution by following the code flow, but failed.
I am making segment files now.
and I want to allocate sample_duration value in trun box.
refer to below picture
<img width="1840" alt="trun----whatpng" src="https://github.com/gpac/gpac/assets/80387186/14c9f4cf-4cb5-4c9c-acd7-a4b7a8056c8a">
I tried below code
```
// First, I do not use default value using force_traf_flags – if GF_TRUE, will ignore these default in each traf but will still write them in moov
if ((err = gf_isom_setup_track_fragment(file_ptr, track_id, sample_description_index, 0, 0, 0, 0, 0 , GF_TRUE)) != GF_OK)
{
fprintf(stderr, "Err Raised : gf_isom_setup_track_fragment: %d\n", err);
return -1;
}
// Second , add sample_duration in this function. I think that 1024 is trun sample_duration value
if ((err = gf_isom_fragment_add_sample(file_ptr, track_id, &iso_sample, sample_description_index, 1024, 0, 0, GF_FALSE)) != GF_OK)
{
fprintf(stderr, "Err raised : gf_isom_fragment_add_smple :%d\n", err);
return -1;
}
```
In result, If every sample has same sample_duration, tfdt default_sample_duration is added.
But when I use `gf_isom_fragment_add_sample` this function using different Duration argument (not fixed 1024)
sample_duration of trun box is added.
https://github.com/gpac/gpac/blob/50c5c207595bb22f7f120572680971f0aae3c5fc/src/isomedia/movie_fragments.c#L638C7-L638C7
Result of looking at the code above
If first trun sample_duration and second sample_duration values are diffrent,
The code below works
```
if (!RunDur) trun->flags |= GF_ISOM_TRUN_DURATION;
```
https://github.com/gpac/gpac/blob/50c5c207595bb22f7f120572680971f0aae3c5fc/src/isomedia/movie_fragments.c#L678
I am curious as to why the trun sample_duration flag does not work when the trun sample_duration is the same.
| How to allocate sample_duration value in trun box ?? | https://api.github.com/repos/gpac/gpac/issues/2731/comments | 4 | 2024-01-17T01:24:38 | 2024-01-18T07:53:10Z | https://github.com/gpac/gpac/issues/2731 | 2,085,224,704 | 2,731 | false |
null | null | null | null | null | null | null | null | null | [
"jeecgboot",
"jeecg-boot"
] | ##### Version: 3.6.3
##### Branch: master
##### Frontend version: vue3
##### Problem description: An Online report is configured with the SQL `SELECT user_org, COUNT( id ) as count FROM user`. The `user_org` field uses a category dictionary, and the field is configured with the SQL dictionary statement `SELECT code AS value, name AS text FROM sys_category WHERE code like 'C03%'`. The data on the second page is not translated.
##### Screenshots & code:


#### Friendly reminders (to improve issue-handling efficiency):
- Posts that do not follow the required format or whose description is too brief or abstract will be deleted directly;
- Please judge for yourself whether the problem description is clear and convenient for us to investigate and handle;
- Please state whether the problem concerns the Online feature (specify the theme template used) or the generated code;
- The springboot3_sas branch uses `Spring Authorization Server` to replace `Shiro`; it is currently an unstable beta, so do not use it in production projects;
| Online report: SQL dictionary translation not applied to the second page of data | https://api.github.com/repos/jeecgboot/JeecgBoot/issues/6242/comments | 2 | 2024-05-22T08:28:45 | 2024-06-05T01:22:45Z | https://github.com/jeecgboot/JeecgBoot/issues/6242 | 2,309,908,496 | 6,242 | false |
null | null | null | null | null | null | null | null | null | [
"python",
"cpython"
] | The internals documentation is scattered in markdown files in the codebase, as well as parts which are in the dev guide. We would like to have it in one place, versioned along with the code (unlike the dev guide).
<!-- gh-linked-prs -->
### Linked PRs
* gh-119787
* gh-119815
* gh-120077
* gh-120137
* gh-120134
* gh-120445
* gh-121009
* gh-121601
<!-- /gh-linked-prs -->
| create an internals documentation folder in the cpython repo | https://api.github.com/repos/python/cpython/issues/119786/comments | 1 | 2024-05-30T15:35:00 | 2024-07-10T21:33:30Z | https://github.com/python/cpython/issues/119786 | 2,325,976,342 | 119,786 | false |
null | null | null | null | null | null | null | null | null | [
"gpac",
"gpac"
] | Dasher - Add multiple subtitles, then do not create manifest file. - thank you!
- [*] I looked for a similar issue and couldn't find any.
- [*] I tried with the latest version of GPAC. Installers available at http://gpac.io/downloads/gpac-nightly-builds/
> gpac -i video.mp4 -i subtitle1.vtt -o out.mpd
--> create manifest file.
> gpac -i video.mp4 -i subtitle1.vtt -i subtitle1.vtt -o out.mpd
--> DO NOT create manifest file. | Dasher - Add multiple subtitle, then do not create manifest file. | https://api.github.com/repos/gpac/gpac/issues/1951/comments | 4 | 2021-11-29T23:58:59 | 2021-12-07T01:32:47Z | https://github.com/gpac/gpac/issues/1951 | 1,066,625,110 | 1,951 | false |
null | null | null | null | null | null | null | null | null | [
"axiomatic-systems",
"Bento4"
] | I only need to use mp4decrypt but I'm getting this error. I'm on Windows 10 32Bit......I'm guessing it's the 32 bit that's causing the problem?
Which is confusing since the zip file name has x86 and win32 in its file name. | "not compatible with this version of windows" | https://api.github.com/repos/axiomatic-systems/Bento4/issues/711/comments | 0 | 2022-05-24T11:04:19 | 2022-05-24T11:04:19Z | https://github.com/axiomatic-systems/Bento4/issues/711 | 1,246,373,912 | 711 | false |