Dataset column summary (column name, dtype, and value range or distinct-value count):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 – 19 |
| repo | stringlengths | 5 – 112 |
| repo_url | stringlengths | 34 – 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 1 – 757 |
| labels | stringlengths | 4 – 664 |
| body | stringlengths | 3 – 261k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | 96 – 261k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 – 232k |
| binary_label | int64 | 0 – 1 |
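Judging from the sample records, `label` (2 values: `defect` / `non_defect`) and `binary_label` (0/1) encode the same target, with `defect` mapping to 1 and `non_defect` to 0. A minimal pandas sketch of that derivation, using two toy rows shaped like the records below (the row values are illustrative, not taken from the real data):

```python
import pandas as pd

# Two hypothetical rows mimicking the dataset's schema
# (only the columns needed for the mapping are included).
df = pd.DataFrame({
    "title": ["hanging at client side pncounter get",
              "Request - Add Training doc link"],
    "label": ["defect", "non_defect"],
})

# Derive binary_label from label: defect -> 1, non_defect -> 0,
# consistent with the label/binary_label pairs in the samples.
df["binary_label"] = (df["label"] == "defect").astype(int)

print(df["binary_label"].tolist())  # [1, 0]
```

This assumes the mapping holds across the whole dataset; the samples shown here are consistent with it, but the full data would need to be checked.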

---
Unnamed: 0: 259,103
id: 19,586,090,518
type: IssuesEvent
created_at: 2022-01-05 07:06:38
repo: GeniiDevCore/DevWebApps_Training
repo_url: https://api.github.com/repos/GeniiDevCore/DevWebApps_Training
action: opened
title: Request - Add Training doc link - QA Export Page
labels: documentation enhancement Maintanance
body: Please add an info button in position suggested in yellow. linked to _to be updated once training docs are moved to private repository_ ![image](https://user-images.githubusercontent.com/45848960/148174724-655debed-80a6-4c90-9c38-eb6af3ba4f1c.png)
index: 1.0
text_combine: Request - Add Training doc link - QA Export Page - Please add an info button in position suggested in yellow. linked to _to be updated once training docs are moved to private repository_ ![image](https://user-images.githubusercontent.com/45848960/148174724-655debed-80a6-4c90-9c38-eb6af3ba4f1c.png)
label: non_defect
text: request add training doc link qa export page please add an info button in position suggested in yellow linked to to be updated once training docs are moved to private repository
binary_label: 0

---
Unnamed: 0: 2,953
id: 5,135,391,753
type: IssuesEvent
created_at: 2017-01-11 12:11:36
repo: tbeu/Modelica
repo_url: https://api.github.com/repos/tbeu/Modelica
action: closed
title: Synchronise version numbering of `ModelicaServices` and `Complex` with MSL
labels: enhancement L: ModelicaServices P: high
body: **Modified by dietmarw on 10 Sep 2012 15:53 UTC** In order to avoid any confusion the packages * `ModelicaServices` * `Complex` should get the same version number as the MSL upon release, e.g., `version="3.2.1"`. That way it is far easier to keep track of which version was shipped with which. At the same time tool vendors can apply their own sub-versioning of the versions shipped with the MSL in order to make customisations. One way would be calling those versions (following the http://semver.org scheme) then `version="3.2.1+<VendorName>.<number>"` But that is just a suggestion anyway. ---- **Reported by dietmarw on 10 Sep 2012 15:47 UTC** In order to avoid any confusion the packages * `ModelicaServices` * `Complex` should get the same version number as the MSL upon release, e.g., `version=3.2.1`. That way it is far easier to keep track of which version was shipped with which. At the same time tool vendors can apply their own sub-versioning of the versions shipped with the MSL in order to make customisations. One way would be calling those versions (following the http://semver.org scheme) then `version=3.2.1+<VendorName>.<number>` But that is just a suggestion anyway. ---- Migrated-From: https://trac.modelica.org/Modelica/ticket/811
index: 1.0
text_combine: Synchronise version numbering of `ModelicaServices` and `Complex` with MSL - **Modified by dietmarw on 10 Sep 2012 15:53 UTC** In order to avoid any confusion the packages * `ModelicaServices` * `Complex` should get the same version number as the MSL upon release, e.g., `version="3.2.1"`. That way it is far easier to keep track of which version was shipped with which. At the same time tool vendors can apply their own sub-versioning of the versions shipped with the MSL in order to make customisations. One way would be calling those versions (following the http://semver.org scheme) then `version="3.2.1+<VendorName>.<number>"` But that is just a suggestion anyway. ---- **Reported by dietmarw on 10 Sep 2012 15:47 UTC** In order to avoid any confusion the packages * `ModelicaServices` * `Complex` should get the same version number as the MSL upon release, e.g., `version=3.2.1`. That way it is far easier to keep track of which version was shipped with which. At the same time tool vendors can apply their own sub-versioning of the versions shipped with the MSL in order to make customisations. One way would be calling those versions (following the http://semver.org scheme) then `version=3.2.1+<VendorName>.<number>` But that is just a suggestion anyway. ---- Migrated-From: https://trac.modelica.org/Modelica/ticket/811
label: non_defect
text: synchronise version numbering of modelicaservices and complex with msl modified by dietmarw on sep utc in order to avoid any confusion the packages modelicaservices complex should get the same version number as the msl upon release e g version that way it is far easier to keep track of which version was shipped with which at the same time tool vendors can apply their own sub versioning of the versions shipped with the msl in order to make customisations one way would be calling those versions following the scheme then version but that is just a suggestion anyway reported by dietmarw on sep utc in order to avoid any confusion the packages modelicaservices complex should get the same version number as the msl upon release e g version that way it is far easier to keep track of which version was shipped with which at the same time tool vendors can apply their own sub versioning of the versions shipped with the msl in order to make customisations one way would be calling those versions following the scheme then version but that is just a suggestion anyway migrated from
binary_label: 0

---
Unnamed: 0: 186,620
id: 6,741,344,832
type: IssuesEvent
created_at: 2017-10-20 00:06:28
repo: zulip/zulip
repo_url: https://api.github.com/repos/zulip/zulip
action: closed
title: Simplify "code style" guide where we could be using the linter instead
labels: area: tooling enhancement in progress priority: medium
body: I read our code style guidelines (docs/code-style.md) for the first time in a while recently, and determined that most of it is obsolete with the linter and then deleted; that's a much better way to manage this stuff. To starthese items from our style guide could easily be converted to lint rules: 1. Don't use the `style=` attribute in HTML. Instead, define logical classes and put your styles in external files such as `zulip.css`. 2. Don't use inline event handlers (`onclick=`, etc. attributes) in HTML. Instead, attach a jQuery event handler (`$('#foo').on('click', function () {...})`) when the DOM is ready (inside a `$(function () {...})` block). 3. Use $(function () { ... rather than $(document).ready(function () { ... (Just lint for document.ready) 4. Scripts should start with `#!/usr/bin/env python3` and not `#/usr/bin/python` (the right Python may not be installed in `/usr/bin`) or `#/usr/bin/env python` (we require Python 3 compatibility). Don't put a shebang line on a Python file unless it's meaningful to run it as a script. (Some libraries can also be run as scripts, e.g. to run a test suite.) (For this, I think a good strategy would be to ban shebang lines in `.py` files, and to add a lint section for files without an extension, and require that those shebang lines follow these rules). 5. Scripts should be executed directly (`./script.py`), so that the interpreter is implicitly found from the shebang line, rather than explicitly overridden (`python script.py`). This is hard to lint for, but we can at least ban hardcoded paths of the form /srv/zulip/... 6. When selecting by id, don't use `foo.pk` when you mean `foo.id`. This is easy to do with linting for `[.]pk`, we've got like a dozen violations we can fix. 7. Probably most of the whitespace stuff can be deleted, but requires some testing. @derAnfaenger this is probably a good issue for you.
index: 1.0
text_combine: Simplify "code style" guide where we could be using the linter instead - I read our code style guidelines (docs/code-style.md) for the first time in a while recently, and determined that most of it is obsolete with the linter and then deleted; that's a much better way to manage this stuff. To starthese items from our style guide could easily be converted to lint rules: 1. Don't use the `style=` attribute in HTML. Instead, define logical classes and put your styles in external files such as `zulip.css`. 2. Don't use inline event handlers (`onclick=`, etc. attributes) in HTML. Instead, attach a jQuery event handler (`$('#foo').on('click', function () {...})`) when the DOM is ready (inside a `$(function () {...})` block). 3. Use $(function () { ... rather than $(document).ready(function () { ... (Just lint for document.ready) 4. Scripts should start with `#!/usr/bin/env python3` and not `#/usr/bin/python` (the right Python may not be installed in `/usr/bin`) or `#/usr/bin/env python` (we require Python 3 compatibility). Don't put a shebang line on a Python file unless it's meaningful to run it as a script. (Some libraries can also be run as scripts, e.g. to run a test suite.) (For this, I think a good strategy would be to ban shebang lines in `.py` files, and to add a lint section for files without an extension, and require that those shebang lines follow these rules). 5. Scripts should be executed directly (`./script.py`), so that the interpreter is implicitly found from the shebang line, rather than explicitly overridden (`python script.py`). This is hard to lint for, but we can at least ban hardcoded paths of the form /srv/zulip/... 6. When selecting by id, don't use `foo.pk` when you mean `foo.id`. This is easy to do with linting for `[.]pk`, we've got like a dozen violations we can fix. 7. Probably most of the whitespace stuff can be deleted, but requires some testing. @derAnfaenger this is probably a good issue for you.
label: non_defect
text: simplify code style guide where we could be using the linter instead i read our code style guidelines docs code style md for the first time in a while recently and determined that most of it is obsolete with the linter and then deleted that s a much better way to manage this stuff to starthese items from our style guide could easily be converted to lint rules don t use the style attribute in html instead define logical classes and put your styles in external files such as zulip css don t use inline event handlers onclick etc attributes in html instead attach a jquery event handler foo on click function when the dom is ready inside a function block use function rather than document ready function just lint for document ready scripts should start with usr bin env and not usr bin python the right python may not be installed in usr bin or usr bin env python we require python compatibility don t put a shebang line on a python file unless it s meaningful to run it as a script some libraries can also be run as scripts e g to run a test suite for this i think a good strategy would be to ban shebang lines in py files and to add a lint section for files without an extension and require that those shebang lines follow these rules scripts should be executed directly script py so that the interpreter is implicitly found from the shebang line rather than explicitly overridden python script py this is hard to lint for but we can at least ban hardcoded paths of the form srv zulip when selecting by id don t use foo pk when you mean foo id this is easy to do with linting for pk we ve got like a dozen violations we can fix probably most of the whitespace stuff can be deleted but requires some testing deranfaenger this is probably a good issue for you
binary_label: 0

---
Unnamed: 0: 6,666
id: 23,682,970,319
type: IssuesEvent
created_at: 2022-08-29 01:35:25
repo: tm24fan8/Home-Assistant-Configs
repo_url: https://api.github.com/repos/tm24fan8/Home-Assistant-Configs
action: closed
title: Fix living room media scenes
labels: bug monitoring lighting multimedia automation
body: Need to investigate...used to work, now doesn't...I've changed nothing in between.
index: 1.0
text_combine: Fix living room media scenes - Need to investigate...used to work, now doesn't...I've changed nothing in between.
label: non_defect
text: fix living room media scenes need to investigate used to work now doesn t i ve changed nothing in between
binary_label: 0

---
Unnamed: 0: 44,047
id: 11,937,066,429
type: IssuesEvent
created_at: 2020-04-02 11:29:08
repo: hazelcast/hazelcast
repo_url: https://api.github.com/repos/hazelcast/hazelcast
action: closed
title: hanging at client side pncounter get after cluster members kill restart
labels: Team: Client Team: Core Type: Defect
body: these 3 test run on 4.0 pass every time http://jenkins.hazelcast.com/view/kill/job/kill-x2/37/console and the same 3 test run on 4.0.1-SANPSHOT fail hang every time http://jenkins.hazelcast.com/view/kill/job/kill-x4/9/console /disk1/jenkins/workspace/kill-x4/4.0.1-SNAPSHOT/2020_04_01-05_00_23/pn-counter/ http://54.147.27.51/~jenkins/workspace/kill-x4/4.0.1-SNAPSHOT/2020_04_01-05_00_23/pn-counter/ find . -name hangers.txt | xargs cat ./output/HZ/HzClient3HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient4HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient2HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient1HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient3HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient4HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient2HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient1HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient3HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient4HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient2HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient1HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) the client hangs at Validate.java:13 hzInstance.getCPSubsystem().getAtomicLong(name + "" + i).set( getCounter(i).get() ); PNCounter.get()
index: 1.0
text_combine: hanging at client side pncounter get after cluster members kill restart - these 3 test run on 4.0 pass every time http://jenkins.hazelcast.com/view/kill/job/kill-x2/37/console and the same 3 test run on 4.0.1-SANPSHOT fail hang every time http://jenkins.hazelcast.com/view/kill/job/kill-x4/9/console /disk1/jenkins/workspace/kill-x4/4.0.1-SNAPSHOT/2020_04_01-05_00_23/pn-counter/ http://54.147.27.51/~jenkins/workspace/kill-x4/4.0.1-SNAPSHOT/2020_04_01-05_00_23/pn-counter/ find . -name hangers.txt | xargs cat ./output/HZ/HzClient3HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient4HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient2HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient1HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient3HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient4HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient2HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient1HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient3HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient4HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient2HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) ./output/HZ/HzClient1HZ/out.txt: at hzcmd.pncounter.Validate.init(Validate.java:13) the client hangs at Validate.java:13 hzInstance.getCPSubsystem().getAtomicLong(name + "" + i).set( getCounter(i).get() ); PNCounter.get()
label: defect
text: hanging at client side pncounter get after cluster members kill restart these test run on pass every time and the same test run on sanpshot fail hang every time jenkins workspace kill snapshot pn counter find name hangers txt xargs cat output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java output hz out txt at hzcmd pncounter validate init validate java the client hangs at validate java hzinstance getcpsubsystem getatomiclong name i set getcounter i get pncounter get
binary_label: 1

---
Unnamed: 0: 340,519
id: 30,522,703,440
type: IssuesEvent
created_at: 2023-07-19 09:09:01
repo: ClickHouse/ClickHouse
repo_url: https://api.github.com/repos/ClickHouse/ClickHouse
action: opened
title: SIGILL in fuzzer with TSAN
labels: testing
body: Fuzzer with TSAN sometimes gets `SIGILL` because of a failed check: ``` CHECK failed: sanitizer_stack_store.cpp:80 "((block_idx)) < (((sizeof(blocks_)/sizeof((blocks_)[0]))))" (0x1000, 0x1000) (tid=184) ``` Complete stacktrace: ``` #0 0x00005583f3a1b7f6 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) () #1 0x00005583f3a20de0 in __sanitizer::StackStore::Alloc(unsigned long, unsigned long*, unsigned long*) () #2 0x00005583f3a20c54 in __sanitizer::StackStore::Store(__sanitizer::StackTrace const&, unsigned long*) () #3 0x00005583f3a22d4b in __sanitizer::StackDepotNode::store(unsigned int, __sanitizer::StackTrace const&, unsigned long long) () #4 0x00005583f3a231d1 in __sanitizer::StackDepotBase<__sanitizer::StackDepotNode, 1, 20>::Put(__sanitizer::StackTrace, bool*) () #5 0x00005583f3aa2c6c in __tsan::CurrentStackId(__tsan::ThreadState*, unsigned long) () #6 0x00005583f3a0b5e3 in __sanitizer::DD::MutexInit(__sanitizer::DDCallback*, __sanitizer::DDMutex*) () #7 0x00005583f3aae1bd in __tsan::DDMutexInit(__tsan::ThreadState*, unsigned long, __tsan::SyncVar*) () #8 0x00005583f3ab9313 in __tsan::MetaMap::GetSync(__tsan::ThreadState*, unsigned long, unsigned long, bool, bool) () #9 0x00005583f3ab15c9 in __tsan::Release(__tsan::ThreadState*, unsigned long, unsigned long) () #10 0x00005583f3a3105d in __tsan::FdRelease(__tsan::ThreadState*, unsigned long, int) () #11 0x00005583f3a436a4 in write () #12 0x00005583fbeb4a4d in DB::WriteBufferFromFileDescriptorDiscardOnFailure::nextImpl (this=<optimized out>) at ./src/IO/WriteBufferFromFileDescriptorDiscardOnFailure.cpp:16 #13 0x00005583f3aee11f in DB::WriteBuffer::write(char const*, unsigned long) () #14 0x00005583fc14d04b in DB::writePODBinary<DB::ThreadStatus*> (x=<error reading variable>, buf=...) at ./src/IO/WriteHelpers.h:88 #15 signalHandler (sig=<optimized out>, info=<optimized out>, context=<optimized out>) at ./src/Daemon/BaseDaemon.cpp:159 #16 0x00005583f3a3da00 in __tsan::CallUserSignalHandler(__tsan::ThreadState*, bool, bool, int, __sanitizer::__sanitizer_siginfo*, void*) () #17 0x00005583f3a3df26 in sighandler(int, __sanitizer::__sanitizer_siginfo*, void*) () #18 0x00007f98d8b3d520 in ?? () from /lib/x86_64-linux-gnu/libc.so.6 #19 0x0000000000000007 in ?? () #20 0x0000000000000000 in ?? () ``` AFAIU, the stack frames are stored in fixed amount of blocks: ``` static constexpr uptr kBlockSizeFrames = 0x100000; static constexpr uptr kBlockCount = 0x1000; static constexpr uptr kBlockSizeBytes = kBlockSizeFrames * sizeof(uptr); ``` Seems like we fill all the blocks causing the check to fail. I didn't find any other place that seems wrong but I'm also not sure what we can do in a case like this. Maybe limit how long we run tests in TSAN fuzzer?
index: 1.0
text_combine: SIGILL in fuzzer with TSAN - Fuzzer with TSAN sometimes gets `SIGILL` because of a failed check: ``` CHECK failed: sanitizer_stack_store.cpp:80 "((block_idx)) < (((sizeof(blocks_)/sizeof((blocks_)[0]))))" (0x1000, 0x1000) (tid=184) ``` Complete stacktrace: ``` #0 0x00005583f3a1b7f6 in __sanitizer::CheckFailed(char const*, int, char const*, unsigned long long, unsigned long long) () #1 0x00005583f3a20de0 in __sanitizer::StackStore::Alloc(unsigned long, unsigned long*, unsigned long*) () #2 0x00005583f3a20c54 in __sanitizer::StackStore::Store(__sanitizer::StackTrace const&, unsigned long*) () #3 0x00005583f3a22d4b in __sanitizer::StackDepotNode::store(unsigned int, __sanitizer::StackTrace const&, unsigned long long) () #4 0x00005583f3a231d1 in __sanitizer::StackDepotBase<__sanitizer::StackDepotNode, 1, 20>::Put(__sanitizer::StackTrace, bool*) () #5 0x00005583f3aa2c6c in __tsan::CurrentStackId(__tsan::ThreadState*, unsigned long) () #6 0x00005583f3a0b5e3 in __sanitizer::DD::MutexInit(__sanitizer::DDCallback*, __sanitizer::DDMutex*) () #7 0x00005583f3aae1bd in __tsan::DDMutexInit(__tsan::ThreadState*, unsigned long, __tsan::SyncVar*) () #8 0x00005583f3ab9313 in __tsan::MetaMap::GetSync(__tsan::ThreadState*, unsigned long, unsigned long, bool, bool) () #9 0x00005583f3ab15c9 in __tsan::Release(__tsan::ThreadState*, unsigned long, unsigned long) () #10 0x00005583f3a3105d in __tsan::FdRelease(__tsan::ThreadState*, unsigned long, int) () #11 0x00005583f3a436a4 in write () #12 0x00005583fbeb4a4d in DB::WriteBufferFromFileDescriptorDiscardOnFailure::nextImpl (this=<optimized out>) at ./src/IO/WriteBufferFromFileDescriptorDiscardOnFailure.cpp:16 #13 0x00005583f3aee11f in DB::WriteBuffer::write(char const*, unsigned long) () #14 0x00005583fc14d04b in DB::writePODBinary<DB::ThreadStatus*> (x=<error reading variable>, buf=...) at ./src/IO/WriteHelpers.h:88 #15 signalHandler (sig=<optimized out>, info=<optimized out>, context=<optimized out>) at ./src/Daemon/BaseDaemon.cpp:159 #16 0x00005583f3a3da00 in __tsan::CallUserSignalHandler(__tsan::ThreadState*, bool, bool, int, __sanitizer::__sanitizer_siginfo*, void*) () #17 0x00005583f3a3df26 in sighandler(int, __sanitizer::__sanitizer_siginfo*, void*) () #18 0x00007f98d8b3d520 in ?? () from /lib/x86_64-linux-gnu/libc.so.6 #19 0x0000000000000007 in ?? () #20 0x0000000000000000 in ?? () ``` AFAIU, the stack frames are stored in fixed amount of blocks: ``` static constexpr uptr kBlockSizeFrames = 0x100000; static constexpr uptr kBlockCount = 0x1000; static constexpr uptr kBlockSizeBytes = kBlockSizeFrames * sizeof(uptr); ``` Seems like we fill all the blocks causing the check to fail. I didn't find any other place that seems wrong but I'm also not sure what we can do in a case like this. Maybe limit how long we run tests in TSAN fuzzer?
label: non_defect
text: sigill in fuzzer with tsan fuzzer with tsan sometimes gets sigill because of a failed check check failed sanitizer stack store cpp block idx sizeof blocks sizeof blocks tid complete stacktrace in sanitizer checkfailed char const int char const unsigned long long unsigned long long in sanitizer stackstore alloc unsigned long unsigned long unsigned long in sanitizer stackstore store sanitizer stacktrace const unsigned long in sanitizer stackdepotnode store unsigned int sanitizer stacktrace const unsigned long long in sanitizer stackdepotbase put sanitizer stacktrace bool in tsan currentstackid tsan threadstate unsigned long in sanitizer dd mutexinit sanitizer ddcallback sanitizer ddmutex in tsan ddmutexinit tsan threadstate unsigned long tsan syncvar in tsan metamap getsync tsan threadstate unsigned long unsigned long bool bool in tsan release tsan threadstate unsigned long unsigned long in tsan fdrelease tsan threadstate unsigned long int in write in db writebufferfromfiledescriptordiscardonfailure nextimpl this at src io writebufferfromfiledescriptordiscardonfailure cpp in db writebuffer write char const unsigned long in db writepodbinary x buf at src io writehelpers h signalhandler sig info context at src daemon basedaemon cpp in tsan callusersignalhandler tsan threadstate bool bool int sanitizer sanitizer siginfo void in sighandler int sanitizer sanitizer siginfo void in from lib linux gnu libc so in in afaiu the stack frames are stored in fixed amount of blocks static constexpr uptr kblocksizeframes static constexpr uptr kblockcount static constexpr uptr kblocksizebytes kblocksizeframes sizeof uptr seems like we fill all the blocks causing the check to fail i didn t find any other place that seems wrong but i m also not sure what we can do in a case like this maybe limit how long we run tests in tsan fuzzer
binary_label: 0

---
Unnamed: 0: 15,457
id: 2,856,032,943
type: IssuesEvent
created_at: 2015-06-02 13:10:29
repo: dermotte/lire
repo_url: https://api.github.com/repos/dermotte/lire
action: closed
title: Make the maven build system work
labels: auto-migrated Priority-Medium Type-Defect
body: ``` Hi all Maven's <em>interesting</em> xml parser seems to dislike the lire copyright header, attached is a very small patch that hopefully makes this work a little better. ``` Original issue reported on code.google.com by `g.j.bow...@gmail.com` on 13 Aug 2013 at 5:29 Attachments: * [Fix-Maven.patch](https://storage.googleapis.com/google-code-attachments/lire/issue-7/comment-0/Fix-Maven.patch)
index: 1.0
text_combine: Make the maven build system work - ``` Hi all Maven's <em>interesting</em> xml parser seems to dislike the lire copyright header, attached is a very small patch that hopefully makes this work a little better. ``` Original issue reported on code.google.com by `g.j.bow...@gmail.com` on 13 Aug 2013 at 5:29 Attachments: * [Fix-Maven.patch](https://storage.googleapis.com/google-code-attachments/lire/issue-7/comment-0/Fix-Maven.patch)
label: defect
text: make the maven build system work hi all maven s interesting xml parser seems to dislike the lire copyright header attached is a very small patch that hopefully makes this work a little better original issue reported on code google com by g j bow gmail com on aug at attachments
binary_label: 1

---
Unnamed: 0: 24,260
id: 12,246,688,310
type: IssuesEvent
created_at: 2020-05-05 14:48:58
repo: nanovms/nanos
repo_url: https://api.github.com/repos/nanovms/nanos
action: opened
title: tfs: inflate existing file extent when appending, merge adjacent extents where possible, eliminate extent size maximum
labels: filesystem performance
body: Right now we're allocating tfs extents more often than we need to. Explore the possibility of extending an existing file extent on append rather than allocating a new one. To accommodate the extension, a larger default storage allocation size could be used for each new extent, or id_heap_set_area could be used to try and extend an existing allocation (failure would just mean creating a new extent). We could also look at merging extents that are contiguous on disk, but such conditions should become much less frequent if the above is implemented, so it's not clear how valuable such a function would be. Once we complete the removal of all temporary buffers for retaining extent data, we can also remove the 1MB maximum extent size.
index: True
text_combine: tfs: inflate existing file extent when appending, merge adjacent extents where possible, eliminate extent size maximum - Right now we're allocating tfs extents more often than we need to. Explore the possibility of extending an existing file extent on append rather than allocating a new one. To accommodate the extension, a larger default storage allocation size could be used for each new extent, or id_heap_set_area could be used to try and extend an existing allocation (failure would just mean creating a new extent). We could also look at merging extents that are contiguous on disk, but such conditions should become much less frequent if the above is implemented, so it's not clear how valuable such a function would be. Once we complete the removal of all temporary buffers for retaining extent data, we can also remove the 1MB maximum extent size.
label: non_defect
text: tfs inflate existing file extent when appending merge adjacent extents where possible eliminate extent size maximum right now we re allocating tfs extents more often than we need to explore the possibility of extending an existing file extent on append rather than allocating a new one to accommodate the extension a larger default storage allocation size could be used for each new extent or id heap set area could be used to try and extend an existing allocation failure would just mean creating a new extent we could also look at merging extents that are contiguous on disk but such conditions should become much less frequent if the above is implemented so it s not clear how valuable such a function would be once we complete the removal of all temporary buffers for retaining extent data we can also remove the maximum extent size
binary_label: 0

---
Unnamed: 0: 772,073
id: 27,105,184,888
type: IssuesEvent
created_at: 2023-02-15 11:33:57
repo: GoogleCloudPlatform/java-docs-samples
repo_url: https://api.github.com/repos/GoogleCloudPlatform/java-docs-samples
action: opened
title: The build failed
labels: type: bug priority: p1 flakybot: issue
body: This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: fe50bd2a0519b78836310e8427617140e0b33ab7 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/a464a19b-d7c6-4de5-ad73-a6e8cc3d66ec), [Sponge](http://sponge2/a464a19b-d7c6-4de5-ad73-a6e8cc3d66ec) status: failed <details><summary>Test output</summary><br><pre>java.util.concurrent.TimeoutException: Waited 3 minutes (plus 127315 nanoseconds delay) for TransformFuture@61d60e38[status=PENDING, info=[inputFuture=[com.google.api.core.ApiFutureToListenableFuture@6e95973c], function=[com.google.api.core.ApiFutures$ApiFunctionToGuavaFunction@404ced67]]] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:527) at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:99) at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:73) at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:131) at compute.windows.osimage.DeleteImage.deleteImage(DeleteImage.java:47) at compute.windows.osimage.WindowsOsImageIT.cleanUp(WindowsOsImageIT.java:130) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterAllMethod(TimeoutExtension.java:116) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$13(ClassBasedTestDescriptor.java:425) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$14(ClassBasedTestDescriptor.java:423) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1092) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeAfterAllMethods(ClassBasedTestDescriptor.java:423) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:225) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:80) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:161) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:161) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86) at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86) at org.apache.maven.surefire.junitplatform.LazyLauncher.execute(LazyLauncher.java:55) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.lambda$execute$1(JUnitPlatformProvider.java:234) at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:228) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:175) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:131) at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581) </pre></details>
index: 1.0
The build failed - This test failed! To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot). If I'm commenting on this issue too often, add the `flakybot: quiet` label and I will stop commenting. --- commit: fe50bd2a0519b78836310e8427617140e0b33ab7 buildURL: [Build Status](https://source.cloud.google.com/results/invocations/a464a19b-d7c6-4de5-ad73-a6e8cc3d66ec), [Sponge](http://sponge2/a464a19b-d7c6-4de5-ad73-a6e8cc3d66ec) status: failed <details><summary>Test output</summary><br><pre>java.util.concurrent.TimeoutException: Waited 3 minutes (plus 127315 nanoseconds delay) for TransformFuture@61d60e38[status=PENDING, info=[inputFuture=[com.google.api.core.ApiFutureToListenableFuture@6e95973c], function=[com.google.api.core.ApiFutures$ApiFunctionToGuavaFunction@404ced67]]] at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:527) at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:99) at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:73) at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:131) at compute.windows.osimage.DeleteImage.deleteImage(DeleteImage.java:47) at compute.windows.osimage.WindowsOsImageIT.cleanUp(WindowsOsImageIT.java:130) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:725) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterAllMethod(TimeoutExtension.java:116) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$13(ClassBasedTestDescriptor.java:425) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$14(ClassBasedTestDescriptor.java:423) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1092) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeAfterAllMethods(ClassBasedTestDescriptor.java:423) at 
org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:225) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:80) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:161) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:161) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at java.base/java.util.ArrayList.forEach(ArrayList.java:1511) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86) at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86) at org.apache.maven.surefire.junitplatform.LazyLauncher.execute(LazyLauncher.java:55) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.lambda$execute$1(JUnitPlatformProvider.java:234) at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:228) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:175) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:131) at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581) </pre></details>
label: non_defect
the build failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output java util concurrent timeoutexception waited minutes plus nanoseconds delay for transformfuture function at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent fluentfuture trustedfuture get fluentfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at com google api gax longrunning operationfutureimpl get operationfutureimpl java at compute windows osimage deleteimage deleteimage deleteimage java at compute windows osimage windowsosimageit cleanup windowsosimageit java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit platform commons util reflectionutils invokemethod reflectionutils java at org junit jupiter engine execution methodinvocation proceed methodinvocation java at org junit jupiter engine execution invocationinterceptorchain validatinginvocation proceed invocationinterceptorchain java at org junit jupiter engine extension timeoutextension intercept timeoutextension java at org junit jupiter engine extension timeoutextension interceptlifecyclemethod timeoutextension java at org junit jupiter engine extension timeoutextension interceptafterallmethod timeoutextension java at org junit jupiter engine execution executableinvoker reflectiveinterceptorcall lambda ofvoidmethod executableinvoker java at org junit jupiter engine execution executableinvoker lambda invoke executableinvoker java at org junit jupiter engine execution invocationinterceptorchain interceptedinvocation 
proceed invocationinterceptorchain java at org junit jupiter engine execution invocationinterceptorchain proceed invocationinterceptorchain java at org junit jupiter engine execution invocationinterceptorchain chainandinvoke invocationinterceptorchain java at org junit jupiter engine execution invocationinterceptorchain invoke invocationinterceptorchain java at org junit jupiter engine execution executableinvoker invoke executableinvoker java at org junit jupiter engine execution executableinvoker invoke executableinvoker java at org junit jupiter engine descriptor classbasedtestdescriptor lambda invokeafterallmethods classbasedtestdescriptor java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit jupiter engine descriptor classbasedtestdescriptor lambda invokeafterallmethods classbasedtestdescriptor java at java base java util arraylist foreach arraylist java at java base java util collections unmodifiablecollection foreach collections java at org junit jupiter engine descriptor classbasedtestdescriptor invokeafterallmethods classbasedtestdescriptor java at org junit jupiter engine descriptor classbasedtestdescriptor after classbasedtestdescriptor java at org junit jupiter engine descriptor classbasedtestdescriptor after classbasedtestdescriptor java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical node around node java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask 
executerecursively nodetesttask java at org junit platform engine support hierarchical nodetesttask execute nodetesttask java at java base java util arraylist foreach arraylist java at org junit platform engine support hierarchical samethreadhierarchicaltestexecutorservice invokeall samethreadhierarchicaltestexecutorservice java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical node around node java at org junit platform engine support hierarchical nodetesttask lambda executerecursively nodetesttask java at org junit platform engine support hierarchical throwablecollector execute throwablecollector java at org junit platform engine support hierarchical nodetesttask executerecursively nodetesttask java at org junit platform engine support hierarchical nodetesttask execute nodetesttask java at org junit platform engine support hierarchical samethreadhierarchicaltestexecutorservice submit samethreadhierarchicaltestexecutorservice java at org junit platform engine support hierarchical hierarchicaltestexecutor execute hierarchicaltestexecutor java at org junit platform engine support hierarchical hierarchicaltestengine execute hierarchicaltestengine java at org junit platform launcher core engineexecutionorchestrator execute engineexecutionorchestrator java at org junit platform launcher core engineexecutionorchestrator execute engineexecutionorchestrator java at org junit platform launcher core engineexecutionorchestrator lambda execute engineexecutionorchestrator java at org junit platform launcher core engineexecutionorchestrator withinterceptedstreams engineexecutionorchestrator java at org junit platform launcher core engineexecutionorchestrator execute 
engineexecutionorchestrator java at org junit platform launcher core defaultlauncher execute defaultlauncher java at org junit platform launcher core defaultlauncher execute defaultlauncher java at org junit platform launcher core defaultlaunchersession delegatinglauncher execute defaultlaunchersession java at org apache maven surefire junitplatform lazylauncher execute lazylauncher java at org apache maven surefire junitplatform junitplatformprovider lambda execute junitplatformprovider java at java base java util iterator foreachremaining iterator java at org apache maven surefire junitplatform junitplatformprovider execute junitplatformprovider java at org apache maven surefire junitplatform junitplatformprovider invokealltests junitplatformprovider java at org apache maven surefire junitplatform junitplatformprovider invoke junitplatformprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java
binary_label: 0
Unnamed: 0: 68,500
id: 21,665,143,205
type: IssuesEvent
created_at: 2022-05-07 03:48:09
repo: vector-im/element-web
repo_url: https://api.github.com/repos/vector-im/element-web
action: closed
title: broken room list css
labels: T-Defect X-Cannot-Reproduce S-Minor A-Room-List A-Appearance O-Uncommon
### Steps to reproduce Not sure, I just noticed it had happened. Probably when a room bubbled to the top by activity? ### Outcome #### What did you expect? / #### What happened instead? ![image](https://user-images.githubusercontent.com/2803622/137633852-1aa18eee-d2e2-4a66-bb17-7da9ee5dc748.png) ### Operating system arch ### Application version nightly ### How did you install the app? aur ### Homeserver _No response_ ### Will you send logs? No
index: 1.0
broken room list css - ### Steps to reproduce Not sure, I just noticed it had happened. Probably when a room bubbled to the top by activity? ### Outcome #### What did you expect? / #### What happened instead? ![image](https://user-images.githubusercontent.com/2803622/137633852-1aa18eee-d2e2-4a66-bb17-7da9ee5dc748.png) ### Operating system arch ### Application version nightly ### How did you install the app? aur ### Homeserver _No response_ ### Will you send logs? No
label: defect
broken room list css steps to reproduce not sure i just noticed it had happened probably when a room bubbled to the top by activity outcome what did you expect what happened instead operating system arch application version nightly how did you install the app aur homeserver no response will you send logs no
binary_label: 1
Unnamed: 0: 63,029
id: 17,348,505,007
type: IssuesEvent
created_at: 2021-07-29 04:55:13
repo: Questie/Questie
repo_url: https://api.github.com/repos/Questie/Questie
action: closed
title: Questie' tried to call the protected function 'CompactRaidFrame11:SetAttribute()
labels: Type - Defect
<!-- READ THIS FIRST Hello, thanks for taking the time to report a bug! Before you proceed, please verify that you're running the latest version of Questie. The easiest way to do this is via the Twitch client, but you can also download the latest version here: https://www.curseforge.com/wow/addons/questie Questie is one of the most popular Classic WoW addons, with over 22M downloads. However, like almost all WoW addons, it's built and maintained by a team of volunteers. The current Questie team is: * @AeroScripts / Aero#1357 (Discord) * @BreakBB / TheCrux#1702 (Discord) * @drejjmit / Drejjmit#8241 (Discord) * @Dyaxler / Dyaxler#0086 (Discord) * @gogo1951 / Gogo#0298 (Discord) If you'd like to help, please consider making a donation. You can do so here: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=aero1861%40gmail%2ecom&lc=CA&item_name=Questie%20Devs&currency_code=USD&bn=PP%2dDonationsBF%3abtn_donate_LG%2egif%3aNonHosted You can also help as a tester, developer or translator, please join the Questie Discord here https://discord.gg/fYcQfv7 --> ## Bug description <!-- Explain in detail what the bug is and how you encountered it. If possible explain how it can be reproduced. --> This appeared during a raid in UBRS when the raid group icons updated, hard to reproduce because it's in a raid: 1x [ADDON_ACTION_BLOCKED] AddOn 'Questie' tried to call the protected function 'CompactRaidFrame11:SetAttribute()'. 
[string "@!BugGrabber\BugGrabber.lua"]:519: in function <!BugGrabber\BugGrabber.lua:519> [string "=[C]"]: in function `SetAttribute' [string "@FrameXML\CompactUnitFrame.lua"]:163: in function `CompactUnitFrame_SetUnit' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:318: in function `CompactRaidFrameContainer_AddUnitFrame' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:254: in function `CompactRaidFrameContainer_AddPlayers' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:176: in function `CompactRaidFrameContainer_LayoutFrames' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:130: in function `CompactRaidFrameContainer_TryUpdate' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:57: in function `CompactRaidFrameContainer_OnEvent' [string "*:OnEvent"]:1: in function <[string "*:OnEvent"]:1> ## Screenshots <!-- If you can, add a screenshot to help explaining the bug. Simply drag and drop the image in this input field, no need to upload it to any other image platform. --> ## Questie version <!-- Which version of Questie are you using? You can find it by: - 1. Hovering over the Questie Minimap Icon - 2. looking at your Questie.toc file (open it with any text editor). It looks something like this: "v5.9.0" or "## Version: 5.9.0". --> Ver. 6.3.12
index: 1.0
Questie' tried to call the protected function 'CompactRaidFrame11:SetAttribute() - <!-- READ THIS FIRST Hello, thanks for taking the time to report a bug! Before you proceed, please verify that you're running the latest version of Questie. The easiest way to do this is via the Twitch client, but you can also download the latest version here: https://www.curseforge.com/wow/addons/questie Questie is one of the most popular Classic WoW addons, with over 22M downloads. However, like almost all WoW addons, it's built and maintained by a team of volunteers. The current Questie team is: * @AeroScripts / Aero#1357 (Discord) * @BreakBB / TheCrux#1702 (Discord) * @drejjmit / Drejjmit#8241 (Discord) * @Dyaxler / Dyaxler#0086 (Discord) * @gogo1951 / Gogo#0298 (Discord) If you'd like to help, please consider making a donation. You can do so here: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=aero1861%40gmail%2ecom&lc=CA&item_name=Questie%20Devs&currency_code=USD&bn=PP%2dDonationsBF%3abtn_donate_LG%2egif%3aNonHosted You can also help as a tester, developer or translator, please join the Questie Discord here https://discord.gg/fYcQfv7 --> ## Bug description <!-- Explain in detail what the bug is and how you encountered it. If possible explain how it can be reproduced. --> This appeared during a raid in UBRS when the raid group icons updated, hard to reproduce because it's in a raid: 1x [ADDON_ACTION_BLOCKED] AddOn 'Questie' tried to call the protected function 'CompactRaidFrame11:SetAttribute()'. 
[string "@!BugGrabber\BugGrabber.lua"]:519: in function <!BugGrabber\BugGrabber.lua:519> [string "=[C]"]: in function `SetAttribute' [string "@FrameXML\CompactUnitFrame.lua"]:163: in function `CompactUnitFrame_SetUnit' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:318: in function `CompactRaidFrameContainer_AddUnitFrame' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:254: in function `CompactRaidFrameContainer_AddPlayers' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:176: in function `CompactRaidFrameContainer_LayoutFrames' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:130: in function `CompactRaidFrameContainer_TryUpdate' [string "@Blizzard_CompactRaidFrames\Blizzard_CompactRaidFrameContainer.lua"]:57: in function `CompactRaidFrameContainer_OnEvent' [string "*:OnEvent"]:1: in function <[string "*:OnEvent"]:1> ## Screenshots <!-- If you can, add a screenshot to help explaining the bug. Simply drag and drop the image in this input field, no need to upload it to any other image platform. --> ## Questie version <!-- Which version of Questie are you using? You can find it by: - 1. Hovering over the Questie Minimap Icon - 2. looking at your Questie.toc file (open it with any text editor). It looks something like this: "v5.9.0" or "## Version: 5.9.0". --> Ver. 6.3.12
label: defect
questie tried to call the protected function setattribute read this first hello thanks for taking the time to report a bug before you proceed please verify that you re running the latest version of questie the easiest way to do this is via the twitch client but you can also download the latest version here questie is one of the most popular classic wow addons with over downloads however like almost all wow addons it s built and maintained by a team of volunteers the current questie team is aeroscripts aero discord breakbb thecrux discord drejjmit drejjmit discord dyaxler dyaxler discord gogo discord if you d like to help please consider making a donation you can do so here you can also help as a tester developer or translator please join the questie discord here bug description this appeared during a raid in ubrs when the raid group icons updated hard to reproduce because it s in a raid addon questie tried to call the protected function setattribute in function in function setattribute in function compactunitframe setunit in function compactraidframecontainer addunitframe in function compactraidframecontainer addplayers in function compactraidframecontainer layoutframes in function compactraidframecontainer tryupdate in function compactraidframecontainer onevent in function screenshots questie version which version of questie are you using you can find it by hovering over the questie minimap icon looking at your questie toc file open it with any text editor it looks something like this or version ver
binary_label: 1
Unnamed: 0: 69,610
id: 22,575,678,285
type: IssuesEvent
created_at: 2022-06-28 07:06:27
repo: vector-im/element-web
repo_url: https://api.github.com/repos/vector-im/element-web
action: closed
title: Clicking on a room pill opens matrix.to in a new tab
labels: T-Defect X-Regression S-Minor X-Release-Blocker A-Pills O-Frequent
### Steps to reproduce 1. Click on a room pill, like the one in this message: ![image](https://user-images.githubusercontent.com/5547783/175908214-49989754-c37f-4d8d-897b-623933a85dd4.png) ### Outcome #### What did you expect? Element switches to the room linked in the pill (in this case a room I'm already in) #### What happened instead? Element opened a new tab on matrix.to ### Operating system Arch Linux ### Browser information Firefox 101.0.1 ### URL for webapp develop.element.io ### Application version Element version: 9eda502a9bce-react-e7a8dbd04cc0-js-9b7628c10313, olm version: 3.2.8 ### Homeserver element.io ### Will you send logs? Yes
index: 1.0
Clicking on a room pill opens matrix.to in a new tab - ### Steps to reproduce 1. Click on a room pill, like the one in this message: ![image](https://user-images.githubusercontent.com/5547783/175908214-49989754-c37f-4d8d-897b-623933a85dd4.png) ### Outcome #### What did you expect? Element switches to the room linked in the pill (in this case a room I'm already in) #### What happened instead? Element opened a new tab on matrix.to ### Operating system Arch Linux ### Browser information Firefox 101.0.1 ### URL for webapp develop.element.io ### Application version Element version: 9eda502a9bce-react-e7a8dbd04cc0-js-9b7628c10313, olm version: 3.2.8 ### Homeserver element.io ### Will you send logs? Yes
label: defect
clicking on a room pill opens matrix to in a new tab steps to reproduce click on a room pill like the one in this message outcome what did you expect element switches to the room linked in the pill in this case a room i m already in what happened instead element opened a new tab on matrix to operating system arch linux browser information firefox url for webapp develop element io application version element version react js olm version homeserver element io will you send logs yes
binary_label: 1
Unnamed: 0: 199,633
id: 6,992,751,151
type: IssuesEvent
created_at: 2017-12-15 08:32:52
repo: Eustacio/seed-starter-manager
repo_url: https://api.github.com/repos/Eustacio/seed-starter-manager
action: closed
title: Error message being displayed on the browser console when the front-end starts
labels: Priority: low Type: bug
Error message being displayed on the browser console when the front-end starts and the _localStorage_ does not have a key/value pair for the theme, resulting on the _ThemeManagerService#restoreTheme_ method trying to use the word "null" as theme name
index: 1.0
Error message being displayed on the browser console when the front-end starts - Error message being displayed on the browser console when the front-end starts and the _localStorage_ does not have a key/value pair for the theme, resulting on the _ThemeManagerService#restoreTheme_ method trying to use the word "null" as theme name
label: non_defect
error message being displayed on the browser console when the front end starts error message being displayed on the browser console when the front end starts and the localstorage does not have a key value pair for the theme resulting on the thememanagerservice restoretheme method trying to use the word null as theme name
binary_label: 0
Unnamed: 0: 54,070
id: 29,503,157,367
type: IssuesEvent
created_at: 2023-06-03 02:02:13
repo: cypress-io/cypress
repo_url: https://api.github.com/repos/cypress-io/cypress
action: closed
title: WSL2: Extremely slow
labels: stage: needs information type: performance 🏃‍♀️ stale
With the release of WSL2 awhile back and the decent performance boost to operating purely on the linux side of the machine. I would prefer to stay on that side, react installs in about 5 seconds with wsl 2, a huge boost. Is there a work around or preferred method right now to using cypress inside of WSL2? For example I had to do a huge work around with XLaunch inorder for Cypress to even open a browser(live-server and react seem to not have this issue) but currently theirs very long delays. In my WSL2 setup it is taking 70 seconds to hit our main page which then redirects to Auth0. I understand this is bad practice and something I will ultimately have to address down the line. When i setup this same environment on windows side it takes about 4-5 seconds. I had setup my auto timeout to be 30 seconds so I thought the call was breaking entirely. Ultimately just looking for a way to keep using WSL2 with its superior performance than transitioning to windows side or being forced to get a MAC over this.
True
WSL2: Extremely slow - With the release of WSL2 awhile back and the decent performance boost to operating purely on the linux side of the machine. I would prefer to stay on that side, react installs in about 5 seconds with wsl 2, a huge boost. Is there a work around or preferred method right now to using cypress inside of WSL2? For example I had to do a huge work around with XLaunch inorder for Cypress to even open a browser(live-server and react seem to not have this issue) but currently theirs very long delays. In my WSL2 setup it is taking 70 seconds to hit our main page which then redirects to Auth0. I understand this is bad practice and something I will ultimately have to address down the line. When i setup this same environment on windows side it takes about 4-5 seconds. I had setup my auto timeout to be 30 seconds so I thought the call was breaking entirely. Ultimately just looking for a way to keep using WSL2 with its superior performance than transitioning to windows side or being forced to get a MAC over this.
non_defect
extremely slow with the release of awhile back and the decent performance boost to operating purely on the linux side of the machine i would prefer to stay on that side react installs in about seconds with wsl a huge boost is there a work around or preferred method right now to using cypress inside of for example i had to do a huge work around with xlaunch inorder for cypress to even open a browser live server and react seem to not have this issue but currently theirs very long delays in my setup it is taking seconds to hit our main page which then redirects to i understand this is bad practice and something i will ultimately have to address down the line when i setup this same environment on windows side it takes about seconds i had setup my auto timeout to be seconds so i thought the call was breaking entirely ultimately just looking for a way to keep using with its superior performance than transitioning to windows side or being forced to get a mac over this
0
304,193
26,260,361,098
IssuesEvent
2023-01-06 07:02:55
abpframework/abp
https://api.github.com/repos/abpframework/abp
closed
Test & Document IAbpHostEnvironment
abp-framework test effort-sm
See https://github.com/abpframework/abp/pull/14842 We have integration tests in the PR. We should also perform manuel tests. You can use the app startup template too see what's coming from the IAbpHostEnvironment in development mode, release and in production and prove that it works as expected. After your tests, please open a section in this document and explain how to use it and how it works: https://docs.abp.io/en/abp/7.0/Application-Startup (after the IAbpApplication section).
1.0
Test & Document IAbpHostEnvironment - See https://github.com/abpframework/abp/pull/14842 We have integration tests in the PR. We should also perform manuel tests. You can use the app startup template too see what's coming from the IAbpHostEnvironment in development mode, release and in production and prove that it works as expected. After your tests, please open a section in this document and explain how to use it and how it works: https://docs.abp.io/en/abp/7.0/Application-Startup (after the IAbpApplication section).
non_defect
test document iabphostenvironment see we have integration tests in the pr we should also perform manuel tests you can use the app startup template too see what s coming from the iabphostenvironment in development mode release and in production and prove that it works as expected after your tests please open a section in this document and explain how to use it and how it works after the iabpapplication section
0
53,302
13,261,381,179
IssuesEvent
2020-08-20 19:47:41
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
[topsimulator] documentation is ancient/non-existent (Trac #1156)
Migrated from Trac combo simulation defect
Documentation in topsimulator are not reachable from the main documentation page. It should be moved to an rst file. Most features are not documented at all. For example, the Corsika injector has many undocumented features. There are two injectors and they replicate functionality. The validation code is not documented. No test/example is documented. The corsika reader is not documented. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1156">https://code.icecube.wisc.edu/projects/icecube/ticket/1156</a>, reported by jgonzalezand owned by jgonzalez</em></summary> <p> ```json { "status": "closed", "changetime": "2016-03-18T21:14:03", "_ts": "1458335643235016", "description": "Documentation in topsimulator are not reachable from the main documentation page. It should be moved to an rst file.\n\nMost features are not documented at all. For example, the Corsika injector has many undocumented features. There are two injectors and they replicate functionality. The validation code is not documented. No test/example is documented. The corsika reader is not documented.", "reporter": "jgonzalez", "cc": "", "resolution": "fixed", "time": "2015-08-18T13:22:29", "component": "combo simulation", "summary": "[topsimulator] documentation is ancient/non-existent", "priority": "blocker", "keywords": "", "milestone": "", "owner": "jgonzalez", "type": "defect" } ``` </p> </details>
1.0
[topsimulator] documentation is ancient/non-existent (Trac #1156) - Documentation in topsimulator are not reachable from the main documentation page. It should be moved to an rst file. Most features are not documented at all. For example, the Corsika injector has many undocumented features. There are two injectors and they replicate functionality. The validation code is not documented. No test/example is documented. The corsika reader is not documented. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1156">https://code.icecube.wisc.edu/projects/icecube/ticket/1156</a>, reported by jgonzalezand owned by jgonzalez</em></summary> <p> ```json { "status": "closed", "changetime": "2016-03-18T21:14:03", "_ts": "1458335643235016", "description": "Documentation in topsimulator are not reachable from the main documentation page. It should be moved to an rst file.\n\nMost features are not documented at all. For example, the Corsika injector has many undocumented features. There are two injectors and they replicate functionality. The validation code is not documented. No test/example is documented. The corsika reader is not documented.", "reporter": "jgonzalez", "cc": "", "resolution": "fixed", "time": "2015-08-18T13:22:29", "component": "combo simulation", "summary": "[topsimulator] documentation is ancient/non-existent", "priority": "blocker", "keywords": "", "milestone": "", "owner": "jgonzalez", "type": "defect" } ``` </p> </details>
defect
documentation is ancient non existent trac documentation in topsimulator are not reachable from the main documentation page it should be moved to an rst file most features are not documented at all for example the corsika injector has many undocumented features there are two injectors and they replicate functionality the validation code is not documented no test example is documented the corsika reader is not documented migrated from json status closed changetime ts description documentation in topsimulator are not reachable from the main documentation page it should be moved to an rst file n nmost features are not documented at all for example the corsika injector has many undocumented features there are two injectors and they replicate functionality the validation code is not documented no test example is documented the corsika reader is not documented reporter jgonzalez cc resolution fixed time component combo simulation summary documentation is ancient non existent priority blocker keywords milestone owner jgonzalez type defect
1
192,766
14,629,157,014
IssuesEvent
2020-12-23 15:22:43
deathlyrage/pot-demo-bugs
https://api.github.com/repos/deathlyrage/pot-demo-bugs
closed
bug with drinking animation
needs testing
**Location:** (X=-330873.25,Y=165260.21875,Z=16001.956055) ![](https://mapbug.alderongames.com/uploads/(X=-330873.25,Y=165260.21875,Z=16001.956055).jpg) **Message:** when the laten drinks its mouth isn't opening like it usually does **Version:** 8864 (demo) **Reporter:** Mutedmirth (581-861-269)
1.0
bug with drinking animation - **Location:** (X=-330873.25,Y=165260.21875,Z=16001.956055) ![](https://mapbug.alderongames.com/uploads/(X=-330873.25,Y=165260.21875,Z=16001.956055).jpg) **Message:** when the laten drinks its mouth isn't opening like it usually does **Version:** 8864 (demo) **Reporter:** Mutedmirth (581-861-269)
non_defect
bug with drinking animation location x y z message when the laten drinks its mouth isn t opening like it usually does version demo reporter mutedmirth
0
33,617
12,216,769,284
IssuesEvent
2020-05-01 15:48:48
habusha/CIOIL
https://api.github.com/repos/habusha/CIOIL
opened
CVE-2020-10672 (High) detected in jackson-databind-2.9.7.jar
security vulnerability
## CVE-2020-10672 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/CIOIL/infra_github/pom.xml</p> <p>Path to vulnerable library: /tmp/ws-ua_20200501140025_KHMIDU/downloadResource_IFJBLS/20200501140117/jackson-databind-2.9.7.jar</p> <p> Dependency Hierarchy: - logstash-logback-encoder-5.2.jar (Root Library) - :x: **jackson-databind-2.9.7.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/habusha/CIOIL/commit/bbaa61e2fd7a1837b81f9827e715dc8c1817cd31">bbaa61e2fd7a1837b81f9827e715dc8c1817cd31</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.aries.transaction.jms.internal.XaPooledConnectionFactory (aka aries.transaction.jms). 
<p>Publish Date: 2020-03-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10672>CVE-2020-10672</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10672">https://nvd.nist.gov/vuln/detail/CVE-2020-10672</a></p> <p>Release Date: 2020-03-18</p> <p>Fix Resolution: jackson-databind-2.9.10.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-10672 (High) detected in jackson-databind-2.9.7.jar - ## CVE-2020-10672 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.7.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/CIOIL/infra_github/pom.xml</p> <p>Path to vulnerable library: /tmp/ws-ua_20200501140025_KHMIDU/downloadResource_IFJBLS/20200501140117/jackson-databind-2.9.7.jar</p> <p> Dependency Hierarchy: - logstash-logback-encoder-5.2.jar (Root Library) - :x: **jackson-databind-2.9.7.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/habusha/CIOIL/commit/bbaa61e2fd7a1837b81f9827e715dc8c1817cd31">bbaa61e2fd7a1837b81f9827e715dc8c1817cd31</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.aries.transaction.jms.internal.XaPooledConnectionFactory (aka aries.transaction.jms). 
<p>Publish Date: 2020-03-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10672>CVE-2020-10672</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10672">https://nvd.nist.gov/vuln/detail/CVE-2020-10672</a></p> <p>Release Date: 2020-03-18</p> <p>Fix Resolution: jackson-databind-2.9.10.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm cioil infra github pom xml path to vulnerable library tmp ws ua khmidu downloadresource ifjbls jackson databind jar dependency hierarchy logstash logback encoder jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache aries transaction jms internal xapooledconnectionfactory aka aries transaction jms publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jackson databind step up your open source security game with whitesource
0
46,545
13,055,930,609
IssuesEvent
2020-07-30 03:09:13
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
cmake libarchive detection problem on osX (combo) (Trac #1403)
Incomplete Migration Migrated from Trac cmake defect
Migrated from https://code.icecube.wisc.edu/ticket/1403 ```json { "status": "closed", "changetime": "2015-12-09T16:41:50", "description": "-- libarchive\n-- Found PkgConfig: /usr/local/bin/pkg-config (found version \"0.28\")\n-- + /usr/lib/libarchive.dylib\n\nand later on:\n-- + icetray\n-- +-- libdcap *not* found, omitting optional dcap support\n'''-- +-- libarchive *not* found, omitting optional tarfile support'''\n-- +-- python [symlinks]\n-- +-- i3math-pybindings\n-- +-- icetray-pybindings\n-- +-- icetray_test-pybindings\n", "reporter": "mjurkovic", "cc": "", "resolution": "duplicate", "_ts": "1449679310745718", "component": "cmake", "summary": "cmake libarchive detection problem on osX (combo)", "priority": "critical", "keywords": "", "time": "2015-10-17T12:15:47", "milestone": "", "owner": "nega", "type": "defect" } ```
1.0
cmake libarchive detection problem on osX (combo) (Trac #1403) - Migrated from https://code.icecube.wisc.edu/ticket/1403 ```json { "status": "closed", "changetime": "2015-12-09T16:41:50", "description": "-- libarchive\n-- Found PkgConfig: /usr/local/bin/pkg-config (found version \"0.28\")\n-- + /usr/lib/libarchive.dylib\n\nand later on:\n-- + icetray\n-- +-- libdcap *not* found, omitting optional dcap support\n'''-- +-- libarchive *not* found, omitting optional tarfile support'''\n-- +-- python [symlinks]\n-- +-- i3math-pybindings\n-- +-- icetray-pybindings\n-- +-- icetray_test-pybindings\n", "reporter": "mjurkovic", "cc": "", "resolution": "duplicate", "_ts": "1449679310745718", "component": "cmake", "summary": "cmake libarchive detection problem on osX (combo)", "priority": "critical", "keywords": "", "time": "2015-10-17T12:15:47", "milestone": "", "owner": "nega", "type": "defect" } ```
defect
cmake libarchive detection problem on osx combo trac migrated from json status closed changetime description libarchive n found pkgconfig usr local bin pkg config found version n usr lib libarchive dylib n nand later on n icetray n libdcap not found omitting optional dcap support n libarchive not found omitting optional tarfile support n python n pybindings n icetray pybindings n icetray test pybindings n reporter mjurkovic cc resolution duplicate ts component cmake summary cmake libarchive detection problem on osx combo priority critical keywords time milestone owner nega type defect
1
80,552
30,335,627,733
IssuesEvent
2023-07-11 09:23:08
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Brendan getting UTD from me
T-Defect S-Major A-E2EE Z-UISI O-Uncommon
### Steps to reproduce 1. Where are you starting? What can you see? 2. What do you click? 3. More steps… ### Outcome #### What did you expect? #### What happened instead? ### Operating system macOS ### Application version _No response_ ### How did you install the app? _No response_ ### Homeserver element.io ### Will you send logs? Yes
1.0
Brendan getting UTD from me - ### Steps to reproduce 1. Where are you starting? What can you see? 2. What do you click? 3. More steps… ### Outcome #### What did you expect? #### What happened instead? ### Operating system macOS ### Application version _No response_ ### How did you install the app? _No response_ ### Homeserver element.io ### Will you send logs? Yes
defect
brendan getting utd from me steps to reproduce where are you starting what can you see what do you click more steps… outcome what did you expect what happened instead operating system macos application version no response how did you install the app no response homeserver element io will you send logs yes
1
34,861
7,465,314,175
IssuesEvent
2018-04-02 03:33:02
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
ExistsIn array_combine error when saveMany entities
Defect Need more information ORM
This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.5.14 * Platform and Target: Win10 x64/PHP7.1.2 x64 / MySQL 5.5 ### What you did When saving many entities using the rule existsIn, PHP throws `Warning: array_combine(): Both parameters should have an equal number of elements [CORE\src\ORM\Rule\ExistsIn.php, line 134]` ### What happened `ExistsIn::__construct` initializes property `$this->_fields`. When `ExistsIn::__invoke` is invoked with option `allowNullableNulls=true` and come across a null field, it **permanently** alters the `$this->_fields` property by removing the corresponding field. When the next entity with a non-null value is tested, the `$this->_fields` property is still altered and lacks the original fields and the warning array_combine(): Both parameters should have an equal number of elements [CORE\src\ORM\Rule\ExistsIn.php, line 134] appears. ### What you expected to happen A copy of `$this->_fields` should be made at every call of `ExistsIn::__invoke` to be compliant with `saveMany()`.
1.0
ExistsIn array_combine error when saveMany entities - This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.5.14 * Platform and Target: Win10 x64/PHP7.1.2 x64 / MySQL 5.5 ### What you did When saving many entities using the rule existsIn, PHP throws `Warning: array_combine(): Both parameters should have an equal number of elements [CORE\src\ORM\Rule\ExistsIn.php, line 134]` ### What happened `ExistsIn::__construct` initializes property `$this->_fields`. When `ExistsIn::__invoke` is invoked with option `allowNullableNulls=true` and come across a null field, it **permanently** alters the `$this->_fields` property by removing the corresponding field. When the next entity with a non-null value is tested, the `$this->_fields` property is still altered and lacks the original fields and the warning array_combine(): Both parameters should have an equal number of elements [CORE\src\ORM\Rule\ExistsIn.php, line 134] appears. ### What you expected to happen A copy of `$this->_fields` should be made at every call of `ExistsIn::__invoke` to be compliant with `saveMany()`.
defect
existsin array combine error when savemany entities this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target mysql what you did when saving many entities using the rule existsin php throws warning array combine both parameters should have an equal number of elements what happened existsin construct initializes property this fields when existsin invoke is invoked with option allownullablenulls true and come across a null field it permanently alters the this fields property by removing the corresponding field when the next entity with a non null value is tested the this fields property is still altered and lacks the original fields and the warning array combine both parameters should have an equal number of elements appears what you expected to happen a copy of this fields should be made at every call of existsin invoke to be compliant with savemany
1
16,745
2,941,305,438
IssuesEvent
2015-07-02 06:49:26
tnt944445/reaver-wps
https://api.github.com/repos/tnt944445/reaver-wps
closed
Reaver won't resume session in Kali 1.0
auto-migrated Priority-Triage Type-Defect
``` A few things to consider before submitting an issue: 0. We write documentation for a reason, if you have not read it and are having problems with Reaver these pages are required reading before submitting an issue: http://code.google.com/p/reaver-wps/wiki/HintsAndTips http://code.google.com/p/reaver-wps/wiki/README http://code.google.com/p/reaver-wps/wiki/FAQ http://code.google.com/p/reaver-wps/wiki/SupportedWirelessDrivers 1. Reaver will only work if your card is in monitor mode. If you do not know what monitor mode is then you should learn more about 802.11 hacking in linux before using Reaver. 2. Using Reaver against access points you do not own or have permission to attack is illegal. If you cannot answer basic questions (i.e. model number, distance away, etc) about the device you are attacking then do not post your issue here. We will not help you break the law. 3. Please look through issues that have already been posted and make sure your question has not already been asked here: http://code.google.com/p /reaver-wps/issues/list 4. Often times we need packet captures of mon0 while Reaver is running to troubleshoot the issue (tcpdump -i mon0 -s0 -w broken_reaver.pcap). Issue reports with pcap files attached will receive more serious consideration. Answer the following questions for every issue submitted: 0. What version of Reaver are you using? (Only defects against the latest version will be considered.) Reaver 1.4 1. What operating system are you using (Linux is the only supported OS)? Kali Linux 2. Is your wireless card in monitor mode (yes/no)? Yes 3. What is the signal strength of the Access Point you are trying to crack? -50? Not sure if this is what you mean 4. What is the manufacturer and model # of the device you are trying to crack? Sagem Skybox 5. What is the entire command line string you are supplying to reaver? reaver -i mon0 -c1 -b 00:11:22:33:44:55 -vv -S -N -L -x361 -d 10 -r 20:361 -T 1 -l 600 6. Please describe what you think the issue is. 
I am unsure but I have tried placing an edited 001122334455.wpc file into /usr/local/etc/reaver (had to make folder) as stated in Issue 233 but with no success. I am booting from a USB using Kali Live but I am not shutting down so surely I should be able to resume? 7. Paste the output from Reaver below. I am unable to as I am back on windows now. The Reaver output doesn't say anything out of the ordinary. It just says session saved after ctrl+c (if I am lucky) then when I input the same bssid it starts from the beginning again regardless. Thanks, hope to get this one sorted! ``` Original issue reported on code.google.com by `mrvarial...@hotmail.com` on 16 Mar 2013 at 1:13
1.0
Reaver won't resume session in Kali 1.0 - ``` A few things to consider before submitting an issue: 0. We write documentation for a reason, if you have not read it and are having problems with Reaver these pages are required reading before submitting an issue: http://code.google.com/p/reaver-wps/wiki/HintsAndTips http://code.google.com/p/reaver-wps/wiki/README http://code.google.com/p/reaver-wps/wiki/FAQ http://code.google.com/p/reaver-wps/wiki/SupportedWirelessDrivers 1. Reaver will only work if your card is in monitor mode. If you do not know what monitor mode is then you should learn more about 802.11 hacking in linux before using Reaver. 2. Using Reaver against access points you do not own or have permission to attack is illegal. If you cannot answer basic questions (i.e. model number, distance away, etc) about the device you are attacking then do not post your issue here. We will not help you break the law. 3. Please look through issues that have already been posted and make sure your question has not already been asked here: http://code.google.com/p /reaver-wps/issues/list 4. Often times we need packet captures of mon0 while Reaver is running to troubleshoot the issue (tcpdump -i mon0 -s0 -w broken_reaver.pcap). Issue reports with pcap files attached will receive more serious consideration. Answer the following questions for every issue submitted: 0. What version of Reaver are you using? (Only defects against the latest version will be considered.) Reaver 1.4 1. What operating system are you using (Linux is the only supported OS)? Kali Linux 2. Is your wireless card in monitor mode (yes/no)? Yes 3. What is the signal strength of the Access Point you are trying to crack? -50? Not sure if this is what you mean 4. What is the manufacturer and model # of the device you are trying to crack? Sagem Skybox 5. What is the entire command line string you are supplying to reaver? reaver -i mon0 -c1 -b 00:11:22:33:44:55 -vv -S -N -L -x361 -d 10 -r 20:361 -T 1 -l 600 6. 
Please describe what you think the issue is. I am unsure but I have tried placing an edited 001122334455.wpc file into /usr/local/etc/reaver (had to make folder) as stated in Issue 233 but with no success. I am booting from a USB using Kali Live but I am not shutting down so surely I should be able to resume? 7. Paste the output from Reaver below. I am unable to as I am back on windows now. The Reaver output doesn't say anything out of the ordinary. It just says session saved after ctrl+c (if I am lucky) then when I input the same bssid it starts from the beginning again regardless. Thanks, hope to get this one sorted! ``` Original issue reported on code.google.com by `mrvarial...@hotmail.com` on 16 Mar 2013 at 1:13
defect
reaver won t resume session in kali a few things to consider before submitting an issue we write documentation for a reason if you have not read it and are having problems with reaver these pages are required reading before submitting an issue reaver will only work if your card is in monitor mode if you do not know what monitor mode is then you should learn more about hacking in linux before using reaver using reaver against access points you do not own or have permission to attack is illegal if you cannot answer basic questions i e model number distance away etc about the device you are attacking then do not post your issue here we will not help you break the law please look through issues that have already been posted and make sure your question has not already been asked here reaver wps issues list often times we need packet captures of while reaver is running to troubleshoot the issue tcpdump i w broken reaver pcap issue reports with pcap files attached will receive more serious consideration answer the following questions for every issue submitted what version of reaver are you using only defects against the latest version will be considered reaver what operating system are you using linux is the only supported os kali linux is your wireless card in monitor mode yes no yes what is the signal strength of the access point you are trying to crack not sure if this is what you mean what is the manufacturer and model of the device you are trying to crack sagem skybox what is the entire command line string you are supplying to reaver reaver i b vv s n l d r t l please describe what you think the issue is i am unsure but i have tried placing an edited wpc file into usr local etc reaver had to make folder as stated in issue but with no success i am booting from a usb using kali live but i am not shutting down so surely i should be able to resume paste the output from reaver below i am unable to as i am back on windows now the reaver output doesn t say anything out of 
the ordinary it just says session saved after ctrl c if i am lucky then when i input the same bssid it starts from the beginning again regardless thanks hope to get this one sorted original issue reported on code google com by mrvarial hotmail com on mar at
1
59,614
17,023,179,182
IssuesEvent
2021-07-03 00:43:54
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Upload GPS trace page lacking in consistency
Component: website Priority: trivial Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 8.24am, Tuesday, 25th September 2007]** The upload GPS trace page (http://www.openstreetmap.org/traces/mine) has labels in lower case ("upload GPX file:"; "description:"; "tags:"; "public?"), and with colons. In the case of an error, the error page has the first letter of each label in upper case, without colons ("Upload GPX File"; "Description"; "Tags"; "Public?") This is inconsistent and doesn't look great. Also, the upload page should make it clear that the description is required.
1.0
Upload GPS trace page lacking in consistency - **[Submitted to the original trac issue database at 8.24am, Tuesday, 25th September 2007]** The upload GPS trace page (http://www.openstreetmap.org/traces/mine) has labels in lower case ("upload GPX file:"; "description:"; "tags:"; "public?"), and with colons. In the case of an error, the error page has the first letter of each label in upper case, without colons ("Upload GPX File"; "Description"; "Tags"; "Public?") This is inconsistent and doesn't look great. Also, the upload page should make it clear that the description is required.
defect
upload gps trace page lacking in consistency the upload gps trace page has labels in lower case upload gpx file description tags public and with colons in the case of an error the error page has the first letter of each label in upper case without colons upload gpx file description tags public this is inconsistent and doesn t look great also the upload page should make it clear that the description is required
1
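Each record in this dump pairs a raw issue body with a lowercased, punctuation- and URL-stripped variant. A minimal sketch of that kind of normalization step follows; the exact pipeline that produced these fields is not documented here, so this is an assumed reconstruction, not the actual tool.

```python
import re

def clean_text(raw: str) -> str:
    """Lowercase, drop URLs, digits and punctuation, collapse whitespace.

    A hypothetical reconstruction of the normalization visible in the
    cleaned-text fields above, not the exact pipeline that produced them.
    """
    text = raw.lower()
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"[^a-z\s]", " ", text)       # keep letters only
    return " ".join(text.split())               # collapse runs of whitespace

print(clean_text("Upload GPX File: http://example.org/x (Public?)"))
```

Run against a fragment of the record above, this yields `upload gpx file public`, matching the style of the cleaned fields in the dump.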
72,374
24,085,005,468
IssuesEvent
2022-09-19 10:06:52
vector-im/element-ios
https://api.github.com/repos/vector-im/element-ios
closed
Tapping Sign out or Invite to Element in the user menu crashes on iPad.
T-Defect Z-AppLayout
### Steps to reproduce 1. Enable the new app layout on an iPad 2. Tap your user icon 3. Tap the Invite to Element or Sign out options. ### Outcome #### What did you expect? To see a sheet with the invite link or a confirmation alert for signing out. #### What happened instead? The app crashes (presumably missing a popover presentation source). https://user-images.githubusercontent.com/6060466/187466438-6c99fa98-cc9c-4ae6-9bcf-23fd7636604b.mp4 ### Your phone model iPad Simulator ### Operating system version iPadOS 15.5 ### Application version 1.9.1 ### Homeserver _No response_ ### Will you send logs? No
1.0
Tapping Sign out or Invite to Element in the user menu crashes on iPad. - ### Steps to reproduce 1. Enable the new app layout on an iPad 2. Tap your user icon 3. Tap the Invite to Element or Sign out options. ### Outcome #### What did you expect? To see a sheet with the invite link or a confirmation alert for signing out. #### What happened instead? The app crashes (presumably missing a popover presentation source). https://user-images.githubusercontent.com/6060466/187466438-6c99fa98-cc9c-4ae6-9bcf-23fd7636604b.mp4 ### Your phone model iPad Simulator ### Operating system version iPadOS 15.5 ### Application version 1.9.1 ### Homeserver _No response_ ### Will you send logs? No
defect
tapping sign out or invite to element in the user menu crashes on ipad steps to reproduce enable the new app layout on an ipad tap your user icon tap the invite to element or sign out options outcome what did you expect to see a sheet with the invite link or a confirmation alert for signing out what happened instead the app crashes presumably missing a popover presentation source your phone model ipad simulator operating system version ipados application version homeserver no response will you send logs no
1
50,155
13,187,349,906
IssuesEvent
2020-08-13 03:07:56
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
closed
Services cannot be accessed if the simclasses python module is loaded before load("libdataio") (Trac #211)
Migrated from Trac combo simulation defect
If in a python script ```text from icecube import simclasses ``` is done before ```text load("libdataio") ``` modules/services in an I3Tray will not be able to Get any services from the context. The attached script "doesnotwork.py" demonstrates this. It will fail with the output: ```text Loading libicetray........................................ok Loading libdataio.........................................ok Loading libsim-services...................................ok Logging configured from file /mnt/sda7/home/fabian/Physik/software/icetray/simulation/build-release/log4cplus.conf /mnt/sda7/home/fabian/Physik/software/icetray/simulation/src/icetray/public/icetray/I3Context.h:127: FATAL: bad any cast getting object "I3EventService" out of context as "I3EventService" Traceback (most recent call last): File "./doesnotwork.py", line 32, in <module> tray.Execute(4) File "/mnt/sda7/home/fabian/Physik/software/icetray/simulation/build-release/lib/I3Tray.py", line 116, in Execute args[0].the_tray.Execute(args[1]) RuntimeError: bad any cast getting object "I3EventService" out of context as "I3EventService" ``` While "works.py" where the order of these two calls is reversed works just fine. Variations of this test suggest that this problem is not limited to I3EventService. The current official IC59 GCD file is needed to run the scripts. <details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/211 , reported by kislat and owned by olivas_</summary> <p> ```json { "status": "closed", "changetime": "2011-04-15T09:41:21", "description": "If in a python script \n{{{\n from icecube import simclasses\n}}}\nis done before\n{{{\n load(\"libdataio\")\n}}}\nmodules/services in an I3Tray will not be able to Get any services from the context. The attached script \"doesnotwork.py\" demonstrates this. 
It will fail with the output:\n{{{\nLoading libicetray........................................ok\nLoading libdataio.........................................ok\nLoading libsim-services...................................ok\nLogging configured from file /mnt/sda7/home/fabian/Physik/software/icetray/simulation/build-release/log4cplus.conf\n/mnt/sda7/home/fabian/Physik/software/icetray/simulation/src/icetray/public/icetray/I3Context.h:127: FATAL: bad any cast getting object \"I3EventService\" out of context as \"I3EventService\"\nTraceback (most recent call last):\n File \"./doesnotwork.py\", line 32, in <module>\n tray.Execute(4)\n File \"/mnt/sda7/home/fabian/Physik/software/icetray/simulation/build-release/lib/I3Tray.py\", line 116, in Execute\n args[0].the_tray.Execute(args[1])\nRuntimeError: bad any cast getting object \"I3EventService\" out of context as \"I3EventService\"\n}}}\nWhile \"works.py\" where the order of these two calls is reversed works just fine. Variations of this test suggest that this problem is not limited to I3EventService. The current official IC59 GCD file is needed to run the scripts.", "reporter": "kislat", "cc": "", "resolution": "fixed", "_ts": "1302860481000000", "component": "combo simulation", "summary": "Services cannot be accessed if the simclasses python module is loaded before load(\"libdataio\")", "priority": "normal", "keywords": "", "time": "2010-09-08T13:47:33", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
1.0
Services cannot be accessed if the simclasses python module is loaded before load("libdataio") (Trac #211) - If in a python script ```text from icecube import simclasses ``` is done before ```text load("libdataio") ``` modules/services in an I3Tray will not be able to Get any services from the context. The attached script "doesnotwork.py" demonstrates this. It will fail with the output: ```text Loading libicetray........................................ok Loading libdataio.........................................ok Loading libsim-services...................................ok Logging configured from file /mnt/sda7/home/fabian/Physik/software/icetray/simulation/build-release/log4cplus.conf /mnt/sda7/home/fabian/Physik/software/icetray/simulation/src/icetray/public/icetray/I3Context.h:127: FATAL: bad any cast getting object "I3EventService" out of context as "I3EventService" Traceback (most recent call last): File "./doesnotwork.py", line 32, in <module> tray.Execute(4) File "/mnt/sda7/home/fabian/Physik/software/icetray/simulation/build-release/lib/I3Tray.py", line 116, in Execute args[0].the_tray.Execute(args[1]) RuntimeError: bad any cast getting object "I3EventService" out of context as "I3EventService" ``` While "works.py" where the order of these two calls is reversed works just fine. Variations of this test suggest that this problem is not limited to I3EventService. The current official IC59 GCD file is needed to run the scripts. <details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/211 , reported by kislat and owned by olivas_</summary> <p> ```json { "status": "closed", "changetime": "2011-04-15T09:41:21", "description": "If in a python script \n{{{\n from icecube import simclasses\n}}}\nis done before\n{{{\n load(\"libdataio\")\n}}}\nmodules/services in an I3Tray will not be able to Get any services from the context. The attached script \"doesnotwork.py\" demonstrates this. 
It will fail with the output:\n{{{\nLoading libicetray........................................ok\nLoading libdataio.........................................ok\nLoading libsim-services...................................ok\nLogging configured from file /mnt/sda7/home/fabian/Physik/software/icetray/simulation/build-release/log4cplus.conf\n/mnt/sda7/home/fabian/Physik/software/icetray/simulation/src/icetray/public/icetray/I3Context.h:127: FATAL: bad any cast getting object \"I3EventService\" out of context as \"I3EventService\"\nTraceback (most recent call last):\n File \"./doesnotwork.py\", line 32, in <module>\n tray.Execute(4)\n File \"/mnt/sda7/home/fabian/Physik/software/icetray/simulation/build-release/lib/I3Tray.py\", line 116, in Execute\n args[0].the_tray.Execute(args[1])\nRuntimeError: bad any cast getting object \"I3EventService\" out of context as \"I3EventService\"\n}}}\nWhile \"works.py\" where the order of these two calls is reversed works just fine. Variations of this test suggest that this problem is not limited to I3EventService. The current official IC59 GCD file is needed to run the scripts.", "reporter": "kislat", "cc": "", "resolution": "fixed", "_ts": "1302860481000000", "component": "combo simulation", "summary": "Services cannot be accessed if the simclasses python module is loaded before load(\"libdataio\")", "priority": "normal", "keywords": "", "time": "2010-09-08T13:47:33", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
defect
services cannot be accessed if the simclasses python module is loaded before load libdataio trac if in a python script text from icecube import simclasses is done before text load libdataio modules services in an will not be able to get any services from the context the attached script doesnotwork py demonstrates this it will fail with the output text loading libicetray ok loading libdataio ok loading libsim services ok logging configured from file mnt home fabian physik software icetray simulation build release conf mnt home fabian physik software icetray simulation src icetray public icetray h fatal bad any cast getting object out of context as traceback most recent call last file doesnotwork py line in tray execute file mnt home fabian physik software icetray simulation build release lib py line in execute args the tray execute args runtimeerror bad any cast getting object out of context as while works py where the order of these two calls is reversed works just fine variations of this test suggest that this problem is not limited to the current official gcd file is needed to run the scripts migrated from reported by kislat and owned by olivas json status closed changetime description if in a python script n n from icecube import simclasses n nis done before n n load libdataio n nmodules services in an will not be able to get any services from the context the attached script doesnotwork py demonstrates this it will fail with the output n nloading libicetray ok nloading libdataio ok nloading libsim services ok nlogging configured from file mnt home fabian physik software icetray simulation build release conf n mnt home fabian physik software icetray simulation src icetray public icetray h fatal bad any cast getting object out of context as ntraceback most recent call last n file doesnotwork py line in n tray execute n file mnt home fabian physik software icetray simulation build release lib py line in execute n args the tray execute args nruntimeerror bad any 
cast getting object out of context as n nwhile works py where the order of these two calls is reversed works just fine variations of this test suggest that this problem is not limited to the current official gcd file is needed to run the scripts reporter kislat cc resolution fixed ts component combo simulation summary services cannot be accessed if the simclasses python module is loaded before load libdataio priority normal keywords time milestone owner olivas type defect
1
475,941
13,728,477,427
IssuesEvent
2020-10-04 11:51:53
mdolr/survol
https://api.github.com/repos/mdolr/survol
closed
CSS : When using the extension on reddit, embeds are being hidden when going out of the parent
bug hacktoberfest help wanted priority
Here is an example, the embed gets cropped, in a normal scenario we should be able to see the red part : ![image](https://user-images.githubusercontent.com/19026937/66706152-b5bfc300-ed2f-11e9-97c0-7ba9f7422c5b.png) It happens on both old.reddit.com and reddit.com
1.0
CSS : When using the extension on reddit, embeds are being hidden when going out of the parent - Here is an example, the embed gets cropped, in a normal scenario we should be able to see the red part : ![image](https://user-images.githubusercontent.com/19026937/66706152-b5bfc300-ed2f-11e9-97c0-7ba9f7422c5b.png) It happens on both old.reddit.com and reddit.com
non_defect
css when using the extension on reddit embeds are being hidden when going out of the parent here is an example the embed gets cropped in a normal scenario we should be able to see the red part it happens on both old reddit com and reddit com
0
23,684
3,851,865,576
IssuesEvent
2016-04-06 05:27:56
GPF/imame4all
https://api.github.com/repos/GPF/imame4all
closed
How can I run a game by giving a command?
auto-migrated Priority-Medium Type-Defect
``` I would like to load a rom without menu. But I found that selecting a game is too hard for me. ``` Original issue reported on code.google.com by `zhengguo...@163.com` on 30 Oct 2012 at 1:38
1.0
How can I run a game by giving a command? - ``` I would like to load a rom without menu. But I found that selecting a game is too hard for me. ``` Original issue reported on code.google.com by `zhengguo...@163.com` on 30 Oct 2012 at 1:38
defect
how can i run a game by giving a command i would like to load a rom without menu but i found that selecting a game is too hard for me original issue reported on code google com by zhengguo com on oct at
1
149,313
23,459,449,999
IssuesEvent
2022-08-16 11:54:06
jupyterlab/jupyterlab
https://api.github.com/repos/jupyterlab/jupyterlab
closed
Rationalize modern Jupyter front-end offerings
enhancement status:Needs Discussion tag:Design and UX
This is a placeholder issue for JupyterLab 4.0's coverage of some of the different user experiences that the official Jupyter front-ends offer our users. ### Problem We currently offer: - The classic Jupyter Notebook - `nbclassic` - JupyterLab default mode - JupyterLab simple mode - `retrolab` ### Proposed Solution We should build into the JupyterLab 4.0 roadmap a set of front-end offerings that covers the user experience aspects of the Classic Notebook (also `nbclassic` and `jupyterlab-classic`) that meet our users' needs, including: - the `jupyter ...` commands that they invoke - the `pip install jupyter...` commands they need to run to install them - the extension installation experience (which front-ends am I installing this extension for, etc.) cc: @Zsailer @jtpio @ellisonbg
1.0
Rationalize modern Jupyter front-end offerings - This is a placeholder issue for JupyterLab 4.0's coverage of some of the different user experiences that the official Jupyter front-ends offer our users. ### Problem We currently offer: - The classic Jupyter Notebook - `nbclassic` - JupyterLab default mode - JupyterLab simple mode - `retrolab` ### Proposed Solution We should build into the JupyterLab 4.0 roadmap a set of front-end offerings that covers the user experience aspects of the Classic Notebook (also `nbclassic` and `jupyterlab-classic`) that meet our users' needs, including: - the `jupyter ...` commands that they invoke - the `pip install jupyter...` commands they need to run to install them - the extension installation experience (which front-ends am I installing this extension for, etc.) cc: @Zsailer @jtpio @ellisonbg
non_defect
rationalize modern jupyter front end offerings this is a placeholder issue for jupyterlab s coverage of some of the different user experiences that the official jupyter front ends offer our users problem we currently offer the classic jupyter notebook nbclassic jupyterlab default mode jupyterlab simple mode retrolab proposed solution we should build into the jupyterlab roadmap a set of front end offerings that covers the user experience aspects of the classic notebook also nbclassic and jupyterlab classic that meet our users needs including the jupyter commands that they invoke the pip install jupyter commands they need to run to install them the extension installation experience which front ends am i installing this extension for etc cc zsailer jtpio ellisonbg
0
321,040
23,836,955,086
IssuesEvent
2022-09-06 07:00:42
hspaans/molecule-containers
https://api.github.com/repos/hspaans/molecule-containers
opened
EOL Ubuntu 18.04
documentation dependencies
Ubuntu ends support for version 18.04 in April 2023, but versions 20.04 and 22.04 are also available. - [ ] Update .github/dependabot.yml - [ ] Update README.md - [ ] Delete package
1.0
EOL Ubuntu 18.04 - Ubuntu ends support for version 18.04 in April 2023, but versions 20.04 and 22.04 are also available. - [ ] Update .github/dependabot.yml - [ ] Update README.md - [ ] Delete package
non_defect
eol ubuntu ubuntu ends support for version in april but versions and are also available update github dependabot yml update readme md delete package
0
9,699
2,615,166,052
IssuesEvent
2015-03-01 06:46:27
chrsmith/reaver-wps
https://api.github.com/repos/chrsmith/reaver-wps
opened
reaver's Makefile does not support DESTDIR
auto-migrated Priority-Triage Type-Defect
``` 0. What version of Reaver are you using? (Only defects against the latest version will be considered.) 1.4 1. What operating system are you using (Linux is the only supported OS)? n/a 2. Is your wireless card in monitor mode (yes/no)? n/a 3. What is the signal strength of the Access Point you are trying to crack? n/a 4. What is the manufacturer and model # of the device you are trying to crack? n/a 5. What is the entire command line string you are supplying to reaver? n/a 6. Please describe what you think the issue is. The Makefile does not support the DESTDIR variable as described here: http://www.gnu.org/prep/standards/html_node/DESTDIR.html This makes it difficult for packagers to install reaver. It looks like the Gentoo people have worked on a patch for this here: https://gist.github.com/guymann/2348394 7. Paste the output from Reaver below. n/a ``` Original issue reported on code.google.com by `ryandesi...@gmail.com` on 17 May 2013 at 9:03
1.0
reaver's Makefile does not support DESTDIR - ``` 0. What version of Reaver are you using? (Only defects against the latest version will be considered.) 1.4 1. What operating system are you using (Linux is the only supported OS)? n/a 2. Is your wireless card in monitor mode (yes/no)? n/a 3. What is the signal strength of the Access Point you are trying to crack? n/a 4. What is the manufacturer and model # of the device you are trying to crack? n/a 5. What is the entire command line string you are supplying to reaver? n/a 6. Please describe what you think the issue is. The Makefile does not support the DESTDIR variable as described here: http://www.gnu.org/prep/standards/html_node/DESTDIR.html This makes it difficult for packagers to install reaver. It looks like the Gentoo people have worked on a patch for this here: https://gist.github.com/guymann/2348394 7. Paste the output from Reaver below. n/a ``` Original issue reported on code.google.com by `ryandesi...@gmail.com` on 17 May 2013 at 9:03
defect
reaver s makefile does not support destdir what version of reaver are you using only defects against the latest version will be considered what operating system are you using linux is the only supported os n a is your wireless card in monitor mode yes no n a what is the signal strength of the access point you are trying to crack n a what is the manufacturer and model of the device you are trying to crack n a what is the entire command line string you are supplying to reaver n a please describe what you think the issue is the makefile does not support the destdir variable as described here this makes it difficult for packagers to install reaver it looks like the gentoo people have worked on a patch for this here paste the output from reaver below n a original issue reported on code google com by ryandesi gmail com on may at
1
77,879
27,212,972,774
IssuesEvent
2023-02-20 18:14:51
DependencyTrack/dependency-track
https://api.github.com/repos/DependencyTrack/dependency-track
closed
Unable to render swagger.json definition of dependency tracker
defect in triage
### Current Behavior I have automated several dependency tracker functionalities (uploading SBOMs, downloading analysis findings etc) using dependency track APIs by following https://docs.dependencytrack.org/integrations/rest-api/ document. But today all the APIs started failing, for further debugging when I try to render swagger.json definition and gave following error. Even I tried building dependency tracker in local still nothing worked. ![image](https://user-images.githubusercontent.com/91177761/220136067-99e2777b-cc8a-4833-bce4-fb32282dab1f.png) ```{"schemaValidationMessages":[{"level":"error","message":"Deprecated Swagger version. Please visit http://swagger.io for information on upgrading to Swagger/OpenAPI 2.0 or OpenAPI 3.x"}]}``` ### Steps to Reproduce 1. Build dependency tracker . 2. Open swagger UI console. Enter URL : <dependency_track_base_url>/api/swagger.json. ### Expected Behavior Output the list of API and their usage like ![image](https://user-images.githubusercontent.com/91177761/220137754-3113f5ca-f989-4984-9377-eb7e7d881cc0.png) ### Dependency-Track Version 4.7.1 ### Dependency-Track Distribution Container Image ### Database Server PostgreSQL ### Database Server Version 13.7 ### Browser Google Chrome ### Checklist - [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues) - [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported
1.0
Unable to render swagger.json definition of dependency tracker - ### Current Behavior I have automated several dependency tracker functionalities (uploading SBOMs, downloading analysis findings etc) using dependency track APIs by following https://docs.dependencytrack.org/integrations/rest-api/ document. But today all the APIs started failing, for further debugging when I try to render swagger.json definition and gave following error. Even I tried building dependency tracker in local still nothing worked. ![image](https://user-images.githubusercontent.com/91177761/220136067-99e2777b-cc8a-4833-bce4-fb32282dab1f.png) ```{"schemaValidationMessages":[{"level":"error","message":"Deprecated Swagger version. Please visit http://swagger.io for information on upgrading to Swagger/OpenAPI 2.0 or OpenAPI 3.x"}]}``` ### Steps to Reproduce 1. Build dependency tracker . 2. Open swagger UI console. Enter URL : <dependency_track_base_url>/api/swagger.json. ### Expected Behavior Output the list of API and their usage like ![image](https://user-images.githubusercontent.com/91177761/220137754-3113f5ca-f989-4984-9377-eb7e7d881cc0.png) ### Dependency-Track Version 4.7.1 ### Dependency-Track Distribution Container Image ### Database Server PostgreSQL ### Database Server Version 13.7 ### Browser Google Chrome ### Checklist - [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues) - [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported
defect
unable to render swagger json definition of dependency tracker current behavior i have automated several dependency tracker functionalities uploading sboms downloading analysis findings etc using dependency track apis by following document but today all the apis started failing for further debugging when i try to render swagger json definition and gave following error even i tried building dependency tracker in local still nothing worked schemavalidationmessages steps to reproduce build dependency tracker open swagger ui console enter url api swagger json expected behavior output the list of api and their usage like dependency track version dependency track distribution container image database server postgresql database server version browser google chrome checklist i have read and understand the i have checked the for whether this defect was already reported
1
52,857
13,225,170,893
IssuesEvent
2020-08-17 20:37:56
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
Steamshovel requires Qt >= 4.8 (Trac #478)
Migrated from Trac defect steamshovel
Steamshovel will build successfully with the current Qt version in I3_PORTS (4.6.4). However, at least one significant feature will not work correctly under this version of Qt. Under Qt 4.6, when steamshovel is running in in-shell console mode (i.e. there is a python prompt on the shell that spawned the steamshovel binary; this behavior can be forced with `steamshovel --console`), there is no way to access the scripting object 'window.gl.scenario'. Attempts to read this value produce this error message: QMetaMethod::invoke: Unable to invoke methods with return values in queued connections ERROR (qmeta): Queued property read failed (qmeta_args.cpp:102 in T scripting::qmeta_detail::property_read(QMetaProperty, QObject*) [with T = Scenario*]) This issue does not prevent basic usage of steamshovel, but will affect users who are trying to use the steamshovel scripting layer for automation tasks. Building steamshovel with Qt >= 4.8 fixes the issue. The underlying issue is the subject of this Qt bug: https://bugreports.qt-project.org/browse/QTBUG-10440 <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/478">https://code.icecube.wisc.edu/projects/icecube/ticket/478</a>, reported by sjacksoand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2019-01-11T23:06:32", "_ts": "1547247992066444", "description": "Steamshovel will build successfully with the current Qt version in I3_PORTS (4.6.4). However, at least one significant feature will not work correctly under this version of Qt. \n\nUnder Qt 4.6, when steamshovel is running in in-shell console mode (i.e. there is a python prompt on the shell that spawned the steamshovel binary; this behavior can be forced with `steamshovel --console`), there is no way to access the scripting object 'window.gl.scenario'. 
Attempts to read this value produce this error message:\n\n QMetaMethod::invoke: Unable to invoke methods with return values in queued connections\n ERROR (qmeta): Queued property read failed (qmeta_args.cpp:102 in T scripting::qmeta_detail::property_read(QMetaProperty, QObject*) [with T = Scenario*])\n\nThis issue does not prevent basic usage of steamshovel, but will affect users who are trying to use the steamshovel scripting layer for automation tasks. Building steamshovel with Qt >= 4.8 fixes the issue. \n\nThe underlying issue is the subject of this Qt bug:\n\nhttps://bugreports.qt-project.org/browse/QTBUG-10440\n\n", "reporter": "sjackso", "cc": "nwhitehorn", "resolution": "fixed", "time": "2013-12-30T17:57:13", "component": "steamshovel", "summary": "Steamshovel requires Qt >= 4.8", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
Steamshovel requires Qt >= 4.8 (Trac #478) - Steamshovel will build successfully with the current Qt version in I3_PORTS (4.6.4). However, at least one significant feature will not work correctly under this version of Qt. Under Qt 4.6, when steamshovel is running in in-shell console mode (i.e. there is a python prompt on the shell that spawned the steamshovel binary; this behavior can be forced with `steamshovel --console`), there is no way to access the scripting object 'window.gl.scenario'. Attempts to read this value produce this error message: QMetaMethod::invoke: Unable to invoke methods with return values in queued connections ERROR (qmeta): Queued property read failed (qmeta_args.cpp:102 in T scripting::qmeta_detail::property_read(QMetaProperty, QObject*) [with T = Scenario*]) This issue does not prevent basic usage of steamshovel, but will affect users who are trying to use the steamshovel scripting layer for automation tasks. Building steamshovel with Qt >= 4.8 fixes the issue. The underlying issue is the subject of this Qt bug: https://bugreports.qt-project.org/browse/QTBUG-10440 <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/478">https://code.icecube.wisc.edu/projects/icecube/ticket/478</a>, reported by sjacksoand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2019-01-11T23:06:32", "_ts": "1547247992066444", "description": "Steamshovel will build successfully with the current Qt version in I3_PORTS (4.6.4). However, at least one significant feature will not work correctly under this version of Qt. \n\nUnder Qt 4.6, when steamshovel is running in in-shell console mode (i.e. there is a python prompt on the shell that spawned the steamshovel binary; this behavior can be forced with `steamshovel --console`), there is no way to access the scripting object 'window.gl.scenario'. 
Attempts to read this value produce this error message:\n\n QMetaMethod::invoke: Unable to invoke methods with return values in queued connections\n ERROR (qmeta): Queued property read failed (qmeta_args.cpp:102 in T scripting::qmeta_detail::property_read(QMetaProperty, QObject*) [with T = Scenario*])\n\nThis issue does not prevent basic usage of steamshovel, but will affect users who are trying to use the steamshovel scripting layer for automation tasks. Building steamshovel with Qt >= 4.8 fixes the issue. \n\nThe underlying issue is the subject of this Qt bug:\n\nhttps://bugreports.qt-project.org/browse/QTBUG-10440\n\n", "reporter": "sjackso", "cc": "nwhitehorn", "resolution": "fixed", "time": "2013-12-30T17:57:13", "component": "steamshovel", "summary": "Steamshovel requires Qt >= 4.8", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
defect
steamshovel requires qt trac steamshovel will build successfully with the current qt version in ports however at least one significant feature will not work correctly under this version of qt under qt when steamshovel is running in in shell console mode i e there is a python prompt on the shell that spawned the steamshovel binary this behavior can be forced with steamshovel console there is no way to access the scripting object window gl scenario attempts to read this value produce this error message qmetamethod invoke unable to invoke methods with return values in queued connections error qmeta queued property read failed qmeta args cpp in t scripting qmeta detail property read qmetaproperty qobject this issue does not prevent basic usage of steamshovel but will affect users who are trying to use the steamshovel scripting layer for automation tasks building steamshovel with qt fixes the issue the underlying issue is the subject of this qt bug migrated from json status closed changetime ts description steamshovel will build successfully with the current qt version in ports however at least one significant feature will not work correctly under this version of qt n nunder qt when steamshovel is running in in shell console mode i e there is a python prompt on the shell that spawned the steamshovel binary this behavior can be forced with steamshovel console there is no way to access the scripting object window gl scenario attempts to read this value produce this error message n n qmetamethod invoke unable to invoke methods with return values in queued connections n error qmeta queued property read failed qmeta args cpp in t scripting qmeta detail property read qmetaproperty qobject n nthis issue does not prevent basic usage of steamshovel but will affect users who are trying to use the steamshovel scripting layer for automation tasks building steamshovel with qt fixes the issue n nthe underlying issue is the subject of this qt bug n n reporter sjackso cc nwhitehorn 
resolution fixed time component steamshovel summary steamshovel requires qt priority normal keywords milestone owner nega type defect
1
634,743
20,371,788,265
IssuesEvent
2022-02-21 11:58:19
MattTheLegoman/RealmsInExile
https://api.github.com/repos/MattTheLegoman/RealmsInExile
closed
Revamp impassable terrain
priority: high mapping
There are issues with it painting weirdly, particularly in the desert.
1.0
Revamp impassable terrain - There are issues with it painting weirdly, particularly in the desert.
non_defect
revamp impassable terrain there are issues with it painting weirdly particularly in the desert
0
279,981
24,271,096,401
IssuesEvent
2022-09-28 10:19:39
SpookyKipper/SpookhostStatusPage
https://api.github.com/repos/SpookyKipper/SpookhostStatusPage
closed
🛑 broken site for testing is down
status broken-site-for-testing
In [`7bef2bf`](https://github.com/SpookyKipper/SpookhostStatusPage/commit/7bef2bf62557ef750995e9737e4ba50c001c59ad ), broken site for testing (hsdgwefweqfweqfefwefwefefsdfqwefdasefds) was **down**: - HTTP code: 0 - Response time: 0 ms
1.0
🛑 broken site for testing is down - In [`7bef2bf`](https://github.com/SpookyKipper/SpookhostStatusPage/commit/7bef2bf62557ef750995e9737e4ba50c001c59ad ), broken site for testing (hsdgwefweqfweqfefwefwefefsdfqwefdasefds) was **down**: - HTTP code: 0 - Response time: 0 ms
non_defect
🛑 broken site for testing is down in broken site for testing hsdgwefweqfweqfefwefwefefsdfqwefdasefds was down http code response time ms
0
663,117
22,162,343,935
IssuesEvent
2022-06-04 17:38:22
Luligabi1/MagicFungi
https://api.github.com/repos/Luligabi1/MagicFungi
closed
Uncaught NPE in PlayerEntityMixin
bug priority: high
https://github.com/Luligabi1/MagicFungi/blob/ee509b6c05ff6ed18d23780409e177e535069b1e/src/main/java/me/luligabi/magicfungi/mixin/PlayerEntityMixin.java#L33 if there is no instance that matches the provided HEALTH_BOOST, then it will be null and the call to getDuration will crash. do a null check
1.0
Uncaught NPE in PlayerEntityMixin - https://github.com/Luligabi1/MagicFungi/blob/ee509b6c05ff6ed18d23780409e177e535069b1e/src/main/java/me/luligabi/magicfungi/mixin/PlayerEntityMixin.java#L33 if there is no instance that matches the provided HEALTH_BOOST, then it will be null and the call to getDuration will crash. do a null check
non_defect
uncaught npe in playerentitymixin if there is no instance that matches the provided health boost then it will be null and the call to getduration will crash do a null check
0
669
2,964,699,718
IssuesEvent
2015-07-10 18:11:26
brharp/hjckrrh
https://api.github.com/repos/brharp/hjckrrh
closed
F3 - Browse FAQ by Category lists all keywords
feature: faq (F) priority: normal type: bug type: missing requirement
Keywords are listed under "Browse FAQ by Category" instead of actual FAQ categories. They all link to 404 pages. https://aoda.web.uoguelph.ca/aodademo/faq ![capture](https://cloud.githubusercontent.com/assets/12450480/8602909/ef7aa834-2642-11e5-8d5b-dd1ba3ab1344.PNG)
1.0
F3 - Browse FAQ by Category lists all keywords - Keywords are listed under "Browse FAQ by Category" instead of actual FAQ categories. They all link to 404 pages. https://aoda.web.uoguelph.ca/aodademo/faq ![capture](https://cloud.githubusercontent.com/assets/12450480/8602909/ef7aa834-2642-11e5-8d5b-dd1ba3ab1344.PNG)
non_defect
browse faq by category lists all keywords keywords are listed under browse faq by category instead of actual faq categories they all link to pages
0
48,797
13,184,744,354
IssuesEvent
2020-08-12 20:00:55
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
Occasional asymmetric client connections (Trac #231)
Incomplete Migration Migrated from Trac defect jeb + pnf
<details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/231 , reported by blaufuss and owned by tschmidt_</summary> <p> ```json { "status": "closed", "changetime": "2012-05-25T13:47:38", "description": "On occasion, pfserver1/2 will have different numbers of clients connected and this is bad for performance. \n\nSeems to happen when clients die right after connecting and reconnect and are double counted in connection pool.\n\nWork around: pause system, and restart PFServers\n\nA better way? Improved client/server connection proxy? ", "reporter": "blaufuss", "cc": "", "resolution": "worksforme", "_ts": "1337953658000000", "component": "jeb + pnf", "summary": "Occasional asymmetric client connections", "priority": "normal", "keywords": "", "time": "2010-12-01T17:13:43", "milestone": "", "owner": "tschmidt", "type": "defect" } ``` </p> </details>
1.0
Occasional asymmetric client connections (Trac #231) - <details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/231 , reported by blaufuss and owned by tschmidt_</summary> <p> ```json { "status": "closed", "changetime": "2012-05-25T13:47:38", "description": "On occasion, pfserver1/2 will have different numbers of clients connected and this is bad for performance. \n\nSeems to happen when clients die right after connecting and reconnect and are double counted in connection pool.\n\nWork around: pause system, and restart PFServers\n\nA better way? Improved client/server connection proxy? ", "reporter": "blaufuss", "cc": "", "resolution": "worksforme", "_ts": "1337953658000000", "component": "jeb + pnf", "summary": "Occasional asymmetric client connections", "priority": "normal", "keywords": "", "time": "2010-12-01T17:13:43", "milestone": "", "owner": "tschmidt", "type": "defect" } ``` </p> </details>
defect
occasional asymmetric client connections trac migrated from reported by blaufuss and owned by tschmidt json status closed changetime description on occasion will have different numbers of clients connected and this is bad for performance n nseems to happen when clients die right after connecting and reconnect and are double counted in connection pool n nwork around pause system and restart pfservers n na better way improved client server connection proxy reporter blaufuss cc resolution worksforme ts component jeb pnf summary occasional asymmetric client connections priority normal keywords time milestone owner tschmidt type defect
1
42,598
11,164,630,076
IssuesEvent
2019-12-27 06:00:17
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
OverflowError in resample_poly (upfirdn)
defect scipy.signal
scipy.signal.resample_poly fails if the output vector length would be greater than 2^31-1. ``` Traceback (most recent call last): File "<ipython-input-1-ac5d2b0a1632>", line 11, in <module> yy = resample_poly(y, 128, 1) File "F:\Programs\Miniconda3\lib\site-packages\scipy\signal\signaltools.py", line 2424, in resample_poly y = upfirdn(h, x, up, down, axis=axis) File "F:\Programs\Miniconda3\lib\site-packages\scipy\signal\_upfirdn.py", line 183, in upfirdn return ufd.apply_filter(x, axis) File "F:\Programs\Miniconda3\lib\site-packages\scipy\signal\_upfirdn.py", line 82, in apply_filter output_shape[axis] = output_len OverflowError: Python int too large to convert to C long ``` output_shape is created on the previous line (81): `output_shape = np.asarray(x.shape)` With an unspecified dtype it appears to get np.int32 by default, which is inadequate for specifying large array shapes. This could be fixed by explicitly specifying the dtype: `output_shape = np.asarray(x.shape, dtype=np.int64)`
1.0
OverflowError in resample_poly (upfirdn) - scipy.signal.resample_poly fails if the output vector length would be greater than 2^31-1. ``` Traceback (most recent call last): File "<ipython-input-1-ac5d2b0a1632>", line 11, in <module> yy = resample_poly(y, 128, 1) File "F:\Programs\Miniconda3\lib\site-packages\scipy\signal\signaltools.py", line 2424, in resample_poly y = upfirdn(h, x, up, down, axis=axis) File "F:\Programs\Miniconda3\lib\site-packages\scipy\signal\_upfirdn.py", line 183, in upfirdn return ufd.apply_filter(x, axis) File "F:\Programs\Miniconda3\lib\site-packages\scipy\signal\_upfirdn.py", line 82, in apply_filter output_shape[axis] = output_len OverflowError: Python int too large to convert to C long ``` output_shape is created on the previous line (81): `output_shape = np.asarray(x.shape)` With an unspecified dtype it appears to get np.int32 by default, which is inadequate for specifying large array shapes. This could be fixed by explicitly specifying the dtype: `output_shape = np.asarray(x.shape, dtype=np.int64)`
defect
overflowerror in resample poly upfirdn scipy signal resample poly fails if the output vector length would be greater than traceback most recent call last file line in yy resample poly y file f programs lib site packages scipy signal signaltools py line in resample poly y upfirdn h x up down axis axis file f programs lib site packages scipy signal upfirdn py line in upfirdn return ufd apply filter x axis file f programs lib site packages scipy signal upfirdn py line in apply filter output shape output len overflowerror python int too large to convert to c long output shape is created on the previous line output shape np asarray x shape with an unspecified dtype it appears to get np by default which is inadequate for specifying large array shapes this could be fixed by explicitly specifying the dtype output shape np asarray x shape dtype np
1
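The SciPy record above proposes forcing a 64-bit dtype on the shape array; a minimal standalone sketch of that fix (the variable names mirror the report, but the snippet is illustrative, not SciPy's actual code):

```python
import numpy as np

# An output length beyond 2**31 - 1 cannot be stored in an int32 shape
# array, which is what np.asarray may produce by default on some platforms.
output_len = 2**31  # one past the int32 maximum

# Failing pattern from the report (dtype defaults to the platform integer):
#   output_shape = np.asarray((100,))   # may be int32
#   output_shape[0] = output_len        # -> OverflowError
#
# The fix suggested in the report: request a 64-bit dtype explicitly.
output_shape = np.asarray((100,), dtype=np.int64)
output_shape[0] = output_len  # fits without overflow
```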
47,980
13,067,354,232
IssuesEvent
2020-07-31 00:11:33
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
Refactoring in wavedeform (Trac #1566)
Migrated from Trac combo reconstruction defect
GetPulses is 400+ lines and hard to read; therefore it should be refactored with low priority. Probably the only way to do this is a complete rewrite of the method using new internal data structures. Could also consider breaking nnls.c into its two distinct and roughly equal parts: 1. The NNLS logic and 2. The SuiteSparse implementation of that logic, probably with a template method that takes an arbitrary vector data type and a corresponding solver. Migrated from https://code.icecube.wisc.edu/ticket/1566 ```json { "status": "closed", "changetime": "2019-09-18T07:51:29", "description": "GetPulses is 400+ lines and hard to read; therefore it should be refactored with low priority. Probably the only way to do this is a complete rewrite of the method using new internal data structures. Could also consider breaking nnls.c into its two distinct and roughly equal parts: 1. The NNLS logic and 2. The SuiteSparse implementation of that logic, probably with a template method that takes an arbitrary vector data type and a corresponding solver.", "reporter": "jbraun", "cc": "", "resolution": "insufficient resources", "_ts": "1568793089537035", "component": "combo reconstruction", "summary": "Refactoring in wavedeform", "priority": "minor", "keywords": "", "time": "2016-02-24T20:17:35", "milestone": "Long-Term Future", "owner": "jbraun", "type": "defect" } ```
1.0
Refactoring in wavedeform (Trac #1566) - GetPulses is 400+ lines and hard to read; therefore it should be refactored with low priority. Probably the only way to do this is a complete rewrite of the method using new internal data structures. Could also consider breaking nnls.c into its two distinct and roughly equal parts: 1. The NNLS logic and 2. The SuiteSparse implementation of that logic, probably with a template method that takes an arbitrary vector data type and a corresponding solver. Migrated from https://code.icecube.wisc.edu/ticket/1566 ```json { "status": "closed", "changetime": "2019-09-18T07:51:29", "description": "GetPulses is 400+ lines and hard to read; therefore it should be refactored with low priority. Probably the only way to do this is a complete rewrite of the method using new internal data structures. Could also consider breaking nnls.c into its two distinct and roughly equal parts: 1. The NNLS logic and 2. The SuiteSparse implementation of that logic, probably with a template method that takes an arbitrary vector data type and a corresponding solver.", "reporter": "jbraun", "cc": "", "resolution": "insufficient resources", "_ts": "1568793089537035", "component": "combo reconstruction", "summary": "Refactoring in wavedeform", "priority": "minor", "keywords": "", "time": "2016-02-24T20:17:35", "milestone": "Long-Term Future", "owner": "jbraun", "type": "defect" } ```
defect
refactoring in wavedeform trac getpulses is lines and hard to read therefore it should be refactored with low priority probably the only way to do this is a complete rewrite of the method using new internal data structures could also consider breaking nnls c into its two distinct and roughly equal parts the nnls logic and the suitesparse implementation of that logic probably with a template method that takes an arbitrary vector data type and a corresponding solver migrated from json status closed changetime description getpulses is lines and hard to read therefore it should be refactored with low priority probably the only way to do this is a complete rewrite of the method using new internal data structures could also consider breaking nnls c into its two distinct and roughly equal parts the nnls logic and the suitesparse implementation of that logic probably with a template method that takes an arbitrary vector data type and a corresponding solver reporter jbraun cc resolution insufficient resources ts component combo reconstruction summary refactoring in wavedeform priority minor keywords time milestone long term future owner jbraun type defect
1
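The wavedeform record above describes separating the NNLS logic from its sparse-matrix implementation; the logic itself (minimize ||Ax - b|| subject to x >= 0) can be exercised with SciPy's dense reference solver. The toy data below is hypothetical and unrelated to wavedeform:

```python
import numpy as np
from scipy.optimize import nnls

# Toy non-negative least-squares problem with a known solution.
rng = np.random.default_rng(0)
A = rng.random((6, 3))
x_true = np.array([0.5, 0.0, 2.0])  # non-negative by construction
b = A @ x_true

# nnls returns the solution vector and the residual norm ||Ax - b||.
x, residual = nnls(A, b)
```

Because `b` was built from a non-negative `x_true`, the solver recovers it with a near-zero residual, and every component of `x` stays non-negative.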
81,225
30,758,942,234
IssuesEvent
2023-07-29 12:42:18
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
opened
[🐛 Bug][Java]: Newly created instances of RemoteWebElement throw NPE on hashCode invocation
I-defect needs-triaging
### What happened? After running the below code snippet a NPE is thrown. This happens because the id property of the instance is unset (e.g. null) initially unless setId is called and hashCode is overridden as @Override public int hashCode() { return id.hashCode(); } This also means that newly created instances of RemoteWebElement cannot be compared or put into a map. Expected result: The hashCode implementation must never throw an exception. ### How can we reproduce the issue? ```shell WebElement we = new RemoteWebElement(); int hash = we.hashCode(); ``` ### Relevant log output ```shell java.lang.NullPointerException at org.openqa.selenium.remote.RemoteWebElement.hashCode(RemoteWebElement.java:298) ``` ``` ### Operating System all ### Selenium version latest ### What are the browser(s) and version(s) where you see this issue? all ### What are the browser driver(s) and version(s) where you see this issue? not applicable ### Are you using Selenium Grid? no
1.0
[🐛 Bug][Java]: Newly created instances of RemoteWebElement throw NPE on hashCode invocation - ### What happened? After running the below code snippet a NPE is thrown. This happens because the id property of the instance is unset (e.g. null) initially unless setId is called and hashCode is overridden as @Override public int hashCode() { return id.hashCode(); } This also means that newly created instances of RemoteWebElement cannot be compared or put into a map. Expected result: The hashCode implementation must never throw an exception. ### How can we reproduce the issue? ```shell WebElement we = new RemoteWebElement(); int hash = we.hashCode(); ``` ### Relevant log output ```shell java.lang.NullPointerException at org.openqa.selenium.remote.RemoteWebElement.hashCode(RemoteWebElement.java:298) ``` ``` ### Operating System all ### Selenium version latest ### What are the browser(s) and version(s) where you see this issue? all ### What are the browser driver(s) and version(s) where you see this issue? not applicable ### Are you using Selenium Grid? no
defect
newly created instances of remotewebelement throw npe on hashcode invocation what happened after running the below code snippet a npe is thrown this happens because the id property of the instance is unset e g null initially unless setid is called and hashcode is overridden as override public int hashcode return id hashcode this also means that newly created instances of remotewebelement cannot be compared or put into a map expected result the hashcode implementation must never throw an exception how can we reproduce the issue shell webelement we new remotewebelement int hash we hashcode relevant log output shell java lang nullpointerexception at org openqa selenium remote remotewebelement hashcode remotewebelement java operating system all selenium version latest what are the browser s and version s where you see this issue all what are the browser driver s and version s where you see this issue not applicable are you using selenium grid no
1
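The contract violation in the Selenium record above (a hash method that throws on an unset id instead of returning a value) is language-general; a minimal Python analog of the defensive null-check fix, with all class and attribute names hypothetical:

```python
class RemoteElement:
    """Toy stand-in for an element whose id may be unset after construction."""

    def __init__(self, element_id=None):
        self.element_id = element_id  # None until assigned, like the unset Java id

    def __hash__(self):
        # Defensive: hash a sentinel instead of raising on a missing id,
        # mirroring the null check suggested for the Java hashCode().
        return hash(self.element_id) if self.element_id is not None else 0

    def __eq__(self, other):
        return (isinstance(other, RemoteElement)
                and self.element_id == other.element_id)


# A freshly created element can now be hashed and used as a dict key.
fresh = RemoteElement()
registry = {fresh: "pending"}
```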
49,328
13,186,614,793
IssuesEvent
2020-08-13 00:45:15
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
fix broken include guards (Trac #1186)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1186">https://code.icecube.wisc.edu/ticket/1186</a>, reported by kjmeagher and owned by karg</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "description": "tpx/private/tpx/converter/convert_I3IceTopBaseline.h", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "_ts": "1550067117911749", "component": "combo reconstruction", "summary": "fix broken include guards", "priority": "normal", "keywords": "", "time": "2015-08-19T13:27:46", "milestone": "", "owner": "karg", "type": "defect" } ``` </p> </details>
1.0
fix broken include guards (Trac #1186) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1186">https://code.icecube.wisc.edu/ticket/1186</a>, reported by kjmeagher and owned by karg</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "description": "tpx/private/tpx/converter/convert_I3IceTopBaseline.h", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "_ts": "1550067117911749", "component": "combo reconstruction", "summary": "fix broken include guards", "priority": "normal", "keywords": "", "time": "2015-08-19T13:27:46", "milestone": "", "owner": "karg", "type": "defect" } ``` </p> </details>
defect
fix broken include guards trac migrated from json status closed changetime description tpx private tpx converter convert h reporter kjmeagher cc resolution fixed ts component combo reconstruction summary fix broken include guards priority normal keywords time milestone owner karg type defect
1
79,900
29,508,926,678
IssuesEvent
2023-06-03 17:16:58
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
PANIC during "zpool import -FfmX <pool>"
Type: Defect
### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Gentoo Distribution Version | Gentoo Base System release 2.13 Kernel Version |6.1.31-gentoo Architecture |x86_64 OpenZFS Version |zfs-2.1.11-r0-gentoo ### Describe the problem you're observing During recovery attempt of a very faulty zpool, zfs panic'd: ### Describe how to reproduce the problem I got a dd image of this disc. I suppose I could reproduce it (didn't try, yet). I'm happy to try and post more info if you tell me how/what to do. ### Include any warning/errors/backtraces from the system logs ![2023-06-03-13-58-33-598 1](https://github.com/openzfs/zfs/assets/32984777/c6e2a467-e5ad-4fe1-a97e-dccb33a77f8f) ![2023-06-03-14-01-09-655](https://github.com/openzfs/zfs/assets/32984777/2cc097bf-e8df-40dc-bb6a-e5a93fd674d4)
1.0
PANIC during "zpool import -FfmX <pool>" - ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Gentoo Distribution Version | Gentoo Base System release 2.13 Kernel Version |6.1.31-gentoo Architecture |x86_64 OpenZFS Version |zfs-2.1.11-r0-gentoo ### Describe the problem you're observing During recovery attempt of a very faulty zpool, zfs panic'd: ### Describe how to reproduce the problem I got a dd image of this disc. I suppose I could reproduce it (didn't try, yet). I'm happy to try and post more info if you tell me how/what to do. ### Include any warning/errors/backtraces from the system logs ![2023-06-03-13-58-33-598 1](https://github.com/openzfs/zfs/assets/32984777/c6e2a467-e5ad-4fe1-a97e-dccb33a77f8f) ![2023-06-03-14-01-09-655](https://github.com/openzfs/zfs/assets/32984777/2cc097bf-e8df-40dc-bb6a-e5a93fd674d4)
defect
panic during zpool import ffmx system information type version name distribution name gentoo distribution version gentoo base system release kernel version gentoo architecture openzfs version zfs gentoo describe the problem you re observing during recovery attempt of a very faulty zpool zfs panic d describe how to reproduce the problem i got a dd image of this disc i suppose i could reproduce it didn t try yet i m happy to try and post more info if you tell me how what to do include any warning errors backtraces from the system logs
1
110,261
11,695,684,520
IssuesEvent
2020-03-06 08:10:14
gatsbyjs/gatsby
https://api.github.com/repos/gatsbyjs/gatsby
closed
Example on proactively fetch updates
type: documentation
<!-- To make it easier for us to help you, please include as much useful information as possible. Useful Links: - Documentation: https://www.gatsbyjs.org/docs/ - Contributing: https://www.gatsbyjs.org/contributing/ Gatsby has several community support channels, try asking your question on: - Discord: https://gatsby.dev/discord - Spectrum: https://spectrum.chat/gatsby-js - Twitter: https://twitter.com/gatsbyjs Before opening a new issue, please search existing issues https://github.com/gatsbyjs/gatsby/issues --> ## Summary When reading https://www.gatsbyjs.org/docs/creating-a-source-plugin/#improve-plugin-developer-experience-by-enabling-faster-sync, it would be nice if an easy example could be listed on how a data source could be synced. The mentioned example of `gatsby-source-sanity` is not beginner friendly and doesn't illustrate what the constraints are for updating nodes. ### Motivation I believe anyone who wishes to integrate an external data source could be interested in seeing this. ### Suggestion Maybe an example where the current time is being added as a node. Then (if that is possible), the node is being updated by a `setInterval` function. In the `setInterval` users can see how exactly an existing node is being updated. ## Steps to resolve this issue <!-- Your suggestion may require additional steps. Remember to add any relevant labels. Note that you'll need to fill in the link to a similar article as well as the correct section. Don't worry if you're not yet sure about these, especially if this is a brand new topic! 
--> ### Draft the doc - [ ] Write the doc, following the format listed in these resources: - [Overview on contributing to documentation](https://www.gatsbyjs.org/contributing/docs-contributions/) - [Docs Templates](https://www.gatsbyjs.org/contributing/docs-templates/) - [Example of a similar article]() - [ ] Add the article to the [docs sidebar](https://github.com/gatsbyjs/gatsby/blob/master/www/src/data/sidebars/doc-links.yaml) under the [parent doc] section. ### Open a pull request - [ ] Open a pull request with your work including the words "closes #[this issue's number]" in the pull request description
1.0
Example on proactively fetch updates - <!-- To make it easier for us to help you, please include as much useful information as possible. Useful Links: - Documentation: https://www.gatsbyjs.org/docs/ - Contributing: https://www.gatsbyjs.org/contributing/ Gatsby has several community support channels, try asking your question on: - Discord: https://gatsby.dev/discord - Spectrum: https://spectrum.chat/gatsby-js - Twitter: https://twitter.com/gatsbyjs Before opening a new issue, please search existing issues https://github.com/gatsbyjs/gatsby/issues --> ## Summary When reading https://www.gatsbyjs.org/docs/creating-a-source-plugin/#improve-plugin-developer-experience-by-enabling-faster-sync, it would be nice if an easy example could be listed on how a data source could be synced. The mentioned example of `gatsby-source-sanity` is not beginner friendly and doesn't illustrate what the constraints are for updating nodes. ### Motivation I believe anyone who wishes to integrate an external data source could be interested in seeing this. ### Suggestion Maybe an example where the current time is being added as a node. Then (if that is possible), the node is being updated by a `setInterval` function. In the `setInterval` users can see how exactly an existing node is being updated. ## Steps to resolve this issue <!-- Your suggestion may require additional steps. Remember to add any relevant labels. Note that you'll need to fill in the link to a similar article as well as the correct section. Don't worry if you're not yet sure about these, especially if this is a brand new topic! 
--> ### Draft the doc - [ ] Write the doc, following the format listed in these resources: - [Overview on contributing to documentation](https://www.gatsbyjs.org/contributing/docs-contributions/) - [Docs Templates](https://www.gatsbyjs.org/contributing/docs-templates/) - [Example of a similar article]() - [ ] Add the article to the [docs sidebar](https://github.com/gatsbyjs/gatsby/blob/master/www/src/data/sidebars/doc-links.yaml) under the [parent doc] section. ### Open a pull request - [ ] Open a pull request with your work including the words "closes #[this issue's number]" in the pull request description
non_defect
example on proactively fetch updates to make it easier for us to help you please include as much useful information as possible useful links documentation contributing gatsby has several community support channels try asking your question on discord spectrum twitter before opening a new issue please search existing issues summary when reading it would be nice if an easy example could be listed on how a data source could be synced the mentioned example of gatsby source sanity is not beginner friendly and doesn t illustrate what the constraints are for updating nodes motivation i believe anyone who wishes to integrate an external data source could be interested in seeing this suggestion maybe an example where the current time is being added as a node then if that is possible the node is being updated by a setinterval function in the setinterval users can see how exactly an existing node is being updated steps to resolve this issue draft the doc write the doc following the format listed in these resources add the article to the under the section open a pull request open a pull request with your work including the words closes in the pull request description
0
45,470
12,814,792,022
IssuesEvent
2020-07-04 21:05:06
pymc-devs/pymc3
https://api.github.com/repos/pymc-devs/pymc3
closed
Mixture of mixtures works, but not Mixture of Mixture and Single distribution
defects shape problem
I am trying to model a Mixture between a Mixture and another distribution, but I am getting an error: **Minimal Example:** ```python with pm.Model() as m: a1 = pm.Normal.dist(mu=0, sigma=1) a2 = pm.Normal.dist(mu=0, sigma=1) a3 = pm.Normal.dist(mu=0, sigma=1) w1 = pm.Dirichlet('w1', np.array([1, 1])) mix = pm.Mixture.dist(w=w1, comp_dists=[a1, a2]) w2 = pm.Dirichlet('w2', np.array([1, 1])) like = pm.Mixture = pm.Mixture('like', w=w2, comp_dists=[mix, a3], observed=np.random.randn(20)) ``` **Traceback:** ```python --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/.local/lib/python3.8/site-packages/pymc3/distributions/mixture.py in _comp_modes(self) 289 try: --> 290 return tt.as_tensor_variable(self.comp_dists.mode) 291 except AttributeError: AttributeError: 'list' object has no attribute 'mode' During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) <ipython-input-8-dedf5c958f15> in <module> 8 9 w2 = pm.Dirichlet('w2', np.array([1, 1])) ---> 10 like = pm.Mixture = pm.Mixture('like', w=w2, comp_dists=[mix, a3], observed=np.random.randn(20)) ~/.local/lib/python3.8/site-packages/pymc3/distributions/distribution.py in __new__(cls, name, *args, **kwargs) 44 raise TypeError("observed needs to be data but got: {}".format(type(data))) 45 total_size = kwargs.pop('total_size', None) ---> 46 dist = cls.dist(*args, **kwargs) 47 return model.Var(name, dist, data, total_size) 48 else: ~/.local/lib/python3.8/site-packages/pymc3/distributions/distribution.py in dist(cls, *args, **kwargs) 55 def dist(cls, *args, **kwargs): 56 dist = object.__new__(cls) ---> 57 dist.__init__(*args, **kwargs) 58 return dist 59 ~/.local/lib/python3.8/site-packages/pymc3/distributions/mixture.py in __init__(self, w, comp_dists, *args, **kwargs) 139 140 try: --> 141 comp_modes = self._comp_modes() 142 comp_mode_logps = self.logp(comp_modes) 143 self.mode = 
comp_modes[tt.argmax(w * comp_mode_logps, axis=-1)] ~/.local/lib/python3.8/site-packages/pymc3/distributions/mixture.py in _comp_modes(self) 290 return tt.as_tensor_variable(self.comp_dists.mode) 291 except AttributeError: --> 292 return tt.squeeze(tt.stack([comp_dist.mode 293 for comp_dist in self.comp_dists], 294 axis=-1)) ~/.local/lib/python3.8/site-packages/theano/tensor/basic.py in stack(*tensors, **kwargs) 4726 dtype = scal.upcast(*[i.dtype for i in tensors]) 4727 return theano.tensor.opt.MakeVector(dtype)(*tensors) -> 4728 return join(axis, *[shape_padaxis(t, axis) for t in tensors]) 4729 4730 ~/.local/lib/python3.8/site-packages/theano/tensor/basic.py in join(axis, *tensors_list) 4500 return tensors_list[0] 4501 else: -> 4502 return join_(axis, *tensors_list) 4503 4504 ~/.local/lib/python3.8/site-packages/theano/gof/op.py in __call__(self, *inputs, **kwargs) 613 """ 614 return_list = kwargs.pop('return_list', False) --> 615 node = self.make_node(*inputs, **kwargs) 616 617 if config.compute_test_value != 'off': ~/.local/lib/python3.8/site-packages/theano/tensor/basic.py in make_node(self, *axis_and_tensors) 4232 return tensor(dtype=out_dtype, broadcastable=bcastable) 4233 -> 4234 return self._make_node_internal( 4235 axis, tensors, as_tensor_variable_args, output_maker) 4236 ~/.local/lib/python3.8/site-packages/theano/tensor/basic.py in _make_node_internal(self, axis, tensors, as_tensor_variable_args, output_maker) 4299 if not python_all([x.ndim == len(bcastable) 4300 for x in as_tensor_variable_args[1:]]): -> 4301 raise TypeError("Join() can only join tensors with the same " 4302 "number of dimensions.") 4303 TypeError: Join() can only join tensors with the same number of dimensions. 
``` However, if I create a fake Mixture dist for the third distribution, it seems to work: ```python with pm.Model() as m: a1 = pm.Normal.dist(mu=0, sigma=1) a2 = pm.Normal.dist(mu=0, sigma=1) a3 = pm.Normal.dist(mu=0, sigma=1) w1 = pm.Dirichlet('w1', np.array([1, 1])) mix = pm.Mixture.dist(w=w1, comp_dists=[a1, a2]) fake_mix = pm.Mixture.dist(w=[1, 0], comp_dists=[a3, a3]) w2 = pm.Dirichlet('w2', np.array([1, 1])) like = pm.Mixture('like', w=w2, comp_dists=[mix, fake_mix], observed=np.random.randn(20)) ``` I understand that this might not be optimal in the first place, and can certainly be coded as a custom distribution, but is this a design choice or a bug? It could also be just a question of shape handling, but I have no good intuition on how to check for that. ## Versions and main components * PyMC3 Version: 3.8 * Theano Version: 1.0.4 * Python Version: 3.8.2 * Operating system: Linux Ubuntu * How did you install PyMC3: pip
1.0
Mixture of mixtures works, but not Mixture of Mixture and Single distribution - I am trying to model a Mixture between a Mixture and another distribution, but I am getting an error: **Minimal Example:** ```python with pm.Model() as m: a1 = pm.Normal.dist(mu=0, sigma=1) a2 = pm.Normal.dist(mu=0, sigma=1) a3 = pm.Normal.dist(mu=0, sigma=1) w1 = pm.Dirichlet('w1', np.array([1, 1])) mix = pm.Mixture.dist(w=w1, comp_dists=[a1, a2]) w2 = pm.Dirichlet('w2', np.array([1, 1])) like = pm.Mixture = pm.Mixture('like', w=w2, comp_dists=[mix, a3], observed=np.random.randn(20)) ``` **Traceback:** ```python --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/.local/lib/python3.8/site-packages/pymc3/distributions/mixture.py in _comp_modes(self) 289 try: --> 290 return tt.as_tensor_variable(self.comp_dists.mode) 291 except AttributeError: AttributeError: 'list' object has no attribute 'mode' During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) <ipython-input-8-dedf5c958f15> in <module> 8 9 w2 = pm.Dirichlet('w2', np.array([1, 1])) ---> 10 like = pm.Mixture = pm.Mixture('like', w=w2, comp_dists=[mix, a3], observed=np.random.randn(20)) ~/.local/lib/python3.8/site-packages/pymc3/distributions/distribution.py in __new__(cls, name, *args, **kwargs) 44 raise TypeError("observed needs to be data but got: {}".format(type(data))) 45 total_size = kwargs.pop('total_size', None) ---> 46 dist = cls.dist(*args, **kwargs) 47 return model.Var(name, dist, data, total_size) 48 else: ~/.local/lib/python3.8/site-packages/pymc3/distributions/distribution.py in dist(cls, *args, **kwargs) 55 def dist(cls, *args, **kwargs): 56 dist = object.__new__(cls) ---> 57 dist.__init__(*args, **kwargs) 58 return dist 59 ~/.local/lib/python3.8/site-packages/pymc3/distributions/mixture.py in __init__(self, w, comp_dists, *args, **kwargs) 139 140 try: --> 141 comp_modes = 
self._comp_modes() 142 comp_mode_logps = self.logp(comp_modes) 143 self.mode = comp_modes[tt.argmax(w * comp_mode_logps, axis=-1)] ~/.local/lib/python3.8/site-packages/pymc3/distributions/mixture.py in _comp_modes(self) 290 return tt.as_tensor_variable(self.comp_dists.mode) 291 except AttributeError: --> 292 return tt.squeeze(tt.stack([comp_dist.mode 293 for comp_dist in self.comp_dists], 294 axis=-1)) ~/.local/lib/python3.8/site-packages/theano/tensor/basic.py in stack(*tensors, **kwargs) 4726 dtype = scal.upcast(*[i.dtype for i in tensors]) 4727 return theano.tensor.opt.MakeVector(dtype)(*tensors) -> 4728 return join(axis, *[shape_padaxis(t, axis) for t in tensors]) 4729 4730 ~/.local/lib/python3.8/site-packages/theano/tensor/basic.py in join(axis, *tensors_list) 4500 return tensors_list[0] 4501 else: -> 4502 return join_(axis, *tensors_list) 4503 4504 ~/.local/lib/python3.8/site-packages/theano/gof/op.py in __call__(self, *inputs, **kwargs) 613 """ 614 return_list = kwargs.pop('return_list', False) --> 615 node = self.make_node(*inputs, **kwargs) 616 617 if config.compute_test_value != 'off': ~/.local/lib/python3.8/site-packages/theano/tensor/basic.py in make_node(self, *axis_and_tensors) 4232 return tensor(dtype=out_dtype, broadcastable=bcastable) 4233 -> 4234 return self._make_node_internal( 4235 axis, tensors, as_tensor_variable_args, output_maker) 4236 ~/.local/lib/python3.8/site-packages/theano/tensor/basic.py in _make_node_internal(self, axis, tensors, as_tensor_variable_args, output_maker) 4299 if not python_all([x.ndim == len(bcastable) 4300 for x in as_tensor_variable_args[1:]]): -> 4301 raise TypeError("Join() can only join tensors with the same " 4302 "number of dimensions.") 4303 TypeError: Join() can only join tensors with the same number of dimensions. 
``` However, if I create a fake Mixture dist for the third distribution, it seems to work: ```python with pm.Model() as m: a1 = pm.Normal.dist(mu=0, sigma=1) a2 = pm.Normal.dist(mu=0, sigma=1) a3 = pm.Normal.dist(mu=0, sigma=1) w1 = pm.Dirichlet('w1', np.array([1, 1])) mix = pm.Mixture.dist(w=w1, comp_dists=[a1, a2]) fake_mix = pm.Mixture.dist(w=[1, 0], comp_dists=[a3, a3]) w2 = pm.Dirichlet('w2', np.array([1, 1])) like = pm.Mixture('like', w=w2, comp_dists=[mix, fake_mix], observed=np.random.randn(20)) ``` I understand that this might not be optimal in the first place, and can certainly be coded as a custom distribution, but is this a design choice or a bug? It could also be just a question of shape handling, but I have no good intuition on how to check for that. ## Versions and main components * PyMC3 Version: 3.8 * Theano Version: 1.0.4 * Python Version: 3.8.2 * Operating system: Linux Ubuntu * How did you install PyMC3: pip
defect
mixture of mixtures works but not mixture of mixture and single distribution i am trying to model a mixture between a mixture and another distribution but i am getting an error minimal example python with pm model as m pm normal dist mu sigma pm normal dist mu sigma pm normal dist mu sigma pm dirichlet np array mix pm mixture dist w comp dists pm dirichlet np array like pm mixture pm mixture like w comp dists observed np random randn traceback python attributeerror traceback most recent call last local lib site packages distributions mixture py in comp modes self try return tt as tensor variable self comp dists mode except attributeerror attributeerror list object has no attribute mode during handling of the above exception another exception occurred typeerror traceback most recent call last in pm dirichlet np array like pm mixture pm mixture like w comp dists observed np random randn local lib site packages distributions distribution py in new cls name args kwargs raise typeerror observed needs to be data but got format type data total size kwargs pop total size none dist cls dist args kwargs return model var name dist data total size else local lib site packages distributions distribution py in dist cls args kwargs def dist cls args kwargs dist object new cls dist init args kwargs return dist local lib site packages distributions mixture py in init self w comp dists args kwargs try comp modes self comp modes comp mode logps self logp comp modes self mode comp modes local lib site packages distributions mixture py in comp modes self return tt as tensor variable self comp dists mode except attributeerror return tt squeeze tt stack comp dist mode for comp dist in self comp dists axis local lib site packages theano tensor basic py in stack tensors kwargs dtype scal upcast return theano tensor opt makevector dtype tensors return join axis local lib site packages theano tensor basic py in join axis tensors list return tensors list else return join axis tensors list
local lib site packages theano gof op py in call self inputs kwargs return list kwargs pop return list false node self make node inputs kwargs if config compute test value off local lib site packages theano tensor basic py in make node self axis and tensors return tensor dtype out dtype broadcastable bcastable return self make node internal axis tensors as tensor variable args output maker local lib site packages theano tensor basic py in make node internal self axis tensors as tensor variable args output maker if not python all x ndim len bcastable for x in as tensor variable args raise typeerror join can only join tensors with the same number of dimensions typeerror join can only join tensors with the same number of dimensions however if i create a fake mixture dist for the third distribution it seems to work python with pm model as m pm normal dist mu sigma pm normal dist mu sigma pm normal dist mu sigma pm dirichlet np array mix pm mixture dist w comp dists fake mix pm mixture dist w comp dists pm dirichlet np array like pm mixture like w comp dists observed np random randn i understand that this might not be optimal in the first place and can certainly be coded as a custom distribution but is this a design choice or a bug it could also be just a question of shape handling but i have no good intuition on how to check for that versions and main components version theano version python version operating system linux ubuntu how did you install pip
1
78,838
27,780,405,621
IssuesEvent
2023-03-16 20:33:56
DependencyTrack/dependency-track
https://api.github.com/repos/DependencyTrack/dependency-track
closed
DT Track takes error when Author is greater than 255 characters in SBOM during import
defect
### Current Behavior After importing an SBOM, we noticed that it was not populating the vulnerabilities as expected (might be unrelated) and so I looked at the logs and saw a fair number of these: ``` dtrack-apiserver_1 | 2023-02-10 18:26:19,671 INFO [BomUploadProcessingTask] Processing CycloneDX BOM uploaded to project: 6747cf15-28f8-4ef5-a37d-3b1f3cdb3855 dtrack-apiserver_1 | 2023-02-10 18:27:05,897 ERROR [BomUploadProcessingTask] Error while processing bom dtrack-apiserver_1 | javax.jdo.JDOFatalUserException: Attempt to store value "Douglas Bates [aut], Martin Maechler [aut, cre] (<https://orcid.org/0000-0002-8685-9910>), Timothy A. Davis [ctb] (SuiteSparse and 'cs' C libraries, notably CHOLMOD, AMD; collaborators listed in dir(pattern = '^[A-Z]+[.]txt$', full.names=TRUE, system.file('doc', 'SuiteSparse', package='Matrix'))), Jens Oehlschlägel [ctb] (initial nearPD()), Jason Riedy [ctb] (condest() and onenormest() for octave, Copyright: Regents of the University of California), R Core Team [ctb] (base R matrix implementation)" in column "AUTHOR" that has maximum length of 255. Please correct your data! 
dtrack-apiserver_1 | at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:678) dtrack-apiserver_1 | at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:702) dtrack-apiserver_1 | at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:722) dtrack-apiserver_1 | at alpine.persistence.AbstractAlpineQueryManager.persist(AbstractAlpineQueryManager.java:427) dtrack-apiserver_1 | at org.dependencytrack.persistence.ComponentQueryManager.createComponent(ComponentQueryManager.java:320) dtrack-apiserver_1 | at org.dependencytrack.persistence.QueryManager.createComponent(QueryManager.java:468) dtrack-apiserver_1 | at org.dependencytrack.tasks.BomUploadProcessingTask.processComponent(BomUploadProcessingTask.java:181) dtrack-apiserver_1 | at org.dependencytrack.tasks.BomUploadProcessingTask.inform(BomUploadProcessingTask.java:127) dtrack-apiserver_1 | at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:101) dtrack-apiserver_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) dtrack-apiserver_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) dtrack-apiserver_1 | at java.base/java.lang.Thread.run(Unknown Source) dtrack-apiserver_1 | Caused by: org.datanucleus.exceptions.NucleusUserException: Attempt to store value "Douglas Bates [aut], Martin Maechler [aut, cre] (<https://orcid.org/0000-0002-8685-9910>), Timothy A. Davis [ctb] (SuiteSparse and 'cs' C libraries, notably CHOLMOD, AMD; collaborators listed in dir(pattern = '^[A-Z]+[.]txt$', full.names=TRUE, system.file('doc', 'SuiteSparse', package='Matrix'))), Jens Oehlschlägel [ctb] (initial nearPD()), Jason Riedy [ctb] (condest() and onenormest() for octave, Copyright: Regents of the University of California), R Core Team [ctb] (base R matrix implementation)" in column "AUTHOR" that has maximum length of 255. 
Please correct your data! dtrack-apiserver_1 | at org.datanucleus.store.rdbms.mapping.column.CharColumnMapping.setString(CharColumnMapping.java:253) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.mapping.java.SingleFieldMapping.setString(SingleFieldMapping.java:202) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeStringField(ParameterSetter.java:158) dtrack-apiserver_1 | at org.datanucleus.state.StateManagerImpl.providedStringField(StateManagerImpl.java:1903) dtrack-apiserver_1 | at org.dependencytrack.model.Component.dnProvideField(Component.java) dtrack-apiserver_1 | at org.dependencytrack.model.Component.dnProvideFields(Component.java) dtrack-apiserver_1 | at org.datanucleus.state.StateManagerImpl.provideFields(StateManagerImpl.java:2559) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:383) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObjectInTable(RDBMSPersistenceHandler.java:162) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:138) dtrack-apiserver_1 | at org.datanucleus.state.StateManagerImpl.internalMakePersistent(StateManagerImpl.java:4587) dtrack-apiserver_1 | at org.datanucleus.state.StateManagerImpl.makePersistent(StateManagerImpl.java:4564) dtrack-apiserver_1 | at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2014) dtrack-apiserver_1 | at org.datanucleus.ExecutionContext.persistObjectInternal(ExecutionContext.java:320) dtrack-apiserver_1 | at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1862) dtrack-apiserver_1 | at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1723) dtrack-apiserver_1 | at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:697) dtrack-apiserver_1 | ... 
10 common frames omitted ``` ### Steps to Reproduce 1. Create a CDX SBOM that has an author field longer than 255 characters. Any SBOM will do and just manually edit one of the author fields to have greater than 255 characters. Use the data from the above error if needed. 2. Create a project in DT 3. Import the SBOM 4. Check the logs ### Expected Behavior Good question. If length is a concern, truncate it rather than error and in the actual author data imported perhaps put (truncated) at the end so it is clear this has been done. You could lengthen it, but then how long is enough? I suggest the above is good and show in the logs it was truncated as well. Cannot answer some of the details below. I just did a curl followed by a docker-compose mid January to create the system, so it is whatever the basics are for one of these systems. ### Dependency-Track Version 4.6.x ### Dependency-Track Distribution Container Image ### Database Server PostgreSQL ### Database Server Version _No response_ ### Browser Mozilla Firefox ### Checklist - [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues) - [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported
1.0
DT Track takes error when Author is greater than 255 characters in SBOM during import - ### Current Behavior After importing an SBOM, we noticed that it was not populating the vulnerabilities as expected (might be unrelated) and so I looked at the logs and saw a fair number of these: ``` dtrack-apiserver_1 | 2023-02-10 18:26:19,671 INFO [BomUploadProcessingTask] Processing CycloneDX BOM uploaded to project: 6747cf15-28f8-4ef5-a37d-3b1f3cdb3855 dtrack-apiserver_1 | 2023-02-10 18:27:05,897 ERROR [BomUploadProcessingTask] Error while processing bom dtrack-apiserver_1 | javax.jdo.JDOFatalUserException: Attempt to store value "Douglas Bates [aut], Martin Maechler [aut, cre] (<https://orcid.org/0000-0002-8685-9910>), Timothy A. Davis [ctb] (SuiteSparse and 'cs' C libraries, notably CHOLMOD, AMD; collaborators listed in dir(pattern = '^[A-Z]+[.]txt$', full.names=TRUE, system.file('doc', 'SuiteSparse', package='Matrix'))), Jens Oehlschlägel [ctb] (initial nearPD()), Jason Riedy [ctb] (condest() and onenormest() for octave, Copyright: Regents of the University of California), R Core Team [ctb] (base R matrix implementation)" in column "AUTHOR" that has maximum length of 255. Please correct your data! 
dtrack-apiserver_1 | at org.datanucleus.api.jdo.JDOAdapter.getJDOExceptionForNucleusException(JDOAdapter.java:678) dtrack-apiserver_1 | at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:702) dtrack-apiserver_1 | at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:722) dtrack-apiserver_1 | at alpine.persistence.AbstractAlpineQueryManager.persist(AbstractAlpineQueryManager.java:427) dtrack-apiserver_1 | at org.dependencytrack.persistence.ComponentQueryManager.createComponent(ComponentQueryManager.java:320) dtrack-apiserver_1 | at org.dependencytrack.persistence.QueryManager.createComponent(QueryManager.java:468) dtrack-apiserver_1 | at org.dependencytrack.tasks.BomUploadProcessingTask.processComponent(BomUploadProcessingTask.java:181) dtrack-apiserver_1 | at org.dependencytrack.tasks.BomUploadProcessingTask.inform(BomUploadProcessingTask.java:127) dtrack-apiserver_1 | at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:101) dtrack-apiserver_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) dtrack-apiserver_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) dtrack-apiserver_1 | at java.base/java.lang.Thread.run(Unknown Source) dtrack-apiserver_1 | Caused by: org.datanucleus.exceptions.NucleusUserException: Attempt to store value "Douglas Bates [aut], Martin Maechler [aut, cre] (<https://orcid.org/0000-0002-8685-9910>), Timothy A. Davis [ctb] (SuiteSparse and 'cs' C libraries, notably CHOLMOD, AMD; collaborators listed in dir(pattern = '^[A-Z]+[.]txt$', full.names=TRUE, system.file('doc', 'SuiteSparse', package='Matrix'))), Jens Oehlschlägel [ctb] (initial nearPD()), Jason Riedy [ctb] (condest() and onenormest() for octave, Copyright: Regents of the University of California), R Core Team [ctb] (base R matrix implementation)" in column "AUTHOR" that has maximum length of 255. 
Please correct your data! dtrack-apiserver_1 | at org.datanucleus.store.rdbms.mapping.column.CharColumnMapping.setString(CharColumnMapping.java:253) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.mapping.java.SingleFieldMapping.setString(SingleFieldMapping.java:202) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeStringField(ParameterSetter.java:158) dtrack-apiserver_1 | at org.datanucleus.state.StateManagerImpl.providedStringField(StateManagerImpl.java:1903) dtrack-apiserver_1 | at org.dependencytrack.model.Component.dnProvideField(Component.java) dtrack-apiserver_1 | at org.dependencytrack.model.Component.dnProvideFields(Component.java) dtrack-apiserver_1 | at org.datanucleus.state.StateManagerImpl.provideFields(StateManagerImpl.java:2559) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:383) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObjectInTable(RDBMSPersistenceHandler.java:162) dtrack-apiserver_1 | at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:138) dtrack-apiserver_1 | at org.datanucleus.state.StateManagerImpl.internalMakePersistent(StateManagerImpl.java:4587) dtrack-apiserver_1 | at org.datanucleus.state.StateManagerImpl.makePersistent(StateManagerImpl.java:4564) dtrack-apiserver_1 | at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2014) dtrack-apiserver_1 | at org.datanucleus.ExecutionContext.persistObjectInternal(ExecutionContext.java:320) dtrack-apiserver_1 | at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1862) dtrack-apiserver_1 | at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1723) dtrack-apiserver_1 | at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:697) dtrack-apiserver_1 | ... 
10 common frames omitted ``` ### Steps to Reproduce 1. Create a CDX SBOM that has an author field longer than 255 characters. Any SBOM will do and just manually edit one of the author fields to have greater than 255 characters. Use the data from the above error if needed. 2. Create a project in DT 3. Import the SBOM 4. Check the logs ### Expected Behavior Good question. If length is a concern, truncate it rather than error and in the actual author data imported perhaps put (truncated) at the end so it is clear this has been done. You could lengthen it, but then how long is enough? I suggest the above is good and show in the logs it was truncated as well. Cannot answer some of the details below. I just did a curl followed by a docker-compose mid January to create the system, so it is whatever the basics are for one of these systems. ### Dependency-Track Version 4.6.x ### Dependency-Track Distribution Container Image ### Database Server PostgreSQL ### Database Server Version _No response_ ### Browser Mozilla Firefox ### Checklist - [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues) - [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported
defect
dt track takes error when author is greater than characters in sbom during import current behavior after importing an sbom we noticed that it was not populating the vulnerabilities as expected might be unrelated and so i looked at the logs and saw a fair number of these processing cyclonedx bom uploaded to project error while processing bom martin maechler timothy a davis suitesparse and cs c libraries notably cholmod amd collaborators listed in dir pattern txt full names true system file doc suitesparse package matrix jens oehlschlägel initial nearpd jason riedy condest and onenormest for octave copyright regents of the university of california r core team base r matrix implementation in column author that has maximum length of please correct your data apiserver at org datanucleus api jdo jdoadapter getjdoexceptionfornucleusexception jdoadapter java apiserver at org datanucleus api jdo jdopersistencemanager jdomakepersistent jdopersistencemanager java apiserver at org datanucleus api jdo jdopersistencemanager makepersistent jdopersistencemanager java apiserver at alpine persistence abstractalpinequerymanager persist abstractalpinequerymanager java apiserver at org dependencytrack persistence componentquerymanager createcomponent componentquerymanager java apiserver at org dependencytrack persistence querymanager createcomponent querymanager java apiserver at org dependencytrack tasks bomuploadprocessingtask processcomponent bomuploadprocessingtask java apiserver at org dependencytrack tasks bomuploadprocessingtask inform bomuploadprocessingtask java apiserver at alpine event framework baseeventservice lambda publish baseeventservice java apiserver at java base java util concurrent threadpoolexecutor runworker unknown source apiserver at java base java util concurrent threadpoolexecutor worker run unknown source apiserver at java base java lang thread run unknown source martin maechler timothy a davis suitesparse and cs c libraries
notably cholmod amd collaborators listed in dir pattern txt full names true system file doc suitesparse package matrix jens oehlschlägel initial nearpd jason riedy condest and onenormest for octave copyright regents of the university of california r core team base r matrix implementation in column author that has maximum length of please correct your data apiserver at org datanucleus store rdbms mapping column charcolumnmapping setstring charcolumnmapping java apiserver at org datanucleus store rdbms mapping java singlefieldmapping setstring singlefieldmapping java apiserver at org datanucleus store rdbms fieldmanager parametersetter storestringfield parametersetter java apiserver at org datanucleus state statemanagerimpl providedstringfield statemanagerimpl java apiserver at org dependencytrack model component dnprovidefield component java apiserver at org dependencytrack model component dnprovidefields component java apiserver at org datanucleus state statemanagerimpl providefields statemanagerimpl java apiserver at org datanucleus store rdbms request insertrequest execute insertrequest java apiserver at org datanucleus store rdbms rdbmspersistencehandler insertobjectintable rdbmspersistencehandler java apiserver at org datanucleus store rdbms rdbmspersistencehandler insertobject rdbmspersistencehandler java apiserver at org datanucleus state statemanagerimpl internalmakepersistent statemanagerimpl java apiserver at org datanucleus state statemanagerimpl makepersistent statemanagerimpl java apiserver at org datanucleus executioncontextimpl persistobjectinternal executioncontextimpl java apiserver at org datanucleus executioncontext persistobjectinternal executioncontext java apiserver at org datanucleus executioncontextimpl persistobjectwork executioncontextimpl java apiserver at org datanucleus executioncontextimpl persistobject executioncontextimpl java apiserver at org datanucleus api jdo jdopersistencemanager
jdomakepersistent jdopersistencemanager java apiserver common frames omitted steps to reproduce create a cdx sbom that has an author field longer than characters any sbom will do and just manually edit one of the author fields to have greater than characters use the data from the above error if needed create a project in dt import the sbom check the logs expected behavior good question if length is a concern truncate it rather than error and in the actual author data imported perhaps put truncated at the end so it is clear this has been done you could lengthen it but then how long is enough i suggest the above is good and show in the logs it was truncated as well cannot answer some of the details below i just did a curl followed by a docker compose mid january to create the system so it is whatever the basics are for one of these systems dependency track version x dependency track distribution container image database server postgresql database server version no response browser mozilla firefox checklist i have read and understand the i have checked the for whether this defect was already reported
1
67,452
27,852,337,967
IssuesEvent
2023-03-20 19:44:10
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
[API Proposal]: Add Marshal.ReadInt128 and Marshal.WriteInt128 methods
api-suggestion area-System.Runtime.InteropServices
### Background and motivation Now that we have Int128 support (#67151), it would be nice to have the ability to read and write those values directly from/to unmanaged memory similar to Marshal.ReadInt64 and Marshal.WriteInt64. Several Arm64 and x64 (SSE) processors support this already: https://developer.arm.com/documentation/ka004805/1-0/?lang=en https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/x64-instructions The value added would be atomic access to 128-bit unmanaged values and faster 128-bit interop. ### API Proposal ```csharp namespace System.Runtime.InteropServices; public static partial class Marshal { public static unsafe Int128 ReadInt128(IntPtr ptr, int ofs); public static Int128 ReadInt128(IntPtr ptr) => ReadInt128(ptr, 0); public static unsafe void WriteInt128(IntPtr ptr, int ofs, Int128 val); public static void WriteInt128(IntPtr ptr, Int128 val) => WriteInt128(ptr, 0, val); } ``` ### API Usage ```csharp // Pulled from ReadInt64 Example. // Allocate unmanaged memory. int elementSize = 16; IntPtr unmanagedArray = Marshal.AllocHGlobal(10 * elementSize); // Set the 10 elements of the C-style unmanagedArray for (int i = 0; i < 10; i++) { Marshal.WriteInt128(unmanagedArray, i * elementSize, ((Int128)(i + 1))); } Console.WriteLine("Unmanaged memory written."); Console.WriteLine("Reading unmanaged memory:"); // Print the 10 elements of the C-style unmanagedArray for (int i = 0; i < 10; i++) { Console.WriteLine(Marshal.ReadInt128(unmanagedArray, i * elementSize)); } Marshal.FreeHGlobal(unmanagedArray); Console.WriteLine("Done. Press Enter to continue."); Console.ReadLine(); ``` ### Alternative Designs The above API uses Int128, but the Int64 versions use the long alias. If a similar alias is developed for Int128 (llong maybe?) then that alias should be used instead of Int128. ### Risks This shouldn't affect any previous code, but the risk would be if a processor architecture doesn't support 128-bit access then what would the call do.
The easiest would be to provide a NotImplementedException (or something similar). x64 and arm64 should be ok, but arm32 may not have an equivalent.
1.0
[API Proposal]: Add Marshal.ReadInt128 and Marshal.WriteInt128 methods - ### Background and motivation Now that we have Int128 support (#67151), it would be nice to have the ability to read and write those values directly from/to unmanaged memory similar to Marshal.ReadInt64 and Marshal.WriteInt64. Several Arm64 and x64 (SSE) processors support this already: https://developer.arm.com/documentation/ka004805/1-0/?lang=en https://learn.microsoft.com/en-us/windows-hardware/drivers/debugger/x64-instructions The value added would be atomic access to 128-bit unmanaged values and faster 128-bit interop. ### API Proposal ```csharp namespace System.Runtime.InteropServices; public static partial class Marshal { public static unsafe Int128 ReadInt128(IntPtr ptr, int ofs); public static Int128 ReadInt128(IntPtr ptr) => ReadInt128(ptr, 0); public static unsafe void WriteInt128(IntPtr ptr, int ofs, Int128 val); public static void WriteInt128(IntPtr ptr, Int128 val) => WriteInt128(ptr, 0, val); } ``` ### API Usage ```csharp // Pulled from ReadInt64 Example. // Allocate unmanaged memory. int elementSize = 16; IntPtr unmanagedArray = Marshal.AllocHGlobal(10 * elementSize); // Set the 10 elements of the C-style unmanagedArray for (int i = 0; i < 10; i++) { Marshal.WriteInt128(unmanagedArray, i * elementSize, ((Int128)(i + 1))); } Console.WriteLine("Unmanaged memory written."); Console.WriteLine("Reading unmanaged memory:"); // Print the 10 elements of the C-style unmanagedArray for (int i = 0; i < 10; i++) { Console.WriteLine(Marshal.ReadInt128(unmanagedArray, i * elementSize)); } Marshal.FreeHGlobal(unmanagedArray); Console.WriteLine("Done. Press Enter to continue."); Console.ReadLine(); ``` ### Alternative Designs The above API uses Int128, but the Int64 versions use the long alias. If a similar alias is developed for Int128 (llong maybe?) then that alias should be used instead of Int128.
### Risks This shouldn't affect any previous code, but the risk would be if a processor architecture doesn't support 128-bit access then what would the call do. The easiest would be to provide a NotImplementedException (or something similar). x64 and arm64 should be ok, but arm32 may not have an equivalent.
non_defect
add marshal and marshal methods background and motivation now that we have support it would be nice to have the ability to read and write those values directly from to unmanaged memory similar to marshal and marshal several and sse processors support this already the value added would be atomic access to bit unmanaged values and faster bit interop api proposal csharp namespace system runtime interopservices public static partial class marshal public static unsafe intptr ptr int ofs public static intptr ptr ptr public static unsafe void intptr ptr int ofs val public static void intptr ptr val ptr val api usage csharp pulled from example allocate unmanaged memory int elementsize intptr unmanagedarray marshal allochglobal elementsize set the elements of the c style unmanagedarray for int i i i marshal unmanagedarray i elementsize i console writeline unmanaged memory written console writeline reading unmanaged memory print the elements of the c style unmanagedarray for int i i i console writeline marshal unmanagedarray i elementsize marshal freehglobal unmanagedarray console writeline done press enter to continue console readline alternative designs the above api uses but the versions use the long alias if a similar alias is developed for llong maybe then that alias should be used instead of risks this shouldn t affect any previous code but the risk would be if a processor architecture doesn t support bit access then what would the call do the easiest would be to provide a notimplementedexception or something similar and should be ok but may not have an equivalent
0
49,637
13,187,244,140
IssuesEvent
2020-08-13 02:48:12
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
[VHESelfVeto] references to null pointers (Trac #1793)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1793">https://code.icecube.wisc.edu/ticket/1793</a>, reported by kjmeagher and owned by claudio.kopper</em></summary> <p> ```json { "status": "closed", "changetime": "2019-09-18T05:27:21", "description": "found by static analysis\nhttp://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-e1bdb1.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-855ebc.html#EndPath", "reporter": "kjmeagher", "cc": "", "resolution": "worksforme", "_ts": "1568784441079751", "component": "combo reconstruction", "summary": "[VHESelfVeto] references to null pointers", "priority": "normal", "keywords": "", "time": "2016-07-27T07:45:46", "milestone": "Long-Term Future", "owner": "claudio.kopper", "type": "defect" } ``` </p> </details>
1.0
[VHESelfVeto] references to null pointers (Trac #1793) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1793">https://code.icecube.wisc.edu/ticket/1793</a>, reported by kjmeagher and owned by claudio.kopper</em></summary> <p> ```json { "status": "closed", "changetime": "2019-09-18T05:27:21", "description": "found by static analysis\nhttp://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-e1bdb1.html#EndPath\nhttp://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-855ebc.html#EndPath", "reporter": "kjmeagher", "cc": "", "resolution": "worksforme", "_ts": "1568784441079751", "component": "combo reconstruction", "summary": "[VHESelfVeto] references to null pointers", "priority": "normal", "keywords": "", "time": "2016-07-27T07:45:46", "milestone": "Long-Term Future", "owner": "claudio.kopper", "type": "defect" } ``` </p> </details>
defect
references to null pointers trac migrated from json status closed changetime description found by static analysis n reporter kjmeagher cc resolution worksforme ts component combo reconstruction summary references to null pointers priority normal keywords time milestone long term future owner claudio kopper type defect
1
11,370
13,308,997,856
IssuesEvent
2020-08-26 02:40:43
apache/incubator-doris
https://api.github.com/repos/apache/incubator-doris
closed
[Bug][MySQL compatibility] Query to info schema table does not return correct results.
area/mysql-compatibility
**Describe the bug** ``` SELECT TABLE_SCHEMA TABLE_CAT, NULL TABLE_SCHEM, TABLE_NAME, IF(TABLE_TYPE='BASE TABLE', 'TABLE', TABLE_TYPE) as TABLE_TYPE, TABLE_COMMENT REMARKS, NULL TYPE_CAT, NULL TYPE_SCHEM, NULL TYPE_NAME, NULL SELF_REFERENCING_COL_NAME, NULL REF_GENERATION FROM INFORMATION_SCHEMA.TABLES WHERE (ISNULL(database()) OR (TABLE_SCHEMA = database())) AND (TABLE_NAME LIKE '%') AND TABLE_TYPE IN ('BASE TABLE','VIEW','FOREIGN TABLE','MATERIALIZED VIEW','EXTERNAL TABLE') ORDER BY TABLE_TYPE, TABLE_SCHEMA, TABLE_NAME; ``` The above SQL returns an empty result, although it is expected to return the table info of the current database. **Why** This is because the `database()` function returns the full name of a database, like `default_cluster:db1`, but the `TABLE_SCHEMA` value returned from `INFORMATION_SCHEMA.TABLES` is the database name without the `default_cluster` prefix. So the WHERE predicate `TABLE_SCHEMA = database()` is false.
True
[Bug][MySQL compatibility] Query to info schema table does not return correct results. - **Describe the bug** ``` SELECT TABLE_SCHEMA TABLE_CAT, NULL TABLE_SCHEM, TABLE_NAME, IF(TABLE_TYPE='BASE TABLE', 'TABLE', TABLE_TYPE) as TABLE_TYPE, TABLE_COMMENT REMARKS, NULL TYPE_CAT, NULL TYPE_SCHEM, NULL TYPE_NAME, NULL SELF_REFERENCING_COL_NAME, NULL REF_GENERATION FROM INFORMATION_SCHEMA.TABLES WHERE (ISNULL(database()) OR (TABLE_SCHEMA = database())) AND (TABLE_NAME LIKE '%') AND TABLE_TYPE IN ('BASE TABLE','VIEW','FOREIGN TABLE','MATERIALIZED VIEW','EXTERNAL TABLE') ORDER BY TABLE_TYPE, TABLE_SCHEMA, TABLE_NAME; ``` The above SQL returns an empty result, although it is expected to return the table info of the current database. **Why** This is because the `database()` function returns the full name of a database, like `default_cluster:db1`, but the `TABLE_SCHEMA` value returned from `INFORMATION_SCHEMA.TABLES` is the database name without the `default_cluster` prefix. So the WHERE predicate `TABLE_SCHEMA = database()` is false.
non_defect
query to info schema table does not return correct results describe the bug select table schema table cat null table schem table name if table type base table table table type as table type table comment remarks null type cat null type schem null type name null self referencing col name null ref generation from information schema tables where isnull database or table schema database and table name like and table type in base table view foreign table materialized view external table order by table type table schema table name above sql will return empty which expected to return the table info in the current database why this is because database function will return the full name of a database like default cluster but table schema returned from information schema tables is the database name without default cluster so the where predicate table schema database is false
0
27,025
4,859,464,771
IssuesEvent
2016-11-13 17:26:26
TASVideos/BizHawk
https://api.github.com/repos/TASVideos/BizHawk
closed
Cannot call emu.frameadvance() in Lua Console
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Open Lua Console. 2. Type emu.frameadvance(). 3. Alternatively, call any user-defined Lua function that internally calls emu.frameadvance(). What happens: LuaInterface.LuaScriptException: attempt to yield across metamethod/C-call boundary What should happen: It should just work. :) BizHawk 1.9.3, tested this with bsnes core. ``` Original issue reported on code.google.com by `denilsonsa` on 23 Mar 2015 at 4:17
1.0
Cannot call emu.frameadvance() in Lua Console - ``` What steps will reproduce the problem? 1. Open Lua Console. 2. Type emu.frameadvance(). 3. Alternatively, call any user-defined Lua function that internally calls emu.frameadvance(). What happens: LuaInterface.LuaScriptException: attempt to yield across metamethod/C-call boundary What should happen: It should just work. :) BizHawk 1.9.3, tested this with bsnes core. ``` Original issue reported on code.google.com by `denilsonsa` on 23 Mar 2015 at 4:17
defect
cannot call emu frameadvance in lua console what steps will reproduce the problem open lua console type emu frameadvance alternatively call any user defined lua function that internally calls emu frameadvance what happens luainterface luascriptexception attempt to yield across metamethod c call boundary what should happen it should just work bizhawk tested this with bsnes core original issue reported on code google com by denilsonsa on mar at
1
14,300
9,255,991,894
IssuesEvent
2019-03-16 15:22:28
zygopleural/favourite-colour
https://api.github.com/repos/zygopleural/favourite-colour
closed
CVE-2018-11694 High Severity Vulnerability detected by WhiteSource
security vulnerability
## CVE-2018-11694 - High Severity Vulnerability <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.11.0</b></p></summary> <p> <p>:rainbow: Node.js bindings to libsass</p> <p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/zygopleural/favourite-colour/commit/668057d71ac058e3fdefc9574b8b39c6703b807c">668057d71ac058e3fdefc9574b8b39c6703b807c</a></p> </p> </details> </p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (125)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /favourite-colour/node_modules/node-sass/src/libsass/src/expand.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/color_maps.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_util.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/utf8/unchecked.h - /favourite-colour/node_modules/node-sass/src/libsass/src/output.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_values.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/util.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/emitter.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/lexer.cpp - /favourite-colour/node_modules/node-sass/src/libsass/test/test_node.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/plugins.cpp - /favourite-colour/node_modules/node-sass/src/libsass/include/sass/base.h - /favourite-colour/node_modules/node-sass/src/libsass/src/position.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/subset_map.hpp - 
/favourite-colour/node_modules/node-sass/src/libsass/src/operation.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/error_handling.hpp - /favourite-colour/node_modules/node-sass/src/custom_importer_bridge.cpp - /favourite-colour/node_modules/node-sass/src/libsass/contrib/plugin.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/functions.hpp - /favourite-colour/node_modules/node-sass/src/libsass/test/test_superselector.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/eval.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/utf8_string.hpp - /favourite-colour/node_modules/node-sass/src/sass_context_wrapper.h - /favourite-colour/node_modules/node-sass/src/libsass/src/error_handling.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/node.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/parser.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/subset_map.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/emitter.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/listize.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/ast.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_functions.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/output.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/check_nesting.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/functions.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/cssize.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/prelexer.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/paths.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp - 
/favourite-colour/node_modules/node-sass/src/libsass/src/inspect.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/color.cpp - /favourite-colour/node_modules/node-sass/src/libsass/test/test_unification.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/values.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_util.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/source_map.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/list.h - /favourite-colour/node_modules/node-sass/src/libsass/src/check_nesting.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/json.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/units.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/units.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/context.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/utf8/checked.h - /favourite-colour/node_modules/node-sass/src/libsass/src/listize.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/string.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/prelexer.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/context.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/boolean.h - /favourite-colour/node_modules/node-sass/src/libsass/include/sass2scss.h - /favourite-colour/node_modules/node-sass/src/libsass/src/eval.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/expand.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/factory.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/operators.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/boolean.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/source_map.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/value.h - /favourite-colour/node_modules/node-sass/src/libsass/src/utf8_string.cpp - /favourite-colour/node_modules/node-sass/src/callback_bridge.h - 
/favourite-colour/node_modules/node-sass/src/libsass/src/file.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/node.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/environment.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/extend.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_context.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/operators.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/constants.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/parser.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/constants.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/list.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/cssize.cpp - /favourite-colour/node_modules/node-sass/src/libsass/include/sass/functions.h - /favourite-colour/node_modules/node-sass/src/libsass/src/util.cpp - /favourite-colour/node_modules/node-sass/src/custom_function_bridge.cpp - /favourite-colour/node_modules/node-sass/src/custom_importer_bridge.h - /favourite-colour/node_modules/node-sass/src/libsass/src/bind.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/inspect.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_functions.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/backtrace.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/extend.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/sass_value_wrapper.h - /favourite-colour/node_modules/node-sass/src/libsass/src/debugger.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/cencode.c - /favourite-colour/node_modules/node-sass/src/libsass/src/base64vlq.cpp - 
/favourite-colour/node_modules/node-sass/src/sass_types/number.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/color.h - /favourite-colour/node_modules/node-sass/src/libsass/src/c99func.c - /favourite-colour/node_modules/node-sass/src/libsass/src/position.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_values.cpp - /favourite-colour/node_modules/node-sass/src/libsass/include/sass/values.h - /favourite-colour/node_modules/node-sass/src/libsass/test/test_subset_map.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass2scss.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/null.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/ast.cpp - /favourite-colour/node_modules/node-sass/src/libsass/include/sass/context.h - /favourite-colour/node_modules/node-sass/src/libsass/src/to_c.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/to_value.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/color_maps.hpp - /favourite-colour/node_modules/node-sass/src/sass_context_wrapper.cpp - /favourite-colour/node_modules/node-sass/src/libsass/script/test-leaks.pl - /favourite-colour/node_modules/node-sass/src/libsass/src/lexer.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/to_c.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/map.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/to_value.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/b64/encode.h - /favourite-colour/node_modules/node-sass/src/libsass/src/file.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/environment.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/plugins.hpp - /favourite-colour/node_modules/node-sass/src/binding.cpp - 
/favourite-colour/node_modules/node-sass/src/libsass/src/sass_context.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/debug.hpp </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact. <p>Publish Date: 2018-06-04 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694>CVE-2018-11694</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-11694 High Severity Vulnerability detected by WhiteSource - ## CVE-2018-11694 - High Severity Vulnerability <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.11.0</b></p></summary> <p> <p>:rainbow: Node.js bindings to libsass</p> <p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/zygopleural/favourite-colour/commit/668057d71ac058e3fdefc9574b8b39c6703b807c">668057d71ac058e3fdefc9574b8b39c6703b807c</a></p> </p> </details> </p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=19 height=20> Library Source Files (125)</summary> <p></p> <p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p> <p> - /favourite-colour/node_modules/node-sass/src/libsass/src/expand.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/color_maps.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_util.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/utf8/unchecked.h - /favourite-colour/node_modules/node-sass/src/libsass/src/output.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_values.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/util.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/emitter.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/lexer.cpp - /favourite-colour/node_modules/node-sass/src/libsass/test/test_node.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/plugins.cpp - /favourite-colour/node_modules/node-sass/src/libsass/include/sass/base.h - /favourite-colour/node_modules/node-sass/src/libsass/src/position.hpp - 
/favourite-colour/node_modules/node-sass/src/libsass/src/subset_map.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/operation.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/remove_placeholders.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/error_handling.hpp - /favourite-colour/node_modules/node-sass/src/custom_importer_bridge.cpp - /favourite-colour/node_modules/node-sass/src/libsass/contrib/plugin.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/functions.hpp - /favourite-colour/node_modules/node-sass/src/libsass/test/test_superselector.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/eval.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/utf8_string.hpp - /favourite-colour/node_modules/node-sass/src/sass_context_wrapper.h - /favourite-colour/node_modules/node-sass/src/libsass/src/error_handling.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/node.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/parser.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/subset_map.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/emitter.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/listize.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/ast.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_functions.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/memory/SharedPtr.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/output.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/check_nesting.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/ast_def_macros.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/functions.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/cssize.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/prelexer.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/paths.hpp - 
/favourite-colour/node_modules/node-sass/src/libsass/src/ast_fwd_decl.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/inspect.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/color.cpp - /favourite-colour/node_modules/node-sass/src/libsass/test/test_unification.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/values.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_util.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/source_map.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/list.h - /favourite-colour/node_modules/node-sass/src/libsass/src/check_nesting.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/json.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/units.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/units.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/context.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/utf8/checked.h - /favourite-colour/node_modules/node-sass/src/libsass/src/listize.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/string.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/prelexer.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/context.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/boolean.h - /favourite-colour/node_modules/node-sass/src/libsass/include/sass2scss.h - /favourite-colour/node_modules/node-sass/src/libsass/src/eval.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/expand.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/factory.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/operators.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/boolean.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/source_map.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/value.h - /favourite-colour/node_modules/node-sass/src/libsass/src/utf8_string.cpp - 
/favourite-colour/node_modules/node-sass/src/callback_bridge.h - /favourite-colour/node_modules/node-sass/src/libsass/src/file.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/node.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/environment.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/extend.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_context.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/operators.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/constants.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/ast_fwd_decl.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/parser.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/constants.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/list.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/cssize.cpp - /favourite-colour/node_modules/node-sass/src/libsass/include/sass/functions.h - /favourite-colour/node_modules/node-sass/src/libsass/src/util.cpp - /favourite-colour/node_modules/node-sass/src/custom_function_bridge.cpp - /favourite-colour/node_modules/node-sass/src/custom_importer_bridge.h - /favourite-colour/node_modules/node-sass/src/libsass/src/bind.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/inspect.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_functions.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/backtrace.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/extend.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/sass_value_wrapper.h - /favourite-colour/node_modules/node-sass/src/libsass/src/debugger.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/cencode.c - 
/favourite-colour/node_modules/node-sass/src/libsass/src/base64vlq.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/number.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/color.h - /favourite-colour/node_modules/node-sass/src/libsass/src/c99func.c - /favourite-colour/node_modules/node-sass/src/libsass/src/position.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/remove_placeholders.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_values.cpp - /favourite-colour/node_modules/node-sass/src/libsass/include/sass/values.h - /favourite-colour/node_modules/node-sass/src/libsass/test/test_subset_map.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass2scss.cpp - /favourite-colour/node_modules/node-sass/src/sass_types/null.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/ast.cpp - /favourite-colour/node_modules/node-sass/src/libsass/include/sass/context.h - /favourite-colour/node_modules/node-sass/src/libsass/src/to_c.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/to_value.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/color_maps.hpp - /favourite-colour/node_modules/node-sass/src/sass_context_wrapper.cpp - /favourite-colour/node_modules/node-sass/src/libsass/script/test-leaks.pl - /favourite-colour/node_modules/node-sass/src/libsass/src/lexer.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/memory/SharedPtr.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/to_c.hpp - /favourite-colour/node_modules/node-sass/src/sass_types/map.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/to_value.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/b64/encode.h - /favourite-colour/node_modules/node-sass/src/libsass/src/file.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/environment.hpp - /favourite-colour/node_modules/node-sass/src/libsass/src/plugins.hpp - 
/favourite-colour/node_modules/node-sass/src/binding.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/sass_context.cpp - /favourite-colour/node_modules/node-sass/src/libsass/src/debug.hpp </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact. <p>Publish Date: 2018-06-04 <p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-11694>CVE-2018-11694</a></p> </p> </details> <p></p> <details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high severity vulnerability detected by whitesource cve high severity vulnerability vulnerable library node rainbow node js bindings to libsass library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries favourite colour node modules node sass src libsass src expand hpp favourite colour node modules node sass src libsass src color maps cpp favourite colour node modules node sass src libsass src sass util hpp favourite colour node modules node sass src libsass src unchecked h favourite colour node modules node sass src libsass src output hpp favourite colour node modules node sass src libsass src sass values hpp favourite colour node modules node sass src libsass src util hpp favourite colour node modules node sass src libsass src emitter hpp favourite colour node modules node sass src libsass src lexer cpp favourite colour node modules node sass src libsass test test node cpp favourite colour node modules node sass src libsass src plugins cpp favourite colour node modules node sass src libsass include sass base h favourite colour node modules node sass src libsass src position hpp favourite colour node modules node sass src libsass src subset map hpp favourite colour node modules node sass src libsass src operation hpp favourite colour node modules node sass src libsass src remove placeholders cpp favourite colour node modules node sass src libsass src error handling hpp favourite colour node modules node sass src custom importer bridge cpp favourite colour node modules node sass src libsass contrib plugin cpp favourite colour node modules node sass src libsass src functions hpp favourite colour node modules node sass src libsass test test superselector cpp favourite colour node modules node sass src libsass src eval hpp favourite colour node modules node sass src libsass src string hpp favourite 
colour node modules node sass src sass context wrapper h favourite colour node modules node sass src libsass src error handling cpp favourite colour node modules node sass src libsass src node cpp favourite colour node modules node sass src libsass src parser cpp favourite colour node modules node sass src libsass src subset map cpp favourite colour node modules node sass src libsass src emitter cpp favourite colour node modules node sass src libsass src listize cpp favourite colour node modules node sass src libsass src ast hpp favourite colour node modules node sass src libsass src sass functions hpp favourite colour node modules node sass src libsass src memory sharedptr cpp favourite colour node modules node sass src libsass src output cpp favourite colour node modules node sass src libsass src check nesting cpp favourite colour node modules node sass src libsass src ast def macros hpp favourite colour node modules node sass src libsass src functions cpp favourite colour node modules node sass src libsass src cssize hpp favourite colour node modules node sass src libsass src prelexer cpp favourite colour node modules node sass src libsass src paths hpp favourite colour node modules node sass src libsass src ast fwd decl hpp favourite colour node modules node sass src libsass src inspect hpp favourite colour node modules node sass src sass types color cpp favourite colour node modules node sass src libsass test test unification cpp favourite colour node modules node sass src libsass src values cpp favourite colour node modules node sass src libsass src sass util cpp favourite colour node modules node sass src libsass src source map hpp favourite colour node modules node sass src sass types list h favourite colour node modules node sass src libsass src check nesting hpp favourite colour node modules node sass src libsass src json cpp favourite colour node modules node sass src libsass src units cpp favourite colour node modules node sass src libsass src units hpp 
favourite colour node modules node sass src libsass src context cpp favourite colour node modules node sass src libsass src checked h favourite colour node modules node sass src libsass src listize hpp favourite colour node modules node sass src sass types string cpp favourite colour node modules node sass src libsass src prelexer hpp favourite colour node modules node sass src libsass src context hpp favourite colour node modules node sass src sass types boolean h favourite colour node modules node sass src libsass include h favourite colour node modules node sass src libsass src eval cpp favourite colour node modules node sass src libsass src expand cpp favourite colour node modules node sass src sass types factory cpp favourite colour node modules node sass src libsass src operators cpp favourite colour node modules node sass src sass types boolean cpp favourite colour node modules node sass src libsass src source map cpp favourite colour node modules node sass src sass types value h favourite colour node modules node sass src libsass src string cpp favourite colour node modules node sass src callback bridge h favourite colour node modules node sass src libsass src file cpp favourite colour node modules node sass src libsass src sass cpp favourite colour node modules node sass src libsass src node hpp favourite colour node modules node sass src libsass src environment cpp favourite colour node modules node sass src libsass src extend hpp favourite colour node modules node sass src libsass src sass context hpp favourite colour node modules node sass src libsass src operators hpp favourite colour node modules node sass src libsass src constants hpp favourite colour node modules node sass src libsass src sass hpp favourite colour node modules node sass src libsass src ast fwd decl cpp favourite colour node modules node sass src libsass src parser hpp favourite colour node modules node sass src libsass src constants cpp favourite colour node modules node sass src 
sass types list cpp favourite colour node modules node sass src libsass src cssize cpp favourite colour node modules node sass src libsass include sass functions h favourite colour node modules node sass src libsass src util cpp favourite colour node modules node sass src custom function bridge cpp favourite colour node modules node sass src custom importer bridge h favourite colour node modules node sass src libsass src bind cpp favourite colour node modules node sass src libsass src inspect cpp favourite colour node modules node sass src libsass src sass functions cpp favourite colour node modules node sass src libsass src backtrace cpp favourite colour node modules node sass src libsass src extend cpp favourite colour node modules node sass src sass types sass value wrapper h favourite colour node modules node sass src libsass src debugger hpp favourite colour node modules node sass src libsass src cencode c favourite colour node modules node sass src libsass src cpp favourite colour node modules node sass src sass types number cpp favourite colour node modules node sass src sass types color h favourite colour node modules node sass src libsass src c favourite colour node modules node sass src libsass src position cpp favourite colour node modules node sass src libsass src remove placeholders hpp favourite colour node modules node sass src libsass src sass values cpp favourite colour node modules node sass src libsass include sass values h favourite colour node modules node sass src libsass test test subset map cpp favourite colour node modules node sass src libsass src cpp favourite colour node modules node sass src sass types null cpp favourite colour node modules node sass src libsass src ast cpp favourite colour node modules node sass src libsass include sass context h favourite colour node modules node sass src libsass src to c cpp favourite colour node modules node sass src libsass src to value hpp favourite colour node modules node sass src libsass src 
color maps hpp favourite colour node modules node sass src sass context wrapper cpp favourite colour node modules node sass src libsass script test leaks pl favourite colour node modules node sass src libsass src lexer hpp favourite colour node modules node sass src libsass src memory sharedptr hpp favourite colour node modules node sass src libsass src to c hpp favourite colour node modules node sass src sass types map cpp favourite colour node modules node sass src libsass src to value cpp favourite colour node modules node sass src libsass src encode h favourite colour node modules node sass src libsass src file hpp favourite colour node modules node sass src libsass src environment hpp favourite colour node modules node sass src libsass src plugins hpp favourite colour node modules node sass src binding cpp favourite colour node modules node sass src libsass src sass context cpp favourite colour node modules node sass src libsass src debug hpp vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass functions selector append which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource
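The libsass record above lists CVSS v3 base metrics (AV: Network, AC: Low, PR: None, UI: Required, Scope: Unchanged, C/I/A: High) but the numeric score itself was stripped during text normalization. As a sketch, the scope-unchanged base score can be recomputed from those metrics with the published CVSS v3.0 equations (the coefficient tables come from the public CVSS v3.0 specification; the function name is ours):

```python
import math

# CVSS v3.0 coefficient tables (scope-unchanged case), per the public spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability

def cvss3_base_unchanged(av, ac, pr, ui, c, i, a):
    """Base score for a scope-unchanged CVSS v3.0 vector."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # CVSS "round up": keep one decimal place, always rounding toward 10.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# AV:N / AC:L / PR:N / UI:R / S:U / C:H / I:H / A:H, as listed in the record above
print(cvss3_base_unchanged("N", "L", "N", "R", "H", "H", "H"))  # 8.8
```

For the metric values in this record the formula yields 8.8 (High severity), consistent with a remotely reachable null-pointer dereference that needs user interaction.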
0
63,215
17,463,946,120
IssuesEvent
2021-08-06 14:19:30
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
IllegalArgumentException: Minimum abbreviation width is 4 when TXTFormat::minColWidth is less than 4
T: Defect C: Functionality P: Medium E: All Editions
### Expected behavior No exception occurs ### Actual behavior ``` java.lang.IllegalArgumentException: Minimum abbreviation width is 4 at org.jooq_3.15.1.POSTGRES_11.debug(Unknown Source) at org.jooq.tools.StringUtils.abbreviate(StringUtils.java:650) at org.jooq.tools.StringUtils.abbreviate(StringUtils.java:607) at org.jooq.impl.AbstractResult.format(AbstractResult.java:236) at org.jooq.impl.AbstractFormattable.format(AbstractFormattable.java:80) at org.jooq.tools.LoggerListener.log(LoggerListener.java:181) at org.jooq.tools.LoggerListener.resultEnd(LoggerListener.java:164) at org.jooq.impl.ExecuteListeners.resultEnd(ExecuteListeners.java:245) at org.jooq.impl.CursorImpl.fetchNext(CursorImpl.java:224) at org.jooq.impl.AbstractCursor.fetch(AbstractCursor.java:177) at org.jooq.impl.AbstractCursor.fetch(AbstractCursor.java:88) at org.jooq.impl.AbstractResultQuery.execute(AbstractResultQuery.java:259) at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:335) at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:284) at org.jooq.impl.SelectImpl.fetch(SelectImpl.java:2842) ``` ### Steps to reproduce the problem First time I'm seeing this, have no idea how to reproduce ### Versions - jOOQ: - Java: 11.0.x - Database (include vendor): PostgreSQL 12.6.0 - OS: ubuntu (GH Actions) - JDBC Driver (include name if inofficial driver): org.postgresql:postgresql:42.2.23
1.0
IllegalArgumentException: Minimum abbreviation width is 4 when TXTFormat::minColWidth is less than 4 - ### Expected behavior No exception occurs ### Actual behavior ``` java.lang.IllegalArgumentException: Minimum abbreviation width is 4 at org.jooq_3.15.1.POSTGRES_11.debug(Unknown Source) at org.jooq.tools.StringUtils.abbreviate(StringUtils.java:650) at org.jooq.tools.StringUtils.abbreviate(StringUtils.java:607) at org.jooq.impl.AbstractResult.format(AbstractResult.java:236) at org.jooq.impl.AbstractFormattable.format(AbstractFormattable.java:80) at org.jooq.tools.LoggerListener.log(LoggerListener.java:181) at org.jooq.tools.LoggerListener.resultEnd(LoggerListener.java:164) at org.jooq.impl.ExecuteListeners.resultEnd(ExecuteListeners.java:245) at org.jooq.impl.CursorImpl.fetchNext(CursorImpl.java:224) at org.jooq.impl.AbstractCursor.fetch(AbstractCursor.java:177) at org.jooq.impl.AbstractCursor.fetch(AbstractCursor.java:88) at org.jooq.impl.AbstractResultQuery.execute(AbstractResultQuery.java:259) at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:335) at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:284) at org.jooq.impl.SelectImpl.fetch(SelectImpl.java:2842) ``` ### Steps to reproduce the problem First time I'm seeing this, have no idea how to reproduce ### Versions - jOOQ: - Java: 11.0.x - Database (include vendor): PostgreSQL 12.6.0 - OS: ubuntu (GH Actions) - JDBC Driver (include name if inofficial driver): org.postgresql:postgresql:42.2.23
defect
illegalargumentexception minimum abbreviation width is when txtformat mincolwidth is less than expected behavior no exception occurs actual behavior java lang illegalargumentexception minimum abbreviation width is at org jooq postgres debug unknown source at org jooq tools stringutils abbreviate stringutils java at org jooq tools stringutils abbreviate stringutils java at org jooq impl abstractresult format abstractresult java at org jooq impl abstractformattable format abstractformattable java at org jooq tools loggerlistener log loggerlistener java at org jooq tools loggerlistener resultend loggerlistener java at org jooq impl executelisteners resultend executelisteners java at org jooq impl cursorimpl fetchnext cursorimpl java at org jooq impl abstractcursor fetch abstractcursor java at org jooq impl abstractcursor fetch abstractcursor java at org jooq impl abstractresultquery execute abstractresultquery java at org jooq impl abstractquery execute abstractquery java at org jooq impl abstractresultquery fetch abstractresultquery java at org jooq impl selectimpl fetch selectimpl java steps to reproduce the problem first time i m seeing this have no idea how to reproduce versions jooq java x database include vendor postgresql os ubuntu gh actions jdbc driver include name if inofficial driver org postgresql postgresql
1
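The jOOQ record above fails because Apache-Commons-style abbreviation must fit a three-dot ellipsis plus at least one character, so any target width below 4 is rejected — which is exactly what a `TXTFormat::minColWidth` under 4 triggers. A minimal Python sketch of that invariant (function name and exact message are ours, mirroring the StringUtils.abbreviate behaviour in the stack trace):

```python
def abbreviate(s: str, max_width: int) -> str:
    # An ellipsis ("...") plus at least one character needs width >= 4,
    # which is why minColWidth < 4 raises IllegalArgumentException in jOOQ.
    if max_width < 4:
        raise ValueError("Minimum abbreviation width is 4")
    if len(s) <= max_width:
        return s
    return s[: max_width - 3] + "..."

print(abbreviate("IllegalArgumentException", 10))  # Illegal...
```

Clamping the configured column width to at least 4 before formatting avoids the exception.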
14,706
2,831,388,627
IssuesEvent
2015-05-24 15:53:31
nobodyguy/dslrdashboard
https://api.github.com/repos/nobodyguy/dslrdashboard
closed
Automatically changes the recording to SD card (5d mark III)
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. connect camera to dslr dashboard 2. automatically changes the recording to SD card 3. What version of the product are you using? On what operating system? Camera: Canon 5D mark III, phone: SM-N9005, android: 4.4.2 Please provide any additional information below. When I plug in the camera the application automatically changes the recording to SD card. When I change it manually in the camera, the CF card also automatically changes to SD. Please fix this problem ``` Original issue reported on code.google.com by `genos...@gmail.com` on 23 Feb 2014 at 11:42
1.0
Automatically changes the recording to SD card (5d mark III) - ``` What steps will reproduce the problem? 1. connect camera to dslr dashboard 2. automatically changes the recording to SD card 3. What version of the product are you using? On what operating system? Camera: Canon 5D mark III, phone: SM-N9005, android: 4.4.2 Please provide any additional information below. When I plug in the camera the application automatically changes the recording to SD card. When I change it manually in the camera, the CF card also automatically changes to SD. Please fix this problem ``` Original issue reported on code.google.com by `genos...@gmail.com` on 23 Feb 2014 at 11:42
defect
automatically changes the recording to sd card mark iii what steps will reproduce the problem connect camera to dslr dashboard automatically changes the recording to sd card what version of the product are you using on what operating system camera canon mark iii phone sm android please provide any additional information below when i plug in the camera the application automatically changes the recording to sd card when i change it manually in the camera the cf card also automatically changes to sd please fix this problem original issue reported on code google com by genos gmail com on feb at
1
6,039
2,610,219,839
IssuesEvent
2015-02-26 19:09:52
chrsmith/somefinders
https://api.github.com/repos/chrsmith/somefinders
opened
формулы по химии за 8-9 класс.doc
auto-migrated Priority-Medium Type-Defect
``` '''Gordey Gulyaev''' Hi everyone, can anyone tell me where to find формулы по химии за 8-9 класс.doc? It was posted here before '''Askold Kirillov''' Here, take the link http://bit.ly/16CVJXu '''Boris Bobylyov''' It asks to enter a mobile number! Isn't that dangerous? '''Akhmat Koshelev''' No, it doesn't affect your balance '''Averkiy Gorbachyov''' No, it doesn't affect your balance File information: формулы по химии за 8-9 класс.doc Uploaded: this month Downloads: 540 Rating: 572 Average download speed: 567 Similar files: 31 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 5:13
1.0
формулы по химии за 8-9 класс.doc - ``` '''Gordey Gulyaev''' Hi everyone, can anyone tell me where to find формулы по химии за 8-9 класс.doc? It was posted here before '''Askold Kirillov''' Here, take the link http://bit.ly/16CVJXu '''Boris Bobylyov''' It asks to enter a mobile number! Isn't that dangerous? '''Akhmat Koshelev''' No, it doesn't affect your balance '''Averkiy Gorbachyov''' No, it doesn't affect your balance File information: формулы по химии за 8-9 класс.doc Uploaded: this month Downloads: 540 Rating: 572 Average download speed: 567 Similar files: 31 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 5:13
defect
формулы по химии за класс doc gordey gulyaev hi everyone can anyone tell me where to find формулы по химии за класс doc it was posted here before askold kirillov here take the link boris bobylyov it asks to enter a mobile number isn t that dangerous akhmat koshelev no it doesn t affect your balance averkiy gorbachyov no it doesn t affect your balance file information формулы по химии за класс doc uploaded this month downloads rating average download speed similar files original issue reported on code google com by kondense gmail com on dec at
1
13,411
23,048,775,365
IssuesEvent
2022-07-24 10:03:59
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
opened
Error updating Yarn binary
type:bug status:requirements priority-5-triage
### How are you running Renovate? Mend Renovate hosted app on github.com ### If you're self-hosting Renovate, tell us what version of Renovate you run. _No response_ ### Please select which platform you are using if self-hosting. _No response_ ### If you're self-hosting Renovate, tell us what version of the platform you run. _No response_ ### Was this something which used to work for you, and then stopped? I never saw this working ### Describe the bug I am seeing quite a lot of these errors in the hosted app. Example repo: https://github.com/Darkflame72/SMS-Discord-Bot in this PR: https://github.com/Darkflame72/SMS-Discord-Bot/pull/7 Seems to have increased recently: ![image](https://user-images.githubusercontent.com/6311784/180642076-baf12b27-7a96-473b-bd7a-317460ca8679.png) ### Relevant debug logs <details><summary>Logs</summary> ``` {"level":20,"branch":"renovate/yarn-3.x","msg":"getBranchPr(renovate/yarn-3.x)","time":"2022-07-24T09:12:55.903Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"findPr(renovate/yarn-3.x, undefined, open)","time":"2022-07-24T09:12:55.903Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Found PR #7","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"branchExists=true","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"dependencyDashboardCheck=undefined","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"PR rebase requested=true","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Checking if PR has been edited","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Found existing branch PR","time":"2022-07-24T09:12:55.905Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Checking schedule(at any time, null)","time":"2022-07-24T09:12:55.905Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"No schedule defined","time":"2022-07-24T09:12:55.905Z"} 
{"level":20,"branch":"renovate/yarn-3.x","msg":"Setting current branch to main","time":"2022-07-24T09:12:55.905Z"} {"level":20,"branch":"renovate/yarn-3.x","branchName":"main","latestCommitDate":"2022-07-24T21:07:00+12:00","msg":"latest commit","time":"2022-07-24T09:12:56.257Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Manual rebase requested via Dependency Dashboard","time":"2022-07-24T09:12:56.370Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Using reuseExistingBranch: false","time":"2022-07-24T09:12:56.371Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"manager.getUpdatedPackageFiles() reuseExistinbranch=false","time":"2022-07-24T09:12:56.376Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"npm.updateDependency(): packageManager.yarn = 3.2.2","time":"2022-07-24T09:12:56.628Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Updating yarn in package.json","time":"2022-07-24T09:12:56.637Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Updated 1 package files","time":"2022-07-24T09:12:56.638Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Getting updated lock files","time":"2022-07-24T09:12:56.639Z"} {"level":20,"branch":"renovate/yarn-3.x","packageFiles":["package.json"],"msg":"Writing package.json files","time":"2022-07-24T09:12:56.640Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Writing any updated package files","time":"2022-07-24T09:12:56.641Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Writing package.json","time":"2022-07-24T09:12:56.641Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"npmrc file found in repository","time":"2022-07-24T09:12:56.670Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Writing updated .npmrc file to .npmrc","time":"2022-07-24T09:12:56.670Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Generating yarn.lock for .","time":"2022-07-24T09:12:56.671Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Spawning yarn install to create yarn.lock","time":"2022-07-24T09:12:56.672Z"} 
{"level":20,"branch":"renovate/yarn-3.x","msg":"Enabling global cache as zero-install is not detected","time":"2022-07-24T09:12:56.673Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"No node constraint found - using latest","time":"2022-07-24T09:12:56.674Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Updating Yarn binary","time":"2022-07-24T09:12:56.674Z"} {"level":20,"branch":"renovate/yarn-3.x","image":"node","msg":"Using docker to execute","time":"2022-07-24T09:12:56.675Z"} {"level":20,"branch":"renovate/yarn-3.x","toolName":"corepack","resolvedVersion":"0.12.1","msg":"Resolved stable matching version","time":"2022-07-24T09:12:56.690Z"} {"level":20,"branch":"renovate/yarn-3.x","image":"docker.io/renovate/node","msg":"No tag or tagConstraint specified","time":"2022-07-24T09:12:56.692Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Fetching Docker image: docker.io/renovate/node","time":"2022-07-24T09:12:56.692Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Finished fetching Docker image docker.io/renovate/node@sha256:dfc6f5d8a32593133be29f098d53f36a49ef270b58eec97dcf10cc230ad37894","time":"2022-07-24T09:13:18.693Z"} {"level":20,"branch":"renovate/yarn-3.x","command":"docker run --rm --name=renovate_node --label=renovate_child -v \"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\":\"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -e NPM_CONFIG_CACHE -e npm_config_store -e CI -e YARN_ENABLE_IMMUTABLE_INSTALLS -e YARN_HTTP_TIMEOUT -e YARN_GLOBAL_FOLDER -e YARN_ENABLE_GLOBAL_CACHE -e BUILDPACK_CACHE_DIR -w \"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\" docker.io/renovate/node bash -l -c \"install-tool corepack 0.12.1 && yarn set version 3.2.2 && yarn install --mode=update-lockfile\"","msg":"Executing command","time":"2022-07-24T09:13:18.871Z"} {"level":20,"branch":"renovate/yarn-3.x","cmd":"docker run --rm --name=renovate_node --label=renovate_child -v 
\"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\":\"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -e NPM_CONFIG_CACHE -e npm_config_store -e CI -e YARN_ENABLE_IMMUTABLE_INSTALLS -e YARN_HTTP_TIMEOUT -e YARN_GLOBAL_FOLDER -e YARN_ENABLE_GLOBAL_CACHE -e BUILDPACK_CACHE_DIR -w \"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\" docker.io/renovate/node bash -l -c \"install-tool corepack 0.12.1 && yarn set version 3.2.2 && yarn install --mode=update-lockfile\"","durationMs":175369,"stdout":"installing v2 tool corepack v0.12.1\nnpm WARN config global `--global`, `--local` are deprecated. Use `--location=global` instead.\n\nadded 1 package in 1s\nlinking tool corepack v0.12.1\n0.12.1\nInstalled v2 /usr/local/buildpack/tools/v2/corepack.sh in 2 seconds\n➤ YN0000: Retrieving https://repo.yarnpkg.com/3.2.2/packages/yarnpkg-cli/bin/yarn.js\n➤ YN0000: Saving the new release in .yarn/releases/yarn-3.2.2.cjs\n➤ YN0000: Done in 1s 476ms\n➤ YN0000: ┌ Resolution step\n➤ YN0000: └ Completed in 0s 714ms\n➤ YN0000: ┌ Fetch step\n➤ YN0013: │ 32 packages were already cached, 4 had to be fetched\n➤ YN0000: └ Completed in 2m 42s\n➤ YN0000: ┌ Link step\n➤ YN0073: │ Skipped due to mode=update-lockfile\n➤ YN0000: └ Completed\n➤ YN0000: Done with warnings in 2m 43s\n","stderr":"","msg":"exec completed","time":"2022-07-24T09:16:14.062Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"yarn.lock needs updating","time":"2022-07-24T09:16:14.162Z"} {"level":20,"branch":"renovate/yarn-3.x","resolvedPaths":[".yarn/cache",".pnp.cjs",".pnp.js",".pnp.loader.mjs"],"msg":"updateYarnOffline resolvedPaths","time":"2022-07-24T09:16:14.337Z"} {"level":50,"branch":"renovate/yarn-3.x","err":{"code":"ERR_INVALID_ARG_TYPE","message":"The \"path\" argument must be of type string. Received undefined","stack":"TypeError [ERR_INVALID_ARG_TYPE]: The \"path\" argument must be of type string. 
Received undefined\n at new NodeError (internal/errors.js:322:7)\n at validateString (internal/validators.js:124:11)\n at Object.join (path.js:1148:7)\n at Object.join (/home/ubuntu/renovateapp/node_modules/upath/build/code/upath.js:51:33)\n at updateYarnBinary (/home/ubuntu/renovateapp/node_modules/renovate/dist/modules/manager/npm/post-update/index.js:349:49)\n at async getAdditionalFiles (/home/ubuntu/renovateapp/node_modules/renovate/dist/modules/manager/npm/post-update/index.js:530:44)\n at async processBranch (/home/ubuntu/renovateapp/node_modules/renovate/dist/workers/repository/update/branch/index.js:304:33)\n at async writeUpdates (/home/ubuntu/renovateapp/node_modules/renovate/dist/workers/repository/process/write.js:25:21)\n at async update (/home/ubuntu/renovateapp/node_modules/renovate/dist/workers/repository/process/extract-update.js:109:15)\n at async Object.renovateRepository (/home/ubuntu/renovateapp/node_modules/renovate/dist/workers/repository/index.js:43:25)\n at async renovateRepository (/home/ubuntu/renovateapp/app/worker/index.js:310:26)\n at async /home/ubuntu/renovateapp/app/worker/index.js:570:5"},"msg":"Error updating Yarn binary","time":"2022-07-24T09:16:14.508Z"} {"level":20,"branch":"renovate/yarn-3.x","updatedArtifacts":["yarn.lock"],"msg":"Updated 1 lock files","time":"2022-07-24T09:16:14.511Z"} ``` </details> ### Have you created a minimal reproduction repository? No reproduction, but I have linked to a public repo where it occurs
1.0
Error updating Yarn binary - ### How are you running Renovate? Mend Renovate hosted app on github.com ### If you're self-hosting Renovate, tell us what version of Renovate you run. _No response_ ### Please select which platform you are using if self-hosting. _No response_ ### If you're self-hosting Renovate, tell us what version of the platform you run. _No response_ ### Was this something which used to work for you, and then stopped? I never saw this working ### Describe the bug I am seeing quite a lot of these errors in the hosted app. Example repo: https://github.com/Darkflame72/SMS-Discord-Bot in this PR: https://github.com/Darkflame72/SMS-Discord-Bot/pull/7 Seems to have increased recently: ![image](https://user-images.githubusercontent.com/6311784/180642076-baf12b27-7a96-473b-bd7a-317460ca8679.png) ### Relevant debug logs <details><summary>Logs</summary> ``` {"level":20,"branch":"renovate/yarn-3.x","msg":"getBranchPr(renovate/yarn-3.x)","time":"2022-07-24T09:12:55.903Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"findPr(renovate/yarn-3.x, undefined, open)","time":"2022-07-24T09:12:55.903Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Found PR #7","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"branchExists=true","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"dependencyDashboardCheck=undefined","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"PR rebase requested=true","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Checking if PR has been edited","time":"2022-07-24T09:12:55.904Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Found existing branch PR","time":"2022-07-24T09:12:55.905Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Checking schedule(at any time, null)","time":"2022-07-24T09:12:55.905Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"No schedule defined","time":"2022-07-24T09:12:55.905Z"} 
{"level":20,"branch":"renovate/yarn-3.x","msg":"Setting current branch to main","time":"2022-07-24T09:12:55.905Z"} {"level":20,"branch":"renovate/yarn-3.x","branchName":"main","latestCommitDate":"2022-07-24T21:07:00+12:00","msg":"latest commit","time":"2022-07-24T09:12:56.257Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Manual rebase requested via Dependency Dashboard","time":"2022-07-24T09:12:56.370Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Using reuseExistingBranch: false","time":"2022-07-24T09:12:56.371Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"manager.getUpdatedPackageFiles() reuseExistinbranch=false","time":"2022-07-24T09:12:56.376Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"npm.updateDependency(): packageManager.yarn = 3.2.2","time":"2022-07-24T09:12:56.628Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Updating yarn in package.json","time":"2022-07-24T09:12:56.637Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Updated 1 package files","time":"2022-07-24T09:12:56.638Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Getting updated lock files","time":"2022-07-24T09:12:56.639Z"} {"level":20,"branch":"renovate/yarn-3.x","packageFiles":["package.json"],"msg":"Writing package.json files","time":"2022-07-24T09:12:56.640Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Writing any updated package files","time":"2022-07-24T09:12:56.641Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Writing package.json","time":"2022-07-24T09:12:56.641Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"npmrc file found in repository","time":"2022-07-24T09:12:56.670Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Writing updated .npmrc file to .npmrc","time":"2022-07-24T09:12:56.670Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Generating yarn.lock for .","time":"2022-07-24T09:12:56.671Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Spawning yarn install to create yarn.lock","time":"2022-07-24T09:12:56.672Z"} 
{"level":20,"branch":"renovate/yarn-3.x","msg":"Enabling global cache as zero-install is not detected","time":"2022-07-24T09:12:56.673Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"No node constraint found - using latest","time":"2022-07-24T09:12:56.674Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Updating Yarn binary","time":"2022-07-24T09:12:56.674Z"} {"level":20,"branch":"renovate/yarn-3.x","image":"node","msg":"Using docker to execute","time":"2022-07-24T09:12:56.675Z"} {"level":20,"branch":"renovate/yarn-3.x","toolName":"corepack","resolvedVersion":"0.12.1","msg":"Resolved stable matching version","time":"2022-07-24T09:12:56.690Z"} {"level":20,"branch":"renovate/yarn-3.x","image":"docker.io/renovate/node","msg":"No tag or tagConstraint specified","time":"2022-07-24T09:12:56.692Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Fetching Docker image: docker.io/renovate/node","time":"2022-07-24T09:12:56.692Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"Finished fetching Docker image docker.io/renovate/node@sha256:dfc6f5d8a32593133be29f098d53f36a49ef270b58eec97dcf10cc230ad37894","time":"2022-07-24T09:13:18.693Z"} {"level":20,"branch":"renovate/yarn-3.x","command":"docker run --rm --name=renovate_node --label=renovate_child -v \"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\":\"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -e NPM_CONFIG_CACHE -e npm_config_store -e CI -e YARN_ENABLE_IMMUTABLE_INSTALLS -e YARN_HTTP_TIMEOUT -e YARN_GLOBAL_FOLDER -e YARN_ENABLE_GLOBAL_CACHE -e BUILDPACK_CACHE_DIR -w \"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\" docker.io/renovate/node bash -l -c \"install-tool corepack 0.12.1 && yarn set version 3.2.2 && yarn install --mode=update-lockfile\"","msg":"Executing command","time":"2022-07-24T09:13:18.871Z"} {"level":20,"branch":"renovate/yarn-3.x","cmd":"docker run --rm --name=renovate_node --label=renovate_child -v 
\"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\":\"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\" -v \"/tmp/renovate-cache\":\"/tmp/renovate-cache\" -e NPM_CONFIG_CACHE -e npm_config_store -e CI -e YARN_ENABLE_IMMUTABLE_INSTALLS -e YARN_HTTP_TIMEOUT -e YARN_GLOBAL_FOLDER -e YARN_ENABLE_GLOBAL_CACHE -e BUILDPACK_CACHE_DIR -w \"/mnt/renovate/gh/Darkflame72/SMS-Discord-Bot\" docker.io/renovate/node bash -l -c \"install-tool corepack 0.12.1 && yarn set version 3.2.2 && yarn install --mode=update-lockfile\"","durationMs":175369,"stdout":"installing v2 tool corepack v0.12.1\nnpm WARN config global `--global`, `--local` are deprecated. Use `--location=global` instead.\n\nadded 1 package in 1s\nlinking tool corepack v0.12.1\n0.12.1\nInstalled v2 /usr/local/buildpack/tools/v2/corepack.sh in 2 seconds\n➤ YN0000: Retrieving https://repo.yarnpkg.com/3.2.2/packages/yarnpkg-cli/bin/yarn.js\n➤ YN0000: Saving the new release in .yarn/releases/yarn-3.2.2.cjs\n➤ YN0000: Done in 1s 476ms\n➤ YN0000: ┌ Resolution step\n➤ YN0000: └ Completed in 0s 714ms\n➤ YN0000: ┌ Fetch step\n➤ YN0013: │ 32 packages were already cached, 4 had to be fetched\n➤ YN0000: └ Completed in 2m 42s\n➤ YN0000: ┌ Link step\n➤ YN0073: │ Skipped due to mode=update-lockfile\n➤ YN0000: └ Completed\n➤ YN0000: Done with warnings in 2m 43s\n","stderr":"","msg":"exec completed","time":"2022-07-24T09:16:14.062Z"} {"level":20,"branch":"renovate/yarn-3.x","msg":"yarn.lock needs updating","time":"2022-07-24T09:16:14.162Z"} {"level":20,"branch":"renovate/yarn-3.x","resolvedPaths":[".yarn/cache",".pnp.cjs",".pnp.js",".pnp.loader.mjs"],"msg":"updateYarnOffline resolvedPaths","time":"2022-07-24T09:16:14.337Z"} {"level":50,"branch":"renovate/yarn-3.x","err":{"code":"ERR_INVALID_ARG_TYPE","message":"The \"path\" argument must be of type string. Received undefined","stack":"TypeError [ERR_INVALID_ARG_TYPE]: The \"path\" argument must be of type string. 
Received undefined\n at new NodeError (internal/errors.js:322:7)\n at validateString (internal/validators.js:124:11)\n at Object.join (path.js:1148:7)\n at Object.join (/home/ubuntu/renovateapp/node_modules/upath/build/code/upath.js:51:33)\n at updateYarnBinary (/home/ubuntu/renovateapp/node_modules/renovate/dist/modules/manager/npm/post-update/index.js:349:49)\n at async getAdditionalFiles (/home/ubuntu/renovateapp/node_modules/renovate/dist/modules/manager/npm/post-update/index.js:530:44)\n at async processBranch (/home/ubuntu/renovateapp/node_modules/renovate/dist/workers/repository/update/branch/index.js:304:33)\n at async writeUpdates (/home/ubuntu/renovateapp/node_modules/renovate/dist/workers/repository/process/write.js:25:21)\n at async update (/home/ubuntu/renovateapp/node_modules/renovate/dist/workers/repository/process/extract-update.js:109:15)\n at async Object.renovateRepository (/home/ubuntu/renovateapp/node_modules/renovate/dist/workers/repository/index.js:43:25)\n at async renovateRepository (/home/ubuntu/renovateapp/app/worker/index.js:310:26)\n at async /home/ubuntu/renovateapp/app/worker/index.js:570:5"},"msg":"Error updating Yarn binary","time":"2022-07-24T09:16:14.508Z"} {"level":20,"branch":"renovate/yarn-3.x","updatedArtifacts":["yarn.lock"],"msg":"Updated 1 lock files","time":"2022-07-24T09:16:14.511Z"} ``` </details> ### Have you created a minimal reproduction repository? No reproduction, but I have linked to a public repo where it occurs
non_defect
error updating yarn binary how are you running renovate mend renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response please select which platform you are using if self hosting no response if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug i am seeing quite a lot of these errors in the hosted app example repo in this pr seems to have increased recently relevant debug logs logs level branch renovate yarn x msg getbranchpr renovate yarn x time level branch renovate yarn x msg findpr renovate yarn x undefined open time level branch renovate yarn x msg found pr time level branch renovate yarn x msg branchexists true time level branch renovate yarn x msg dependencydashboardcheck undefined time level branch renovate yarn x msg pr rebase requested true time level branch renovate yarn x msg checking if pr has been edited time level branch renovate yarn x msg found existing branch pr time level branch renovate yarn x msg checking schedule at any time null time level branch renovate yarn x msg no schedule defined time level branch renovate yarn x msg setting current branch to main time level branch renovate yarn x branchname main latestcommitdate msg latest commit time level branch renovate yarn x msg manual rebase requested via dependency dashboard time level branch renovate yarn x msg using reuseexistingbranch false time level branch renovate yarn x msg manager getupdatedpackagefiles reuseexistinbranch false time level branch renovate yarn x msg npm updatedependency packagemanager yarn time level branch renovate yarn x msg updating yarn in package json time level branch renovate yarn x msg updated package files time level branch renovate yarn x msg getting updated lock files time level branch renovate yarn x packagefiles msg writing package json files time level branch 
renovate yarn x msg writing any updated package files time level branch renovate yarn x msg writing package json time level branch renovate yarn x msg npmrc file found in repository time level branch renovate yarn x msg writing updated npmrc file to npmrc time level branch renovate yarn x msg generating yarn lock for time level branch renovate yarn x msg spawning yarn install to create yarn lock time level branch renovate yarn x msg enabling global cache as zero install is not detected time level branch renovate yarn x msg no node constraint found using latest time level branch renovate yarn x msg updating yarn binary time level branch renovate yarn x image node msg using docker to execute time level branch renovate yarn x toolname corepack resolvedversion msg resolved stable matching version time level branch renovate yarn x image docker io renovate node msg no tag or tagconstraint specified time level branch renovate yarn x msg fetching docker image docker io renovate node time level branch renovate yarn x msg finished fetching docker image docker io renovate node time level branch renovate yarn x command docker run rm name renovate node label renovate child v mnt renovate gh sms discord bot mnt renovate gh sms discord bot v tmp renovate cache tmp renovate cache e npm config cache e npm config store e ci e yarn enable immutable installs e yarn http timeout e yarn global folder e yarn enable global cache e buildpack cache dir w mnt renovate gh sms discord bot docker io renovate node bash l c install tool corepack yarn set version yarn install mode update lockfile msg executing command time level branch renovate yarn x cmd docker run rm name renovate node label renovate child v mnt renovate gh sms discord bot mnt renovate gh sms discord bot v tmp renovate cache tmp renovate cache e npm config cache e npm config store e ci e yarn enable immutable installs e yarn http timeout e yarn global folder e yarn enable global cache e buildpack cache dir w mnt renovate gh sms 
discord bot docker io renovate node bash l c install tool corepack yarn set version yarn install mode update lockfile durationms stdout installing tool corepack nnpm warn config global global local are deprecated use location global instead n nadded package in nlinking tool corepack ninstalled usr local buildpack tools corepack sh in seconds n➤ retrieving saving the new release in yarn releases yarn cjs n➤ done in n➤ ├ resolution step n➤ └ completed in n➤ ├ fetch step n➤ │ packages were already cached had to be fetched n➤ └ completed in n➤ ├ link step n➤ │ skipped due to mode update lockfile n➤ └ completed n➤ done with warnings in n stderr msg exec completed time level branch renovate yarn x msg yarn lock needs updating time level branch renovate yarn x resolvedpaths msg updateyarnoffline resolvedpaths time level branch renovate yarn x err code err invalid arg type message the path argument must be of type string received undefined stack typeerror the path argument must be of type string received undefined n at new nodeerror internal errors js n at validatestring internal validators js n at object join path js n at object join home ubuntu renovateapp node modules upath build code upath js n at updateyarnbinary home ubuntu renovateapp node modules renovate dist modules manager npm post update index js n at async getadditionalfiles home ubuntu renovateapp node modules renovate dist modules manager npm post update index js n at async processbranch home ubuntu renovateapp node modules renovate dist workers repository update branch index js n at async writeupdates home ubuntu renovateapp node modules renovate dist workers repository process write js n at async update home ubuntu renovateapp node modules renovate dist workers repository process extract update js n at async object renovaterepository home ubuntu renovateapp node modules renovate dist workers repository index js n at async renovaterepository home ubuntu renovateapp app worker index js n at async home ubuntu 
renovateapp app worker index js msg error updating yarn binary time level branch renovate yarn x updatedartifacts msg updated lock files time have you created a minimal reproduction repository no reproduction but i have linked to a public repo where it occurs
0
10,462
2,622,165,004
IssuesEvent
2015-03-04 00:11:56
byzhang/graphchi
https://api.github.com/repos/byzhang/graphchi
opened
Ensure that vertex aggregators (toplist, labelanalysis, general) work with multiplex
auto-migrated Priority-Medium Type-Defect
``` If GraphChi uses multiple disks, also vertex data needs to be read in a striped fashion. ``` Original issue reported on code.google.com by `akyrola...@gmail.com` on 27 Jun 2012 at 3:02
1.0
Ensure that vertex aggregators (toplist, labelanalysis, general) work with multiplex - ``` If GraphChi uses multiple disks, also vertex data needs to be read in a striped fashion. ``` Original issue reported on code.google.com by `akyrola...@gmail.com` on 27 Jun 2012 at 3:02
defect
ensure that vertex aggregators toplist labelanalysis general work with multiplex if graphchi uses multiple disks also vertex data needs to be read in a striped fashion original issue reported on code google com by akyrola gmail com on jun at
1
496,319
14,345,012,999
IssuesEvent
2020-11-28 17:03:50
OpenPrinting/cups
https://api.github.com/repos/OpenPrinting/cups
closed
[Debian] 2.3.3op1 test fail with 'Verifying that history still exists: FAIL'
bug platform issue priority-low
After pushing 2.3.3op1 to Debian/unstable, the Gitlab CI pipeline ran, but the build failed on both `i386` and `amd64`, with the following errors; ``` … Verifying that history still exists: FAIL Test Summary PASS: cupsd exited with no errors. FAIL: 1 job control files were not purged. … ``` See: - [amd64 build log](https://salsa.debian.org/printing-team/cups/-/jobs/1195619) - [I386 build log](https://salsa.debian.org/printing-team/cups/-/jobs/1195620) In the right part of the Gitlab screen, on can download build artifacts (zipfiles), which contain the full test log files (in `debian/output/source_dir/test`. There are at least two issues; ``` PWG 5100.12 section 6.2 - Required Printer Description Attributes [FAIL] RECEIVED: 8686 bytes in response status-code = successful-ok (successful-ok) EXPECTED: media-supported WITH-ALL-VALUES /^(choice(_((custom|na|asme|roc|oe|roll)_[a-z0-9][-a-z0-9]*_([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])x([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])in|(custom|iso|jis|jpn|prc|om|roll)_[a-z0-9][-a-z0-9]*_([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])x([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])mm)){2,}|(custom|na|asme|roc|oe|roll)_[a-z0-9][-a-z0-9]*_([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])x([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])in|(custom|iso|jis|jpn|prc|om|roll)_[a-z0-9][-a-z0-9]*_([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])x([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])mm)$/ GOT: media-supported="na_letter_8.5x11in" GOT: media-supported="na_legal_8.5x14in" GOT: media-supported="na_executive_7.25x10.5in" GOT: media-supported="na_ledger_11x17in" GOT: media-supported="iso_a3_297x420mm" GOT: media-supported="iso_a4_210x297mm" GOT: media-supported="custom_148.52x209.9mm_148.52x209.9mm" GOT: media-supported="jis_b5_182x257mm" GOT: media-supported="iso_b5_176x250mm" GOT: media-supported="na_number-10_4.125x9.5in" GOT: media-supported="iso_c5_162x229mm" GOT: media-supported="iso_dl_110x220mm" GOT: media-supported="na_monarch_3.875x7.5in" 
"../examples/ipp-2.1.test": ``` and ``` [27/Nov/2020:17:00:12 +0000] "5.11-history": lp -d Test1 testfile.jpg request id is Test1-63 (1 file(s)) PASSED Waiting for jobs to complete... ls -l /tmp/cups.aywkOD/spool FAILED (job control files not present) total 4 drwxrwx--T 2 salsaci salsaci 4096 Nov 27 16:59 temp ```
1.0
[Debian] 2.3.3op1 test fail with 'Verifying that history still exists: FAIL' - After pushing 2.3.3op1 to Debian/unstable, the Gitlab CI pipeline ran, but the build failed on both `i386` and `amd64`, with the following errors; ``` … Verifying that history still exists: FAIL Test Summary PASS: cupsd exited with no errors. FAIL: 1 job control files were not purged. … ``` See: - [amd64 build log](https://salsa.debian.org/printing-team/cups/-/jobs/1195619) - [I386 build log](https://salsa.debian.org/printing-team/cups/-/jobs/1195620) In the right part of the Gitlab screen, on can download build artifacts (zipfiles), which contain the full test log files (in `debian/output/source_dir/test`. There are at least two issues; ``` PWG 5100.12 section 6.2 - Required Printer Description Attributes [FAIL] RECEIVED: 8686 bytes in response status-code = successful-ok (successful-ok) EXPECTED: media-supported WITH-ALL-VALUES /^(choice(_((custom|na|asme|roc|oe|roll)_[a-z0-9][-a-z0-9]*_([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])x([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])in|(custom|iso|jis|jpn|prc|om|roll)_[a-z0-9][-a-z0-9]*_([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])x([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])mm)){2,}|(custom|na|asme|roc|oe|roll)_[a-z0-9][-a-z0-9]*_([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])x([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])in|(custom|iso|jis|jpn|prc|om|roll)_[a-z0-9][-a-z0-9]*_([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])x([1-9][0-9]*(.[0-9]*[1-9])?|0.[0-9]*[1-9])mm)$/ GOT: media-supported="na_letter_8.5x11in" GOT: media-supported="na_legal_8.5x14in" GOT: media-supported="na_executive_7.25x10.5in" GOT: media-supported="na_ledger_11x17in" GOT: media-supported="iso_a3_297x420mm" GOT: media-supported="iso_a4_210x297mm" GOT: media-supported="custom_148.52x209.9mm_148.52x209.9mm" GOT: media-supported="jis_b5_182x257mm" GOT: media-supported="iso_b5_176x250mm" GOT: media-supported="na_number-10_4.125x9.5in" GOT: media-supported="iso_c5_162x229mm" GOT: 
media-supported="iso_dl_110x220mm" GOT: media-supported="na_monarch_3.875x7.5in" "../examples/ipp-2.1.test": ``` and ``` [27/Nov/2020:17:00:12 +0000] "5.11-history": lp -d Test1 testfile.jpg request id is Test1-63 (1 file(s)) PASSED Waiting for jobs to complete... ls -l /tmp/cups.aywkOD/spool FAILED (job control files not present) total 4 drwxrwx--T 2 salsaci salsaci 4096 Nov 27 16:59 temp ```
non_defect
test fail with verifying that history still exists fail after pushing to debian unstable the gitlab ci pipeline ran but the build failed on both and with the following errors … verifying that history still exists fail test summary pass cupsd exited with no errors fail job control files were not purged … see in the right part of the gitlab screen on can download build artifacts zipfiles which contain the full test log files in debian output source dir test there are at least two issues pwg section required printer description attributes received bytes in response status code successful ok successful ok expected media supported with all values choice custom na asme roc oe roll x in custom iso jis jpn prc om roll x mm custom na asme roc oe roll x in custom iso jis jpn prc om roll x mm got media supported na letter got media supported na legal got media supported na executive got media supported na ledger got media supported iso got media supported iso got media supported custom got media supported jis got media supported iso got media supported na number got media supported iso got media supported iso dl got media supported na monarch examples ipp test and history lp d testfile jpg request id is file s passed waiting for jobs to complete ls l tmp cups aywkod spool failed job control files not present total drwxrwx t salsaci salsaci nov temp
0
62,285
17,023,889,377
IssuesEvent
2021-07-03 04:23:27
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
light mint green and light grey text on white background is not accessible
Component: website Priority: minor Resolution: wontfix Type: defect
**[Submitted to the original trac issue database at 4.22pm, Tuesday, 3rd December 2013]** the recent UI downgrade uses colours in the page headline/menu which are not useful for ease of reading. Not a good contrast for visually impaired people. Ever seen a book printed in light grey letters? Only to save ink/toner if you print drafts yourself, right? ;-)
1.0
light mint green and light grey text on white background is not accessible - **[Submitted to the original trac issue database at 4.22pm, Tuesday, 3rd December 2013]** the recent UI downgrade uses colours in the page headline/menu which are not useful for ease of reading. Not a good contrast for visually impaired people. Ever seen a book printed in light grey letters? Only to save ink/toner if you print drafts yourself, right? ;-)
defect
light mint green and light grey text on white background is not accessible the recent ui downgrade uses colours in the page headline menu which are not useful for ease of reading not a good contrast for visually impaired people ever seen a book printed in light grey letters only to save ink toner if you print drafts yourself right
1
262,481
22,842,709,679
IssuesEvent
2022-07-13 00:34:42
osmosis-labs/osmosis
https://api.github.com/repos/osmosis-labs/osmosis
opened
proposal: implement the ability to print out container logs when e2e fails waiting on node to reach a certain height
T:tests T:dev-UX T:story C:e2e
<!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺ v ✰ Thanks for creating an issue! ✰ ☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > --> ## Background Currently, it is challenging to debug e2e upgrades and setup both in CI and locally because, when a node container fails, its logs are not printed to stdout. We need to be able to print the logs from a failing container to the host's stdout. **Story:** >As an Osmosis engineer, I would like to be able to easily identify the source of e2e node container failures so that I can avoid debugging our tooling and focus at being efficient at fixing the actual failures ## Suggested Design We should be able to achieve this by adding logic to [the following branch](https://github.com/osmosis-labs/osmosis/blob/3e449625e06fe8d86ad320d1a2989c19075fea3b/tests/e2e/configurer/chain/chain.go#L117) of `WaitUntil` method of `chain.Config` module. `chain.Config` has `containerManager` embedded as [its field](https://github.com/osmosis-labs/osmosis/blob/3e449625e06fe8d86ad320d1a2989c19075fea3b/tests/e2e/configurer/chain/chain.go#L30). Therefore, it should be possible to get access to the node's container by calling [`GetValidatorResource(...)`](https://github.com/osmosis-labs/osmosis/blob/3e449625e06fe8d86ad320d1a2989c19075fea3b/tests/e2e/containers/containers.go#L245) Once we have the container resource, we can access its `LogPath`. Next, read the last 100-200 files of the `LogPath` file directly and try to output it to the host's stdout. I propose implementing a method on `containers.Manager`: ``` // ExtractLogs extracts logs from a container dedicated to a node running on chainId and identified by nodeIndex. It // returns the last 100 lines of the container logs, if available. Returns error if not available or if a container does not exist. 
func (m *Manager) ExtractLogs(chainId string, nodeIndex) ([]byte, error) ``` This method should be called from [this branch](https://github.com/osmosis-labs/osmosis/blob/3e449625e06fe8d86ad320d1a2989c19075fea3b/tests/e2e/configurer/chain/chain.go#L117) in `WaitUntil` Then, the result ican be used to log the bytes to the host's stdout. ## Acceptance Criteria - `ExtractLogs` is implemented - Logs are printed to the host's stdout - Debugging e2e container failures is convenient
1.0
proposal: implement the ability to print out container logs when e2e fails waiting on node to reach a certain height - <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺ v ✰ Thanks for creating an issue! ✰ ☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > --> ## Background Currently, it is challenging to debug e2e upgrades and setup both in CI and locally because, when a node container fails, its logs are not printed to stdout. We need to be able to print the logs from a failing container to the host's stdout. **Story:** >As an Osmosis engineer, I would like to be able to easily identify the source of e2e node container failures so that I can avoid debugging our tooling and focus at being efficient at fixing the actual failures ## Suggested Design We should be able to achieve this by adding logic to [the following branch](https://github.com/osmosis-labs/osmosis/blob/3e449625e06fe8d86ad320d1a2989c19075fea3b/tests/e2e/configurer/chain/chain.go#L117) of `WaitUntil` method of `chain.Config` module. `chain.Config` has `containerManager` embedded as [its field](https://github.com/osmosis-labs/osmosis/blob/3e449625e06fe8d86ad320d1a2989c19075fea3b/tests/e2e/configurer/chain/chain.go#L30). Therefore, it should be possible to get access to the node's container by calling [`GetValidatorResource(...)`](https://github.com/osmosis-labs/osmosis/blob/3e449625e06fe8d86ad320d1a2989c19075fea3b/tests/e2e/containers/containers.go#L245) Once we have the container resource, we can access its `LogPath`. Next, read the last 100-200 files of the `LogPath` file directly and try to output it to the host's stdout. I propose implementing a method on `containers.Manager`: ``` // ExtractLogs extracts logs from a container dedicated to a node running on chainId and identified by nodeIndex. It // returns the last 100 lines of the container logs, if available. Returns error if not available or if a container does not exist. 
func (m *Manager) ExtractLogs(chainId string, nodeIndex) ([]byte, error) ``` This method should be called from [this branch](https://github.com/osmosis-labs/osmosis/blob/3e449625e06fe8d86ad320d1a2989c19075fea3b/tests/e2e/configurer/chain/chain.go#L117) in `WaitUntil` Then, the result ican be used to log the bytes to the host's stdout. ## Acceptance Criteria - `ExtractLogs` is implemented - Logs are printed to the host's stdout - Debugging e2e container failures is convenient
non_defect
proposal implement the ability to print out container logs when fails waiting on node to reach a certain height ☺ v ✰ thanks for creating an issue ✰ ☺ background currently it is challenging to debug upgrades and setup both in ci and locally because when a node container fails its logs are not printed to stdout we need to be able to print the logs from a failing container to the host s stdout story as an osmosis engineer i would like to be able to easily identify the source of node container failures so that i can avoid debugging our tooling and focus at being efficient at fixing the actual failures suggested design we should be able to achieve this by adding logic to of waituntil method of chain config module chain config has containermanager embedded as therefore it should be possible to get access to the node s container by calling once we have the container resource we can access its logpath next read the last files of the logpath file directly and try to output it to the host s stdout i propose implementing a method on containers manager extractlogs extracts logs from a container dedicated to a node running on chainid and identified by nodeindex it returns the last lines of the container logs if available returns error if not available or if a container does not exist func m manager extractlogs chainid string nodeindex byte error this method should be called from in waituntil then the result ican be used to log the bytes to the host s stdout acceptance criteria extractlogs is implemented logs are printed to the host s stdout debugging container failures is convenient
0
296,596
25,561,883,271
IssuesEvent
2022-11-30 11:24:23
mozilla-mobile/focus-ios
https://api.github.com/repos/mozilla-mobile/focus-ios
closed
You
eng:ui-test eng:automation
### Bitrise Test Run:valid Provide a Bitrise test run report link here showcasing the problem ### Stacktrace: media ### Build: track ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FOCUSIOS-216)
1.0
You - ### Bitrise Test Run:valid Provide a Bitrise test run report link here showcasing the problem ### Stacktrace: media ### Build: track ┆Issue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FOCUSIOS-216)
non_defect
you bitrise test run valid provide a bitrise test run report link here showcasing the problem stacktrace media build track ┆issue is synchronized with this
0
58,248
16,448,175,650
IssuesEvent
2021-05-20 22:52:44
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
closed
Fix Tugboat backup job
Core Application Team Defect DevOps Unplanned work
## Tasks - [x] Ansible: Remove stopping docker and tbctl tasks - [x] Ansible: Remove AMI creation - [x] Terraform: new S3 Bucket (`dsva-vetsgov-utility-tugboat`) - [x] Ansible: Backup the nightly Mongo Tugboat to S3 - [x] Remove any other code - [x] Docs: Add documentation for how the backups work and how to restore a backup - In READMES/devops/tugboat-on-prem-install.md via https://github.com/department-of-veterans-affairs/va.gov-cms/pull/5409
1.0
Fix Tugboat backup job - ## Tasks - [x] Ansible: Remove stopping docker and tbctl tasks - [x] Ansible: Remove AMI creation - [x] Terraform: new S3 Bucket (`dsva-vetsgov-utility-tugboat`) - [x] Ansible: Backup the nightly Mongo Tugboat to S3 - [x] Remove any other code - [x] Docs: Add documentation for how the backups work and how to restore a backup - In READMES/devops/tugboat-on-prem-install.md via https://github.com/department-of-veterans-affairs/va.gov-cms/pull/5409
defect
fix tugboat backup job tasks ansible remove stopping docker and tbctl tasks ansible remove ami creation terraform new bucket dsva vetsgov utility tugboat ansible backup the nightly mongo tugboat to remove any other code docs add documentation for how the backups work and how to restore a backup in readmes devops tugboat on prem install md via
1
330,974
10,058,334,786
IssuesEvent
2019-07-22 13:43:33
wulkano/kap
https://api.github.com/repos/wulkano/kap
closed
FPS option is a switch and not a dropdown
Priority: High Status: In Progress Type: Bug
<!-- Thank you for taking the time to report an issue! ❤️ Before you continue; please make sure you've searched our existing issues to avoid duplicates. When you're ready to open a new issue include as much information as possible. You can use the handy template below for bug reports. macOS version: The output of `$ sw_vers`. Remember that we currently only support macOS 10.12 or later. Kap version: Find this in the about section of Kap, or by right-clicking on the Kap icon and pressing "Get Info". Step to reproduce: If applicable, provide steps to reproduce the issue you're having. Current behavior: A description of how Kap is currently behaving. Expected behavior: How you expected Kap to behave. Workaround: A workaround for the issue if you've found on. (this will help others experiencing the same issue!) --> **macOS version:** Mac OS X 10.14.5 18F96h **Kap version:** 3.0.0-beta.5 #### Steps to reproduce - Open Kap - Navigate to Settings #### Current behaviour - FPS option is switch #### Expected behaviour - FPS option should be a dropdown OR be a switch and not toggle the native select on click #### Workaround <!-- If you have additional information, enter it below. --> <img width="592" alt="Screen Shot 2019-04-06 at 11 13 38 PM" src="https://user-images.githubusercontent.com/14323370/55678091-0c9c9980-58c2-11e9-9ead-04df31abb172.png">
1.0
FPS option is a switch and not a dropdown - <!-- Thank you for taking the time to report an issue! ❤️ Before you continue; please make sure you've searched our existing issues to avoid duplicates. When you're ready to open a new issue include as much information as possible. You can use the handy template below for bug reports. macOS version: The output of `$ sw_vers`. Remember that we currently only support macOS 10.12 or later. Kap version: Find this in the about section of Kap, or by right-clicking on the Kap icon and pressing "Get Info". Step to reproduce: If applicable, provide steps to reproduce the issue you're having. Current behavior: A description of how Kap is currently behaving. Expected behavior: How you expected Kap to behave. Workaround: A workaround for the issue if you've found on. (this will help others experiencing the same issue!) --> **macOS version:** Mac OS X 10.14.5 18F96h **Kap version:** 3.0.0-beta.5 #### Steps to reproduce - Open Kap - Navigate to Settings #### Current behaviour - FPS option is switch #### Expected behaviour - FPS option should be a dropdown OR be a switch and not toggle the native select on click #### Workaround <!-- If you have additional information, enter it below. --> <img width="592" alt="Screen Shot 2019-04-06 at 11 13 38 PM" src="https://user-images.githubusercontent.com/14323370/55678091-0c9c9980-58c2-11e9-9ead-04df31abb172.png">
non_defect
fps option is a switch and not a dropdown thank you for taking the time to report an issue ❤️ before you continue please make sure you ve searched our existing issues to avoid duplicates when you re ready to open a new issue include as much information as possible you can use the handy template below for bug reports macos version the output of sw vers remember that we currently only support macos or later kap version find this in the about section of kap or by right clicking on the kap icon and pressing get info step to reproduce if applicable provide steps to reproduce the issue you re having current behavior a description of how kap is currently behaving expected behavior how you expected kap to behave workaround a workaround for the issue if you ve found on this will help others experiencing the same issue macos version mac os x kap version beta steps to reproduce open kap navigate to settings current behaviour fps option is switch expected behaviour fps option should be a dropdown or be a switch and not toggle the native select on click workaround img width alt screen shot at pm src
0
50,309
13,519,718,460
IssuesEvent
2020-09-15 02:43:30
raindigi/GraphqlType-API-Registration
https://api.github.com/repos/raindigi/GraphqlType-API-Registration
opened
CVE-2020-15168 (Low) detected in node-fetch-2.3.0.tgz
security vulnerability
## CVE-2020-15168 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-fetch-2.3.0.tgz</b></p></summary> <p>A light-weight module that brings window.fetch to node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-2.3.0.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-2.3.0.tgz</a></p> <p>Path to dependency file: GraphqlType-API-Registration/package.json</p> <p>Path to vulnerable library: GraphqlType-API-Registration/node_modules/node-fetch/package.json</p> <p> Dependency Hierarchy: - apollo-server-express-2.3.3.tgz (Root Library) - apollo-server-core-2.3.3.tgz - apollo-server-env-2.2.0.tgz - :x: **node-fetch-2.3.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/raindigi/GraphqlType-API-Registration/commit/cad13c9d11d43833b4c876b96fb7c30dced4857c">cad13c9d11d43833b4c876b96fb7c30dced4857c</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> node-fetch before versions 2.6.1 and 3.0.0-beta.9 did not honor the size option after following a redirect, which means that when a content size was over the limit, a FetchError would never get thrown and the process would end without failure. For most people, this fix will have a little or no impact. However, if you are relying on node-fetch to gate files above a size, the impact could be significant, for example: If you don't double-check the size of the data after fetch() has completed, your JS thread could get tied up doing work on a large file (DoS) and/or cost you money in computing. 
<p>Publish Date: 2020-07-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168>CVE-2020-15168</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r">https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.6.1,3.0.0-beta.9</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-15168 (Low) detected in node-fetch-2.3.0.tgz - ## CVE-2020-15168 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-fetch-2.3.0.tgz</b></p></summary> <p>A light-weight module that brings window.fetch to node.js</p> <p>Library home page: <a href="https://registry.npmjs.org/node-fetch/-/node-fetch-2.3.0.tgz">https://registry.npmjs.org/node-fetch/-/node-fetch-2.3.0.tgz</a></p> <p>Path to dependency file: GraphqlType-API-Registration/package.json</p> <p>Path to vulnerable library: GraphqlType-API-Registration/node_modules/node-fetch/package.json</p> <p> Dependency Hierarchy: - apollo-server-express-2.3.3.tgz (Root Library) - apollo-server-core-2.3.3.tgz - apollo-server-env-2.2.0.tgz - :x: **node-fetch-2.3.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/raindigi/GraphqlType-API-Registration/commit/cad13c9d11d43833b4c876b96fb7c30dced4857c">cad13c9d11d43833b4c876b96fb7c30dced4857c</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> node-fetch before versions 2.6.1 and 3.0.0-beta.9 did not honor the size option after following a redirect, which means that when a content size was over the limit, a FetchError would never get thrown and the process would end without failure. For most people, this fix will have a little or no impact. However, if you are relying on node-fetch to gate files above a size, the impact could be significant, for example: If you don't double-check the size of the data after fetch() has completed, your JS thread could get tied up doing work on a large file (DoS) and/or cost you money in computing. 
<p>Publish Date: 2020-07-21 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15168>CVE-2020-15168</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r">https://github.com/node-fetch/node-fetch/security/advisories/GHSA-w7rc-rwvf-8q5r</a></p> <p>Release Date: 2020-07-21</p> <p>Fix Resolution: 2.6.1,3.0.0-beta.9</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve low detected in node fetch tgz cve low severity vulnerability vulnerable library node fetch tgz a light weight module that brings window fetch to node js library home page a href path to dependency file graphqltype api registration package json path to vulnerable library graphqltype api registration node modules node fetch package json dependency hierarchy apollo server express tgz root library apollo server core tgz apollo server env tgz x node fetch tgz vulnerable library found in head commit a href vulnerability details node fetch before versions and beta did not honor the size option after following a redirect which means that when a content size was over the limit a fetcherror would never get thrown and the process would end without failure for most people this fix will have a little or no impact however if you are relying on node fetch to gate files above a size the impact could be significant for example if you don t double check the size of the data after fetch has completed your js thread could get tied up doing work on a large file dos and or cost you money in computing publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution beta step up your open source security game with whitesource
0
10,793
2,622,189,977
IssuesEvent
2015-03-04 00:22:38
byzhang/cudpp
https://api.github.com/repos/byzhang/cudpp
closed
sorting test failed.
auto-migrated Milestone-Release1.2 OpSys-Linux Priority-High Type-Defect
``` Hi, I just build cudpp and ran cudpp_testrig which failed with (all previous tests were correct) Running a sort of 1048581 unsigned int key-value pairs Unordered key[1048576]:4294966923 > key[1048577]:0 Incorrectly sorted value[1048577] (0) 3530798281 != 0 GPU test FAILED Average execution time: 2.586515 ms Running a sort of 2097152 unsigned int key-value pairs Unordered key[1048576]:4294966923 > key[1048577]:0 Incorrectly sorted value[1048577] (0) 3530798281 != 0 GPU test FAILED Average execution time: 0.000000 ms Running a sort of 4194304 unsigned int key-value pairs Unordered key[1048576]:4294966923 > key[1048577]:0 Incorrectly sorted value[1048577] (0) 3530798281 != 0 GPU test FAILED Average execution time: 0.000000 ms Running a sort of 8388608 unsigned int key-value pairs Unordered key[1048576]:4294966923 > key[1048577]:0 Incorrectly sorted value[1048577] (0) 3530798281 != 0 GPU test FAILED Average execution time: 0.000000 ms My gpu card is a Tesla C1060. ``` Original issue reported on code.google.com by `korgulec@gmail.com` on 19 Jul 2009 at 10:07
1.0
sorting test failed. - ``` Hi, I just build cudpp and ran cudpp_testrig which failed with (all previous tests were correct) Running a sort of 1048581 unsigned int key-value pairs Unordered key[1048576]:4294966923 > key[1048577]:0 Incorrectly sorted value[1048577] (0) 3530798281 != 0 GPU test FAILED Average execution time: 2.586515 ms Running a sort of 2097152 unsigned int key-value pairs Unordered key[1048576]:4294966923 > key[1048577]:0 Incorrectly sorted value[1048577] (0) 3530798281 != 0 GPU test FAILED Average execution time: 0.000000 ms Running a sort of 4194304 unsigned int key-value pairs Unordered key[1048576]:4294966923 > key[1048577]:0 Incorrectly sorted value[1048577] (0) 3530798281 != 0 GPU test FAILED Average execution time: 0.000000 ms Running a sort of 8388608 unsigned int key-value pairs Unordered key[1048576]:4294966923 > key[1048577]:0 Incorrectly sorted value[1048577] (0) 3530798281 != 0 GPU test FAILED Average execution time: 0.000000 ms My gpu card is a Tesla C1060. ``` Original issue reported on code.google.com by `korgulec@gmail.com` on 19 Jul 2009 at 10:07
defect
sorting test failed hi i just build cudpp and ran cudpp testrig which failed with all previous tests were correct running a sort of unsigned int key value pairs unordered key key incorrectly sorted value gpu test failed average execution time ms running a sort of unsigned int key value pairs unordered key key incorrectly sorted value gpu test failed average execution time ms running a sort of unsigned int key value pairs unordered key key incorrectly sorted value gpu test failed average execution time ms running a sort of unsigned int key value pairs unordered key key incorrectly sorted value gpu test failed average execution time ms my gpu card is a tesla original issue reported on code google com by korgulec gmail com on jul at
1
264,582
20,025,704,209
IssuesEvent
2022-02-01 21:03:31
WesleyB003/u-develop-it
https://api.github.com/repos/WesleyB003/u-develop-it
opened
Create the parties table
documentation
* As a user, I can update a candidate's party affiliation. * As a user, I can request a single candidate's information, including party affiliation. * As a user, I can request a list of all the parties. * As a user, I can request a single party's information. * As a user, I can delete a party. * As a user, I can request a single candidate's information. * As a user, I want to delete a candidate. * As a user, I want to create a candidate.
1.0
Create the parties table - * As a user, I can update a candidate's party affiliation. * As a user, I can request a single candidate's information, including party affiliation. * As a user, I can request a list of all the parties. * As a user, I can request a single party's information. * As a user, I can delete a party. * As a user, I can request a single candidate's information. * As a user, I want to delete a candidate. * As a user, I want to create a candidate.
non_defect
create the parties table as a user i can update a candidate s party affiliation as a user i can request a single candidate s information including party affiliation as a user i can request a list of all the parties as a user i can request a single party s information as a user i can delete a party as a user i can request a single candidate s information as a user i want to delete a candidate as a user i want to create a candidate
0
379,306
11,219,669,840
IssuesEvent
2020-01-07 14:22:07
AugurProject/augur
https://api.github.com/repos/AugurProject/augur
closed
Reporting and Disputing: Add rules section to top of form
Needed for V2 launch Priority: High
Add the rules section into the top of the reporting and disputing forms. Use the same design across both. Design https://www.figma.com/file/aAzKHh4cA6OT2t7WFv2BQ7fB/Reporting-and-Disputing?node-id=3242%3A109036 ****Use the Same Rules as Market Creation
1.0
Reporting and Disputing: Add rules section to top of form - Add the rules section into the top of the reporting and disputing forms. Use the same design across both. Design https://www.figma.com/file/aAzKHh4cA6OT2t7WFv2BQ7fB/Reporting-and-Disputing?node-id=3242%3A109036 ****Use the Same Rules as Market Creation
non_defect
reporting and disputing add rules section to top of form add the rules section into the top of the reporting and disputing forms use the same design across both design use the same rules as market creation
0
25,319
4,288,720,678
IssuesEvent
2016-07-17 17:02:01
kraigs-android/kraigsandroid
https://api.github.com/repos/kraigs-android/kraigsandroid
closed
[feature request] random song as alarm tone
auto-migrated Priority-Medium Type-Defect
``` According to our correspondence (I'll just quote it): [quote] (...) great app! But there's one thing I'll certainly miss from AlarmDroid - using random song from library as an alarm tone, random everytime. I'd like to ask if such option could be included? BUT only if it doesnt require much more space in memory - are you able to estimate what would be the size of the app with that option? [/quote] ``` Original issue reported on code.google.com by `winn...@gmail.com` on 10 Sep 2012 at 6:47
1.0
[feature request] random song as alarm tone - ``` According to our correspondence (I'll just quote it): [quote] (...) great app! But there's one thing I'll certainly miss from AlarmDroid - using random song from library as an alarm tone, random everytime. I'd like to ask if such option could be included? BUT only if it doesnt require much more space in memory - are you able to estimate what would be the size of the app with that option? [/quote] ``` Original issue reported on code.google.com by `winn...@gmail.com` on 10 Sep 2012 at 6:47
defect
random song as alarm tone according to our correspondence i ll just quote it great app but there s one thing i ll certainly miss from alarmdroid using random song from library as an alarm tone random everytime i d like to ask if such option could be included but only if it doesnt require much more space in memory are you able to estimate what would be the size of the app with that option original issue reported on code google com by winn gmail com on sep at
1
288,371
31,861,313,224
IssuesEvent
2023-09-15 11:07:48
nidhi7598/linux-v4.19.72_CVE-2022-3564
https://api.github.com/repos/nidhi7598/linux-v4.19.72_CVE-2022-3564
opened
CVE-2022-3629 (Low) detected in linuxlinux-4.19.294
Mend: dependency security vulnerability
## CVE-2022-3629 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/vmw_vsock/af_vsock.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was found in Linux Kernel. It has been declared as problematic. This vulnerability affects the function vsock_connect of the file net/vmw_vsock/af_vsock.c. The manipulation leads to memory leak. It is recommended to apply a patch to fix this issue. VDB-211930 is the identifier assigned to this vulnerability. 
<p>Publish Date: 2022-10-21 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3629>CVE-2022-3629</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-3629">https://www.linuxkernelcves.com/cves/CVE-2022-3629</a></p> <p>Release Date: 2022-10-21</p> <p>Fix Resolution: v4.9.326,v4.14.291,v4.19.256,v5.4.211,v5.10.138,v5.15.63</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-3629 (Low) detected in linuxlinux-4.19.294 - ## CVE-2022-3629 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/vmw_vsock/af_vsock.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was found in Linux Kernel. It has been declared as problematic. This vulnerability affects the function vsock_connect of the file net/vmw_vsock/af_vsock.c. The manipulation leads to memory leak. It is recommended to apply a patch to fix this issue. VDB-211930 is the identifier assigned to this vulnerability. 
<p>Publish Date: 2022-10-21 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3629>CVE-2022-3629</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-3629">https://www.linuxkernelcves.com/cves/CVE-2022-3629</a></p> <p>Release Date: 2022-10-21</p> <p>Fix Resolution: v4.9.326,v4.14.291,v4.19.256,v5.4.211,v5.10.138,v5.15.63</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve low detected in linuxlinux cve low severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files net vmw vsock af vsock c vulnerability details a vulnerability was found in linux kernel it has been declared as problematic this vulnerability affects the function vsock connect of the file net vmw vsock af vsock c the manipulation leads to memory leak it is recommended to apply a patch to fix this issue vdb is the identifier assigned to this vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
71,232
23,495,997,792
IssuesEvent
2022-08-18 01:31:17
idaholab/moose
https://api.github.com/repos/idaholab/moose
opened
Nonuity build results in VPP errors using conda stacs on Macs
C: Framework T: defect P: normal
## Bug Description If moose is compiled using non-unity build and the conda stacks on Macs (both ARM and Intel), we get the following errors: ``` vectorpostprocessors/parallel_consistency.test ................................... [min_cpus=2] FAILED (CRASH) vectorpostprocessors/parallel_consistency.broadcast .............................. [min_cpus=2] FAILED (CRASH) auxkernels/vector_postprocessor_visualization.test ............................... [min_cpus=3] FAILED (CRASH) ``` ## Steps to Reproduce ``` git activate moose (latest moose environment with libmesh) git clone https://www.github.com/idaholab/moose cd moose/test MOOSE_UNITY=false make -j8 ./run_tests -j8 ``` ## Impact Will enable libtorch-related merges due to the fact that it disables unity build in certain folders.
1.0
Nonuity build results in VPP errors using conda stacs on Macs - ## Bug Description If moose is compiled using non-unity build and the conda stacks on Macs (both ARM and Intel), we get the following errors: ``` vectorpostprocessors/parallel_consistency.test ................................... [min_cpus=2] FAILED (CRASH) vectorpostprocessors/parallel_consistency.broadcast .............................. [min_cpus=2] FAILED (CRASH) auxkernels/vector_postprocessor_visualization.test ............................... [min_cpus=3] FAILED (CRASH) ``` ## Steps to Reproduce ``` git activate moose (latest moose environment with libmesh) git clone https://www.github.com/idaholab/moose cd moose/test MOOSE_UNITY=false make -j8 ./run_tests -j8 ``` ## Impact Will enable libtorch-related merges due to the fact that it disables unity build in certain folders.
defect
nonuity build results in vpp errors using conda stacs on macs bug description if moose is compiled using non unity build and the conda stacks on macs both arm and intel we get the following errors vectorpostprocessors parallel consistency test failed crash vectorpostprocessors parallel consistency broadcast failed crash auxkernels vector postprocessor visualization test failed crash steps to reproduce git activate moose latest moose environment with libmesh git clone cd moose test moose unity false make run tests impact will enable libtorch related merges due to the fact that it disables unity build in certain folders
1
436,517
30,555,944,822
IssuesEvent
2023-07-20 11:42:53
navibyte/geospatial
https://api.github.com/repos/navibyte/geospatial
closed
Document and analyze spatial index methods
documentation specifications
Document some spatial index methods used to reference to some coordinates with some codes. This is for documentation. Consider later whether the library could support such indexes or codes. [Open Location Code](https://github.com/google/open-location-code) * *Open Location Code is a technology that gives a way of encoding location into a form that is easier to use than latitude and longitude. The codes generated are called plus codes, as their distinguishing attribute is that they include a "+" character.* * has also [Dart library](https://github.com/google/open-location-code/tree/main/dart)
1.0
Document and analyze spatial index methods - Document some spatial index methods used to reference to some coordinates with some codes. This is for documentation. Consider later whether the library could support such indexes or codes. [Open Location Code](https://github.com/google/open-location-code) * *Open Location Code is a technology that gives a way of encoding location into a form that is easier to use than latitude and longitude. The codes generated are called plus codes, as their distinguishing attribute is that they include a "+" character.* * has also [Dart library](https://github.com/google/open-location-code/tree/main/dart)
non_defect
document and analyze spatial index methods document some spatial index methods used to reference to some coordinates with some codes this is for documentation consider later whether the library could support such indexes or codes open location code is a technology that gives a way of encoding location into a form that is easier to use than latitude and longitude the codes generated are called plus codes as their distinguishing attribute is that they include a character has also
0
327,151
24,120,309,921
IssuesEvent
2022-09-20 18:05:10
oybek703/node-architecture
https://api.github.com/repos/oybek703/node-architecture
closed
basics and two cli projects
documentation
- [x] 01 Введение - [x] 02 Настройка окружения - [x] 03 Начало работы с Node.js - [x] 04 Как работает Node.js_ - [x] 05 Многопоточность - [x] 06 Движок V8 - [x] 07 Node Package Manager - [x] 08 Приложение 1 - CLI прогноз погоды - [x] 09 Приложение 2 - API с ExpressJS
1.0
basics and two cli projects - - [x] 01 Введение - [x] 02 Настройка окружения - [x] 03 Начало работы с Node.js - [x] 04 Как работает Node.js_ - [x] 05 Многопоточность - [x] 06 Движок V8 - [x] 07 Node Package Manager - [x] 08 Приложение 1 - CLI прогноз погоды - [x] 09 Приложение 2 - API с ExpressJS
non_defect
basics and two cli projects введение настройка окружения начало работы с node js как работает node js многопоточность движок node package manager приложение cli прогноз погоды приложение api с expressjs
0
387,398
11,460,617,412
IssuesEvent
2020-02-07 10:06:47
nim-lang/Nim
https://api.github.com/repos/nim-lang/Nim
closed
Hashes: Adding Macros to Make an User Defined object Hashable
Feature Low Priority Macros Pragmas
I want macro used as pragma who makes `proc hash(x: UserDefinedObject): Hash`. example: (`hashable` is the name of the macro) ```example1.nim import hashes type MyObject {.hashable.} = object foo: int bar: string ``` is converted to ```converted1.nim import hashes type MyObject = object foo: int bar: string proc hash(x: MyObject): Hash = ## Computes a Hash from `x`. var h: Hash = 0 h = h !& hash(x.foo) h = h !& hash(x.bar) result = !$h ``` another example: ```example2.nim import hashes type MyKind = enum A, B MyObject {.hashable.} = object case kind: MyKind of A: foo: int of B: bar: string ``` is converted to ```converted2.nim import hashes type MyKind = enum A, B MyObject = object case kind: MyKind of A: foo: int of B: bar: string proc hash(x: MyObject): Hash = ## Computes a Hash from `x`. var h: Hash = 0 h = h !& hash(x.kind) case x.kind of A: h = h !& hash(x.foo) of B: h = h !& hash(x.bar) result = !$h ```
1.0
Hashes: Adding Macros to Make an User Defined object Hashable - I want macro used as pragma who makes `proc hash(x: UserDefinedObject): Hash`. example: (`hashable` is the name of the macro) ```example1.nim import hashes type MyObject {.hashable.} = object foo: int bar: string ``` is converted to ```converted1.nim import hashes type MyObject = object foo: int bar: string proc hash(x: MyObject): Hash = ## Computes a Hash from `x`. var h: Hash = 0 h = h !& hash(x.foo) h = h !& hash(x.bar) result = !$h ``` another example: ```example2.nim import hashes type MyKind = enum A, B MyObject {.hashable.} = object case kind: MyKind of A: foo: int of B: bar: string ``` is converted to ```converted2.nim import hashes type MyKind = enum A, B MyObject = object case kind: MyKind of A: foo: int of B: bar: string proc hash(x: MyObject): Hash = ## Computes a Hash from `x`. var h: Hash = 0 h = h !& hash(x.kind) case x.kind of A: h = h !& hash(x.foo) of B: h = h !& hash(x.bar) result = !$h ```
non_defect
hashes adding macros to make an user defined object hashable i want macro used as pragma who makes proc hash x userdefinedobject hash example hashable is the name of the macro nim import hashes type myobject hashable object foo int bar string is converted to nim import hashes type myobject object foo int bar string proc hash x myobject hash computes a hash from x var h hash h h hash x foo h h hash x bar result h another example nim import hashes type mykind enum a b myobject hashable object case kind mykind of a foo int of b bar string is converted to nim import hashes type mykind enum a b myobject object case kind mykind of a foo int of b bar string proc hash x myobject hash computes a hash from x var h hash h h hash x kind case x kind of a h h hash x foo of b h h hash x bar result h
0
71,699
23,764,986,330
IssuesEvent
2022-09-01 12:05:52
TykTechnologies/tyk-operator
https://api.github.com/repos/TykTechnologies/tyk-operator
closed
[TT-3672] Ingress Controller + Security Policy incompatibility
defect
We're using the Tyk Operator Ingress controller to create api configurations within tyk, and also using Tyk Operator to define our policies within tyk, however, operator appears to be unable to reconcile access right entries within policies.. It appears that the only way to achieve this is to use the automatically generated full name of the API that's created within Tyk by the controller from the ingress defintion.. this is unacceptable as it's an auto-generate name that we do not know at design/dev time. We believe that the best solution is for the reconciler to be able to find api's based on the name of the tyk ingress within k8s. # Example API Definition (Template): ```yaml apiVersion: tyk.tyk.io/v1alpha1 kind: ApiDefinition metadata: name: myapideftemplate labels: template: "true" spec: name: foo protocol: http use_keyless: true proxy: target_url: http://example.com ``` Ingress: ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: httpbin-ingress annotations: kubernetes.io/ingress.class: tyk tyk.io/template: myapideftemplate # <--- refers to the api definition (template) above spec: rules: - http: paths: - path: /httpbin pathType: Prefix backend: service: name: httpbin port: number: 8000 ``` Security Policy: ```yaml apiVersion: tyk.tyk.io/v1alpha1 kind: SecurityPolicy metadata: name: policy-give-me-access spec: name: 'Give Me Access' state: active active: true access_rights_array: ## Option 1: - name: httpbin-ingress # <- refers to the ingress above, does not work namespace: default versions: - Default ## Option 2: - name: myapideftemplate # <- refers to the api definition (template) above, does not work namespace: default versions: - Default ## Option 3: - name: auto-generated-ingres-full-name-1u72yba # <- refers to the api definition generated by the controller from the ingress definition, works, but unacceptable behaviour namespace: default versions: - Default quota_max: 10 quota_renewal_rate: 60 rate: 5 per: 5 throttle_interval: 2 
throttle_retry_limit: 2 ```
1.0
[TT-3672] Ingress Controller + Security Policy incompatibility - We're using the Tyk Operator Ingress controller to create api configurations within tyk, and also using Tyk Operator to define our policies within tyk, however, operator appears to be unable to reconcile access right entries within policies.. It appears that the only way to achieve this is to use the automatically generated full name of the API that's created within Tyk by the controller from the ingress defintion.. this is unacceptable as it's an auto-generate name that we do not know at design/dev time. We believe that the best solution is for the reconciler to be able to find api's based on the name of the tyk ingress within k8s. # Example API Definition (Template): ```yaml apiVersion: tyk.tyk.io/v1alpha1 kind: ApiDefinition metadata: name: myapideftemplate labels: template: "true" spec: name: foo protocol: http use_keyless: true proxy: target_url: http://example.com ``` Ingress: ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: httpbin-ingress annotations: kubernetes.io/ingress.class: tyk tyk.io/template: myapideftemplate # <--- refers to the api definition (template) above spec: rules: - http: paths: - path: /httpbin pathType: Prefix backend: service: name: httpbin port: number: 8000 ``` Security Policy: ```yaml apiVersion: tyk.tyk.io/v1alpha1 kind: SecurityPolicy metadata: name: policy-give-me-access spec: name: 'Give Me Access' state: active active: true access_rights_array: ## Option 1: - name: httpbin-ingress # <- refers to the ingress above, does not work namespace: default versions: - Default ## Option 2: - name: myapideftemplate # <- refers to the api definition (template) above, does not work namespace: default versions: - Default ## Option 3: - name: auto-generated-ingres-full-name-1u72yba # <- refers to the api definition generated by the controller from the ingress definition, works, but unacceptable behaviour namespace: default versions: - Default quota_max: 10 
quota_renewal_rate: 60 rate: 5 per: 5 throttle_interval: 2 throttle_retry_limit: 2 ```
defect
ingress controller security policy incompatibility we re using the tyk operator ingress controller to create api configurations within tyk and also using tyk operator to define our policies within tyk however operator appears to be unable to reconcile access right entries within policies it appears that the only way to achieve this is to use the automatically generated full name of the api that s created within tyk by the controller from the ingress defintion this is unacceptable as it s an auto generate name that we do not know at design dev time we believe that the best solution is for the reconciler to be able to find api s based on the name of the tyk ingress within example api definition template yaml apiversion tyk tyk io kind apidefinition metadata name myapideftemplate labels template true spec name foo protocol http use keyless true proxy target url ingress yaml apiversion networking io kind ingress metadata name httpbin ingress annotations kubernetes io ingress class tyk tyk io template myapideftemplate refers to the api definition template above spec rules http paths path httpbin pathtype prefix backend service name httpbin port number security policy yaml apiversion tyk tyk io kind securitypolicy metadata name policy give me access spec name give me access state active active true access rights array option name httpbin ingress refers to the ingress above does not work namespace default versions default option name myapideftemplate refers to the api definition template above does not work namespace default versions default option name auto generated ingres full name refers to the api definition generated by the controller from the ingress definition works but unacceptable behaviour namespace default versions default quota max quota renewal rate rate per throttle interval throttle retry limit
1
230,516
25,482,690,210
IssuesEvent
2022-11-26 01:13:42
Nivaskumark/kernel_v4.1.15
https://api.github.com/repos/Nivaskumark/kernel_v4.1.15
reopened
CVE-2017-18249 (Medium) detected in linuxlinux-4.6
security vulnerability
## CVE-2017-18249 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/f2fs/node.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/f2fs/node.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The add_free_nid function in fs/f2fs/node.c in the Linux kernel before 4.12 does not properly track an allocated nid, which allows local users to cause a denial of service (race condition) or possibly have unspecified other impact via concurrent threads. 
<p>Publish Date: 2018-03-26 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-18249>CVE-2017-18249</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-18249">https://nvd.nist.gov/vuln/detail/CVE-2017-18249</a></p> <p>Release Date: 2018-03-26</p> <p>Fix Resolution: 4.12</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-18249 (Medium) detected in linuxlinux-4.6 - ## CVE-2017-18249 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.1.15/commit/00db4e8795bcbec692fb60b19160bdd763ad42e3">00db4e8795bcbec692fb60b19160bdd763ad42e3</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/f2fs/node.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/f2fs/node.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The add_free_nid function in fs/f2fs/node.c in the Linux kernel before 4.12 does not properly track an allocated nid, which allows local users to cause a denial of service (race condition) or possibly have unspecified other impact via concurrent threads. 
<p>Publish Date: 2018-03-26 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-18249>CVE-2017-18249</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-18249">https://nvd.nist.gov/vuln/detail/CVE-2017-18249</a></p> <p>Release Date: 2018-03-26</p> <p>Fix Resolution: 4.12</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files fs node c fs node c vulnerability details the add free nid function in fs node c in the linux kernel before does not properly track an allocated nid which allows local users to cause a denial of service race condition or possibly have unspecified other impact via concurrent threads publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
20,213
3,316,732,161
IssuesEvent
2015-11-06 18:15:20
martindevans/fortune-voronoi
https://api.github.com/repos/martindevans/fortune-voronoi
closed
sample code of how to use the Fortune Vornoi dll
auto-migrated Priority-Medium Type-Defect
``` Hi BenDi Using the current version of the dll-code in VS2008, in VB.Net Thanks very much for sharing the code. The downloaded project zip file contains code to generate the FortuneVoronoi.dll but nothing on how to use it. Would you have a small code example that shows the use the dll, especially how to construct the edges from results returned from the ComputeVoronoiGraph call, please? Alternatively, can you please tell us which data from the VoronoiGraph.Edges structure we should use to construct the edges (from and to-points). Note: I get a few 1.#INF values in Bx and By using 5 input vertices. Cheers ``` Original issue reported on code.google.com by `pant...@gmail.com` on 6 Jan 2014 at 8:15
1.0
sample code of how to use the Fortune Vornoi dll - ``` Hi BenDi Using the current version of the dll-code in VS2008, in VB.Net Thanks very much for sharing the code. The downloaded project zip file contains code to generate the FortuneVoronoi.dll but nothing on how to use it. Would you have a small code example that shows the use the dll, especially how to construct the edges from results returned from the ComputeVoronoiGraph call, please? Alternatively, can you please tell us which data from the VoronoiGraph.Edges structure we should use to construct the edges (from and to-points). Note: I get a few 1.#INF values in Bx and By using 5 input vertices. Cheers ``` Original issue reported on code.google.com by `pant...@gmail.com` on 6 Jan 2014 at 8:15
defect
sample code of how to use the fortune vornoi dll hi bendi using the current version of the dll code in in vb net thanks very much for sharing the code the downloaded project zip file contains code to generate the fortunevoronoi dll but nothing on how to use it would you have a small code example that shows the use the dll especially how to construct the edges from results returned from the computevoronoigraph call please alternatively can you please tell us which data from the voronoigraph edges structure we should use to construct the edges from and to points note i get a few inf values in bx and by using input vertices cheers original issue reported on code google com by pant gmail com on jan at
1
127,910
5,040,025,778
IssuesEvent
2016-12-19 02:20:05
coreos/bugs
https://api.github.com/repos/coreos/bugs
closed
Using Ignition on VMware ESXi results in "failed to start initrd-switch-root.target"
area/usability component/ignition kind/bug priority/P1 team/os
# Issue Report # As we need to use static IP addresses for our CoreOS VMs on VMware ESXi, I was trying to create an ignition.json to pass the correct settings. ## Bug ## Upon the first boot, I see a `failed to start initrd-switch-root.target: Transaction is desctructive` error. The boot screen halts. The subsequent error is `initrd-cleanup.service: Main process exited, code=exited, status=4/NOPERMISSION` Screenshot: ![Error](http://i.imgur.com/NOLL15r.png) ### CoreOS Version ### 1185.3.0 ### Environment ### What hardware/cloud provider/hypervisor is being used to run CoreOS? VMware ESXi v5.1 ### Expected Behavior ### The ignition configuration gets applied correctly. ### Actual Behavior ### The ignition configuration doesn't get applied correctly. ### Reproduction Steps ### 1. I upload the current stable OVA image and import it 2. Idownload the .vmx file and add the `guestinfo.coreos.config.data` as well as `guestinfo.coreos.config.data.encoding` info. I validated that the ignition.json is correct via the online validator before. 3. Then, I upload the .vmx file again to the ESXi host, and start the VM for the first time.
1.0
Using Ignition on VMware ESXi results in "failed to start initrd-switch-root.target" - # Issue Report # As we need to use static IP addresses for our CoreOS VMs on VMware ESXi, I was trying to create an ignition.json to pass the correct settings. ## Bug ## Upon the first boot, I see a `failed to start initrd-switch-root.target: Transaction is desctructive` error. The boot screen halts. The subsequent error is `initrd-cleanup.service: Main process exited, code=exited, status=4/NOPERMISSION` Screenshot: ![Error](http://i.imgur.com/NOLL15r.png) ### CoreOS Version ### 1185.3.0 ### Environment ### What hardware/cloud provider/hypervisor is being used to run CoreOS? VMware ESXi v5.1 ### Expected Behavior ### The ignition configuration gets applied correctly. ### Actual Behavior ### The ignition configuration doesn't get applied correctly. ### Reproduction Steps ### 1. I upload the current stable OVA image and import it 2. Idownload the .vmx file and add the `guestinfo.coreos.config.data` as well as `guestinfo.coreos.config.data.encoding` info. I validated that the ignition.json is correct via the online validator before. 3. Then, I upload the .vmx file again to the ESXi host, and start the VM for the first time.
non_defect
using ignition on vmware esxi results in failed to start initrd switch root target issue report as we need to use static ip addresses for our coreos vms on vmware esxi i was trying to create an ignition json to pass the correct settings bug upon the first boot i see a failed to start initrd switch root target transaction is desctructive error the boot screen halts the subsequent error is initrd cleanup service main process exited code exited status nopermission screenshot coreos version environment what hardware cloud provider hypervisor is being used to run coreos vmware esxi expected behavior the ignition configuration gets applied correctly actual behavior the ignition configuration doesn t get applied correctly reproduction steps i upload the current stable ova image and import it idownload the vmx file and add the guestinfo coreos config data as well as guestinfo coreos config data encoding info i validated that the ignition json is correct via the online validator before then i upload the vmx file again to the esxi host and start the vm for the first time
0
28,250
5,225,049,528
IssuesEvent
2017-01-27 17:06:17
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
TranslateBehavior has collision problems when the table name is used twice
Defect
This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.3.8 * Platform and Target: PHP 5.6, MySQL 5.6 ### What you did ``` $record1 = TableRegistry::get('Vendor/Table1.TranslationKeys') ->find('all') ->where(['key' => 'One']) ->first(); $record2 = TableRegistry::get('Vendor/Table2.TranslationKeys') ->find('all') ->where(['key' => 'Two']) ->first(); ``` ### What happened Translations for the second record are being taken from the first table. ### What you expected to happen Translations for the second record should be taken from the second table. This can be fixed by changing the `alias` used in `TranslateBehavior::setupFieldAssociations` and `TranslateBehavior::beforeFind` from `` $alias = $this->_table->alias(); `` To `` $alias = Inflector::slug($this->_table->registryAlias()); `` I'm pretty sure this came up in the past, so not sure when it broke.
1.0
TranslateBehavior has collision problems when the table name is used twice - This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.3.8 * Platform and Target: PHP 5.6, MySQL 5.6 ### What you did ``` $record1 = TableRegistry::get('Vendor/Table1.TranslationKeys') ->find('all') ->where(['key' => 'One']) ->first(); $record2 = TableRegistry::get('Vendor/Table2.TranslationKeys') ->find('all') ->where(['key' => 'Two']) ->first(); ``` ### What happened Translations for the second record are being taken from the first table. ### What you expected to happen Translations for the second record should be taken from the second table. This can be fixed by changing the `alias` used in `TranslateBehavior::setupFieldAssociations` and `TranslateBehavior::beforeFind` from `` $alias = $this->_table->alias(); `` To `` $alias = Inflector::slug($this->_table->registryAlias()); `` I'm pretty sure this came up in the past, so not sure when it broke.
defect
translatebehavior has collision problems when the table name is used twice this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target php mysql what you did tableregistry get vendor translationkeys find all where first tableregistry get vendor translationkeys find all where first what happened translations for the second record are being taken from the first table what you expected to happen translations for the second record should be taken from the second table this can be fixed by changing the alias used in translatebehavior setupfieldassociations and translatebehavior beforefind from alias this table alias to alias inflector slug this table registryalias i m pretty sure this came up in the past so not sure when it broke
1
714,397
24,560,333,833
IssuesEvent
2022-10-12 19:38:04
kubeflow/kubeflow
https://api.github.com/repos/kubeflow/kubeflow
closed
Installing Kubeflow on AWS EKS
priority/p1 area/installation
I had tried so many option without success, my final configuration run kfctl on EC2 following the [instruction](https://www.kubeflow.org/docs/distributions/aws/deploy/install-kubeflow/) is: ``` kfctl version kfctl v1.2.0-0-gbc038f9 ``` eks have Kubernetes version 1.21 I tried to change the yaml from ``` repos: - name: manifests uri: https://github.com/kubeflow/manifests/archive/v1.2.0.tar.gz version: v1.2-branch ``` to ``` repos: - name: manifests uri: /home/ec2-user/tmp/v1.2.0.tar.gz version: v1.2-branch ``` error that keep occurring for ever ``` WARN[0008] Encountered error applying application istio: (kubeflow.error): Code 500 with message: Apply.Run : error when creating "/tmp/kout209765219": admission webhook "validation.istio.io" denied the request: configuration is invalid: TLS match must have at least one SNI host filename="kustomize/kustomize.go:284" WARN[0008] Will retry in 5 seconds. filename="kustomize/kustomize.go:285" ``` until final error is: ``` ERRO[0610] Permanently failed applying application istio: (kubeflow.error): Code 500 with message: Apply.Run : error when creating "/tmp/kout568084052": admission webhook "validation.istio.io" denied the request: configuration is invalid: TLS match must have at least one SNI host filename="kustomize/kustomize.go:288" Error: failed to apply: (kubeflow.error): Code 500 with message: kfApp Apply failed for kustomize: (kubeflow.error): Code 500 with message: Apply.Run : error when creating "/tmp/kout568084052": admission webhook "validation.istio.io" denied the request: configuration is invalid: TLS match must have at least one SNI host ``` Any suggestion is much appreciated
1.0
Installing Kubeflow on AWS EKS - I had tried so many option without success, my final configuration run kfctl on EC2 following the [instruction](https://www.kubeflow.org/docs/distributions/aws/deploy/install-kubeflow/) is: ``` kfctl version kfctl v1.2.0-0-gbc038f9 ``` eks have Kubernetes version 1.21 I tried to change the yaml from ``` repos: - name: manifests uri: https://github.com/kubeflow/manifests/archive/v1.2.0.tar.gz version: v1.2-branch ``` to ``` repos: - name: manifests uri: /home/ec2-user/tmp/v1.2.0.tar.gz version: v1.2-branch ``` error that keep occurring for ever ``` WARN[0008] Encountered error applying application istio: (kubeflow.error): Code 500 with message: Apply.Run : error when creating "/tmp/kout209765219": admission webhook "validation.istio.io" denied the request: configuration is invalid: TLS match must have at least one SNI host filename="kustomize/kustomize.go:284" WARN[0008] Will retry in 5 seconds. filename="kustomize/kustomize.go:285" ``` until final error is: ``` ERRO[0610] Permanently failed applying application istio: (kubeflow.error): Code 500 with message: Apply.Run : error when creating "/tmp/kout568084052": admission webhook "validation.istio.io" denied the request: configuration is invalid: TLS match must have at least one SNI host filename="kustomize/kustomize.go:288" Error: failed to apply: (kubeflow.error): Code 500 with message: kfApp Apply failed for kustomize: (kubeflow.error): Code 500 with message: Apply.Run : error when creating "/tmp/kout568084052": admission webhook "validation.istio.io" denied the request: configuration is invalid: TLS match must have at least one SNI host ``` Any suggestion is much appreciated
non_defect
installing kubeflow on aws eks i had tried so many option without success my final configuration run kfctl on following the is kfctl version kfctl eks have kubernetes version i tried to change the yaml from repos name manifests uri version branch to repos name manifests uri home user tmp tar gz version branch error that keep occurring for ever warn encountered error applying application istio kubeflow error code with message apply run error when creating tmp admission webhook validation istio io denied the request configuration is invalid tls match must have at least one sni host filename kustomize kustomize go warn will retry in seconds filename kustomize kustomize go until final error is erro permanently failed applying application istio kubeflow error code with message apply run error when creating tmp admission webhook validation istio io denied the request configuration is invalid tls match must have at least one sni host filename kustomize kustomize go error failed to apply kubeflow error code with message kfapp apply failed for kustomize kubeflow error code with message apply run error when creating tmp admission webhook validation istio io denied the request configuration is invalid tls match must have at least one sni host any suggestion is much appreciated
0
315,336
27,066,406,782
IssuesEvent
2023-02-14 01:01:39
dotnet/maui
https://api.github.com/repos/dotnet/maui
closed
.NET Maui application throws System.DllNotFoundException for libMicrosoft.CognitiveServices.Speech.core.so
t/bug s/verified external s/try-latest-version
### Description .NET Maui application throws System.DllNotFoundException for libMicrosoft.CognitiveServices.Speech.core.so when execute SpeechSDK's methods. ![image](https://user-images.githubusercontent.com/43431002/172188850-8817dc02-4b2e-49ff-97d2-0b8b17f68ad4.png) Environment: - Windows 10 Enterprise 21H2 19044.1466 - Microsoft Visual Studio Enterprise 2022 (64-bit) - Preview Version 17.3.0 Preview 1.1 - .NET MAUI 1.0 - Microsoft.CognitiveServices.Speech v1.22 ### Steps to Reproduce 1. Create a MAUI project 2. Install Microsoft.CognitiveServices.Speech with nuget 3. Add a code which call a SpeechSDK's method. 4. Change the build target for Android ![image](https://user-images.githubusercontent.com/43431002/172189811-13889ad5-e669-45f2-80e5-f37f8dc7a8c5.png) 5. Run the app. ### Version with bug 6.0 (current) ### Last version that worked well Unknown/Other ### Affected platforms Android ### Affected platform versions Android12.1 ### Did you find any workaround? _No response_ ### Relevant log output _No response_
1.0
.NET Maui application throws System.DllNotFoundException for libMicrosoft.CognitiveServices.Speech.core.so - ### Description .NET Maui application throws System.DllNotFoundException for libMicrosoft.CognitiveServices.Speech.core.so when execute SpeechSDK's methods. ![image](https://user-images.githubusercontent.com/43431002/172188850-8817dc02-4b2e-49ff-97d2-0b8b17f68ad4.png) Environment: - Windows 10 Enterprise 21H2 19044.1466 - Microsoft Visual Studio Enterprise 2022 (64-bit) - Preview Version 17.3.0 Preview 1.1 - .NET MAUI 1.0 - Microsoft.CognitiveServices.Speech v1.22 ### Steps to Reproduce 1. Create a MAUI project 2. Install Microsoft.CognitiveServices.Speech with nuget 3. Add a code which call a SpeechSDK's method. 4. Change the build target for Android ![image](https://user-images.githubusercontent.com/43431002/172189811-13889ad5-e669-45f2-80e5-f37f8dc7a8c5.png) 5. Run the app. ### Version with bug 6.0 (current) ### Last version that worked well Unknown/Other ### Affected platforms Android ### Affected platform versions Android12.1 ### Did you find any workaround? _No response_ ### Relevant log output _No response_
non_defect
net maui application throws system dllnotfoundexception for libmicrosoft cognitiveservices speech core so description net maui application throws system dllnotfoundexception for libmicrosoft cognitiveservices speech core so when execute speechsdk s methods environment windows enterprise microsoft visual studio enterprise bit preview version preview net maui microsoft cognitiveservices speech steps to reproduce create a maui project install microsoft cognitiveservices speech with nuget add a code which call a speechsdk s method change the build target for android run the app version with bug current last version that worked well unknown other affected platforms android affected platform versions did you find any workaround no response relevant log output no response
0
134,344
19,180,113,794
IssuesEvent
2021-12-04 08:18:52
cse110-fa21-group8/cse110-fa21-group8
https://api.github.com/repos/cse110-fa21-group8/cse110-fa21-group8
closed
Polish Edit/View Page
task pending design
**Is your feature request related to a problem? Please describe.** Our View and Edit pages currently have a skeleton of what we need the MVP to look like. There is no finalized styling or design on the pages. **Describe the solution you'd like** We will work on adding in proper styling/spacing/fonts/colors to the pages to clean up and refine our client side. **Describe alternatives you've considered** N/A **Additional context** Will need to match similar styling to Joannes Explore and Home Page
1.0
Polish Edit/View Page - **Is your feature request related to a problem? Please describe.** Our View and Edit pages currently have a skeleton of what we need the MVP to look like. There is no finalized styling or design on the pages. **Describe the solution you'd like** We will work on adding in proper styling/spacing/fonts/colors to the pages to clean up and refine our client side. **Describe alternatives you've considered** N/A **Additional context** Will need to match similar styling to Joannes Explore and Home Page
non_defect
polish edit view page is your feature request related to a problem please describe our view and edit pages currently have a skeleton of what we need the mvp to look like there is no finalized styling or design on the pages describe the solution you d like we will work on adding in proper styling spacing fonts colors to the pages to clean up and refine our client side describe alternatives you ve considered n a additional context will need to match similar styling to joannes explore and home page
0
126,295
17,016,908,450
IssuesEvent
2021-07-02 13:19:24
derailed/k9s
https://api.github.com/repos/derailed/k9s
reopened
Cannot list roles
AsDesigned
<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/k9s_err.png" align="right" width="100" height="auto"/> <br/> <br/> <br/> **Describe the bug** listing of roles seems to be empty A clear and concise description of what the bug is. **To Reproduce** in my cluster ![image](https://user-images.githubusercontent.com/32150474/124091994-bcf5aa00-da99-11eb-8b03-a8266b3f86a1.png) ``` pankaj.tolani@tolani-mac  ~  kubectx -c clusteradmin@beta-apse2-v1 pankaj.tolani@tolani-mac  ~  kubectl get role NAME CREATED AT ack-elasticache-reader 2021-02-10T07:11:08Z ack-elasticache-writer 2021-02-10T07:11:08Z ... ``` **Expected behavior** see the list in k9s too? **Screenshots** If applicable, add screenshots to help explain your problem. **Versions (please complete the following information):** - OS: [e.g. OSX] big sur - K9s: [e.g. 0.1.0] v0.24.12 - K8s: [e.g. 1.11.0] kubectl version Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.8-eks-96780e", GitCommit:"96780e1b30acbf0a52c38b6030d7853e575bcdf3", GitTreeState:"clean", BuildDate:"2021-03-10T21:32:29Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"} **Additional context** Add any other context about the problem here.
1.0
Cannot list roles - <img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/k9s_err.png" align="right" width="100" height="auto"/> <br/> <br/> <br/> **Describe the bug** listing of roles seems to be empty A clear and concise description of what the bug is. **To Reproduce** in my cluster ![image](https://user-images.githubusercontent.com/32150474/124091994-bcf5aa00-da99-11eb-8b03-a8266b3f86a1.png) ``` pankaj.tolani@tolani-mac  ~  kubectx -c clusteradmin@beta-apse2-v1 pankaj.tolani@tolani-mac  ~  kubectl get role NAME CREATED AT ack-elasticache-reader 2021-02-10T07:11:08Z ack-elasticache-writer 2021-02-10T07:11:08Z ... ``` **Expected behavior** see the list in k9s too? **Screenshots** If applicable, add screenshots to help explain your problem. **Versions (please complete the following information):** - OS: [e.g. OSX] big sur - K9s: [e.g. 0.1.0] v0.24.12 - K8s: [e.g. 1.11.0] kubectl version Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.8-eks-96780e", GitCommit:"96780e1b30acbf0a52c38b6030d7853e575bcdf3", GitTreeState:"clean", BuildDate:"2021-03-10T21:32:29Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"} **Additional context** Add any other context about the problem here.
non_defect
cannot list roles describe the bug listing of roles seems to be empty a clear and concise description of what the bug is to reproduce in my cluster pankaj tolani tolani mac   kubectx c clusteradmin beta pankaj tolani tolani mac   kubectl get role name created at ack elasticache reader ack elasticache writer expected behavior see the list in too screenshots if applicable add screenshots to help explain your problem versions please complete the following information os big sur kubectl version client version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform darwin server version version info major minor gitversion eks gitcommit gittreestate clean builddate goversion compiler gc platform linux additional context add any other context about the problem here
0
80,987
30,647,216,624
IssuesEvent
2023-07-25 06:10:01
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
opened
SelectCheckboxMenu: Can't associate label to input
:lady_beetle: defect :bangbang: needs-triage
### Describe the bug I am using a selectCheckboxMenu, but can't associate a label to it. p:outputLabel and the label attribute don't work, and also accessibility still requires a label for this ### Reproducer ` <p:outputLabel for="@next" value="Label:"/> <p:selectCheckboxMenu ... label="label" ... ` ### Expected behavior The label should be linked to the component, and also on click on the label the component should be selected (like on other inputs) ### PrimeFaces edition None ### PrimeFaces version 13.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.x ### Java version 11 ### Browser(s) _No response_
1.0
SelectCheckboxMenu: Can't associate label to input - ### Describe the bug I am using a selectCheckboxMenu, but can't associate a label to it. p:outputLabel and the label attribute don't work, and also accessibility still requires a label for this ### Reproducer ` <p:outputLabel for="@next" value="Label:"/> <p:selectCheckboxMenu ... label="label" ... ` ### Expected behavior The label should be linked to the component, and also on click on the label the component should be selected (like on other inputs) ### PrimeFaces edition None ### PrimeFaces version 13.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.x ### Java version 11 ### Browser(s) _No response_
defect
selectcheckboxmenu can t associate label to input describe the bug i am using a selectcheckboxmenu but can t associate a label to it p outputlabel and the label attribute don t work and also accessibility still requires a label for this reproducer p selectcheckboxmenu label label expected behavior the label should be linked to the component and also on click on the label the component should be selected like on other inputs primefaces edition none primefaces version theme no response jsf implementation mojarra jsf version x java version browser s no response
1
4,025
2,610,086,077
IssuesEvent
2015-02-26 18:26:10
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳彻底祛除青春痘
auto-migrated Priority-Medium Type-Defect
``` 深圳彻底祛除青春痘【深圳韩方科颜全国热线400-869-1818,24小 时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:09
1.0
深圳彻底祛除青春痘 - ``` 深圳彻底祛除青春痘【深圳韩方科颜全国热线400-869-1818,24小 时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:09
defect
深圳彻底祛除青春痘 深圳彻底祛除青春痘【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 original issue reported on code google com by szft com on may at
1
367,563
25,749,166,484
IssuesEvent
2022-12-08 11:59:22
sidebase/nuxt-auth
https://api.github.com/repos/sidebase/nuxt-auth
closed
Custom fields for Credentials Provider
documentation
### Describe the feature I would like to suggest a custom/optional field in the user interface object to store additional data. F.e when logging in with Strapi using Credential flow, the server will return a JWT. This token needs to stay unaltered otherwise the secret is no longer valid. I would like to propose a extra optional field for users, so that they can add in content that can be then safely stored within the encoded session jwt. By using getToken in Nitro we could then reacquire the original raw JWT token from Strapi. ### Additional information Within the [...].ts auth catch all endpoint: `if (user) { const u = { id: user.id, name: user.user.username, // Passing the OG JWT through the email field. email: user.jwt }; return u; } else { // If you return null then an error will be displayed advising the user to check their details. return null; // You can also Reject this callback with an Error thus the user will be sent to the error page with the error message as a query parameter }` And then in Nitro Api I hack it out the email field currently with getToken: `const token = await getToken({ event }); const settings = { method: "GET", headers: { "Content-Type": "application/json", Authorization: "Bearer " + token.email, }, };` _No response_
1.0
Custom fields for Credentials Provider - ### Describe the feature I would like to suggest a custom/optional field in the user interface object to store additional data. F.e when logging in with Strapi using Credential flow, the server will return a JWT. This token needs to stay unaltered otherwise the secret is no longer valid. I would like to propose a extra optional field for users, so that they can add in content that can be then safely stored within the encoded session jwt. By using getToken in Nitro we could then reacquire the original raw JWT token from Strapi. ### Additional information Within the [...].ts auth catch all endpoint: `if (user) { const u = { id: user.id, name: user.user.username, // Passing the OG JWT through the email field. email: user.jwt }; return u; } else { // If you return null then an error will be displayed advising the user to check their details. return null; // You can also Reject this callback with an Error thus the user will be sent to the error page with the error message as a query parameter }` And then in Nitro Api I hack it out the email field currently with getToken: `const token = await getToken({ event }); const settings = { method: "GET", headers: { "Content-Type": "application/json", Authorization: "Bearer " + token.email, }, };` _No response_
non_defect
custom fields for credentials provider describe the feature i would like to suggest a custom optional field in the user interface object to store additional data f e when logging in with strapi using credential flow the server will return a jwt this token needs to stay unaltered otherwise the secret is no longer valid i would like to propose a extra optional field for users so that they can add in content that can be then safely stored within the encoded session jwt by using gettoken in nitro we could then reacquire the original raw jwt token from strapi additional information within the ts auth catch all endpoint if user const u id user id name user user username passing the og jwt through the email field email user jwt return u else if you return null then an error will be displayed advising the user to check their details return null you can also reject this callback with an error thus the user will be sent to the error page with the error message as a query parameter and then in nitro api i hack it out the email field currently with gettoken const token await gettoken event const settings method get headers content type application json authorization bearer token email no response
0
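The workaround in the nuxt-auth record above (stashing a raw Strapi JWT in the session's `email` field, then reading it back server-side via `getToken`) can be sketched without the library. Every name below (`buildSessionUser`, `authHeaderFor`, the response shape) is a stand-in invented for illustration, not next-auth or Strapi API; real code would run inside the `[...].ts` catch-all handler and a Nitro endpoint.

```javascript
// Sketch of the record's workaround: smuggle an upstream JWT through the
// session user's `email` field, then recover it for Authorization headers.
// All identifiers here are hypothetical illustrations.

function buildSessionUser(loginResponse) {
  // Keep the raw JWT byte-for-byte: re-encoding it would break its signature.
  return {
    id: loginResponse.user.id,
    name: loginResponse.user.username,
    email: loginResponse.jwt, // overloaded field carrying the raw token
  };
}

function authHeaderFor(sessionUser) {
  // Server side: pull the upstream token back out of the overloaded field.
  return {
    "Content-Type": "application/json",
    Authorization: "Bearer " + sessionUser.email,
  };
}

const login = { jwt: "aaa.bbb.ccc", user: { id: 7, username: "kim" } };
console.log(authHeaderFor(buildSessionUser(login)).Authorization); // "Bearer aaa.bbb.ccc"
```

A dedicated optional field on the session user, as the issue requests, would carry the token without overloading `email` at all.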
18,792
3,087,412,129
IssuesEvent
2015-08-25 11:48:51
akvo/akvo-flow
https://api.github.com/repos/akvo/akvo-flow
closed
Importing cleaned data when Raw Data Report is in Spanish
1 - Defect 2 - Discussion
I don't think this is a new issue, I think I just haven't noticed it before now. On the Data > Data Cleaning tab when you change the raw data report language from the default English option to Spanish, click to export, make some changes to the spreadsheet, and the try to import this cleaned data the importer falls over. Perhaps the easiest solution would be just removing the option to export a raw data report in a different language on the Data Cleaning tab? But leave it on the actual Reports tab.
1.0
Importing cleaned data when Raw Data Report is in Spanish - I don't think this is a new issue, I think I just haven't noticed it before now. On the Data > Data Cleaning tab when you change the raw data report language from the default English option to Spanish, click to export, make some changes to the spreadsheet, and the try to import this cleaned data the importer falls over. Perhaps the easiest solution would be just removing the option to export a raw data report in a different language on the Data Cleaning tab? But leave it on the actual Reports tab.
defect
importing cleaned data when raw data report is in spanish i don t think this is a new issue i think i just haven t noticed it before now on the data data cleaning tab when you change the raw data report language from the default english option to spanish click to export make some changes to the spreadsheet and the try to import this cleaned data the importer falls over perhaps the easiest solution would be just removing the option to export a raw data report in a different language on the data cleaning tab but leave it on the actual reports tab
1
66,660
20,423,905,103
IssuesEvent
2022-02-24 00:20:59
scoutplan/scoutplan
https://api.github.com/repos/scoutplan/scoutplan
closed
[Scoutplan Production/production] NameError: undefined local variable or method `find_unit' for #<EventsController:0x00000000186988> Did you mean? find_unit_info
defect
## Backtrace line 187 of [PROJECT_ROOT]/app/controllers/events_controller.rb: perform_cancellation [View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/84183258)
1.0
[Scoutplan Production/production] NameError: undefined local variable or method `find_unit' for #<EventsController:0x00000000186988> Did you mean? find_unit_info - ## Backtrace line 187 of [PROJECT_ROOT]/app/controllers/events_controller.rb: perform_cancellation [View full backtrace and more info at honeybadger.io](https://app.honeybadger.io/projects/97676/faults/84183258)
defect
nameerror undefined local variable or method find unit for did you mean find unit info backtrace line of app controllers events controller rb perform cancellation
1
10,844
2,622,192,871
IssuesEvent
2015-03-04 00:23:52
byzhang/cudpp
https://api.github.com/repos/byzhang/cudpp
opened
Compiling cudpp on VS2008/2010 causes fatal error C1060: compiler is out of heap space
auto-migrated Milestone-Release2.1 Priority-High Type-Defect
``` What steps will reproduce the problem? 1. Download CUDPP Release 2.0, Aug 8 release 2. Use CMake took to create sln files 3. build on VS2010 using the v90 platform toolset What is the expected output? What do you see instead? 12_segmented_scan_app.compute_10.cudafe2.gpu 2> segmented_scan_app.cu 2> tmpxft_000007b8_00000000-6_segmented_scan_app.compute_13.cudafe1.gpu 2> tmpxft_000007b8_00000000-16_segmented_scan_app.compute_13.cudafe2.gpu 2> segmented_scan_app.cu 2> tmpxft_000007b8_00000000-3_segmented_scan_app.compute_20.cudafe1.gpu 2> tmpxft_000007b8_00000000-20_segmented_scan_app.compute_20.cudafe2.gpu 2> segmented_scan_app.cu 2>ptxas C : /Users/Peter/AppData/Local/Temp/tmpxft_000007b8_00000000-9_segmented_scan_app.co mpute_10.ptx, line 183650; warning : Double is not supported. Demoting to float 2> segmented_scan_app.cu 2> segmented_scan_app.cu 2> tmpxft_000007b8_00000000-8_segmented_scan_app.compute_10.cudafe1.cpp 2> tmpxft_000007b8_00000000-32_segmented_scan_app.compute_10.ii 2> 2>C:/Users/Peter/AppData/Local/Temp/tmpxft_000007b8_00000000-5_segmented_scan_ap p.fatbin.c(2090276): fatal error C1060: compiler is out of heap space 2> 2> CMake Error at CMakeFiles/cudpp_generated_segmented_scan_app.cu.obj.cmake:256 (message): 2> Error generating file 2> C:/Users/Peter/Documents/Labwork/Thesis/CodeSamples/cudpp_src_2.0/build/src/cudp p/Debug/cudpp_generated_segmented_scan_app.cu.obj What version of the product are you using? On what operating system? Win7 x32, Visual Studio 2010 (also tested on 2008) Please provide any additional information below. Attempted several fixes suggested by MS, none of which worked: increased page file size to maximum allowed Used the /Zm option with values ranging from 100 to 2000 Removed the /Zm options ``` Original issue reported on code.google.com by `Peter.Ch...@gmail.com` on 7 Sep 2011 at 5:01
1.0
Compiling cudpp on VS2008/2010 causes fatal error C1060: compiler is out of heap space - ``` What steps will reproduce the problem? 1. Download CUDPP Release 2.0, Aug 8 release 2. Use CMake took to create sln files 3. build on VS2010 using the v90 platform toolset What is the expected output? What do you see instead? 12_segmented_scan_app.compute_10.cudafe2.gpu 2> segmented_scan_app.cu 2> tmpxft_000007b8_00000000-6_segmented_scan_app.compute_13.cudafe1.gpu 2> tmpxft_000007b8_00000000-16_segmented_scan_app.compute_13.cudafe2.gpu 2> segmented_scan_app.cu 2> tmpxft_000007b8_00000000-3_segmented_scan_app.compute_20.cudafe1.gpu 2> tmpxft_000007b8_00000000-20_segmented_scan_app.compute_20.cudafe2.gpu 2> segmented_scan_app.cu 2>ptxas C : /Users/Peter/AppData/Local/Temp/tmpxft_000007b8_00000000-9_segmented_scan_app.co mpute_10.ptx, line 183650; warning : Double is not supported. Demoting to float 2> segmented_scan_app.cu 2> segmented_scan_app.cu 2> tmpxft_000007b8_00000000-8_segmented_scan_app.compute_10.cudafe1.cpp 2> tmpxft_000007b8_00000000-32_segmented_scan_app.compute_10.ii 2> 2>C:/Users/Peter/AppData/Local/Temp/tmpxft_000007b8_00000000-5_segmented_scan_ap p.fatbin.c(2090276): fatal error C1060: compiler is out of heap space 2> 2> CMake Error at CMakeFiles/cudpp_generated_segmented_scan_app.cu.obj.cmake:256 (message): 2> Error generating file 2> C:/Users/Peter/Documents/Labwork/Thesis/CodeSamples/cudpp_src_2.0/build/src/cudp p/Debug/cudpp_generated_segmented_scan_app.cu.obj What version of the product are you using? On what operating system? Win7 x32, Visual Studio 2010 (also tested on 2008) Please provide any additional information below. Attempted several fixes suggested by MS, none of which worked: increased page file size to maximum allowed Used the /Zm option with values ranging from 100 to 2000 Removed the /Zm options ``` Original issue reported on code.google.com by `Peter.Ch...@gmail.com` on 7 Sep 2011 at 5:01
defect
compiling cudpp on causes fatal error compiler is out of heap space what steps will reproduce the problem download cudpp release aug release use cmake took to create sln files build on using the platform toolset what is the expected output what do you see instead segmented scan app compute gpu segmented scan app cu tmpxft segmented scan app compute gpu tmpxft segmented scan app compute gpu segmented scan app cu tmpxft segmented scan app compute gpu tmpxft segmented scan app compute gpu segmented scan app cu ptxas c users peter appdata local temp tmpxft segmented scan app co mpute ptx line warning double is not supported demoting to float segmented scan app cu segmented scan app cu tmpxft segmented scan app compute cpp tmpxft segmented scan app compute ii c users peter appdata local temp tmpxft segmented scan ap p fatbin c fatal error compiler is out of heap space cmake error at cmakefiles cudpp generated segmented scan app cu obj cmake message error generating file c users peter documents labwork thesis codesamples cudpp src build src cudp p debug cudpp generated segmented scan app cu obj what version of the product are you using on what operating system visual studio also tested on please provide any additional information below attempted several fixes suggested by ms none of which worked increased page file size to maximum allowed used the zm option with values ranging from to removed the zm options original issue reported on code google com by peter ch gmail com on sep at
1
57,609
15,881,780,255
IssuesEvent
2021-04-09 15:12:57
Ferlab-Ste-Justine/clin-project
https://api.github.com/repos/Ferlab-Ste-Justine/clin-project
closed
cancellation modal - X performs no action
Defects Front-end
cancellation modal in the prescription form --> clicking the X (top right corner of the modal) produces no action.
1.0
cancellation modal - X performs no action - cancellation modal in the prescription form --> clicking the X (top right corner of the modal) produces no action.
defect
cancellation modal x performs no action cancellation modal in the prescription form clicking the x top right corner of the modal produces no action
1
27,308
4,958,914,575
IssuesEvent
2016-12-02 11:26:05
rbei-etas/busmaster
https://api.github.com/repos/rbei-etas/busmaster
opened
Issue if configured "Enter" key as short cut key in Customize Quick access Toolbar
1.3 patch (defect) 3.3 low priority (EC3)
Enter key not working in Node Simulation editor if "ENTER" key is mapped as short cut key for a command in Customize Quick access Toolbar
1.0
Issue if configured "Enter" key as short cut key in Customize Quick access Toolbar - Enter key not working in Node Simulation editor if "ENTER" key is mapped as short cut key for a command in Customize Quick access Toolbar
defect
issue if configured enter key as short cut key in customize quick access toolbar enter key not working in node simulation editor if enter key is mapped as short cut key for a command in customize quick access toolbar
1
87,157
15,756,006,138
IssuesEvent
2021-03-31 02:45:50
turkdevops/nexus-iq-chrome-extension
https://api.github.com/repos/turkdevops/nexus-iq-chrome-extension
opened
CVE-2017-16026 (Medium) detected in request-2.12.0.tgz
security vulnerability
## CVE-2017-16026 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>request-2.12.0.tgz</b></p></summary> <p>Simplified HTTP request client.</p> <p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.12.0.tgz">https://registry.npmjs.org/request/-/request-2.12.0.tgz</a></p> <p>Path to dependency file: nexus-iq-chrome-extension/release/1.8.1/src/Scripts/lib/jquery-ui-1.12.1/package.json</p> <p>Path to vulnerable library: nexus-iq-chrome-extension/release/1.8.1/src/Scripts/lib/jquery-ui-1.12.1/node_modules/testswarm/node_modules/request/package.json,nexus-iq-chrome-extension/src/Scripts/lib/jquery-ui-1.12.1/node_modules/testswarm/node_modules/request/package.json</p> <p> Dependency Hierarchy: - testswarm-1.1.0.tgz (Root Library) - :x: **request-2.12.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>fixVersionHistory-gh</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Request is an http client. If a request is made using ```multipart```, and the body type is a ```number```, then the specified number of non-zero memory is passed in the body. This affects Request >=2.2.6 <2.47.0 || >2.51.0 <=2.67.0. 
<p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16026>CVE-2017-16026</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-16026">https://nvd.nist.gov/vuln/detail/CVE-2017-16026</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution: 2.47.1,2.67.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2017-16026 (Medium) detected in request-2.12.0.tgz - ## CVE-2017-16026 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>request-2.12.0.tgz</b></p></summary> <p>Simplified HTTP request client.</p> <p>Library home page: <a href="https://registry.npmjs.org/request/-/request-2.12.0.tgz">https://registry.npmjs.org/request/-/request-2.12.0.tgz</a></p> <p>Path to dependency file: nexus-iq-chrome-extension/release/1.8.1/src/Scripts/lib/jquery-ui-1.12.1/package.json</p> <p>Path to vulnerable library: nexus-iq-chrome-extension/release/1.8.1/src/Scripts/lib/jquery-ui-1.12.1/node_modules/testswarm/node_modules/request/package.json,nexus-iq-chrome-extension/src/Scripts/lib/jquery-ui-1.12.1/node_modules/testswarm/node_modules/request/package.json</p> <p> Dependency Hierarchy: - testswarm-1.1.0.tgz (Root Library) - :x: **request-2.12.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>fixVersionHistory-gh</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Request is an http client. If a request is made using ```multipart```, and the body type is a ```number```, then the specified number of non-zero memory is passed in the body. This affects Request >=2.2.6 <2.47.0 || >2.51.0 <=2.67.0. 
<p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16026>CVE-2017-16026</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-16026">https://nvd.nist.gov/vuln/detail/CVE-2017-16026</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution: 2.47.1,2.67.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in request tgz cve medium severity vulnerability vulnerable library request tgz simplified http request client library home page a href path to dependency file nexus iq chrome extension release src scripts lib jquery ui package json path to vulnerable library nexus iq chrome extension release src scripts lib jquery ui node modules testswarm node modules request package json nexus iq chrome extension src scripts lib jquery ui node modules testswarm node modules request package json dependency hierarchy testswarm tgz root library x request tgz vulnerable library found in base branch fixversionhistory gh vulnerability details request is an http client if a request is made using multipart and the body type is a number then the specified number of non zero memory is passed in the body this affects request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
54,104
13,393,540,035
IssuesEvent
2020-09-03 04:32:09
fieldenms/tg
https://api.github.com/repos/fieldenms/tg
closed
Simple master shows hamburger button
Defect Entity master Pull request UI / UX
### Description refs #1533 After detaching and then attaching dialog with compound master to document DOM `tg-master-menu` fires event that indicates that menu was attached and adds hamburger button again to dialog. In order to prevent compound master from firing this event twice one should `offloadDom` of compound master when it gets detached from document DOM. ### Expected outcome Simple master will appear without hamburger button after compound master was opened and closed.
1.0
Simple master shows hamburger button - ### Description refs #1533 After detaching and then attaching dialog with compound master to document DOM `tg-master-menu` fires event that indicates that menu was attached and adds hamburger button again to dialog. In order to prevent compound master from firing this event twice one should `offloadDom` of compound master when it gets detached from document DOM. ### Expected outcome Simple master will appear without hamburger button after compound master was opened and closed.
defect
simple master shows hamburger button description refs after detaching and then attaching dialog with compound master to document dom tg master menu fires event that indicates that menu was attached and adds hamburger button again to dialog in order to prevent compound master from firing this event twice one should offloaddom of compound master when it gets detached from document dom expected outcome simple master will appear without hamburger button after compound master was opened and closed
1
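The defect pattern in the fieldenms/tg record above (a menu that re-registers its hamburger button every time it is attached to the DOM, because nothing is offloaded on detach) can be mimicked with plain classes. `MasterMenu`, `attached`, and `detached` are invented stand-ins for illustration, not the real `tg-master-menu` API:

```javascript
// Attach/detach lifecycle sketch: state added on every attach must be
// offloaded on detach, or a detach/re-attach cycle duplicates it.

class MasterMenu {
  constructor() { this.buttons = []; }
  attached() { this.buttons.push("hamburger"); } // fires on every attach
  detached() { /* missing cleanup: the bug */ }
}

class FixedMasterMenu extends MasterMenu {
  detached() { this.buttons = []; } // "offload" state when detached
}

function cycle(menu) {
  menu.attached();
  menu.detached(); // dialog closed and removed from the document
  menu.attached(); // dialog reopened
}

const buggy = new MasterMenu();
const fixed = new FixedMasterMenu();
cycle(buggy);
cycle(fixed);
console.log(buggy.buttons.length, fixed.buttons.length); // 2 1
```

The same discipline applies to any re-attachable component: whatever the attach hook registers, the detach hook should unregister.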
96,536
8,617,573,319
IssuesEvent
2018-11-20 06:17:23
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
closed
projecttesting20 : ApiV1ProjectsIdSearchAutoSuggestionsSearchStatusGetQueryParamPagesizeNegativeNumber
projecttesting20
Project : projecttesting20 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=Y2NhMDdlMTUtNTkwYi00ZTdlLWIyMjctNmRhMDBiNDE4YzQw; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Tue, 20 Nov 2018 06:14:12 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/projects/ItwKscgV/search-auto-suggestions/search/ItwKscgV?pageSize=-1 Request : Response : { "timestamp" : "2018-11-20T06:14:12.568+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/projects/ItwKscgV/search-auto-suggestions/search/ItwKscgV" } Logs : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
1.0
projecttesting20 : ApiV1ProjectsIdSearchAutoSuggestionsSearchStatusGetQueryParamPagesizeNegativeNumber - Project : projecttesting20 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 404 Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=Y2NhMDdlMTUtNTkwYi00ZTdlLWIyMjctNmRhMDBiNDE4YzQw; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Tue, 20 Nov 2018 06:14:12 GMT]} Endpoint : http://13.56.210.25/api/v1/api/v1/projects/ItwKscgV/search-auto-suggestions/search/ItwKscgV?pageSize=-1 Request : Response : { "timestamp" : "2018-11-20T06:14:12.568+0000", "status" : 404, "error" : "Not Found", "message" : "No message available", "path" : "/api/v1/api/v1/projects/ItwKscgV/search-auto-suggestions/search/ItwKscgV" } Logs : Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed] --- FX Bot ---
non_defect
project job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api projects itwkscgv search auto suggestions search itwkscgv logs assertion resolved to result assertion resolved to result fx bot
0
180,426
21,625,735,297
IssuesEvent
2022-05-05 01:42:08
jnfaerch/skillsupp
https://api.github.com/repos/jnfaerch/skillsupp
opened
CVE-2022-27777 (Medium) detected in actionview-5.1.5.gem
security vulnerability
## CVE-2022-27777 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>actionview-5.1.5.gem</b></p></summary> <p>Simple, battle-tested conventions and helpers for building web pages.</p> <p>Library home page: <a href="https://rubygems.org/gems/actionview-5.1.5.gem">https://rubygems.org/gems/actionview-5.1.5.gem</a></p> <p> Dependency Hierarchy: - rails-5.1.5.gem (Root Library) - :x: **actionview-5.1.5.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jnfaerch/skillsupp/commit/55010d67cd874d1e661f01130ef08fbca55fa0ea">55010d67cd874d1e661f01130ef08fbca55fa0ea</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> There is a possible XSS vulnerability in Action View tag helpers. Passing untrusted input as hash keys can lead to a possible XSS vulnerability. Fixed Versions: 7.0.2.4, 6.1.5.1, 6.0.4.8, 5.2.7.1 <p>Publish Date: 2022-03-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27777>CVE-2022-27777</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-ch3h-j2vf-95pv">https://github.com/advisories/GHSA-ch3h-j2vf-95pv</a></p> <p>Release Date: 2022-03-24</p> <p>Fix Resolution: actionview - 5.2.7.1,6.0.4.8,6.1.5.1,7.0.2.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-27777 (Medium) detected in actionview-5.1.5.gem - ## CVE-2022-27777 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>actionview-5.1.5.gem</b></p></summary> <p>Simple, battle-tested conventions and helpers for building web pages.</p> <p>Library home page: <a href="https://rubygems.org/gems/actionview-5.1.5.gem">https://rubygems.org/gems/actionview-5.1.5.gem</a></p> <p> Dependency Hierarchy: - rails-5.1.5.gem (Root Library) - :x: **actionview-5.1.5.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/jnfaerch/skillsupp/commit/55010d67cd874d1e661f01130ef08fbca55fa0ea">55010d67cd874d1e661f01130ef08fbca55fa0ea</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> There is a possible XSS vulnerability in Action View tag helpers. Passing untrusted input as hash keys can lead to a possible XSS vulnerability. Fixed Versions: 7.0.2.4, 6.1.5.1, 6.0.4.8, 5.2.7.1 <p>Publish Date: 2022-03-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-27777>CVE-2022-27777</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-ch3h-j2vf-95pv">https://github.com/advisories/GHSA-ch3h-j2vf-95pv</a></p> <p>Release Date: 2022-03-24</p> <p>Fix Resolution: actionview - 5.2.7.1,6.0.4.8,6.1.5.1,7.0.2.4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in actionview gem cve medium severity vulnerability vulnerable library actionview gem simple battle tested conventions and helpers for building web pages library home page a href dependency hierarchy rails gem root library x actionview gem vulnerable library found in head commit a href vulnerability details there is a possible xss vulnerability in action view tag helpers passing untrusted input as hash keys can lead to a possible xss vulnerability fixed versions publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution actionview step up your open source security game with whitesource
0
71,246
30,841,870,341
IssuesEvent
2023-08-02 11:13:50
hashicorp/terraform-provider-azurerm
https://api.github.com/repos/hashicorp/terraform-provider-azurerm
closed
azurerm_api_management_product_group "provider[\"registry.terraform.io/hashicorp/azurerm\"]" produced an unexpected new value: Root resource was present, but now absent.
service/api-management v/3.x
### Is there an existing issue for this? - [X] I have searched the existing issues ### Community Note <!--- Please keep this note for the community ---> * Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the [contribution guide](https://github.com/hashicorp/terraform-provider-azurerm/blob/main/contributing/README.md) to help. <!--- Thank you for keeping this note for the community ---> ### Terraform Version Terraform v1.5.4 & 0.14.11 ### AzureRM Provider Version 3.56 ### Affected Resource(s)/Data Source(s) Resource azurerm_api_management_product_group ### Terraform Configuration Files ```hcl resource "azurerm_api_management_product" "product" { product_id = "da-qa" api_management_name = data.azurerm_api_management.APIM.name resource_group_name = data.azurerm_api_management.APIM.resource_group_name display_name = "da-qa" subscription_required = false approval_required = false published = true } ############# Assigning groups to the product ######################## resource "azurerm_api_management_product_group" "product_group" { product_id = "da-qa" group_name = "Guests" api_management_name = data.azurerm_api_management.APIM.name resource_group_name = data.azurerm_api_management.APIM.resource_group_name depends_on = [azurerm_api_management_product.product] } ``` ### Debug Output/Panic Output ```shell https://gist.github.com/piyushjain-15/1cf63fada4654e7e44d7f1b08e0246ae ``` ### Expected Behaviour Guest group should be added to API Product. 
### Actual Behaviour On checking Guest group is added to API Product but it is not available in terraform state and getting error. ### Steps to Reproduce Apply above configuration file with the correct values. ### Important Factoids _No response_ ### References _No response_
1.0
azurerm_api_management_product_group "provider[\"registry.terraform.io/hashicorp/azurerm\"]" produced an unexpected new value: Root resource was present, but now absent. - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Community Note <!--- Please keep this note for the community ---> * Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request * Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request * If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the [contribution guide](https://github.com/hashicorp/terraform-provider-azurerm/blob/main/contributing/README.md) to help. <!--- Thank you for keeping this note for the community ---> ### Terraform Version Terraform v1.5.4 & 0.14.11 ### AzureRM Provider Version 3.56 ### Affected Resource(s)/Data Source(s) Resource azurerm_api_management_product_group ### Terraform Configuration Files ```hcl resource "azurerm_api_management_product" "product" { product_id = "da-qa" api_management_name = data.azurerm_api_management.APIM.name resource_group_name = data.azurerm_api_management.APIM.resource_group_name display_name = "da-qa" subscription_required = false approval_required = false published = true } ############# Assigning groups to the product ######################## resource "azurerm_api_management_product_group" "product_group" { product_id = "da-qa" group_name = "Guests" api_management_name = data.azurerm_api_management.APIM.name resource_group_name = data.azurerm_api_management.APIM.resource_group_name depends_on = [azurerm_api_management_product.product] } ``` ### Debug Output/Panic Output ```shell https://gist.github.com/piyushjain-15/1cf63fada4654e7e44d7f1b08e0246ae 
``` ### Expected Behaviour Guest group should be added to API Product. ### Actual Behaviour On checking Guest group is added to API Product but it is not available in terraform state and getting error. ### Steps to Reproduce Apply above configuration file with the correct values. ### Important Factoids _No response_ ### References _No response_
non_defect
azurerm api management product group provider produced an unexpected new value root resource was present but now absent is there an existing issue for this i have searched the existing issues community note please vote on this issue by adding a thumbsup to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment and review the to help terraform version terraform azurerm provider version affected resource s data source s resource azurerm api management product group terraform configuration files hcl resource azurerm api management product product product id da qa api management name data azurerm api management apim name resource group name data azurerm api management apim resource group name display name da qa subscription required false approval required false published true assigning groups to the product resource azurerm api management product group product group product id da qa group name guests api management name data azurerm api management apim name resource group name data azurerm api management apim resource group name depends on debug output panic output shell expected behaviour guest group should be added to api product actual behaviour on checking guest group is added to api product but it is not available in terraform state and getting error steps to reproduce apply above configuration file with the correct values important factoids no response references no response
0
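The record above illustrates the field layout each row follows: a raw `text_combine` string (title plus body), then a normalized `text` field in which the content is lowercased with URLs, digits, and punctuation stripped and whitespace collapsed, and finally a numeric `binary_label`. The exact pipeline that produced these rows is not shown here, but a minimal sketch of a normalization that reproduces the pattern visible in the data (regex-based cleaning; the function name `normalize` is an assumption for illustration) could look like:

```python
import re

def normalize(text_combine: str) -> str:
    """Approximate the dataset's `text` field: lowercase the input,
    drop URLs, keep only ASCII letters and spaces, collapse whitespace."""
    s = text_combine.lower()
    s = re.sub(r"https?://\S+", " ", s)    # drop URLs before stripping punctuation
    s = re.sub(r"[^a-z\s]", " ", s)        # remove digits and punctuation
    return re.sub(r"\s+", " ", s).strip()  # collapse runs of whitespace
```

For example, `normalize("Add different activities for different things. - The current way")` yields `add different activities for different things the current way`, matching the cleaned `text` field of the record that follows.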
275,081
8,571,139,882
IssuesEvent
2018-11-12 02:12:11
SEG2105-Group/JobZi
https://api.github.com/repos/SEG2105-Group/JobZi
opened
Add different activities for different things.
low-priority
The current way different "views" are handled is by using fragments. This can be improved. Other activities should follow the example of `UserActivity` in terms of how modular they should be/what they should display (even `UserActivity` should be renamed to something like `LandingActivity` or `HomeActivity`). These are the following activities that need to be added: - [ ] `ProfileActivity` - This should display the user's full profile (i.e. every field in the `User` class) and allow the user to edit *most* fields (i.e. not the account type, username, or email). - Note this should replace some of the menu items in the `NavigationDrawer`. This is important due to the menu items abruptly cutting off entire words when the text is too long and/or the screen is too small (without anyway around this). These views were not intended for this purpose anyway. - [ ] `AdminPanelActivity` - This should contain the `ViewPager` which holds the two fragments (`UserListFragment` and `ServiceListFragment`) which is currently in the `AdminFragment` (i.e. this activity - `AdminPanelActivity` - should replace the `AdminFragment` entirely). Might also allow removing of the two list fragments as well (although the `ServiceListFragment` might be used for other users to request/offer services).
1.0
Add different activities for different things. - The current way different "views" are handled is by using fragments. This can be improved. Other activities should follow the example of `UserActivity` in terms of how modular they should be/what they should display (even `UserActivity` should be renamed to something like `LandingActivity` or `HomeActivity`). These are the following activities that need to be added: - [ ] `ProfileActivity` - This should display the user's full profile (i.e. every field in the `User` class) and allow the user to edit *most* fields (i.e. not the account type, username, or email). - Note this should replace some of the menu items in the `NavigationDrawer`. This is important due to the menu items abruptly cutting off entire words when the text is too long and/or the screen is too small (without anyway around this). These views were not intended for this purpose anyway. - [ ] `AdminPanelActivity` - This should contain the `ViewPager` which holds the two fragments (`UserListFragment` and `ServiceListFragment`) which is currently in the `AdminFragment` (i.e. this activity - `AdminPanelActivity` - should replace the `AdminFragment` entirely). Might also allow removing of the two list fragments as well (although the `ServiceListFragment` might be used for other users to request/offer services).
non_defect
add different activities for different things the current way different views are handled is by using fragments this can be improved other activities should follow the example of useractivity in terms of how modular they should be what they should display even useractivity should be renamed to something like landingactivity or homeactivity these are the following activities that need to be added profileactivity this should display the user s full profile i e every field in the user class and allow the user to edit most fields i e not the account type username or email note this should replace some of the menu items in the navigationdrawer this is important due to the menu items abruptly cutting off entire words when the text is too long and or the screen is too small without anyway around this these views were not intended for this purpose anyway adminpanelactivity this should contain the viewpager which holds the two fragments userlistfragment and servicelistfragment which is currently in the adminfragment i e this activity adminpanelactivity should replace the adminfragment entirely might also allow removing of the two list fragments as well although the servicelistfragment might be used for other users to request offer services
0
125,947
26,753,901,292
IssuesEvent
2023-01-30 22:03:15
ClickHouse/ClickHouse
https://api.github.com/repos/ClickHouse/ClickHouse
closed
generateRandom does not support some data types (Map, IPv4)
easy task unfinished code
**Describe the issue** Working on generating test data, I received at least the following errors: ``` The 'GenerateRandom' is not implemented for type IPv4: While executing GenerateRandom. (NOT_IMPLEMENTED) (version 22.13.1.1) The 'GenerateRandom' is not implemented for type Map: While executing GenerateRandom. (NOT_IMPLEMENTED) (version 22.13.1.1) # And when trying Enum types DB::Exception: Syntax error: ... Unmatched parentheses: (. ``` The [documentation](https://clickhouse.com/docs/en/sql-reference/table-functions/generate/) currently says ` Supports all data types that can be stored in table except LowCardinality and AggregateFunction.` which appears to be incorrect.
1.0
generateRandom does not support some data types (Map, IPv4) - **Describe the issue** Working on generating test data, I received at least the following errors: ``` The 'GenerateRandom' is not implemented for type IPv4: While executing GenerateRandom. (NOT_IMPLEMENTED) (version 22.13.1.1) The 'GenerateRandom' is not implemented for type Map: While executing GenerateRandom. (NOT_IMPLEMENTED) (version 22.13.1.1) # And when trying Enum types DB::Exception: Syntax error: ... Unmatched parentheses: (. ``` The [documentation](https://clickhouse.com/docs/en/sql-reference/table-functions/generate/) currently says ` Supports all data types that can be stored in table except LowCardinality and AggregateFunction.` which appears to be incorrect.
non_defect
generaterandom does not support some data types map describe the issue working on generating test data i received at least the following errors the generaterandom is not implemented for type while executing generaterandom not implemented version the generaterandom is not implemented for type map while executing generaterandom not implemented version and when trying enum types db exception syntax error unmatched parentheses the currently says supports all data types that can be stored in table except lowcardinality and aggregatefunction which appears to be incorrect
0
822,942
30,921,665,701
IssuesEvent
2023-08-06 01:16:24
azerothcore/azerothcore-wotlk
https://api.github.com/repos/azerothcore/azerothcore-wotlk
closed
[Quest] Summon Robot does not get interrupted when dismissing the robot during A Dip in the Moonwell
Confirmed Quest 20-29 Priority-Low
https://github.com/chromiecraft/chromiecraft/issues/4268 ### What client do you play on? enUS ### Faction Horde ### Content Phase: 20-29 ### Current Behaviour Player keeps channeling when dismissing the summoned robot ### Expected Blizzlike Behaviour Channeling gets interrupted when dismissing the summoned robot ### Source https://youtu.be/3zAVZypGdGY?t=61 ### Steps to reproduce the problem .go xyz -4514 -848 -41 .quest add 9433 Use the quest item Right click on the Robotron Portrait and dismiss ### Extra Notes https://wowgaming.altervista.org/aowow/?quest=9433 ### AC rev. hash/commit https://github.com/chromiecraft/azerothcore-wotlk/commit/b427e8e18cb6baa80124ce8e3a7e79aa670bcc74 ### Operating system Ubuntu 20.04 ### Modules - [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot) - [mod-bg-item-reward](https://github.com/azerothcore/mod-bg-item-reward) - [mod-cfbg](https://github.com/azerothcore/mod-cfbg) - [mod-chat-transmitter](https://github.com/azerothcore/mod-chat-transmitter) - [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp) - [mod-cta-switch](https://github.com/azerothcore/mod-cta-switch) - [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings) - [mod-dmf-switch](https://github.com/azerothcore/mod-dmf-switch) - [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset) - [mod-eluna](https://github.com/azerothcore/mod-eluna) - [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker) - [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena) - [mod-low-level-rbg](https://github.com/azerothcore/mod-low-level-rbg) - [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check) - [mod-progression-system](https://github.com/azerothcore/mod-progression-system) - [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles) - [mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer) - 
[mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache) - [mod-rdf-expansion](https://github.com/azerothcore/mod-rdf-expansion) - [mod-transmog](https://github.com/azerothcore/mod-transmog) - [mod-weekend-xp](https://github.com/azerothcore/mod-weekend-xp) - [mod-instanced-worldbosses](https://github.com/nyeriah/mod-instanced-worldbosses) - [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy) - [lua-exchange-npc](https://github.com/55Honey/Acore_ExchangeNpc) - [lua-custom-worldboss](https://github.com/55Honey/Acore_CustomWorldboss) - [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward) - [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend) - [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind) - [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements) - [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck) - [lua-zone-debuff](https://github.com/55Honey/Acore_ZoneDebuff) ### Customizations None ### Server ChromieCraft
1.0
[Quest] Summon Robot does not get interrupted when dismissing the robot during A Dip in the Moonwell - https://github.com/chromiecraft/chromiecraft/issues/4268 ### What client do you play on? enUS ### Faction Horde ### Content Phase: 20-29 ### Current Behaviour Player keeps channeling when dismissing the summoned robot ### Expected Blizzlike Behaviour Channeling gets interrupted when dismissing the summoned robot ### Source https://youtu.be/3zAVZypGdGY?t=61 ### Steps to reproduce the problem .go xyz -4514 -848 -41 .quest add 9433 Use the quest item Right click on the Robotron Portrait and dismiss ### Extra Notes https://wowgaming.altervista.org/aowow/?quest=9433 ### AC rev. hash/commit https://github.com/chromiecraft/azerothcore-wotlk/commit/b427e8e18cb6baa80124ce8e3a7e79aa670bcc74 ### Operating system Ubuntu 20.04 ### Modules - [mod-ah-bot](https://github.com/azerothcore/mod-ah-bot) - [mod-bg-item-reward](https://github.com/azerothcore/mod-bg-item-reward) - [mod-cfbg](https://github.com/azerothcore/mod-cfbg) - [mod-chat-transmitter](https://github.com/azerothcore/mod-chat-transmitter) - [mod-chromie-xp](https://github.com/azerothcore/mod-chromie-xp) - [mod-cta-switch](https://github.com/azerothcore/mod-cta-switch) - [mod-desertion-warnings](https://github.com/azerothcore/mod-desertion-warnings) - [mod-dmf-switch](https://github.com/azerothcore/mod-dmf-switch) - [mod-duel-reset](https://github.com/azerothcore/mod-duel-reset) - [mod-eluna](https://github.com/azerothcore/mod-eluna) - [mod-ip-tracker](https://github.com/azerothcore/mod-ip-tracker) - [mod-low-level-arena](https://github.com/azerothcore/mod-low-level-arena) - [mod-low-level-rbg](https://github.com/azerothcore/mod-low-level-rbg) - [mod-multi-client-check](https://github.com/azerothcore/mod-multi-client-check) - [mod-progression-system](https://github.com/azerothcore/mod-progression-system) - [mod-pvp-titles](https://github.com/azerothcore/mod-pvp-titles) - 
[mod-pvpstats-announcer](https://github.com/azerothcore/mod-pvpstats-announcer) - [mod-queue-list-cache](https://github.com/azerothcore/mod-queue-list-cache) - [mod-rdf-expansion](https://github.com/azerothcore/mod-rdf-expansion) - [mod-transmog](https://github.com/azerothcore/mod-transmog) - [mod-weekend-xp](https://github.com/azerothcore/mod-weekend-xp) - [mod-instanced-worldbosses](https://github.com/nyeriah/mod-instanced-worldbosses) - [lua-carbon-copy](https://github.com/55Honey/Acore_CarbonCopy) - [lua-exchange-npc](https://github.com/55Honey/Acore_ExchangeNpc) - [lua-custom-worldboss](https://github.com/55Honey/Acore_CustomWorldboss) - [lua-level-up-reward](https://github.com/55Honey/Acore_LevelUpReward) - [lua-recruit-a-friend](https://github.com/55Honey/Acore_RecruitAFriend) - [lua-send-and-bind](https://github.com/55Honey/Acore_SendAndBind) - [lua-temp-announcements](https://github.com/55Honey/Acore_TempAnnouncements) - [lua-zonecheck](https://github.com/55Honey/acore_Zonecheck) - [lua-zone-debuff](https://github.com/55Honey/Acore_ZoneDebuff) ### Customizations None ### Server ChromieCraft
non_defect
summon robot does not get interrupted when dismissing the robot during a dip in the moonwell what client do you play on enus faction horde content phase current behaviour player keeps channeling when dismissing the summoned robot expected blizzlike behaviour channeling gets interrupted when dismissing the summoned robot source steps to reproduce the problem go xyz quest add use the quest item right click on the robotron portrait and dismiss extra notes ac rev hash commit operating system ubuntu modules customizations none server chromiecraft
0
56,271
6,515,379,358
IssuesEvent
2017-08-26 14:59:55
modelica/Modelica
https://api.github.com/repos/modelica/Modelica
closed
Test library for Modelica tool capability
discussion L: ModelicaTest worksforme
**Reported by fcasella on 26 Sep 2013 14:57 UTC** The ModelicaTest library is designed to test the models of the MSL, and only as a side effect to test the ability of Modelica tool to use all the language features and solve all the mathematical problems that show up in the MSL. A library to test the capabilities of Modelica tools to solve certain classes of mathematical problems that can be formulated using Modelica, and possibly to do it efficiently or reliably, in parallel, over the cloud, etc, should be promoted and maintained by the MA. As such, it should contain categorized test cases for difficult initialization problems, numerically critical models, high-index models, overconstrained models, very large sparse models, hybrid models, etc. It might use the MSL when that is convenient, but that won't necessarily be appropriate, as MSL models tend to be rather cumbersome and might not be optimal to spot problems and solve them quickly. Tool vendors already have such libraries for internal testing and development; for instance, the Open Source Modelica Consortium has developed a large test suite for internal development of OMC. It would be interesting and in the spirit of the Modelica library to combine all these efforts together for the benefit of the community, and also to facilitate new tool vendors in their effort to provide good Modelica tools. ---- Migrated-From: https://trac.modelica.org/Modelica/ticket/1291
1.0
Test library for Modelica tool capability - **Reported by fcasella on 26 Sep 2013 14:57 UTC** The ModelicaTest library is designed to test the models of the MSL, and only as a side effect to test the ability of Modelica tool to use all the language features and solve all the mathematical problems that show up in the MSL. A library to test the capabilities of Modelica tools to solve certain classes of mathematical problems that can be formulated using Modelica, and possibly to do it efficiently or reliably, in parallel, over the cloud, etc, should be promoted and maintained by the MA. As such, it should contain categorized test cases for difficult initialization problems, numerically critical models, high-index models, overconstrained models, very large sparse models, hybrid models, etc. It might use the MSL when that is convenient, but that won't necessarily be appropriate, as MSL models tend to be rather cumbersome and might not be optimal to spot problems and solve them quickly. Tool vendors already have such libraries for internal testing and development; for instance, the Open Source Modelica Consortium has developed a large test suite for internal development of OMC. It would be interesting and in the spirit of the Modelica library to combine all these efforts together for the benefit of the community, and also to facilitate new tool vendors in their effort to provide good Modelica tools. ---- Migrated-From: https://trac.modelica.org/Modelica/ticket/1291
non_defect
test library for modelica tool capability reported by fcasella on sep utc the modelicatest library is designed to test the models of the msl and only as a side effect to test the ability of modelica tool to use all the language features and solve all the mathematical problems that show up in the msl a library to test the capabilities of modelica tools to solve certain classes of mathematical problems that can be formulated using modelica and possibly to do it efficiently or reliably in parallel over the cloud etc should be promoted and maintained by the ma as such it should contain categorized test cases for difficult initialization problems numerically critical models high index models overconstrained models very large sparse models hybrid models etc it might use the msl when that is convenient but that won t necessarily be appropriate as msl models tend to be rather cumbersome and might not be optimal to spot problems and solve them quickly tool vendors already have such libraries for internal testing and development for instance the open source modelica consortium has developed a large test suite for internal development of omc it would be interesting and in the spirit of the modelica library to combine all these efforts together for the benefit of the community and also to facilitate new tool vendors in their effort to provide good modelica tools migrated from
0
204,521
15,934,310,624
IssuesEvent
2021-04-14 08:31:42
playcanvas/editor
https://api.github.com/repos/playcanvas/editor
closed
Unable to pick an asset for Render Component
area: asset import documentation
It looks like I am not able to select an asset for Render Component in the Editor. It simply doesn't see any assets in the folder, which contains GLB and JSON models. All of them can be used in a Model Component, though. Edit: Ok, I see, it is because it requires a container type of an asset. However, isn't the parsed FBX > GLB is a container? Or do I need to use pc.ContainerResource explicitly? I also saw this forum topic: https://forum.playcanvas.com/t/is-it-possible-to-add-rigid-body-and-collision-to-a-glb-file-which-i-have-generated-with-a-script/18487 I tried to set an attribute to binary, but it doesn't see the asset in the folder. I guess it is because the GLB is a parsed one and not the original? I tried to load it via loadFromUrl, and it complained about the GLB magic word in the header.
1.0
Unable to pick an asset for Render Component - It looks like I am not able to select an asset for Render Component in the Editor. It simply doesn't see any assets in the folder, which contains GLB and JSON models. All of them can be used in a Model Component, though. Edit: Ok, I see, it is because it requires a container type of an asset. However, isn't the parsed FBX > GLB is a container? Or do I need to use pc.ContainerResource explicitly? I also saw this forum topic: https://forum.playcanvas.com/t/is-it-possible-to-add-rigid-body-and-collision-to-a-glb-file-which-i-have-generated-with-a-script/18487 I tried to set an attribute to binary, but it doesn't see the asset in the folder. I guess it is because the GLB is a parsed one and not the original? I tried to load it via loadFromUrl, and it complained about the GLB magic word in the header.
non_defect
unable to pick an asset for render component it looks like i am not able to select an asset for render component in the editor it simply doesn t see any assets in the folder which contains glb and json models all of them can be used in a model component though edit ok i see it is because it requires a container type of an asset however isn t the parsed fbx glb is a container or do i need to use pc containerresource explicitly i also saw this forum topic i tried to set an attribute to binary but it doesn t see the asset in the folder i guess it is because the glb is a parsed one and not the original i tried to load it via loadfromurl and it complained about the glb magic word in the header
0
187,106
22,031,532,451
IssuesEvent
2022-05-28 00:39:17
vincenzodistasio97/Slack-Clone
https://api.github.com/repos/vincenzodistasio97/Slack-Clone
closed
WS-2021-0152 (High) detected in color-string-1.5.3.tgz - autoclosed
security vulnerability
## WS-2021-0152 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>color-string-1.5.3.tgz</b></p></summary> <p>Parser and generator for CSS color strings</p> <p>Library home page: <a href="https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz">https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz</a></p> <p>Path to dependency file: /server/package.json</p> <p>Path to vulnerable library: /server/node_modules/color-string/package.json</p> <p> Dependency Hierarchy: - winston-3.1.0.tgz (Root Library) - diagnostics-1.1.1.tgz - colorspace-1.1.1.tgz - color-3.0.0.tgz - :x: **color-string-1.5.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/Slack-Clone/commit/125be6381c29e8f8e1d4b2fed216db288fad9798">125be6381c29e8f8e1d4b2fed216db288fad9798</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Regular Expression Denial of Service (ReDoS) was found in color-string before 1.5.5. <p>Publish Date: 2021-03-12 <p>URL: <a href=https://github.com/Qix-/color-string/commit/0789e21284c33d89ebc4ab4ca6f759b9375ac9d3>WS-2021-0152</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/Qix-/color-string/releases/tag/1.5.5">https://github.com/Qix-/color-string/releases/tag/1.5.5</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution (color-string): 1.5.5</p> <p>Direct dependency fix Resolution (winston): 3.2.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2021-0152 (High) detected in color-string-1.5.3.tgz - autoclosed - ## WS-2021-0152 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>color-string-1.5.3.tgz</b></p></summary> <p>Parser and generator for CSS color strings</p> <p>Library home page: <a href="https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz">https://registry.npmjs.org/color-string/-/color-string-1.5.3.tgz</a></p> <p>Path to dependency file: /server/package.json</p> <p>Path to vulnerable library: /server/node_modules/color-string/package.json</p> <p> Dependency Hierarchy: - winston-3.1.0.tgz (Root Library) - diagnostics-1.1.1.tgz - colorspace-1.1.1.tgz - color-3.0.0.tgz - :x: **color-string-1.5.3.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/vincenzodistasio97/Slack-Clone/commit/125be6381c29e8f8e1d4b2fed216db288fad9798">125be6381c29e8f8e1d4b2fed216db288fad9798</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Regular Expression Denial of Service (ReDoS) was found in color-string before 1.5.5. 
<p>Publish Date: 2021-03-12 <p>URL: <a href=https://github.com/Qix-/color-string/commit/0789e21284c33d89ebc4ab4ca6f759b9375ac9d3>WS-2021-0152</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/Qix-/color-string/releases/tag/1.5.5">https://github.com/Qix-/color-string/releases/tag/1.5.5</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution (color-string): 1.5.5</p> <p>Direct dependency fix Resolution (winston): 3.2.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
ws high detected in color string tgz autoclosed ws high severity vulnerability vulnerable library color string tgz parser and generator for css color strings library home page a href path to dependency file server package json path to vulnerable library server node modules color string package json dependency hierarchy winston tgz root library diagnostics tgz colorspace tgz color tgz x color string tgz vulnerable library found in head commit a href found in base branch master vulnerability details regular expression denial of service redos was found in color string before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution color string direct dependency fix resolution winston step up your open source security game with whitesource
0
81,058
30,690,276,252
IssuesEvent
2023-07-26 14:46:15
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Double count for new messages
T-Defect
### Steps to reproduce 1. Receive one message (in a DM) ### Outcome #### What did you expect? See one unread message in the left panel for the room #### What happened instead? See that unread count is 2 in the left panel for the room ### Operating system _No response_ ### Browser information Chromium 113.0.5672.92 (Official Build) Arch Linux (64-bit) ### URL for webapp develop.element.io ### Application version Element version: cc8afed1968f-react-9319911a27ad-js-0e95df5dba26 Olm version: 3.2.14 ### Homeserver matrix.org ### Will you send logs? Yes
1.0
Double count for new messages - ### Steps to reproduce 1. Receive one message (in a DM) ### Outcome #### What did you expect? See one unread message in the left panel for the room #### What happened instead? See that unread count is 2 in the left panel for the room ### Operating system _No response_ ### Browser information Chromium 113.0.5672.92 (Official Build) Arch Linux (64-bit) ### URL for webapp develop.element.io ### Application version Element version: cc8afed1968f-react-9319911a27ad-js-0e95df5dba26 Olm version: 3.2.14 ### Homeserver matrix.org ### Will you send logs? Yes
defect
double count for new messages steps to reproduce receive one message in a dm outcome what did you expect see one unread message in the left panel for the room what happened instead see that unread count is in the left panel for the room operating system no response browser information chromium official build arch linux bit url for webapp develop element io application version element version react js olm version homeserver matrix org will you send logs yes
1
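Across the records above, the string `label` field ("defect" / "non_defect") consistently pairs with the numeric `binary_label` (1 / 0). A minimal sketch of that mapping, assuming "defect" is the positive class and everything else maps to 0 (the function name `to_binary` is an assumption for illustration):

```python
def to_binary(label: str) -> int:
    # Map the dataset's string label to its binary target:
    # "defect" -> 1, anything else (e.g. "non_defect") -> 0
    return 1 if label.strip().lower() == "defect" else 0
```

This matches the rows shown here, e.g. the final record ("Double count for new messages") carries `label = defect` and `binary_label = 1`, while the preceding records carry `non_defect` and `0`.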