| Unnamed: 0 (int64, 0 to 832k) | id (float64, 2.49B to 32.1B) | type (1 string class) | created_at (string, length 19) | repo (string, length 5 to 112) | repo_url (string, length 34 to 141) | action (3 string classes) | title (string, length 1 to 1k) | labels (string, length 4 to 1.38k) | body (string, length 1 to 262k) | index (16 string classes) | text_combine (string, length 96 to 262k) | label (2 string classes) | text (string, length 96 to 252k) | binary_label (int64, 0 or 1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
129,980 | 17,950,361,115 | IssuesEvent | 2021-09-12 16:00:10 | decred/politeiagui | https://api.github.com/repos/decred/politeiagui | closed | [redesign] Post-vote instructions | Needs design specs | When proposal successfully votes, instruct its owner what to do next. For example, send him an email linking to a relevant docs page that tells how and when to invoice. Open an issue in dcrdocs for that page.
Later we may also add some guidance what to do if proposal failed. E.g. if it failed at 40-50%, it means there was definitely some interest and the author should adjust and try again. | 1.0 | [redesign] Post-vote instructions - When proposal successfully votes, instruct its owner what to do next. For example, send him an email linking to a relevant docs page that tells how and when to invoice. Open an issue in dcrdocs for that page.
Later we may also add some guidance what to do if proposal failed. E.g. if it failed at 40-50%, it means there was definitely some interest and the author should adjust and try again. | non_priority | post vote instructions when proposal successfully votes instruct its owner what to do next for example send him an email linking to a relevant docs page that tells how and when to invoice open an issue in dcrdocs for that page later we may also add some guidance what to do if proposal failed e g if it failed at it means there was definitely some interest and the author should adjust and try again | 0 |
29,058 | 5,515,183,470 | IssuesEvent | 2017-03-17 16:48:48 | EOxServer/eoxserver | https://api.github.com/repos/EOxServer/eoxserver | opened | wrong assertion in xmltools.py | Priority: trivial Type: defect | After the recent 0.4 merges the Django commands started complaining about the wrong assertion:
```
/usr/local/eoxserver/eoxserver/core/util/xmltools.py:144: SyntaxWarning: assertion is always true, perhaps remove parentheses?
```
The actual code was added in cef1df1a5a065619473c393cf417158588f2bd75 and it does:
```
assert (
not element.text,
"Can't add a CDATA section. Element already has some text: %r"
% element.text
)
```
(The tuple is always cast to `True` regardless of the tested condition.)
Please fix it to get rid of that annoying warning. | 1.0 | wrong assertion in xmltools.py - After the recent 0.4 merges the Django commands started complaining about the wrong assertion:
```
/usr/local/eoxserver/eoxserver/core/util/xmltools.py:144: SyntaxWarning: assertion is always true, perhaps remove parentheses?
```
The actual code was added in cef1df1a5a065619473c393cf417158588f2bd75 and it does:
```
assert (
not element.text,
"Can't add a CDATA section. Element already has some text: %r"
% element.text
)
```
(The tuple is always cast to `True` regardless of the tested condition.)
Please fix it to get rid of that annoying warning. | non_priority | wrong assertion in xmltools py after the recent merges the django commands started complaining about the wrong assertion usr local eoxserver eoxserver core util xmltools py syntaxwarning assertion is always true perhaps remove parentheses the actual code was added in and it does assert not element text can t add a cdata section element already has some text r element text the tuple is always cast to true regardless of the tested condition please fix it to get rid of that annoying warning | 0 |
596,014 | 18,094,375,723 | IssuesEvent | 2021-09-22 07:22:46 | elan-ev/opencast-editor | https://api.github.com/repos/elan-ev/opencast-editor | closed | Screen reader error in "Save and Process" | bug accessibility priority:low | The button "Start processing" is read two times. Also the buttons are weirdly named. -Wien | 1.0 | Screen reader error in "Save and Process" - The button "Start processing" is read two times. Also the buttons are weirdly named. -Wien | priority | screen reader error in save and process the button start processing is read two times also the buttons are weirdly named wien | 1 |
116,465 | 4,702,590,496 | IssuesEvent | 2016-10-13 03:07:42 | Kaezon/Unreal-SCP | https://api.github.com/repos/Kaezon/Unreal-SCP | opened | Add obstruction check to inSight | Blueprint enhancement High Priority | Currently, SCP-173 cannot move if the player is looking at it regardless of whether something is in the way or not.
@asegavac suggested doing some ray casting as a cheap way to check for obstructions. | 1.0 | Add obstruction check to inSight - Currently, SCP-173 cannot move if the player is looking at it regardless of whether something is in the way or not.
@asegavac suggested doing some ray casting as a cheap way to check for obstructions. | priority | add obstruction check to insight currently scp cannot move if the player is looking at it regardless of whether something is in the way or not asegavac suggested doing some ray casting as a cheap way to check for obstructions | 1 |
351,683 | 10,522,092,724 | IssuesEvent | 2019-09-30 07:55:38 | mozilla/addons-server | https://api.github.com/repos/mozilla/addons-server | closed | `docker-compose up -d` should not start addons-frontend and selenium/firefox | env: local dev priority: p3 state: stale | ### Describe the problem and steps to reproduce it:
See the install steps at: https://addons-server.readthedocs.io/en/latest/topics/install/docker.html
### What happened?
The addons-frontend and selenium-firefox containers are started when running `docker-compose up [-d]`.
### What did you expect to happen?
While booting addons-frontend might be a good idea, I don't expect selenium/firefox to be up by default. It fails to initialize anyway, but it is still listed in `docker-compose ps`.
I would also argue that we should not boot addons-frontend by default, given that there is a port conflict (same port is used for the `addons-frontend` container here and in the addons-frontend repo). From the addons-frontend repo, one can run `yarn amo` to use the addons-server API.
Every time I run `docker-compose up` in this project, I have to kill the addons-frontend container (or make sure I run the addons-frontend project first, so that the port is already bound and the container dies by itself).
That's my very own opinion though, with the POV of a new contributor to addons-server. I find it perfect to run the API and Dev Hub via docker and this project, but to have the addons-frontend project installed on its own.
| 1.0 | `docker-compose up -d` should not start addons-frontend and selenium/firefox - ### Describe the problem and steps to reproduce it:
See the install steps at: https://addons-server.readthedocs.io/en/latest/topics/install/docker.html
### What happened?
The addons-frontend and selenium-firefox containers are started when running `docker-compose up [-d]`.
### What did you expect to happen?
While booting addons-frontend might be a good idea, I don't expect selenium/firefox to be up by default. It fails to initialize anyway, but it is still listed in `docker-compose ps`.
I would also argue that we should not boot addons-frontend by default, given that there is a port conflict (same port is used for the `addons-frontend` container here and in the addons-frontend repo). From the addons-frontend repo, one can run `yarn amo` to use the addons-server API.
Every time I run `docker-compose up` in this project, I have to kill the addons-frontend container (or make sure I run the addons-frontend project first, so that the port is already bound and the container dies by itself).
That's my very own opinion though, with the POV of a new contributor to addons-server. I find it perfect to run the API and Dev Hub via docker and this project, but to have the addons-frontend project installed on its own.
| priority | docker compose up d should not start addons frontend and selenium firefox describe the problem and steps to reproduce it see the install steps at what happened the addons frontend and selenium firefox containers are started when running docker compose up what did you expect to happen while booting addons frontend might be a good idea i don t expect selenium firefox to be up by default it fails to initialize anyway but it is still listed in docker compose ps i would also argue that we should not boot addons frontend by default given that there is a port conflict same port is used for the addons frontend container here and in the addons frontend repo from the addons frontend repo one can run yarn amo to use the addons server api every time i run docker compose up in this project i have to kill the addons frontend container or make sure i run the addons frontend project first so that the port is already bound and the container dies by itself that s my very own opinion though with the pov of a new contributor to addons server i find it perfect to run the api and dev hub via docker and this project but to have the addons frontend project installed on its own | 1 |
3,062 | 2,536,312,035 | IssuesEvent | 2015-01-26 12:59:12 | nlbdev/nordic-epub3-dtbook-migrator | https://api.github.com/repos/nlbdev/nordic-epub3-dtbook-migrator | opened | Make nordic13a work for multi-document fileset | 2 - High priority bug epub3-validator | ("All books must have frontmatter and bodymatter")
This rule currently only works for the single-HTML representation.
nordic13b might need a similar update. | 1.0 | Make nordic13a work for multi-document fileset - ("All books must have frontmatter and bodymatter")
This rule currently only works for the single-HTML representation.
nordic13b might need a similar update. | priority | make work for multi document fileset all books must have frontmatter and bodymatter this rule currently only works for the single html representation might need a similar update | 1 |
225,131 | 7,478,593,571 | IssuesEvent | 2018-04-04 12:13:43 | akroma-project/akroma.io | https://api.github.com/repos/akroma-project/akroma.io | closed | Transaction Page - Confirmations | enhancement top priority | Transaction Page - Confirmations - display on transaction page (blockHeight - blockNumber = confirmations) | 1.0 | Transaction Page - Confirmations - Transaction Page - Confirmations - display on transaction page (blockHeight - blockNumber = confirmations) | priority | transaction page confirmations transaction page confirmations display on transaction page blockheight blocknumber confirmations | 1 |
224,973 | 17,209,463,737 | IssuesEvent | 2021-07-19 00:14:09 | seancorfield/next-jdbc | https://api.github.com/repos/seancorfield/next-jdbc | closed | Migration docs need to warn about c.j.j ops inside n.j/with-transaction | documentation | Because c.j.j ops attempt to create a (c.j.j) TX by default, if you use them inside `next.jdbc/with-transaction` operations will still be auto-committed.
You can either pass `:transaction? false` to all such nested c.j.j ops or convert them to `next.jdbc` calls (recommended). | 1.0 | Migration docs need to warn about c.j.j ops inside n.j/with-transaction - Because c.j.j ops attempt to create a (c.j.j) TX by default, if you use them inside `next.jdbc/with-transaction` operations will still be auto-committed.
You can either pass `:transaction? false` to all such nested c.j.j ops or convert them to `next.jdbc` calls (recommended). | non_priority | migration docs need to warn about c j j ops inside n j with transaction because c j j ops attempt to create a c j j tx by default if you use them inside next jdbc with transaction operations will still be auto committed you can either pass transaction false to all such nested c j j ops or convert them to next jdbc calls recommended | 0 |
270,360 | 8,454,959,133 | IssuesEvent | 2018-10-21 10:04:36 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | coco.fr - see bug description | browser-chrome priority-normal | <!-- @browser: Chrome 69.0.3497 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36 -->
<!-- @reported_with: -->
**URL**: http://coco.fr/
**Browser / Version**: Chrome 69.0.3497
**Operating System**: Linux
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Test
**Steps to Reproduce**:
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | coco.fr - see bug description - <!-- @browser: Chrome 69.0.3497 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36 -->
<!-- @reported_with: -->
**URL**: http://coco.fr/
**Browser / Version**: Chrome 69.0.3497
**Operating System**: Linux
**Tested Another Browser**: No
**Problem type**: Something else
**Description**: Test
**Steps to Reproduce**:
_From [webcompat.com](https://webcompat.com/) with ❤️_ | priority | coco fr see bug description url browser version chrome operating system linux tested another browser no problem type something else description test steps to reproduce from with ❤️ | 1 |
60,236 | 12,070,184,115 | IssuesEvent | 2020-04-16 17:13:23 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Do not simplify away field as clause - it breaks compilation | 4 - In Review Area-IDE Bug IDE-CodeStyle | **Version Used**:
codeanalysis 3.5.0
**Steps to Reproduce**:
On a Visual Basic
1. Call Reduce on a single variable field with an As clause and an initializer
```vbnet
<Workspace>
<Project Language="Visual Basic" CommonReferences="true">
<Document>
Imports System
Imports I = System.Int32
Module Program
Public Dim {|SimplifyParent:x As Integer = 5|}
End Module
</Document>
</Project>
</Workspace>
```
**Expected Behavior**:
Same output as input
**Actual Behavior**:
As clause gets removed.
If option strict is on: Removing the as clause causes a compile error
If option strict is off: It changes the inferred type of the field to be Object in all cases
**Other details**:
This shot up my priority list when I realized the option to workaround this bug had been removed, though I'd much prefer we fix this bug than bring the option back. | 1.0 | Do not simplify away field as clause - it breaks compilation - **Version Used**:
codeanalysis 3.5.0
**Steps to Reproduce**:
On a Visual Basic
1. Call Reduce on a single variable field with an As clause and an initializer
```vbnet
<Workspace>
<Project Language="Visual Basic" CommonReferences="true">
<Document>
Imports System
Imports I = System.Int32
Module Program
Public Dim {|SimplifyParent:x As Integer = 5|}
End Module
</Document>
</Project>
</Workspace>
```
**Expected Behavior**:
Same output as input
**Actual Behavior**:
As clause gets removed.
If option strict is on: Removing the as clause causes a compile error
If option strict is off: It changes the inferred type of the field to be Object in all cases
**Other details**:
This shot up my priority list when I realized the option to workaround this bug had been removed, though I'd much prefer we fix this bug than bring the option back. | non_priority | do not simplify away field as clause it breaks compilation version used codeanalysis steps to reproduce on a visual basic call reduce on a single variable field with an as clause and an initializer vbnet imports system imports i system module program public dim simplifyparent x as integer end module expected behavior same output as input actual behavior as clause gets removed if option strict is on removing the as clause causes a compile error if option strict is off it changes the inferred type of the field to be object in all cases other details this shot up my priority list when i realized the option to workaround this bug had been removed though i d much prefer we fix this bug than bring the option back | 0 |
832,281 | 32,077,457,363 | IssuesEvent | 2023-09-25 11:59:48 | googleapis/google-cloud-ruby | https://api.github.com/repos/googleapis/google-cloud-ruby | closed | [Nightly CI Failures] Failures detected for google-cloud-bigquery-storage | type: bug priority: p1 nightly failure | At 2023-09-09 09:39:44 UTC, detected failures in google-cloud-bigquery-storage for: test.
The CI logs can be found [here](https://github.com/googleapis/google-cloud-ruby/actions/runs/6129868852)
report_key_e0627aab1a8c51af683a5975db459b06 | 1.0 | [Nightly CI Failures] Failures detected for google-cloud-bigquery-storage - At 2023-09-09 09:39:44 UTC, detected failures in google-cloud-bigquery-storage for: test.
The CI logs can be found [here](https://github.com/googleapis/google-cloud-ruby/actions/runs/6129868852)
report_key_e0627aab1a8c51af683a5975db459b06 | priority | failures detected for google cloud bigquery storage at utc detected failures in google cloud bigquery storage for test the ci logs can be found report key | 1 |
96,663 | 27,970,960,001 | IssuesEvent | 2023-03-25 02:25:47 | ClickHouse/ClickHouse | https://api.github.com/repos/ClickHouse/ClickHouse | opened | Build source osx error | build | > Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. Just in case, official build instructions are published here: https://clickhouse.com/docs/en/development/build/
**Operating system**
macOS 12.2.1
> OS kind or distribution, specific version/release, non-standard kernel if any. If you are trying to build inside virtual machine, please mention it too.
**Cmake version**
cmake version 3.26.1
**Ninja version**
1.10.2
**Compiler name and version**
Homebrew clang version 16.0.0
**Full cmake and/or ninja output**
cmake -DCMAKE_C_COMPILER=$(brew --prefix llvm)/bin/clang -DCMAKE_CXX_COMPILER=$(brew --prefix llvm)/bin/clang++ -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
-- The C compiler identification is Clang 16.0.0
-- The CXX compiler identification is Clang 16.0.0
-- The ASM compiler identification is Clang with GNU-like command-line
-- Found assembler: /opt/homebrew/opt/llvm/bin/clang
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - failed
-- Check for working C compiler: /opt/homebrew/opt/llvm/bin/clang
-- Check for working C compiler: /opt/homebrew/opt/llvm/bin/clang - broken
CMake Error at /opt/homebrew/Cellar/cmake/3.26.1/share/cmake/Modules/CMakeTestCCompiler.cmake:67 (message):
The C compiler
"/opt/homebrew/opt/llvm/bin/clang"
is not able to compile a simple test program.
It fails with the following output:
Change Dir: /Users/zepengchen/ClickHouse/build/CMakeFiles/CMakeScratch/TryCompile-rRIPXi
Run Build Command(s):/Users/zepengchen/opt/anaconda3/bin/ninja -v cmTC_72bf8 && [1/2] /opt/homebrew/opt/llvm/bin/clang -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk -mmacosx-version-min=12.2 -MD -MT CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o -MF CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o.d -o CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o -c /Users/zepengchen/ClickHouse/build/CMakeFiles/CMakeScratch/TryCompile-rRIPXi/testCCompiler.c
[2/2] : && /opt/homebrew/opt/llvm/bin/clang -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk -mmacosx-version-min=12.2 -Wl,-search_paths_first -Wl,-headerpad_max_install_names CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o -o cmTC_72bf8 && :
FAILED: cmTC_72bf8
: && /opt/homebrew/opt/llvm/bin/clang -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk -mmacosx-version-min=12.2 -Wl,-search_paths_first -Wl,-headerpad_max_install_names CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o -o cmTC_72bf8 && :
ld: library not found for -lSystem
clang-16: error: linker command failed with exit code 1 (use -v to see invocation)
ninja: build stopped: subcommand failed.
CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
CMakeLists.txt:3 (project)
-- Configuring incomplete, errors occurred!
| 1.0 | Build source osx error - > Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. Just in case, official build instructions are published here: https://clickhouse.com/docs/en/development/build/
**Operating system**
macOS 12.2.1
> OS kind or distribution, specific version/release, non-standard kernel if any. If you are trying to build inside virtual machine, please mention it too.
**Cmake version**
cmake version 3.26.1
**Ninja version**
1.10.2
**Compiler name and version**
Homebrew clang version 16.0.0
**Full cmake and/or ninja output**
cmake -DCMAKE_C_COMPILER=$(brew --prefix llvm)/bin/clang -DCMAKE_CXX_COMPILER=$(brew --prefix llvm)/bin/clang++ -DCMAKE_BUILD_TYPE=RelWithDebInfo ..
-- The C compiler identification is Clang 16.0.0
-- The CXX compiler identification is Clang 16.0.0
-- The ASM compiler identification is Clang with GNU-like command-line
-- Found assembler: /opt/homebrew/opt/llvm/bin/clang
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - failed
-- Check for working C compiler: /opt/homebrew/opt/llvm/bin/clang
-- Check for working C compiler: /opt/homebrew/opt/llvm/bin/clang - broken
CMake Error at /opt/homebrew/Cellar/cmake/3.26.1/share/cmake/Modules/CMakeTestCCompiler.cmake:67 (message):
The C compiler
"/opt/homebrew/opt/llvm/bin/clang"
is not able to compile a simple test program.
It fails with the following output:
Change Dir: /Users/zepengchen/ClickHouse/build/CMakeFiles/CMakeScratch/TryCompile-rRIPXi
Run Build Command(s):/Users/zepengchen/opt/anaconda3/bin/ninja -v cmTC_72bf8 && [1/2] /opt/homebrew/opt/llvm/bin/clang -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk -mmacosx-version-min=12.2 -MD -MT CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o -MF CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o.d -o CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o -c /Users/zepengchen/ClickHouse/build/CMakeFiles/CMakeScratch/TryCompile-rRIPXi/testCCompiler.c
[2/2] : && /opt/homebrew/opt/llvm/bin/clang -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk -mmacosx-version-min=12.2 -Wl,-search_paths_first -Wl,-headerpad_max_install_names CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o -o cmTC_72bf8 && :
FAILED: cmTC_72bf8
: && /opt/homebrew/opt/llvm/bin/clang -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX12.3.sdk -mmacosx-version-min=12.2 -Wl,-search_paths_first -Wl,-headerpad_max_install_names CMakeFiles/cmTC_72bf8.dir/testCCompiler.c.o -o cmTC_72bf8 && :
ld: library not found for -lSystem
clang-16: error: linker command failed with exit code 1 (use -v to see invocation)
ninja: build stopped: subcommand failed.
CMake will not be able to correctly generate this project.
Call Stack (most recent call first):
CMakeLists.txt:3 (project)
-- Configuring incomplete, errors occurred!
| non_priority | build source osx error make sure that git diff result is empty and you ve just pulled fresh master try cleaning up cmake cache just in case official build instructions are published here operating system macos os kind or distribution specific version release non standard kernel if any if you are trying to build inside virtual machine please mention it too cmake version cmake version ninja version compiler name and version homebrew clang version full cmake and or ninja output cmake dcmake c compiler brew prefix llvm bin clang dcmake cxx compiler brew prefix llvm bin clang dcmake build type relwithdebinfo the c compiler identification is clang the cxx compiler identification is clang the asm compiler identification is clang with gnu like command line found assembler opt homebrew opt llvm bin clang detecting c compiler abi info detecting c compiler abi info failed check for working c compiler opt homebrew opt llvm bin clang check for working c compiler opt homebrew opt llvm bin clang broken cmake error at opt homebrew cellar cmake share cmake modules cmaketestccompiler cmake message the c compiler opt homebrew opt llvm bin clang is not able to compile a simple test program it fails with the following output change dir users zepengchen clickhouse build cmakefiles cmakescratch trycompile rripxi run build command s users zepengchen opt bin ninja v cmtc opt homebrew opt llvm bin clang arch isysroot applications xcode app contents developer platforms macosx platform developer sdks sdk mmacosx version min md mt cmakefiles cmtc dir testccompiler c o mf cmakefiles cmtc dir testccompiler c o d o cmakefiles cmtc dir testccompiler c o c users zepengchen clickhouse build cmakefiles cmakescratch trycompile rripxi testccompiler c opt homebrew opt llvm bin clang arch isysroot applications xcode app contents developer platforms macosx platform developer sdks sdk mmacosx version min wl search paths first wl headerpad max install names cmakefiles cmtc dir testccompiler 
c o o cmtc failed cmtc opt homebrew opt llvm bin clang arch isysroot applications xcode app contents developer platforms macosx platform developer sdks sdk mmacosx version min wl search paths first wl headerpad max install names cmakefiles cmtc dir testccompiler c o o cmtc ld library not found for lsystem clang error linker command failed with exit code use v to see invocation ninja build stopped subcommand failed cmake will not be able to correctly generate this project call stack most recent call first cmakelists txt project configuring incomplete errors occurred | 0 |
51,891 | 27,294,876,862 | IssuesEvent | 2023-02-23 19:26:55 | enso-org/enso | https://api.github.com/repos/enso-org/enso | closed | Pre-emptive shader compilation | -gui --low-performance | Now that we have static information about shaders, we should compile them all at the beginning of application startup. | True | Pre-emptive shader compilation - Now that we have static information about shaders, we should compile them all at the beginning of application startup. | non_priority | pre emptive shader compilation now that we have static information about shaders we should compile them all at the beginning of application startup | 0 |
254,570 | 8,075,329,823 | IssuesEvent | 2018-08-07 05:00:52 | CarbonLDP/carbonldp-website | https://api.github.com/repos/CarbonLDP/carbonldp-website | closed | Move formal vocabulary into website so terms are resolvable by URI | in progress priority2: required type: task | The vocabulary needs to be resolvable by URI at:
https://carbonldp.com/ns/v1/platform# | 1.0 | Move formal vocabulary into website so terms are resolvable by URI - The vocabulary needs to be resolvable by URI at:
https://carbonldp.com/ns/v1/platform# | priority | move formal vocabulary into website so terms are resolvable by uri the vocabulary needs to be resolvable by uri at | 1 |
168,357 | 6,370,332,113 | IssuesEvent | 2017-08-01 13:59:25 | tardis-sn/tardis | https://api.github.com/repos/tardis-sn/tardis | closed | Add flexibility to callback mechanism of Simulation object | easy feature request priority - low | Requested feature:
Let the callback function know if it is ran after plasma or after the spectral run (the last run/iteration).
Possible implementations:
- Add a separate set of callback functions that are executed after the last iteration `Simulation.add_last_callback(...)` or something similiar
- Modify the existing mechanism to check if the function accepts `last` as a kwarg or if it accepts `**kwargs`. If that's the case, execute as `func(args, last=True)` | 1.0 | Add flexibility to callback mechanism of Simulation object - Requested feature:
Let the callback function know if it is ran after plasma or after the spectral run (the last run/iteration).
Possible implementations:
- Add a separate set of callback functions that are executed after the last iteration `Simulation.add_last_callback(...)` or something similiar
- Modify the existing mechanism to check if the function accepts `last` as a kwarg or if it accepts `**kwargs`. If that's the case, execute as `func(args, last=True)` | priority | add flexibility to callback mechanism of simulation object requested feature let the callback function know if it is ran after plasma or after the spectral run the last run iteration possible implementations add a separate set of callback functions that are executed after the last iteration simulation add last callback or something similiar modify the existing mechanism to check if the function accepts last as a kwarg or if it accepts kwargs if that s the case execute as func args last true | 1 |
76,901 | 3,505,724,823 | IssuesEvent | 2016-01-08 00:33:55 | Code60Home/fbs | https://api.github.com/repos/Code60Home/fbs | closed | Parachute icons don't have a side | bug priority 4 | Parachute icons don't show what side they belong to, so it's hard to tell who I have to go rescue | 1.0 | Parachute icons don't have a side - Parachute icons don't show what side they belong to, so it's hard to tell who I have to go rescue | priority | parachute icons don t have a side parachute icons don t show what side they belong to so it s hard to tell who i have to go rescue | 1 |
347,618 | 31,235,272,466 | IssuesEvent | 2023-08-20 07:34:39 | softeerbootcamp-2nd/A1-PoongCha | https://api.github.com/repos/softeerbootcamp-2nd/A1-PoongCha | opened | BE/08.20/추가 질문 기능 구현 | ✨ Feature 📃 Docs 📬 API ✅ Test | ### Domain
Backend
### 어떠한 작업을 진행하실 건가요?
- [ ] 추가 질문 생성
- [ ] 추가 질문 전체 ID 조회
- [ ] 추가 질문 식별자 목록으로 조회
### 고려사항
_No response_
### Priority Label Setting
- [X] Yes!
### Projects Card
- [X] Yes! | 1.0 | BE/08.20/추가 질문 기능 구현 - ### Domain
Backend
### 어떠한 작업을 진행하실 건가요?
- [ ] 추가 질문 생성
- [ ] 추가 질문 전체 ID 조회
- [ ] 추가 질문 식별자 목록으로 조회
### 고려사항
_No response_
### Priority Label Setting
- [X] Yes!
### Projects Card
- [X] Yes! | non_priority | be 추가 질문 기능 구현 domain backend 어떠한 작업을 진행하실 건가요 추가 질문 생성 추가 질문 전체 id 조회 추가 질문 식별자 목록으로 조회 고려사항 no response priority label setting yes projects card yes | 0 |
357,466 | 10,606,810,233 | IssuesEvent | 2019-10-11 01:00:36 | AY1920S1-CS2113T-T09-1/main | https://api.github.com/repos/AY1920S1-CS2113T-T09-1/main | opened | Work on an outline of the class diagram for Hustler v1.4 | priority.High type.Task | Create a general outline into which variables and functions can be filled. | 1.0 | Work on an outline of the class diagram for Hustler v1.4 - Create a general outline into which variables and functions can be filled. | priority | work on an outline of the class diagram for hustler create a general outline into which variables and functions can be filled | 1 |
73,013 | 3,398,840,466 | IssuesEvent | 2015-12-02 07:27:53 | ckaiser79/webtest-rb | https://api.github.com/repos/ckaiser79/webtest-rb | closed | Use jquery datatables in report for easier filtering | enhancement Priority-Medium | The file log/last_run/run.log.html should be easier searchable. By using datatables, the columns should be orderable and filterable by text | 1.0 | Use jquery datatables in report for easier filtering - The file log/last_run/run.log.html should be easier searchable. By using datatables, the columns should be orderable and filterable by text | priority | use jquery datatables in report for easier filtering the file log last run run log html should be easier searchable by using datatables the columns should be orderable and filterable by text | 1 |
46,540 | 5,820,977,468 | IssuesEvent | 2017-05-06 00:47:23 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | ci-kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-cluster: broken test run | kind/flake priority/P1 team/test-infra | https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-cluster/17/
Multiple broken tests:
Failed: [k8s.io] V1Job should fail a job [Slow] {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:202
Expected error:
<*errors.errorString | 0xc820018b30>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/batch_v1_jobs.go:201
```
Issues about this test specifically: #37427
Failed: [k8s.io] Nodes [Disruptive] [k8s.io] Resize [Slow] should be able to delete nodes {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:452
Expected error:
<*errors.errorString | 0xc821e76b90>: {
s: "failed to wait for pods responding: pod with UID 1a9da759-bcbf-11e6-9b57-42010af00028 is no longer a member of the replica set. Must have been restarted for some reason. Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-hr4f1/pods 37452} [{{ } {my-hostname-delete-node-f2462 my-hostname-delete-node- e2e-tests-resize-nodes-hr4f1 /api/v1/namespaces/e2e-tests-resize-nodes-hr4f1/pods/my-hostname-delete-node-f2462 1a9e1683-bcbf-11e6-9b57-42010af00028 37131 0 2016-12-07 12:52:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-hr4f1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"1a9b3c78-bcbf-11e6-9b57-42010af00028\",\"apiVersion\":\"v1\",\"resourceVersion\":\"37114\"}}\n] [{v1 ReplicationController my-hostname-delete-node 1a9b3c78-bcbf-11e6-9b57-42010af00028 0xc8208a5447}] [] } {[{default-token-8t4rx {<nil> <nil> <nil> <nil> <nil> 0xc8215a1e30 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] [] [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-8t4rx true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8208a5540 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-aebc45ee-u76l 0xc8221fb0c0 [] } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:45 -0800 PST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:46 -0800 PST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:45 -0800 PST }] 10.240.0.4 10.96.2.178 2016-12-07 12:52:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821f18340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 
docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f00314751b6b3f6b7d4e488fc4939f43627a3ffad3e36710dc1d6aabb566b131}]}} {{ } {my-hostname-delete-node-pfc8c my-hostname-delete-node- e2e-tests-resize-nodes-hr4f1 /api/v1/namespaces/e2e-tests-resize-nodes-hr4f1/pods/my-hostname-delete-node-pfc8c 53391fa8-bcbf-11e6-9b57-42010af00028 37308 0 2016-12-07 12:54:20 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-hr4f1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"1a9b3c78-bcbf-11e6-9b57-42010af00028\",\"apiVersion\":\"v1\",\"resourceVersion\":\"37195\"}}\n] [{v1 ReplicationController my-hostname-delete-node 1a9b3c78-bcbf-11e6-9b57-42010af00028 0xc8208a57d7}] [] } {[{default-token-8t4rx {<nil> <nil> <nil> <nil> <nil> 0xc8215a1e90 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] [] [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-8t4rx true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8208a58d0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-aebc45ee-9gqd 0xc8221fb1c0 [] } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:54:20 -0800 PST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:54:22 -0800 PST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:54:20 -0800 PST }] 10.240.0.5 10.96.3.30 2016-12-07 12:54:20 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821f18360 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 
docker://a0951b9e823e5b1faf518575bd9e23660d15c4ca7afc2a223051082e550fe417}]}} {{ } {my-hostname-delete-node-tnjhg my-hostname-delete-node- e2e-tests-resize-nodes-hr4f1 /api/v1/namespaces/e2e-tests-resize-nodes-hr4f1/pods/my-hostname-delete-node-tnjhg 1a9de57f-bcbf-11e6-9b57-42010af00028 37129 0 2016-12-07 12:52:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicationController\",\"namespace\":\"e2e-tests-resize-nodes-hr4f1\",\"name\":\"my-hostname-delete-node\",\"uid\":\"1a9b3c78-bcbf-11e6-9b57-42010af00028\",\"apiVersion\":\"v1\",\"resourceVersion\":\"37114\"}}\n] [{v1 ReplicationController my-hostname-delete-node 1a9b3c78-bcbf-11e6-9b57-42010af00028 0xc8208a5b67}] [] } {[{default-token-8t4rx {<nil> <nil> <nil> <nil> <nil> 0xc8215a1ef0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] [] [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-8t4rx true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8208a5c60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-aebc45ee-9gqd 0xc8221fb280 [] } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:45 -0800 PST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:46 -0800 PST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:45 -0800 PST }] 10.240.0.5 10.96.3.27 2016-12-07 12:52:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821f18380 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://c4fdf14cf0f41945a2c2ba22a2deaf79d4f326a04db18c1e6c889fc1d2c10d4a}]}}]}",
}
failed to wait for pods responding: pod with UID 1a9da759-bcbf-11e6-9b57-42010af00028 is no longer a member of the replica set. Must have been restarted for some reason. Current replica set: &{{ } {/api/v1/namespaces/e2e-tests-resize-nodes-hr4f1/pods 37452} [{{ } {my-hostname-delete-node-f2462 my-hostname-delete-node- e2e-tests-resize-nodes-hr4f1 /api/v1/namespaces/e2e-tests-resize-nodes-hr4f1/pods/my-hostname-delete-node-f2462 1a9e1683-bcbf-11e6-9b57-42010af00028 37131 0 2016-12-07 12:52:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-hr4f1","name":"my-hostname-delete-node","uid":"1a9b3c78-bcbf-11e6-9b57-42010af00028","apiVersion":"v1","resourceVersion":"37114"}}
] [{v1 ReplicationController my-hostname-delete-node 1a9b3c78-bcbf-11e6-9b57-42010af00028 0xc8208a5447}] [] } {[{default-token-8t4rx {<nil> <nil> <nil> <nil> <nil> 0xc8215a1e30 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] [] [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-8t4rx true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8208a5540 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-aebc45ee-u76l 0xc8221fb0c0 [] } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:45 -0800 PST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:46 -0800 PST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:45 -0800 PST }] 10.240.0.4 10.96.2.178 2016-12-07 12:52:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821f18340 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://f00314751b6b3f6b7d4e488fc4939f43627a3ffad3e36710dc1d6aabb566b131}]}} {{ } {my-hostname-delete-node-pfc8c my-hostname-delete-node- e2e-tests-resize-nodes-hr4f1 /api/v1/namespaces/e2e-tests-resize-nodes-hr4f1/pods/my-hostname-delete-node-pfc8c 53391fa8-bcbf-11e6-9b57-42010af00028 37308 0 2016-12-07 12:54:20 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-hr4f1","name":"my-hostname-delete-node","uid":"1a9b3c78-bcbf-11e6-9b57-42010af00028","apiVersion":"v1","resourceVersion":"37195"}}
] [{v1 ReplicationController my-hostname-delete-node 1a9b3c78-bcbf-11e6-9b57-42010af00028 0xc8208a57d7}] [] } {[{default-token-8t4rx {<nil> <nil> <nil> <nil> <nil> 0xc8215a1e90 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] [] [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-8t4rx true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8208a58d0 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-aebc45ee-9gqd 0xc8221fb1c0 [] } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:54:20 -0800 PST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:54:22 -0800 PST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:54:20 -0800 PST }] 10.240.0.5 10.96.3.30 2016-12-07 12:54:20 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821f18360 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://a0951b9e823e5b1faf518575bd9e23660d15c4ca7afc2a223051082e550fe417}]}} {{ } {my-hostname-delete-node-tnjhg my-hostname-delete-node- e2e-tests-resize-nodes-hr4f1 /api/v1/namespaces/e2e-tests-resize-nodes-hr4f1/pods/my-hostname-delete-node-tnjhg 1a9de57f-bcbf-11e6-9b57-42010af00028 37129 0 2016-12-07 12:52:45 -0800 PST <nil> <nil> map[name:my-hostname-delete-node] map[kubernetes.io/created-by:{"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"e2e-tests-resize-nodes-hr4f1","name":"my-hostname-delete-node","uid":"1a9b3c78-bcbf-11e6-9b57-42010af00028","apiVersion":"v1","resourceVersion":"37114"}}
] [{v1 ReplicationController my-hostname-delete-node 1a9b3c78-bcbf-11e6-9b57-42010af00028 0xc8208a5b67}] [] } {[{default-token-8t4rx {<nil> <nil> <nil> <nil> <nil> 0xc8215a1ef0 <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil> <nil>}}] [] [{my-hostname-delete-node gcr.io/google_containers/serve_hostname:v1.4 [] [] [{ 0 9376 TCP }] [] {map[] map[]} [{default-token-8t4rx true /var/run/secrets/kubernetes.io/serviceaccount }] <nil> <nil> <nil> /dev/termination-log IfNotPresent <nil> false false false}] Always 0xc8208a5c60 <nil> ClusterFirst map[] default gke-bootstrap-e2e-default-pool-aebc45ee-9gqd 0xc8221fb280 [] } {Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:45 -0800 PST } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:46 -0800 PST } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-12-07 12:52:45 -0800 PST }] 10.240.0.5 10.96.3.27 2016-12-07 12:52:45 -0800 PST [] [{my-hostname-delete-node {<nil> 0xc821f18380 <nil>} {<nil> <nil> <nil>} true 0 gcr.io/google_containers/serve_hostname:v1.4 docker://sha256:7f39284ddc3df6c8a89394c18278c895689f68c8bf180cfd03326771f4be9fb5 docker://c4fdf14cf0f41945a2c2ba22a2deaf79d4f326a04db18c1e6c889fc1d2c10d4a}]}}]}
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/resize_nodes.go:451
```
Issues about this test specifically: #27233 #36204
Failed: [k8s.io] Networking [k8s.io] Granular Checks: Services [Slow] should function for node-Service: http {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking.go:149
Dec 7 13:04:49.977: Failed to find expected endpoints:
Tries 0
Command curl -q -s --connect-timeout 1 http://35.184.49.101:30631/hostName
retrieved map[netserver-1:{} netserver-2:{} :{} netserver-0:{}]
expected map[netserver-1:{} netserver-2:{} netserver-0:{}]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/networking_utils.go:228
```
Issues about this test specifically: #32684 #36278 #37948
Failed: [k8s.io] Job should fail a job {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:198
Expected error:
<*errors.errorString | 0xc820018b30>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/job.go:197
```
Issues about this test specifically: #28773 #29506 #30699 #32734 #34585 #38391
Failed: [k8s.io] Rescheduler [Serial] should ensure that critical pod is scheduled in case there is no resources available {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:67
Expected
<int>: 0
to equal
<int>: 1
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/rescheduler.go:59
```
Issues about this test specifically: #31277 #31347 #31710 #32260 #32531
Failed: [k8s.io] Deployment overlapping deployment should not fight with each other {Kubernetes e2e suite}
```
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:92
Failed to update the first deployment's overlapping annotation
Expected error:
<*errors.errorString | 0xc820018b30>: {
s: "timed out waiting for the condition",
}
timed out waiting for the condition
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/deployment.go:1244
```
Issues about this test specifically: #31502 #32947
| 1.0 | ci-kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-cluster: broken test run - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gke-gci-1.4-gci-1.5-upgrade-cluster/17/
| non_priority | ci kubernetes gke gci gci upgrade cluster broken test run multiple broken tests failed should fail a job kubernetes suite go src io kubernetes output dockerized go src io kubernetes test batch jobs go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test batch jobs go issues about this test specifically failed nodes resize should be able to delete nodes kubernetes suite go src io kubernetes output dockerized go src io kubernetes test resize nodes go expected error s failed to wait for pods responding pod with uid bcbf is no longer a member of the replica set must have been restarted for some reason current replica set api namespaces tests resize nodes pods map map map dev termination log ifnotpresent false false false always clusterfirst map default gke bootstrap default pool running pst my hostname delete node my hostname delete node tests resize nodes api namespaces tests resize nodes pods my hostname delete node bcbf pst map map map map dev termination log ifnotpresent false false false always clusterfirst map default gke bootstrap default pool running pst my hostname delete node tnjhg my hostname delete node tests resize nodes api namespaces tests resize nodes pods my hostname delete node tnjhg bcbf pst map map map map dev termination log ifnotpresent false false false always clusterfirst map default gke bootstrap default pool running pst failed to wait for pods responding pod with uid bcbf is no longer a member of the replica set must have been restarted for some reason current replica set api namespaces tests resize nodes pods map kubernetes io created by kind serializedreference apiversion reference kind replicationcontroller namespace tests resize nodes name my hostname delete node uid bcbf apiversion resourceversion map map dev termination log ifnotpresent false false false always clusterfirst map default gke bootstrap 
default pool running pst my hostname delete node my hostname delete node tests resize nodes api namespaces tests resize nodes pods my hostname delete node bcbf pst map map kubernetes io created by kind serializedreference apiversion reference kind replicationcontroller namespace tests resize nodes name my hostname delete node uid bcbf apiversion resourceversion map map dev termination log ifnotpresent false false false always clusterfirst map default gke bootstrap default pool running pst my hostname delete node tnjhg my hostname delete node tests resize nodes api namespaces tests resize nodes pods my hostname delete node tnjhg bcbf pst map map kubernetes io created by kind serializedreference apiversion reference kind replicationcontroller namespace tests resize nodes name my hostname delete node uid bcbf apiversion resourceversion map map dev termination log ifnotpresent false false false always clusterfirst map default gke bootstrap default pool running pst not to have occurred go src io kubernetes output dockerized go src io kubernetes test resize nodes go issues about this test specifically failed networking granular checks services should function for node service http kubernetes suite go src io kubernetes output dockerized go src io kubernetes test networking go dec failed to find expected endpoints tries command curl q s connect timeout retrieved map expected map go src io kubernetes output dockerized go src io kubernetes test networking utils go issues about this test specifically failed job should fail a job kubernetes suite go src io kubernetes output dockerized go src io kubernetes test job go expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test job go issues about this test specifically failed rescheduler should ensure that critical pod is scheduled in case there is no resources available kubernetes suite go src io kubernetes output 
dockerized go src io kubernetes test rescheduler go expected to equal go src io kubernetes output dockerized go src io kubernetes test rescheduler go issues about this test specifically failed deployment overlapping deployment should not fight with each other kubernetes suite go src io kubernetes output dockerized go src io kubernetes test deployment go failed to update the first deployment s overlapping annotation expected error s timed out waiting for the condition timed out waiting for the condition not to have occurred go src io kubernetes output dockerized go src io kubernetes test deployment go issues about this test specifically | 0 |
280,156 | 24,281,023,986 | IssuesEvent | 2022-09-28 17:24:42 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: cdc/pubsub-sink/assume-role failed | C-test-failure O-robot O-roachtest release-blocker branch-release-22.2 | roachtest.cdc/pubsub-sink/assume-role [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6661627?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6661627?buildTab=artifacts#/cdc/pubsub-sink/assume-role) on release-22.2 @ [08974d4b0433a14aa83251a44df9659eb8e3ae65](https://github.com/cockroachdb/cockroach/commits/08974d4b0433a14aa83251a44df9659eb8e3ae65):
```
test artifacts and logs in: /artifacts/cdc/pubsub-sink/assume-role/run_1
monitor.go:127,cdc.go:300,cdc.go:820,test_runner.go:908: monitor failure: monitor task failed: dial tcp 34.75.140.93:26257: connect: connection refused
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:300
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerCDC.func9
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:820
| [...repeated from below...]
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (4) monitor task failed
Wraps: (5) dial tcp 34.75.140.93:26257
Wraps: (6) connect
Wraps: (7) connection refused
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *net.OpError (6) *os.SyscallError (7) syscall.Errno
test_runner.go:1039,test_runner.go:938: test timed out (0s)
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=16</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/pubsub-sink/assume-role.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: cdc/pubsub-sink/assume-role failed - roachtest.cdc/pubsub-sink/assume-role [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6661627?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6661627?buildTab=artifacts#/cdc/pubsub-sink/assume-role) on release-22.2 @ [08974d4b0433a14aa83251a44df9659eb8e3ae65](https://github.com/cockroachdb/cockroach/commits/08974d4b0433a14aa83251a44df9659eb8e3ae65):
```
test artifacts and logs in: /artifacts/cdc/pubsub-sink/assume-role/run_1
monitor.go:127,cdc.go:300,cdc.go:820,test_runner.go:908: monitor failure: monitor task failed: dial tcp 34.75.140.93:26257: connect: connection refused
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:300
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerCDC.func9
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:820
| [...repeated from below...]
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (4) monitor task failed
Wraps: (5) dial tcp 34.75.140.93:26257
Wraps: (6) connect
Wraps: (7) connection refused
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *net.OpError (6) *os.SyscallError (7) syscall.Errno
test_runner.go:1039,test_runner.go:938: test timed out (0s)
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=16</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/pubsub-sink/assume-role.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_priority | roachtest cdc pubsub sink assume role failed roachtest cdc pubsub sink assume role with on release test artifacts and logs in artifacts cdc pubsub sink assume role run monitor go cdc go cdc go test runner go monitor failure monitor task failed dial tcp connect connection refused attached stack trace stack trace main monitorimpl waite main pkg cmd roachtest monitor go main monitorimpl wait main pkg cmd roachtest monitor go github com cockroachdb cockroach pkg cmd roachtest tests cdcbasictest github com cockroachdb cockroach pkg cmd roachtest tests cdc go github com cockroachdb cockroach pkg cmd roachtest tests registercdc github com cockroachdb cockroach pkg cmd roachtest tests cdc go wraps monitor failure wraps attached stack trace stack trace main monitorimpl wait main pkg cmd roachtest monitor go runtime goexit goroot src runtime asm s wraps monitor task failed wraps dial tcp wraps connect wraps connection refused error types withstack withstack errutil withprefix withstack withstack errutil withprefix net operror os syscallerror syscall errno test runner go test runner go test timed out parameters roachtest cloud gce roachtest cpu roachtest ssd help see see cc cockroachdb cdc | 0 |
121,809 | 26,035,858,044 | IssuesEvent | 2022-12-22 04:51:44 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | LSRA Reg Optional: Folding of operations using a tree temp | enhancement tenet-performance area-CodeGen-coreclr optimization | Say we have the following expr
a = (b + (c + (d+ (e+f))))
in LIR form
t0 = e+f
t1 = t0+d
t2 = t1+c
a = t2+b
Say it is profitable not to allocate a reg to the use position of '+' in (e+f). That is, tree temp t0 needs to be spilled to memory. Furthermore, it is profitable not to allocate a reg to both the use and def positions of '+' in (t0+d) and (t1+c)
Since all of these tree temps are not live at the same time, using a single tree temp, we can generate the following
reg = e+f
spill reg to stack location given by spill tmp1
add [addr of tmp1], reg of d
add [addr of tmp1], reg of c
mov a's reg, b's reg
add a's reg, [addr of tmp1]
To perform such an optimization, LSRA would need to annotate a tree node with a spill temp number that codegen is supposed to use for spill/reload purposes.
category:cq
theme:register-allocator
skill-level:expert
cost:medium | 1.0 | LSRA Reg Optional: Folding of operations using a tree temp - Say we have the following expr
a = (b + (c + (d+ (e+f))))
in LIR form
t0 = e+f
t1 = t0+d
t2 = t1+c
a = t2+b
Say it is profitable not to allocate a reg to the use position of '+' in (e+f). That is, tree temp t0 needs to be spilled to memory. Furthermore, it is profitable not to allocate a reg to both the use and def positions of '+' in (t0+d) and (t1+c)
Since all of these tree temps are not live at the same time, using a single tree temp, we can generate the following
reg = e+f
spill reg to stack location given by spill tmp1
add [addr of tmp1], reg of d
add [addr of tmp1], reg of c
mov a's reg, b's reg
add a's reg, [addr of tmp1]
To perform such an optimization, LSRA would need to annotate a tree node with a spill temp number that codegen is supposed to use for spill/reload purposes.
category:cq
theme:register-allocator
skill-level:expert
cost:medium | non_priority | lsra reg optional folding of operations using a tree temp say we have the following expr a b c d e f in lir form e f d c a b say it is profitable to not to allocate a reg to use position of in e f that is tree temp needs to be spilled to memory furthermore it is profitable to not to allocate a reg to both use and def positions of in d and c since all of these tree temps are not live at the same time using a single tree temp we can generate the following reg e f spill reg to stack location given by spill add reg of d add reg of c mov a reg b s reg add a reg to perform such an optimization lsra would need to annotate a tree node with a spill temp number that codegen is supposed to use for spill reload purposes category cq theme register allocator skill level expert cost medium | 0 |
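To sanity-check the single-spill-temp claim above, here is a toy liveness computation (a Python sketch, not RyuJIT's actual LSRA data structures) over the LIR sequence from the issue. Because each tree temp's last use is the instruction that defines the next one (a read-modify-write on the same slot), at most one temp is live at a time, so one spill slot can serve t0, t1 and t2.

```python
# Toy liveness check over the LIR sequence from the issue. Each tuple is
# (defined_temp, used_temps). Variables a..f are assumed register-resident
# and ignored; we only track the tree temps t0..t2.
lir = [
    ("t0", []),        # t0 = e + f
    ("t1", ["t0"]),    # t1 = t0 + d
    ("t2", ["t1"]),    # t2 = t1 + c
    (None, ["t2"]),    # a  = t2 + b
]

def max_live_temps(lir):
    # Last instruction index at which each temp is used.
    last_use = {}
    for i, (_, uses) in enumerate(lir):
        for t in uses:
            last_use[t] = i
    live, peak = set(), 0
    for i, (defd, uses) in enumerate(lir):
        # A use that dies here frees its slot before this instruction's
        # def (read-modify-write), so remove dying uses first.
        live -= {t for t in uses if last_use[t] == i}
        if defd is not None:
            live.add(defd)
        peak = max(peak, len(live))
    return peak

print(max_live_temps(lir))  # → 1, so one spill slot suffices for all three temps
```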
408,186 | 11,942,883,039 | IssuesEvent | 2020-04-02 21:53:50 | wazuh/wazuh-kibana-app | https://api.github.com/repos/wazuh/wazuh-kibana-app | closed | Pagination is not shown in table-type visualizations | bug priority/high | | Wazuh | Elastic | Rev |
| ----- | ------- | --- |
| 3.12.0 | 7.6.1 | n/a |
**Description**
The pagination controls in the table-type Kibana visualizations are not shown.
**Steps to reproduce**
1. Go to 'Overview > Security Events'
2. Navigate to the Alerts Summary table
3. See error
**Screenshots**

**Cause**
The problem is that table-type visualizations have a fixed height, and the pagination controls end up outside it. We'll have to make the visualizations taller or move the pagination controls up.
| 1.0 | Pagination is not shown in table-type visualizations - | Wazuh | Elastic | Rev |
| ----- | ------- | --- |
| 3.12.0 | 7.6.1 | n/a |
**Description**
The pagination controls in the table-type Kibana visualizations are not shown.
**Steps to reproduce**
1. Go to 'Overview > Security Events'
2. Navigate to the Alerts Summary table
3. See error
**Screenshots**

**Cause**
The problem is that table-type visualizations have a fixed height, and the pagination controls end up outside it. We'll have to make the visualizations taller or move the pagination controls up.
| priority | pagination is not shown in table type visualizations wazuh elastic rev n a description the pagination controls in the table type kibana visualizations are not shown steps to reproduce go to overview security events navigate to the alerts summary table see error screenshots cause the problem is that table type visualizations have a certain height and the pagination is staying outside that height we ll have to set the visualizations higher or raise the paging controls | 1 |
151,235 | 5,808,230,256 | IssuesEvent | 2017-05-04 10:04:53 | learnweb/moodle-mod_ratingallocate | https://api.github.com/repos/learnweb/moodle-mod_ratingallocate | closed | Feature Request: delete own rating (or even selected student ratings) | Effort: Medium Priority: High | Some teachers tend to test their activities before publishing them. If they try to rate the choices by simulating the student role (or by using test students) these test ratings will be used for allocation and therefore disturb the allocation algorithm.
Therefore it should be possible to delete ratings. In my opinion there are several options to solve this. Any idea itself can solve the problem. But they could also coexist and supplement each other.
Idea 1.
Option to delete his/her own rating for every user/rater.
This option could be implemented in the "edit rating"-Screen.
However: it might be useful to integrate a popup warning after selecting delete:
"Do you really want to delete your rating? The data will be lost and to be included in the allocation process you have to submit a new rating before the rating ends."

Idea 2.
A bit more general would be the option to delete ratings from other users. Only teachers should be allowed to do that. It could be integrated in the "ratings and allocations"-Table.
The standard moodle choice activity has a comparable feature and it could be solved similarly (for moodle choice see the "view responses"-table):
There should be an additional column (e.g. at the end, could be the first column as well) with a checkbox for each user. Furthermore a "select all" and a "deselect all" text and a button (or select box) to delete the selected ratings (maybe again with a popup warning).

| 1.0 | Feature Request: delete own rating (or even selected student ratings) - Some teachers tend to test their activities before publishing them. If they try to rate the choices by simulating the student role (or by using test students) these test ratings will be used for allocation and therefore disturb the allocation algorithm.
Therefore it should be possible to delete ratings. In my opinion there are several options to solve this. Any idea itself can solve the problem. But they could also coexist and supplement each other.
Idea 1.
Option to delete his/her own rating for every user/rater.
This option could be implemented in the "edit rating"-Screen.
However: it might be useful to integrate a popup warning after selecting delete:
"Do you really want to delete your rating? The data will be lost and to be included in the allocation process you have to submit a new rating before the rating ends."

Idea 2.
A bit more general would be the option to delete ratings from other users. Only teachers should be allowed to do that. It could be integrated in the "ratings and allocations"-Table.
The standard moodle choice activity has a comparable feature and it could be solved similarly (for moodle choice see the "view responses"-table):
There should be an additional column (e.g. at the end, could be the first column as well) with a checkbox for each user. Furthermore a "select all" and a "deselect all" text and a button (or select box) to delete the selected ratings (maybe again with a popup warning).

| priority | feature request delete own rating or even selected student ratings some teachers tend to test their acitivities before publishing them if they try to rate the choices by simulating student role or by using test students this test ratings will be used for allocation and therefore disturb the allocation algorithm therefore it should be possible to delete ratings in my opinion there are several options to solve this any idea itself can solve the problem but they could also coexist and supplement each other idea option to delete his her own rating for every user rater this option could be implemented in the edit rating screen however it might be useful to integrate a popup warning after selcting delete do you really want to delete your rating the data will be lost and to be included in the allocation process you have to submit a new rating before the rating ends idea a bit more general would be the option to delete ratings from other users only teachers should be allowed to do that it could be integrated in the ratings and allocations table the standard moodle choice activity has a comparable feature and it could be solved similarly for moodle choice see the view responses table there should be an additional column e g at the end could be the first column as well with a checkbox for each user furhermore a select all and a deselect all text and a button or select box to delete the selected ratings maybe again with a popup warning | 1 |
356,000 | 10,587,328,696 | IssuesEvent | 2019-10-08 21:50:01 | flextype/flextype | https://api.github.com/repos/flextype/flextype | closed | Flextype Core: Entries API - improve and_where & or_where for fetchAll method | priority: medium type: improvement | We should have the ability to pass unlimited and_where & or_where expressions
Query example:
<img width="668" alt="Screenshot 2019-09-23 at 23 59 53" src="https://user-images.githubusercontent.com/477114/65463605-b5cf5000-de60-11e9-98df-e7598b5cfdfa.png">
| 1.0 | Flextype Core: Entries API - improve and_where & or_where for fetchAll method - We should have the ability to pass unlimited and_where & or_where expressions
Query example:
<img width="668" alt="Screenshot 2019-09-23 at 23 59 53" src="https://user-images.githubusercontent.com/477114/65463605-b5cf5000-de60-11e9-98df-e7598b5cfdfa.png">
| priority | flextype core entries api improve and where or where for fetchall method we should have ability to path unlimited and where or where expressions query example img width alt screenshot at src | 1 |
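As a sketch of what evaluating such a query could look like, here is a hypothetical Python filter — the function and parameter names (`matches`, the `and_where` / `or_where` tuples) are illustrative assumptions only, not Flextype's actual PHP API:

```python
# Hypothetical sketch (not Flextype's real API) of evaluating an unlimited
# list of and_where / or_where conditions against entries.
import operator

OPS = {"=": operator.eq, "!=": operator.ne, ">": operator.gt, "<": operator.lt}

def matches(entry, and_where=(), or_where=()):
    # Every and_where condition must hold; at least one or_where must hold
    # (an empty or_where list imposes no constraint).
    and_ok = all(OPS[op](entry.get(key), val) for key, op, val in and_where)
    or_ok = any(OPS[op](entry.get(key), val) for key, op, val in or_where) if or_where else True
    return and_ok and or_ok

entries = [
    {"title": "Post 1", "published": True, "views": 10},
    {"title": "Post 2", "published": False, "views": 99},
]

result = [e for e in entries
          if matches(e, and_where=[("published", "=", True)],
                        or_where=[("views", ">", 5), ("views", "<", 2)])]
print([e["title"] for e in result])  # → ['Post 1']
```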
7,349 | 6,916,269,003 | IssuesEvent | 2017-11-29 01:35:24 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Fix traceback error output from test_realm_scenarios | area: testing-infrastructure bug priority: high | I think this error output is the result of our having recently changed the queue processors to run the `consume` methods when `queue_json_publish` is called in tests. Not sure yet.
```
Running zerver.tests.test_messages.TestCrossRealmPMs.test_realm_scenarios
2017-10-27 23:29:50.575 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.645 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.713 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.775 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
``` | 1.0 | Fix traceback error output from test_realm_scenarios - I think this error output is the result of our having recently changed the queue processors to run the `consume` methods when `queue_json_publish` is called in tests. Not sure yet.
```
Running zerver.tests.test_messages.TestCrossRealmPMs.test_realm_scenarios
2017-10-27 23:29:50.575 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.645 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.713 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.775 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
``` | non_priority | fix traceback error output from test realm scenarios i think this error output is the result of our having recently changed the queue processors to run the consume methods when queue json publish is called in tests not sure yet running zerver tests test messages testcrossrealmpms test realm scenarios err error queueing internal message by welcome bot zulip com you can t send private messages outside of your organization traceback most recent call last file home tabbott zulip zerver lib actions py line in check message forwarder user profile sender file home tabbott zulip zerver lib actions py line in recipient for user profiles recipient profile ids validate recipient user profiles user profiles sender file home tabbott zulip zerver lib actions py line in validate recipient user profiles raise validationerror you can t send private messages outside of your organization django core exceptions validationerror during handling of the above exception another exception occurred traceback most recent call last file home tabbott zulip zerver lib actions py line in internal prep message content realm realm file home tabbott zulip zerver lib actions py line in check message raise jsonableerror e messages zerver lib exceptions jsonableerror you can t send private messages outside of your organization err error queueing internal message by welcome bot zulip com you can t send private messages outside of your organization traceback most recent call last file home tabbott zulip zerver lib actions py line in check message forwarder user profile sender file home tabbott zulip zerver lib actions py line in recipient for user profiles recipient profile ids validate recipient user profiles user profiles sender file home tabbott zulip zerver lib actions py line in validate recipient user profiles raise validationerror you can t send private messages outside of your organization django core exceptions validationerror during handling of the above exception another 
exception occurred traceback most recent call last file home tabbott zulip zerver lib actions py line in internal prep message content realm realm file home tabbott zulip zerver lib actions py line in check message raise jsonableerror e messages zerver lib exceptions jsonableerror you can t send private messages outside of your organization err error queueing internal message by welcome bot zulip com you can t send private messages outside of your organization traceback most recent call last file home tabbott zulip zerver lib actions py line in check message forwarder user profile sender file home tabbott zulip zerver lib actions py line in recipient for user profiles recipient profile ids validate recipient user profiles user profiles sender file home tabbott zulip zerver lib actions py line in validate recipient user profiles raise validationerror you can t send private messages outside of your organization django core exceptions validationerror during handling of the above exception another exception occurred traceback most recent call last file home tabbott zulip zerver lib actions py line in internal prep message content realm realm file home tabbott zulip zerver lib actions py line in check message raise jsonableerror e messages zerver lib exceptions jsonableerror you can t send private messages outside of your organization err error queueing internal message by welcome bot zulip com you can t send private messages outside of your organization traceback most recent call last file home tabbott zulip zerver lib actions py line in check message forwarder user profile sender file home tabbott zulip zerver lib actions py line in recipient for user profiles recipient profile ids validate recipient user profiles user profiles sender file home tabbott zulip zerver lib actions py line in validate recipient user profiles raise validationerror you can t send private messages outside of your organization django core exceptions validationerror | 0 |
608,571 | 18,842,671,619 | IssuesEvent | 2021-11-11 11:26:13 | googleapis/google-api-python-client | https://api.github.com/repos/googleapis/google-api-python-client | closed | MediaIoBaseDownload next_chunk() Range header off-by-one | type: bug priority: p2 status: investigating | #### Environment details
- OS type and version:
```
Ubuntu 20.04.3 LTS
```
- Python version: `python --version`
```
Python 3.9.7
```
- pip version: `pip --version`
```
pip 21.3.1
```
- `google-api-python-client` version: `pip show google-api-python-client`
```
Name: google-api-python-client
Version: 2.29.0
Summary: Google API Client Library for Python
Home-page: https://github.com/googleapis/google-api-python-client/
Author: Google LLC
Author-email: googleapis-packages@google.com
License: Apache 2.0
Requires: google-api-core, google-auth, google-auth-httplib2, httplib2, uritemplate
```
#### Steps to reproduce
1. Use `MediaIoBaseDownload(..., chunksize=1024)` for downloading a file from Google Drive (see https://developers.google.com/drive/api/v3/manage-downloads#python)
2. Received chunk is 1025 bytes
3. See examples / behavior defined in https://httpwg.org/specs/rfc7233.html#rule.ranges-specifier
#### Code example
```python
import logging
from io import BytesIO
logging.basicConfig(level="DEBUG")
# example / skipping drive, auth setup for obvious reasons
file_id = "foobar123456789"
chunk_size = 1024
file_obj = BytesIO()
request = drive.files().get_media(fileId=file_id)
downloader = MediaIoBaseDownload(file_obj, request, chunksize=chunk_size)
done = False
while not done:
# debug log the lower level HTTP request headers?
status, done = downloader.next_chunk()
logging.debug(
"Download status %r %s/%s bytes (%.1f%%)",
file_id,
status.resumable_progress,
status.total_size,
status.progress() * 100,
)
if not done:
assert file_obj.tell() == status.resumable_progress
assert file_obj.tell() % chunk_size == 0
else:
assert file_obj.tell() == status.total_size
```
In real life the chunk size should be a few megabytes at least. Default seems to be 100MiB:
```
googleapiclient/http.py:DEFAULT_CHUNK_SIZE = 100 * 1024 * 1024
```
#### Stack trace
.
#### Patch
Is it worth making a PR, signing CLA? :D
```diff
diff --git a/googleapiclient/http.py b/googleapiclient/http.py
index 1b661e1b..927464e9 100644
--- a/googleapiclient/http.py
+++ b/googleapiclient/http.py
@@ -733,7 +733,7 @@ class MediaIoBaseDownload(object):
headers = self._headers.copy()
headers["range"] = "bytes=%d-%d" % (
self._progress,
- self._progress + self._chunksize,
+ self._progress + self._chunksize - 1,
)
http = self._request.http
```
| 1.0 | MediaIoBaseDownload next_chunk() Range header off-by-one - #### Environment details
- OS type and version:
```
Ubuntu 20.04.3 LTS
```
- Python version: `python --version`
```
Python 3.9.7
```
- pip version: `pip --version`
```
pip 21.3.1
```
- `google-api-python-client` version: `pip show google-api-python-client`
```
Name: google-api-python-client
Version: 2.29.0
Summary: Google API Client Library for Python
Home-page: https://github.com/googleapis/google-api-python-client/
Author: Google LLC
Author-email: googleapis-packages@google.com
License: Apache 2.0
Requires: google-api-core, google-auth, google-auth-httplib2, httplib2, uritemplate
```
#### Steps to reproduce
1. Use `MediaIoBaseDownload(..., chunksize=1024)` for downloading a file from Google Drive (see https://developers.google.com/drive/api/v3/manage-downloads#python)
2. Received chunk is 1025 bytes
3. See examples / behavior defined in https://httpwg.org/specs/rfc7233.html#rule.ranges-specifier
#### Code example
```python
import logging
from io import BytesIO
logging.basicConfig(level="DEBUG")
# example / skipping drive, auth setup for obvious reasons
file_id = "foobar123456789"
chunk_size = 1024
file_obj = BytesIO()
request = drive.files().get_media(fileId=file_id)
downloader = MediaIoBaseDownload(file_obj, request, chunksize=chunk_size)
done = False
while not done:
# debug log the lower level HTTP request headers?
status, done = downloader.next_chunk()
logging.debug(
"Download status %r %s/%s bytes (%.1f%%)",
file_id,
status.resumable_progress,
status.total_size,
status.progress() * 100,
)
if not done:
assert file_obj.tell() == status.resumable_progress
assert file_obj.tell() % chunk_size == 0
else:
assert file_obj.tell() == status.total_size
```
In real life the chunk size should be a few megabytes at least. Default seems to be 100MiB:
```
googleapiclient/http.py:DEFAULT_CHUNK_SIZE = 100 * 1024 * 1024
```
#### Stack trace
.
#### Patch
Is it worth making a PR, signing CLA? :D
```diff
diff --git a/googleapiclient/http.py b/googleapiclient/http.py
index 1b661e1b..927464e9 100644
--- a/googleapiclient/http.py
+++ b/googleapiclient/http.py
@@ -733,7 +733,7 @@ class MediaIoBaseDownload(object):
headers = self._headers.copy()
headers["range"] = "bytes=%d-%d" % (
self._progress,
- self._progress + self._chunksize,
+ self._progress + self._chunksize - 1,
)
http = self._request.http
```
| priority | mediaiobasedownload next chunk range header off by one environment details os type and version ubuntu lts python version python version python pip version pip version pip google api python client version pip show google api python client name google api python client version summary google api client library for python home page author google llc author email googleapis packages google com license apache requires google api core google auth google auth uritemplate steps to reproduce use mediaiobasedownload chunksize for downloading a file from google drive see received chunk is bytes see examples behavior defined in code example python import logging from io import bytesio logging basicconfig level debug example skipping drive auth setup for obvious reasons file id chunk size file obj bytesio request drive files get media fileid file id downloader mediaiobasedownload file obj request chunksize chunk size done false while not done debug log the lower level http request headers status done downloader next chunk logging debug download status r s s bytes file id status resumable progress status total size status progress if not done assert file obj tell status resumable progress assert file obj tell chunk size else assert file obj tell status total size in real life chunk size should be few megabytes at least default seems to be googleapiclient http py default chunk size stack trace patch is it worth making a pr signing cla d diff diff git a googleapiclient http py b googleapiclient http py index a googleapiclient http py b googleapiclient http py class mediaiobasedownload object headers self headers copy headers bytes d d self progress self progress self chunksize self progress self chunksize http self request http | 1 |
300,825 | 9,212,871,212 | IssuesEvent | 2019-03-10 06:05:19 | CS2103-AY1819S2-W10-1/main | https://api.github.com/repos/CS2103-AY1819S2-W10-1/main | opened | As an offline user, I can view a saved link in a browser while offline | priority.High type.Story | so that I can read the content of links I’ve saved.
Part of epics:
#28 Offline support | 1.0 | As an offline user, I can view a saved link in a browser while offline - so that I can read the content of links I’ve saved.
Part of epics:
#28 Offline support | priority | as an offline user i can view a saved link in a browser while offline so that i can read the content of links i’ve saved part of epics offline support | 1 |
662,998 | 22,159,469,740 | IssuesEvent | 2022-06-04 09:15:19 | application-research/estuary | https://api.github.com/repos/application-research/estuary | closed | Current filecoin-ffi submodule version fails to build on linux-arm machines | Bug Priority: high | Commit version `5d00bb4` of filecoin-ffi does not build on linux arm.
Commit version `8b32252` builds fine but I can't get estuary to build whilst using this version. Help would be greatly appreciated. | 1.0 | Current filecoin-ffi submodule version fails to build on linux-arm machines - Commit version `5d00bb4` of filecoin-ffi does not build on linux arm.
Commit version `8b32252` builds fine but I can't get estuary to build whilst using this version. Help would be greatly appreciated. | priority | current filecoin ffi submodule version fails to build on linux arm machines commit version of filecoin ffi does not build on linux arm commit version builds fine but i can t get estuary to build whilst using this version help would be greatly appreciated | 1 |
609,125 | 18,854,993,713 | IssuesEvent | 2021-11-12 04:18:20 | hoffstadt/DearPyGui | https://api.github.com/repos/hoffstadt/DearPyGui | closed | "Input_float" widget cannot be disabled | state: ready priority: high type: bug | ## Version of Dear PyGui
Version: 1.0.2
Python: 3.9.7
Operating System: SolusOS 64bit
## My Issue
The "input_float" widget cannot be disabled, both via the definition of the widget (see code example), and via the function "disable_item(item).
## To Reproduce
Steps to reproduce the behavior:
```python
import dearpygui.dearpygui as dpg
dpg.create_context()
dpg.create_viewport()
dpg.setup_dearpygui()
with dpg.window(label="tutorial"):
dpg.add_input_float(label="Input float", enabled=False)
dpg.add_input_int(label="Input int", enabled=False)
dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()
```
Greetings
LuminousLizard | 1.0 | "Input_float" widget cannot be disabled - ## Version of Dear PyGui
Version: 1.0.2
Python: 3.9.7
Operating System: SolusOS 64bit
## My Issue
The "input_float" widget cannot be disabled, both via the definition of the widget (see code example), and via the function "disable_item(item).
## To Reproduce
Steps to reproduce the behavior:
```python
import dearpygui.dearpygui as dpg
dpg.create_context()
dpg.create_viewport()
dpg.setup_dearpygui()
with dpg.window(label="tutorial"):
dpg.add_input_float(label="Input float", enabled=False)
dpg.add_input_int(label="Input int", enabled=False)
dpg.show_viewport()
dpg.start_dearpygui()
dpg.destroy_context()
```
Greetings
LuminousLizard | priority | input float widget cannot be disabled version of dear pygui version python operating system solusos my issue the input float widget cannot be disabled both via the definition of the widget see code example and via the function disable item item to reproduce steps to reproduce the behavior python import dearpygui dearpygui as dpg dpg create context dpg create viewport dpg setup dearpygui with dpg window label tutorial dpg add input float label input float enabled false dpg add input int label input int enabled false dpg show viewport dpg start dearpygui dpg destroy context greetings luminouslizard | 1 |
701,143 | 24,088,053,694 | IssuesEvent | 2022-09-19 12:42:53 | AY2223S1-CS2103T-T09-2/tp | https://api.github.com/repos/AY2223S1-CS2103T-T09-2/tp | opened | Update AboutUs page | priority.High task | This page (in the `/docs` folder) is used for module admin purposes.
[ ] Add your own details. Include a suitable photo as described here.
There is no need to mention the tutor/lecturer, but OK to do so too.
The filename of the profile photo should be docs/images/github_username_in_lower_case.png
Note the need for lower case e.g. JohnDoe123 -> docs/images/johndoe123.png not docs/images/JohnDoe123.png.
If your photo is in jpg format, name the file as .png anyway.
[ ] Indicate the different roles played and responsibilities held by each team member. | 1.0 | Update AboutUs page - This page (in the `/docs` folder) is used for module admin purposes.
[ ] Add your own details. Include a suitable photo as described here.
There is no need to mention the tutor/lecturer, but OK to do so too.
The filename of the profile photo should be docs/images/github_username_in_lower_case.png
Note the need for lower case e.g. JohnDoe123 -> docs/images/johndoe123.png not docs/images/JohnDoe123.png.
If your photo is in jpg format, name the file as .png anyway.
[ ] Indicate the different roles played and responsibilities held by each team member. | priority | update aboutus page this page in the docs folder is used for module admin purposes add your own details include a suitable photo as described here there is no need to mention the the tutor lecturer but ok to do so too the filename of the profile photo should be docs images github username in lower case png note the need for lower case e g docs images png not docs images png if your photo is in jpg format name the file as png anyway indicate the different roles played and responsibilities held by each team member | 1 |
416,567 | 12,148,579,972 | IssuesEvent | 2020-04-24 14:48:26 | TykTechnologies/tyk | https://api.github.com/repos/TykTechnologies/tyk | closed | Requesting the notion of "Applications" within the portal | Priority: High customer request enhancement wontfix | **Do you want to request a *feature* or report a *bug*?**
*feature*
**What is the current behavior?**
API consumers are consuming Policies in the portal. Keys are tied to the policy.
**What is the expected behavior?**
Ideally, the following behavior should be accomplished:
✔️ As a consumer, I can sign up for the portal
- As a consumer, I can register an application in the portal
- As a consumer, I can create a key (Application API Key) for the application that I registered in the portal
- As a consumer, I can subscribe to policies for the application
- As a consumer, I can use the Application API Key to access all the APIs that are allowed under the policies which are subscribed to the application
This basically allows a third component (application) which is tied to the keys and policies to be defined in the system. Such that the consumer of the API (developer of third-party applications) can use a single key to subscribe to multiple policies (which has defined access roles to downstream APIs)
Then the API providers/Admins can approve the "subscription" event, rather than approving the request for keys in the dashboard
This allows the API consumer/3rd party developer to manage keys and access to policies, rather than having the API provider provide custom combination policies to the API consumers.
This basically follows how Google Developer Console provisions keys.
You login as a consumer, create an application/project, then get a key, and finally activate/subscribe one or more Google APIs, then that key you've received can access to all the APIs that you've subscribed to. | 1.0 | Requesting the notion of "Applications" within the portal - **Do you want to request a *feature* or report a *bug*?**
*feature*
**What is the current behavior?**
API consumers are consuming Policies in the portal. Keys are tied to the policy.
**What is the expected behavior?**
Ideally, the following behavior should be accomplished:
✔️ As a consumer, I can sign up for the portal
- As a consumer, I can register an application in the portal
- As a consumer, I can create a key (Application API Key) for the application that I registered in the portal
- As a consumer, I can subscribe to policies for the application
- As a consumer, I can use the Application API Key to access all the APIs that are allowed under the policies which are subscribed to the application
This basically allows a third component (application) which is tied to the keys and policies to be defined in the system. Such that the consumer of the API (developer of third-party applications) can use a single key to subscribe to multiple policies (which has defined access roles to downstream APIs)
Then the API providers/Admins can approve the "subscription" event, rather than approving the request for keys in the dashboard
This allows the API consumer/3rd party developer to manage keys and access to policies, rather than having the API provider provide custom combination policies to the API consumers.
This basically follows how Google Developer Console provisions keys.
You login as a consumer, create an application/project, then get a key, and finally activate/subscribe one or more Google APIs, then that key you've received can access to all the APIs that you've subscribed to. | priority | requesting the notion of applications within the portal do you want to request a feature or report a bug feature what is the current behavior api consumers are consuming policies in the portal keys are tied to the policy what is the expected behavior ideally the following behavior should be accomplished ✔️ as a consumer i can sign up for the portal as a consumer i can register an application in the portal as a consumer i can create a key application api key for the application that i registered in the portal as a consumer i can subscribe to policies for the application as a consumer i can use the application api key to access to all the apis that is allowed under the policy which this is subscribed to the application this basically allows a third component application which is tied to the keys and policies to be defined in the system such that the consumer of the api developer of third party applications can use a single key to subscribe to multiple policies which has defined access roles to downstream apis then the api providers admins can approve the subscription event rather than approving the request for keys in the dashboard this allows the api consumer party developer to manage keys access to policies rather than having the api provider to provide custom combination policies to the api consumers this basically follows how google developer console provision keys you login as a consumer create an application project then get a key and finally activate subscribe one or more google apis then that key you ve received can access to all the apis that you ve subscribed to | 1 |
160,489 | 6,098,557,940 | IssuesEvent | 2017-06-20 07:54:39 | McStasMcXtrace/iFit | https://api.github.com/repos/McStasMcXtrace/iFit | closed | Models: Phonons: recent PhonoPy has issue to get mas | bug priority |
```
s=sqw_phonons([ ifitpath 'Data/POSCAR_Al'],'metal','emt');
qh=linspace(0.01,.5,50);qk=qh; ql=qh; w=linspace(0.01,50,51);
f=iData(s,[],qh,qk,ql',w);
```
fails with:
```
Traceback (most recent call last):
File "/tmp/tpea7560eb_ef46_49a5_8071_74f86e168658/sqw_phonons_eval.py", line 17, in <module>
omega_kn, polar_kn = ifit.phonopy_band_structure(ph, HKL, modes=True)
File "/tmp/tpea7560eb_ef46_49a5_8071_74f86e168658/ifit.py", line 1043, in phonopy_band_structure
m_inv_x = numpy.repeat(numpy.sqrt(D._mass), 3)
AttributeError: 'DynamicalMatrix' object has no attribute '_mass'
```
| 1.0 | Models: Phonons: recent PhonoPy has issue to get mas -
```
s=sqw_phonons([ ifitpath 'Data/POSCAR_Al'],'metal','emt');
qh=linspace(0.01,.5,50);qk=qh; ql=qh; w=linspace(0.01,50,51);
f=iData(s,[],qh,qk,ql',w);
```
fails with:
```
Traceback (most recent call last):
File "/tmp/tpea7560eb_ef46_49a5_8071_74f86e168658/sqw_phonons_eval.py", line 17, in <module>
omega_kn, polar_kn = ifit.phonopy_band_structure(ph, HKL, modes=True)
File "/tmp/tpea7560eb_ef46_49a5_8071_74f86e168658/ifit.py", line 1043, in phonopy_band_structure
m_inv_x = numpy.repeat(numpy.sqrt(D._mass), 3)
AttributeError: 'DynamicalMatrix' object has no attribute '_mass'
```
| priority | models phonons recent phonopy has issue to get mas s sqw phonons metal emt qh linspace qk qh ql qh w linspace f idata s qh qk ql w fails with traceback most recent call last file tmp sqw phonons eval py line in omega kn polar kn ifit phonopy band structure ph hkl modes true file tmp ifit py line in phonopy band structure m inv x numpy repeat numpy sqrt d mass attributeerror dynamicalmatrix object has no attribute mass | 1 |
60,885 | 3,135,240,286 | IssuesEvent | 2015-09-10 14:28:14 | minetest/minetest | https://api.github.com/repos/minetest/minetest | closed | Locale directory regression | bug Low priority | On the forums, user mahmutelmas06 [points out](https://forum.minetest.net/viewtopic.php?f=42&p=190382#p190382) that the main menu isn't translated anymore. Most likely his binary has been compiled with `RUN_IN_PLACE=false`, and he experiences the regression of commit 645e2086734e3d2d1ec95f50faa39f0f24304761 that the locale isn't detected anymore if `RUN_IN_PLACE=false` and the locale isn't present at the absolute system directory. Before it worked, even though RUN_IN_PLACE was false.
This issue tracks that regression. | 1.0 | Locale directory regression - On the forums, user mahmutelmas06 [points out](https://forum.minetest.net/viewtopic.php?f=42&p=190382#p190382) that the main menu isn't translated anymore. Most likely his binary has been compiled with `RUN_IN_PLACE=false`, and he experiences the regression of commit 645e2086734e3d2d1ec95f50faa39f0f24304761 that the locale isn't detected anymore if `RUN_IN_PLACE=false` and the locale isn't present at the absolute system directory. Before it worked, even though RUN_IN_PLACE was false.
This issue tracks that regression. | priority | locale directory regression on the forums user that the main menu isn t translated anymore most likely his binary has been compiled with run in place false and he experiences the regression of commit that the locale isn t detected anymore if run in place false and the locale isn t present at the absolute system directory before it worked even though run in place was false this issue tracks that regression | 1 |
550,388 | 16,110,938,741 | IssuesEvent | 2021-04-27 21:05:55 | ExoCTK/exoctk | https://api.github.com/repos/ExoCTK/exoctk | opened | Request development web server | 1: HIGH PRIORITY Tool: exoctk_app | ExoCTK currently has a test and production web server. By adding a development web server, we would be able to test out changes on the fly, and leave the test and production servers to be used only when pushing major releases. | 1.0 | Request development web server - ExoCTK currently has a test and production web server. By adding a development web server, we would be able to test out changes on the fly, and leave the test and production servers to be used only when pushing major releases. | priority | request development web server exoctk currently has a test and production web server by adding a development web server we would be able to test out changes on the fly and leave the test and production servers to be used only when pushing major releases | 1 |
444,439 | 12,812,656,937 | IssuesEvent | 2020-07-04 07:55:21 | shakram02/PiFloor | https://api.github.com/repos/shakram02/PiFloor | closed | Make an emulator for the detection and communication functionality | Medium Priority enhancement | It'll be convenient to make a desktop application that can facilitate testing, which has the following features
1. Websocket & http server
2. Directory to serve the page from
3. A method to select the invisible tile
The downside to the current pipeline is that the person doing development needs to have Android Studio and the web stuff, which isn't cool
1. Websocket & http server
2. Directory to serve the page from
3. A method to select the invisible tile
The downside to the current pipeline is that the person doing development needs to have Android Studio and the web stuff, which isn't cool
718,199 | 24,707,347,544 | IssuesEvent | 2022-10-19 20:18:26 | TimZoet/cppql | https://api.github.com/repos/TimZoet/cppql | closed | Add NULL ordering to OrderBy expression | feature low priority | Add support for ordering NULLS first/last to OrderBy expression. See also [sqlite doc](https://www.sqlite.org/lang_select.html). | 1.0 | Add NULL ordering to OrderBy expression - Add support for ordering NULLS first/last to OrderBy expression. See also [sqlite doc](https://www.sqlite.org/lang_select.html). | priority | add null ordering to orderby expression add support for ordering nulls first last to orderby expression see also | 1 |
232,714 | 7,674,145,340 | IssuesEvent | 2018-05-15 02:08:55 | Ryan6578/TTS-Codenames | https://api.github.com/repos/Ryan6578/TTS-Codenames | closed | Save State | enhancement feature request high priority | On a rewind, load the previous game's save state. This is so that the script doesn't completely break on rewinding. | 1.0 | Save State - On a rewind, load the previous game's save state. This is so that the script doesn't completely break on rewinding. | priority | save state on a rewind load the previous game s save state this is so that the script doesn t completely break on rewinding | 1 |
642,500 | 20,905,694,357 | IssuesEvent | 2022-03-24 01:45:43 | GoogleContainerTools/skaffold | https://api.github.com/repos/GoogleContainerTools/skaffold | reopened | Security Policy violation Binary Artifacts | java priority/p1 kind/todo area/example allstar | _This issue was automatically created by [Allstar](https://github.com/ossf/allstar/)._
**Security Policy Violation**
Project is out of compliance with Binary Artifacts policy: binaries present in source code
**Rule Description**
Binary Artifacts are an increased security risk in your repository. Binary artifacts cannot be reviewed, allowing the introduction of possibly obsolete or maliciously subverted executables. For more information see the [Security Scorecards Documentation](https://github.com/ossf/scorecard/blob/main/docs/checks.md#binary-artifacts) for Binary Artifacts.
**Remediation Steps**
To remediate, remove the generated executable artifacts from the repository.
**Artifacts Found**
- examples/jib-gradle/gradle/wrapper/gradle-wrapper.jar
- examples/jib-multimodule/.mvn/wrapper/maven-wrapper.jar
- examples/jib-sync/.mvn/wrapper/maven-wrapper.jar
- examples/jib-sync/gradle/wrapper/gradle-wrapper.jar
- examples/jib/.mvn/wrapper/maven-wrapper.jar
- integration/examples/jib-gradle/gradle/wrapper/gradle-wrapper.jar
- integration/examples/jib-multimodule/.mvn/wrapper/maven-wrapper.jar
- integration/examples/jib-sync/.mvn/wrapper/maven-wrapper.jar
- integration/examples/jib-sync/gradle/wrapper/gradle-wrapper.jar
- integration/examples/jib/.mvn/wrapper/maven-wrapper.jar
**Additional Information**
This policy is drawn from [Security Scorecards](https://github.com/ossf/scorecard/), which is a tool that scores a project's adherence to security best practices. You may wish to run a Scorecards scan directly on this repository for more details.
---
This issue will auto resolve when the policy is in compliance.
Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer. | 1.0 | Security Policy violation Binary Artifacts - _This issue was automatically created by [Allstar](https://github.com/ossf/allstar/)._
**Security Policy Violation**
Project is out of compliance with Binary Artifacts policy: binaries present in source code
**Rule Description**
Binary Artifacts are an increased security risk in your repository. Binary artifacts cannot be reviewed, allowing the introduction of possibly obsolete or maliciously subverted executables. For more information see the [Security Scorecards Documentation](https://github.com/ossf/scorecard/blob/main/docs/checks.md#binary-artifacts) for Binary Artifacts.
**Remediation Steps**
To remediate, remove the generated executable artifacts from the repository.
**Artifacts Found**
- examples/jib-gradle/gradle/wrapper/gradle-wrapper.jar
- examples/jib-multimodule/.mvn/wrapper/maven-wrapper.jar
- examples/jib-sync/.mvn/wrapper/maven-wrapper.jar
- examples/jib-sync/gradle/wrapper/gradle-wrapper.jar
- examples/jib/.mvn/wrapper/maven-wrapper.jar
- integration/examples/jib-gradle/gradle/wrapper/gradle-wrapper.jar
- integration/examples/jib-multimodule/.mvn/wrapper/maven-wrapper.jar
- integration/examples/jib-sync/.mvn/wrapper/maven-wrapper.jar
- integration/examples/jib-sync/gradle/wrapper/gradle-wrapper.jar
- integration/examples/jib/.mvn/wrapper/maven-wrapper.jar
**Additional Information**
This policy is drawn from [Security Scorecards](https://github.com/ossf/scorecard/), which is a tool that scores a project's adherence to security best practices. You may wish to run a Scorecards scan directly on this repository for more details.
---
This issue will auto resolve when the policy is in compliance.
Issue created by Allstar. See https://github.com/ossf/allstar/ for more information. For questions specific to the repository, please contact the owner or maintainer. | priority | security policy violation binary artifacts this issue was automatically created by security policy violation project is out of compliance with binary artifacts policy binaries present in source code rule description binary artifacts are an increased security risk in your repository binary artifacts cannot be reviewed allowing the introduction of possibly obsolete or maliciously subverted executables for more information see the for binary artifacts remediation steps to remediate remove the generated executable artifacts from the repository artifacts found examples jib gradle gradle wrapper gradle wrapper jar examples jib multimodule mvn wrapper maven wrapper jar examples jib sync mvn wrapper maven wrapper jar examples jib sync gradle wrapper gradle wrapper jar examples jib mvn wrapper maven wrapper jar integration examples jib gradle gradle wrapper gradle wrapper jar integration examples jib multimodule mvn wrapper maven wrapper jar integration examples jib sync mvn wrapper maven wrapper jar integration examples jib sync gradle wrapper gradle wrapper jar integration examples jib mvn wrapper maven wrapper jar additional information this policy is drawn from which is a tool that scores a project s adherence to security best practices you may wish to run a scorecards scan directly on this repository for more details this issue will auto resolve when the policy is in compliance issue created by allstar see for more information for questions specific to the repository please contact the owner or maintainer | 1 |
245,938 | 18,799,131,614 | IssuesEvent | 2021-11-09 04:03:16 | IBM/aihwkit | https://api.github.com/repos/IBM/aihwkit | closed | How to access the source code? | documentation | Hi,
Thanks to the IBM developers for this great package. Can you please help me to access the source code of the package? I could not find the original source code from the docs page. | 1.0 | How to access the source code? - Hi,
Thanks to the IBM developers for this great package. Can you please help me to access the source code of the package? I could not find the original source code from the docs page. | non_priority | how to access the source code hi thanks to the ibm developers for this great package can you please help me to access the source code of the package i could not find the original source code from the docs page | 0 |
457,780 | 13,162,199,875 | IssuesEvent | 2020-08-10 21:05:13 | jenkins-x/jx | https://api.github.com/repos/jenkins-x/jx | closed | Cannot upgrade jx client without a functioning server install | area/cli area/upgrade kind/bug lifecycle/rotten priority/important-longterm | ### Summary
`jx upgrade client` has started failing if there is no active server install associated.
```
➜ jx upgrade cli
error: failed to load TeamSettings: failed to create the jx client: Get https://35.189.218.13/api/v1/namespaces/jx: dial tcp 35.189.218.13:443: connect: connection refused
```
### Jx version
The output of `jx version` is:
```
2.0.866
```
| 1.0 | Cannot upgrade jx client without a functioning server install - ### Summary
`jx upgrade client` has started failing if there is no active server install associated.
```
➜ jx upgrade cli
error: failed to load TeamSettings: failed to create the jx client: Get https://35.189.218.13/api/v1/namespaces/jx: dial tcp 35.189.218.13:443: connect: connection refused
```
### Jx version
The output of `jx version` is:
```
2.0.866
```
| priority | cannot upgrade jx client without a functioning server install summary jx upgrade client has started failing if there is no active server install associated ➜ jx upgrade cli error failed to load teamsettings failed to create the jx client get dial tcp connect connection refused jx version the output of jx version is | 1 |
60,379 | 3,125,879,748 | IssuesEvent | 2015-09-08 04:59:55 | AtlasOfLivingAustralia/biocache-hubs | https://api.github.com/repos/AtlasOfLivingAustralia/biocache-hubs | closed | Print CSS file needed | enhancement in progress priority-high | Printing results page is a mess. Needs a separate print.css file to hide most of the divs so we just get the results list and search meta data.
Should also work with record pages. | 1.0 | Print CSS file needed - Printing results page is a mess. Needs a separate print.css file to hide most of the divs so we just get the results list and search meta data.
Should also work with record pages. | priority | print css file needed printing results page is a mess needs a separate print css file to hide most of the divs so we just get the results list and search meta data should also work with record pages | 1 |
10,676 | 7,274,667,547 | IssuesEvent | 2018-02-21 10:47:49 | gradle/kotlin-dsl | https://api.github.com/repos/gradle/kotlin-dsl | opened | KT-22977 - Kotlin compilation should produce the same result when compiling the same source files | a:bug by:jetbrains in:kt-compiler re:performance | See https://youtrack.jetbrains.com/issue/KT-22977
This may have an impact on embedded script compilation caching and scripts compiled by the `kotlin-gradle-plugin` as in #666 | True | KT-22977 - Kotlin compilation should produce the same result when compiling the same source files - See https://youtrack.jetbrains.com/issue/KT-22977
This may have an impact on embedded script compilation caching and scripts compiled by the `kotlin-gradle-plugin` as in #666 | non_priority | kt kotlin compilation should produce the same result when compiling the same source files see this may have an impact on embedded script compilation caching and scripts compiled by the kotlin gradle plugin as in | 0 |
211,235 | 7,199,468,003 | IssuesEvent | 2018-02-05 16:02:12 | Jumpscale/prefab9 | https://api.github.com/repos/Jumpscale/prefab9 | closed | Allow to use prefab.web.portal.configure() to set membership to distinct value | priority_major state_verification type_feature | #### Installation information
- jumpscale version: 9.2.0
Also see: https://github.com/Jumpscale/prefab9/issues/87
#### Detailed description
Currently if you use `prefab.web.portal.configure()` to configure the OAuth settings of JumpScale Portal you cannot set a value for `memberof` in `oauth.client_scope` that is different from the value that gets set for `oauth.client_scope`.
Here's the example of a what we typically try to achieve:

In this example:
- Only users that are member of the `artilium-dev2.ays-server-clients.ays-portal-users` organization can login to the AYS Portal
- Only clients that have access to an API access key of the `artilium-dev2.ays-server-clients` can access the AYS RESTful server
This example configuration can only be created MANUALLY; `refab.web.portal.configure()` doesn't allow you to do so.
| 1.0 | Allow to use prefab.web.portal.configure() to set membership to distinct value - #### Installation information
- jumpscale version: 9.2.0
Also see: https://github.com/Jumpscale/prefab9/issues/87
#### Detailed description
Currently if you use `prefab.web.portal.configure()` to configure the OAuth settings of JumpScale Portal you cannot set a value for `memberof` in `oauth.client_scope` that is different from the value that gets set for `oauth.client_scope`.
Here's the example of a what we typically try to achieve:

In this example:
- Only users that are member of the `artilium-dev2.ays-server-clients.ays-portal-users` organization can login to the AYS Portal
- Only clients that have access to an API access key of the `artilium-dev2.ays-server-clients` can access the AYS RESTful server
This example configuration can only be created MANUALLY; `refab.web.portal.configure()` doesn't allow you to do so.
| priority | allow to use prefab web portal configure to set membership to distinct value installation information jumpscale version also see detailed description currently if you use prefab web portal configure to configure the oauth settings of jumpscale portal you cannot set a value for memberof in oauth client scope that is different from the value that gets set for oauth client scope here s the example of a what we typically try to achieve in this example only users that are member of the artilium ays server clients ays portal users organization can login to the ays portal only clients that have access to an api access key of the artilium ays server clients can access the ays restful server this example configuration can only be created manually refab web portal configure doesn t allow you to do so | 1 |
17,046 | 10,594,225,259 | IssuesEvent | 2019-10-09 16:18:31 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Code samples not valid for in-page cloud shell | Pri2 cognitive-services/svc cxp docs-experience immersive-reader/subsvc triaged | The code samples on this page are specific to Azure **Powershell**, get the `Try It` buttons around them launch the in-page Azure Cloud Shell **using bash**. This is extremely confusing. Recommend fixing one of these issues so readers can actually utilize the code as they read the doc.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9bc3c5d9-bb8d-a780-8f38-df8941c7917a
* Version Independent ID: 30f0fa59-5c03-2f4f-5ad6-afaae4d7be31
* Content: [Azure Active Directory (Azure AD) authentication - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/immersive-reader/azure-active-directory-authentication)
* Content Source: [articles/cognitive-services/immersive-reader/azure-active-directory-authentication.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/immersive-reader/azure-active-directory-authentication.md)
* Service: **cognitive-services**
* Sub-service: **immersive-reader**
* GitHub Login: @rwaller
* Microsoft Alias: **rwaller** | 1.0 | Code samples not valid for in-page cloud shell - The code samples on this page are specific to Azure **Powershell**, get the `Try It` buttons around them launch the in-page Azure Cloud Shell **using bash**. This is extremely confusing. Recommend fixing one of these issues so readers can actually utilize the code as they read the doc.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9bc3c5d9-bb8d-a780-8f38-df8941c7917a
* Version Independent ID: 30f0fa59-5c03-2f4f-5ad6-afaae4d7be31
* Content: [Azure Active Directory (Azure AD) authentication - Azure Cognitive Services](https://docs.microsoft.com/en-us/azure/cognitive-services/immersive-reader/azure-active-directory-authentication)
* Content Source: [articles/cognitive-services/immersive-reader/azure-active-directory-authentication.md](https://github.com/Microsoft/azure-docs/blob/master/articles/cognitive-services/immersive-reader/azure-active-directory-authentication.md)
* Service: **cognitive-services**
* Sub-service: **immersive-reader**
* GitHub Login: @rwaller
* Microsoft Alias: **rwaller** | non_priority | code samples not valid for in page cloud shell the code samples on this page are specific to azure powershell get the try it buttons around them launch the in page azure cloud shell using bash this is extremely confusing recommend fixing one of these issues so readers can actually utilize the code as they read the doc document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cognitive services sub service immersive reader github login rwaller microsoft alias rwaller | 0 |
383,855 | 26,566,556,107 | IssuesEvent | 2023-01-20 20:52:26 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Props incorrectly documented for FocalPointPicker | [Type] Developer Documentation Good First Issue [Status] In Progress [Package] Components | **Describe the bug**
The docs for `components/focal-point-picker` state that `dimensions` is both a property, and required.
It appears that that prop was replaced with an internal function to get the dimensions from the DOM itself.
This should probably be removed to avoid confusion.
**The relevant portion of the outdated docs:**
https://github.com/WordPress/gutenberg/blob/a57202e635ff0cd190a09128d27951620db9a870/packages/components/src/focal-point-picker/README.md#L52-L56
**The relevant portion of the code:**
https://github.com/WordPress/gutenberg/blob/a57202e635ff0cd190a09128d27951620db9a870/packages/components/src/focal-point-picker/index.js#L57-L63 | 1.0 | Props incorrectly documented for FocalPointPicker - **Describe the bug**
The docs for `components/focal-point-picker` state that `dimensions` is both a property, and required.
It appears that that prop was replaced with an internal function to get the dimensions from the DOM itself.
This should probably be removed to avoid confusion.
**The relevant portion of the outdated docs:**
https://github.com/WordPress/gutenberg/blob/a57202e635ff0cd190a09128d27951620db9a870/packages/components/src/focal-point-picker/README.md#L52-L56
**The relevant portion of the code:**
https://github.com/WordPress/gutenberg/blob/a57202e635ff0cd190a09128d27951620db9a870/packages/components/src/focal-point-picker/index.js#L57-L63 | non_priority | props incorrectly documented for focalpointpicker describe the bug the docs for components focal point picker state that dimensions is both a property and required it appears that that prop was replaced with an internal function to get the dimensions from the dom itself this should probably be removed to avoid confusion the relevant portion of the outdated docs the relevant portion of the code | 0 |
88,241 | 11,047,433,718 | IssuesEvent | 2019-12-09 18:57:56 | codeforboston/plogalong | https://api.github.com/repos/codeforboston/plogalong | opened | Replace nav with svgs | behavior visual design |
Replace nav icons with the following:
https://www.dropbox.com/sh/wrzsq7mntxsny87/AAAW9XL4kVdP_Jf8HbTx-SOua/nav?dl=0
Use purple as selected state, i.e. if I'm on the history page, history icon should be purple | 1.0 | Replace nav with svgs -
Replace nav icons with the following:
https://www.dropbox.com/sh/wrzsq7mntxsny87/AAAW9XL4kVdP_Jf8HbTx-SOua/nav?dl=0
Use purple as selected state, i.e. if I'm on the history page, history icon should be purple | non_priority | replace nav with svgs replace nav icons with the following use purple as selected state i e if i m on the history page history icon should be purple | 0 |
673,693 | 23,027,826,428 | IssuesEvent | 2022-07-22 10:57:41 | microsoft/PowerToys | https://api.github.com/repos/microsoft/PowerToys | closed | Always On Top Square border with Windows 11 | Issue-Bug Resolution-Fix-Committed Area-User Interface Product-Always On Top Priority-2 | ### Microsoft PowerToys version
0.51.1
### Running as admin
- [ ] Yes
### Area(s) with issue?
Always on Top
### Steps to reproduce
Toggle a window to Always On Top on a Windows 11 Build
### ✔️ Expected Behavior
Coloured border should wrap with the curve of the window
### ❌ Actual Behavior
Coloured border is square when the window is round so it looks odd.

### Other Software
_No response_ | 1.0 | Always On Top Square border with Windows 11 - ### Microsoft PowerToys version
0.51.1
### Running as admin
- [ ] Yes
### Area(s) with issue?
Always on Top
### Steps to reproduce
Toggle a window to Always On Top on a Windows 11 Build
### ✔️ Expected Behavior
Coloured border should wrap with the curve of the window
### ❌ Actual Behavior
Coloured border is square when the window is round so it looks odd.

### Other Software
_No response_ | priority | always on top square border with windows microsoft powertoys version running as admin yes area s with issue always on top steps to reproduce toggle a window to always on top on a windows build ✔️ expected behavior coloured border should wrap with the curve of the window ❌ actual behavior coloured border is square when the window is round so it looks odd other software no response | 1 |
1,681 | 24,395,824,043 | IssuesEvent | 2022-10-04 19:09:34 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | Hot threads should treat `other=` time on transport workers as idle time | >enhancement :Core/Infra/Core Team:Core/Infra Supportability | Transport worker threads should basically always be in state `RUNNABLE` which means that today they are reported by the hot threads API as 100% hot even when completely idle:
```
100.0% [cpu=0.0%, other=100.0%] (500ms out of 500ms) cpu usage by thread 'elasticsearch[instance-0000000004][transport_worker][T#1]'
10/10 snapshots sharing following 9 elements
java.base@17.0.2/sun.nio.ch.EPoll.wait(Native Method)
java.base@17.0.2/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:118)
java.base@17.0.2/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:129)
java.base@17.0.2/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:146)
io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:813)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:460)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
java.base@17.0.2/java.lang.Thread.run(Thread.java:833)
```
This is misleading even to experienced engineers, and we have fielded rather a large number of cases and questions that focus inappropriately on this output. There's [docs on the subject](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#modules-network-threading-model) of course but this is only useful for folks that know their existence.
I think we could reduce confusion in this area a great deal if for threads for which `Transports#isTransportThread` returned true we adjusted this output slightly:
- rename the `other=` field to `idle=`
- sort these threads according to the `cpu=` time only rather than the total `RUNNABLE` time.
It is true that these threads might be `RUNNABLE` off the CPU in some other native method (e.g. waiting for disk IO, perhaps related to trace logging) so we do still want to see these numbers, but in practice we never see `other=` time in anything but `EPoll#wait`, so renaming the `other` time to `idle` solves a lot more confusion than it causes. | True | Hot threads should treat `other=` time on transport workers as idle time - Transport worker threads should basically always be in state `RUNNABLE` which means that today they are reported by the hot threads API as 100% hot even when completely idle:
```
100.0% [cpu=0.0%, other=100.0%] (500ms out of 500ms) cpu usage by thread 'elasticsearch[instance-0000000004][transport_worker][T#1]'
10/10 snapshots sharing following 9 elements
java.base@17.0.2/sun.nio.ch.EPoll.wait(Native Method)
java.base@17.0.2/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:118)
java.base@17.0.2/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:129)
java.base@17.0.2/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:146)
io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:813)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:460)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
java.base@17.0.2/java.lang.Thread.run(Thread.java:833)
```
This is misleading even to experienced engineers, and we have fielded rather a large number of cases and questions that focus inappropriately on this output. There's [docs on the subject](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html#modules-network-threading-model) of course but this is only useful for folks that know their existence.
I think we could reduce confusion in this area a great deal if for threads for which `Transports#isTransportThread` returned true we adjusted this output slightly:
- rename the `other=` field to `idle=`
- sort these threads according to the `cpu=` time only rather than the total `RUNNABLE` time.
It is true that these threads might be `RUNNABLE` off the CPU in some other native method (e.g. waiting for disk IO, perhaps related to trace logging) so we do still want to see these numbers, but in practice we never see `other=` time in anything but `EPoll#wait`, so renaming the `other` time to `idle` solves a lot more confusion than it causes. | non_priority | hot threads should treat other time on transport workers as idle time transport worker threads should basically always be in state runnable which means that today they are reported by the hot threads api as hot even when completely idle out of cpu usage by thread elasticsearch snapshots sharing following elements java base sun nio ch epoll wait native method java base sun nio ch epollselectorimpl doselect epollselectorimpl java java base sun nio ch selectorimpl lockanddoselect selectorimpl java java base sun nio ch selectorimpl select selectorimpl java io netty channel nio nioeventloop select nioeventloop java io netty channel nio nioeventloop run nioeventloop java io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java io netty util internal threadexecutormap run threadexecutormap java java base java lang thread run thread java this is misleading even to experienced engineers and we have fielded rather a large number of cases and questions that focus inappropriately on this output there s of course but this is only useful for folks that know their existence i think we could reduce confusion in this area a great deal if for threads for which transports istransportthread returned true we adjusted this output slightly rename the other field to idle sort these threads according to the cpu time only rather than the total runnable time it is true that these threads might be runnable off the cpu in some other native method e g waiting for disk io perhaps related to trace logging so we do still want to see these numbers but in practice we never see other time in anything but epoll wait so renaming the other time to idle solves a lot more confusion than it causes | 0 |
158,576 | 24,858,631,197 | IssuesEvent | 2022-10-27 06:09:48 | Kanika637/amazon-clone | https://api.github.com/repos/Kanika637/amazon-clone | closed | Subtotal Container floating on top of the footer | bug enhancement good first issue invalid UX Design hacktoberfest | ### Describe the bug
No Response
### Expected behaviour
Expected to be in the same place
### Screenshots
https://user-images.githubusercontent.com/97578285/196969299-3295f2e1-c6d5-4037-9d91-16a201b2915b.mp4
### Additional context
No Response | 1.0 | Subtotal Container floating on top of the footer - ### Describe the bug
No Response
### Expected behaviour
Expected to be in the same place
### Screenshots
https://user-images.githubusercontent.com/97578285/196969299-3295f2e1-c6d5-4037-9d91-16a201b2915b.mp4
### Additional context
No Response | non_priority | subtotal container floating on top of the footer describe the bug no response expected behaviour expected to be in the same place screenshots additional context no response | 0 |
401,120 | 27,317,459,079 | IssuesEvent | 2023-02-24 16:48:06 | vercel/next.js | https://api.github.com/repos/vercel/next.js | opened | Docs: Beta Docs JSON-LD recommendation is insecure by default | template: documentation | ### What is the improvement or update you wish to see?
The beta SEO docs suggest `dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}`.
Turns out this is vulnerable to XSS and not actually compliant with JSON-LD. e.g. a `name: "abc</script><MOREHTML>"` is valid JSON that stringifies without an escaped `</script>`. The JSON-LD spec gets away with this by adding extra escaping rules for any JSON-LD included inline in an HTML document.
I saw this in google/react-schemaorg#9 and solved this in google/react-schemaorg#10
Either recommend a package that does this already (e.g. my own google/react-schemaorg) or by including a sample of what the escaping looks like (you're free to copy the linked PR, it's licensed under Apache 2.0).
### Is there any context that might help us understand?
google/react-schemaorg#10 has a proper fix. The repro in google/react-schemaorg#9 can be triggered from the beta.nextjs.org docs too.
(I can't find where the beta docs live, otherwise I would have contributed a fix myself)
### Does the docs page already exist? Please link to it.
https://beta.nextjs.org/docs/guides/seo#json-ld | 1.0 | Docs: Beta Docs JSON-LD recommendation is insecure by default - ### What is the improvement or update you wish to see?
The beta SEO docs suggest `dangerouslySetInnerHTML={{ __html: JSON.stringify(jsonLd) }}`.
Turns out this is vulnerable to XSS and not actually compliant with JSON-LD. e.g. a `name: "abc</script><MOREHTML>"` is valid JSON that stringifies without an escaped `</script>`. The JSON-LD spec gets away with this by adding extra escaping rules for any JSON-LD included inline in an HTML document.
I saw this in google/react-schemaorg#9 and solved this in google/react-schemaorg#10
Either recommend a package that does this already (e.g. my own google/react-schemaorg) or by including a sample of what the escaping looks like (you're free to copy the linked PR, it's licensed under Apache 2.0).
### Is there any context that might help us understand?
google/react-schemaorg#10 has a proper fix. The repro in google/react-schemaorg#9 can be triggered from the beta.nextjs.org docs too.
(I can't find where the beta docs live, otherwise I would have contributed a fix myself)
### Does the docs page already exist? Please link to it.
https://beta.nextjs.org/docs/guides/seo#json-ld | non_priority | docs beta docs json ld recommendation is insecure by default what is the improvement or update you wish to see the beta seo docs suggest dangerouslysetinnerhtml html json stringify jsonld turns out this is vulnerable to xss and not actually compliant with json ld e g a name abc is valid json that stringifies without an escaped the json ld spec gets away with this by adding extra escaping rules for any json ld included inline in an html document i saw this in google react schemaorg and solved this in google react schemaorg either recommend a package that does this already e g my own google react schemaorg or by including a sample of what the escaping looks like you re free to copy the linked pr it s licensed under apache is there any context that might help us understand google react schemaorg has a proper fix the repro in google react schemaorg can be triggered from the beta nextjs org docs too i can t find where the beta docs live otherwise i would have contributed a fix myself does the docs page already exist please link to it | 0 |
45,158 | 11,597,056,586 | IssuesEvent | 2020-02-24 20:05:42 | dotnet/source-build | https://api.github.com/repos/dotnet/source-build | opened | Transport package creation | area-build area-infra | - Add transport package generation/publishing logic
- Choose a leaf repo to initially generate transport package
- Roll out to all repos from leaves up to core-sdk | 1.0 | Transport package creation - - Add transport package generation/publishing logic
- Choose a leaf repo to initially generate transport package
- Roll out to all repos from leaves up to core-sdk | non_priority | transport package creation add transport package generation publishing logic choose a leaf repo to initially generate transport package roll out to all repos from leaves up to core sdk | 0 |
59,997 | 17,023,307,436 | IssuesEvent | 2021-07-03 01:20:47 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Grey background on living_street | Component: osmarender Priority: minor Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 4.46pm, Wednesday, 8th October 2008]**
The end of a "living_street" tagged highway looks a bit strange, it has got an grey edge, no other "highway" shows this behavior, so there might be something wrong.
You can see it here:
http://www.openstreetmap.org/?lat=51.11222&lon=13.65293&zoom=17&layers=0B00FTF
the end of "Am Gymnasium" | 1.0 | Grey background on living_street - **[Submitted to the original trac issue database at 4.46pm, Wednesday, 8th October 2008]**
The end of a "living_street" tagged highway looks a bit strange, it has got an grey edge, no other "highway" shows this behavior, so there might be something wrong.
You can see it here:
http://www.openstreetmap.org/?lat=51.11222&lon=13.65293&zoom=17&layers=0B00FTF
the end of "Am Gymnasium" | non_priority | grey background on living street the end of a living street tagged highway looks a bit strange it has got an grey edge no other highway shows this behavior so there might be something wrong you can see it here the end of am gymnasium | 0 |
424,586 | 12,313,364,510 | IssuesEvent | 2020-05-12 15:12:27 | balena-io/balena-supervisor | https://api.github.com/repos/balena-io/balena-supervisor | reopened | Send an event when we poll with details of whether it gave us a new target state | Medium Priority area/state-engine type/enhancement | Using this information, we can decrease the poll interval if we find that it's rarely helpful, but currently we have no information on it. | 1.0 | Send an event when we poll with details of whether it gave us a new target state - Using this information, we can decrease the poll interval if we find that it's rarely helpful, but currently we have no information on it. | priority | send an event when we poll with details of whether it gave us a new target state using this information we can decrease the poll interval if we find that it s rarely helpful but currently we have no information on it | 1 |
594,823 | 18,054,842,896 | IssuesEvent | 2021-09-20 06:38:31 | nmrih/source-game | https://api.github.com/repos/nmrih/source-game | closed | [public-1.11.4] nmo_underground ; key item into the elevator before activating, the game get stuck | Status: Reviewed Type: Map Priority: Minimal | I can not get pliers.

| 1.0 | [public-1.11.4] nmo_underground ; key item into the elevator before activating, the game get stuck - I can not get pliers.

| priority | nmo underground key item into the elevator before activating the game get stuck i can not get pliers | 1 |
95,427 | 19,698,557,324 | IssuesEvent | 2022-01-12 14:34:31 | google/web-stories-wp | https://api.github.com/repos/google/web-stories-wp | closed | Code Quality: Consolidate useCombinedRefs & useComposeRefs usage | P2 Type: Code Quality Pod: WP & Infra | These two hooks seem to duplicate the same functionality. This ticket is to just use one of them and replace the other everywhere else.
https://github.com/google/web-stories-wp/blob/main/packages/react/src/useCombinedRefs.js
https://github.com/google/web-stories-wp/blob/main/packages/design-system/src/utils/useComposeRefs.js
Probably best to keep whichever hook we consolidate too inside the `/react` package because of the current dependency setup. | 1.0 | Code Quality: Consolidate useCombinedRefs & useComposeRefs usage - These two hooks seem to duplicate the same functionality. This ticket is to just use one of them and replace the other everywhere else.
https://github.com/google/web-stories-wp/blob/main/packages/react/src/useCombinedRefs.js
https://github.com/google/web-stories-wp/blob/main/packages/design-system/src/utils/useComposeRefs.js
Probably best to keep whichever hook we consolidate too inside the `/react` package because of the current dependency setup. | non_priority | code quality consolidate usecombinedrefs usecomposerefs usage these two hooks seem to duplicate the same functionality this ticket is to just use one of them and replace the other everywhere else probably best to keep whichever hook we consolidate too inside the react package because of the current dependency setup | 0 |
145,061 | 13,133,307,666 | IssuesEvent | 2020-08-06 20:39:11 | jenkins-infra/jenkins.io | https://api.github.com/repos/jenkins-infra/jenkins.io | closed | Migrate 'Product requirements' page from wiki | documentation | ## Essential information
Page to Migrate: https://wiki.jenkins.io/display/JENKINS/Product+requirements#
Watch the tutorial to learn more about each page migration process:
[](https://www.youtube.com/watch?v=KB-NPlRvLoY)
## Additional information
Note: check jenkins.io first to see if there's content for this already, possibly there's some useful content on the wiki that should be pulled in
This is one of the top most visited page on the wiki according to:
https://wiki.jenkins.io/.well-known/reports/top_urls.txt
It should be improved on and cleaned up, not just imported,
You can use the wiki-exporter to save some time:
https://jenkins-wiki-exporter.jenkins.io/
## Redirecting pages
After it has been migrated a redirect should be setup, by sending a PR to this file:
https://github.com/jenkins-infra/jenkins-infra/blob/staging/dist/profile/templates/confluence/vhost.conf,
see past PRs for examples, or just take a look. | 1.0 | Migrate 'Product requirements' page from wiki - ## Essential information
Page to Migrate: https://wiki.jenkins.io/display/JENKINS/Product+requirements#
Watch the tutorial to learn more about each page migration process:
[](https://www.youtube.com/watch?v=KB-NPlRvLoY)
## Additional information
Note: check jenkins.io first to see if there's content for this already, possibly there's some useful content on the wiki that should be pulled in
This is one of the top most visited page on the wiki according to:
https://wiki.jenkins.io/.well-known/reports/top_urls.txt
It should be improved on and cleaned up, not just imported,
You can use the wiki-exporter to save some time:
https://jenkins-wiki-exporter.jenkins.io/
## Redirecting pages
After it has been migrated a redirect should be setup, by sending a PR to this file:
https://github.com/jenkins-infra/jenkins-infra/blob/staging/dist/profile/templates/confluence/vhost.conf,
see past PRs for examples, or just take a look. | non_priority | migrate product requirements page from wiki essential information page to migrate watch the tutorial to learn more about each page migration process additional information note check jenkins io first to see if there s content for this already possibly there s some useful content on the wiki that should be pulled in this is one of the top most visited page on the wiki according to it should be improved on and cleaned up not just imported you can use the wiki exporter to save some time redirecting pages after it has been migrated a redirect should be setup by sending a pr to this file see past prs for examples or just take a look | 0 |
284,953 | 31,017,874,134 | IssuesEvent | 2023-08-10 01:05:28 | amaybaum-dev/verademo2 | https://api.github.com/repos/amaybaum-dev/verademo2 | opened | jackson-databind-2.11.0.jar: 4 vulnerabilities (highest severity is: 7.5) | Mend: dependency security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (jackson-databind version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-42004](https://www.mend.io/vulnerability-database/CVE-2022-42004) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | jackson-databind-2.11.0.jar | Direct | 2.12.7.1 | ❌ |
| [CVE-2022-42003](https://www.mend.io/vulnerability-database/CVE-2022-42003) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | jackson-databind-2.11.0.jar | Direct | 2.12.7.1 | ❌ |
| [CVE-2020-36518](https://www.mend.io/vulnerability-database/CVE-2020-36518) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | jackson-databind-2.11.0.jar | Direct | 2.12.6.1 | ❌ |
| [CVE-2021-46877](https://www.mend.io/vulnerability-database/CVE-2021-46877) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | jackson-databind-2.11.0.jar | Direct | 2.12.6 | ❌ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-42004</summary>
### Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42004>CVE-2022-42004</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution: 2.12.7.1</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-42003</summary>
### Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. Additional fix version in 2.13.4.1 and 2.12.17.1
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42003>CVE-2022-42003</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution: 2.12.7.1</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2020-36518</summary>
### Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects.
Mend Note: After conducting further research, Mend has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.
<p>Publish Date: 2022-03-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-36518>CVE-2020-36518</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-03-11</p>
<p>Fix Resolution: 2.12.6.1</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2021-46877</summary>
### Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jackson-databind 2.10.x through 2.12.x before 2.12.6 and 2.13.x before 2.13.1 allows attackers to cause a denial of service (2 GB transient heap usage per read) in uncommon situations involving JsonNode JDK serialization.
<p>Publish Date: 2023-03-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-46877>CVE-2021-46877</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2021-46877">https://www.cve.org/CVERecord?id=CVE-2021-46877</a></p>
<p>Release Date: 2023-03-18</p>
<p>Fix Resolution: 2.12.6</p>
</p>
<p></p>
</details> | True | jackson-databind-2.11.0.jar: 4 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (jackson-databind version) | Remediation Possible** |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-42004](https://www.mend.io/vulnerability-database/CVE-2022-42004) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | jackson-databind-2.11.0.jar | Direct | 2.12.7.1 | ❌ |
| [CVE-2022-42003](https://www.mend.io/vulnerability-database/CVE-2022-42003) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | jackson-databind-2.11.0.jar | Direct | 2.12.7.1 | ❌ |
| [CVE-2020-36518](https://www.mend.io/vulnerability-database/CVE-2020-36518) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | jackson-databind-2.11.0.jar | Direct | 2.12.6.1 | ❌ |
| [CVE-2021-46877](https://www.mend.io/vulnerability-database/CVE-2021-46877) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | jackson-databind-2.11.0.jar | Direct | 2.12.6 | ❌ |
<p>**In some cases, Remediation PR cannot be created automatically for a vulnerability despite the availability of remediation</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-42004</summary>
### Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In FasterXML jackson-databind before 2.13.4, resource exhaustion can occur because of a lack of a check in BeanDeserializer._deserializeFromArray to prevent use of deeply nested arrays. An application is vulnerable only with certain customized choices for deserialization.
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42004>CVE-2022-42004</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution: 2.12.7.1</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-42003</summary>
### Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In FasterXML jackson-databind before 2.14.0-rc1, resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting, when the UNWRAP_SINGLE_VALUE_ARRAYS feature is enabled. Additional fix version in 2.13.4.1 and 2.12.17.1
<p>Publish Date: 2022-10-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42003>CVE-2022-42003</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-10-02</p>
<p>Fix Resolution: 2.12.7.1</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2020-36518</summary>
### Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jackson-databind before 2.13.0 allows a Java StackOverflow exception and denial of service via a large depth of nested objects.
Mend Note: After conducting further research, Mend has determined that all versions of com.fasterxml.jackson.core:jackson-databind up to version 2.13.2 are vulnerable to CVE-2020-36518.
<p>Publish Date: 2022-03-11
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-36518>CVE-2020-36518</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-03-11</p>
<p>Fix Resolution: 2.12.6.1</p>
</p>
<p></p>
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2021-46877</summary>
### Vulnerable Library - <b>jackson-databind-2.11.0.jar</b></p>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /app/target/verademo/WEB-INF/lib/jackson-databind-2.11.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.11.0.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
jackson-databind 2.10.x through 2.12.x before 2.12.6 and 2.13.x before 2.13.1 allows attackers to cause a denial of service (2 GB transient heap usage per read) in uncommon situations involving JsonNode JDK serialization.
<p>Publish Date: 2023-03-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-46877>CVE-2021-46877</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
  - Attack Vector: Network
  - Attack Complexity: Low
  - Privileges Required: None
  - User Interaction: None
  - Scope: Unchanged
- Impact Metrics:
  - Confidentiality Impact: None
  - Integrity Impact: None
  - Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2021-46877">https://www.cve.org/CVERecord?id=CVE-2021-46877</a></p>
<p>Release Date: 2023-03-18</p>
<p>Fix Resolution: 2.12.6</p>
</p>
<p></p>
</details> | non_priority | jackson databind jar vulnerabilities highest severity is vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library app target verademo web inf lib jackson databind jar vulnerabilities cve severity cvss dependency type fixed in jackson databind version remediation possible high jackson databind jar direct high jackson databind jar direct high jackson databind jar direct high jackson databind jar direct in some cases remediation pr cannot be created automatically for a vulnerability despite the availability of remediation details cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library app target verademo web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in base branch main vulnerability details in fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in beandeserializer deserializefromarray to prevent use of deeply nested arrays an application is vulnerable only with certain customized choices for deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library app target verademo web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in base branch main vulnerability details in 
fasterxml jackson databind before resource exhaustion can occur because of a lack of a check in primitive value deserializers to avoid deep wrapper array nesting when the unwrap single value arrays feature is enabled additional fix version in and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library app target verademo web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in base branch main vulnerability details jackson databind before allows a java stackoverflow exception and denial of service via a large depth of nested objects mend note after conducting further research mend has determined that all versions of com fasterxml jackson core jackson databind up to version are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution cve vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library app target verademo web inf lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in base branch main vulnerability details jackson databind x 
through x before and x before allows attackers to cause a denial of service gb transient heap usage per read in uncommon situations involving jsonnode jdk serialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
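Each row in this preview carries the issue text three times: the raw `body`, a `text_combine` field, and a lowercased `text` field. From the visible rows, `text_combine` looks like nothing more than the issue title and body joined by a hyphen; a minimal sketch of that assumed join (the function name is illustrative, not from the dataset's actual code):

```python
def build_text_combine(title: str, body: str) -> str:
    # Assumption read off the visible rows: text_combine is just
    # "<title> - <body>" with no further cleaning at this stage.
    return f"{title} - {body}"

print(build_text_combine("Add .github/SUPPORT", "Define the section."))
# → Add .github/SUPPORT - Define the section.
```

Every row shown here (e.g. the `Re-Design Claim Rewards in Governance Module` one) is consistent with this simple join.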
225,608 | 7,493,729,922 | IssuesEvent | 2018-04-07 00:16:43 | AdChain/AdChainRegistryDapp | https://api.github.com/repos/AdChain/AdChainRegistryDapp | opened | Re-Design Claim Rewards in Governance Module | Priority: Medium Type: UX Enhancement | # Problem
The claim rewards container in the governance module isn't working correctly. The purpose of the container is to alert the user when they have ADT rewards to claim from them voting on the winning side when a parameter proposal is in question.
# Solution
Foobar | 1.0 | Re-Design Claim Rewards in Governance Module - # Problem
The claim rewards container in the governance module isn't working correctly. The purpose of the container is to alert the user when they have ADT rewards to claim from them voting on the winning side when a parameter proposal is in question.
# Solution
Foobar | priority | re design claim rewards in governance module problem the claim rewards container in the governance module isn t working correctly the purpose of the container is to alert the user when they have adt rewards to claim from them voting on the winning side when a parameter proposal is in question solution foobar | 1 |
184,778 | 32,044,491,569 | IssuesEvent | 2023-09-22 23:08:35 | department-of-veterans-affairs/vets-design-system-documentation | https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation | opened | H1 white font is incorrect | bug platform-design-system-team | # Bug Report
- [ ] I've searched for any related issues and avoided creating a duplicate issue.
## What happened
The `H1 white` is set in `Helvetica` instead of `Bitter` in the Component Library text styles.

## What I expected to happen
The text style needs to be updated to Bitter in the Component Library text style.
## Reproducing
Steps to reproduce:
1. Try to apply the Component Library `H1 white` text style to a text field.
## Urgency
How urgent is this request? Please select the appropriate option below and/or provide details
- [ ] This bug is blocking work currently in progress
- [ ] This bug is affecting work currently in progress but we have a workaround
- [ ] This bug is blocking work planned within the next few sprints
- [ ] This bug is not blocking any work
- [ ] Other
### Details
| 1.0 | H1 white font is incorrect - # Bug Report
- [ ] I've searched for any related issues and avoided creating a duplicate issue.
## What happened
The `H1 white` is set in `Helvetica` instead of `Bitter` in the Component Library text styles.

## What I expected to happen
The text style needs to be updated to Bitter in the Component Library text style.
## Reproducing
Steps to reproduce:
1. Try to apply the Component Library `H1 white` text style to a text field.
## Urgency
How urgent is this request? Please select the appropriate option below and/or provide details
- [ ] This bug is blocking work currently in progress
- [ ] This bug is affecting work currently in progress but we have a workaround
- [ ] This bug is blocking work planned within the next few sprints
- [ ] This bug is not blocking any work
- [ ] Other
### Details
| non_priority | white font is incorrect bug report i've searched for any related issues and avoided creating a duplicate issue what happened the white is set in helvetica instead of bitter in the component library text styles what i expected to happen the text style needs to be updated to bitter in the component library text style reproducing steps to reproduce try to apply the component library white text style to a text field urgency how urgent is this request please select the appropriate option below and or provide details this bug is blocking work currently in progress this bug is affecting work currently in progress but we have a workaround this bug is blocking work planned within the next few sprints this bug is not blocking any work other details | 0 |
675,877 | 23,110,838,410 | IssuesEvent | 2022-07-27 12:54:49 | ChainSafe/forest | https://api.github.com/repos/ChainSafe/forest | closed | Test that `FOREST_CONFIG_PATH` is correctly handled. | Priority: 4 - Low Ready | **Issue summary**
<!-- A clear and concise description of what the task is. -->
Add test to `forest/tests/config_tests.rs` that verifies that the environment variable `FOREST_CONFIG_PATH` is correctly used.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 --> | 1.0 | Test that `FOREST_CONFIG_PATH` is correctly handled. - **Issue summary**
<!-- A clear and concise description of what the task is. -->
Add test to `forest/tests/config_tests.rs` that verifies that the environment variable `FOREST_CONFIG_PATH` is correctly used.
**Other information and links**
<!-- Add any other context or screenshots about the issue here. -->
<!-- Thank you 🙏 --> | priority | test that forest config path is correctly handled issue summary add test to forest tests config tests rs that verifies that the environment variable forest config path is correctly used other information and links | 1 |
126,806 | 12,300,032,811 | IssuesEvent | 2020-05-11 13:23:16 | raini-dev/raini | https://api.github.com/repos/raini-dev/raini | closed | Add .github/SUPPORT | GCPPR documentation good first issue | Define the "Getting Started" section so that it is visible for everyone who is about to create an issue. The minimal scope is covering the process of where to look for known issues and how to pick the right issue template if it is still required. | 1.0 | Add .github/SUPPORT - Define the "Getting Started" section so that it is visible for everyone who is about to create an issue. The minimal scope is covering the process of where to look for known issues and how to pick the right issue template if it is still required. | non_priority | add github support define the getting started section so that it is visible for everyone who is about to create an issue the minimal scope is covering the process of where to look for known issues and how to pick the right issue template if it is still required | 0 |
773,443 | 27,157,781,425 | IssuesEvent | 2023-02-17 09:22:24 | memgraph/docs | https://api.github.com/repos/memgraph/docs | closed | Should we add EXPLAIN to Cypher manual? | priority: medium (missing info) | It seems that the EXPLAIN clause is missing in the Cypher manual. Should we add it? | 1.0 | Should we add EXPLAIN to Cypher manual? - It seems that the EXPLAIN clause is missing in the Cypher manual. Should we add it? | priority | should we add explain to cypher manual it seems that the explain clause is missing in the cypher manual should we add it | 1 |
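The string class near the end of each row (`priority` / `non_priority`) and the final `binary_label` column move together in every visible row; a sketch of that mapping (the helper name is illustrative):

```python
def to_binary_label(label: str) -> int:
    # Rows labelled "priority" end in "| 1 |", "non_priority" rows in "| 0 |".
    return 1 if label == "priority" else 0

print(to_binary_label("priority"), to_binary_label("non_priority"))
# → 1 0
```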
17,491 | 24,105,245,579 | IssuesEvent | 2022-09-20 06:51:07 | Juuxel/Adorn | https://api.github.com/repos/Juuxel/Adorn | closed | Drawers create a hole in the floor and sides of other blocks (Screenshot attached) / Sodium - Iris - Continuity - CITResewn etc. | bug wontfix mod compatibility fabric | ### Adorn version
3.6.1+1.19
### Minecraft version
1.19.2
### Mod loader
Fabric
### Mod loader version
Fabric Loader 0.14.9 - Fabric Api 0.60.0+1.19.2
### Describe the bug
Using the Oak Drawer, it removes a 4x16 pixel part of any floor directly below the protruding drawers, revealing the map below. They also remove sideparts of blocks next to them (look at the kitchen counter top next to the drawer to the left)
Running the game with Sodium / Iris (latest as of August 30) and using all recommended / required mods like Continuity and CITResewn to mimic Optifine capabilities. No other issues have been found so far with Adorn.
Placed the drawer on different surfaces and the result is the same, part of the blocks below disappear.

### To reproduce
Just place the Drawer block somewhere.
I have not tested any other than the Oak Drawers, but I suspect the results to be the same for other types.
### Game logs
_No response_
### Additional context
_No response_ | True | Drawers create a hole in the floor and sides of other blocks (Screenshot attached) / Sodium - Iris - Continuity - CITResewn etc. - ### Adorn version
3.6.1+1.19
### Minecraft version
1.19.2
### Mod loader
Fabric
### Mod loader version
Fabric Loader 0.14.9 - Fabric Api 0.60.0+1.19.2
### Describe the bug
Using the Oak Drawer, it removes a 4x16 pixel part of any floor directly below the protruding drawers, revealing the map below. They also remove sideparts of blocks next to them (look at the kitchen counter top next to the drawer to the left)
Running the game with Sodium / Iris (latest as of August 30) and using all recommended / required mods like Continuity and CITResewn to mimic Optifine capabilities. No other issues have been found so far with Adorn.
Placed the drawer on different surfaces and the result is the same, part of the blocks below disappear.

### To reproduce
Just place the Drawer block somewhere.
I have not tested any other than the Oak Drawers, but I suspect the results to be the same for other types.
### Game logs
_No response_
### Additional context
_No response_ | non_priority | drawers create a hole in the floor and sides of other blocks screenshot attached sodium iris continuity citresewn etc adorn version minecraft version mod loader fabric mod loader version fabric loader fabric api describe the bug using the oak drawer it removes a pixel part of any floor directly below the protruding drawers revealing the map below they also remove sideparts of blocks next to them look at the kitchen counter top next to the drawer to the left running the game with sodium iris latest as of august and using all recommended required mods like continuity and citresewn to mimic optifine capabilities no other issues have been found so far with adorn placed the drawer on different surfaces and the result is the same part of the blocks below disappear to reproduce just place the drawer block somewhere i have not tested any other than the oak drawers but i suspect the results to be the same for other types game logs no response additional context no response | 0 |
636,908 | 20,612,716,184 | IssuesEvent | 2022-03-07 10:12:47 | deltaDAO/mvg-portal | https://api.github.com/repos/deltaDAO/mvg-portal | closed | [Enhancement] Switch to correct network button should have disabled style | Priority: Mid Type: Enhancement | ## Motivation / Problem
When the user is connected through a different network compared to the asset displayed, the button to automatically switch to the correct network looks like it can't be clicked.
<img width="552" alt="Screenshot 2022-03-04 at 11 34 56" src="https://user-images.githubusercontent.com/20192135/156781424-f5628098-7a60-4de6-893e-1f6bc8fb36f1.png">
| 1.0 | [Enhancement] Switch to correct network button should have disabled style - ## Motivation / Problem
When the user is connected through a different network compared to the asset displayed, the button to automatically switch to the correct network looks like it can't be clicked.
<img width="552" alt="Screenshot 2022-03-04 at 11 34 56" src="https://user-images.githubusercontent.com/20192135/156781424-f5628098-7a60-4de6-893e-1f6bc8fb36f1.png">
| priority | switch to correct network button should have disabled style motivation problem when the user is connected through a different network compared to the asset displayed the button to automatically switch to the correct network looks like it can t be clicked img width alt screenshot at src | 1 |
17,218 | 11,801,088,487 | IssuesEvent | 2020-03-18 18:45:22 | MakeInBelgium/babbelbox | https://api.github.com/repos/MakeInBelgium/babbelbox | closed | Add FAQ page | usability | There are a few questions that will need to be answered.
These are already listed in a document: 10 questions and their answers.
See: https://docs.google.com/presentation/d/1UzhwKhPo_CY9kIW727_CnjXgZJkMBVx88KJvqBDf8y0/edit#slide=id.g7f216d806c_19_0
It would be good if these became available in the tool.
Personally I would go for a separate page (new tab). No PDF download and no linking to the presentation. | True | Add FAQ page - There are a few questions that will need to be answered.
These are already listed in a document: 10 questions and their answers.
See: https://docs.google.com/presentation/d/1UzhwKhPo_CY9kIW727_CnjXgZJkMBVx88KJvqBDf8y0/edit#slide=id.g7f216d806c_19_0
It would be good if these became available in the tool.
Personally I would go for a separate page (new tab). No PDF download and no linking to the presentation. | non_priority | add faq page there are a few questions that will need to be answered these are already listed in a document questions and their answers see it would be good if these became available in the tool personally i would go for a separate page new tab no pdf download and no linking to the presentation | 0 |
678,242 | 23,190,839,384 | IssuesEvent | 2022-08-01 12:32:03 | SAP/xsk | https://api.github.com/repos/SAP/xsk | closed | $.util.zip to support $.web.Body as source type | wontfix API priority-low effort-low incomplete | Related to #684
Support providing $.web.Body as source type as shown in the [SAP documentation](https://help.sap.com/doc/3de842783af24336b6305a3c0223a369/2.0.03/en-US/$.util.Zip.html#:~:text=WebRequest/WebResponse%20Body%20Integration).
If instance of $.web.Body just take the arrayBuffer | 1.0 | $.util.zip to support $.web.Body as source type - Related to #684
Support providing $.web.Body as source type as shown in the [SAP documentation](https://help.sap.com/doc/3de842783af24336b6305a3c0223a369/2.0.03/en-US/$.util.Zip.html#:~:text=WebRequest/WebResponse%20Body%20Integration).
If instance of $.web.Body just take the arrayBuffer | priority | util zip to support web body as source type related to support providing web body as source type as shown in the if instance of web body just take the arraybuffer | 1 |
427,674 | 12,397,943,787 | IssuesEvent | 2020-05-21 00:14:40 | eclipse-ee4j/glassfish | https://api.github.com/repos/eclipse-ee4j/glassfish | closed | More than maximum number of characters can be entered for create-file-user | Component: security ERR: Assignee Priority: Minor Stale Type: Bug | More than maximum number of characters can be entered in the UserID, Group List and Password for create-file-user on command or Admin console.
1.Open admin GUI. [http://localhost:4848](http://localhost:4848)
2.Configurations > server-config > Security > Realms > file
3.Click Manage Users button.
4.In File Users, click New button.
5.Enter more than 255 characters in User ID, Group List and Password.
The description says **"Name can be up to 255 characters, must contain only alphanumeric, underscore, dash, or dot characters."**
6.Click OK button.
7.Check the new user. The user name is more than 255 characters.
Similarly, more than the maximum number of characters 255 can be entered for these parameters when create-file-user command is used.
1\. asadmin> create-file-user
2\. Enter the value for the username operand> Enter more than 255 characters
3\. Enter the user password> Enter more than 255 characters
4\. Check the new user.
#### Environment
Windows 7 | 1.0 | More than maximum number of characters can be entered for create-file-user - More than maximum number of characters can be entered in the UserID, Group List and Password for create-file-user on command or Admin console.
1.Open admin GUI. [http://localhost:4848](http://localhost:4848)
2.Configurations > server-config > Security > Realms > file
3.Click Manage Users button.
4.In File Users, click New button.
5.Enter more than 255 characters in User ID, Group List and Password.
The description says **"Name can be up to 255 characters, must contain only alphanumeric, underscore, dash, or dot characters."**
6.Click OK button.
7.Check the new user. The user name is more than 255 characters.
Similarly, more than the maximum number of characters 255 can be entered for these parameters when create-file-user command is used.
1\. asadmin> create-file-user
2\. Enter the value for the username operand> Enter more than 255 characters
3\. Enter the user password> Enter more than 255 characters
4\. Check the new user.
#### Environment
Windows 7 | priority | more than maximum number of characters can be entered for create file user more than maximum number of characters can be entered in the userid group list and password for create file user on command or admin console open admin gui configurations server config security realms file click manage users button in file users click new button enter more than characters in user id group list and password the description says name can be up to characters must contain only alphanumeric underscore dash or dot characters click ok button check the new user the user name is more than characters similarly more than the maximum number of characters can be entered for these parameters when create file user command is used asadmin create file user enter the value for the username operand enter more than characters enter the user password enter more than characters check the new user environment windows | 1 |
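The constraint quoted in the report — up to 255 characters, only alphanumeric, underscore, dash, or dot — can be expressed as a single validation check. The sketch below is illustrative only: the function name and where such a check would live are assumptions, not GlassFish's actual code.

```python
import re

# Allowed characters and length per the rule quoted in the admin GUI
# description above: alphanumeric, underscore, dash, or dot; 1..255 chars.
_NAME_RE = re.compile(r"[A-Za-z0-9_.-]{1,255}")

def is_valid_file_user_name(name: str) -> bool:
    """Return True only if `name` satisfies the documented constraint."""
    return bool(_NAME_RE.fullmatch(name))
```

Wiring a check like this into both the admin GUI and the `create-file-user` command would reject a 256-character name instead of silently accepting it.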
76,668 | 9,479,092,166 | IssuesEvent | 2019-04-20 04:41:56 | merenlab/anvio | https://api.github.com/repos/merenlab/anvio | closed | Adding collections of PCs to pan genomes | design enhancement feature request pangenomic workflow time-out / fade-out | Is there a way to choose and export pangenome collections via the command line? It would be very useful to be able export all PCs that are found in a certain genomes without having to highlight them via the GUI. Highlighting the PCs via the GUI is prone to error. For example, it would be excellent to be able to easily export all PCs found in genomes A,B,C; or the PCs shared by all genomes; or the PCs found in only one genome.
| 1.0 | Adding collections of PCs to pan genomes - Is there a way to choose and export pangenome collections via the command line? It would be very useful to be able export all PCs that are found in a certain genomes without having to highlight them via the GUI. Highlighting the PCs via the GUI is prone to error. For example, it would be excellent to be able to easily export all PCs found in genomes A,B,C; or the PCs shared by all genomes; or the PCs found in only one genome.
| non_priority | adding collections of pcs to pan genomes is there a way to choose and export pangenome collections via the command line it would be very useful to be able export all pcs that are found in a certain genomes without having to highlight them via the gui highlighting the pcs via the gui is prone to error for example it would be excellent to be able to easily export all pcs found in genomes a b c or the pcs shared by all genomes or the pcs found in only one genome | 0 |
54,233 | 11,205,229,983 | IssuesEvent | 2020-01-05 12:50:57 | rubinera1n/blog | https://api.github.com/repos/rubinera1n/blog | opened | LeetCode-102-Binary Tree Level Order Traversal | Algorithm LCT-Breadth-first Search LCT-Depth-first Search LCT-Tree LeetCode-Medium | ERROR: type should be string, got "https://leetcode.com/problems/binary-tree-level-order-traversal/description/\r\n\r\n* algorithms\r\n* Medium (48.77%)\r\n* Source Code: 102.binary-tree-level-order-traversal.py\r\n* Total Accepted: 487K\r\n* Total Submissions: 945.8K\r\n* Testcase Example: '[3,9,20,null,null,15,7]'\r\n\r\nGiven a binary tree, return the level order traversal of its nodes' values. (ie, from left to right, level by level\r\n).\r\n\r\n\r\nFor example:\r\nGiven binary tree `[3,9,20,null,null,15,7]`,\r\n\r\n```\r\n 3\r\n / \\\r\n 9 20\r\n / \\\r\n 15 7\r\n```\r\n\r\nreturn its level order traversal as:\r\n```\r\n[\r\n [3],\r\n [9,20],\r\n [15,7]\r\n]\r\n```\r\n\r\n### C++\r\n```cpp\r\nclass Solution {\r\npublic:\r\n vector<vector<int>> levelOrder(TreeNode* root) {\r\n vector<vector<int>> ans;\r\n DFS(root, 0, ans);\r\n return ans;\r\n }\r\nprivate:\r\n void DFS(TreeNode* root, int depth, vector<vector<int>>& ans) {\r\n if (!root) return;\r\n if (ans.size() <= depth) ans.push_back({});\r\n ans[depth].push_back(root->val);\r\n DFS(root->left, depth+1, ans);\r\n DFS(root->right, depth+1, ans);\r\n }\r\n};\r\n```\r\n\r\n### Python\r\n```python\r\nclass Solution:\r\n \"\"\"\r\n def levelOrder(self, root: TreeNode) -> List[List[int]]:\r\n res = []\r\n self.dfs(root, 0, res)\r\n return res\r\n # DFS\r\n def dfs(self, root, level, res):\r\n if not root:\r\n return \r\n if len(res) < level + 1:\r\n res.append([])\r\n res[level].append(root.val)\r\n self.dfs(root.left, level + 1, res)\r\n self.dfs(root.right, level + 1, res)\r\n \"\"\"\r\n \r\n \r\n def levelOrder(self, root: TreeNode) -> List[List[int]]:\r\n # BFS + queue\r\n res, queue = [], [(root, 0)]\r\n while queue:\r\n curr, level = 
queue.pop(0)\r\n if curr:\r\n if len(res) < level + 1:\r\n res.append([])\r\n res[level].append(curr.val)\r\n queue.append((curr.left, level+1))\r\n queue.append((curr.right, level+1))\r\n return res\r\n```" | 1.0 | LeetCode-102-Binary Tree Level Order Traversal - https://leetcode.com/problems/binary-tree-level-order-traversal/description/
* algorithms
* Medium (48.77%)
* Source Code: 102.binary-tree-level-order-traversal.py
* Total Accepted: 487K
* Total Submissions: 945.8K
* Testcase Example: '[3,9,20,null,null,15,7]'
Given a binary tree, return the level order traversal of its nodes' values. (ie, from left to right, level by level
).
For example:
Given binary tree `[3,9,20,null,null,15,7]`,
```
3
/ \
9 20
/ \
15 7
```
return its level order traversal as:
```
[
[3],
[9,20],
[15,7]
]
```
### C++
```cpp
class Solution {
public:
vector<vector<int>> levelOrder(TreeNode* root) {
vector<vector<int>> ans;
DFS(root, 0, ans);
return ans;
}
private:
void DFS(TreeNode* root, int depth, vector<vector<int>>& ans) {
if (!root) return;
if (ans.size() <= depth) ans.push_back({});
ans[depth].push_back(root->val);
DFS(root->left, depth+1, ans);
DFS(root->right, depth+1, ans);
}
};
```
### Python
```python
class Solution:
"""
def levelOrder(self, root: TreeNode) -> List[List[int]]:
res = []
self.dfs(root, 0, res)
return res
# DFS
def dfs(self, root, level, res):
if not root:
return
if len(res) < level + 1:
res.append([])
res[level].append(root.val)
self.dfs(root.left, level + 1, res)
self.dfs(root.right, level + 1, res)
"""
def levelOrder(self, root: TreeNode) -> List[List[int]]:
# BFS + queue
res, queue = [], [(root, 0)]
while queue:
curr, level = queue.pop(0)
if curr:
if len(res) < level + 1:
res.append([])
res[level].append(curr.val)
queue.append((curr.left, level+1))
queue.append((curr.right, level+1))
return res
``` | non_priority | leetcode binary tree level order traversal algorithms medium source code binary tree level order traversal py total accepted total submissions testcase example given a binary tree return the level order traversal of its nodes values ie from left to right level by level for example given binary tree return its level order traversal as c cpp class solution public vector levelorder treenode root vector ans dfs root ans return ans private void dfs treenode root int depth vector ans if root return if ans size depth ans push back ans push back root val dfs root left depth ans dfs root right depth ans python python class solution def levelorder self root treenode list res self dfs root res return res dfs def dfs self root level res if not root return if len res level res append res append root val self dfs root left level res self dfs root right level res def levelorder self root treenode list bfs queue res queue while queue curr level queue pop if curr if len res level res append res append curr val queue append curr left level queue append curr right level return res | 0 |
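The accepted BFS + queue solution in the record above can be exercised end to end with a minimal `TreeNode` stand-in. This harness is a sketch; it only assumes the `val`/`left`/`right` node shape that LeetCode's scaffolding provides.

```python
from typing import List, Optional

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root: Optional[TreeNode]) -> List[List[int]]:
    # BFS with an explicit queue of (node, level) pairs,
    # mirroring the accepted solution above.
    res, queue = [], [(root, 0)]
    while queue:
        curr, level = queue.pop(0)
        if curr:
            if len(res) < level + 1:
                res.append([])
            res[level].append(curr.val)
            queue.append((curr.left, level + 1))
            queue.append((curr.right, level + 1))
    return res

# Tree [3,9,20,null,null,15,7] from the problem statement.
root = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(level_order(root))  # [[3], [9, 20], [15, 7]]
```

On large trees, `collections.deque` with `popleft()` would avoid the O(n) cost of `list.pop(0)` without changing the logic.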
630,953 | 20,121,708,821 | IssuesEvent | 2022-02-08 03:30:17 | threefoldtech/vgrid | https://api.github.com/repos/threefoldtech/vgrid | opened | get_nodes_by_city_country not working | priority_minor | examples/get_nodes_by_city_country.v
doesn't seem to work
## also need to deal with case insensitivity
- need to make sure, search is all lower case, do we need to change this in DB?
- and we need to lower the name we look for as well to match
| 1.0 | get_nodes_by_city_country not working - examples/get_nodes_by_city_country.v
doesn't seem to work
## also need to deal with case insensitivity
- need to make sure, search is all lower case, do we need to change this in DB?
- and we need to lower the name we look for as well to match
| priority | get nodes by city country not working examples get nodes by city country v doesn t seem to work also need to deal with case insensitivity need to make sure search is all lower case do we need to change this in db and we need to lower the name we look for as well to match | 1 |
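The two bullets in the issue describe a single normalization rule: lower-case the stored value and lower-case the lookup value. A minimal sketch of that rule follows (in Python for brevity — the project itself is written in V, and the function and field names here are illustrative):

```python
def normalize(s: str) -> str:
    # Store and compare city/country names in one canonical form.
    return s.strip().lower()

def find_nodes_by_city(nodes, city: str):
    wanted = normalize(city)  # lower the name we look for...
    # ...and compare against the already-lowered stored value.
    return [n for n in nodes if normalize(n["city"]) == wanted]

nodes = [{"id": 1, "city": "Ghent"}, {"id": 2, "city": "Brussels"}]
print([n["id"] for n in find_nodes_by_city(nodes, "GHENT")])  # [1]
```

If the DB already stores lower-cased names, only the lookup side needs the `normalize` call; otherwise both sides do.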
324,185 | 23,987,478,727 | IssuesEvent | 2022-09-13 20:29:55 | airbytehq/airbyte | https://api.github.com/repos/airbytehq/airbyte | closed | Document the Job Orchestrator. | area/documentation type/enhancement team/oss autoteam team/documentation | ### Tell us about the documentation you'd like us to add or update.
Airbyte Kubernetes has the Job Orchestrator to decouple worker deploys from job execution. We should document this.
### If applicable, add links to the relevant docs that should be updated
- [Link to relevant docs page](https://docs.airbyte.io) | 2.0 | Document the Job Orchestrator. - ### Tell us about the documentation you'd like us to add or update.
Airbyte Kubernetes has the Job Orchestrator to decouple worker deploys from job execution. We should document this.
### If applicable, add links to the relevant docs that should be updated
- [Link to relevant docs page](https://docs.airbyte.io) | non_priority | document the job orchestrator tell us about the documentation you d like us to add or update airbyte kubernetes has the job orchestrator to decouple worker deploys from job execution we should document this if applicable add links to the relevant docs that should be updated | 0 |
1,100 | 2,595,143,282 | IssuesEvent | 2015-02-20 11:52:06 | keyboardsurfer/blinkendroid | https://api.github.com/repos/keyboardsurfer/blinkendroid | opened | Signing off a client does not work | auto-migrated Priority-Medium Type-Defect | ```
I start the client and the server on separate devices.
On the server, the client appears in the list (connectionOpenend() is called).
I exit the client with "Back".
On the server, the client remains in the list (connectionClosed() is NOT called).
```
-----
Original issue reported on code.google.com by `andreas....@gmail.com` on 22 May 2010 at 6:04 | 1.0 | Signing off a client does not work - ```
I start the client and the server on separate devices.
On the server, the client appears in the list (connectionOpenend() is called).
I exit the client with "Back".
On the server, the client remains in the list (connectionClosed() is NOT called).
```
-----
Original issue reported on code.google.com by `andreas....@gmail.com` on 22 May 2010 at 6:04 | non_priority | signing off a client does not work i start the client and the server on separate devices on the server the client appears in the list connectionopenend is called i exit the client with back on the server the client remains in the list connectionclosed is not called original issue reported on code google com by andreas gmail com on may at | 0 |
38,673 | 2,849,991,260 | IssuesEvent | 2015-05-31 06:01:14 | GLolol/PyLink | https://api.github.com/repos/GLolol/PyLink | closed | Add shared functions for sending messages to users (ircmsgs.py) | feature priority:medium | This is a lot simpler than writing `_sendFromUser('PRIVMSG person :hello!')` every time you want to send a message from a pseudoclient. | 1.0 | Add shared functions for sending messages to users (ircmsgs.py) - This is a lot simpler than writing `_sendFromUser('PRIVMSG person :hello!')` every time you want to send a message from a pseudoclient. | priority | add shared functions for sending messages to users ircmsgs py this is a lot simpler than writing sendfromuser privmsg person hello every time you want to send a message from a pseudoclient | 1 |
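A shared helper of the kind this issue asks for might look like the sketch below. The `_sendFromUser('PRIVMSG person :hello!')` call is quoted from the issue; the `message()` wrapper name and the stand-in class are hypothetical, not PyLink's actual API.

```python
class IrcShim:
    """Stand-in for the object that owns _sendFromUser (hypothetical)."""
    def __init__(self):
        self.sent = []

    def _sendFromUser(self, line: str) -> None:
        self.sent.append(line)  # a real server would write to the socket

    def message(self, target: str, text: str) -> None:
        # Shared wrapper so callers never hand-build PRIVMSG lines.
        self._sendFromUser(f"PRIVMSG {target} :{text}")

irc = IrcShim()
irc.message("person", "hello!")
print(irc.sent)  # ['PRIVMSG person :hello!']
```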
673,318 | 22,957,819,855 | IssuesEvent | 2022-07-19 13:07:48 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | closed | SHDGSS odd code | SeverityMedium PriorityLow | This code in SHDGSS seems odd/inefficient and possibly not as intended:
``` cpp
NVR = HCNV( 1 );
for ( N = 1; N <= NumVertInShadowOrClippedSurface; ++N ) {
for ( M = 1; M <= NVR; ++M ) {
if ( std::abs( HCX( M, 1 ) - HCX( N, NS3 ) ) > 6 ) continue;
if ( std::abs( HCY( M, 1 ) - HCY( N, NS3 ) ) > 6 ) continue;
HCX( N, NS3 ) = HCX( M, 1 );
HCY( N, NS3 ) = HCY( M, 1 );
}
}
```
`( N, NS3 )` is fixed in the M loop so the effect is just to assign those HCX/Y elements for the last M that gets past the `abs` checks. If it really needs the last one (doubt it) you could loop in reverse and then break. If it needs any then you can break after the assignments.
Can someone who better understands the intent comment?
| 1.0 | SHDGSS odd code - This code in SHDGSS seems odd/inefficient and possibly not as intended:
``` cpp
NVR = HCNV( 1 );
for ( N = 1; N <= NumVertInShadowOrClippedSurface; ++N ) {
for ( M = 1; M <= NVR; ++M ) {
if ( std::abs( HCX( M, 1 ) - HCX( N, NS3 ) ) > 6 ) continue;
if ( std::abs( HCY( M, 1 ) - HCY( N, NS3 ) ) > 6 ) continue;
HCX( N, NS3 ) = HCX( M, 1 );
HCY( N, NS3 ) = HCY( M, 1 );
}
}
```
`( N, NS3 )` is fixed in the M loop so the effect is just to assign those HCX/Y elements for the last M that gets past the `abs` checks. If it really needs the last one (doubt it) you could loop in reverse and then break. If it needs any then you can break after the assignments.
Can someone who better understands the intent comment?
| priority | shdgss odd code this code in shdgss seems odd inefficient and possibly not as intended cpp nvr hcnv for n n numvertinshadoworclippedsurface n for m m nvr m if std abs hcx m hcx n continue if std abs hcy m hcy n continue hcx n hcx m hcy n hcy m n is fixed in the m loop so the effect is just to assign those hcx y elements for the last m that gets past the abs checks if it really needs the last one doubt it you could loop in reverse and then break if it needs any then you can break after the assignments can someone who better understands the intent comment | 1 |
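The report's suggestion — break out of the inner loop once a vertex has been snapped — is easiest to see in a small sketch. This illustrates the proposed control flow in Python; it is not EnergyPlus source, and the tolerance of 6 is taken from the quoted C++ `abs` checks.

```python
def snap_vertices(targets, references, tol=6):
    """Snap each target (x, y) to the first reference within `tol` on both axes."""
    out = []
    for tx, ty in targets:
        for rx, ry in references:
            if abs(rx - tx) > tol or abs(ry - ty) > tol:
                continue
            tx, ty = rx, ry
            break  # first match wins -- no need to keep scanning
        out.append((tx, ty))
    return out

print(snap_vertices([(10, 10)], [(8, 12), (14, 14)]))  # [(8, 12)]
```

If the last match really were required (the effect of the original nested loop), iterating the references in reverse before breaking would preserve that behavior, as the report notes.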
239,616 | 26,231,972,950 | IssuesEvent | 2023-01-05 01:35:19 | keanhankins/ranger | https://api.github.com/repos/keanhankins/ranger | opened | CVE-2021-35515 (High) detected in commons-compress-1.8.1.jar | security vulnerability | ## CVE-2021-35515 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.8.1.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with compression and archive formats.
These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional Unix Compress and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Path to dependency file: /security-admin/pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/org/apache/commons/commons-compress/1.8.1/commons-compress-1.8.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-compress-1.8.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted 7Z archive, the construction of the list of codecs that decompress an entry can result in an infinite loop. This could be used to mount a denial of service attack against services that use Compress' sevenz package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-35515>CVE-2021-35515</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: 1.21</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | CVE-2021-35515 (High) detected in commons-compress-1.8.1.jar - ## CVE-2021-35515 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.8.1.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with compression and archive formats.
These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional Unix Compress and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Path to dependency file: /security-admin/pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/org/apache/commons/commons-compress/1.8.1/commons-compress-1.8.1.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-compress-1.8.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted 7Z archive, the construction of the list of codecs that decompress an entry can result in an infinite loop. This could be used to mount a denial of service attack against services that use Compress' sevenz package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-35515>CVE-2021-35515</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: 1.21</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | non_priority | cve high detected in commons compress jar cve high severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip lzma xz snappy traditional unix compress and ar cpio jar tar zip dump arj path to dependency file security admin pom xml path to vulnerable library canner repository org apache commons commons compress commons compress jar dependency hierarchy x commons compress jar vulnerable library found in base branch master vulnerability details when reading a specially crafted archive the construction of the list of codecs that decompress an entry can result in an infinite loop this could be used to mount a denial of service attack against services that use compress sevenz package publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue | 0 |
62,816 | 7,646,885,280 | IssuesEvent | 2018-05-09 00:31:22 | plugdj/Issues-and-Reports | https://api.github.com/repos/plugdj/Issues-and-Reports | opened | Join Waitlist button partially obstructed in smaller resolutions | Bug Redesign V1 Web Application | <!-- Examples: [Web App Issue, Mobile App Issue] -->
### Web App Issue
<!-- Examples: [plug.dj Web version 1.5.6.10975], [plug.dj Android Beta Version 2.6.7] -->
#### plug.dj version 1.6.0.11743
<!-- Examples: [Google Nexus 4 Android 5.0.1, Chrome 60.0.3112.66 Beta 64-bit] -->
<!-- HOW TO CHECK: https://github.com/plugdj/Issues-and-Reports/blob/master/.github/CONTRIBUTING.md#versions -->
#### OS/Browser Name and Version
Google Chrome Version 66.0.3359.139 (Official Build) (64-bit)
#### I have tried
- [x] Reloading the page/app
- [x] Clearing (browser) cache, cookies or (mobile) app data
- [x] Switching to the default language (English)
- [x] Disabling all extensions and scripts (browser only)
- [ ] Reinstalling the app (mobile only)
<!-- Please be as descriptive as possible, upload images to https://imgur.com -->
#### Description, Images, Logs
In smaller resolutions, a banner element blocks part of the join button (especially if the button is made longer by a translation). The current DJ avatar canvas also blocks a few pixels on the right side of the button.

##### Steps to reproduce:
1. Resize your browser window to a smaller resolution (like most laptops' page size, 1366x682)
2. Hover the button to see the clickable area.
Other tips: Try changing the language to one where the button is larger, like Portuguese
<!-- Do not post your email address as that could put your account in danger -->
<!-- Examples: [Plug Technologies, Inc.; 23810294; https://plug.dj/@/plug-technologies-inc] -->
#### plug.dj username, profile link or user ID
[@Burkes](https://plug.dj/@/burkes) (3703511) | 1.0 | Join Waitlist button partially obstructed in smaller resolutions - <!-- Examples: [Web App Issue, Mobile App Issue] -->
### Web App Issue
<!-- Examples: [plug.dj Web version 1.5.6.10975], [plug.dj Android Beta Version 2.6.7] -->
#### plug.dj version 1.6.0.11743
<!-- Examples: [Google Nexus 4 Android 5.0.1, Chrome 60.0.3112.66 Beta 64-bit] -->
<!-- HOW TO CHECK: https://github.com/plugdj/Issues-and-Reports/blob/master/.github/CONTRIBUTING.md#versions -->
#### OS/Browser Name and Version
Google Chrome Version 66.0.3359.139 (Official Build) (64-bit)
#### I have tried
- [x] Reloading the page/app
- [x] Clearing (browser) cache, cookies or (mobile) app data
- [x] Switching to the default language (English)
- [x] Disabling all extensions and scripts (browser only)
- [ ] Reinstalling the app (mobile only)
<!-- Please be as descriptive as possible, upload images to https://imgur.com -->
#### Description, Images, Logs
In smaller resolutions, a banner element blocks part of the join button (especially if the button is made longer by a translation). The current DJ avatar canvas also blocks a few pixels on the right side of the button.

##### Steps to reproduce:
1. Resize your browser window to a smaller resolution (like most laptops' page size, 1366x682)
2. Hover the button to see the clickable area.
Other tips: Try changing the language to one where the button is larger, like Portuguese
<!-- Do not post your email address as that could put your account in danger -->
<!-- Examples: [Plug Technologies, Inc.; 23810294; https://plug.dj/@/plug-technologies-inc] -->
#### plug.dj username, profile link or user ID
[@Burkes](https://plug.dj/@/burkes) (3703511) | non_priority | join waitlist button partially obstructed in smaller resolutions web app issue plug dj version os browser name and version google chrome version official build bit i have tried reloading the page app clearing browser cache cookies or mobile app data switching to the default language english disabling all extensions and scripts browser only reinstalling the app mobile only description images logs in smaller resolutions a banner element blocks part of the join button specially if the button is made longer by a translation the current dj avatar canvas also blocks a few pixels on the right side of the button steps to reproduce resize your browser window to a smaller resolution like most laptops s page size hover the button to see the clickable area other tips try changing the language to one where the button is larger like portuguese plug dj username profile link or user id | 0 |
525,238 | 15,241,525,501 | IssuesEvent | 2021-02-19 08:34:38 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | closed | Cleanup VLAN code of service maps | low-priority bug priority ticket | Code that handles service maps with VLANs is full of concatenations and ifs. Example:

All this code must be replaced with library functions:
- `hostinfo2hostkey`
- `hostkey2hostinfo`
- `hostinfo2url`
- `url2hostinfo`
That automatically and transparently take care of the VLAN and possibly append them when necessary. Basically they already implement the logic. | 2.0 | Cleanup VLAN code of service maps - Code that handles service maps with VLANs is full of concatenations and ifs. Example:

All this code must be replaced with library functions:
- `hostinfo2hostkey`
- `hostkey2hostinfo`
- `hostinfo2url`
- `url2hostinfo`
That automatically and transparently take care of the VLAN and possibly append them when necessary. Basically they already implement the logic. | priority | cleanup vlan code of service maps code that handles service maps with vlans is full of concatenations and ifs example all this code must be replaced with library functions that automatically and transparently take care of the vlan and possibly append them when necessary basically they already implement the logic | 1 |
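The four helpers listed above share one convention: a host key is an address with an optional `@vlan` suffix appended only when the VLAN is meaningful. A sketch of that convention (Python for illustration — ntopng's real helpers are Lua/C++, and the exact separator handling here is an assumption):

```python
def hostinfo2hostkey(host: str, vlan: int = 0) -> str:
    # Append the VLAN only when it is meaningful, instead of
    # hand-concatenating "@" .. vlan at every call site.
    return f"{host}@{vlan}" if vlan else host

def hostkey2hostinfo(key: str):
    # Inverse: split "host@vlan" back into its parts.
    host, sep, vlan = key.partition("@")
    return host, int(vlan) if sep else 0

print(hostinfo2hostkey("192.168.1.1", 10))  # 192.168.1.1@10
print(hostkey2hostinfo("192.168.1.1@10"))   # ('192.168.1.1', 10)
print(hostkey2hostinfo("192.168.1.1"))      # ('192.168.1.1', 0)
```

Centralizing the rule this way is exactly what removes the repeated concatenations and `if` checks shown in the screenshot.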
184,366 | 31,864,350,659 | IssuesEvent | 2023-09-15 13:11:13 | solid-design-system/solid | https://api.github.com/repos/solid-design-system/solid | opened | feat: ✨ new headline icon size | 🎨 needs design | ## User Story
As a user of the Solid Design System, I would like to use another icon size in the headline, so that the space between the icon and the second row of the headline is increased, and readability is improved.
<img width="666" alt="CleanShot 2023-09-07 at 11 48 00@2x" src="https://github.com/solid-design-system/solid/assets/26542182/4b8f91ad-5db7-4fad-8f91-e166ee05b2a7">
<img width="703" alt="CleanShot 2023-09-07 at 11 46 19@2x" src="https://github.com/solid-design-system/solid/assets/26542182/a691c6ae-0b21-4449-aea0-dfe8ecc80e4d">
### Context
Currently brand decided on 2 different icon sizes in the headline (4xl & 3xl => 48px, everything else 32px). They also agreed on the minimum size of 32px for content icons.
These decisions would need to be challenged.
## DoR
- [ ] Item has business value
- [ ] Item has been estimated by the team
- [ ] Item is clear and well-defined
- [ ] Item dependencies have been identified
## DoD
- [ ] Documentation has been created/updated (if applicable)
- [ ] Migration Guide has been created/updated (if applicable)
- [ ] Relevant E2E tests (Features, A11y, Bug fixes) are created/updated
- [ ] Relevant stories (Features, A11y) are created/updated
- [ ] Implementation works successfully on `feature` branch
| 1.0 | feat: ✨ new headline icon size - ## User Story
As a user of the Solid Design System, I would like to use another icon size in the headline, so that the space between the icon and the second row of the headline is increased, and readability is improved.
<img width="666" alt="CleanShot 2023-09-07 at 11 48 00@2x" src="https://github.com/solid-design-system/solid/assets/26542182/4b8f91ad-5db7-4fad-8f91-e166ee05b2a7">
<img width="703" alt="CleanShot 2023-09-07 at 11 46 19@2x" src="https://github.com/solid-design-system/solid/assets/26542182/a691c6ae-0b21-4449-aea0-dfe8ecc80e4d">
### Context
Currently brand decided on 2 different icon sizes in the headline (4xl & 3xl => 48px, everything else 32px). They also agreed on the minimum size of 32px for content icons.
These decisions would need to be challenged.
## DoR
- [ ] Item has business value
- [ ] Item has been estimated by the team
- [ ] Item is clear and well-defined
- [ ] Item dependencies have been identified
## DoD
- [ ] Documentation has been created/updated (if applicable)
- [ ] Migration Guide has been created/updated (if applicable)
- [ ] Relevant E2E tests (Features, A11y, Bug fixes) are created/updated
- [ ] Relevant stories (Features, A11y) are created/updated
- [ ] Implementation works successfully on `feature` branch
| non_priority | feat ✨ new headline icon size user story as a user of the solid design system i would like to use another icon size in the headline so that the space between the icon and the second row of the headline is increased and readability is improved img width alt cleanshot at src img width alt cleanshot at src context currently brand decided on different icon sizes in the headline everything else they also agreed on the minimum size of for content icons these decisions would need to be challenged dor item has business value item has been estimated by the team item is clear and well defined item dependencies have been identified dod documentation has been created updated if applicable migration guide has been created updated if applicable relevant tests features bug fixes are created updated relevant stories features are created updated implementation works successfully on feature branch | 0 |
33,372 | 12,216,516,240 | IssuesEvent | 2020-05-01 15:18:17 | robertjfinn/hadoop | https://api.github.com/repos/robertjfinn/hadoop | opened | CVE-2017-15010 (High) detected in tough-cookie-2.2.0.tgz, tough-cookie-0.12.1.tgz | security vulnerability | ## CVE-2017-15010 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tough-cookie-2.2.0.tgz</b>, <b>tough-cookie-0.12.1.tgz</b></p></summary>
<p>
<details><summary><b>tough-cookie-2.2.0.tgz</b></p></summary>
<p>RFC6265 Cookies and Cookie Jar for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.2.0.tgz">https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/npm/node_modules/request/node_modules/tough-cookie/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-1.13.14.tgz (Root Library)
- npm-2.14.10.tgz
- request-2.65.0.tgz
- :x: **tough-cookie-2.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>tough-cookie-0.12.1.tgz</b></p></summary>
<p>RFC6265 Cookies and Cookie Jar for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/tough-cookie/-/tough-cookie-0.12.1.tgz">https://registry.npmjs.org/tough-cookie/-/tough-cookie-0.12.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/leek/node_modules/request/node_modules/tough-cookie/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-1.13.14.tgz (Root Library)
- leek-0.0.18.tgz
- request-2.53.0.tgz
- :x: **tough-cookie-0.12.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/robertjfinn/hadoop/commit/876b3d37847317582197087627081de9f19f88d9">876b3d37847317582197087627081de9f19f88d9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.
<p>Publish Date: 2017-10-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15010>CVE-2017-15010</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-15010">https://nvd.nist.gov/vuln/detail/CVE-2017-15010</a></p>
<p>Release Date: 2017-10-04</p>
<p>Fix Resolution: 2.3.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tough-cookie","packageVersion":"2.2.0","isTransitiveDependency":true,"dependencyTree":"ember-cli:1.13.14;npm:2.14.10;request:2.65.0;tough-cookie:2.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.3"},{"packageType":"javascript/Node.js","packageName":"tough-cookie","packageVersion":"0.12.1","isTransitiveDependency":true,"dependencyTree":"ember-cli:1.13.14;leek:0.0.18;request:2.53.0;tough-cookie:0.12.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.3"}],"vulnerabilityIdentifier":"CVE-2017-15010","vulnerabilityDetails":"A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15010","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2017-15010 (High) detected in tough-cookie-2.2.0.tgz, tough-cookie-0.12.1.tgz - ## CVE-2017-15010 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tough-cookie-2.2.0.tgz</b>, <b>tough-cookie-0.12.1.tgz</b></p></summary>
<p>
<details><summary><b>tough-cookie-2.2.0.tgz</b></p></summary>
<p>RFC6265 Cookies and Cookie Jar for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.2.0.tgz">https://registry.npmjs.org/tough-cookie/-/tough-cookie-2.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/npm/node_modules/request/node_modules/tough-cookie/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-1.13.14.tgz (Root Library)
- npm-2.14.10.tgz
- request-2.65.0.tgz
- :x: **tough-cookie-2.2.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>tough-cookie-0.12.1.tgz</b></p></summary>
<p>RFC6265 Cookies and Cookie Jar for node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/tough-cookie/-/tough-cookie-0.12.1.tgz">https://registry.npmjs.org/tough-cookie/-/tough-cookie-0.12.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/node_modules/leek/node_modules/request/node_modules/tough-cookie/package.json</p>
<p>
Dependency Hierarchy:
- ember-cli-1.13.14.tgz (Root Library)
- leek-0.0.18.tgz
- request-2.53.0.tgz
- :x: **tough-cookie-0.12.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/robertjfinn/hadoop/commit/876b3d37847317582197087627081de9f19f88d9">876b3d37847317582197087627081de9f19f88d9</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.
<p>Publish Date: 2017-10-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15010>CVE-2017-15010</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-15010">https://nvd.nist.gov/vuln/detail/CVE-2017-15010</a></p>
<p>Release Date: 2017-10-04</p>
<p>Fix Resolution: 2.3.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tough-cookie","packageVersion":"2.2.0","isTransitiveDependency":true,"dependencyTree":"ember-cli:1.13.14;npm:2.14.10;request:2.65.0;tough-cookie:2.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.3"},{"packageType":"javascript/Node.js","packageName":"tough-cookie","packageVersion":"0.12.1","isTransitiveDependency":true,"dependencyTree":"ember-cli:1.13.14;leek:0.0.18;request:2.53.0;tough-cookie:0.12.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.3.3"}],"vulnerabilityIdentifier":"CVE-2017-15010","vulnerabilityDetails":"A ReDoS (regular expression denial of service) flaw was found in the tough-cookie module before 2.3.3 for Node.js. An attacker that is able to make an HTTP request using a specially crafted cookie may cause the application to consume an excessive amount of CPU.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-15010","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in tough cookie tgz tough cookie tgz cve high severity vulnerability vulnerable libraries tough cookie tgz tough cookie tgz tough cookie tgz cookies and cookie jar for node js library home page a href path to dependency file tmp ws scm hadoop hadoop yarn project hadoop yarn hadoop yarn ui src main webapp package json path to vulnerable library tmp ws scm hadoop hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules npm node modules request node modules tough cookie package json dependency hierarchy ember cli tgz root library npm tgz request tgz x tough cookie tgz vulnerable library tough cookie tgz cookies and cookie jar for node js library home page a href path to dependency file 
tmp ws scm hadoop hadoop yarn project hadoop yarn hadoop yarn ui src main webapp package json path to vulnerable library tmp ws scm hadoop hadoop yarn project hadoop yarn hadoop yarn ui src main webapp node modules leek node modules request node modules tough cookie package json dependency hierarchy ember cli tgz root library leek tgz request tgz x tough cookie tgz vulnerable library found in head commit a href vulnerability details a redos regular expression denial of service flaw was found in the tough cookie module before for node js an attacker that is able to make an http request using a specially crafted cookie may cause the application to consume an excessive amount of cpu publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a redos regular expression denial of service flaw was found in the tough cookie module before for node js an attacker that is able to make an http request using a specially crafted cookie may cause the application to consume an excessive amount of cpu vulnerabilityurl | 0 |
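The 7.5 score in this report can be reproduced from the listed metrics. The sketch below implements the CVSS v3.0 base-score equations with the metric weights from the FIRST specification; it is an illustration, not code from any of the projects involved, and the `weights` mapping covers only the metric values that appear in the report's vector (AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H).

```python
import math

# CVSS v3.0 base-score formula; weights for the metric values in this report.
weights = {"AV:N": 0.85, "AC:L": 0.77, "PR:N": 0.85, "UI:N": 0.85,
           "C:N": 0.0, "I:N": 0.0, "A:H": 0.56}

def base_score(av, ac, pr, ui, c, i, a, scope_changed=False):
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    raw = impact + exploitability
    if scope_changed:
        raw *= 1.08
    return math.ceil(min(raw, 10) * 10) / 10  # "round up" to one decimal

score = base_score(weights["AV:N"], weights["AC:L"], weights["PR:N"],
                   weights["UI:N"], weights["C:N"], weights["I:N"],
                   weights["A:H"])
print(score)  # 7.5, matching the CVSS 3 score in the report
```

Note how a pure availability impact (C:N/I:N/A:H) over the network with no privileges or interaction lands exactly on the 7.5 reported above.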
78,852 | 15,085,728,722 | IssuesEvent | 2021-02-05 19:08:34 | PyTorchLightning/pytorch-lightning | https://api.github.com/repos/PyTorchLightning/pytorch-lightning | closed | Error while using distributed_backed = "ddp" | DDP bug / fix question with code | My code works perfectly fine with distributed_backend='dp', but fails when I use distributed_backend='ddp' with the following error:
Traceback (most recent call last):
File "/scratch/nvarshn2/explore/test_ddp.py", line 89, in <module>
trainer.fit(model, train_data, val_data)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 510, in fit
results = self.accelerator_backend.train()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 158, in train
results = self.ddp_train(process_idx=self.task_idx, model=model)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 307, in ddp_train
results = self.train_or_test()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in train_or_test
results = self.trainer.train()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 561, in train
self.train_loop.run_training_epoch()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 541, in run_training_epoch
for batch_idx, (batch, is_last_batch) in train_dataloader:
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/profiler/profilers.py", line 85, in profile_iterable
value = next(iterator)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 45, in _with_is_last
it = iter(iterable)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 352, in __iter__
return self._get_iterator()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 801, in __init__
w.start()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/context.py", line 277, in _Popen
return Popen(process_obj)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/popen_fork.py", line 66, in _launch
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
Code:
```python
import os

import torch
from torch.utils.data import Dataset
from pytorch_lightning import LightningModule, Trainer


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def loss(self, batch, prediction):
        # An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
        return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))

    def step(self, x):
        x = self.layer(x)
        out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
        return out

    def training_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = self.loss(batch, output)
        return {"loss": loss}

    def training_step_end(self, training_step_outputs):
        return training_step_outputs

    def training_epoch_end(self, outputs) -> None:
        torch.stack([x["loss"] for x in outputs]).mean()

    def validation_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = self.loss(batch, output)
        return {"x": loss}

    def validation_epoch_end(self, outputs) -> None:
        torch.stack([x['x'] for x in outputs]).mean()

    def test_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = self.loss(batch, output)
        return {"y": loss}

    def test_epoch_end(self, outputs) -> None:
        torch.stack([x["y"] for x in outputs]).mean()

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return [optimizer], [lr_scheduler]


if __name__ == '__main__':
    train_data = torch.utils.data.DataLoader(RandomDataset(32, 64), num_workers=8)
    val_data = torch.utils.data.DataLoader(RandomDataset(32, 64), num_workers=8)

    model = BoringModel()
    trainer = Trainer(
        limit_train_batches=1,
        limit_val_batches=1,
        max_epochs=1,
        gpus=-1,
        distributed_backend="ddp",
    )
    trainer.fit(model, train_data, val_data)
```
Note: I am using 4 gpus and a single machine
What could be the reason behind this? | 1.0 | Error while using distributed_backed = "ddp" - My code works perfectly fine with distributed_backend='dp', but fails when I use distributed_backend='ddp' with the following error:
Traceback (most recent call last):
File "/scratch/nvarshn2/explore/test_ddp.py", line 89, in <module>
trainer.fit(model, train_data, val_data)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 510, in fit
results = self.accelerator_backend.train()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 158, in train
results = self.ddp_train(process_idx=self.task_idx, model=model)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 307, in ddp_train
results = self.train_or_test()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in train_or_test
results = self.trainer.train()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 561, in train
self.train_loop.run_training_epoch()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 541, in run_training_epoch
for batch_idx, (batch, is_last_batch) in train_dataloader:
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/profiler/profilers.py", line 85, in profile_iterable
value = next(iterator)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 45, in _with_is_last
it = iter(iterable)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 352, in __iter__
return self._get_iterator()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 801, in __init__
w.start()
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/context.py", line 277, in _Popen
return Popen(process_obj)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/nvarshn2/.conda/envs/pytorch_lightning_with_deepseed_env/lib/python3.6/multiprocessing/popen_fork.py", line 66, in _launch
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
Code:
```python
import os

import torch
from torch.utils.data import Dataset
from pytorch_lightning import LightningModule, Trainer


class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


class BoringModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def loss(self, batch, prediction):
        # An arbitrary loss to have a loss that updates the model weights during `Trainer.fit` calls
        return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))

    def step(self, x):
        x = self.layer(x)
        out = torch.nn.functional.mse_loss(x, torch.ones_like(x))
        return out

    def training_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = self.loss(batch, output)
        return {"loss": loss}

    def training_step_end(self, training_step_outputs):
        return training_step_outputs

    def training_epoch_end(self, outputs) -> None:
        torch.stack([x["loss"] for x in outputs]).mean()

    def validation_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = self.loss(batch, output)
        return {"x": loss}

    def validation_epoch_end(self, outputs) -> None:
        torch.stack([x['x'] for x in outputs]).mean()

    def test_step(self, batch, batch_idx):
        output = self.layer(batch)
        loss = self.loss(batch, output)
        return {"y": loss}

    def test_epoch_end(self, outputs) -> None:
        torch.stack([x["y"] for x in outputs]).mean()

    def configure_optimizers(self):
        optimizer = torch.optim.SGD(self.layer.parameters(), lr=0.1)
        lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1)
        return [optimizer], [lr_scheduler]


if __name__ == '__main__':
    train_data = torch.utils.data.DataLoader(RandomDataset(32, 64), num_workers=8)
    val_data = torch.utils.data.DataLoader(RandomDataset(32, 64), num_workers=8)

    model = BoringModel()
    trainer = Trainer(
        limit_train_batches=1,
        limit_val_batches=1,
        max_epochs=1,
        gpus=-1,
        distributed_backend="ddp",
    )
    trainer.fit(model, train_data, val_data)
```
Note: I am using 4 gpus and a single machine
What could be the reason behind this? | non_priority | error while using distributed backed ddp my code works perfectly fine with distributed backend dp but fails when i use distributed backend ddp with the following error traceback most recent call last file scratch explore test ddp py line in trainer fit model train data val data file home conda envs pytorch lightning with deepseed env lib site packages pytorch lightning trainer trainer py line in fit results self accelerator backend train file home conda envs pytorch lightning with deepseed env lib site packages pytorch lightning accelerators ddp accelerator py line in train results self ddp train process idx self task idx model model file home conda envs pytorch lightning with deepseed env lib site packages pytorch lightning accelerators ddp accelerator py line in ddp train results self train or test file home conda envs pytorch lightning with deepseed env lib site packages pytorch lightning accelerators accelerator py line in train or test results self trainer train file home conda envs pytorch lightning with deepseed env lib site packages pytorch lightning trainer trainer py line in train self train loop run training epoch file home conda envs pytorch lightning with deepseed env lib site packages pytorch lightning trainer training loop py line in run training epoch for batch idx batch is last batch in train dataloader file home conda envs pytorch lightning with deepseed env lib site packages pytorch lightning profiler profilers py line in profile iterable value next iterator file home conda envs pytorch lightning with deepseed env lib site packages pytorch lightning trainer connectors data connector py line in with is last it iter iterable file home conda envs pytorch lightning with deepseed env lib site packages torch utils data dataloader py line in iter return self get iterator file home conda envs pytorch lightning with deepseed env lib site packages torch utils data dataloader py line in get iterator 
return multiprocessingdataloaderiter self file home conda envs pytorch lightning with deepseed env lib site packages torch utils data dataloader py line in init w start file home conda envs pytorch lightning with deepseed env lib multiprocessing process py line in start self popen self popen self file home conda envs pytorch lightning with deepseed env lib multiprocessing context py line in popen return default context get context process popen process obj file home conda envs pytorch lightning with deepseed env lib multiprocessing context py line in popen return popen process obj file home conda envs pytorch lightning with deepseed env lib multiprocessing popen fork py line in init self launch process obj file home conda envs pytorch lightning with deepseed env lib multiprocessing popen fork py line in launch self pid os fork oserror cannot allocate memory code import os import torch from torch utils data import dataset from pytorch lightning import lightningmodule trainer class randomdataset dataset def init self size length self len length self data torch randn length size def getitem self index return self data def len self return self len class boringmodel lightningmodule def init self super init self layer torch nn linear def forward self x return self layer x def loss self batch prediction an arbitrary loss to have a loss that updates the model weights during trainer fit calls return torch nn functional mse loss prediction torch ones like prediction def step self x x self layer x out torch nn functional mse loss x torch ones like x return out def training step self batch batch idx output self layer batch loss self loss batch output return loss loss def training step end self training step outputs return training step outputs def training epoch end self outputs none torch stack for x in outputs mean def validation step self batch batch idx output self layer batch loss self loss batch output return x loss def validation epoch end self outputs none torch stack 
for x in outputs mean def test step self batch batch idx output self layer batch loss self loss batch output return y loss def test epoch end self outputs none torch stack for x in outputs mean def configure optimizers self optimizer torch optim sgd self layer parameters lr lr scheduler torch optim lr scheduler steplr optimizer step size return if name main train data torch utils data dataloader randomdataset num workers val data torch utils data dataloader randomdataset num workers model boringmodel trainer trainer limit train batches limit val batches max epochs gpus distributed backend ddp trainer fit model train data val data note i am using gpus and a single machine what could be the reason behind this | 0 |
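One likely reading of the `OSError: [Errno 12] Cannot allocate memory` in this issue: with `ddp`, each of the 4 GPU processes forks its own 8 DataLoader workers, so `os.fork()` runs dozens of times against the machine's available memory. The sketch below is a hypothetical mitigation — the heuristic is an assumption, not a PyTorch or Lightning API — that splits the CPU cores across the DDP processes instead of hard-coding `num_workers=8` everywhere.

```python
import os

# Hypothetical helper: size DataLoader workers per DDP process so the
# total number of forked worker processes stays bounded on one machine.
def workers_per_process(world_size, max_workers=8):
    """Split the available CPU cores across the DDP processes."""
    cpus = os.cpu_count() or 1
    return max(1, min(max_workers, cpus // max(1, world_size)))
```

A caller on the 4-GPU setup from the issue would then use something like `DataLoader(RandomDataset(32, 64), num_workers=workers_per_process(4))` instead of a fixed `num_workers=8` in every process.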
568,052 | 16,945,725,787 | IssuesEvent | 2021-06-28 06:26:22 | nextcloud/mail | https://api.github.com/repos/nextcloud/mail | closed | Blank page on Safari mobile | 1. to develop bug priority:high regression | ### Expected behavior
Happy @LukasReschke
### Actual behavior
@LukasReschke sends me a screenshot from his iPhone showing that the Mail app just renders an empty page
### Mail app
**Mail app version:** v1.10.0 RC1 | 1.0 | Blank page on Safari mobile - ### Expected behavior
Happy @LukasReschke
### Actual behavior
@LukasReschke sends me a screenshot from his iPhone showing that the Mail app just renders an empty page
### Mail app
**Mail app version:** v1.10.0 RC1 | priority | blank page on safari mobile expected behavior happy lukasreschke actual behavior lukasreschke sends me a screenshot from his iphone showing the mail app just renders an empty page mail app mail app version | 1 |
23,735 | 4,971,593,384 | IssuesEvent | 2016-12-05 19:09:01 | tox-dev/tox | https://api.github.com/repos/tox-dev/tox | closed | Eliminate testrun.org from code and docs? | documentation question | @hpk42 the test on testrun.org states that it can be taken over if anyone is interested in it. Toc docs are redirecting to readthedocs.io. Should I replace all references in code and documentation to the corresponding readthedocs.io location? | 1.0 | Eliminate testrun.org from code and docs? - @hpk42 the test on testrun.org states that it can be taken over if anyone is interested in it. Toc docs are redirecting to readthedocs.io. Should I replace all references in code and documentation to the corresponding readthedocs.io location? | non_priority | eliminate testrun org from code and docs the test on testrun org states that it can be taken over if anyone is interested in it toc docs are redirecting to readthedocs io should i replace all references in code and documentation to the corresponding readthedocs io location | 0 |
97,341 | 12,228,434,238 | IssuesEvent | 2020-05-03 19:24:48 | WorldHealthOrganization/app | https://api.github.com/repos/WorldHealthOrganization/app | opened | New News & Press screen | a:ux-design f:client-ui needs:ux-design | New design for this screen
<img width="372" alt="Screen Shot 2020-05-03 at 12 22 31 PM" src="https://user-images.githubusercontent.com/11729521/80923500-0e40c700-8d39-11ea-9182-188eb7384bb7.png">
| 2.0 | New News & Press screen - New design for this screen
<img width="372" alt="Screen Shot 2020-05-03 at 12 22 31 PM" src="https://user-images.githubusercontent.com/11729521/80923500-0e40c700-8d39-11ea-9182-188eb7384bb7.png">
| non_priority | new news press screen new design for this screen img width alt screen shot at pm src | 0 |
93,166 | 11,745,919,239 | IssuesEvent | 2020-03-12 10:41:47 | eventespresso/event-espresso-core | https://api.github.com/repos/eventespresso/event-espresso-core | closed | Application Registry for Services and UI Elements | EDTR v2 type:design 📐 | During discussions regarding the extraction of common logic related to the Entity List Filter Bar components, we realized that some sort of central registry would be best for managing any and all subscription type functionality so that core domain use cases, add-ons, and even third party software, have a consistent and globally available API for doing so.
This issue is for planning the requirements and architecture for this central registry. | 1.0 | Application Registry for Services and UI Elements - During discussions regarding the extraction of common logic related to the Entity List Filter Bar components, we realized that some sort of central registry would be best for managing any and all subscription type functionality so that core domain use cases, add-ons, and even third party software, have a consistent and globally available API for doing so.
This issue is for planning the requirements and architecture for this central registry. | non_priority | application registry for services and ui elements during discussions regarding the extraction of common logic related to the entity list filter bar components we realized that some sort of central registry would be best for managing any and all subscription type functionality so that core domain use cases add ons and even third party software have a consistent and globally available api for doing so this issue is for planning the requirements and architecture for this central registry | 0 |
125,100 | 16,723,520,555 | IssuesEvent | 2021-06-10 10:08:36 | raychenv/blog | https://api.github.com/repos/raychenv/blog | opened | Template of implementation document | design doc documentation | 1. Intro: Scope, ref document, revision history, issues/assumption
2. Requirement analysis: background, Scope analysis, DoD, Dependencies, Priority, requirement to solution, feature/function/ config
3. User case analysis: external, internal
4. Effort/Cost
5. Overview of realization: solution
6. Impact to system behaviors
7. Impact to software
8. Impact to hardware
9. Impact to test
10. Impact to external stakeholders
11. Terminology
12. References | 1.0 | Template of implementation document - 1. Intro: Scope, ref document, revision history, issues/assumption
2. Requirement analysis: background, Scope analysis, DoD, Dependencies, Priority, requirement to solution, feature/function/ config
3. User case analysis: external, internal
4. Effort/Cost
5. Overview of realization: solution
6. Impact to system behaviors
7. Impact to software
8. Impact to hardware
9. Impact to test
10. Impact to external stakeholders
11. Terminology
12. References | non_priority | template of implementation document intro scope ref document revision history issues assumption requirement analysis background scope analysis dod dependencies priority requirement to solution feature function config user case analysis external internal effort cost overview of realization solution impact to system behaviors impact to software impact to hardware impact to test impact to external stakeholders terminology references | 0 |
373,246 | 11,036,124,323 | IssuesEvent | 2019-12-07 18:28:22 | athina-edu/athina | https://api.github.com/repos/athina-edu/athina | closed | Only one student receives feedback on canvas | High Priority bug | Inside of sending the file you can send a text comment (check the public vs not public submit_grade option) and disable the double grade submission for the whole group. Canvas has a flag that in theory will show the grade for both students. | 1.0 | Only one student receives feedback on canvas - Inside of sending the file you can send a text comment (check the public vs not public submit_grade option) and disable the double grade submission for the whole group. Canvas has a flag that in theory will show the grade for both students. | priority | only one student receives feedback on canvas inside of sending the file you can send a text comment check the public vs not public submit grade option and disable the double grade submission for the whole group canvas has a flag that in theory will show the grade for both students | 1 |
58,986 | 3,098,405,341 | IssuesEvent | 2015-08-28 10:46:43 | pombase/pombase-chado | https://api.github.com/repos/pombase/pombase-chado | closed | Fix modelling of exons in Chado | high priority | According to this:
http://gmod.org/wiki/Chado_Best_Practices#Trans-spliced_Gene
we need to store a rank in the feature_relationship table that determines the exon order.
We also need to fix the exon uniquenames. On the reverse strand we currently have exon 1 last. eg.
fmin, fmax, uniquename:
1807717 | 1807781 | SPAC1002.06c.1:exon:3
1807482 | 1807642 | SPAC1002.06c.1:exon:2
1807269 | 1807402 | SPAC1002.06c.1:exon:1
Needed for #172 | 1.0 | Fix modelling of exons in Chado - According to this:
http://gmod.org/wiki/Chado_Best_Practices#Trans-spliced_Gene
we need to store a rank in the feature_relationship table that determines the exon order.
We also need to fix the exon uniquenames. On the reverse strand we currently have exon 1 last. eg.
fmin, fmax, uniquename:
1807717 | 1807781 | SPAC1002.06c.1:exon:3
1807482 | 1807642 | SPAC1002.06c.1:exon:2
1807269 | 1807402 | SPAC1002.06c.1:exon:1
Needed for #172 | priority | fix modelling of exons in chado according to this we need to store a rank in the feature relationship table that determines the exon order we also need to fix the exon uniquenames on the reverse strand we currently have exon last eg fmin fmax uniquename exon exon exon needed for | 1 |
93,540 | 15,893,294,144 | IssuesEvent | 2021-04-11 05:04:07 | wallanpsantos/hr-config-server | https://api.github.com/repos/wallanpsantos/hr-config-server | closed | CVE-2021-25329 (High) detected in tomcat-embed-core-9.0.41.jar | security vulnerability | ## CVE-2021-25329 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.41.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: hr-config-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.41/tomcat-embed-core-9.0.41.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.7.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.7.RELEASE.jar
- :x: **tomcat-embed-core-9.0.41.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wallanpsantos/hr-config-server/commit/f4204bf84bdfa65b59ff4c99045c6542ce2dc905">f4204bf84bdfa65b59ff4c99045c6542ce2dc905</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0 to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9484. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution: org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-25329 (High) detected in tomcat-embed-core-9.0.41.jar - ## CVE-2021-25329 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.41.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: hr-config-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.41/tomcat-embed-core-9.0.41.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.7.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.7.RELEASE.jar
- :x: **tomcat-embed-core-9.0.41.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/wallanpsantos/hr-config-server/commit/f4204bf84bdfa65b59ff4c99045c6542ce2dc905">f4204bf84bdfa65b59ff4c99045c6542ce2dc905</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The fix for CVE-2020-9484 was incomplete. When using Apache Tomcat 10.0.0-M1 to 10.0.0, 9.0.0.M1 to 9.0.41, 8.5.0 to 8.5.61 or 7.0.0 to 7.0.107 with a configuration edge case that was highly unlikely to be used, the Tomcat instance was still vulnerable to CVE-2020-9484. Note that both the previously published prerequisites for CVE-2020-9484 and the previously published mitigations for CVE-2020-9484 also apply to this issue.
<p>Publish Date: 2021-03-01
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25329>CVE-2021-25329</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/rfe62fbf9d4c314f166fe8c668e50e5d9dd882a99447f26f0367474bf%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2021-03-01</p>
<p>Fix Resolution: org.apache.tomcat:tomcat:7.0.108, org.apache.tomcat:tomcat:8.5.63, org.apache.tomcat:tomcat:9.0.43,org.apache.tomcat:tomcat:10.0.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file hr config server pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details the fix for cve was incomplete when using apache tomcat to to to or to with a configuration edge case that was highly unlikely to be used the tomcat instance was still vulnerable to cve note that both the previously published prerequisites for cve and the previously published mitigations for cve also apply to this issue publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat org apache tomcat tomcat step up your open source security game with whitesource | 0 |
95,398 | 10,879,205,157 | IssuesEvent | 2019-11-16 23:27:05 | fused-effects/fused-effects | https://api.github.com/repos/fused-effects/fused-effects | closed | Document WriterC issuing left-associated mappends | documentation | `WriterC` left-nests its `mappend`s, a fact we should document since this is the worst-case for `[a]`, a likely monoid to want to write.
🎩 @lexi-lambda for pointing this out. | 1.0 | Document WriterC issuing left-associated mappends - `WriterC` left-nests its `mappend`s, a fact we should document since this is the worst-case for `[a]`, a likely monoid to want to write.
🎩 @lexi-lambda for pointing this out. | non_priority | document writerc issuing left associated mappends writerc left nests its mappend s a fact we should document since this is the worst case for a likely monoid to want to write 🎩 lexi lambda for pointing this out | 0 |
335,211 | 10,150,507,137 | IssuesEvent | 2019-08-05 17:51:32 | DDMAL/mei-mapping-tool | https://api.github.com/repos/DDMAL/mei-mapping-tool | closed | Drag and drop neumes | enhancement high priority | If a user wants to classify neumes together in a section, they should be able to drag and drop neumes first from different parts in the classifier or drag a neume on top of another neume and create a section for both neumes. (If a neume is added to another neume, a popup will be created to make a section and choose its name). | 1.0 | Drag and drop neumes - If a user wants to classify neumes together in a section, they should be able to drag and drop neumes first from different parts in the classifier or drag a neume on top of another neume and create a section for both neumes. (If a neume is added to another neume, a popup will be created to make a section and choose its name). | priority | drag and drop neumes if a user wants to classify neumes together in a section they should be able to drag and drop neumes first from different parts in the classifier or drag a neume on top of another neume and create a section for both neumes if a neume is added to another neume a popup will be created to make a section and choose its name | 1 |
325,586 | 24,054,815,230 | IssuesEvent | 2022-09-16 15:48:50 | gradle/gradle | https://api.github.com/repos/gradle/gradle | closed | Gradle doc's navigation padding is sometimes not respected | in:documentation-infrastructure stale | ### Expected Behavior
The navigation panel for the [Gradle user manual](https://docs.gradle.org/current/userguide/userguide.html) should have left padding or hide when there is not enough width in the current window.
### Current Behavior
In a certain width of the window, the left padding of the navigation panel is ignored and the text starts at the edge of the window making the first characters on a line hard to read.
### Steps to Reproduce (for bugs)
In a web browser, open the [Gradle user manual](https://docs.gradle.org/current/userguide/userguide.html) and manipulate the size of the webpage to where the padding no longer exists. For me this was a width of 1185 (px?) which is the width of my vertical monitor.
### Your Environment
OS: macOS 10.13.6
Affected browsers: Firefox 62.0.2 (64-bit) and Chrome 69.0.3497.100 (Official Build) (64-bit)
 | 1.0 | Gradle doc's navigation padding is sometimes not respected - ### Expected Behavior
The navigation panel for the [Gradle user manual](https://docs.gradle.org/current/userguide/userguide.html) should have left padding or hide when there is not enough width in the current window.
### Current Behavior
In a certain width of the window, the left padding of the navigation panel is ignored and the text starts at the edge of the window making the first characters on a line hard to read.
### Steps to Reproduce (for bugs)
In a web browser, open the [Gradle user manual](https://docs.gradle.org/current/userguide/userguide.html) and manipulate the size of the webpage to where the padding no longer exists. For me this was a width of 1185 (px?) which is the width of my vertical monitor.
### Your Environment
OS: macOS 10.13.6
Affected browsers: Firefox 62.0.2 (64-bit) and Chrome 69.0.3497.100 (Official Build) (64-bit)
 | non_priority | gradle doc s navigation padding is sometimes not respected expected behavior the navigation panel for the should have left padding or hide when there is not enough width in the current window current behavior in a certain width of the window the left padding of the navigation panel is ignored and the text starts at the edge of the window making the first characters on a line hard to read steps to reproduce for bugs in a web browser open the and manipulate the size of the webpage to where the padding no longer exists for me this was a width of px which is the width of my vertical monitor your environment os macos affected browsers firefox bit and chrome official build bit | 0 |
700,782 | 24,073,199,036 | IssuesEvent | 2022-09-18 13:01:07 | Tachyonite/Pawnmorpher | https://api.github.com/repos/Tachyonite/Pawnmorpher | closed | Wrong pawn referred in bonded thought. | bug fixed in dev low priority | **Describe the bug**
Nina's bonded thought refers to Nina as being with Nina.
**Screenshots**

**Additional context**
https://discord.com/channels/603713246461165578/603713451633803274/997532780000968815 | 1.0 | Wrong pawn referred in bonded thought. - **Describe the bug**
Nina's bonded thought refers to Nina as being with Nina.
**Screenshots**

**Additional context**
https://discord.com/channels/603713246461165578/603713451633803274/997532780000968815 | priority | wrong pawn referred in bonded thought describe the bug nina s bonded thought refers to nina as being with nina screenshots additional context | 1 |
88,535 | 17,604,540,591 | IssuesEvent | 2021-08-17 15:28:26 | github/vscode-codeql | https://api.github.com/repos/github/vscode-codeql | closed | Test compilation errors appear in output diff | bug good first issue VSCode | **Version**
CodeQL CLI v2.5.0
Extension version: v1.4.5
**Describe the bug**
When compilation of a test fails, the extension currently creates an output diff between the expected output and the compilation error, which is pretty pointless and confusing. E.g.:
```
--- expected
+++ actual
@@ -1,12 +1,1 @@
-| Expressions.java:5:9:5:18 | toString(...) |
-| Expressions.java:9:9:9:13 | ...=... |
-| Expressions.java:10:9:10:11 | ...++ |
-| Expressions.java:11:9:11:20 | new Object(...) |
-| Expressions.java:14:9:14:28 | clone(...) |
+ERROR: no viable parse for input 'a', expecting one of : 'boolean', 'class', 'newtype', 'date', 'float', 'from', 'import', 'int', 'module', 'predicate', 'select', 'string', 'where', Lowerid, Upperid, Atlowerid (Expressions.ql:6,1-2)
```
It looks like the underlying issue might be caused by the CodeQL CLI which reports this as `diff`. It appears this is currently 'misused' to perform tests on CodeQL itself, see https://github.com/github/vscode-codeql/issues/608#issuecomment-702933731.
Regardless of whether that is intended or not, the extension could inspect the `failureStage` of the response, only if it is `"failureStage" : "RESULT"` the diff should be shown.
**To Reproduce**
1. Create an `.expected` file
2. Create a query test file containing a compilation error
3. Run `codeql test run --format=json`
**Expected behavior**
Only the compilation error is reported, the diff is not shown.
| 1.0 | Test compilation errors appear in output diff - **Version**
CodeQL CLI v2.5.0
Extension version: v1.4.5
**Describe the bug**
When compilation of a test fails, the extension currently creates an output diff between the expected output and the compilation error, which is pretty pointless and confusing. E.g.:
```
--- expected
+++ actual
@@ -1,12 +1,1 @@
-| Expressions.java:5:9:5:18 | toString(...) |
-| Expressions.java:9:9:9:13 | ...=... |
-| Expressions.java:10:9:10:11 | ...++ |
-| Expressions.java:11:9:11:20 | new Object(...) |
-| Expressions.java:14:9:14:28 | clone(...) |
+ERROR: no viable parse for input 'a', expecting one of : 'boolean', 'class', 'newtype', 'date', 'float', 'from', 'import', 'int', 'module', 'predicate', 'select', 'string', 'where', Lowerid, Upperid, Atlowerid (Expressions.ql:6,1-2)
```
It looks like the underlying issue might be caused by the CodeQL CLI which reports this as `diff`. It appears this is currently 'misused' to perform tests on CodeQL itself, see https://github.com/github/vscode-codeql/issues/608#issuecomment-702933731.
Regardless of whether that is intended or not, the extension could inspect the `failureStage` of the response, only if it is `"failureStage" : "RESULT"` the diff should be shown.
**To Reproduce**
1. Create an `.expected` file
2. Create a query test file containing a compilation error
3. Run `codeql test run --format=json`
**Expected behavior**
Only the compilation error is reported, the diff is not shown.
| non_priority | test compilation errors appear in output diff version codeql cli extension version describe the bug when compilation of a test fails the extension currently creates an output diff between the expected output and the compilation error which is pretty pointless and confusing e g expected actual expressions java tostring expressions java expressions java expressions java new object expressions java clone error no viable parse for input a expecting one of boolean class newtype date float from import int module predicate select string where lowerid upperid atlowerid expressions ql it looks like the underlying issue might be caused by the codeql cli which reports this as diff it appears this is currently misused to perform tests on codeql itself see regardless of whether that is intended or not the extension could inspect the failurestage of the response only if it is failurestage result the diff should be shown to reproduce create an expected file create a query test file containing a compilation error run codeql test run format json expected behavior only the compilation error is reported the diff is not shown | 0 |
181,428 | 14,019,908,629 | IssuesEvent | 2020-10-29 18:52:05 | galactic-forensics/iniabu | https://api.github.com/repos/galactic-forensics/iniabu | closed | Improve test suite | tests | Test suite is currently poorly written. Needs to be extended with parameterized testing, use hypothesis to randomly pick stuff.
Also include specific `pytest` markers to check for database integrity. Integrity doesn't need to be tested every time when developing, so that would help to make it faster (in the long run)... This has lower priority...
| 1.0 | Improve test suite - Test suite is currently poorly written. Needs to be extended with parameterized testing, use hypothesis to randomly pick stuff.
Also include specific `pytest` markers to check for database integrity. Integrity doesn't need to be tested every time when developing, so that would help to make it faster (in the long run)... This has lower priority...
| non_priority | improve test suite test suite is currently poorly written needs to be extended with parameterized testing use hypothesis to randomly pick stuff also include specific pytest markers to check for database integrity integrity doesn t need to be tested every time when developing so that would help to make it faster in the long run this has lower priority | 0 |
862 | 2,644,413,337 | IssuesEvent | 2015-03-12 16:48:33 | OSTraining/Simplerenew | https://api.github.com/repos/OSTraining/Simplerenew | opened | Improve messaging when credit card validation fails | Enhancement Usability | Whenever validation of a credit card fails, the unhelpful message from Recurly *There was an error validating your request.* is passed through to the user. There is a better way to manage this. | True | Improve messaging when credit card validation fails - Whenever validation of a credit card fails, the unhelpful message from Recurly *There was an error validating your request.* is passed through to the user. There is a better way to manage this. | non_priority | improve messaging when credit card validation fails whenever validation of a credit card fails the unhelpful message from recurly there was an error validating your request is passed through to the user there is a better way to manage this | 0 |
228,675 | 7,565,506,928 | IssuesEvent | 2018-04-21 10:45:12 | tavorperry/Kafe_Emon | https://api.github.com/repos/tavorperry/Kafe_Emon | closed | Change PHP.ini file in the server | High Priority | in php.ini file on the server, Please change the following:
1. upload_max_filesize=
2. post_max_size=15M
3. file_uploads=On | 1.0 | Change PHP.ini file in the server - in php.ini file on the server, Please change the following:
1. upload_max_filesize=
2. post_max_size=15M
3. file_uploads=On | priority | change php ini file in the server in php ini file on the server please change the following upload max filesize post max size file uploads on | 1 |
91,659 | 15,856,552,816 | IssuesEvent | 2021-04-08 02:37:02 | L-47/unstoppable-chat | https://api.github.com/repos/L-47/unstoppable-chat | opened | CVE-2020-7751 (High) detected in pathval-1.1.0.tgz | security vulnerability | ## CVE-2020-7751 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pathval-1.1.0.tgz</b></p></summary>
<p>Object value retrieval given a string path</p>
<p>Library home page: <a href="https://registry.npmjs.org/pathval/-/pathval-1.1.0.tgz">https://registry.npmjs.org/pathval/-/pathval-1.1.0.tgz</a></p>
<p>Path to dependency file: unstoppable-chat/package.json</p>
<p>Path to vulnerable library: unstoppable-chat/node_modules/pathval/package.json</p>
<p>
Dependency Hierarchy:
- chai-4.2.0.tgz (Root Library)
- :x: **pathval-1.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package pathval.
<p>Publish Date: 2020-10-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7751>CVE-2020-7751</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7751 (High) detected in pathval-1.1.0.tgz - ## CVE-2020-7751 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pathval-1.1.0.tgz</b></p></summary>
<p>Object value retrieval given a string path</p>
<p>Library home page: <a href="https://registry.npmjs.org/pathval/-/pathval-1.1.0.tgz">https://registry.npmjs.org/pathval/-/pathval-1.1.0.tgz</a></p>
<p>Path to dependency file: unstoppable-chat/package.json</p>
<p>Path to vulnerable library: unstoppable-chat/node_modules/pathval/package.json</p>
<p>
Dependency Hierarchy:
- chai-4.2.0.tgz (Root Library)
- :x: **pathval-1.1.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects all versions of package pathval.
<p>Publish Date: 2020-10-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7751>CVE-2020-7751</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in pathval tgz cve high severity vulnerability vulnerable library pathval tgz object value retrieval given a string path library home page a href path to dependency file unstoppable chat package json path to vulnerable library unstoppable chat node modules pathval package json dependency hierarchy chai tgz root library x pathval tgz vulnerable library found in base branch master vulnerability details this affects all versions of package pathval publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
26,447 | 2,684,496,626 | IssuesEvent | 2015-03-29 01:41:46 | gtcasl/gpuocelot | https://api.github.com/repos/gtcasl/gpuocelot | opened | Compilation from source fails | bug imported Priority-Low | _From [topgun...@gmail.com](https://code.google.com/u/105691057539035044123/) on August 13, 2011 19:02:16_
Compilation from the source fails.
$ocelot> ./build.py
g++ -o .release_build/ocelot/ir/implementation/Kernel.os -c -O2 -Wall -Werror -std=c++0x -I/home/users/pushkar/local/include -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -fPIC -I.release_build -I. -I/usr/include -I/home/users/pushkar/local/include ocelot/ir/implementation/Kernel.cpp
ocelot/ir/implementation/Kernel.cpp:93:82: error: no 'void ir::Kernel::insertParameter(const ir::Parameter&, bool)' member function declared in class 'ir::Kernel'
scons: *** [.release_build/ocelot/ir/implementation/Kernel.os] Error 1
scons: building terminated because of errors.
-Pushkar
_Original issue: http://code.google.com/p/gpuocelot/issues/detail?id=56_ | 1.0 | Compilation from source fails - _From [topgun...@gmail.com](https://code.google.com/u/105691057539035044123/) on August 13, 2011 19:02:16_
Compilation from the source fails.
$ocelot> ./build.py
g++ -o .release_build/ocelot/ir/implementation/Kernel.os -c -O2 -Wall -Werror -std=c++0x -I/home/users/pushkar/local/include -D_DEBUG -D_GNU_SOURCE -D__STDC_LIMIT_MACROS -D__STDC_CONSTANT_MACROS -fPIC -I.release_build -I. -I/usr/include -I/home/users/pushkar/local/include ocelot/ir/implementation/Kernel.cpp
ocelot/ir/implementation/Kernel.cpp:93:82: error: no 'void ir::Kernel::insertParameter(const ir::Parameter&, bool)' member function declared in class 'ir::Kernel'
scons: *** [.release_build/ocelot/ir/implementation/Kernel.os] Error 1
scons: building terminated because of errors.
-Pushkar
_Original issue: http://code.google.com/p/gpuocelot/issues/detail?id=56_ | priority | compilation from source fails from on august compilation from the source fails ocelot build py g o release build ocelot ir implementation kernel os c wall werror std c i home users pushkar local include d debug d gnu source d stdc limit macros d stdc constant macros fpic i release build i i usr include i home users pushkar local include ocelot ir implementation kernel cpp ocelot ir implementation kernel cpp error no void ir kernel insertparameter const ir parameter bool member function declared in class ir kernel scons error scons building terminated because of errors pushkar original issue | 1 |
106,304 | 9,126,119,255 | IssuesEvent | 2019-02-24 19:07:37 | svigerske/Bt | https://api.github.com/repos/svigerske/Bt | closed | User-given FFLAGS ignored for dlamch compilation | bug configuration tests major | Issue created by migration from Trac.
Original creator: andreasw
Original creation time: 2008-09-08 13:08:23
Assignee: andreasw
Version: 0.5
Hi Stefan :-)
The compilation of dlamch still has some problems. For example, if someone sets FFLAGS, this is completely ignored when compiling it. (In an IBM context, someone had -fPIC included in FFLAGS and uses --disable-shared, so this doesn't work when you later try to make a shared library)
Instead of setting it to the coin_warn_flags etc, would it make more sense to scan the content of FFLAGS and remove anything of the form '-O*' and replace it by '-O0'? Not sure if that works for all compilers though (Windoofs?). At the very least, we should make DLAMCH_FFLAGS an argument that one can set.
What do you think?
Andreas | 1.0 | User-given FFLAGS ignored for dlamch compilation - Issue created by migration from Trac.
Original creator: andreasw
Original creation time: 2008-09-08 13:08:23
Assignee: andreasw
Version: 0.5
Hi Stefan :-)
The compilation of dlamch still has some problems. For example, if someone sets FFLAGS, this is completely ignored when compiling it. (In an IBM context, someone had -fPIC included in FFLAGS and uses --disable-shared, so this doesn't work when you later try to make a shared library)
Instead of setting it to the coin_warn_flags etc, would it make more sense to scan the content of FFLAGS and remove anything of the form '-O*' and replace it by '-O0'? Not sure if that works for all compilers though (Windoofs?). At the very least, we should make DLAMCH_FFLAGS an argument that one can set.
What do you think?
Andreas | non_priority | user given fflags ignored for dlamch compilation issue created by migration from trac original creator andreasw original creation time assignee andreasw version hi stefan the compilation of dlamch still has some problems for example if someone sets fflags this is completely ignored when compiling it in an ibm context someone had fpic included in fflags and uses disable shared so this doesn t work when you later try to make a shared library instead of setting it to the coin warn flags etc would it make more sense to scan the content of fflags and remove anything of the form o and replace it by not sure if that works for all compilers though windoofs at the very least we should make dlamch fflags an argument that one can set what do you think andreas | 0 |