Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 4 112 | repo_url stringlengths 33 141 | action stringclasses 3 values | title stringlengths 1 1.02k | labels stringlengths 4 1.54k | body stringlengths 1 262k | index stringclasses 17 values | text_combine stringlengths 95 262k | label stringclasses 2 values | text stringlengths 96 252k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6,650 | 2,855,530,019 | IssuesEvent | 2015-06-02 10:01:51 | medic/medic-webapp | https://api.github.com/repos/medic/medic-webapp | closed | Outgoing sample messages aren't resolved to contact names | 3 - Acceptance testing Bug | When the sample messages load, the incoming messages are resolved to contact names but the outgoing messages aren't. This results in two message threads instead of one (one for incoming, one for outgoing). This doesn't seem to affect message threads for contacts that I added on my own, only sample data. The reason why this is an issue is because when you start up the app and run the Messages tour, it doesn't run all the way through unless there is a message thread with both sent and received messages. | 1.0 | Outgoing sample messages aren't resolved to contact names - When the sample messages load, the incoming messages are resolved to contact names but the outgoing messages aren't. This results in two message threads instead of one (one for incoming, one for outgoing). This doesn't seem to affect message threads for contacts that I added on my own, only sample data. The reason why this is an issue is because when you start up the app and run the Messages tour, it doesn't run all the way through unless there is a message thread with both sent and received messages. | test | outgoing sample messages aren t resolved to contact names when the sample messages load the incoming messages are resolved to contact names but the outgoing messages aren t this results in two message threads instead of one one for incoming one for outgoing this doesn t seem to affect message threads for contacts that i added on my own only sample data the reason why this is an issue is because when you start up the app and run the messages tour it doesn t run all the way through unless there is a message thread with both sent and received messages | 1 |
227,188 | 18,053,441,245 | IssuesEvent | 2021-09-20 03:13:28 | logicmoo/logicmoo_workspace | https://api.github.com/repos/logicmoo/logicmoo_workspace | opened | logicmoo.pfc.test.sanity_base.SV_FWD_01A_B JUnit | Test_9999 logicmoo.pfc.test.sanity_base unit_test SV_FWD_01A_B Failing | (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif sv_fwd_01a_b.pfc)
ISSUE: https://github.com/logicmoo/logicmoo_workspace/issues/
EDIT: https://github.com/logicmoo/logicmoo_workspace/edit/master/packs_sys/pfc/t/sanity_base/sv_fwd_01a_b.pfc
JENKINS: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/SV_FWD_01A_B/logicmoo_pfc_test_sanity_base_SV_FWD_01A_B_JUnit/
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ASV_FWD_01A_B
```
%~ init_phase(after_load)
%~ init_phase(restore_state)
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/sv_fwd_01a_b.pfc'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- set_fileAssertMt(header_sane).
%~ set_fileAssertMt(header_sane)
/*~
%~ set_fileAssertMt(header_sane)
~*/
:- expects_dialect(pfc).
arity(inChairZ,1).
prologSingleValued(inChairZ).
prologSingleValuedInArg(inChairZ,1).
singleValuedInArgAX(inChairZ, 1, 1).
:- ain( inChairZ(aZa)).
:- (ain( inChairZ(bYb))).
:- listing(inChairZ/1).
%~ skipped( listing( inChairZ/1))
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/sv_fwd_01a_b.pfc:50
%~ unused(no_junit_results)
%~ test_completed_exit(0)
```
totalTime=1.000
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k sv_fwd_01a_b.pfc (returned 0) Add_LABELS='' Rem_LABELS='Skipped,Skipped,Errors,Warnings,Overtime,Skipped,Skipped'
| 3.0 | logicmoo.pfc.test.sanity_base.SV_FWD_01A_B JUnit - (cd /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base ; swipl -x /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-clif sv_fwd_01a_b.pfc)
ISSUE: https://github.com/logicmoo/logicmoo_workspace/issues/
EDIT: https://github.com/logicmoo/logicmoo_workspace/edit/master/packs_sys/pfc/t/sanity_base/sv_fwd_01a_b.pfc
JENKINS: https://jenkins.logicmoo.org/job/logicmoo_workspace/lastBuild/testReport/logicmoo.pfc.test.sanity_base/SV_FWD_01A_B/logicmoo_pfc_test_sanity_base_SV_FWD_01A_B_JUnit/
ISSUE_SEARCH: https://github.com/logicmoo/logicmoo_workspace/issues?q=is%3Aissue+label%3ASV_FWD_01A_B
```
%~ init_phase(after_load)
%~ init_phase(restore_state)
%
running('/var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/sv_fwd_01a_b.pfc'),
%~ this_test_might_need( :-( use_module( library(logicmoo_plarkc))))
:- set_fileAssertMt(header_sane).
%~ set_fileAssertMt(header_sane)
/*~
%~ set_fileAssertMt(header_sane)
~*/
:- expects_dialect(pfc).
arity(inChairZ,1).
prologSingleValued(inChairZ).
prologSingleValuedInArg(inChairZ,1).
singleValuedInArgAX(inChairZ, 1, 1).
:- ain( inChairZ(aZa)).
:- (ain( inChairZ(bYb))).
:- listing(inChairZ/1).
%~ skipped( listing( inChairZ/1))
%~ /var/lib/jenkins/workspace/logicmoo_workspace/packs_sys/pfc/t/sanity_base/sv_fwd_01a_b.pfc:50
%~ unused(no_junit_results)
%~ test_completed_exit(0)
```
totalTime=1.000
FAILED: /var/lib/jenkins/workspace/logicmoo_workspace/bin/lmoo-junit-minor -k sv_fwd_01a_b.pfc (returned 0) Add_LABELS='' Rem_LABELS='Skipped,Skipped,Errors,Warnings,Overtime,Skipped,Skipped'
| test | logicmoo pfc test sanity base sv fwd b junit cd var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base swipl x var lib jenkins workspace logicmoo workspace bin lmoo clif sv fwd b pfc issue edit jenkins issue search init phase after load init phase restore state running var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base sv fwd b pfc this test might need use module library logicmoo plarkc set fileassertmt header sane set fileassertmt header sane set fileassertmt header sane expects dialect pfc arity inchairz prologsinglevalued inchairz prologsinglevaluedinarg inchairz singlevaluedinargax inchairz ain inchairz aza ain inchairz byb listing inchairz skipped listing inchairz var lib jenkins workspace logicmoo workspace packs sys pfc t sanity base sv fwd b pfc unused no junit results test completed exit totaltime failed var lib jenkins workspace logicmoo workspace bin lmoo junit minor k sv fwd b pfc returned add labels rem labels skipped skipped errors warnings overtime skipped skipped | 1 |
304,473 | 26,279,562,338 | IssuesEvent | 2023-01-07 06:04:36 | serai-dex/serai | https://api.github.com/repos/serai-dex/serai | opened | Fingerprinting of monero-serai by receiving funds | untested monero | Will monero-serai scan (or refuse to scan) TXs wallet2 won't? We need to review torsion handling, TX extra encoding support and more.
I can immediately cite monero-serai ignores secondary outputs, for a larger amount, when the burning bug is exploited. wallet2 will credit the difference, if the original output hasn't already been spent. This is a safety issue I won't budge on. | 1.0 | Fingerprinting of monero-serai by receiving funds - Will monero-serai scan (or refuse to scan) TXs wallet2 won't? We need to review torsion handling, TX extra encoding support and more.
I can immediately cite monero-serai ignores secondary outputs, for a larger amount, when the burning bug is exploited. wallet2 will credit the difference, if the original output hasn't already been spent. This is a safety issue I won't budge on. | test | fingerprinting of monero serai by receiving funds will monero serai scan or refuse to scan txs won t we need to review torsion handling tx extra encoding support and more i can immediately cite monero serai ignores secondary outputs for a larger amount when the burning bug is exploited will credit the difference if the original output hasn t already been spent this is a safety issue i won t budge on | 1 |
130,988 | 27,804,493,105 | IssuesEvent | 2023-03-17 18:32:39 | apache/camel-karavan | https://api.github.com/repos/apache/camel-karavan | closed | [vs-code] Integrate with VS Code AtlasMap | enhancement wontfix usability designer vs-code | several scenarii in mind:
- when a data transformation exists, clicking on it is opening the Atlasmap editor and allows editing it
- when adding a data transformation, ask for the name/path(?) then open the AtlasMap editor
using VS Code AtlasMap commands to open files named .adm will work (done by VS Code language Support and VS Code Designer for Camel)
Advantages: can leverage existing VS Code AtlasMap and should simplify integration compared to have Karavan embedding AtlasMap
Drawback: this will be specific to VS Code | 1.0 | [vs-code] Integrate with VS Code AtlasMap - several scenarii in mind:
- when a data transformation exists, clicking on it is opening the Atlasmap editor and allows editing it
- when adding a data transformation, ask for the name/path(?) then open the AtlasMap editor
using VS Code AtlasMap commands to open files named .adm will work (done by VS Code language Support and VS Code Designer for Camel)
Advantages: can leverage existing VS Code AtlasMap and should simplify integration compared to have Karavan embedding AtlasMap
Drawback: this will be specific to VS Code | non_test | integrate with vs code atlasmap several scenarii in mind when a data transformation exists clicking on it is opening the atlasmap editor and allows editing it when adding a data transformation ask for the name path then open the atlasmap editor using vs code atlasmap commands to open files named adm will work done by vs code language support and vs code designer for camel advantages can leverage existing vs code atlasmap and should simplify integration compared to have karavan embedding atlasmap drawback this will be specific to vs code | 0 |
9,569 | 4,546,835,156 | IssuesEvent | 2016-09-12 00:47:26 | VOREStation/VOREStation | https://api.github.com/repos/VOREStation/VOREStation | closed | Saddlebags broken | Pri: 1-Minor Type: Bug Type: Sprite Works in latest build | #### Brief description of the issue
Saddlebags appear to be broken on horse tails.
#### What you expected to happen
The saddlebags are supposed to work on horse-taur/taur type characters.
#### What actually happened
It sticks out ahead of the character as though it were a backpack on a normal player.
#### Steps to reproduce
Strap a saddlepack onto a character with the horse tail.
#### Additional info:
- **Server Revision**: Found using the "Show Server Revision" verb under the OOC tab.
- **Anything else you may wish to add** (Location if it's a mapping issue, etc)

| 1.0 | Saddlebags broken - #### Brief description of the issue
Saddlebags appear to be broken on horse tails.
#### What you expected to happen
The saddlebags are supposed to work on horse-taur/taur type characters.
#### What actually happened
It sticks out ahead of the character as though it were a backpack on a normal player.
#### Steps to reproduce
Strap a saddlepack onto a character with the horse tail.
#### Additional info:
- **Server Revision**: Found using the "Show Server Revision" verb under the OOC tab.
- **Anything else you may wish to add** (Location if it's a mapping issue, etc)

| non_test | saddlebags broken brief description of the issue saddlebags appear to be broken on horse tails what you expected to happen the saddlebags are supposed to work on horse taur taur type characters what actually happened it sticks out ahead of the character as though it were a backpack on a normal player steps to reproduce strap a saddlepack onto a character with the horse tail additional info server revision found using the show server revision verb under the ooc tab anything else you may wish to add location if it s a mapping issue etc | 0 |
317,082 | 27,210,989,553 | IssuesEvent | 2023-02-20 16:30:54 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | closed | Reviewdog tests seem broken. | Automated testing CMS Team Quality Assurance | ## Description
```
Run reviewdog/action-eslint@d3395027ea2cfc5cf8f460b1ea939b6c86fea656
Run $GITHUB_ACTION_PATH/script.sh
🐶 Installing reviewdog ... https://github.com/reviewdog/reviewdog
Running `npm install` to install eslint ...
npm WARN deprecated querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
npm WARN deprecated @stylelint/postcss-markdown@0.36.2: Use the original unforked package instead: postcss-markdown
npm WARN deprecated gherkin@5.1.0: This package is now published under @cucumber/gherkin
npm WARN deprecated cucumber-expressions@6.6.2: This package is now published under @cucumber/cucumber-expressions
npm WARN deprecated cucumber-expressions@5.0.18: This package is now published under @cucumber/cucumber-expressions
npm WARN deprecated cucumber@4.2.1: Cucumber is publishing new releases under @cucumber/cucumber
npm WARN deprecated core-js@2.6.9: core-js@<3.23.3 is no longer maintained and not recommended for usage due to the number of issues. Because of the V8 engine whims, feature detection in old core-js versions could cause a slowdown up to 100x even if nothing is polyfilled. Some versions have web compatibility issues. Please, upgrade your dependencies to the actual version of core-js.
npm WARN deprecated core-js-pure@3.8.1: core-js-pure@<3.23.3 is no longer maintained and not recommended for usage due to the number of issues. Because of the V8 engine whims, feature detection in old core-js versions could cause a slowdown up to 100x even if nothing is polyfilled. Some versions have web compatibility issues. Please, upgrade your dependencies to the actual version of core-js-pure.
added 1054 packages, and audited 1055 packages in 26s
132 packages are looking for funding
run `npm fund` for details
4 high severity vulnerabilities
To address issues that do not require attention, run:
npm audit fix
To address all issues (including breaking changes), run:
npm audit fix --force
Run `npm audit` for details.
/home/runner/work/_actions/reviewdog/action-eslint/d3395027ea2cfc5cf8f460b1ea939b6c86fea656/script.sh: 20: Unknown: not found
eslint version:
Running eslint with reviewdog 🐶 ...
reviewdog: This GitHub token doesn't have write permission of Review API [1],
so reviewdog will report results via logging command [2] and create annotations similar to
github-pr-check reporter as a fallback.
[1]: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request_target,
[2]: https://help.github.com/en/actions/automating-your-workflow-with-github-actions/development-tools-for-github-actions#logging-commands
/home/runner/work/_actions/reviewdog/action-eslint/d3395027ea2cfc5cf8f460b1ea939b6c86fea656/script.sh: 23: Unknown: not found
reviewdog: parse error: failed to unmarshal rdjson (DiagnosticResult): proto: syntax error (line 1:1): unexpected token
Error: Process completed with exit code 1.
```
## Acceptance Criteria
- [ ] Testable_Outcome_X
- [ ] Testable_Outcome_Y
- [ ] Testable_Outcome_Z
- [ ] Requires design review
### Team
Please check the team(s) that will do this work.
- [ ] `CMS Team`
- [ ] `Public Websites`
- [ ] `Facilities`
- [ ] `User support`
| 1.0 | Reviewdog tests seem broken. - ## Description
```
Run reviewdog/action-eslint@d3395027ea2cfc5cf8f460b1ea939b6c86fea656
Run $GITHUB_ACTION_PATH/script.sh
🐶 Installing reviewdog ... https://github.com/reviewdog/reviewdog
Running `npm install` to install eslint ...
npm WARN deprecated querystring@0.2.0: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
npm WARN deprecated @stylelint/postcss-markdown@0.36.2: Use the original unforked package instead: postcss-markdown
npm WARN deprecated gherkin@5.1.0: This package is now published under @cucumber/gherkin
npm WARN deprecated cucumber-expressions@6.6.2: This package is now published under @cucumber/cucumber-expressions
npm WARN deprecated cucumber-expressions@5.0.18: This package is now published under @cucumber/cucumber-expressions
npm WARN deprecated cucumber@4.2.1: Cucumber is publishing new releases under @cucumber/cucumber
npm WARN deprecated core-js@2.6.9: core-js@<3.23.3 is no longer maintained and not recommended for usage due to the number of issues. Because of the V8 engine whims, feature detection in old core-js versions could cause a slowdown up to 100x even if nothing is polyfilled. Some versions have web compatibility issues. Please, upgrade your dependencies to the actual version of core-js.
npm WARN deprecated core-js-pure@3.8.1: core-js-pure@<3.23.3 is no longer maintained and not recommended for usage due to the number of issues. Because of the V8 engine whims, feature detection in old core-js versions could cause a slowdown up to 100x even if nothing is polyfilled. Some versions have web compatibility issues. Please, upgrade your dependencies to the actual version of core-js-pure.
added 1054 packages, and audited 1055 packages in 26s
132 packages are looking for funding
run `npm fund` for details
4 high severity vulnerabilities
To address issues that do not require attention, run:
npm audit fix
To address all issues (including breaking changes), run:
npm audit fix --force
Run `npm audit` for details.
/home/runner/work/_actions/reviewdog/action-eslint/d3395027ea2cfc5cf8f460b1ea939b6c86fea656/script.sh: 20: Unknown: not found
eslint version:
Running eslint with reviewdog 🐶 ...
reviewdog: This GitHub token doesn't have write permission of Review API [1],
so reviewdog will report results via logging command [2] and create annotations similar to
github-pr-check reporter as a fallback.
[1]: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request_target,
[2]: https://help.github.com/en/actions/automating-your-workflow-with-github-actions/development-tools-for-github-actions#logging-commands
/home/runner/work/_actions/reviewdog/action-eslint/d3395027ea2cfc5cf8f460b1ea939b6c86fea656/script.sh: 23: Unknown: not found
reviewdog: parse error: failed to unmarshal rdjson (DiagnosticResult): proto: syntax error (line 1:1): unexpected token
Error: Process completed with exit code 1.
```
## Acceptance Criteria
- [ ] Testable_Outcome_X
- [ ] Testable_Outcome_Y
- [ ] Testable_Outcome_Z
- [ ] Requires design review
### Team
Please check the team(s) that will do this work.
- [ ] `CMS Team`
- [ ] `Public Websites`
- [ ] `Facilities`
- [ ] `User support`
test | reviewdog tests seem broken description run reviewdog action eslint run github action path script sh 🐶 installing reviewdog running npm install to install eslint npm warn deprecated querystring the querystring api is considered legacy new code should use the urlsearchparams api instead npm warn deprecated stylelint postcss markdown use the original unforked package instead postcss markdown npm warn deprecated gherkin this package is now published under cucumber gherkin npm warn deprecated cucumber expressions this package is now published under cucumber cucumber expressions npm warn deprecated cucumber expressions this package is now published under cucumber cucumber expressions npm warn deprecated cucumber cucumber is publishing new releases under cucumber cucumber npm warn deprecated core js core js is no longer maintained and not recommended for usage due to the number of issues because of the engine whims feature detection in old core js versions could cause a slowdown up to even if nothing is polyfilled some versions have web compatibility issues please upgrade your dependencies to the actual version of core js npm warn deprecated core js pure core js pure is no longer maintained and not recommended for usage due to the number of issues because of the engine whims feature detection in old core js versions could cause a slowdown up to even if nothing is polyfilled some versions have web compatibility issues please upgrade your dependencies to the actual version of core js pure added packages and audited packages in packages are looking for funding run npm fund for details high severity vulnerabilities to address issues that do not require attention run npm audit fix to address all issues including breaking changes run npm audit fix force run npm audit for details home runner work actions reviewdog action eslint unknown not found eslint version running eslint with reviewdog 🐶 reviewdog this github token doesn t have write permission of review api so reviewdog will report results via logging command and create annotations similar to github pr check reporter as a fallback home runner work actions reviewdog action eslint unknown not found reviewdog parse error failed to unmarshal rdjson diagnosticresult proto syntax error line unexpected token error process completed with exit code acceptance criteria testable outcome x testable outcome y testable outcome z requires design review team please check the team s that will do this work cms team public websites facilities user support | 1 |
65,096 | 16,104,179,348 | IssuesEvent | 2021-04-27 13:09:06 | xamarin/xamarin-android | https://api.github.com/repos/xamarin/xamarin-android | closed | [NET6] libxamarin-debug-app-helper.so should not be packaged for Release builds | Area: App+Library Build | `libxamarin-debug-app-helper.so` is a helper DSO used by Debug builds only, however `dotnet build -c Release` for NET6 places it in the apk:
```shell
net6-samples/HelloMaui/bin/Release/net6.0-android $ zipinfo com.microsoft.hellomaui-Signed.apk | grep helper
-rwxr-xr-x 6.3 unx 37104 b- defX 21-Apr-20 16:45 lib/arm64-v8a/libxamarin-debug-app-helper.so
-rwxr-xr-x 6.3 unx 31796 b- defX 21-Apr-20 16:45 lib/x86/libxamarin-debug-app-helper.so
```
The DSO comes from the `microsoft.android.runtime.android-{ARCH}` nuget packages which contains all the native libraries from Xamarin.Android, for both Release and Debug builds. | 1.0 | [NET6] libxamarin-debug-app-helper.so should not be packaged for Release builds - `libxamarin-debug-app-helper.so` is a helper DSO used by Debug builds only, however `dotnet build -c Release` for NET6 places it in the apk:
```shell
net6-samples/HelloMaui/bin/Release/net6.0-android $ zipinfo com.microsoft.hellomaui-Signed.apk | grep helper
-rwxr-xr-x 6.3 unx 37104 b- defX 21-Apr-20 16:45 lib/arm64-v8a/libxamarin-debug-app-helper.so
-rwxr-xr-x 6.3 unx 31796 b- defX 21-Apr-20 16:45 lib/x86/libxamarin-debug-app-helper.so
```
The DSO comes from the `microsoft.android.runtime.android-{ARCH}` nuget packages which contains all the native libraries from Xamarin.Android, for both Release and Debug builds. | non_test | libxamarin debug app helper so should not be packaged for release builds libxamarin debug app helper so is a helper dso used by debug builds only however dotnet build c release for places it in the apk shell samples hellomaui bin release android zipinfo com microsoft hellomaui signed apk grep helper rwxr xr x unx b defx apr lib libxamarin debug app helper so rwxr xr x unx b defx apr lib libxamarin debug app helper so the dso comes from the microsoft android runtime android arch nuget packages which contains all the native libraries from xamarin android for both release and debug builds | 0 |
55,294 | 6,469,084,952 | IssuesEvent | 2017-08-17 04:05:04 | fossasia/phimpme-android | https://api.github.com/repos/fossasia/phimpme-android | closed | Espresso Test for Single Media Activity | Testing | **Actual Behaviour**
There is no espresso test for Single Media Activity.
**Expected Behaviour**
Add espresso test for Single Media Activity
**Would you like to work on the issue?**
Yes.
| 1.0 | Espresso Test for Single Media Activity - **Actual Behaviour**
There is no espresso test for Single Media Activity.
**Expected Behaviour**
Add espresso test for Single Media Activity
**Would you like to work on the issue?**
Yes.
| test | espresso test for single media activity actual behaviour there is no espresso test for single media activity expected behaviour add espresso test for single media activity would you like to work on the issue yes | 1 |
144,550 | 5,542,422,263 | IssuesEvent | 2017-03-22 14:59:40 | lucy-marko/centrepoint | https://api.github.com/repos/lucy-marko/centrepoint | closed | Enhanced implementation of request status | enhancement epic in-progress priority-2 | - [x] Show open / closed / in-progress status
- [x] Show other visual cues (e.g. bold text for open request)
- [x] Ability to assign admins (including themselves) to specific request
- [x] Ability to close a request (with 'are you sure' message) | 1.0 | Enhanced implementation of request status - - [x] Show open / closed / in-progress status
- [x] Show other visual cues (e.g. bold text for open request)
- [x] Ability to assign admins (including themselves) to specific request
- [x] Ability to close a request (with 'are you sure' message) | non_test | enhanced implementation of request status show open closed in progress status show other visual cues e g bold text for open request ability to assign admins including themselves to specific request ability to close a request with are you sure message | 0 |
353,296 | 25,111,019,888 | IssuesEvent | 2022-11-08 20:32:18 | aws/aws-sdk-js-v3 | https://api.github.com/repos/aws/aws-sdk-js-v3 | opened | AWS sdk is not following semantic versioning and not documented as such | documentation needs-triage | ### Describe the issue
There has been a lot of churn with the aws sdk in our application. We've had to perform emergency fixes several times over the last 2 weeks. I come to find out through [a comment in an issue](https://github.com/aws/aws-sdk-js-v3/issues/4122#issuecomment-1306559498) that this SDK does not follow semantic versioning. The Node.js ecosystem relies on adhering to [semantic versioning to build trust](https://docs.npmjs.com/about-semantic-versioning). It's implicitly understood that 3rd party packages follow semantic versioning. It would be wonderful if there was an obvious warning in the README that this library, which is used by over [56.7k applications](https://github.com/aws/aws-sdk-js-v3/network/dependents), does not adhere to semantic versioning so users can properly mitigate this risk.
The project I work on [New Relic Node.js agent](https://github.com/newrelic/node-newrelic) is in a difficult position. We are an agent which means we build instrumentation for the AWS sdk. It's harder for us to lock down versions because it affects our customers when they upgrade past the version we support. It's untenable to ship a new version of our [aws-sdk instrumentation](https://github.com/newrelic/node-newrelic-aws-sdk) on every release as the AWS SDK seems to release daily.
### Links
https://github.com/aws/aws-sdk-js-v3/blob/main/README.md | 1.0 | AWS sdk is not following semantic versioning and not documented as such - ### Describe the issue
There has been a lot of churn with the aws sdk in our application. We've had to perform emergency fixes several times over the last 2 weeks. I come to find out through [a comment in an issue](https://github.com/aws/aws-sdk-js-v3/issues/4122#issuecomment-1306559498) that this SDK does not follow semantic versioning. The Node.js ecosystem relies on adhering to [semantic versioning to build trust](https://docs.npmjs.com/about-semantic-versioning). It's implicitly understood that 3rd party packages follow semantic versioning. It would be wonderful if there was an obvious warning in the README that this library, which is used by over [56.7k applications](https://github.com/aws/aws-sdk-js-v3/network/dependents), does not adhere to semantic versioning so users can properly mitigate this risk.
The project I work on [New Relic Node.js agent](https://github.com/newrelic/node-newrelic) is in a difficult position. We are an agent which means we build instrumentation for the AWS sdk. It's harder for us to lock down versions because it affects our customers when they upgrade past the version we support. It's untenable to ship a new version of our [aws-sdk instrumentation](https://github.com/newrelic/node-newrelic-aws-sdk) on every release as the AWS SDK seems to release daily.
### Links
https://github.com/aws/aws-sdk-js-v3/blob/main/README.md | non_test | aws sdk is not following semantic versioning and not documented as such describe the issue there has been a lot of churn with the aws sdk in our application we ve had to perform emergency fixes several times over the last weeks i come to find out through a comment in an issue that this sdk does not follow semantic versioning the node js ecosystem relies on adhering to it s implicitly understood that party packages follow semantic versioning it would be wonderful if there was an obvious warning in the readme that this library which is used by over does not adhere to semantic versioning so users can properly mitigate this risk the project i work on is in a difficult position we are an agent which means we build instrumentation for the aws sdk it s harder for us to lock down versions because it affects our customers when they upgrade past the version we support it s untenable to ship a new version of our on every release as the aws sdk seems to release daily links | 0 |
197,656 | 14,937,950,643 | IssuesEvent | 2021-01-25 15:13:21 | golang/go | https://api.github.com/repos/golang/go | closed | os: spurious TestDirFS failures due to directory mtime skew on Windows | NeedsInvestigation OS-Windows Testing release-blocker | https://storage.googleapis.com/go-build-log/f31194a9/windows-386-2008_061172ea.log:
```
--- FAIL: TestDirFS (0.35s)
os_test.go:2690: TestFS found errors:
testdata/simple: mismatch:
entry.Info() = simple IsDir=true Mode=drwxrwxrwx Size=0 ModTime=2020-11-16 02:32:02.0111083 +0000 GMT
file.Stat() = simple IsDir=true Mode=drwxrwxrwx Size=0 ModTime=2020-11-16 02:32:02.012085 +0000 GMT
FAIL
FAIL os 2.635s
```
Marking as release-blocker for Go 1.16 because `DirFS` is a new API.
CC @rsc @robpike @alexbrainman @networkimprov @zx2c4 | 1.0 | os: spurious TestDirFS failures due to directory mtime skew on Windows - https://storage.googleapis.com/go-build-log/f31194a9/windows-386-2008_061172ea.log:
```
--- FAIL: TestDirFS (0.35s)
os_test.go:2690: TestFS found errors:
testdata/simple: mismatch:
entry.Info() = simple IsDir=true Mode=drwxrwxrwx Size=0 ModTime=2020-11-16 02:32:02.0111083 +0000 GMT
file.Stat() = simple IsDir=true Mode=drwxrwxrwx Size=0 ModTime=2020-11-16 02:32:02.012085 +0000 GMT
FAIL
FAIL os 2.635s
```
Marking as release-blocker for Go 1.16 because `DirFS` is a new API.
CC @rsc @robpike @alexbrainman @networkimprov @zx2c4 | test | os spurious testdirfs failures due to directory mtime skew on windows fail testdirfs os test go testfs found errors testdata simple mismatch entry info simple isdir true mode drwxrwxrwx size modtime gmt file stat simple isdir true mode drwxrwxrwx size modtime gmt fail fail os marking as release blocker for go because dirfs is a new api cc rsc robpike alexbrainman networkimprov | 1 |
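The failure above is an exact-equality comparison on two directory mtimes that differ by under a millisecond. A minimal sketch (not from the Go test itself) of the usual de-flaking pattern — comparing timestamps with a skew tolerance instead of `==` — using the two ModTime values from the log; the 10 ms tolerance is an assumption, not a value taken from `TestDirFS`:

```python
from datetime import datetime, timedelta

def within_skew(a, b, tol=timedelta(milliseconds=10)):
    """True when two timestamps agree up to a filesystem-metadata skew tolerance."""
    return abs(a - b) <= tol

# The two ModTime values from the failure log, at microsecond precision.
info_mtime = datetime(2020, 11, 16, 2, 32, 2, 11108)   # entry.Info(): 02:32:02.0111083
stat_mtime = datetime(2020, 11, 16, 2, 32, 2, 12085)   # file.Stat():  02:32:02.012085

print(info_mtime == stat_mtime)             # False - the exact comparison that flakes
print(within_skew(info_mtime, stat_mtime))  # True  - tolerant comparison passes
```

The two readings are only ~977 µs apart, so any tolerance above a millisecond absorbs the skew.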
120,928 | 25,895,505,510 | IssuesEvent | 2022-12-14 22:01:07 | WebDevStudios/custom-post-type-ui | https://api.github.com/repos/WebDevStudios/custom-post-type-ui | closed | Default `$data['cpt_labels']` to empty array if empty or not array for both post types and taxonomies. | Code QA | Much like we do at https://github.com/WebDevStudios/custom-post-type-ui/blob/1.13.2/inc/post-types.php#L2025-L2031 for some other data points. | 1.0 | Default `$data['cpt_labels']` to empty array if empty or not array for both post types and taxonomies. - Much like we do at https://github.com/WebDevStudios/custom-post-type-ui/blob/1.13.2/inc/post-types.php#L2025-L2031 for some other data points. | non_test | default data to empty array if empty or not array for both post types and taxonomies much like we do at for some other data points | 0 |
97,484 | 8,657,569,049 | IssuesEvent | 2018-11-27 21:43:04 | Microsoft/AzureStorageExplorer | https://api.github.com/repos/Microsoft/AzureStorageExplorer | closed | Unable to expand services nodes for attached accounts with name and key using HTTP | :gear: attach :white_check_mark: won't fix testing | Storage Explorer Version: 1.5.0
Platform/OS Version: Windows 10/ MacOS High Sierra/ Linux Ubuntu 16.04
Architecture: ia32
Build Number: 20181127.2
Commit: 71a17300
Regression From: Not a regression
#### Steps to Reproduce: ####
1. Choose one GPV2 account -> Attach it with key and name.
**Key**: raUzzv+OExzxlz0G6h7m4PvpETc3DZhVZicmZF2Zs9RfYHnp7Ggcs3H42PyK2L/qvupsX1K813iI8lyVSWY6Aw==
**Account Name**: lclyan
2. Check "Use HTTP" on 'Connect with Name and Key' dialog when adding the account info -> 'Connect'.
3. Expand 'Local & Attached' -> Storage Accounts -> The attached account -> Blob Containers node.
#### Expected Experience: ####
Blob Containers node expanded well.
#### Actual Experience: ####
Unable to expand the Blob Containers node and pop up an error dialog.

**More Info:**
1. This issue reproduces for File Shares/ Queues/Tables node under the same account.
2. This issue reproduces for StorageV1 & Premium & Blob storage account.
3. This issue doesn't reproduce for Classic account.
| 1.0 | Unable to expand services nodes for attached accounts with name and key using HTTP - Storage Explorer Version: 1.5.0
Platform/OS Version: Windows 10/ MacOS High Sierra/ Linux Ubuntu 16.04
Architecture: ia32
Build Number: 20181127.2
Commit: 71a17300
Regression From: Not a regression
#### Steps to Reproduce: ####
1. Choose one GPV2 account -> Attach it with key and name.
**Key**: raUzzv+OExzxlz0G6h7m4PvpETc3DZhVZicmZF2Zs9RfYHnp7Ggcs3H42PyK2L/qvupsX1K813iI8lyVSWY6Aw==
**Account Name**: lclyan
2. Check "Use HTTP" on 'Connect with Name and Key' dialog when adding the account info -> 'Connect'.
3. Expand 'Local & Attached' -> Storage Accounts -> The attached account -> Blob Containers node.
#### Expected Experience: ####
Blob Containers node expanded well.
#### Actual Experience: ####
Unable to expand the Blob Containers node and pop up an error dialog.

**More Info:**
1. This issue reproduces for File Shares/ Queues/Tables node under the same account.
2. This issue reproduces for StorageV1 & Premium & Blob storage account.
3. This issue doesn't reproduce for Classic account.
| test | unable to expand services nodes for attached accounts with name and key using http storage explorer version platform os version windows macos high sierra linux ubuntu architecture build number commit regression from not a regression steps to reproduce choose one account attach it with key and name key rauzzv account name lclyan check use http on connect with name and key dialog when adding the account info connect expand local attached storage accounts the attached account blob containers node expected experience blob containers node expanded well actual experience unable to expand the blob containers node and pop up an error dialog more info this issue reproduces for file shares queues tables node under the same account this issue reproduces for premium blob storage account this issue doesn t reproduce for classic account | 1 |
126,456 | 10,423,429,511 | IssuesEvent | 2019-09-16 11:24:54 | GetTerminus/terminus-ui | https://api.github.com/repos/GetTerminus/terminus-ui | closed | Add additional allowed video mime types to the file upload component | Focus: component Target: latest Type: feature | ### Is your feature request related to a problem? Please describe.
Product would like to allow additional video file types to the creative upload component.
### Describe the solution you'd like.
For the following file types to be added to `TsFileAcceptedMimeTypes`:
• FLV (`video/x-flv`)
• WebM (`video/webm`)
• MOV (`video/quicktime`)
• MPEG (`video/mpeg`)
### Describe alternatives you've considered
Wishing as hard as I can, and then wishing a little harder!
### Additional context
Han shot first.
| 1.0 | Add additional allowed video mime types to the file upload component - ### Is your feature request related to a problem? Please describe.
Product would like to allow additional video file types to the creative upload component.
### Describe the solution you'd like.
For the following file types to be added to `TsFileAcceptedMimeTypes`:
• FLV (`video/x-flv`)
• WebM (`video/webm`)
• MOV (`video/quicktime`)
• MPEG (`video/mpeg`)
### Describe alternatives you've considered
Wishing as hard as I can, and then wishing a little harder!
### Additional context
Han shot first.
| test | add additional allowed video mime types to the file upload component is your feature request related to a problem please describe product would like to allow additional video file types to the creative upload component describe the solution you d like for the following file types to be added to tsfileacceptedmimetypes • flv video x flv • webm video webm • mov video quicktime • mpeg video mpeg describe alternatives you ve considered wishing as hard as i can and then wishing a little harder additional context han shot first | 1 |
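The four requested MIME strings amount to a plain allow-list check. The sketch below is an illustrative Python stand-in, not the component's real TypeScript `TsFileAcceptedMimeTypes` API; the set and function names are hypothetical:

```python
# Illustrative allow-list of the requested video MIME types; the names here
# are assumptions, not the terminus-ui implementation.
ALLOWED_VIDEO_MIME_TYPES = {
    "video/x-flv",      # FLV
    "video/webm",       # WebM
    "video/quicktime",  # MOV
    "video/mpeg",       # MPEG
}

def is_accepted(mime_type: str) -> bool:
    """Case-insensitive membership test against the allow-list."""
    return mime_type.strip().lower() in ALLOWED_VIDEO_MIME_TYPES

print(is_accepted("video/WebM"))      # True
print(is_accepted("video/x-ms-wmv"))  # False
```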
577,171 | 17,104,675,738 | IssuesEvent | 2021-07-09 15:52:53 | bcgov/entity | https://api.github.com/repos/bcgov/entity | opened | 0789965 B.C. LTD. BC0789965 Alteration to a Benefit Company | ENTITY OPS Priority1 |
#### ServiceNow incident: INC0099791
#### Contact information
Staff Name: Kathy Langlois
Staff Email: Kathy.Langlois@gov.bc.ca
#### Description
Can you please open a ticket with the lab to file an Alteration to a Benefit Company. Request includes change to Articles to include Benefit statement.
Affiliation info (mandatory - BC Registries account name or BCOL a/c): 330190 Company email (mandatory): trippon@comoxlaw.ca
Company phone (optional):250-339-7977
Filing date (date/time received at Registries. If email, time email received. If mail, time scanned document received): July 8, 2021 at 3:12 PM
Effective date (if different from filing date - ie. Future effective): n/a
Name request # (optional): n/a
Phone number or e-mail from Name request: 250-339-7977
Receipt info:
Method (Routing Slip or BCOL): BCOL
BCOL Account number or Route slip #:
BCOL DAT (if BCOL): C1019483
Folio (optional): n/a
Amount ($100 for immediate or $200 for future effective): $100.00
Request includes change to Articles to include Benefit statement.
[https://app.zenhub.com/files/157936592/2f9280b0-323a-4d1e-b8eb-1c63f2be2738/download](https://app.zenhub.com/files/157936592/2f9280b0-323a-4d1e-b8eb-1c63f2be2738/download)
DEV TASKS:
- [ ] use jupyter notebooks to file LTD > BEN
- [ ] Dev inform BAs that it has been filed and paid
- [ ] BAs review in SOFI or bcregistry.ca
- [ ] IF BEN > COLIN, BA downloads the alteration outputs
- [ ] BAs tell dev to unaffiliate the entity
- [ ] Devs unaffiliate the entity from the account
- [ ] BAs inform staff that they can proceed with their steps
#### Tasks
- [x] When ticket has been created, post the ticket in RocketChat '#Operations Tasks' channel
- [x] Add **entity** or **relationships** label to zenhub ticket
- [x] Add 'Priority1' label to zenhub ticket
- [x] Assign zenhub ticket to milestone: current, and place in pipeline: sprint backlog
- [x] Reply All to IT Ops email and provide zenhub ticket number opened and which team it was assigned to
- [ ] Dev/BAs to complete work & close zenhub ticket
- [ ] Author of zenhub ticket to mark ServiceNow ticket as resolved or ask IT Ops to do so
| 1.0 | 0789965 B.C. LTD. BC0789965 Alteration to a Benefit Company -
#### ServiceNow incident: INC0099791
#### Contact information
Staff Name: Kathy Langlois
Staff Email: Kathy.Langlois@gov.bc.ca
#### Description
Can you please open a ticket with the lab to file an Alteration to a Benefit Company. Request includes change to Articles to include Benefit statement.
Affiliation info (mandatory - BC Registries account name or BCOL a/c): 330190 Company email (mandatory): trippon@comoxlaw.ca
Company phone (optional):250-339-7977
Filing date (date/time received at Registries. If email, time email received. If mail, time scanned document received): July 8, 2021 at 3:12 PM
Effective date (if different from filing date - ie. Future effective): n/a
Name request # (optional): n/a
Phone number or e-mail from Name request: 250-339-7977
Receipt info:
Method (Routing Slip or BCOL): BCOL
BCOL Account number or Route slip #:
BCOL DAT (if BCOL): C1019483
Folio (optional): n/a
Amount ($100 for immediate or $200 for future effective): $100.00
Request includes change to Articles to include Benefit statement.
[https://app.zenhub.com/files/157936592/2f9280b0-323a-4d1e-b8eb-1c63f2be2738/download](https://app.zenhub.com/files/157936592/2f9280b0-323a-4d1e-b8eb-1c63f2be2738/download)
DEV TASKS:
- [ ] use jupyter notebooks to file LTD > BEN
- [ ] Dev inform BAs that it has been filed and paid
- [ ] BAs review in SOFI or bcregistry.ca
- [ ] IF BEN > COLIN, BA downloads the alteration outputs
- [ ] BAs tell dev to unaffiliate the entity
- [ ] Devs unaffiliate the entity from the account
- [ ] BAs inform staff that they can proceed with their steps
#### Tasks
- [x] When ticket has been created, post the ticket in RocketChat '#Operations Tasks' channel
- [x] Add **entity** or **relationships** label to zenhub ticket
- [x] Add 'Priority1' label to zenhub ticket
- [x] Assign zenhub ticket to milestone: current, and place in pipeline: sprint backlog
- [x] Reply All to IT Ops email and provide zenhub ticket number opened and which team it was assigned to
- [ ] Dev/BAs to complete work & close zenhub ticket
- [ ] Author of zenhub ticket to mark ServiceNow ticket as resolved or ask IT Ops to do so
| non_test | b c ltd alteration to a benefit company servicenow incident contact information staff name kathy langlois staff email kathy langlois gov bc ca description can you please open a ticket with the lab to file an alteration to a benefit company request includes change to articles to include benefit statement affiliation info mandatory bc registries account name or bcol a c company email mandatory trippon comoxlaw ca company phone optional filing date date time received at registries if email time email received if mail time scanned document received july at pm effective date if different from filing date ie future effective n a name request optional n a phone number or e mail from name request receipt info method routing slip or bcol bcol bcol account number or route slip bcol dat if bcol folio optional n a amount for immediate or for future effective request includes change to articles to include benefit statement dev tasks use jupyter notebooks to file ltd ben dev inform bas that it has been filed and paid bas review in sofi or bcregistry ca if ben colin ba downloads the alteration outputs bas tell dev to unaffiliate the entity devs unaffiliate the entity from the account bas inform staff that they can proceed with their steps tasks when ticket has been created post the ticket in rocketchat operations tasks channel add entity or relationships label to zenhub ticket add label to zenhub ticket assign zenhub ticket to milestone current and place in pipeline sprint backlog reply all to it ops email and provide zenhub ticket number opened and which team it was assigned to dev bas to complete work close zenhub ticket author of zenhub ticket to mark servicenow ticket as resolved or ask it ops to do so | 0 |
45,132 | 23,923,190,064 | IssuesEvent | 2022-09-09 19:09:00 | yds12/mexe | https://api.github.com/repos/yds12/mexe | closed | Check impact of `smallvec` | performance | Run benchmarks replacing `Vec` by the one in the `smallvec` crate, and see if it's worth it to add a dependency. | True | Check impact of `smallvec` - Run benchmarks replacing `Vec` by the one in the `smallvec` crate, and see if it's worth it to add a dependency. | non_test | check impact of smallvec run benchmarks replacing vec by the one in the smallvec crate and see if it s worth it to add a dependency | 0 |
160,417 | 12,510,872,801 | IssuesEvent | 2020-06-02 19:28:34 | PowerShell/PowerShell | https://api.github.com/repos/PowerShell/PowerShell | closed | Restore markdownlint tests | Area-Test Issue-Question | `markdownlint` tests were removed in #10163 due to a security issue in a dependency ([CVE-2019-10746](https://snyk.io/vuln/SNYK-JS-MIXINDEEP-450212)).
According to [snyk](https://snyk.io/test/npm/markdownlint/), `markdownlint` has no currently known vulnerabilities, so we should restore these tests.
 | 1.0 | Restore markdownlint tests - `markdownlint` tests were removed in #10163 due to a security issue in a dependency ([CVE-2019-10746](https://snyk.io/vuln/SNYK-JS-MIXINDEEP-450212)).
According to [snyk](https://snyk.io/test/npm/markdownlint/), `markdownlint` has no currently known vulnerabilities, so we should restore these tests.
 | test | restore markdownlint tests markdownlint tests were removed in due to a security issue in a dependency according to markdownlint has no currently known vulnerabilities so we should restore these tests | 1
205,667 | 15,987,760,150 | IssuesEvent | 2021-04-19 01:38:44 | AsthMattic/Parallax | https://api.github.com/repos/AsthMattic/Parallax | closed | Add STL's for tracker to GitHub | documentation | The STL's need to be uploaded so that anyone who wants to make their own tracker can download and print them. | 1.0 | Add STL's for tracker to GitHub - The STL's need to be uploaded so that anyone who wants to make their own tracker can download and print them. | non_test | add stl s for tracker to github the stl s need to be uploaded so that anyone who wants to make their own tracker can download and print them | 0 |
94,871 | 8,526,472,134 | IssuesEvent | 2018-11-02 16:20:58 | SME-Issues/issues | https://api.github.com/repos/SME-Issues/issues | closed | Intent Errors (5004) - 01/11/2018 | NLP Api pulse_tests | |Expression|Result|
|---|---|
| _did i bill delta ltd for july_ | expected intent to be `query_invoices` but found `query_payment` |
| _Do I need to chase ABC Ltd?_ | expected intent to be `query_invoices` but found `query_payment` |
| _do i need to pay anyone_ | expected intent to be `query_invoices` but found `query_payment` |
| _Do I need to pay anyone at the moment?_ | expected intent to be `query_invoices` but found `query_payment` |
| _give me all the bills I have to pay by next Tues_ | expected intent to be `query_invoices` but found `query_payment` |
| _give me all the invoices that are owed_ | expected intent to be `query_invoices` but found `query_payment` |
| _give me the bills i need to pay next week_ | expected intent to be `query_invoices` but found `query_payment` |
| _how much do i have to pay HMRC this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _how much do i have to pay on Tues_ | expected intent to be `query_invoices` but found `query_payment` |
| _how much is owed on the johnson ring pulls account_ | expected intent to be `query_invoices` but found `query_payment` |
| _is carl overdue in paying this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _show me the transactions on wigglycom's account_ | expected intent to be `query_invoices` but found `query_payment` |
| _Tell me who I need to chase for payments?_ | expected intent to be `query_invoices` but found `query_payment` |
| _What do I need to pay Tau Ltd?_ | expected intent to be `query_invoices` but found `query_payment` |
| _what do i need to pay this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _what does HSBC owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _what is the biggest one I owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _what is the biggest one owed to me?_ | expected intent to be `query_invoices` but found `query_payment` |
| _what should i expect to have to pay out this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _what supplier invoices need to be paid_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will cybg owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will CYGB owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will cygb owe?_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will hsbc owe?_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will i have to pay my landlord this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _What will I have to pay next month?_ | expected intent to be `query_invoices` but found `query_payment` |
| _when do i have to pay people_ | expected intent to be `query_invoices` but found `query_payment` |
| _when do i need to pay people?_ | expected intent to be `query_invoices` but found `query_payment` |
| _When should I get the money in from ABC ltd?_ | expected intent to be `query_invoices` but found `query_payment` |
| _when will i have to pay my bills_ | expected intent to be `query_invoices` but found `query_payment` |
| _which bills need to be paid this week_ | expected intent to be `query_invoices` but found `query_payment` |
| _which is the oldest one I owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _which is the oldest one owed to me?_ | expected intent to be `query_invoices` but found `query_payment` |
| _which supplier invoices need to be paid_ | expected intent to be `query_invoices` but found `query_payment` |
| _who do i need to pay_ | expected intent to be `query_invoices` but found `query_payment` |
| _who do i need to pay this month?_ | expected intent to be `query_invoices` but found `query_payment` |
| _who hasn't paid me yet?_ | expected intent to be `query_invoices` but found `query_payment` |
| _who needs to pay me_ | expected intent to be `query_invoices` but found `query_payment` |
| _what do i need to pay and what should i be paid_ | Multiple failures or warnings in test:
1) Test needs updating
2) expected intent to be `query_invoices` but found `query_payment`
|
| 1.0 | Intent Errors (5004) - 01/11/2018 - |Expression|Result|
|---|---|
| _did i bill delta ltd for july_ | expected intent to be `query_invoices` but found `query_payment` |
| _Do I need to chase ABC Ltd?_ | expected intent to be `query_invoices` but found `query_payment` |
| _do i need to pay anyone_ | expected intent to be `query_invoices` but found `query_payment` |
| _Do I need to pay anyone at the moment?_ | expected intent to be `query_invoices` but found `query_payment` |
| _give me all the bills I have to pay by next Tues_ | expected intent to be `query_invoices` but found `query_payment` |
| _give me all the invoices that are owed_ | expected intent to be `query_invoices` but found `query_payment` |
| _give me the bills i need to pay next week_ | expected intent to be `query_invoices` but found `query_payment` |
| _how much do i have to pay HMRC this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _how much do i have to pay on Tues_ | expected intent to be `query_invoices` but found `query_payment` |
| _how much is owed on the johnson ring pulls account_ | expected intent to be `query_invoices` but found `query_payment` |
| _is carl overdue in paying this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _show me the transactions on wigglycom's account_ | expected intent to be `query_invoices` but found `query_payment` |
| _Tell me who I need to chase for payments?_ | expected intent to be `query_invoices` but found `query_payment` |
| _What do I need to pay Tau Ltd?_ | expected intent to be `query_invoices` but found `query_payment` |
| _what do i need to pay this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _what does HSBC owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _what is the biggest one I owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _what is the biggest one owed to me?_ | expected intent to be `query_invoices` but found `query_payment` |
| _what should i expect to have to pay out this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _what supplier invoices need to be paid_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will cybg owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will CYGB owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will cygb owe?_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will hsbc owe?_ | expected intent to be `query_invoices` but found `query_payment` |
| _what will i have to pay my landlord this month_ | expected intent to be `query_invoices` but found `query_payment` |
| _What will I have to pay next month?_ | expected intent to be `query_invoices` but found `query_payment` |
| _when do i have to pay people_ | expected intent to be `query_invoices` but found `query_payment` |
| _when do i need to pay people?_ | expected intent to be `query_invoices` but found `query_payment` |
| _When should I get the money in from ABC ltd?_ | expected intent to be `query_invoices` but found `query_payment` |
| _when will i have to pay my bills_ | expected intent to be `query_invoices` but found `query_payment` |
| _which bills need to be paid this week_ | expected intent to be `query_invoices` but found `query_payment` |
| _which is the oldest one I owe_ | expected intent to be `query_invoices` but found `query_payment` |
| _which is the oldest one owed to me?_ | expected intent to be `query_invoices` but found `query_payment` |
| _which supplier invoices need to be paid_ | expected intent to be `query_invoices` but found `query_payment` |
| _who do i need to pay_ | expected intent to be `query_invoices` but found `query_payment` |
| _who do i need to pay this month?_ | expected intent to be `query_invoices` but found `query_payment` |
| _who hasn't paid me yet?_ | expected intent to be `query_invoices` but found `query_payment` |
| _who needs to pay me_ | expected intent to be `query_invoices` but found `query_payment` |
| _what do i need to pay and what should i be paid_ | Multiple failures or warnings in test:
1) Test needs updating
2) expected intent to be `query_invoices` but found `query_payment`
|
 | test | intent errors expression result did i bill delta ltd for july expected intent to be query invoices but found query payment do i need to chase abc ltd expected intent to be query invoices but found query payment do i need to pay anyone expected intent to be query invoices but found query payment do i need to pay anyone at the moment expected intent to be query invoices but found query payment give me all the bills i have to pay by next tues expected intent to be query invoices but found query payment give me all the invoices that are owed expected intent to be query invoices but found query payment give me the bills i need to pay next week expected intent to be query invoices but found query payment how much do i have to pay hmrc this month expected intent to be query invoices but found query payment how much do i have to pay on tues expected intent to be query invoices but found query payment how much is owed on the johnson ring pulls account expected intent to be query invoices but found query payment is carl overdue in paying this month expected intent to be query invoices but found query payment show me the transactions on wigglycom s account expected intent to be query invoices but found query payment tell me who i need to chase for payments expected intent to be query invoices but found query payment what do i need to pay tau ltd expected intent to be query invoices but found query payment what do i need to pay this month expected intent to be query invoices but found query payment what does hsbc owe expected intent to be query invoices but found query payment what is the biggest one i owe expected intent to be query invoices but found query payment what is the biggest one owed to me expected intent to be query invoices but found query payment what should i expect to have to pay out this month expected intent to be query invoices but found query payment what supplier invoices need to be paid expected intent to be query invoices but found query payment what will cybg owe expected intent to be query invoices but found query payment what will cygb owe expected intent to be query invoices but found query payment what will cygb owe expected intent to be query invoices but found query payment what will hsbc owe expected intent to be query invoices but found query payment what will i have to pay my landlord this month expected intent to be query invoices but found query payment what will i have to pay next month expected intent to be query invoices but found query payment when do i have to pay people expected intent to be query invoices but found query payment when do i need to pay people expected intent to be query invoices but found query payment when should i get the money in from abc ltd expected intent to be query invoices but found query payment when will i have to pay my bills expected intent to be query invoices but found query payment which bills need to be paid this week expected intent to be query invoices but found query payment which is the oldest one i owe expected intent to be query invoices but found query payment which is the oldest one owed to me expected intent to be query invoices but found query payment which supplier invoices need to be paid expected intent to be query invoices but found query payment who do i need to pay expected intent to be query invoices but found query payment who do i need to pay this month expected intent to be query invoices but found query payment who hasn t paid me yet expected intent to be query invoices but found query payment who needs to pay me expected intent to be query invoices but found query payment what do i need to pay and what should i be paid multiple failures or warnings in test test needs updating expected intent to be query invoices but found query payment | 1
76,125 | 26,254,298,160 | IssuesEvent | 2023-01-05 22:31:10 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | BUG: Adding np.nan to list changes pvalue | defect | ### Describe your issue.
Not sure if this counts as a bug but it seems like unintended functionality. Adding np.nan's to lists changes the pvalues when they should probably be ignored.
### Reproducing Code Example
```python
import numpy as np
import scipy.stats

l1 = [1, 2, 3]
l2 = [0, 1, 2]
print(scipy.stats.ranksums(l1, l2))
# RanksumsResult(statistic=1.091089451179962, pvalue=0.27523352407483426)

l1 = [1, 2, 3] + [np.nan]*100
l2 = [0, 1, 2] + [np.nan]*100
print(scipy.stats.ranksums(l1, l2))
# RanksumsResult(statistic=11.343152467935827, pvalue=8.02022721910526e-30)
```
### Error message
```shell
No error message
```
### SciPy/NumPy/Python version information
1.7.3, 1.21.0, sys.version_info(major=3, minor=9, micro=9, releaselevel='final', serial=0) | 1.0 | BUG: Adding np.nan to list changes pvalue - ### Describe your issue.
Not sure if this counts as a bug but it seems like unintended functionality. Adding np.nan's to lists changes the pvalues when they should probably be ignored.
### Reproducing Code Example
```python
import numpy as np
import scipy.stats

l1 = [1, 2, 3]
l2 = [0, 1, 2]
print(scipy.stats.ranksums(l1, l2))
# RanksumsResult(statistic=1.091089451179962, pvalue=0.27523352407483426)

l1 = [1, 2, 3] + [np.nan]*100
l2 = [0, 1, 2] + [np.nan]*100
print(scipy.stats.ranksums(l1, l2))
# RanksumsResult(statistic=11.343152467935827, pvalue=8.02022721910526e-30)
```
### Error message
```shell
No error message
```
### SciPy/NumPy/Python version information
1.7.3, 1.21.0, sys.version_info(major=3, minor=9, micro=9, releaselevel='final', serial=0) | non_test | bug adding np nan to list changes pvalue describe your issue not sure if this counts as a bug but it seems like unintended functionality adding np nan s to lists changes the pvalues when they should probably be ignored reproducing code example python print scipy stats ranksums ranksumsresult statistic pvalue print scipy stats ranksums ranksumsresult statistic pvalue error message shell no error message scipy numpy python version information sys version info major minor micro releaselevel final serial | 0 |
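A workaround sketch, not part of the original issue: drop the NaN entries before ranking. To keep it self-contained (no scipy dependency), the z-statistic below is a hand-rolled plain-Python version of the Wilcoxon rank-sum computation with average ranks for ties, shown only to illustrate that filtering restores the 3-element result:

```python
import math

def drop_nan(xs):
    """Filter out NaN entries before ranking (the assumed workaround)."""
    return [x for x in xs if not math.isnan(x)]

def ranksum_z(x, y):
    """Plain-Python Wilcoxon rank-sum z-statistic; ties get average ranks."""
    combined = list(x) + list(y)
    order = sorted(range(len(combined)), key=lambda i: combined[i])
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(order):
        j = i
        # Extend over a run of tied values.
        while j + 1 < len(order) and combined[order[j + 1]] == combined[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[:n1])                      # rank sum of the first sample
    expected = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (r1 - expected) / sd

l1 = [1, 2, 3] + [float("nan")] * 100
l2 = [0, 1, 2] + [float("nan")] * 100
z = ranksum_z(drop_nan(l1), drop_nan(l2))
print(round(z, 4))  # 1.0911
```

With the NaNs removed, the padded inputs reproduce the 3-element statistic (≈1.0911) instead of the inflated 11.34 reported above.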
97,971 | 8,673,895,115 | IssuesEvent | 2018-11-30 04:53:14 | humera987/FXLabs-Test-Automation | https://api.github.com/repos/humera987/FXLabs-Test-Automation | closed | FXLabs Testing : ApiV1EnvsIdGetPathParamIdMysqlSqlInjectionTimebound | FXLabs Testing | Project : FXLabs Testing
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=YTM3ZjdiZGMtNGQyMi00MDUwLWI4ZTMtYTkwNDE4M2I2NjBm; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 04:51:34 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/envs/
Request :
Response :
{
"timestamp" : "2018-11-30T04:51:35.326+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/envs/"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [491 < 7000 OR 491 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | 1.0 | FXLabs Testing : ApiV1EnvsIdGetPathParamIdMysqlSqlInjectionTimebound - Project : FXLabs Testing
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=YTM3ZjdiZGMtNGQyMi00MDUwLWI4ZTMtYTkwNDE4M2I2NjBm; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 04:51:34 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/envs/
Request :
Response :
{
"timestamp" : "2018-11-30T04:51:35.326+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/envs/"
}
Logs :
Assertion [@ResponseTime < 7000 OR @ResponseTime > 10000] resolved-to [491 < 7000 OR 491 > 10000] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | test | fxlabs testing project fxlabs testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api envs logs assertion resolved to result assertion resolved to result fx bot | 1 |
116,494 | 9,854,174,798 | IssuesEvent | 2019-06-19 16:13:50 | input-output-hk/rust-cardano | https://api.github.com/repos/input-output-hk/rust-cardano | closed | CircleCI: add minimal rust compiler version job too | D - medium Testing | We need to make sure we support a minimal compiler version in the circle ci jobs.
the minimal version should be `1.31.0` (edition: 2018). | 1.0 | CircleCI: add minimal rust compiler version job too - We need to make sure we support a minimal compiler version in the circle ci jobs.
the minimal version should be `1.31.0` (edition: 2018). | test | circleci add minimal rust compiler version job too we need to make sure we support a minimal compiler version in the circle ci jobs the minimal version should be edition | 1 |
130,370 | 10,607,340,851 | IssuesEvent | 2019-10-11 03:21:12 | rsx-labs/aide-frontend | https://api.github.com/repos/rsx-labs/aide-frontend | closed | Create Project label consistency | Bug For QA Testing | **Describe the bug**
1. Change 'Select Category' to 'Select category' using lowercase on the word category
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Version (please complete the following information):**
- Version 2.6 | 1.0 | Create Project label consistency - **Describe the bug**
1. Change 'Select Category' to 'Select category' using lowercase on the word category
**Screenshots**
If applicable, add screenshots to help explain your problem.

**Version (please complete the following information):**
- Version 2.6 | test | create project label consistency describe the bug change select category to select category using lowercase on the word category screenshots if applicable add screenshots to help explain your problem version please complete the following information version | 1 |
156,304 | 19,847,808,646 | IssuesEvent | 2022-01-21 08:54:06 | qiangmao/axios | https://api.github.com/repos/qiangmao/axios | opened | CVE-2017-16032 (Medium) detected in brace-expansion-1.1.6.tgz | security vulnerability | ## CVE-2017-16032 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>brace-expansion-1.1.6.tgz</b></p></summary>
<p>Brace expansion as known from sh/bash</p>
<p>Library home page: <a href="https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz">https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/brace-expansion/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-nodeunit-1.0.0.tgz (Root Library)
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- nyc-7.1.0.tgz
- glob-7.0.5.tgz
- minimatch-3.0.2.tgz
- :x: **brace-expansion-1.1.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/qiangmao/axios/commit/91ceb6046aaa22e9934ed13ea5acba9c988c490c">91ceb6046aaa22e9934ed13ea5acba9c988c490c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
brace-expansion before 1.1.7 are vulnerable to a regular expression denial of service.
<p>Publish Date: 2020-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16032>CVE-2017-16032</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/338">https://www.npmjs.com/advisories/338</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: v1.1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2017-16032 (Medium) detected in brace-expansion-1.1.6.tgz - ## CVE-2017-16032 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>brace-expansion-1.1.6.tgz</b></p></summary>
<p>Brace expansion as known from sh/bash</p>
<p>Library home page: <a href="https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz">https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/brace-expansion/package.json</p>
<p>
Dependency Hierarchy:
- grunt-contrib-nodeunit-1.0.0.tgz (Root Library)
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- nyc-7.1.0.tgz
- glob-7.0.5.tgz
- minimatch-3.0.2.tgz
- :x: **brace-expansion-1.1.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/qiangmao/axios/commit/91ceb6046aaa22e9934ed13ea5acba9c988c490c">91ceb6046aaa22e9934ed13ea5acba9c988c490c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
brace-expansion before 1.1.7 are vulnerable to a regular expression denial of service.
<p>Publish Date: 2020-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16032>CVE-2017-16032</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/338">https://www.npmjs.com/advisories/338</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: v1.1.7</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in brace expansion tgz cve medium severity vulnerability vulnerable library brace expansion tgz brace expansion as known from sh bash library home page a href path to dependency file package json path to vulnerable library node modules nyc node modules brace expansion package json dependency hierarchy grunt contrib nodeunit tgz root library nodeunit tgz tap tgz nyc tgz glob tgz minimatch tgz x brace expansion tgz vulnerable library found in head commit a href found in base branch master vulnerability details brace expansion before are vulnerable to a regular expression denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
115,242 | 9,784,974,630 | IssuesEvent | 2019-06-09 01:12:03 | fergusfrl/Recog | https://api.github.com/repos/fergusfrl/Recog | opened | More robust Unit tests | testing | The test suite does not currently cover 3 major use cases:
1. error states. eg: `Dir 'x' already exists`
2. unit level of the `handlers` file
3. unit level of the `templates` file. Jest + CSS is P1. React file template is P2
| 1.0 | More robust Unit tests - The test suite does not currently cover 3 major use cases:
1. error states. eg: `Dir 'x' already exists`
2. unit level of the `handlers` file
3. unit level of the `templates` file. Jest + CSS is P1. React file template is P2
| test | more robust unit tests the test suite does not currently cover major use cases error states eg dir x already exists unit level of the handlers file unit level of the templates file jest css is react file template is | 1 |
248,984 | 21,092,681,672 | IssuesEvent | 2022-04-04 07:20:17 | proarc/proarc-client | https://api.github.com/repos/proarc/proarc-client | closed | Adjust the sorting in the Export dropdown | 1 bug 6 for testing 7 proposed for closing 6c tested: KNAV | Please adjust the sorting in the export dropdown to match the core.
Right now, Scans comes up first, and that is only exported in exceptional cases.

 | 2.0 | Adjust the sorting in the Export dropdown - Please adjust the sorting in the export dropdown to match the core.
Right now, Scans comes up first, and that is only exported in exceptional cases.

 | test | adjust the sorting in the export dropdown please adjust the sorting in the export dropdown to match the core right now scans comes up first and that is only exported in exceptional cases | 1
326,657 | 28,010,048,049 | IssuesEvent | 2023-03-27 17:56:19 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | pkg/server: `TestDrain` is flakey | C-bug skipped-test GA-blocker T-kv branch-release-23.1 | **Describe the problem**
<img width="759" alt="Screen Shot 2022-08-26 at 5 53 10 PM" src="https://user-images.githubusercontent.com/6658984/187007842-633f91a6-66eb-45da-b9b2-7c9beecb902a.png">
TestDrain has been flaking a lot today.
```
Failed
=== RUN TestDrain
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/ab5462d2cb3989e2a33018db970f23de/logTestDrain523638963
test_log_scope.go:79: use -show-logs to present logs inline
drain_test.go:254: expected remaining false, got true
panic.go:500: -- test log scope end --
--- FAIL: TestDrain (74.31s)
```
Jira issue: CRDB-19064 | 1.0 | pkg/server: `TestDrain` is flakey - **Describe the problem**
<img width="759" alt="Screen Shot 2022-08-26 at 5 53 10 PM" src="https://user-images.githubusercontent.com/6658984/187007842-633f91a6-66eb-45da-b9b2-7c9beecb902a.png">
TestDrain has been flaking a lot today.
```
Failed
=== RUN TestDrain
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/ab5462d2cb3989e2a33018db970f23de/logTestDrain523638963
test_log_scope.go:79: use -show-logs to present logs inline
drain_test.go:254: expected remaining false, got true
panic.go:500: -- test log scope end --
--- FAIL: TestDrain (74.31s)
```
Jira issue: CRDB-19064 | test | pkg server testdrain is flakey describe the problem img width alt screen shot at pm src testdrain has been flaking a lot today failed run testdrain test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline drain test go expected remaining false got true panic go test log scope end fail testdrain jira issue crdb | 1 |
78,258 | 9,682,658,965 | IssuesEvent | 2019-05-23 09:39:33 | quicwg/base-drafts | https://api.github.com/repos/quicwg/base-drafts | closed | Initial maximum table size needs clarification. | -qpack design | Initial maximum table size needs clarification.
A QPACK context is shared between the encoder of one endpoint, call it endpoint
A, and the decoder of another endpoint, call it endpoint B. Endpoint A is the
one sending HTTP messages (requests or responses). (There is another,
completely independent QPACK context shared between the encoder of B and the
decoder of A used for messages sent by B.) A and B must have a shared
understanding of the initial maximum table size for this context. In HTTP/3,
endpoint B can send endpoint A a SETTINGS_HEADER_TABLE_SIZE setting.
In https://quicwg.org/base-drafts/draft-ietf-quic-qpack.html#maximum-table-size,
it is difficult for me to apply the sentence "The initial maximum size is
determined by the corresponding setting when HTTP requests or responses are
first permitted to be sent." from B's perspective, because B does not send HTTP
messages using this QPACK context. And from A's perspective, I understand that
A is allowed to send messages before receiving B's SETTINGS frame, at which
point the value of SETTINGS_HEADER_TABLE_SIZE is zero according to issue #2038
in the 1-RTT case, in contradiction with the current last sentence of this
paragraph.
Similarly, I find the phrase "the initial maximum table size is the value of the
setting in the peer’s SETTINGS frame" confusing from B's perspective, because
it is B, not its peer sending the relevant SETTING.
I propose to reword this paragraph. In the 1-RTT case, there are two options
for the initial maximum table size: zero, see #2256, or the value of
SETTINGS_HEADER_TABLE_SIZE, see #2257. In either case, the encoder
cannot insert entries into the dynamic table until it receives the SETTINGS frame
from the decoder.
As for the case where QPACK is used in a protocol other than HTTP/3, I feel like
the initial maximum table size can either be left unspecified, placing the
burden on that other protocol. Another option is to define it to be zero unless
that other protocol specifies otherwise. | 1.0 | Initial maximum table size needs clarification. - Initial maximum table size needs clarification.
A QPACK context is shared between the encoder of one endpoint, call it endpoint
A, and the decoder of another endpoint, call it endpoint B. Endpoint A is the
one sending HTTP messages (requests or responses). (There is another,
completely independent QPACK context shared between the encoder of B and the
decoder of A used for messages sent by B.) A and B must have a shared
understanding of the initial maximum table size for this context. In HTTP/3,
endpoint B can send endpoint A a SETTINGS_HEADER_TABLE_SIZE setting.
In https://quicwg.org/base-drafts/draft-ietf-quic-qpack.html#maximum-table-size,
it is difficult for me to apply the sentence "The initial maximum size is
determined by the corresponding setting when HTTP requests or responses are
first permitted to be sent." from B's perspective, because B does not send HTTP
messages using this QPACK context. And from A's perspective, I understand that
A is allowed to send messages before receiving B's SETTINGS frame, at which
point the value of SETTINGS_HEADER_TABLE_SIZE is zero according to issue #2038
in the 1-RTT case, in contradiction with the current last sentence of this
paragraph.
Similarly, I find the phrase "the initial maximum table size is the value of the
setting in the peer’s SETTINGS frame" confusing from B's perspective, because
it is B, not its peer sending the relevant SETTING.
I propose to reword this paragraph. In the 1-RTT case, there are two options
for the initial maximum table size: zero, see #2256, or the value of
SETTINGS_HEADER_TABLE_SIZE, see #2257. In either case, the encoder
cannot insert entries into the dynamic table until it receives the SETTINGS frame
from the decoder.
As for the case where QPACK is used in a protocol other than HTTP/3, I feel like
the initial maximum table size can either be left unspecified, placing the
burden on that other protocol. Another option is to define it to be zero unless
that other protocol specifies otherwise. | non_test | initial maximum table size needs clarification initial maximum table size needs clarification a qpack context is shared between the encoder of one endpoint call it endpoint a and the decoder of another endpoint call it endpoint b endpoint a is the one sending http messages requests or responses there is another completely independent qpack context shared between the encoder of b and the decoder of a used for messages sent by b a and b must have a shared understanding of the initial maximum table size for this context in http endpoint b can send endpoint a a settings header table size setting in it is difficult for me to apply the sentence the initial maximum size is determined by the corresponding setting when http requests or responses are first permitted to be sent from b s perspective because b does not send http messages using this qpack context and from a s perspective i understand that a is allowed to send messages before receiving b s settings frame at which point the value of settings header table size is zero according to issue in the rtt case in contradiction with the current last sentence of this paragraph similarly i find the phrase the initial maximum table size is the value of the setting in the peer’s settings frame confusing from b s perspective because it is b not its peer sending the relevant setting i propose to reword this paragraph in the rtt case there are two options for the initial maximum table size zero see or the value of settings header table size see in either case the encoder cannot insert entries into the dynamic table until it receives the settings frame from the decoder as for the case where qpack is used in a protocol other than http i feel like the initial maximum table size can either be left unspecified placing the burden on that other protocol another option is to define it to be zero unless that other protocol specifies otherwise | 0
39,016 | 15,856,620,207 | IssuesEvent | 2021-04-08 02:46:51 | vmware/singleton | https://api.github.com/repos/vmware/singleton | opened | [BUG] [Go Service]Unfriendly error message pops up when product name is illegal. | area/service kind/bug priority/low | **Describe the bug**
Build with commit 5b682b55bdfa517ffde50e28c7f9a9d9a5f4f86a
Unfriendly error message pops up when product name is illegal.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to translation-product-api
2. Input "vmware vip"(with non-accepted character: blank space) in product field.
http://localhost:8091/i18n/api/v1/bundles/components?productName=vmware%20vip&version=1.0
3. See the response:
```
{
"response": {
"code": 400,
"message": "Bad Request:map[productName:productName can only contain alphanumeric characters]"
}
}
```
**Expected behavior**
Just show the unmixed message.
```
{
"response": {
"code": 400,
"message": "productName can only contain alphanumeric characters."
}
}
```
**Additional context**
The issue is reproducible in all product/component/string based APIs.
| 1.0 | [BUG] [Go Service]Unfriendly error message pops up when product name is illegal. - **Describe the bug**
Build with commit 5b682b55bdfa517ffde50e28c7f9a9d9a5f4f86a
Unfriendly error message pops up when product name is illegal.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to translation-product-api
2. Input "vmware vip"(with non-accepted character: blank space) in product field.
http://localhost:8091/i18n/api/v1/bundles/components?productName=vmware%20vip&version=1.0
3. See the response:
```
{
"response": {
"code": 400,
"message": "Bad Request:map[productName:productName can only contain alphanumeric characters]"
}
}
```
**Expected behavior**
Just show the unmixed message.
```
{
"response": {
"code": 400,
"message": "productName can only contain alphanumeric characters."
}
}
```
**Additional context**
The issue is reproducible in all product/component/string based APIs.
| non_test | unfriendly error message pops up when product name is illegal describe the bug build with commit unfriendly error message pops up when product name is illegal to reproduce steps to reproduce the behavior go to translation product api input vmware vip with non accepted character blank space in product field see the response response code message bad request map expected behavior just show the unmixed message response code message productname can only contain alphanumeric characters additional context the issue is reproducible in all product component string based apis | 0 |
77,051 | 14,708,575,277 | IssuesEvent | 2021-01-05 00:04:13 | Torkin1/pokemon_goose_game | https://api.github.com/repos/Torkin1/pokemon_goose_game | closed | Pokemon health is not set in CoreController.setGame | CorrectiveCode | inGamePlayers list must be populated before iterating on it in setGame | 1.0 | Pokemon health is not set in CoreController.setGame - inGamePlayers list must be populated before iterating on it in setGame | non_test | pokemon health is not set in corecontroller setgame ingameplayers list must be populated before iterating on it in setgame | 0 |
393,018 | 26,968,718,190 | IssuesEvent | 2023-02-09 01:42:10 | ClaireG-J/IntroSE_03 | https://api.github.com/repos/ClaireG-J/IntroSE_03 | closed | Add information to README.md | documentation | README.md needs the following information:
* Description of the project
* Project objectives
* Add Project Features
The Languages and Techniques Section needs the following changes:
* Update python hyperlink to the documentation link for python 3
* Re-order items in alphabetical order | 1.0 | Add information to README.md - README.md needs the following information:
* Description of the project
* Project objectives
* Add Project Features
The Languages and Techniques Section needs the following changes:
* Update python hyperlink to the documentation link for python 3
* Re-order items in alphabetical order | non_test | add information to readme md readme md needs the following information description of the project project objectives add project features the languages and techniques section needs the following changes update python hyperlink to the documentation link for python re order items in alphabetical order | 0 |
36,043 | 14,919,614,733 | IssuesEvent | 2021-01-23 00:45:06 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | POST /refreshes - refresh ID problem | Pri2 assigned-to-author azure-analysis-services/svc doc-bug triaged | It seems that POST /refreshes does not contain the Location header in the response including the refresh ID
The next is returned for a call to https://westeurope.asazure.windows.net/servers/<server name>/models/<model name>/refreshes :
{
"statusCode": 200,
"headers": {
"Strict-Transport-Security": "max-age=31536000; includeSubDomains",
"x-ms-root-activity-id": "7e3aa5ec-85d8-4b0b-a637-5dd38613c58c",
"x-ms-current-utc-date": "12/15/2020 9:58:22 PM",
"X-Frame-Options": "deny",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Content-Security-Policy": "script-src 'self'",
"Date": "Tue, 15 Dec 2020 21:58:21 GMT",
"Server": "Microsoft-HTTPAPI/2.0",
"Content-Length": "148",
"Content-Type": "application/json"
},
"body": {
"startTime": "2020-12-15T21:58:20.2712217Z",
"type": "full",
"status": "notStarted",
"currentRefreshType": "full",
"objects": []
}
}
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6f706b7d-0a82-4535-f7ec-3851bd204bf2
* Version Independent ID: fefdb693-fb98-25a0-684c-d828bb709114
* Content: [Asynchronous refresh for Azure Analysis Services models](https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-async-refresh)
* Content Source: [articles/analysis-services/analysis-services-async-refresh.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/analysis-services/analysis-services-async-refresh.md)
* Service: **azure-analysis-services**
* GitHub Login: @Minewiskan
* Microsoft Alias: **owend** | 1.0 | POST /refreshes - refresh ID problem - It seems that POST /refreshes does not contain the Location header in the response including the refresh ID
The next is returned for a call to https://westeurope.asazure.windows.net/servers/<server name>/models/<model name>/refreshes :
{
"statusCode": 200,
"headers": {
"Strict-Transport-Security": "max-age=31536000; includeSubDomains",
"x-ms-root-activity-id": "7e3aa5ec-85d8-4b0b-a637-5dd38613c58c",
"x-ms-current-utc-date": "12/15/2020 9:58:22 PM",
"X-Frame-Options": "deny",
"X-Content-Type-Options": "nosniff",
"X-XSS-Protection": "1; mode=block",
"Content-Security-Policy": "script-src 'self'",
"Date": "Tue, 15 Dec 2020 21:58:21 GMT",
"Server": "Microsoft-HTTPAPI/2.0",
"Content-Length": "148",
"Content-Type": "application/json"
},
"body": {
"startTime": "2020-12-15T21:58:20.2712217Z",
"type": "full",
"status": "notStarted",
"currentRefreshType": "full",
"objects": []
}
}
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6f706b7d-0a82-4535-f7ec-3851bd204bf2
* Version Independent ID: fefdb693-fb98-25a0-684c-d828bb709114
* Content: [Asynchronous refresh for Azure Analysis Services models](https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-async-refresh)
* Content Source: [articles/analysis-services/analysis-services-async-refresh.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/analysis-services/analysis-services-async-refresh.md)
* Service: **azure-analysis-services**
* GitHub Login: @Minewiskan
* Microsoft Alias: **owend** | non_test | post refreshes refresh id problem it seems that post refreshes does not contain the location header in the response including the refresh id the next is returned for a call to name models refreshes statuscode headers strict transport security max age includesubdomains x ms root activity id x ms current utc date pm x frame options deny x content type options nosniff x xss protection mode block content security policy script src self date tue dec gmt server microsoft httpapi content length content type application json body starttime type full status notstarted currentrefreshtype full objects document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service azure analysis services github login minewiskan microsoft alias owend | 0 |
26,957 | 27,428,239,003 | IssuesEvent | 2023-03-01 22:14:23 | facebook/RapiD | https://api.github.com/repos/facebook/RapiD | closed | New Data should not abide by the same rules of edition as ID, before being uploaded to the OSM Database | feature-usability | ### Description
One of the problems that I have encountered with the AI Data suggestion is that several magenta lines, once accepted as highway=residential (despite what they will later become), are too big to be moved, adjusted or otherwise edited, which produces an unnecessary burden for the mapper, who has to split the line and adjust, move or offset it chunk by chunk. This is totally illogical to me; we should be able to move or adjust the whole line created independently of the size or zoom, because that line is not yet in the Database of OSM.
Mappers are being restricted by an ID rule that is perhaps Ok for EXISTING DATA, but not ok for data that is not YET part of the Database, as is the data suggested by RAPID AI. | True | New Data should not abide by the same rules of edition as ID, before being uploaded to the OSM Database - ### Description
One of the problems that I have encountered with the AI Data suggestion is that several magenta lines, once accepted as highway=residential (despite what they will later become), are too big to be moved, adjusted or otherwise edited, which produces an unnecessary burden for the mapper, who has to split the line and adjust, move or offset it chunk by chunk. This is totally illogical to me; we should be able to move or adjust the whole line created independently of the size or zoom, because that line is not yet in the Database of OSM.
Mappers are being restricted by an ID rule that is perhaps Ok for EXISTING DATA, but not ok for data that is not YET part of the Database, as is the data suggested by RAPID AI. | non_test | new data should not abide by the same rules of edition as id before being uploaded to the osm database description one of the problems that i have encountered with the ai data suggestion is that several magenta lines once accepted as highway residential despite what they will later become are too big to be moved adjusted or otherwise edited which produces an unnecessary burden for the mapper who has to split the line and adjust move or offset it chunk by chunk this is totally illogical to me we should be able to move or adjust the whole line created independently of the size or zoom because that line is not yet in the database of osm mappers are being restricted by an id rule that is perhaps ok for existing data but not ok for data that is not yet part of the database as is the data suggested by rapid ai | 0
7,333 | 7,891,261,976 | IssuesEvent | 2018-06-28 11:32:48 | molgenis/molgenis | https://api.github.com/repos/molgenis/molgenis | opened | Cannot map to existing entity in mapping service | 7.0.0-RC bug mod:data-mapping-service | ### How to Reproduce
Upload:
[test_nillable_expression_questionnaire.xlsx](https://github.com/molgenis/molgenis/files/2145073/test_nillable_expression_questionnaire.xlsx)
Fill in one row
Create mapping
Map test:nillable expression to the same table
Click create integrated dataset
Select existing dataset, select test: nillable expression
Click on map
### Expected behavior
Mapping succeeds
### Observed behavior
Error: null id | 1.0 | Cannot map to existing entity in mapping service - ### How to Reproduce
Upload:
[test_nillable_expression_questionnaire.xlsx](https://github.com/molgenis/molgenis/files/2145073/test_nillable_expression_questionnaire.xlsx)
Fill in one row
Create mapping
Map test:nillable expression to the same table
Click create integrated dataset
Select existing dataset, select test: nillable expression
Click on map
### Expected behavior
Mapping succeeds
### Observed behavior
Error: null id | non_test | cannot map to existing entity in mapping service how to reproduce upload fill in one row create mapping map test nillable expression to the same table click create integrated dataset select existing dataset select test nillable expression click on map expected behavior mapping succeeds observed behavior error null id | 0 |
30,466 | 4,621,982,992 | IssuesEvent | 2016-09-27 05:00:22 | fpco/store | https://api.github.com/repos/fpco/store | opened | Add Liquid Haskell to our travis CI | testing | Pinging @kantp, I think we should add Liquid Haskell to our CI, since we now have code that uses it! | 1.0 | Add Liquid Haskell to our travis CI - Pinging @kantp, I think we should add Liquid Haskell to our CI, since we now have code that uses it! | test | add liquid haskell to our travis ci pinging kantp i think we should add liquid haskell to our ci since we now have code that uses it | 1
17,637 | 10,739,429,436 | IssuesEvent | 2019-10-29 16:20:46 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | closed | Meeting: Tablet Configuration with Parking Meters | Service: Apps Type: IT Support Type: Meeting Workgroup: PE | ### Objective
Meeting to hand off tablets to Parking Meters
### Participants
Diana, David Smith, Gilbert Marrero, Steven Slawson, Steve Smith, Jeremy Stephens
### Agenda
Have David invite the folks that will be assigned the tablets.
------
- [x] Schedule meeting
- Scheduled for: 10/29/19 @ 8:15 am, Toomey location
- [x] Have users enter emails to finish enrollment
- [x] Admin pin code
- [x] set up individual pin code
| 1.0 | Meeting: Tablet Configuration with Parking Meters - ### Objective
Meeting to hand off tablets to Parking Meters
### Participants
Diana, David Smith, Gilbert Marrero, Steven Slawson, Steve Smith, Jeremy Stephens
### Agenda
Have David invite the folks that will be assigned the tablets.
------
- [x] Schedule meeting
- Scheduled for: 10/29/19 @ 8:15 am, Toomey location
- [x] Have users enter emails to finish enrollment
- [x] Admin pin code
- [x] set up individual pin code
| non_test | meeting tablet configuration with parking meters objective meeting to hand off tablets to parking meters participants diana david smith gilbert marrero steven slawson steve smith jeremy stephens agenda have david have invite the folks that will be assigned the tablets schedule meeting scheduled for am toomey location have users enter emails to finish enrollment admin pin code set up individual pin code | 0 |
239,293 | 19,844,497,200 | IssuesEvent | 2022-01-21 03:28:17 | denoland/deno | https://api.github.com/repos/denoland/deno | opened | webcrypto: run JOSE test suite in CI | suggestion tests ext/crypto | We've used jose test suite alongside WPT when working on ECC support. [I agree with @panva](https://github.com/denoland/deno/issues/11690#issuecomment-994937320) that it will be a nice addition to our tests. | 1.0 | webcrypto: run JOSE test suite in CI - We've used jose test suite alongside WPT when working on ECC support. [I agree with @panva](https://github.com/denoland/deno/issues/11690#issuecomment-994937320) that it will be a nice addition to our tests. | test | webcrypto run jose test suite in ci we ve used jose test suite alongside wpt when working on ecc support that it will be a nice addition to our tests | 1 |
112,453 | 24,274,914,355 | IssuesEvent | 2022-09-28 13:13:34 | firecracker-microvm/firecracker | https://api.github.com/repos/firecracker-microvm/firecracker | opened | Rootfs built by `devtool` with default parameters leads to a warning message | Codebase: Usability | # Description
When rootfs is built by `devtool`, the size of the deliverable is 300MB (300000000B) by default.
https://github.com/firecracker-microvm/firecracker/blob/main/tools/devtool#L2119-L2124
```
# `./devtool build_rootfs -s 500MB`
# Build a rootfs of custom size.
#
cmd_build_rootfs() {
# Default size for the resulting rootfs image is 300MB.
SIZE="300MB"
```
But as firecracker prefers a multiple of sector size 512, the following warning message is displayed when booting with the rootfs:
```
2022-09-28T12:58:57.099476095 [anonymous-instance:main:WARN:src/devices/src/virtio/block/device.rs:104] Disk size 300000000 is not a multiple of sector size 512; the remainder will not be visible to the guest.
```
https://github.com/firecracker-microvm/firecracker/blob/main/src/devices/src/virtio/block/device.rs#L101-L109
```
// We only support disk size, which uses the first two words of the configuration space.
// If the image is not a multiple of the sector size, the tail bits are not exposed.
if disk_size % SECTOR_SIZE != 0 {
warn!(
"Disk size {} is not a multiple of sector size {}; the remainder will not be \
visible to the guest.",
disk_size, SECTOR_SIZE
);
}
```
## To Reproduce
1. Create rootfs with `tools/devtool`
```
$ tools/devtool build_rootfs
```
2. Run firecracker
```
$ firecracker --api-sock /tmp/firecracker.sock --config-file config.json
2022-09-28T12:58:57.099476095 [anonymous-instance:main:WARN:src/devices/src/virtio/block/device.rs:104] Disk size 300000000 is not a multiple of sector size 512; the remainder will not be visible to the guest.
...
root@ubuntu-fc-uvm:~#
```
## Expected behaviour
The rootfs built with default parameters should align with the Firecracker preference.
## Environment
- Firecracker version: latest main branch
- Host and guest kernel versions: x86_64 / 5.10.135-122.509.amzn2.x86_64
## Additional context
`build_rootfs` uses `truncate` command to determine the rootfs size.
https://github.com/firecracker-microvm/firecracker/blob/main/tools/devtool#L2177
```
truncate -s "$SIZE" "$img_file"
```
Thus, we can align it by just changing it from "MB" to "M".
https://man7.org/linux/man-pages/man1/truncate.1.html
```
The SIZE argument is an integer and optional unit (example: 10K
is 10*1024). Units are K,M,G,T,P,E,Z,Y (powers of 1024) or
KB,MB,... (powers of 1000). Binary prefixes can be used, too:
KiB=K, MiB=M, and so on.
```
## Checks
- [x] Have you searched the Firecracker Issues database for similar problems?
- [x] Have you read the existing relevant Firecracker documentation?
- [x] Are you certain the bug being reported is a Firecracker issue?
| 1.0 | Rootfs built by `devtool` with default parameters leads to a warning message - # Description
When rootfs is built by `devtool`, the size of the deliverable is 300MB (300000000B) by default.
https://github.com/firecracker-microvm/firecracker/blob/main/tools/devtool#L2119-L2124
```
# `./devtool build_rootfs -s 500MB`
# Build a rootfs of custom size.
#
cmd_build_rootfs() {
# Default size for the resulting rootfs image is 300MB.
SIZE="300MB"
```
But as firecracker prefers a multiple of sector size 512, the following warning message is displayed when booting with the rootfs:
```
2022-09-28T12:58:57.099476095 [anonymous-instance:main:WARN:src/devices/src/virtio/block/device.rs:104] Disk size 300000000 is not a multiple of sector size 512; the remainder will not be visible to the guest.
```
https://github.com/firecracker-microvm/firecracker/blob/main/src/devices/src/virtio/block/device.rs#L101-L109
```
// We only support disk size, which uses the first two words of the configuration space.
// If the image is not a multiple of the sector size, the tail bits are not exposed.
if disk_size % SECTOR_SIZE != 0 {
warn!(
"Disk size {} is not a multiple of sector size {}; the remainder will not be \
visible to the guest.",
disk_size, SECTOR_SIZE
);
}
```
## To Reproduce
1. Create rootfs with `tools/devtool`
```
$ tools/devtool build_rootfs
```
2. Run firecracker
```
$ firecracker --api-sock /tmp/firecracker.sock --config-file config.json
2022-09-28T12:58:57.099476095 [anonymous-instance:main:WARN:src/devices/src/virtio/block/device.rs:104] Disk size 300000000 is not a multiple of sector size 512; the remainder will not be visible to the guest.
...
root@ubuntu-fc-uvm:~#
```
## Expected behaviour
The rootfs built with default parameters should align with the Firecracker preference.
## Environment
- Firecracker version: latest main branch
- Host and guest kernel versions: x86_64 / 5.10.135-122.509.amzn2.x86_64
## Additional context
`build_rootfs` uses `truncate` command to determine the rootfs size.
https://github.com/firecracker-microvm/firecracker/blob/main/tools/devtool#L2177
```
truncate -s "$SIZE" "$img_file"
```
Thus, we can align it by just changing it from "MB" to "M".
https://man7.org/linux/man-pages/man1/truncate.1.html
```
The SIZE argument is an integer and optional unit (example: 10K
is 10*1024). Units are K,M,G,T,P,E,Z,Y (powers of 1024) or
KB,MB,... (powers of 1000). Binary prefixes can be used, too:
KiB=K, MiB=M, and so on.
```
## Checks
- [x] Have you searched the Firecracker Issues database for similar problems?
- [x] Have you read the existing relevant Firecracker documentation?
- [x] Are you certain the bug being reported is a Firecracker issue?
| non_test | rootfs built by devtool with default parameters leads to a warning message description when rootfs is built by devtool the size of the deliverable is by default devtool build rootfs s build a rootfs of custom size cmd build rootfs default size for the resulting rootfs image is size but as firecracker prefers a multiple of sector size the following warning message is displayed when booting with the rootfs disk size is not a multiple of sector size the remainder will not be visible to the guest we only support disk size which uses the first two words of the configuration space if the image is not a multiple of the sector size the tail bits are not exposed if disk size sector size warn disk size is not a multiple of sector size the remainder will not be visible to the guest disk size sector size to reproduce crete rootfs with tools devtool tools devtool build rootfs run firecracker firecracker api sock tmp firecracker sock config file config json disk size is not a multiple of sector size the remainder will not be visible to the guest root ubuntu fc uvm expected behaviour the rootfs built with default parameters should align the firecracker preference environment firecracker version latest main branch host and guest kernel versions additional context build rootfs uses truncate command to determine the rootfs size truncate s size img file thus we can align it by just changing it from mb to m the size argument is an integer and optional unit example is units are k m g t p e z y powers of or kb mb powers of binary prefixes can be used too kib k mib m and so on checks have you searched the firecracker issues database for similar problems have you read the existing relevant firecracker documentation are you certain the bug being reported is a firecracker issue | 0 |
61,178 | 6,726,898,568 | IssuesEvent | 2017-10-17 11:40:34 | QubesOS/updates-status | https://api.github.com/repos/QubesOS/updates-status | closed | mgmt-salt-dom0-qvm v4.0.3 (r4.0) | r4.0-dom0-testing | Update of mgmt-salt-dom0-qvm to v4.0.3 for Qubes r4.0, see comments below for details.
Built from: https://github.com/QubesOS/qubes-mgmt-salt-dom0-qvm/commit/cb408e6e77be244ed400c7338e533a6e44d271e2
[Changes since previous version](https://github.com/QubesOS/qubes-mgmt-salt-dom0-qvm/compare/v4.0.2...v4.0.3):
QubesOS/qubes-mgmt-salt-dom0-qvm@cb408e6 version 4.0.3
QubesOS/qubes-mgmt-salt-dom0-qvm@0934e16 Add 'default-dispvm' property support to qvm.prefs.
QubesOS/qubes-mgmt-salt-dom0-qvm@e9ae760 Rename 'dispvm-allowed' to 'template-for-dispvms'
QubesOS/qubes-mgmt-salt-dom0-qvm@28e8540 Add support for managing tags
QubesOS/qubes-mgmt-salt-dom0-qvm@14a98e0 qvm.features: fix changes reporting.
Referenced issues:
QubesOS/qubes-issues#3047
If you're release manager, you can issue GPG-inline signed command:
* `Upload mgmt-salt-dom0-qvm cb408e6e77be244ed400c7338e533a6e44d271e2 r4.0 current repo` (available 7 days from now)
* `Upload mgmt-salt-dom0-qvm cb408e6e77be244ed400c7338e533a6e44d271e2 r4.0 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload mgmt-salt-dom0-qvm cb408e6e77be244ed400c7338e533a6e44d271e2 r4.0 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
| 1.0 | mgmt-salt-dom0-qvm v4.0.3 (r4.0) - Update of mgmt-salt-dom0-qvm to v4.0.3 for Qubes r4.0, see comments below for details.
Built from: https://github.com/QubesOS/qubes-mgmt-salt-dom0-qvm/commit/cb408e6e77be244ed400c7338e533a6e44d271e2
[Changes since previous version](https://github.com/QubesOS/qubes-mgmt-salt-dom0-qvm/compare/v4.0.2...v4.0.3):
QubesOS/qubes-mgmt-salt-dom0-qvm@cb408e6 version 4.0.3
QubesOS/qubes-mgmt-salt-dom0-qvm@0934e16 Add 'default-dispvm' property support to qvm.prefs.
QubesOS/qubes-mgmt-salt-dom0-qvm@e9ae760 Rename 'dispvm-allowed' to 'template-for-dispvms'
QubesOS/qubes-mgmt-salt-dom0-qvm@28e8540 Add support for managing tags
QubesOS/qubes-mgmt-salt-dom0-qvm@14a98e0 qvm.features: fix changes reporting.
Referenced issues:
QubesOS/qubes-issues#3047
If you're release manager, you can issue GPG-inline signed command:
* `Upload mgmt-salt-dom0-qvm cb408e6e77be244ed400c7338e533a6e44d271e2 r4.0 current repo` (available 7 days from now)
* `Upload mgmt-salt-dom0-qvm cb408e6e77be244ed400c7338e533a6e44d271e2 r4.0 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now)
* `Upload mgmt-salt-dom0-qvm cb408e6e77be244ed400c7338e533a6e44d271e2 r4.0 security-testing repo`
Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
| test | mgmt salt qvm update of mgmt salt qvm to for qubes see comments below for details built from qubesos qubes mgmt salt qvm version qubesos qubes mgmt salt qvm add default dispvm property support to qvm prefs qubesos qubes mgmt salt qvm rename dispvm allowed to template for dispvms qubesos qubes mgmt salt qvm add support for managing tags qubesos qubes mgmt salt qvm qvm features fix changes reporting referenced issues qubesos qubes issues if you re release manager you can issue gpg inline signed command upload mgmt salt qvm current repo available days from now upload mgmt salt qvm current dists repo you can choose subset of distributions like vm vm available days from now upload mgmt salt qvm security testing repo above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it | 1 |
326,555 | 28,001,189,209 | IssuesEvent | 2023-03-27 11:53:53 | harvester/harvester | https://api.github.com/repos/harvester/harvester | closed | [BUG] When in Rancher Harvester dashboard and adding a network config with an empty NIC you get the wrong error | kind/bug area/ui severity/4 area/rancher-related reproduce/always not-require/test-plan | **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When you are
**To Reproduce**
Steps to reproduce the behavior:
1. Navigate to the Harvester dashboard from an upstream Rancher cluster
2. Add a cluster network
3. Start to add a network config with a name
4. For uplink select Add NIC
5. Put in a NIC for one of the entries and leave the other blank
6. Click create
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
You should get the same error that you see in the Harvester dashboard
**Support bundle**
<!--
You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue.
Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider:
- Wait for a developer to reach you and provide the bundle file by any secure methods.
- Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle.
- Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. -->
[supportbundle_66d0018e-bc98-4375-9cc8-af8500be7163_2023-03-23T22-08-39Z.zip](https://github.com/harvester/harvester/files/11056731/supportbundle_66d0018e-bc98-4375-9cc8-af8500be7163_2023-03-23T22-08-39Z.zip)
**Environment**
- Harvester ISO version: v1.1.2-rc3
- Rancher version: v2.7.2-rc6 single node Docker
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): libvirt 3 node with ipxe-examples
**Additional context**
Add any other context about the problem here.
This error shows properly in the Harvester dashboard. You can see in the screenshots. The darkmode screenshot is the native Harvester dashboard. The error that is showing in Rancher is `%validation.arrayCountRequired(key: NICs, count: 1)%`


| 1.0 | [BUG] When in Rancher Harvester dashboard and adding a network config with an empty NIC you get the wrong error - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When you are
**To Reproduce**
Steps to reproduce the behavior:
1. Navigate to the Harvester dashboard from an upstream Rancher cluster
2. Add a cluster network
3. Start to add a network config with a name
4. For uplink select Add NIC
5. Put in a NIC for one of the entries and leave the other blank
6. Click create
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
You should get the same error that you see in the Harvester dashboard
**Support bundle**
<!--
You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue.
Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider:
- Wait for a developer to reach you and provide the bundle file by any secure methods.
- Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle.
- Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. -->
[supportbundle_66d0018e-bc98-4375-9cc8-af8500be7163_2023-03-23T22-08-39Z.zip](https://github.com/harvester/harvester/files/11056731/supportbundle_66d0018e-bc98-4375-9cc8-af8500be7163_2023-03-23T22-08-39Z.zip)
**Environment**
- Harvester ISO version: v1.1.2-rc3
- Rancher version: v2.7.2-rc6 single node Docker
- Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): libvirt 3 node with ipxe-examples
**Additional context**
Add any other context about the problem here.
This error shows properly in the Harvester dashboard. You can see in the screenshots. The darkmode screenshot is the native Harvester dashboard. The error that is showing in Rancher is `%validation.arrayCountRequired(key: NICs, count: 1)%`


| test | when in rancher harvester dashboard and adding a network config with an empty nic you get the wrong error describe the bug when you are to reproduce steps to reproduce the behavior navigate to the harvester dashboard from an upstream rancher cluster add a cluster network start to add a network config with a name for uplink select add nic put in a nic for one of the entries and leave the otehr blank click create expected behavior you should get the same error that you see in the harvester dashboard support bundle you can generate a support bundle in the bottom of harvester ui it includes logs and configurations that help diagnose the issue tokens passwords and secrets are automatically removed from support bundles if you feel it s not appropriate to share the bundle files publicly please consider wait for a developer to reach you and provide the bundle file by any secure methods join our slack community to provide the bundle send the bundle to harvester support bundle suse com with the correct issue id environment harvester iso version rancher version single node docker underlying infrastructure e g baremetal with dell poweredge libvirt node with ipxe examples additional context add any other context about the problem here this error show properly in the harvester dashboard you can see in the screenshots the darkmode screenshot it the native harvester dashboard the error that is showing in rancher is validation arraycountrequired key nics count | 1 |
154,738 | 12,226,950,334 | IssuesEvent | 2020-05-03 13:16:25 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | [Failing Test] kind-ipv6-master-parallel (ci-kubernetes-kind-ipv6-e2e-parallel) | kind/failing-test | **Which jobs are failing**:
> kind-ipv6-master-parallel (ci-kubernetes-kind-ipv6-e2e-parallel)
**Which test(s) are failing**:
> Overall
**Since when has it been failing**:
> 05-03 02:53 PDT
**Testgrid link**:
> https://testgrid.k8s.io/sig-release-master-blocking#kind-ipv6-master-parallel
**Reason for failure**:
> • Failure [74.001 seconds]
[sig-cli] Kubectl client
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:378
should support inline execution and attach [It]
skipped 45395 lines unfold_more
[Fail] [sig-cli] Kubectl client Simple pod [It] should support inline execution and attach
test/e2e/kubectl/kubectl.go:568
> Ran 591 of 5094 Specs in 1144.372 seconds
> FAIL! -- 590 Passed | 1 Failed | 0 Pending | 4503 Skipped
**Anything else we need to know**:
/cc @kubernetes/ci-signal
/milestone v1.19
/priority critical-urgent
/sig testing | 1.0 | [Failing Test] kind-ipv6-master-parallel (ci-kubernetes-kind-ipv6-e2e-parallel) - **Which jobs are failing**:
> kind-ipv6-master-parallel (ci-kubernetes-kind-ipv6-e2e-parallel)
**Which test(s) are failing**:
> Overall
**Since when has it been failing**:
> 05-03 02:53 PDT
**Testgrid link**:
> https://testgrid.k8s.io/sig-release-master-blocking#kind-ipv6-master-parallel
**Reason for failure**:
> • Failure [74.001 seconds]
[sig-cli] Kubectl client
test/e2e/kubectl/framework.go:23
Simple pod
test/e2e/kubectl/kubectl.go:378
should support inline execution and attach [It]
skipped 45395 lines unfold_more
[Fail] [sig-cli] Kubectl client Simple pod [It] should support inline execution and attach
test/e2e/kubectl/kubectl.go:568
> Ran 591 of 5094 Specs in 1144.372 seconds
> FAIL! -- 590 Passed | 1 Failed | 0 Pending | 4503 Skipped
**Anything else we need to know**:
/cc @kubernetes/ci-signal
/milestone v1.19
/priority critical-urgent
/sig testing | test | kind master parallel ci kubernetes kind parallel which jobs are failing kind master parallel ci kubernetes kind parallel which test s are failing overall since when has it been failing pdt testgrid link reason for failure • failure kubectl client test kubectl framework go simple pod test kubectl kubectl go should support inline execution and attach skipped lines unfold more kubectl client simple pod should support inline execution and attach test kubectl kubectl go ran of specs in seconds fail passed failed pending skipped anything else we need to know cc kubernetes ci signal milestone priority critical urgent sig testing | 1 |
349,592 | 31,814,783,789 | IssuesEvent | 2023-09-13 19:35:02 | saltstack/salt | https://api.github.com/repos/saltstack/salt | closed | [TESTS] MacOS Mojave Flakey integration.modules.test_saltcheck | Tests | ### Description of Issue
MacOS Mojave
integration.modules.test_saltcheck.SaltcheckModuleTest.test_saltcheck_checkall
https://jenkinsci.saltstack.com/job/pr-macosxmojave-py3-slow/job/master/681/console | 1.0 | [TESTS] MacOS Mojave Flakey integration.modules.test_saltcheck - ### Description of Issue
MacOS Mojave
integration.modules.test_saltcheck.SaltcheckModuleTest.test_saltcheck_checkall
https://jenkinsci.saltstack.com/job/pr-macosxmojave-py3-slow/job/master/681/console | test | macos mojave flakey integration modules test saltcheck description of issue macos mojave integration modules test saltcheck saltcheckmoduletest test saltcheck checkall | 1 |
39,451 | 5,234,222,405 | IssuesEvent | 2017-01-30 15:09:04 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | github.com/cockroachdb/cockroach/pkg/kv: TestNoSequenceCachePutOnRangeMismatchError failed under stress | Robot test-failure | SHA: https://github.com/cockroachdb/cockroach/commits/4c53128707d07d268833ff5ccb5acee9d8720544
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=false
TAGS=deadlock
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=133235&tab=buildLog
```
W170130 11:01:56.727834 3356197 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170130 11:01:56.728962 3356197 server/config.go:456 1 storage engine initialized
I170130 11:01:56.730499 3356197 server/node.go:444 [n?] store [n0,s0] not bootstrapped
I170130 11:01:56.737899 3356681 storage/replica.go:4339 [n?,s1,r1/1:/M{in-ax},@c42a9d9b00] gossip not initialized
I170130 11:01:56.740385 3356197 server/node.go:373 [n?] **** cluster f6396b17-126b-4aa5-a5ba-f0cf0b062f9c has been created
I170130 11:01:56.740444 3356197 server/node.go:374 [n?] **** add additional nodes by specifying --join=127.0.0.1:41971
I170130 11:01:56.743333 3356197 storage/store.go:1255 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I170130 11:01:56.743522 3356197 server/node.go:457 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:0}
I170130 11:01:56.743688 3356197 server/node.go:342 [n1] node ID 1 initialized
I170130 11:01:56.743791 3356197 gossip/gossip.go:293 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41971" > attrs:<> locality:<>
I170130 11:01:56.744007 3356197 storage/stores.go:296 [n1] read 0 node addresses from persistent storage
I170130 11:01:56.744198 3356197 server/node.go:589 [n1] connecting to gossip network to verify cluster ID...
I170130 11:01:56.744294 3356197 server/node.go:613 [n1] node connected via gossip and verified as part of cluster "f6396b17-126b-4aa5-a5ba-f0cf0b062f9c"
I170130 11:01:56.744400 3356197 server/node.go:392 [n1] node=1: started with [[]=] engine(s) and attributes []
I170130 11:01:56.744479 3356197 sql/executor.go:332 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:41971}
I170130 11:01:56.746684 3356197 server/server.go:629 [n1] starting https server at 127.0.0.1:56992
I170130 11:01:56.746749 3356197 server/server.go:630 [n1] starting grpc/postgres server at 127.0.0.1:41971
I170130 11:01:56.746813 3356197 server/server.go:631 [n1] advertising CockroachDB node at 127.0.0.1:41971
I170130 11:01:56.778273 3356197 sql/event_log.go:95 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN uniqueID SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]}
I170130 11:01:56.811261 3357451 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41971} Attrs: Locality:} ClusterID:f6396b17-126b-4aa5-a5ba-f0cf0b062f9c StartedAt:1485774116744342336}
I170130 11:01:56.817656 3356197 server/server.go:686 [n1] done ensuring all necessary migrations have run
I170130 11:01:56.817716 3356197 server/server.go:688 [n1] serving sql connections
I170130 11:02:06.936257 3357973 vendor/google.golang.org/grpc/transport/http2_client.go:1123 transport: http2Client.notifyError got notified that the client transport was broken EOF.
I170130 11:02:06.936526 3357719 vendor/google.golang.org/grpc/transport/http2_server.go:320 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:41971->127.0.0.1:32874: use of closed network connection
test_server_shim.go:133: had 1 ranges at startup, expected 6
``` | 1.0 | github.com/cockroachdb/cockroach/pkg/kv: TestNoSequenceCachePutOnRangeMismatchError failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/4c53128707d07d268833ff5ccb5acee9d8720544
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=false
TAGS=deadlock
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=133235&tab=buildLog
```
W170130 11:01:56.727834 3356197 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170130 11:01:56.728962 3356197 server/config.go:456 1 storage engine initialized
I170130 11:01:56.730499 3356197 server/node.go:444 [n?] store [n0,s0] not bootstrapped
I170130 11:01:56.737899 3356681 storage/replica.go:4339 [n?,s1,r1/1:/M{in-ax},@c42a9d9b00] gossip not initialized
I170130 11:01:56.740385 3356197 server/node.go:373 [n?] **** cluster f6396b17-126b-4aa5-a5ba-f0cf0b062f9c has been created
I170130 11:01:56.740444 3356197 server/node.go:374 [n?] **** add additional nodes by specifying --join=127.0.0.1:41971
I170130 11:01:56.743333 3356197 storage/store.go:1255 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I170130 11:01:56.743522 3356197 server/node.go:457 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:0}
I170130 11:01:56.743688 3356197 server/node.go:342 [n1] node ID 1 initialized
I170130 11:01:56.743791 3356197 gossip/gossip.go:293 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:41971" > attrs:<> locality:<>
I170130 11:01:56.744007 3356197 storage/stores.go:296 [n1] read 0 node addresses from persistent storage
I170130 11:01:56.744198 3356197 server/node.go:589 [n1] connecting to gossip network to verify cluster ID...
I170130 11:01:56.744294 3356197 server/node.go:613 [n1] node connected via gossip and verified as part of cluster "f6396b17-126b-4aa5-a5ba-f0cf0b062f9c"
I170130 11:01:56.744400 3356197 server/node.go:392 [n1] node=1: started with [[]=] engine(s) and attributes []
I170130 11:01:56.744479 3356197 sql/executor.go:332 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:41971}
I170130 11:01:56.746684 3356197 server/server.go:629 [n1] starting https server at 127.0.0.1:56992
I170130 11:01:56.746749 3356197 server/server.go:630 [n1] starting grpc/postgres server at 127.0.0.1:41971
I170130 11:01:56.746813 3356197 server/server.go:631 [n1] advertising CockroachDB node at 127.0.0.1:41971
I170130 11:01:56.778273 3356197 sql/event_log.go:95 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN uniqueID SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]}
I170130 11:01:56.811261 3357451 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:41971} Attrs: Locality:} ClusterID:f6396b17-126b-4aa5-a5ba-f0cf0b062f9c StartedAt:1485774116744342336}
I170130 11:01:56.817656 3356197 server/server.go:686 [n1] done ensuring all necessary migrations have run
I170130 11:01:56.817716 3356197 server/server.go:688 [n1] serving sql connections
I170130 11:02:06.936257 3357973 vendor/google.golang.org/grpc/transport/http2_client.go:1123 transport: http2Client.notifyError got notified that the client transport was broken EOF.
I170130 11:02:06.936526 3357719 vendor/google.golang.org/grpc/transport/http2_server.go:320 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:41971->127.0.0.1:32874: use of closed network connection
test_server_shim.go:133: had 1 ranges at startup, expected 6
``` | test | github com cockroachdb cockroach pkg kv testnosequencecacheputonrangemismatcherror failed under stress sha parameters cockroach proposer evaluated kv false tags deadlock goflags stress build found a failed test server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized server node go store not bootstrapped storage replica go gossip not initialized server node go cluster has been created server node go add additional nodes by specifying join storage store go failed initial metrics computation system config not yet available server node go initialized store capacity available rangecount leasecount server node go node id initialized gossip gossip go nodedescriptor set to node id address attrs locality storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster server node go node started with engine s and attributes sql executor go creating distsqlplanner with address tcp server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at sql event log go event alter table target info tablename eventlog statement alter table system eventlog alter column uniqueid set default uuid user node mutationid cascadedroppedviews sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality clusterid startedat server server go done ensuring all necessary migrations have run server server go serving sql connections vendor google golang org grpc transport client go transport notifyerror got notified that the client transport was broken eof vendor google golang org grpc transport server go transport handlestreams failed to read frame read tcp use of closed network connection test server shim go had ranges at startup expected | 1 |
183,559 | 6,689,588,158 | IssuesEvent | 2017-10-09 03:31:24 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.youtube.com - video or audio doesn't play | browser-firefox priority-critical status-needstriage | <!-- @browser: Firefox 57.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0 -->
<!-- @reported_with: web -->
**URL**: https://www.youtube.com/
**Browser / Version**: Firefox 57.0
**Operating System**: Linux
**Tested Another Browser**: Yes
**Problem type**: Video or audio doesn't play
**Description**: Audio
**Steps to Reproduce**:
Pulseaudio incompatibility for the debian stretch.
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.youtube.com - video or audio doesn't play - <!-- @browser: Firefox 57.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0 -->
<!-- @reported_with: web -->
**URL**: https://www.youtube.com/
**Browser / Version**: Firefox 57.0
**Operating System**: Linux
**Tested Another Browser**: Yes
**Problem type**: Video or audio doesn't play
**Description**: Audio
**Steps to Reproduce**:
Pulseaudio incompatibility for the debian stretch.
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | video or audio doesn t play url browser version firefox operating system linux tested another browser yes problem type video or audio doesn t play description audio steps to reproduce pulseaudio incompatibility for the debian stretch from with ❤️ | 0 |
95,453 | 8,559,271,418 | IssuesEvent | 2018-11-08 20:45:57 | LLK/scratch-gui | https://api.github.com/repos/LLK/scratch-gui | closed | Issue found in Bug Hunt on 11/2/18 | smoke-testing | A number of issues were marked as fixed or with issue numbers in the doc so I didn't put them here.
* [x] Edge shows block stacks in backpack as a white square @BryceLTaylor
* [x] Design thing: There is a blue hover state for dragging code into workspace, but not for dragging sprites in sprite area. It would help to mirror the workspace behavior here: @kathymakes

* [x] I was in the costume tab, and added a new sprite, I got switched to the code tab for the new sprite, and the block category that was selected was still the old one from when I was in the blocks tab of the previous sprite. This is a feature we implemented for switching between sprites, but it is a little more unexpected/surprising when the last thing I was in was the costume tab, not necessarily a bug but maybe something to keep an eye on @kchadha
* [x] Backpack should not appear if you are not logged in. @carljbowman
* [x] Sprite tiles in the backpack do not display fonts properly. See “Test” sprite below @carljbowman

* [x] I can share the love of a costume to another sprite but not of a costume from the backpack to another sprite, iPad Safari @kchadha
* [x] What should happen in the backpack when you have something with a particular name (e.g. “toucan”), and you drag in an asset of the same type with the same name? Should the backpack rename it the way e.g. the costume list does? If not, you can have identical-looking entries in the backpack (especially for sounds, which share the same icon) @ericrosenbaum
Device | Browser | Name
-- | -- | --
Windows* | Chrome | Eric R
Mac | Chrome | ChrisG
iPad** | Safari | Karishma
Chromebook | Chrome | Carl
Windows* | Firefox | DD
Android Tablet | Chrome | Ben
Windows* | Edge | Bryce
Mac | Safari | Andrew
Mac | Firefox | kathy
| 1.0 | Issue found in Bug Hunt on 11/2/18 - A number of issues were marked as fixed or with issue numbers in the doc so I didn't put them here.
* [x] Edge shows block stacks in backpack as a white square @BryceLTaylor
* [x] Design thing: There is a blue hover state for dragging code into workspace, but not for dragging sprites in sprite area. It would help to mirror the workspace behavior here: @kathymakes

* [x] I was in the costume tab, and added a new sprite, I got switched to the code tab for the new sprite, and the block category that was selected was still the old one from when I was in the blocks tab of the previous sprite. This is a feature we implemented for switching between sprites, but it is a little more unexpected/surprising when the last thing I was in was the costume tab, not necessarily a bug but maybe something to keep an eye on @kchadha
* [x] Backpack should not appear if you are not logged in. @carljbowman
* [x] Sprite tiles in the backpack do not display fonts properly. See “Test” sprite below @carljbowman

* [x] I can share the love of a costume to another sprite but not of a costume from the backpack to another sprite, iPad Safari @kchadha
* [x] What should happen in the backpack when you have something with a particular name (e.g. “toucan”), and you drag in an asset of the same type with the same name? Should the backpack rename it the way e.g. the costume list does? If not, you can have identical-looking entries in the backpack (especially for sounds, which share the same icon) @ericrosenbaum
Device | Browser | Name
-- | -- | --
Windows* | Chrome | Eric R
Mac | Chrome | ChrisG
iPad** | Safari | Karishma
Chromebook | Chrome | Carl
Windows* | Firefox | DD
Android Tablet | Chrome | Ben
Windows* | Edge | Bryce
Mac | Safari | Andrew
Mac | Firefox | kathy
| test | issue found in bug hunt on a number of issues were marked as fixed or with issue numbers in the doc so i didn t put them here edge shows block stacks in backpack as a white square bryceltaylor design thing there is a blue hover state for dragging code into workspace but not for dragging sprites in sprite area it would help to mirror the workspace behavior here kathymakes i was in the costume tab and added a new sprite i got switched to the code tab for the new sprite and the block category that was selected was still the old one from when i was in the blocks tab of the previous sprite this is a feature we implemented for switching between sprites but it is a little more unexpected surprising when the last thing i was in was the costume tab not necessarily a bug but maybe something to keep an eye on kchadha backpack should not appear if you are not logged in carljbowman sprite tiles in the backpack do not display fonts properly see “test” sprite below carljbowman i can share the love of a costume to another sprite but not of a costume from the backpack to another sprite ipad safari kchadha what should happen in the backpack when you have something with a particular name e g “toucan” and you drag in an asset of the same type with the same name should the backpack rename it the way e g the costume list does if not you can have identical looking entries in the backpack especially for sounds which share the same icon ericrosenbaum device browser name windows chrome eric r mac chrome chrisg ipad safari karishma chromebook chrome carl windows firefox dd android tablet chrome ben windows edge bryce mac safari andrew mac firefox kathy | 1 |
721,709 | 24,835,503,856 | IssuesEvent | 2022-10-26 08:31:46 | AY2223S1-CS2103T-T09-4/tp | https://api.github.com/repos/AY2223S1-CS2103T-T09-4/tp | closed | As a tutor, I can mark my student as present | type.Story priority.MEDIUM | ... so that I know how much the student owes me and can set the date of the next class to be a week later. | 1.0 | As a tutor, I can mark my student as present - ... so that I know how much the student owes me and can set the date of the next class to be a week later. | non_test | as a tutor i can mark my student as present so that i know how much the student owes me and can set the date of the next class to be a week later | 0 |
434,028 | 12,512,956,452 | IssuesEvent | 2020-06-03 00:23:34 | eclipse-ee4j/glassfish | https://api.github.com/repos/eclipse-ee4j/glassfish | closed | SJSU Deployment ‘Location’ field should be marked as mandatory with a red asterisk | Component: admin_gui ERR: Assignee Priority: Trivial Stale Type: Improvement | Deployment of a war file using the GlassFish Admin Console UI fails after
leaving the ‘Location’ field empty, which is not marked as a required field.
The ‘Location’ field is not marked with a red asterisk to indicate a required
field. Only the ‘Application Name’ field is marked as such. The ‘Location’
field needs to be indicated as mandatory as well.
Steps to reproduce:
1. Start the GlassFish server build b44.
2. Open a browser window and type in the URL [http://localhost:4848](http://localhost:4848) to
bring up the Admin Console.
3. Click on ‘Applications’ on the left hand pane.
4. Click on the ‘Deploy’ button in the right hand pane.
5. Only ‘Application Name’ in the right hand pane appears as a required
field (indicated with a red asterisk). The ‘Location’ field needs to be
indicated as such as well.
#### Environment
Operating System: Windows Vista
Platform: PC
#### Affected Versions
[V3] | 1.0 | SJSU Deployment ‘Location’ field should be marked as mandatory with a red asterisk - Deployment of a war file using the GlassFish Admin Console UI fails after
leaving the ‘Location’ field empty, which is not marked as a required field.
The ‘Location’ field is not marked with a red asterisk to indicate a required
field. Only the ‘Application Name’ field is marked as such. The ‘Location’
field needs to be indicated as mandatory as well.
Steps to reproduce:
1. Start the GlassFish server build b44.
2. Open a browser window and type in the URL [http://localhost:4848](http://localhost:4848) to
bring up the Admin Console.
3. Click on ‘Applications’ on the left hand pane.
4. Click on the ‘Deploy’ button in the right hand pane.
5. Only ‘Application Name’ in the right hand pane appears as a required
field (indicated with a red asterisk). The ‘Location’ field needs to be
indicated as such as well.
#### Environment
Operating System: Windows Vista
Platform: PC
#### Affected Versions
[V3] | non_test | sjsu deployment ‘location’ field should be marked as mandatory with a red asterisk deployment of a war file using the glassfish admin console ui fails after leaving the ‘location’ field empty which is not marked as a required field the ‘location’ field is not marked with a red asterisk to indicate a required field only the ‘application name’ field is marked as such the ‘location’ field needs to be indicated as mandatory as well steps to reproduce start the glassfish server build open a browser window and type in the url to bring up the admin console click on ‘applications’ on the left hand pane click on the ‘deploy’ button in the right hand pane only ‘application name’ in the right hand pane appears as a required field indicated with a red asterisk the ‘location’ field needs to be indicated as such as well environment operating system windows vista platform pc affected versions | 0 |
324,581 | 27,811,991,728 | IssuesEvent | 2023-03-18 08:12:05 | unifyai/ivy | https://api.github.com/repos/unifyai/ivy | opened | Fix set.test_unique_values | Sub Task Failing Test | | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4452785526/jobs/7820747992" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4452785526/jobs/7820747992" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4452785526/jobs/7820747992" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4452785526/jobs/7820747992" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_functional/test_core/test_set.py::test_unique_values[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-03-18T02:22:42.4870104Z E hypothesis.errors.InvalidArgument: Cannot sample from <hypothesis.strategies._internal.core.CompositeStrategy object at 0x7ffa08b105e0>, not an ordered collection. Hypothesis goes to some length to ensure that the sampled_from strategy has stable results between runs. To replay a saved example, the sampled values must have the same iteration order on every run - ruling out sets, dicts, etc due to hash randomization. Most cases can simply use `sorted(values)`, but mixed types or special values such as math.nan require careful handling - and note that when simplifying an example, Hypothesis treats earlier values as simpler.
2023-03-18T02:22:42.4873933Z E hypothesis.errors.Flaky: Inconsistent data generation! Data generation behaved differently between different runs. Is your data generation depending on external state?
</details>
| 1.0 | Fix set.test_unique_values - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4452785526/jobs/7820747992" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4452785526/jobs/7820747992" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4452785526/jobs/7820747992" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4452785526/jobs/7820747992" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_functional/test_core/test_set.py::test_unique_values[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-03-18T02:22:42.4870104Z E hypothesis.errors.InvalidArgument: Cannot sample from <hypothesis.strategies._internal.core.CompositeStrategy object at 0x7ffa08b105e0>, not an ordered collection. Hypothesis goes to some length to ensure that the sampled_from strategy has stable results between runs. To replay a saved example, the sampled values must have the same iteration order on every run - ruling out sets, dicts, etc due to hash randomization. Most cases can simply use `sorted(values)`, but mixed types or special values such as math.nan require careful handling - and note that when simplifying an example, Hypothesis treats earlier values as simpler.
2023-03-18T02:22:42.4873933Z E hypothesis.errors.Flaky: Inconsistent data generation! Data generation behaved differently between different runs. Is your data generation depending on external state?
</details>
| test | fix set test unique values tensorflow img src torch img src numpy img src jax img src failed ivy tests test ivy test functional test core test set py test unique values e hypothesis errors invalidargument cannot sample from not an ordered collection hypothesis goes to some length to ensure that the sampled from strategy has stable results between runs to replay a saved example the sampled values must have the same iteration order on every run ruling out sets dicts etc due to hash randomization most cases can simply use sorted values but mixed types or special values such as math nan require careful handling and note that when simplifying an example hypothesis treats earlier values as simpler e hypothesis errors flaky inconsistent data generation data generation behaved differently between different runs is your data generation depending on external state | 1 |
340,974 | 10,280,989,137 | IssuesEvent | 2019-08-26 07:20:16 | kubeflow/pipelines | https://api.github.com/repos/kubeflow/pipelines | closed | Update the test infrastructure to use kustomize | priority/p0 | Currently the test infra deploys a cluster with pipeline code from HEAD
https://github.com/kubeflow/pipelines/blob/master/test/deploy-kubeflow.sh#L35
https://github.com/kubeflow/pipelines/blob/master/test/deploy-pipeline.sh#L50
relying on ksonnet.
We need to update these script to use kustomize as kubeflow is deprecating ksonnet in favor of kustomize. | 1.0 | Update the test infrastructure to use kustomize - Currently the test infra deploys a cluster with pipeline code from HEAD
https://github.com/kubeflow/pipelines/blob/master/test/deploy-kubeflow.sh#L35
https://github.com/kubeflow/pipelines/blob/master/test/deploy-pipeline.sh#L50
relying on ksonnet.
We need to update these script to use kustomize as kubeflow is deprecating ksonnet in favor of kustomize. | non_test | update the test infrastructure to use kustomize currently the test infra deploys a cluster with pipeline code from head relying on ksonnet we need to update these script to use kustomize as kubeflow is deprecating ksonnet in favor of kustomize | 0 |
206,756 | 15,772,563,477 | IssuesEvent | 2021-03-31 21:58:21 | arron1993/blackbox.arron.id | https://api.github.com/repos/arron1993/blackbox.arron.id | closed | Add total cars in session when recording the lap data | enhancement needs testing | The cars position in the field is recorded at the end of each lap but not how many cars are currently racing, so add it. | 1.0 | Add total cars in session when recording the lap data - The cars position in the field is recorded at the end of each lap but not how many cars are currently racing, so add it. | test | add total cars in session when recording the lap data the cars position in the field is recorded at the end of each lap but not how many cars are currently racing so add it | 1 |
316,110 | 27,137,631,780 | IssuesEvent | 2023-02-16 14:18:56 | pandas-dev/pandas | https://api.github.com/repos/pandas-dev/pandas | closed | Unexpected behavior in cut() with nullable Int64 dtype | Missing-data good first issue ExtensionArray Needs Tests cut NA - MaskedArrays | #### Code Sample
```python
import pandas as pd
series = pd.Series([0, 1, 2, 3, 4, pd.np.nan, 6, 7], dtype='Int64')
breaks = [0, 2, 4, 6, 8]
breaks_cut = pd.cut(series, breaks)
breaks_cut
```
```
0 NaN
1 (0.0, 2.0]
2 (0.0, 2.0]
3 (2.0, 4.0]
4 (2.0, 4.0]
5 NaN
6 (0.0, 2.0]
7 (6.0, 8.0]
dtype: category
Categories (4, interval[int64]): [(0, 2] < (2, 4] < (4, 6] < (6, 8]]
```
#### Problem Description
When using the `pd.Int64` nullable integer data type, `pd.cut()` unexpectedly bins the first non-`np.nan` value after an `np.nan` into the lowest interval. In the above example, the number `6` is binned into `(0.0, 2.0]`.
#### Expected Output
```
0 NaN
1 (0.0, 2.0]
2 (0.0, 2.0]
3 (2.0, 4.0]
4 (2.0, 4.0]
5 NaN
6 (4.0, 6.0]
7 (6.0, 8.0]
dtype: category
Categories (4, interval[int64]): [(0, 2] < (2, 4] < (4, 6] < (6, 8]]
```
Note that using an `IntervalIndex` produces the expected output.
```python
import pandas as pd
series = pd.Series([0, 1, 2, 3, 4, pd.np.nan, 6, 7], dtype='Int64')
breaks = [0, 2, 4, 6, 8]
intervals = [pd.Interval(x, y) for x, y in zip(breaks[:-1], breaks[1:])]
interval_index = pd.IntervalIndex(intervals)
interval_cut = pd.cut(series, interval_index)
interval_cut
```
#### Output of `pd.show_versions()`
<details>
```
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Linux
OS-release : 5.0.0-37-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.3
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 44.0.0.post20200102
Cython : None
pytest : 5.3.2
hypothesis : None
sphinx : 2.3.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.2
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : 4.8.2
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.2
matplotlib : 3.1.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
```
</details> | 1.0 | Unexpected behavior in cut() with nullable Int64 dtype - #### Code Sample
```python
import pandas as pd
series = pd.Series([0, 1, 2, 3, 4, pd.np.nan, 6, 7], dtype='Int64')
breaks = [0, 2, 4, 6, 8]
breaks_cut = pd.cut(series, breaks)
breaks_cut
```
```
0 NaN
1 (0.0, 2.0]
2 (0.0, 2.0]
3 (2.0, 4.0]
4 (2.0, 4.0]
5 NaN
6 (0.0, 2.0]
7 (6.0, 8.0]
dtype: category
Categories (4, interval[int64]): [(0, 2] < (2, 4] < (4, 6] < (6, 8]]
```
#### Problem Description
When using the `pd.Int64` nullable integer data type, `pd.cut()` unexpectedly bins the first non-`np.nan` value after an `np.nan` into the lowest interval. In the above example, the number `6` is binned into `(0.0, 2.0]`.
#### Expected Output
```
0 NaN
1 (0.0, 2.0]
2 (0.0, 2.0]
3 (2.0, 4.0]
4 (2.0, 4.0]
5 NaN
6 (4.0, 6.0]
7 (6.0, 8.0]
dtype: category
Categories (4, interval[int64]): [(0, 2] < (2, 4] < (4, 6] < (6, 8]]
```
Note that using an `IntervalIndex` produces the expected output.
```python
import pandas as pd
series = pd.Series([0, 1, 2, 3, 4, pd.np.nan, 6, 7], dtype='Int64')
breaks = [0, 2, 4, 6, 8]
intervals = [pd.Interval(x, y) for x, y in zip(breaks[:-1], breaks[1:])]
interval_index = pd.IntervalIndex(intervals)
interval_cut = pd.cut(series, interval_index)
interval_cut
```
#### Output of `pd.show_versions()`
<details>
```
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.6.final.0
python-bits : 64
OS : Linux
OS-release : 5.0.0-37-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 0.25.3
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 44.0.0.post20200102
Cython : None
pytest : 5.3.2
hypothesis : None
sphinx : 2.3.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.2
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : 4.8.2
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.2
matplotlib : 3.1.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.4.1
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
```
</details> | test | unexpected behavior in cut with nullable dtype code sample python import pandas as pd series pd series dtype breaks breaks cut pd cut series breaks breaks cut nan nan dtype category categories interval problem description when using the pd nullable integer data type pd cut unexpectedly bins the first non np nan value after an np nan into the lowest interval in the above example the number is binned into expected output nan nan dtype category categories interval note that using an intervalindex produces the expected output python import pandas as pd series pd series dtype breaks intervals breaks interval index pd intervalindex intervals interval cut pd cut series interval index interval cut output of pd show versions installed versions commit none python final python bits os linux os release generic machine processor byteorder little lc all none lang en us utf locale en us utf pandas numpy pytz dateutil pip setuptools cython none pytest hypothesis none sphinx blosc none feather none xlsxwriter none lxml etree none pymysql none none ipython pandas datareader none bottleneck none fastparquet none gcsfs none lxml etree matplotlib numexpr none odfpy none openpyxl none pandas gbq none pyarrow none pytables none none scipy sqlalchemy none tables none xarray none xlrd none xlwt none xlsxwriter none | 1 |
157,034 | 19,912,492,378 | IssuesEvent | 2022-01-25 18:38:11 | Recidiviz/github-issue-due-dates-action | https://api.github.com/repos/Recidiviz/github-issue-due-dates-action | opened | Security Alert - Package: axios; Severity: HIGH | Subject: Security Severity: HIGH Subject: Vulnerability |
---
due: 2022-02-24
---
Affected package: axios
Ecosystem: NPM
Affected version range: < 0.21.1
Summary: Server-Side Request Forgery in Axios
Description: Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.
identifiers: [{'type': 'GHSA', 'value': 'GHSA-4w2v-q235-vp99'}, {'type': 'CVE', 'value': 'CVE-2020-28168'}]
Fixed Version: 0.21.1
Created Date = November 04, 2021
---
Affected package: axios
Ecosystem: NPM
Affected version range: <= 0.21.1
Summary: Incorrect Comparison in axios
Description: axios is vulnerable to Inefficient Regular Expression Complexity
identifiers: [{'type': 'GHSA', 'value': 'GHSA-cph5-m8f7-6c5x'}, {'type': 'CVE', 'value': 'CVE-2021-3749'}]
Fixed Version: 0.21.2
Created Date = November 04, 2021
---
Affected package: axios
Ecosystem: NPM
Affected version range: < 0.21.1
Summary: Server-Side Request Forgery in Axios
Description: Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.
identifiers: [{'type': 'GHSA', 'value': 'GHSA-4w2v-q235-vp99'}, {'type': 'CVE', 'value': 'CVE-2020-28168'}]
Fixed Version: 0.21.1
Created Date = November 19, 2021
---
Affected package: axios
Ecosystem: NPM
Affected version range: <= 0.21.1
Summary: Incorrect Comparison in axios
Description: axios is vulnerable to Inefficient Regular Expression Complexity
identifiers: [{'type': 'GHSA', 'value': 'GHSA-cph5-m8f7-6c5x'}, {'type': 'CVE', 'value': 'CVE-2021-3749'}]
Fixed Version: 0.21.2
Created Date = November 19, 2021
---
| True | Security Alert - Package: axios; Severity: HIGH -
---
due: 2022-02-24
---
Affected package: axios
Ecosystem: NPM
Affected version range: < 0.21.1
Summary: Server-Side Request Forgery in Axios
Description: Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.
identifiers: [{'type': 'GHSA', 'value': 'GHSA-4w2v-q235-vp99'}, {'type': 'CVE', 'value': 'CVE-2020-28168'}]
Fixed Version: 0.21.1
Created Date = November 04, 2021
---
Affected package: axios
Ecosystem: NPM
Affected version range: <= 0.21.1
Summary: Incorrect Comparison in axios
Description: axios is vulnerable to Inefficient Regular Expression Complexity
identifiers: [{'type': 'GHSA', 'value': 'GHSA-cph5-m8f7-6c5x'}, {'type': 'CVE', 'value': 'CVE-2021-3749'}]
Fixed Version: 0.21.2
Created Date = November 04, 2021
---
Affected package: axios
Ecosystem: NPM
Affected version range: < 0.21.1
Summary: Server-Side Request Forgery in Axios
Description: Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address.
identifiers: [{'type': 'GHSA', 'value': 'GHSA-4w2v-q235-vp99'}, {'type': 'CVE', 'value': 'CVE-2020-28168'}]
Fixed Version: 0.21.1
Created Date = November 19, 2021
---
Affected package: axios
Ecosystem: NPM
Affected version range: <= 0.21.1
Summary: Incorrect Comparison in axios
Description: axios is vulnerable to Inefficient Regular Expression Complexity
identifiers: [{'type': 'GHSA', 'value': 'GHSA-cph5-m8f7-6c5x'}, {'type': 'CVE', 'value': 'CVE-2021-3749'}]
Fixed Version: 0.21.2
Created Date = November 19, 2021
---
| non_test | security alert package axios severity high due affected package axios ecosystem npm affected version range summary server side request forgery in axios description axios npm package contains a server side request forgery ssrf vulnerability where an attacker is able to bypass a proxy by providing a url that responds with a redirect to a restricted host or ip address identifiers fixed version created date november affected package axios ecosystem npm affected version range summary incorrect comparison in axios description axios is vulnerable to inefficient regular expression complexity identifiers fixed version created date november affected package axios ecosystem npm affected version range summary server side request forgery in axios description axios npm package contains a server side request forgery ssrf vulnerability where an attacker is able to bypass a proxy by providing a url that responds with a redirect to a restricted host or ip address identifiers fixed version created date november affected package axios ecosystem npm affected version range summary incorrect comparison in axios description axios is vulnerable to inefficient regular expression complexity identifiers fixed version created date november | 0 |
126,398 | 10,420,642,433 | IssuesEvent | 2019-09-16 01:52:17 | mozilla/iris_firefox | https://api.github.com/repos/mozilla/iris_firefox | closed | Fix test remove_search_engine | regression test case | Update according to testing functionality.
Test description doesn't correspond to declared test's case ID. Investigate this later. | 1.0 | Fix test remove_search_engine - Update according to testing functionality.
Test description doesn't correspond to declared test's case ID. Investigate this later. | test | fix test remove search engine update according to testing functionality test description doesn t correspond to declared test s case id investigate this later | 1 |
419,403 | 12,223,068,074 | IssuesEvent | 2020-05-02 16:00:33 | Broken-Gem-Studio/Broken-Engine | https://api.github.com/repos/Broken-Gem-Studio/Broken-Engine | closed | Audio doesn't changes between Scenes | Bug High Priority | ## Bug Description
The audio isn't changing between Scenes changes
## Type of Bug
Select the type of bug with and "x" ([x])
* [ ] Visual
* [ ] Physics
* [x] Audio
* [ ] Particles
* [ ] Resource Management & Save/Load
* [ ] Materials
* [ ] Components
* [ ] Game Objects
* [ ] UI/UX
* [ ] Scripting
* [ ] Other
## Severity
Select the severity of bug affection and mark with "x" ([x])
- [ ] Crash
- [ ] Game stopper/slower
- [x] Cosmetic
## Reproduction
Steps to reproduce the behavior:
1. Load the game build scenes in the engine
2. Run and go through scenes
3. Hear the audio :D
4.
## Frequency
Select the frequency with which the bug appears and mark it "x" ([x])
* [x] Always
* [ ] Very Often
* [ ] Usually
* [ ] Few Times
* [ ] Few Times under specific conditions
## Conduct
### Expected result:
Audio should work between scenes
### Actual result:
Audio has a weird behaviour
## Screenshots and Illustrations:
## Build
- **Please specify the build:** ``Insert the build here``
v0.4.3.4
## Observations and Additional Information
| 1.0 | Audio doesn't changes between Scenes - ## Bug Description
The audio isn't changing between Scenes changes
## Type of Bug
Select the type of bug with and "x" ([x])
* [ ] Visual
* [ ] Physics
* [x] Audio
* [ ] Particles
* [ ] Resource Management & Save/Load
* [ ] Materials
* [ ] Components
* [ ] Game Objects
* [ ] UI/UX
* [ ] Scripting
* [ ] Other
## Severity
Select the severity of bug affection and mark with "x" ([x])
- [ ] Crash
- [ ] Game stopper/slower
- [x] Cosmetic
## Reproduction
Steps to reproduce the behavior:
1. Load the game build scenes in the engine
2. Run and go through scenes
3. Hear the audio :D
4.
## Frequency
Select the frequency with which the bug appears and mark it "x" ([x])
* [x] Always
* [ ] Very Often
* [ ] Usually
* [ ] Few Times
* [ ] Few Times under specific conditions
## Conduct
### Expected result:
Audio should work between scenes
### Actual result:
Audio has a weird behaviour
## Screenshots and Illustrations:
## Build
- **Please specify the build:** ``Insert the build here``
v0.4.3.4
## Observations and Additional Information
| non_test | audio doesn t changes between scenes bug description the audio isn t changing between scenes changes type of bug select the type of bug with and x visual physics audio particles resource management save load materials components game objects ui ux scripting other severity select the severity of bug affection and mark with x crash game stopper slower cosmetic reproduction steps to reproduce the behavior load the game build scenes in the engine run and go through scenes hear the audio d frequency select the frequency with which the bug appears and mark it x always very often usually few times few times under specific conditions conduct expected result audio should work between scenes actual result audio has a weird behaviour screenshots and illustrations build please specify the build insert the build here observations and additional information | 0 |
2,240 | 3,354,730,692 | IssuesEvent | 2015-11-18 13:43:10 | cogneco/ooc-kean | https://api.github.com/repos/cogneco/ooc-kean | closed | FloatMatrix: get elements pointer once | performance | As a result of #488, in `FloatMatrix`, each usage of `elements` results in a function call. This results in lots of function calls when elements are accessed in loops (which happens in a lot of functions in `FloatMatrix`. To fix this, create a variable for `this elements` first in the function, and then use that instead of `this elements`. | True | FloatMatrix: get elements pointer once - As a result of #488, in `FloatMatrix`, each usage of `elements` results in a function call. This results in lots of function calls when elements are accessed in loops (which happens in a lot of functions in `FloatMatrix`. To fix this, create a variable for `this elements` first in the function, and then use that instead of `this elements`. | non_test | floatmatrix get elements pointer once as a result of in floatmatrix each usage of elements results in a function call this results in lots of function calls when elements are accessed in loops which happens in a lot of functions in floatmatrix to fix this create a variable for this elements first in the function and then use that instead of this elements | 0 |
455,919 | 13,134,133,292 | IssuesEvent | 2020-08-06 22:32:21 | UC-Davis-molecular-computing/scadnano | https://api.github.com/repos/UC-Davis-molecular-computing/scadnano | closed | fix bug where crossover backbone information is shown for incorrect helix when hiding helices | bug closed in dev high priority | This is with the mouse over the top left crossover.

This is due to the invisible rectangles used to sense the mouse position being drawn at incorrect coordinates, e.g.,

**Update:** The problem is that hidden helices should not have their rectangles drawn, but they are.
| 1.0 | fix bug where crossover backbone information is shown for incorrect helix when hiding helices - This is with the mouse over the top left crossover.

This is due to the invisible rectangles used to sense the mouse position being drawn at incorrect coordinates, e.g.,

**Update:** The problem is that hidden helices should not have their rectangles drawn, but they are.
| non_test | fix bug where crossover backbone information is shown for incorrect helix when hiding helices this is with the mouse over the top left crossover this is due to the invisible rectangles used to sense the mouse position being drawn at incorrect coordinates e g update the problem is that hidden helices should not have their rectangles drawn but they are | 0 |
151,733 | 12,057,043,907 | IssuesEvent | 2020-04-15 15:16:57 | OmniSharp/omnisharp-vscode | https://api.github.com/repos/OmniSharp/omnisharp-vscode | closed | .Net Test Log output shows messy code for Chinese characters | Needs Investigation OmniSharp Test Triaged | <!-- To prefill this information:
1. Open Visual Studio Code
2. Bring up the command palette (press <kbd>F1</kbd>)
3. Type `CSharp: Report an issue`
If the `CSharp: Report an issue` command doesn't appear, make sure that you have C# extension version 1.17.0 or newer installed.
-->
## Environment data
`dotnet --info` output:
```
.NET Core SDK(反映任何 global.json):
Version: 3.0.100
Commit: 04339c3a26
运行时环境:
OS Name: Windows
OS Version: 10.0.18362
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\3.0.100\
Host (useful for support):
Version: 3.0.0
Commit: 7d57652f33
.NET Core SDKs installed:
3.0.100 [C:\Program Files\dotnet\sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
VS Code version: 1.40.2
C# Extension version: 1.21.8
## Steps to reproduce
1. Write a test and cilck run test/debug test
2. Look at .Net Test Log output
## Expected behavior
shows Chinese characters normally like OmniSharp Log output
## Actual behavior

I have tried many workarounds but failed. | 1.0 | .Net Test Log output shows messy code for Chinese characters - <!-- To prefill this information:
1. Open Visual Studio Code
2. Bring up the command palette (press <kbd>F1</kbd>)
3. Type `CSharp: Report an issue`
If the `CSharp: Report an issue` command doesn't appear, make sure that you have C# extension version 1.17.0 or newer installed.
-->
## Environment data
`dotnet --info` output:
```
.NET Core SDK(反映任何 global.json):
Version: 3.0.100
Commit: 04339c3a26
运行时环境:
OS Name: Windows
OS Version: 10.0.18362
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\3.0.100\
Host (useful for support):
Version: 3.0.0
Commit: 7d57652f33
.NET Core SDKs installed:
3.0.100 [C:\Program Files\dotnet\sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 2.1.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
VS Code version: 1.40.2
C# Extension version: 1.21.8
## Steps to reproduce
1. Write a test and cilck run test/debug test
2. Look at .Net Test Log output
## Expected behavior
shows Chinese characters normally like OmniSharp Log output
## Actual behavior

I have tried many workarounds but failed. | test | net test log output shows messy code for chinese characters to prefill this information open visual studio code bring up the command palette press type csharp report an issue if the csharp report an issue command doesn t appear make sure that you have c extension version or newer installed environment data dotnet info output net core sdk(反映任何 global json) version commit 运行时环境 os name windows os version os platform windows rid base path c program files dotnet sdk host useful for support version commit net core sdks installed net core runtimes installed microsoft aspnetcore all microsoft aspnetcore app microsoft aspnetcore app microsoft netcore app microsoft netcore app microsoft windowsdesktop app vs code version c extension version steps to reproduce write a test and cilck run test debug test look at net test log output expected behavior shows chinese characters normally like omnisharp log output actual behavior i have tried many workarounds but failed | 1 |
419,574 | 28,148,139,796 | IssuesEvent | 2023-04-02 18:20:40 | btschwertfeger/Python-Kraken-SDK | https://api.github.com/repos/btschwertfeger/Python-Kraken-SDK | opened | Create a workflow that builds the documentation | Documentation Should | **Is your feature request related to a problem? Please describe.**
When the documentation is implemented (#58) there should be a workflow that builds the documentation and uploads it to Github pages (only on the master branch, when a new release is made?).
| 1.0 | Create a workflow that builds the documentation - **Is your feature request related to a problem? Please describe.**
When the documentation is implemented (#58) there should be a workflow that builds the documentation and uploads it to Github pages (only on the master branch, when a new release is made?).
| non_test | create a workflow that builds the documentation is your feature request related to a problem please describe when the documentation is implemented there should be a workflow that builds the documentation and uploads it to github pages only on the master branch when a new release is made | 0 |
98,095 | 8,674,302,360 | IssuesEvent | 2018-11-30 07:00:32 | humera987/FXLabs-Test-Automation | https://api.github.com/repos/humera987/FXLabs-Test-Automation | reopened | FXLabs Testing 30 : ApiV1RunsJobidTestsuiteTestSuiteResponsesNameGetQueryParamPagesizeNegativeNumber | FXLabs Testing 30 | Project : FXLabs Testing 30
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZDk0Y2M1NWYtMzc4NS00MmM1LTgxZWQtOTY4ZTU3ZWE5ZDlm; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 06:56:03 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/runs/rHRfczGs/testSuite/test-suite-responses/rHRfczGs?pageSize=-1
Request :
Response :
{
"timestamp" : "2018-11-30T06:56:03.718+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/runs/rHRfczGs/testSuite/test-suite-responses/rHRfczGs"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | 1.0 | FXLabs Testing 30 : ApiV1RunsJobidTestsuiteTestSuiteResponsesNameGetQueryParamPagesizeNegativeNumber - Project : FXLabs Testing 30
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZDk0Y2M1NWYtMzc4NS00MmM1LTgxZWQtOTY4ZTU3ZWE5ZDlm; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 06:56:03 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/runs/rHRfczGs/testSuite/test-suite-responses/rHRfczGs?pageSize=-1
Request :
Response :
{
"timestamp" : "2018-11-30T06:56:03.718+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/runs/rHRfczGs/testSuite/test-suite-responses/rHRfczGs"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | test | fxlabs testing project fxlabs testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api runs rhrfczgs testsuite test suite responses rhrfczgs logs assertion resolved to result assertion resolved to result fx bot | 1 |
296,969 | 22,333,792,208 | IssuesEvent | 2022-06-14 16:35:42 | DavidT3/XGA | https://api.github.com/repos/DavidT3/XGA | opened | Add a top-level page of XGA's shortcomings | documentation enhancement | Just to make people aware. Currently I can think of things like fits not being aware of their starting conditions, and spectra etc. not automatically knowing if there has been a region change. | 1.0 | Add a top-level page of XGA's shortcomings - Just to make people aware. Currently I can think of things like fits not being aware of their starting conditions, and spectra etc. not automatically knowing if there has been a region change. | non_test | add a top level page of xga s shortcomings just to make people aware currently i can think of things like fits not being aware of their starting conditions and spectra etc not automatically knowing if there has been a region change | 0 |
244,210 | 20,616,770,256 | IssuesEvent | 2022-03-07 13:58:35 | mennaelkashef/eShop | https://api.github.com/repos/mennaelkashef/eShop | opened | No description entered by the user. | Hello! RULE-GOT-APPLIED DOES-NOT-CONTAIN-STRING Rule-works-on-convert-to-bug test instabug | # :clipboard: Bug Details
>No description entered by the user.
key | value
--|--
Reported At | 2022-03-07 13:57:32 UTC
Email | imohamady@instabug.com
Categories | Report a bug
Tags | test, Hello!, RULE-GOT-APPLIED, DOES-NOT-CONTAIN-STRING, Rule-works-on-convert-to-bug, instabug
App Version | 1.1 (1)
Session Duration | 24
Device | Google AOSP on IA Emulator, OS Level 28
Display | 768x1080 (hdpi)
Location | Cairo, Egypt (en)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?utm_source=github&utm_medium=integrations) :point_left:
___
# :iphone: View Hierarchy
This bug was reported from **com.example.app.crash.CrashFragment**
Find its interactive view hierarchy with all its subviews here: :point_right: **[Check View Hierarchy](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?show-hierarchy-view=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :chart_with_downwards_trend: Session Profiler
Here is what the app was doing right before the bug was reported:
Key | Value
--|--
Used Memory | 52.4% - 0.76/1.46 GB
Used Storage | 20.4% - 0.15/0.76 GB
Connectivity | WiFi
Battery | 100% - unplugged
Orientation | portrait
Find all the changes that happened in the parameters mentioned above during the last 60 seconds before the bug was reported here: :point_right: **[View Full Session Profiler](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?show-session-profiler=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :bust_in_silhouette: User Info
### User Attributes
```
key_name 1453909925: key value bla bla bla la
key_name -1971117103: key value bla bla bla la
key_name -109708096: key value bla bla bla la
```
___
# :mag_right: Logs
### User Steps
Here are the last 10 steps done by the user right before the bug was reported:
```
13:57:23 Long press in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:24 Scroll in "android.widget.ScrollView" in "com.example.app.main.MainActivity"
13:57:24 Long press in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:25 Scroll in "android.widget.ScrollView" in "com.example.app.main.MainActivity"
13:57:25 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:27 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:29 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:30 com.example.app.main.MainActivity was paused.
13:57:30 In activity com.example.app.main.MainActivity: fragment com.example.app.crash.CrashFragment was paused.
13:57:31 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
```
Find all the user steps done by the user throughout the session here: :point_right: **[View All User Steps](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?show-logs=user_steps&utm_source=github&utm_medium=integrations)** :point_left:
### Console Log
Here are the last 10 console logs logged right before the bug was reported:
```
13:57:34 D/IB-ActivityViewInspectorTask( 6849): bug ! null,converting>json started
13:57:34 D/IB-ActivityViewInspectorTask( 6849): bug ! null,converting>json ended
13:57:34 D/IB-ActivityViewInspectorTask( 6849): bug ! null,set inspection state
13:57:34 D/IB-BaseReportingPresenter( 6849): receive a view hierarchy inspection action, action value: COMPLETED
13:57:34 D/IB-BaseReportingPresenter( 6849): receive a view hierarchy inspection action, action value: COMPLETED
13:57:34 D/IBGTest ( 6849): MemoryUsage: used memory = 7725088 total memory = 13246472 Free memory percentage = 41.68192104282559
13:57:34 D/EGL_emulation( 6849): eglMakeCurrent: 0xea862d60: ver 3 1 (tinfo 0xcad71df0)
13:57:34 D/IB-BaseReportingPresenter( 6849): checkUserEmailValid :non-empty-email
13:57:34 D/IB-ActionsOrchestrator( 6849): runAction
13:57:34 D/IB-AttachmentsUtility( 6849): encryptAttachments
```
Find all the logged console logs throughout the session here: :point_right: **[View All Console Log](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?show-logs=console_log&utm_source=github&utm_medium=integrations)** :point_left:
___
# :camera: Images
[](https://d38gnqwzxziyyy.cloudfront.net/attachments/bugs/17969717/abd9274c7a8aa609a36761794a63d381_original/25265799/bug_1646661449929_.jpg?Expires=4802335113&Signature=iPkRGUwhwHrNJRpZQg4b1kMqXZuWxJ17RQT1UAwAJ-RIp2N7okj2BJ~Meo7yELFewr4sYPaYUHqSHM2wPTJQgEmkAyNoL6RRdl3qoqTcXPC1toyrASr28vVf9PShDNmXjXCl85XVuN7P-SzZwCgQXgeD72uoJoMv6lDLfUp1pxJrMjz532ziM~1meZWu7SL9Y98nJ3t57wIG9na4z9pTvc-d1fKYkYlLGYagE~iFolZzFrmKF7iaN9uJOVaRAPSgyMm39m6PSkI9AD36M0IOAjmKb43b78NdH2M1OY429ejYrbHPv~f51jscM-APX~ULpEEHfECSS7Ade1IdnXSBTg__&Key-Pair-Id=APKAIXAG65U6UUX7JAQQ)
___
# :warning: Looking for More Details?
1. **Network Log**: we are unable to capture your network requests automatically. If you are using HttpUrlConnection or Okhttp requests, [**check the details mentioned here**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations#section-network-logs).
2. **User Events**: start capturing custom User Events to send them along with each report. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations).
3. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations). | 1.0 | No description entered by the user. - # :clipboard: Bug Details
>No description entered by the user.
key | value
--|--
Reported At | 2022-03-07 13:57:32 UTC
Email | imohamady@instabug.com
Categories | Report a bug
Tags | test, Hello!, RULE-GOT-APPLIED, DOES-NOT-CONTAIN-STRING, Rule-works-on-convert-to-bug, instabug
App Version | 1.1 (1)
Session Duration | 24
Device | Google AOSP on IA Emulator, OS Level 28
Display | 768x1080 (hdpi)
Location | Cairo, Egypt (en)
## :point_right: [View Full Bug Report on Instabug](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?utm_source=github&utm_medium=integrations) :point_left:
___
# :iphone: View Hierarchy
This bug was reported from **com.example.app.crash.CrashFragment**
Find its interactive view hierarchy with all its subviews here: :point_right: **[Check View Hierarchy](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?show-hierarchy-view=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :chart_with_downwards_trend: Session Profiler
Here is what the app was doing right before the bug was reported:
Key | Value
--|--
Used Memory | 52.4% - 0.76/1.46 GB
Used Storage | 20.4% - 0.15/0.76 GB
Connectivity | WiFi
Battery | 100% - unplugged
Orientation | portrait
Find all the changes that happened in the parameters mentioned above during the last 60 seconds before the bug was reported here: :point_right: **[View Full Session Profiler](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?show-session-profiler=true&utm_source=github&utm_medium=integrations)** :point_left:
___
# :bust_in_silhouette: User Info
### User Attributes
```
key_name 1453909925: key value bla bla bla la
key_name -1971117103: key value bla bla bla la
key_name -109708096: key value bla bla bla la
```
___
# :mag_right: Logs
### User Steps
Here are the last 10 steps done by the user right before the bug was reported:
```
13:57:23 Long press in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:24 Scroll in "android.widget.ScrollView" in "com.example.app.main.MainActivity"
13:57:24 Long press in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:25 Scroll in "android.widget.ScrollView" in "com.example.app.main.MainActivity"
13:57:25 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:27 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:29 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
13:57:30 com.example.app.main.MainActivity was paused.
13:57:30 In activity com.example.app.main.MainActivity: fragment com.example.app.crash.CrashFragment was paused.
13:57:31 Tap in "androidx.constraintlayout.widget.ConstraintLayout" in "com.example.app.main.MainActivity"
```
Find all the user steps done by the user throughout the session here: :point_right: **[View All User Steps](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?show-logs=user_steps&utm_source=github&utm_medium=integrations)** :point_left:
### Console Log
Here are the last 10 console logs logged right before the bug was reported:
```
13:57:34 D/IB-ActivityViewInspectorTask( 6849): bug ! null,converting>json started
13:57:34 D/IB-ActivityViewInspectorTask( 6849): bug ! null,converting>json ended
13:57:34 D/IB-ActivityViewInspectorTask( 6849): bug ! null,set inspection state
13:57:34 D/IB-BaseReportingPresenter( 6849): receive a view hierarchy inspection action, action value: COMPLETED
13:57:34 D/IB-BaseReportingPresenter( 6849): receive a view hierarchy inspection action, action value: COMPLETED
13:57:34 D/IBGTest ( 6849): MemoryUsage: used memory = 7725088 total memory = 13246472 Free memory percentage = 41.68192104282559
13:57:34 D/EGL_emulation( 6849): eglMakeCurrent: 0xea862d60: ver 3 1 (tinfo 0xcad71df0)
13:57:34 D/IB-BaseReportingPresenter( 6849): checkUserEmailValid :non-empty-email
13:57:34 D/IB-ActionsOrchestrator( 6849): runAction
13:57:34 D/IB-AttachmentsUtility( 6849): encryptAttachments
```
Find all the logged console logs throughout the session here: :point_right: **[View All Console Log](https://dashboard.instabug.com/applications/android-sample/beta/bugs/8611?show-logs=console_log&utm_source=github&utm_medium=integrations)** :point_left:
___
# :camera: Images
[](https://d38gnqwzxziyyy.cloudfront.net/attachments/bugs/17969717/abd9274c7a8aa609a36761794a63d381_original/25265799/bug_1646661449929_.jpg?Expires=4802335113&Signature=iPkRGUwhwHrNJRpZQg4b1kMqXZuWxJ17RQT1UAwAJ-RIp2N7okj2BJ~Meo7yELFewr4sYPaYUHqSHM2wPTJQgEmkAyNoL6RRdl3qoqTcXPC1toyrASr28vVf9PShDNmXjXCl85XVuN7P-SzZwCgQXgeD72uoJoMv6lDLfUp1pxJrMjz532ziM~1meZWu7SL9Y98nJ3t57wIG9na4z9pTvc-d1fKYkYlLGYagE~iFolZzFrmKF7iaN9uJOVaRAPSgyMm39m6PSkI9AD36M0IOAjmKb43b78NdH2M1OY429ejYrbHPv~f51jscM-APX~ULpEEHfECSS7Ade1IdnXSBTg__&Key-Pair-Id=APKAIXAG65U6UUX7JAQQ)
___
# :warning: Looking for More Details?
1. **Network Log**: we are unable to capture your network requests automatically. If you are using HttpUrlConnection or Okhttp requests, [**check the details mentioned here**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations#section-network-logs).
2. **User Events**: start capturing custom User Events to send them along with each report. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations).
3. **Instabug Log**: start adding Instabug logs to see them right inside each report you receive. [**Find all the details in the docs**](https://docs.instabug.com/docs/android-logging?utm_source=github&utm_medium=integrations). | test | no description entered by the user clipboard bug details no description entered by the user key value reported at utc email imohamady instabug com categories report a bug tags test hello rule got applied does not contain string rule works on convert to bug instabug app version session duration device google aosp on ia emulator os level display hdpi location cairo egypt en point right point left iphone view hierarchy this bug was reported from com example app crash crashfragment find its interactive view hierarchy with all its subviews here point right point left chart with downwards trend session profiler here is what the app was doing right before the bug was reported key value used memory gb used storage gb connectivity wifi battery unplugged orientation portrait find all the changes that happened in the parameters mentioned above during the last seconds before the bug was reported here point right point left bust in silhouette user info user attributes key name key value bla bla bla la key name key value bla bla bla la key name key value bla bla bla la mag right logs user steps here are the last steps done by the user right before the bug was reported long press in androidx constraintlayout widget constraintlayout in com example app main mainactivity scroll in android widget scrollview in com example app main mainactivity long press in androidx constraintlayout widget constraintlayout in com example app main mainactivity scroll in android widget scrollview in com example app main mainactivity tap in androidx constraintlayout widget constraintlayout in com example app main mainactivity tap in androidx constraintlayout widget constraintlayout in com example app main mainactivity tap in androidx constraintlayout widget 
constraintlayout in com example app main mainactivity com example app main mainactivity was paused in activity com example app main mainactivity fragment com example app crash crashfragment was paused tap in androidx constraintlayout widget constraintlayout in com example app main mainactivity find all the user steps done by the user throughout the session here point right point left console log here are the last console logs logged right before the bug was reported d ib activityviewinspectortask bug null converting json started d ib activityviewinspectortask bug null converting json ended d ib activityviewinspectortask bug null set inspection state d ib basereportingpresenter receive a view hierarchy inspection action action value completed d ib basereportingpresenter receive a view hierarchy inspection action action value completed d ibgtest memoryusage used memory total memory free memory percentage d egl emulation eglmakecurrent ver tinfo d ib basereportingpresenter checkuseremailvalid non empty email d ib actionsorchestrator runaction d ib attachmentsutility encryptattachments find all the logged console logs throughout the session here point right point left camera images warning looking for more details network log we are unable to capture your network requests automatically if you are using httpurlconnection or okhttp requests user events start capturing custom user events to send them along with each report instabug log start adding instabug logs to see them right inside each report you receive | 1 |
215,288 | 16,662,881,515 | IssuesEvent | 2021-06-06 16:50:35 | ultimate-research/ssbh_lib | https://api.github.com/repos/ultimate-research/ssbh_lib | closed | Test error handling for mesh creation | ssbh_data testing | - [x] create empty mesh with invalid version (should fail)
- [x] create version 1.8 mesh
- [x] create version 1.10 mesh | 1.0 | Test error handling for mesh creation - - [x] create empty mesh with invalid version (should fail)
- [x] create version 1.8 mesh
- [x] create version 1.10 mesh | test | test error handling for mesh creation create empty mesh with invalid version should fail create version mesh create version mesh | 1 |
367,123 | 10,840,755,528 | IssuesEvent | 2019-11-12 09:03:24 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | m.genk.vn - see bug description | browser-fenix engine-gecko priority-normal | <!-- @browser: Firefox Mobile 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:70.0) Gecko/70.0 Firefox/70.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: http://m.genk.vn/
**Browser / Version**: Firefox Mobile 70.0
**Operating System**: Android
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: Why is the ad still appearing? Does Firefox disable ad blocking?
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | m.genk.vn - see bug description - <!-- @browser: Firefox Mobile 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:70.0) Gecko/70.0 Firefox/70.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: http://m.genk.vn/
**Browser / Version**: Firefox Mobile 70.0
**Operating System**: Android
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: Why is the ad still appearing? Does Firefox disable ad blocking?
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_test | m genk vn see bug description url browser version firefox mobile operating system android tested another browser unknown problem type something else description why is the ad still appearing does firefox disable ad blocking steps to reproduce browser configuration none from with ❤️ | 0 |
7,459 | 2,904,612,152 | IssuesEvent | 2015-06-18 19:04:47 | metafizzy/flickity | https://api.github.com/repos/metafizzy/flickity | closed | Images Loading Very Slowly | test case required | Hi, I have a main gallery and on that gallery every images getting another gallery but its take a lot of time. Why is that?
```
$(document).ready(function() {
$(".loader-inner").hide();
});
$(document).ready(function() {
$("#nbg1").on("click", function(){
$("#gnc2").css("display","none");
$(".loader-inner").show();
$('#mmg1').load('galeri.html #tr1 img', function() {
$('.main-gallery').flickity( {
cellAlign: 'left',
contain: true,
prevNextButtons: false,
freeScroll: true,
pageDots: false,
imagesLoaded: true
});
$('#mmg1').waitForImages({
each: function(){
$(".loader-inner").fadeOut("slow")
}
});
});
});
});
``` | 1.0 | Images Loading Very Slowly - Hi, I have a main gallery and on that gallery every images getting another gallery but its take a lot of time. Why is that?
```
$(document).ready(function() {
$(".loader-inner").hide();
});
$(document).ready(function() {
$("#nbg1").on("click", function(){
$("#gnc2").css("display","none");
$(".loader-inner").show();
$('#mmg1').load('galeri.html #tr1 img', function() {
$('.main-gallery').flickity( {
cellAlign: 'left',
contain: true,
prevNextButtons: false,
freeScroll: true,
pageDots: false,
imagesLoaded: true
});
$('#mmg1').waitForImages({
each: function(){
$(".loader-inner").fadeOut("slow")
}
});
});
});
});
``` | test | images loading very slowly hi i have a main gallery and on that gallery every images getting another gallery but its take a lot of time why is that document ready function loader inner hide document ready function on click function css display none loader inner show load galeri html img function main gallery flickity cellalign left contain true prevnextbuttons false freescroll true pagedots false imagesloaded true waitforimages each function loader inner fadeout slow | 1 |
92,888 | 8,380,952,951 | IssuesEvent | 2018-10-07 19:50:09 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | Password Protected Page NOT allowing enter Password Into Box | Needs Testing | 9/30/2018: Fresh WP install, ONLY Gutenberg 4.9.8 Plugin Active, Host Cache Cleared, SiteGround Host with Cache Plugin Deactivated
Password protection worked for a few days on a public computer and an iPhone. It has stopped allowing the public to enter the provided password.
1. https://reeinc.org
2. Select SalesMovie or BrokerMovie on a computer. Password protection blocks the pages and the box to enter the password displays, but the box will not accept typing the password.
3. On the iPhone Chrome app that used to work on Gutenberg 4.9.8 - it now does not even present the covered screen with the PASSWORD box, but shows the protected page to the public.
<img width="1271" alt="screen shot 2018-09-30 at 11 52 11 am" src="https://user-images.githubusercontent.com/40929949/46261363-5afe5500-c4a7-11e8-84f0-43ce0fddd9a6.png">
| 1.0 | Password Protected Page NOT allowing enter Password Into Box - 9/30/2018: Fresh WP install, ONLY Gutenberg 4.9.8 Plugin Active, Host Cache Cleared, SiteGround Host with Cache Plugin Deactivated
Password protection worked for a few days on a public computer and an iPhone. It has stopped allowing the public to enter the provided password.
1. https://reeinc.org
2. Select SalesMovie or BrokerMovie on a computer. Password protection blocks the pages and the box to enter the password displays, but the box will not accept typing the password.
3. On the iPhone Chrome app that used to work on Gutenberg 4.9.8 - it now does not even present the covered screen with the PASSWORD box, but shows the protected page to the public.
<img width="1271" alt="screen shot 2018-09-30 at 11 52 11 am" src="https://user-images.githubusercontent.com/40929949/46261363-5afe5500-c4a7-11e8-84f0-43ce0fddd9a6.png">
| test | password protected page not allowing enter password into box fresh wp install only gutenberg plugin active host cache cleared siteground host with cache plugin deactivated password protect worked for a few days on public computer and iphone has stopped allowing public to enter provided password select salesmovie or brokermovie on computer pasword protected blocked pages and box to enter password displays but box will not accept typing password on iphone chrome app that used to work on gutenberg now not even presenting covered screen with the password box to enter but shows protected page to public img width alt screen shot at am src | 1 |
128,262 | 18,040,540,287 | IssuesEvent | 2021-09-18 01:34:30 | RG4421/cbp-theme | https://api.github.com/repos/RG4421/cbp-theme | opened | CVE-2021-37712 (High) detected in multiple libraries | security vulnerability | ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-4.4.8.tgz</b>, <b>tar-6.0.1.tgz</b>, <b>tar-4.4.13.tgz</b></p></summary>
<p>
<details><summary><b>tar-4.4.8.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.8.tgz">https://registry.npmjs.org/tar/-/tar-4.4.8.tgz</a></p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- chokidar-2.1.5.tgz
- fsevents-1.2.8.tgz
- node-pre-gyp-0.12.0.tgz
- :x: **tar-4.4.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-6.0.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.0.1.tgz">https://registry.npmjs.org/tar/-/tar-6.0.1.tgz</a></p>
<p>Path to dependency file: cbp-theme/ds-ux-guidelines/package.json</p>
<p>Path to vulnerable library: cbp-theme/ds-ux-guidelines/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-plugin-manifest-2.3.3.tgz (Root Library)
- sharp-0.25.2.tgz
- :x: **tar-6.0.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-4.4.13.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>
Dependency Hierarchy:
- postcss-cli-6.1.3.tgz (Root Library)
- chokidar-2.1.8.tgz
- fsevents-1.2.12.tgz
- node-pre-gyp-0.14.0.tgz
- :x: **tar-4.4.13.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/RG4421/cbp-theme/commit/db5c4fa25ec88267c04038c72a69b190152a78c5">db5c4fa25ec88267c04038c72a69b190152a78c5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
| True | CVE-2021-37712 (High) detected in multiple libraries - ## CVE-2021-37712 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tar-4.4.8.tgz</b>, <b>tar-6.0.1.tgz</b>, <b>tar-4.4.13.tgz</b></p></summary>
<p>
<details><summary><b>tar-4.4.8.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.8.tgz">https://registry.npmjs.org/tar/-/tar-4.4.8.tgz</a></p>
<p>
Dependency Hierarchy:
- browser-sync-2.26.3.tgz (Root Library)
- chokidar-2.1.5.tgz
- fsevents-1.2.8.tgz
- node-pre-gyp-0.12.0.tgz
- :x: **tar-4.4.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-6.0.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-6.0.1.tgz">https://registry.npmjs.org/tar/-/tar-6.0.1.tgz</a></p>
<p>Path to dependency file: cbp-theme/ds-ux-guidelines/package.json</p>
<p>Path to vulnerable library: cbp-theme/ds-ux-guidelines/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- gatsby-plugin-manifest-2.3.3.tgz (Root Library)
- sharp-0.25.2.tgz
- :x: **tar-6.0.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>tar-4.4.13.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.13.tgz">https://registry.npmjs.org/tar/-/tar-4.4.13.tgz</a></p>
<p>
Dependency Hierarchy:
- postcss-cli-6.1.3.tgz (Root Library)
- chokidar-2.1.8.tgz
- fsevents-1.2.12.tgz
- node-pre-gyp-0.14.0.tgz
- :x: **tar-4.4.13.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/RG4421/cbp-theme/commit/db5c4fa25ec88267c04038c72a69b190152a78c5">db5c4fa25ec88267c04038c72a69b190152a78c5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.18, 5.0.10, and 6.1.9 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value. Additionally, on Windows systems, long path portions would resolve to the same file system entities as their 8.3 "short path" counterparts. A specially crafted tar archive could thus include a directory with one form of the path, followed by a symbolic link with a different string that resolves to the same file system entity, followed by a file using the first form. By first creating a directory, and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. These issues were addressed in releases 4.4.18, 5.0.10 and 6.1.9. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-qq89-hq3f-393p.
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37712>CVE-2021-37712</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p">https://github.com/npm/node-tar/security/advisories/GHSA-qq89-hq3f-393p</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution: tar - 4.4.18, 5.0.10, 6.1.9</p>
</p>
</details>
<p></p>
| non_test | cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries tar tgz tar tgz tar tgz tar tgz tar for node library home page a href dependency hierarchy browser sync tgz root library chokidar tgz fsevents tgz node pre gyp tgz x tar tgz vulnerable library tar tgz tar for node library home page a href path to dependency file cbp theme ds ux guidelines package json path to vulnerable library cbp theme ds ux guidelines node modules tar package json dependency hierarchy gatsby plugin manifest tgz root library sharp tgz x tar tgz vulnerable library tar tgz tar for node library home page a href dependency hierarchy postcss cli tgz root library chokidar tgz fsevents tgz node pre gyp tgz x tar tgz vulnerable library found in head commit a href vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with names containing unicode values that normalized to the same value additionally on windows systems long path portions would resolve to the same file system entities as their short path counterparts a specially crafted tar archive could thus include a directory with one form of the path followed by a symbolic link with a different string that resolves to the same file system entity followed by a file using the first form by first creating a directory and then replacing that directory with a symlink that had a different apparent name that resolved to the same entry in the filesystem it was thus 
possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar | 0 |
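The advisory above boils down to an extractor following a traversal path or symlink outside the extraction root. As a language-neutral illustration of the containment check involved (this is a Python `tarfile` sketch, not node-tar's actual patch, and the archive contents and function names are invented for the example):

```python
import io
import os
import tarfile

def is_within_directory(directory, target):
    # Both paths resolved; the extraction root must be their common prefix.
    abs_directory = os.path.abspath(directory)
    abs_target = os.path.abspath(target)
    return os.path.commonpath([abs_directory, abs_target]) == abs_directory

def check_members(tar, dest):
    # Reject members that would land outside `dest`, and symlinks whose
    # target escapes it -- the class of bug described in the advisory.
    for member in tar.getmembers():
        member_path = os.path.join(dest, member.name)
        if not is_within_directory(dest, member_path):
            raise ValueError(f"blocked path traversal: {member.name}")
        if member.issym():
            link_target = os.path.join(os.path.dirname(member_path), member.linkname)
            if not is_within_directory(dest, link_target):
                raise ValueError(f"blocked symlink escape: {member.name}")

# Build an in-memory archive with one benign file and one traversal attempt.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    data = b"hello"
    info = tarfile.TarInfo("safe.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
    evil = tarfile.TarInfo("../evil.txt")
    tar.addfile(evil)  # size defaults to 0, no payload needed

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    try:
        check_members(tar, "extract_root")
        blocked = False
    except ValueError:
        blocked = True
```

Note that the unicode-normalization and 8.3 short-path variants described above defeat naive checks like this one, which is why upgrading to the fixed releases is the real remedy.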
593,964 | 18,021,179,176 | IssuesEvent | 2021-09-16 19:39:23 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | opened | [GM] (MARLO) Share deliverable between projects | Priority - Medium Type - Enhancement Type -Task | In MARLO projects there is a deliverables section, and we need to develop the functionality to share deliverables between projects.
Currently, deliverables are only associated with one project.
- [x] Make migrations and back end architecture
- [x] Develop front end component

- [x] Show shared deliverables in deliverable list
- [x] Add Owner column to deliverable list

- [x] Show deliverables in innovations dropdown list

**Move to Closed when:** Tested in Marlo Dev | 1.0 | [GM] (MARLO) Share deliverable between projects - In MARLO projects there is a deliverables section, and we need to develop the functionality to share deliverables between projects.
Currently, deliverables are only associated with one project.
- [x] Make migrations and back end architecture
- [x] Develop front end component

- [x] Show shared deliverables in deliverable list
- [x] Add Owner column to deliverable list

- [x] Show deliverables in innovations dropdown list

**Move to Closed when:** Tested in Marlo Dev | non_test | marlo share deliverable between projects in marlo projects there is a deliverables section within which it is required to develop the functionality of sharing them between projects currently deliverables are only associated with one project make migrations and back end architecture develop front end component show shared deliverables in deliverable list add owner column to deliverable list show deliverables in innovations dropdown list move to closed when tested in marlo dev | 0 |
101,056 | 8,773,349,804 | IssuesEvent | 2018-12-18 16:38:45 | SME-Issues/issues | https://api.github.com/repos/SME-Issues/issues | closed | Test Summary - 18/12/2018 - 5003 | NLP Api PETEDEV pulse_tests | ### Compound
- Compound Query Tests Balance None (12): **100%** pass (11), 0 failed understood (#2486)
- Compound Query Tests Invoice None (24): **80%** pass (16), 4 failed understood (#2487)
- Compound Query Tests Payment None (7): **50%** pass (3), 3 failed understood (#2488)
| 1.0 | Test Summary - 18/12/2018 - 5003 - ### Compound
- Compound Query Tests Balance None (12): **100%** pass (11), 0 failed understood (#2486)
- Compound Query Tests Invoice None (24): **80%** pass (16), 4 failed understood (#2487)
- Compound Query Tests Payment None (7): **50%** pass (3), 3 failed understood (#2488)
| test | test summary compound compound query tests balance none pass failed understood compound query tests invoice none pass failed understood compound query tests payment none pass failed understood | 1 |
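The percentages in the summary above only add up if they are computed over pass + failed-understood rather than the count in parentheses (11/11 = 100%, 16/20 = 80%, 3/6 = 50%). A small sketch of that assumed formula:

```python
def pass_rate(passed, failed_understood):
    """Percent of understood runs that passed; assumes the denominator is
    pass + failed-understood, not the bracketed query count (an inference
    from the figures above, not a documented formula)."""
    total = passed + failed_understood
    return round(100 * passed / total) if total else 0

# (passed, failed understood) figures taken from the summary above.
summary = {
    "Balance": (11, 0),
    "Invoice": (16, 4),
    "Payment": (3, 3),
}
rates = {name: pass_rate(p, f) for name, (p, f) in summary.items()}
```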
168,543 | 13,094,563,552 | IssuesEvent | 2020-08-03 12:38:57 | ICIJ/datashare | https://api.github.com/repos/ICIJ/datashare | closed | Unauthenticated user will briefly see the error page | bug front need testing | **Describe the bug**
In server mode, unauthenticated users briefly see the "network error" page before being redirected to the login page.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to production/staging server
2. Sign out if you're already logged in
4. See error
**Expected behavior**
The login page should be seen directly after the splash-screen, with no interruption.
**Desktop (please complete the following information):**
- OS: observed on Linux and MacOS
- Browser: observed on Chrome and Firefox
- Version: 7.3.1 | 1.0 | Unauthenticated user will briefly see the error page - **Describe the bug**
In server mode, unauthenticated users briefly see the "network error" page before being redirected to the login page.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to production/staging server
2. Sign out if you're already logged in
4. See error
**Expected behavior**
The login page should be seen directly after the splash-screen, with no interruption.
**Desktop (please complete the following information):**
- OS: observed on Linux and MacOS
- Browser: observed on Chrome and Firefox
- Version: 7.3.1 | test | unauthenticated user will briefly see the error page describe the bug in server mode unauthenticated users are briefly seen the network error page before being redirected to the login page to reproduce steps to reproduce the behavior go to production staging server sign out if you re already logged in see error expected behavior the login page should be seen directly after the splash screen with no interruption desktop please complete the following information os observed on linux and macos browser observed on chrome and firefox version | 1 |
80,325 | 7,745,445,096 | IssuesEvent | 2018-05-29 18:22:19 | golang/go | https://api.github.com/repos/golang/go | opened | os/exec: TestExtraFiles is flaky on linux-386-sid builder | NeedsInvestigation Testing | I've seen several of these:
```
--- FAIL: TestExtraFiles (0.10s)
exec_test.go:607: Run: exit status 1; stdout "leaked parent file. fd = 66; want 65\n", stderr ""
FAIL
FAIL os/exec 0.687s
```
on the `linux-386-sid` builder over the past few weeks. Here's one:
https://build.golang.org/log/b95e0220b4130af6c1bcfed59ec934568717d0ef | 1.0 | os/exec: TestExtraFiles is flaky on linux-386-sid builder - I've seen several of these:
```
--- FAIL: TestExtraFiles (0.10s)
exec_test.go:607: Run: exit status 1; stdout "leaked parent file. fd = 66; want 65\n", stderr ""
FAIL
FAIL os/exec 0.687s
```
on the `linux-386-sid` builder over the past few weeks. Here's one:
https://build.golang.org/log/b95e0220b4130af6c1bcfed59ec934568717d0ef | test | os exec testextrafiles is flaky on linux sid bulder i ve seen several of these fail testextrafiles exec test go run exit status stdout leaked parent file fd want n stderr fail fail os exec on the linux sid builder in the past weeks here s one | 1 |
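The Go test above asserts that a child process does not inherit unexpected file descriptors from its parent. A rough Python analogue of the same invariant (a sketch assuming POSIX-style fd semantics and Python's default non-inheritable pipes; the child code string is invented for illustration):

```python
import os
import subprocess
import sys

# Open a pipe in the parent; since PEP 446 these fds are non-inheritable,
# so the child must NOT see it -- the invariant TestExtraFiles checks.
r, w = os.pipe()
child_code = (
    "import os\n"
    f"fd = {r}\n"
    "try:\n"
    "    os.fstat(fd)\n"
    "    print('leaked')\n"   # fd unexpectedly inherited by the child
    "except OSError:\n"
    "    print('clean')\n"    # fd correctly closed in the child
)
out = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True, text=True,
).stdout.strip()
os.close(r)
os.close(w)
```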
50,732 | 3,006,582,596 | IssuesEvent | 2015-07-27 11:26:06 | Itseez/opencv | https://api.github.com/repos/Itseez/opencv | opened | cv2.dft output changes between loop iterations for the same input array in complex output mode | affected: 2.4 auto-transferred bug category: python bindings priority: normal | Transferred from http://code.opencv.org/issues/4364
```
|| Roger Olivé on 2015-05-27 20:48
|| Priority: Normal
|| Affected: 2.4.9 (latest release)
|| Category: python bindings
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Any
```
cv2.dft output changes between loop iterations for the same input array in complex output mode
-----------
```
I posted a question in OpenCV answers because I've been getting +changing outputs+ for DFT operations in Python over +the same input array+ under very specific conditions:
http://answers.opencv.org/question/62514/strange-output-when-using-cv2dft-inside-loops/
After some research, it seems to be a bug that appears when using cv2.dft under the following conditions:
* Calling cv2.dft to convert at least two different random arrays inside a loop. (haven't tried with more)
* One of the converted arrays is real and the other complex.
* flags=cv2.DFT_COMPLEX_OUTPUT specified at least for the DFT of the real input.
* The array returned by the second DFT is NOT explicitly assigned to a variable name.
* The problem doesn't seem to appear in C++. Looks specific to the Python bindings.
I only tried it on Linux x86_64 but I have a feeling this is not something HW and/or OS specific.
Compact code to reproduce the issue (courtesy of mshabunin from OpenCV answers):
<pre>
import cv2
import numpy as np
A = np.random.rand(16384)
X = np.random.rand(2, 16384)
for i in range (0,5):
Am = cv2.dft(A, flags=cv2.DFT_COMPLEX_OUTPUT)
print("DFT min: %f" % np.min(Am))
cv2.dft(X)
</pre>
```
History
------- | 1.0 | cv2.dft output changes between loop iterations for the same input array in complex output mode - Transferred from http://code.opencv.org/issues/4364
```
|| Roger Olivé on 2015-05-27 20:48
|| Priority: Normal
|| Affected: 2.4.9 (latest release)
|| Category: python bindings
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Any
```
cv2.dft output changes between loop iterations for the same input array in complex output mode
-----------
```
I posted a question in OpenCV answers because I've been getting +changing outputs+ for DFT operations in Python over +the same input array+ under very specific conditions:
http://answers.opencv.org/question/62514/strange-output-when-using-cv2dft-inside-loops/
After some research, it seems to be a bug that appears when using cv2.dft under the following conditions:
* Calling cv2.dft to convert at least two different random arrays inside a loop. (haven't tried with more)
* One of the converted arrays is real and the other complex.
* flags=cv2.DFT_COMPLEX_OUTPUT specified at least for the DFT of the real input.
* The array returned by the second DFT is NOT explicitly assigned to a variable name.
* The problem doesn't seem to appear in C++. Looks specific to the Python bindings.
I only tried it on Linux x86_64 but I have a feeling this is not something HW and/or OS specific.
Compact code to reproduce the issue (courtesy of mshabunin from OpenCV answers):
<pre>
import cv2
import numpy as np
A = np.random.rand(16384)
X = np.random.rand(2, 16384)
for i in range (0,5):
Am = cv2.dft(A, flags=cv2.DFT_COMPLEX_OUTPUT)
print("DFT min: %f" % np.min(Am))
cv2.dft(X)
</pre>
```
History
------- | non_test | dft output changes between loop iterations for the same input array in complex output mode transferred from roger olivé on priority normal affected latest release category python bindings tracker bug difficulty pr platform any any dft output changes between loop iterations for the same input array in complex output mode i posted a question in opencv answers because i ve been getting changing outputs for dft operations in python over the same input array under very specific conditions after some research it seems to be a bug that appears when using dft under the following conditions calling dft to convert at least two different random arrays inside a loop haven t tried with more one of the converted arrays is real and the other complex flags dft complex output specified at least for the dft of the real input the array returned by the second dft is not explicitly assigned to a variable name the problem doesn t seem to appear in c looks specific to the python bindings i only tried it on linux but i have a feeling this is not something hw and or os specific compact code to reproduce the issue courtesy of mshabunin from opencv answers import import numpy as np a np random rand x np random rand for i in range am dft a flags dft complex output print dft min f np min am dft x history | 0 |
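The report above is about `cv2.dft` returning different output for the same input array across loop iterations. As a baseline for the expected behavior, here is a pure-Python naive DFT (stdlib `cmath` only, not OpenCV, so it sidesteps the binding bug entirely) run twice on the same array — the two spectra must match exactly:

```python
import cmath

def naive_dft(x):
    # O(n^2) discrete Fourier transform: X[k] = sum_m x[m] * exp(-2*pi*i*k*m/n)
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

signal = [0.0, 1.0, 0.0, -1.0]  # one cycle of a sine sampled four times
first = naive_dft(signal)
second = naive_dft(signal)  # same input on a second "loop iteration"
stable = all(abs(a - b) < 1e-12 for a, b in zip(first, second))
```

A deterministic transform like this is what the reporter expects from `cv2.dft`; the practical workaround mentioned in the thread is to always assign the DFT result to a variable so the output buffer is not reused.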
240,547 | 20,035,957,949 | IssuesEvent | 2022-02-02 11:55:24 | bp/resqpy | https://api.github.com/repos/bp/resqpy | closed | Grid/faults.py find_faults check | bug tests | Check final value when fault reaches end of grid - shows zero. | 1.0 | Grid/faults.py find_faults check - Check final value when fault reaches end of grid - shows zero. | test | grid faults py find faults check check final value when fault reaches end of grid shows zero | 1 |
23,858 | 7,404,560,581 | IssuesEvent | 2018-03-20 05:36:49 | openshiftio/openshift.io | https://api.github.com/repos/openshiftio/openshift.io | closed | Jenkins moves plugins on every startup | SEV1-urgent area/jenkins issue/crucial-dep team/build-cd type/bug | Every time Jenkins boots, either due to new-init for user, or un-idle or pod relocation for infra work or on user demand, it will move 400+ MB of data to /var/lib/jenkins/plugins - if this is anything other than a new-init, Jenkins first removes each of the plugins, one at a time, and then replaces the exact same content into place.
/var/lib/jenkins/ is mounted as a PVC
We should not be moving this data into the jenkins-home PVC - can we replace the /var/lib/jenkins/plugins/ folder with just a symlink to the place the content is already stored? We are not expecting this content to be replaced, and in basic testing it looks OK to sit on container storage. | 1.0 | Jenkins moves plugins on every startup - Every time Jenkins boots, either due to new-init for user, or un-idle or pod relocation for infra work or on user demand, it will move 400+ MB of data to /var/lib/jenkins/plugins - if this is anything other than a new-init, Jenkins first removes each of the plugins, one at a time, and then replaces the exact same content into place.
/var/lib/jenkins/ is mounted as a PVC
We should not be moving this data into the jenkins-home PVC - can we replace the /var/lib/jenkins/plugins/ folder to just be a symlink to the place the content is stored anyway ? we are not expecting this content to be replaced, and in basic testing it looks ok to sit on container storage. | non_test | jenkins moves plugins on every startup every time jenkins boots either due to new init for user or un idle or pod relocation for infra work or on user demand it will move mb of data to var lib jenkins plugins if this is anything other than a new init jenkins first removes each of the plugins one at a time and then replaces the exact same content into place var lib jenkins is mounted as a pvc we should not be moving this data into the jenkins home pvc can we replace the var lib jenkins plugins folder to just be a symlink to the place the content is stored anyway we are not expecting this content to be replaced and in basic testing it looks ok to sit on container storage | 0 |
120,871 | 10,137,820,603 | IssuesEvent | 2019-08-02 16:12:42 | jupyter/repo2docker | https://api.github.com/repos/jupyter/repo2docker | closed | Rewrite semver tests to use pytest's parametrize functionality | good first issue help wanted needs: tests | via https://github.com/jupyter/repo2docker/pull/613#discussion_r263836315:
> In a future refactor, we should use pytest's parameterize capability to feed in versions to get closer to DRY. [See example from JupyterHub tests](https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/tests/test_proxy.py#L169).
This would make a good issue for getting started on contributing to repo2docker and/or someone who wants to learn more about pytest. | 1.0 | Rewrite semver tests to use pytest's parametrize functionality - via https://github.com/jupyter/repo2docker/pull/613#discussion_r263836315:
> In a future refactor, we should use pytest's parameterize capability to feed in versions to get closer to DRY. [See example from JupyterHub tests](https://github.com/jupyterhub/jupyterhub/blob/master/jupyterhub/tests/test_proxy.py#L169).
This would make a good issue for getting started on contributing to repo2docker and/or someone who wants to learn more about pytest. | test | rewrite semver tests to use pytest s parametrize functionality via in a future refactor we should use pytest s parameterize capability to feed in versions to get closer to dry this would make a good issue for getting started on contributing to and or someone who wants to learn more about pytest | 1 |
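Following up on the suggestion above, a minimal sketch of what a parametrized semver test could look like. The `at_least` helper and the version cases are invented for illustration — repo2docker's real semver logic lives in its own modules — and a fallback decorator keeps the sketch runnable even where pytest is not installed:

```python
try:
    import pytest
    parametrize = pytest.mark.parametrize
except ImportError:
    # Fallback so the sketch still runs without pytest installed.
    def parametrize(argnames, argvalues):
        def decorator(fn):
            fn.cases = argvalues  # stash the cases on the function
            return fn
        return decorator

def _to_tuple(version):
    return tuple(int(part) for part in version.split("."))

def at_least(installed, required):
    """True if `installed` is at least `required`, compared numerically."""
    return _to_tuple(installed) >= _to_tuple(required)

@parametrize("installed,required,expected", [
    ("1.8.2", "1.8.0", True),    # newer patch passes
    ("1.8.2", "1.8.2", True),    # equal versions pass
    ("1.7.9", "1.8.0", False),   # older minor fails
    ("2.0.0", "10.0.0", False),  # numeric compare, not string compare
])
def test_at_least(installed, required, expected):
    assert at_least(installed, required) == expected
```

Each tuple in the decorator becomes one test case, so adding a version to the list adds a test — the DRY property the quoted comment asks for.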
344,031 | 30,708,336,577 | IssuesEvent | 2023-07-27 08:02:35 | KubeJS-Mods/KubeJS | https://api.github.com/repos/KubeJS-Mods/KubeJS | closed | stats.get/set/add don't work | bug needs testing | Versions:
>kjs: kubejs-fabric-1802.5.4-build.506.jar
rhino: rhino-fabric-1802.1.14-build.190.jar
fabric: 0.14.6
fabric-api: fabric-api-0.56.1+1.18.2.jar
https://github.com/KubeJS-Mods/KubeJS/blob/614f5813340bfc08f441e2c209d170e91b3e6798/common/src/main/java/dev/latvian/mods/kubejs/player/PlayerStatsJS.java#L115
I wasn't able to find the method "Stats.CUSTOM.id(string)" (from the line above) in the fabric docs (unclear if this works on forge) and the functions player.stats.get/set/add(id) return a wrapped NPE.
Also, I'm pretty sure these methods read the stat file in /world/stats/playeruuid.json, but editing the file does not change the in game stat because it appears to need to be [updated](https://github.com/FabricMC/yarn/blob/db6b7357f8cb28790ae6f877f497150188b30cdb/mappings/net/minecraft/stat/ServerStatHandler.mapping#L24) but I don't think it accepts changes to the file in that direction (file -> server). In the end I was able to edit the stat using `serverPlayerEntity.resetStat` to which I passed `Stats.CUSTOM.getOrCreateStat(Stats.TIME_SINCE_REST)`
It seems like you should be able to pass instead, for example, a new Identifier with `let dmg = new Identifier("minecraft:damage_taken")` but it gives me the error `Wrapped java.lang.NullPointerException: Cannot invoke "net.minecraft.class_2960.toString()" because "$$0" is null` even though `Stats.DAMAGE_TAKEN` is the same class (class 2960) and `dmg == Stats.DAMAGE_TAKEN` returns `true`
Anyway I hope this has provided enough information.
here are the relevant mappings
https://github.com/FabricMC/yarn/blob/dfda2435aa0ca2f8707fa5a7d6ecb57b3e5546a9/mappings/net/minecraft/entity/player/PlayerEntity.mapping#L132
https://maven.fabricmc.net/docs/yarn-21w15a+build.2/net/minecraft/stat/Stats.html (this is not for the correct version but is easier to read and includes the fields)
https://github.com/FabricMC/yarn/blob/1.18.2/mappings/net/minecraft/stat/Stats.mapping
and here is my awful reflection code (works)
```js
const Stats = java("net.minecraft.class_3468")
const Stat = java("net.minecraft.class_3445")
const Javaobj = java("java.lang.Object")
onEvent('player.chat', function (event) {
let serverPlayerEntity = event.player.minecraftPlayer
let resetStat = serverPlayerEntity.class.getMethod("method_7266", Stat)
let getOrCreateStat = Stats.CUSTOM.class.getMethod("method_14956", Javaobj)
resetStat.invoke(serverPlayerEntity, getOrCreateStat.invoke(Stats.CUSTOM, Stats.TIME_SINCE_REST))
event.server.tell(event.player.stats.getTimeSinceRest())
})
```
By the way, the reason I'm using reflection in this manner instead of the new deobsfuscated methods is because theres a fabric-specific bug with it starting with version 1.18.2; thread on the discord here: https://discord.com/channels/303440391124942858/981262916051423303 | 1.0 | stats.get/set/add don't work - Versions:
>kjs: kubejs-fabric-1802.5.4-build.506.jar
rhino: rhino-fabric-1802.1.14-build.190.jar
fabric: 0.14.6
fabric-api: fabric-api-0.56.1+1.18.2.jar
https://github.com/KubeJS-Mods/KubeJS/blob/614f5813340bfc08f441e2c209d170e91b3e6798/common/src/main/java/dev/latvian/mods/kubejs/player/PlayerStatsJS.java#L115
I wasn't able to find the method "Stats.CUSTOM.id(string)" (from the line above) in the fabric docs (unclear if this works on forge) and the functions player.stats.get/set/add(id) return a wrapped NPE.
Also, I'm pretty sure these methods read the stat file in /world/stats/playeruuid.json, but editing the file does not change the in game stat because it appears to need to be [updated](https://github.com/FabricMC/yarn/blob/db6b7357f8cb28790ae6f877f497150188b30cdb/mappings/net/minecraft/stat/ServerStatHandler.mapping#L24) but I don't think it accepts changes to the file in that direction (file -> server). In the end I was able to edit the stat using `serverPlayerEntity.resetStat` to which I passed `Stats.CUSTOM.getOrCreateStat(Stats.TIME_SINCE_REST)`
It seems like you should be able to pass instead, for example, a new Identifier with `let dmg = new Identifier("minecraft:damage_taken")` but it gives me the error `Wrapped java.lang.NullPointerException: Cannot invoke "net.minecraft.class_2960.toString()" because "$$0" is null` even though `Stats.DAMAGE_TAKEN` is the same class (class 2960) and `dmg == Stats.DAMAGE_TAKEN` returns `true`
Anyway I hope this has provided enough information.
here are the relevant mappings
https://github.com/FabricMC/yarn/blob/dfda2435aa0ca2f8707fa5a7d6ecb57b3e5546a9/mappings/net/minecraft/entity/player/PlayerEntity.mapping#L132
https://maven.fabricmc.net/docs/yarn-21w15a+build.2/net/minecraft/stat/Stats.html (this is not for the correct version but is easier to read and includes the fields)
https://github.com/FabricMC/yarn/blob/1.18.2/mappings/net/minecraft/stat/Stats.mapping
and here is my awful reflection code (works)
```js
const Stats = java("net.minecraft.class_3468")
const Stat = java("net.minecraft.class_3445")
const Javaobj = java("java.lang.Object")
onEvent('player.chat', function (event) {
let serverPlayerEntity = event.player.minecraftPlayer
let resetStat = serverPlayerEntity.class.getMethod("method_7266", Stat)
let getOrCreateStat = Stats.CUSTOM.class.getMethod("method_14956", Javaobj)
resetStat.invoke(serverPlayerEntity, getOrCreateStat.invoke(Stats.CUSTOM, Stats.TIME_SINCE_REST))
event.server.tell(event.player.stats.getTimeSinceRest())
})
```
By the way, the reason I'm using reflection in this manner instead of the new deobsfuscated methods is because theres a fabric-specific bug with it starting with version 1.18.2; thread on the discord here: https://discord.com/channels/303440391124942858/981262916051423303 | test | stats get set add don t work versions kjs kubejs fabric build jar rhino rhino fabric build jar fabric fabric api fabric api jar i wasn t able to find the method stats custom id string from the line above in the fabric docs unclear if this works on forge and the functions player stats get set add id return a wrapped npe also i m pretty sure these methods read the stat file in world stats playeruuid json but editing the file does not change the in game stat because it appears to need to be but i don t think it accepts changes to the file in that direction file server in the end i was able to edit the stat using serverplayerentity resetstat to which i passed stats custom getorcreatestat stats time since rest it seems like you should be able to pass instead for example a new identifier with let dmg new identifier minecraft damage taken but it gives me the error wrapped java lang nullpointerexception cannot invoke net minecraft class tostring because is null even though stats damage taken is the same class class and dmg stats damage taken returns true anyway i hope this has provided enough information here are the relevant mappings this is not for the correct version but is easier to read and includes the fields and here is my awful reflection code works js const stats java net minecraft class const stat java net minecraft class const javaobj java java lang object onevent player chat function event let serverplayerentity event player minecraftplayer let resetstat serverplayerentity class getmethod method stat let getorcreatestat stats custom class getmethod method javaobj resetstat invoke serverplayerentity getorcreatestat invoke stats custom stats time since rest event server tell event player stats gettimesincerest by the way the reason i m using reflection in this manner instead of the new deobsfuscated methods is because theres a fabric specific bug with it starting with version thread on the discord here | 1
348,579 | 31,653,664,201 | IssuesEvent | 2023-09-07 01:56:55 | masters2023-project-06-second-hand/be-a | https://api.github.com/repos/masters2023-project-06-second-hand/be-a | opened | [BE] Write Oauth test code | 🚦test | ## ✨ What needs to be done to implement this feature?
- [x] Write test code
## ✅ Test Case
- [x] Write test code
 | 1.0 | [BE] Write Oauth test code - ## ✨ What needs to be done to implement this feature?
- [x] Write test code
## ✅ Test Case
- [x] Write test code
 | test | write oauth test code ✨ what needs to be done to implement this feature write test code ✅ test case write test code | 1
427,450 | 29,813,124,177 | IssuesEvent | 2023-06-16 16:39:25 | charmockridge/LotteryNumberProbabilityPredictor | https://api.github.com/repos/charmockridge/LotteryNumberProbabilityPredictor | opened | AI Model Testing | documentation testing | Use test plan created alongside the development of ann.py and document testing results. Test plan should now be a test report once finished. Create issues for any failed tests and label it as bug. | 1.0 | AI Model Testing - Use test plan created alongside the development of ann.py and document testing results. Test plan should now be a test report once finished. Create issues for any failed tests and label it as bug. | non_test | ai model testing use test plan created alongside the development of ann py and document testing results test plan should now be a test report once finished create issues for any failed tests and label it as bug | 0 |
209,310 | 16,015,118,080 | IssuesEvent | 2021-04-20 15:08:58 | istio/istio.io | https://api.github.com/repos/istio/istio.io | closed | virtual machine examples need a bit of refinement | area/networking community/testing days | The virtual machine examples are probably ok as is, but the install directions in the virtual machine examples are abysmally bad and just don't work. The install problems are tracked in issue https://github.com/istio/istio.io/issues/7168.
IMO this is a TRUE RB, although we shipped with this regression in 1.5, so shipping an old regression is borderline P0.
As such, I'll accept the P0 for now...
To rectify the virtual machine integration, the examples should be consolidated into one file and all instructions removed. That work is in progress in PR: https://github.com/istio/istio.io/pull/7267 | 1.0 | virtual machine examples need a bit of refinement - The virtual machine examples are probably ok as is, but the install directions in the virtual machine examples are abysmally bad and just don't work. The install problems are tracked in issue https://github.com/istio/istio.io/issues/7168.
IMO this is a TRUE RB, although we shipped with this regression in 1.5, so shipping an old regression is borderline P0.
As such, I'll accept the P0 for now...
To rectify the virtual machine integration, the examples should be consolidated into one file and all instructions removed. That work is in progress in PR: https://github.com/istio/istio.io/pull/7267 | test | virtual machine examples need a bit of refinement the virtual machine examples are probably ok as is but the install directions in the virtual machine examples are abysmally bad and just don t work the install problems are tracked in issue imo this is a true rb although we shipped with this regression in so shipping an old regression is borderline as such i ll accept the for now to rectify the virtual machine integration the examples should be consolidated into one file and all instructions removed that work is in progress in pr | 1 |
2,491 | 5,267,387,166 | IssuesEvent | 2017-02-04 21:58:02 | jlm2017/jlm-video-subtitles | https://api.github.com/repos/jlm2017/jlm-video-subtitles | opened | [subtitles] [eng] MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney | Language: English Process: [0] Awaiting subtitles | # Video title
MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney
# URL
https://www.youtube.com/watch?v=jO8TCOMU2i8
# Youtube subtitles language
Langue des sous-titres (Anglais)
# Duration
36:27
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&forceedit=captions&ui=hd&tab=captions&v=jO8TCOMU2i8&lang=en&action_mde_edit_form=1&bl=vmp | 1.0 | [subtitles] [eng] MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney - # Video title
MÉLENCHON - Discours sur l'abolition de l'esclavage à Champagney
# URL
https://www.youtube.com/watch?v=jO8TCOMU2i8
# Youtube subtitles language
Langue des sous-titres (Anglais)
# Duration
36:27
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&forceedit=captions&ui=hd&tab=captions&v=jO8TCOMU2i8&lang=en&action_mde_edit_form=1&bl=vmp | non_test | mélenchon discours sur l abolition de l esclavage à champagney video title mélenchon discours sur l abolition de l esclavage à champagney url youtube subtitles language langue des sous titres anglais duration subtitles url | 0 |
73,171 | 19,587,571,137 | IssuesEvent | 2022-01-05 09:05:49 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | opened | [SB] [GCI] In add admin screen phone number field is accepting invalid input | Bug P1 Study builder | 1. In Add admin screen, complete all the fields and then > Click on phone number field.
2. Enter the invalid input such as only alphabets or combinations of alphabets and numbers.
3. Click on add admin and Observe.
**AR:** Accepting invalid inputs in the phone number field.
**ER:** Should not accept invalid inputs in the phone number field.
**Note:** Issue needs to be fixed even in set up your study builder account screen.
| 1.0 | [SB] [GCI] In add admin screen phone number field is accepting invalid input - 1. In Add admin screen, complete all the fields and then > Click on phone number field.
2. Enter the invalid input such as only alphabets or combinations of alphabets and numbers.
3. Click on add admin and Observe.
**AR:** Accepting invalid inputs in the phone number field.
**ER:** Should not accept invalid inputs in the phone number field.
**Note:** Issue needs to be fixed even in set up your study builder account screen.
| non_test | in add admin screen phone number field is accepting invalid input in add admin screen complete all the fields and then click on phone number field enter the invalid input such as only alphabets or combinations of alphabets and numbers click on add admin and observe ar accepting invalid inputs in the phone number field er should not accept invalid inputs in the phone number field note issue needs to be fixed even in set up your study builder account screen | 0 |
282,807 | 24,498,007,190 | IssuesEvent | 2022-10-10 10:20:43 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | opened | Ubuntu Linux 22.04 SCA Policy - Update and rework - checks 4.1.4 to 4.2.3 | team/qa type/qa-testing status/not-tracked | | Target version | Related issue | Related PR |
| -------------- | ------------- | ----------------------------------------- |
| 4.4.0 | #3391 | https://github.com/wazuh/wazuh/pull/10487 |
| Check ID | Check Name | Implemented | Ready for review | QA review |
| --------- | -------------------------------------------------------------------------------------------- | ----------- | ---------------- | --------- |
| 4.1.4 | Configure auditd file access | | | |
| 4.1.4.1 | Ensure audit log files are mode 0640 or less permissive (Automated) | | | |
| 4.1.4.2 | Ensure only authorized users own audit log files (Automated) | | | |
| 4.1.4.3 | Ensure only authorized groups are assigned ownership of audit log files (Automated) | | | |
| 4.1.4.4 | Ensure the audit log directory is 0750 or more restrictive (Automated) | | | |
| 4.1.4.5 | Ensure audit configuration files are 640 or more restrictive (Automated) | | | |
| 4.1.4.6 | Ensure audit configuration files are owned by root (Automated) | | | |
| 4.1.4.7 | Ensure audit configuration files belong to group root (Automated) | | | |
| 4.1.4.8 | Ensure audit tools are 755 or more restrictive (Automated) | | | |
| 4.1.4.9 | Ensure audit tools are owned by root (Automated) | | | |
| 4.1.4.10 | Ensure audit tools belong to group root (Automated) | | | |
| 4.1.4.11 | Ensure cryptographic mechanisms are used to protect the integrity of audit tools (Automated) | | | |
| 4.2 | Configure Logging | | | |
| 4.2.1 | Configure journald | | | |
| 4.2.1.1 | Ensure journald is configured to send logs to a remote log host | | | |
| 4.2.1.1.1 | Ensure systemd-journal-remote is installed (Automated) | | | |
| 4.2.1.1.2 | Ensure systemd-journal-remote is configured (Manual) | | | |
| 4.2.1.1.3 | Ensure systemd-journal-remote is enabled (Manual) | | | |
| 4.2.1.1.4 | Ensure journald is not configured to recieve logs from a remote client (Automated) | | | |
| 4.2.1.2 | Ensure journald service is enabled (Automated) | | | |
| 4.2.1.3 | Ensure journald is configured to compress large log files (Automated) | | | |
| 4.2.1.4 | Ensure journald is configured to write logfiles to persistent disk (Automated) | | | |
| 4.2.1.5 | Ensure journald is not configured to send logs to rsyslog (Manual) | | | |
| 4.2.1.6 | Ensure journald log rotation is configured per site policy (Manual) | | | |
| 4.2.1.7 | Ensure journald default file permissions configured (Manual) | | | |
| 4.2.2 | Configure rsyslog | | | |
| 4.2.2.1 | Ensure rsyslog is installed (Automated) | | | |
| 4.2.2.2 | Ensure rsyslog service is enabled (Automated) | | | |
| 4.2.2.3 | Ensure journald is configured to send logs to rsyslog (Manual) | | | |
| 4.2.2.4 | Ensure rsyslog default file permissions are configured (Automated) | | | |
| 4.2.2.5 | Ensure logging is configured (Manual) | | | |
| 4.2.2.6 | Ensure rsyslog is configured to send logs to a remote log host (Manual) | | | |
| 4.2.2.7 | Ensure rsyslog is not configured to receive logs from a remote client (Automated) | | | |
| 4.2.3 | Ensure all logfiles have appropriate permissions and ownership (Automated) | | | |
| 1.0 | Ubuntu Linux 22.04 SCA Policy - Update and rework - checks 4.1.4 to 4.2.3 - | Target version | Related issue | Related PR |
| -------------- | ------------- | ----------------------------------------- |
| 4.4.0 | #3391 | https://github.com/wazuh/wazuh/pull/10487 |
| Check ID | Check Name | Implemented | Ready for review | QA review |
| --------- | -------------------------------------------------------------------------------------------- | ----------- | ---------------- | --------- |
| 4.1.4 | Configure auditd file access | | | |
| 4.1.4.1 | Ensure audit log files are mode 0640 or less permissive (Automated) | | | |
| 4.1.4.2 | Ensure only authorized users own audit log files (Automated) | | | |
| 4.1.4.3 | Ensure only authorized groups are assigned ownership of audit log files (Automated) | | | |
| 4.1.4.4 | Ensure the audit log directory is 0750 or more restrictive (Automated) | | | |
| 4.1.4.5 | Ensure audit configuration files are 640 or more restrictive (Automated) | | | |
| 4.1.4.6 | Ensure audit configuration files are owned by root (Automated) | | | |
| 4.1.4.7 | Ensure audit configuration files belong to group root (Automated) | | | |
| 4.1.4.8 | Ensure audit tools are 755 or more restrictive (Automated) | | | |
| 4.1.4.9 | Ensure audit tools are owned by root (Automated) | | | |
| 4.1.4.10 | Ensure audit tools belong to group root (Automated) | | | |
| 4.1.4.11 | Ensure cryptographic mechanisms are used to protect the integrity of audit tools (Automated) | | | |
| 4.2 | Configure Logging | | | |
| 4.2.1 | Configure journald | | | |
| 4.2.1.1 | Ensure journald is configured to send logs to a remote log host | | | |
| 4.2.1.1.1 | Ensure systemd-journal-remote is installed (Automated) | | | |
| 4.2.1.1.2 | Ensure systemd-journal-remote is configured (Manual) | | | |
| 4.2.1.1.3 | Ensure systemd-journal-remote is enabled (Manual) | | | |
| 4.2.1.1.4 | Ensure journald is not configured to recieve logs from a remote client (Automated) | | | |
| 4.2.1.2 | Ensure journald service is enabled (Automated) | | | |
| 4.2.1.3 | Ensure journald is configured to compress large log files (Automated) | | | |
| 4.2.1.4 | Ensure journald is configured to write logfiles to persistent disk (Automated) | | | |
| 4.2.1.5 | Ensure journald is not configured to send logs to rsyslog (Manual) | | | |
| 4.2.1.6 | Ensure journald log rotation is configured per site policy (Manual) | | | |
| 4.2.1.7 | Ensure journald default file permissions configured (Manual) | | | |
| 4.2.2 | Configure rsyslog | | | |
| 4.2.2.1 | Ensure rsyslog is installed (Automated) | | | |
| 4.2.2.2 | Ensure rsyslog service is enabled (Automated) | | | |
| 4.2.2.3 | Ensure journald is configured to send logs to rsyslog (Manual) | | | |
| 4.2.2.4 | Ensure rsyslog default file permissions are configured (Automated) | | | |
| 4.2.2.5 | Ensure logging is configured (Manual) | | | |
| 4.2.2.6 | Ensure rsyslog is configured to send logs to a remote log host (Manual) | | | |
| 4.2.2.7 | Ensure rsyslog is not configured to receive logs from a remote client (Automated) | | | |
| 4.2.3 | Ensure all logfiles have appropriate permissions and ownership (Automated) | | | |
 | test | ubuntu linux sca policy update and rework checks to target version related issue related pr check id check name implemented ready for review qa review configure auditd file access ensure audit log files are mode or less permissive automated ensure only authorized users own audit log files automated ensure only authorized groups are assigned ownership of audit log files automated ensure the audit log directory is or more restrictive automated ensure audit configuration files are or more restrictive automated ensure audit configuration files are owned by root automated ensure audit configuration files belong to group root automated ensure audit tools are or more restrictive automated ensure audit tools are owned by root automated ensure audit tools belong to group root automated ensure cryptographic mechanisms are used to protect the integrity of audit tools automated configure logging configure journald ensure journald is configured to send logs to a remote log host ensure systemd journal remote is installed automated ensure systemd journal remote is configured manual ensure systemd journal remote is enabled manual ensure journald is not configured to recieve logs from a remote client automated ensure journald service is enabled automated ensure journald is configured to compress large log files automated ensure journald is configured to write logfiles to persistent disk automated ensure journald is not configured to send logs to rsyslog manual ensure journald log rotation is configured per site policy manual ensure journald default file permissions configured manual configure rsyslog ensure rsyslog is installed automated ensure rsyslog service is enabled automated ensure journald is configured to send logs to rsyslog manual ensure rsyslog default file permissions are configured automated ensure logging is configured manual ensure rsyslog is configured to send logs to a remote log host manual ensure rsyslog is not configured to receive logs from a remote client automated ensure all logfiles have appropriate permissions and ownership automated | 1
46,434 | 5,809,194,365 | IssuesEvent | 2017-05-04 12:51:08 | IDgis/geoportaal-test | https://api.github.com/repos/IDgis/geoportaal-test | closed | Rapportage geodatasets wms extern beschikbaar | gebruikerstest impact laag wens | In de rapportage voor Geodatasets een extra kolom opnemen, met als titel WMS extern.
Deze is "ja" als in de metadata staat "WMS extern beschikbaar" en "nee" als dit er niet staat
hang samen met #388 | 1.0 | Rapportage geodatasets wms extern beschikbaar - In de rapportage voor Geodatasets een extra kolom opnemen, met als titel WMS extern.
Deze is "ja" als in de metadata staat "WMS extern beschikbaar" en "nee" als dit er niet staat
hang samen met #388 | test | rapportage geodatasets wms extern beschikbaar in de rapportage voor geodatasets een extra kolom opnemen met als titel wms extern deze is ja als in de metadata staat wms extern beschikbaar en nee als dit er niet staat hang samen met | 1 |
282,834 | 30,889,440,216 | IssuesEvent | 2023-08-04 02:43:39 | madhans23/linux-4.1.15 | https://api.github.com/repos/madhans23/linux-4.1.15 | reopened | CVE-2022-30594 (High) detected in linux-stable-rtv4.1.33 | Mend: dependency security vulnerability | ## CVE-2022-30594 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 5.17.2 mishandles seccomp permissions. The PTRACE_SEIZE code path allows attackers to bypass intended restrictions on setting the PT_SUSPEND_SECCOMP flag.
<p>Publish Date: 2022-05-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-30594>CVE-2022-30594</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-30594">https://www.linuxkernelcves.com/cves/CVE-2022-30594</a></p>
<p>Release Date: 2022-05-12</p>
<p>Fix Resolution: v4.9.311,v4.14.276,v4.19.238,v5.4.189,v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-30594 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2022-30594 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The Linux kernel before 5.17.2 mishandles seccomp permissions. The PTRACE_SEIZE code path allows attackers to bypass intended restrictions on setting the PT_SUSPEND_SECCOMP flag.
<p>Publish Date: 2022-05-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-30594>CVE-2022-30594</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-30594">https://www.linuxkernelcves.com/cves/CVE-2022-30594</a></p>
<p>Release Date: 2022-05-12</p>
<p>Fix Resolution: v4.9.311,v4.14.276,v4.19.238,v5.4.189,v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files vulnerability details the linux kernel before mishandles seccomp permissions the ptrace seize code path allows attackers to bypass intended restrictions on setting the pt suspend seccomp flag publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
283,888 | 24,569,520,087 | IssuesEvent | 2022-10-13 07:30:52 | prisma/prisma | https://api.github.com/repos/prisma/prisma | closed | Add TS test for datasource property `referentialIntegrity` and `relationMode` | topic: tests tech/typescript kind/tech team/schema topic: referentialIntegrity/relationMode | [We renamed](https://github.com/prisma/prisma/issues/15601) the datasource property `referentialIntegrity` to `relationMode`, both are valid for backward compatibility.
We should have tests that show that both works as expected.
| 1.0 | Add TS test for datasource property `referentialIntegrity` and `relationMode` - [We renamed](https://github.com/prisma/prisma/issues/15601) the datasource property `referentialIntegrity` to `relationMode`, both are valid for backward compatibility.
We should have tests that show that both works as expected.
| test | add ts test for datasource property referentialintegrity and relationmode the datasource property referentialintegrity to relationmode both are valid for backward compatibility we should have tests that show that both works as expected | 1 |
147,597 | 19,522,850,282 | IssuesEvent | 2021-12-29 22:31:30 | swagger-api/swagger-codegen | https://api.github.com/repos/swagger-api/swagger-codegen | opened | CVE-2019-8331 (Medium) detected in bootstrap-2.2.2.js | security vulnerability | ## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-2.2.2.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.2.2/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.2.2/bootstrap.js</a></p>
<p>Path to vulnerable library: /src/main/resources/swagger-static/assets/js/bootstrap.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-2.2.2.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/swagger-api/swagger-codegen/commit/4b7a8d7d7384aa6a27d6309c35ade0916edae7ed">4b7a8d7d7384aa6a27d6309c35ade0916edae7ed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"2.2.2","packageFilePaths":[null],"isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:2.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-8331","vulnerabilityDetails":"In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-8331 (Medium) detected in bootstrap-2.2.2.js - ## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-2.2.2.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.2.2/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.2.2/bootstrap.js</a></p>
<p>Path to vulnerable library: /src/main/resources/swagger-static/assets/js/bootstrap.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-2.2.2.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/swagger-api/swagger-codegen/commit/4b7a8d7d7384aa6a27d6309c35ade0916edae7ed">4b7a8d7d7384aa6a27d6309c35ade0916edae7ed</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"twitter-bootstrap","packageVersion":"2.2.2","packageFilePaths":[null],"isTransitiveDependency":false,"dependencyTree":"twitter-bootstrap:2.2.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-8331","vulnerabilityDetails":"In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_test | cve medium detected in bootstrap js cve medium severity vulnerability vulnerable library bootstrap js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library src main resources swagger static assets js bootstrap js dependency hierarchy x bootstrap js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap before and x before xss is possible in the tooltip or popover data template attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap bootstrap sass isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false 
dependencytree twitter bootstrap isminimumfixversionavailable true minimumfixversion bootstrap bootstrap sass isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails in bootstrap before and x before xss is possible in the tooltip or popover data template attribute vulnerabilityurl | 0 |
156,919 | 24,626,224,225 | IssuesEvent | 2022-10-16 14:58:40 | dotnet/efcore | https://api.github.com/repos/dotnet/efcore | closed | HasAlternateKey migration generation create new column | closed-by-design customer-reported | I added .HasAlternateKey on one property, and it created a new column (ProjectId1) but used the column I expected it to use (ProjectId) for the unique constraint. The ProjectId1 column should not have been created; why is it there?
We decided to use HasAlternateKey because it can be used, like the primary key, to find the entity.
### Steps to reproduce
``` C#
// DbContext (OnModelCreating)
modelBuilder.Entity<CustomerBasicInformation>()
.OwnsOne(cbi => cbi.CustomerAddress);
modelBuilder.Entity<CustomerBasicInformation>()
.HasAlternateKey(cbi => cbi.ProjectId);
```
```C#
public class CustomerBasicInformation : BaseEntity
{
public Guid ProjectId { get; set; }
public virtual Project? Project { get; set; }
// ... other properties
}
```
Migrations generated:
```C#
protected override void Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.DropForeignKey(
name: "FK_CustomerBasicInformation_Project_ProjectId",
schema: "dbo",
table: "CustomerBasicInformation");
migrationBuilder.DropIndex(
name: "IX_CustomerBasicInformation_ProjectId",
schema: "dbo",
table: "CustomerBasicInformation");
migrationBuilder.AddColumn<Guid>(
name: "ProjectId1",
schema: "dbo",
table: "CustomerBasicInformation",
nullable: false,
defaultValue: new Guid("00000000-0000-0000-0000-000000000000"));
migrationBuilder.AddUniqueConstraint(
name: "AK_CustomerBasicInformation_ProjectId",
schema: "dbo",
table: "CustomerBasicInformation",
column: "ProjectId");
migrationBuilder.CreateIndex(
name: "IX_CustomerBasicInformation_ProjectId1",
schema: "dbo",
table: "CustomerBasicInformation",
column: "ProjectId1");
migrationBuilder.AddForeignKey(
name: "FK_CustomerBasicInformation_Project_ProjectId1",
schema: "dbo",
table: "CustomerBasicInformation",
column: "ProjectId1",
principalSchema: "dbo",
principalTable: "Project",
principalColumn: "Id",
onDelete: ReferentialAction.Cascade);
}
```
Here is the previous migration that created the table:
```C#
migrationBuilder.CreateTable(
name: "CustomerBasicInformation",
schema: "dbo",
columns: table => new
{
Id = table.Column<Guid>(nullable: false),
IsDeleted = table.Column<bool>(nullable: false),
ProjectId = table.Column<Guid>(nullable: false),
OpportunityNumber = table.Column<string>(nullable: true),
CustomerLanguageId = table.Column<int>(nullable: true),
CustomerName = table.Column<string>(nullable: true),
CustomerAddress_NumberAndStreet = table.Column<string>(nullable: true),
CustomerAddress_City = table.Column<string>(nullable: true),
CustomerAddress_ProvinceId = table.Column<int>(nullable: true),
CustomerAddress_PostalCode = table.Column<string>(nullable: true),
CustomerAddress_UnitFloorSuite = table.Column<string>(nullable: true),
MarketTypeId = table.Column<int>(nullable: true),
MainBusinessPhoneNumber = table.Column<string>(nullable: true),
ContactGroupAdmin = table.Column<string>(nullable: true),
GroupAdminPhoneNumber = table.Column<string>(nullable: true),
ContactGroupAdminEmail = table.Column<string>(nullable: true),
CreatedAt = table.Column<DateTime>(nullable: false, defaultValueSql: "SYSUTCDATETIME()"),
CreatedBy = table.Column<string>(nullable: false, defaultValueSql: "SUSER_SNAME()"),
UpdatedAt = table.Column<DateTime>(nullable: false, defaultValueSql: "SYSUTCDATETIME()"),
UpdatedBy = table.Column<string>(nullable: false, defaultValueSql: "SUSER_SNAME()")
},
constraints: table =>
{
table.PrimaryKey("PK_CustomerBasicInformation", x => x.Id);
table.ForeignKey(
name: "FK_CustomerBasicInformation_Province_CustomerAddress_ProvinceId",
column: x => x.CustomerAddress_ProvinceId,
principalSchema: "dbo",
principalTable: "Province",
principalColumn: "Id",
onDelete: ReferentialAction.Restrict);
table.ForeignKey(
name: "FK_CustomerBasicInformation_Language_CustomerLanguageId",
column: x => x.CustomerLanguageId,
principalSchema: "dbo",
principalTable: "Language",
principalColumn: "Id",
onDelete: ReferentialAction.Restrict);
table.ForeignKey(
name: "FK_CustomerBasicInformation_MarketType_MarketTypeId",
column: x => x.MarketTypeId,
principalSchema: "dbo",
principalTable: "MarketType",
principalColumn: "Id",
onDelete: ReferentialAction.Restrict);
table.ForeignKey(
name: "FK_CustomerBasicInformation_Project_ProjectId",
column: x => x.ProjectId,
principalSchema: "dbo",
principalTable: "Project",
principalColumn: "Id",
onDelete: ReferentialAction.Cascade);
});
```
### Further technical details
EF Core version: 3.0.0
Database provider: Microsoft.EntityFrameworkCore.SqlServer
Target framework: .NET Core 3.1
Operating system: Windows
IDE: none, I used the dotnet ef CLI
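As a note for readers hitting the same symptom: the shadow ProjectId1 column appears because the foreign key for the Project navigation is created by convention separately from the alternate-key property. A minimal, hedged sketch of a common workaround (not the confirmed resolution of this issue) is to configure the relationship's foreign key explicitly so the navigation reuses ProjectId; the `.WithMany()` call below is an assumption that Project has no inverse collection navigation:

```csharp
// Sketch only: explicitly tie the Project navigation's FK to ProjectId,
// so EF Core does not generate a shadow ProjectId1 column.
modelBuilder.Entity<CustomerBasicInformation>()
    .HasAlternateKey(cbi => cbi.ProjectId);

modelBuilder.Entity<CustomerBasicInformation>()
    .HasOne(cbi => cbi.Project)
    .WithMany()                           // assumption: no inverse collection on Project
    .HasForeignKey(cbi => cbi.ProjectId); // reuse the alternate-key property as the FK
```

With this configuration, a regenerated migration would be expected to add only the unique constraint on the existing ProjectId column, without introducing new columns.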
| 1.0 | HasAlternateKey migration generation create new column - I added .HasAlternateKey on one property, and it created a new column (ProjectId1) but used the column I expected it to use (ProjectId) for the unique constraint. The ProjectId1 column should not have been created; why is it there?
We decided to use HasAlternateKey because it can be used, like the primary key, to find the entity.
### Steps to reproduce
``` C#
// DbContext (OnModelCreating)
modelBuilder.Entity<CustomerBasicInformation>()
.OwnsOne(cbi => cbi.CustomerAddress);
modelBuilder.Entity<CustomerBasicInformation>()
.HasAlternateKey(cbi => cbi.ProjectId);
```
```C#
public class CustomerBasicInformation : BaseEntity
{
public Guid ProjectId { get; set; }
public virtual Project? Project { get; set; }
// ... other properties
}
```
Migrations generated:
```C#
protected override void Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.DropForeignKey(
name: "FK_CustomerBasicInformation_Project_ProjectId",
schema: "dbo",
table: "CustomerBasicInformation");
migrationBuilder.DropIndex(
name: "IX_CustomerBasicInformation_ProjectId",
schema: "dbo",
table: "CustomerBasicInformation");
migrationBuilder.AddColumn<Guid>(
name: "ProjectId1",
schema: "dbo",
table: "CustomerBasicInformation",
nullable: false,
defaultValue: new Guid("00000000-0000-0000-0000-000000000000"));
migrationBuilder.AddUniqueConstraint(
name: "AK_CustomerBasicInformation_ProjectId",
schema: "dbo",
table: "CustomerBasicInformation",
column: "ProjectId");
migrationBuilder.CreateIndex(
name: "IX_CustomerBasicInformation_ProjectId1",
schema: "dbo",
table: "CustomerBasicInformation",
column: "ProjectId1");
migrationBuilder.AddForeignKey(
name: "FK_CustomerBasicInformation_Project_ProjectId1",
schema: "dbo",
table: "CustomerBasicInformation",
column: "ProjectId1",
principalSchema: "dbo",
principalTable: "Project",
principalColumn: "Id",
onDelete: ReferentialAction.Cascade);
}
```
Here is the previous migration that created the table:
```C#
migrationBuilder.CreateTable(
name: "CustomerBasicInformation",
schema: "dbo",
columns: table => new
{
Id = table.Column<Guid>(nullable: false),
IsDeleted = table.Column<bool>(nullable: false),
ProjectId = table.Column<Guid>(nullable: false),
OpportunityNumber = table.Column<string>(nullable: true),
CustomerLanguageId = table.Column<int>(nullable: true),
CustomerName = table.Column<string>(nullable: true),
CustomerAddress_NumberAndStreet = table.Column<string>(nullable: true),
CustomerAddress_City = table.Column<string>(nullable: true),
CustomerAddress_ProvinceId = table.Column<int>(nullable: true),
CustomerAddress_PostalCode = table.Column<string>(nullable: true),
CustomerAddress_UnitFloorSuite = table.Column<string>(nullable: true),
MarketTypeId = table.Column<int>(nullable: true),
MainBusinessPhoneNumber = table.Column<string>(nullable: true),
ContactGroupAdmin = table.Column<string>(nullable: true),
GroupAdminPhoneNumber = table.Column<string>(nullable: true),
ContactGroupAdminEmail = table.Column<string>(nullable: true),
CreatedAt = table.Column<DateTime>(nullable: false, defaultValueSql: "SYSUTCDATETIME()"),
CreatedBy = table.Column<string>(nullable: false, defaultValueSql: "SUSER_SNAME()"),
UpdatedAt = table.Column<DateTime>(nullable: false, defaultValueSql: "SYSUTCDATETIME()"),
UpdatedBy = table.Column<string>(nullable: false, defaultValueSql: "SUSER_SNAME()")
},
constraints: table =>
{
table.PrimaryKey("PK_CustomerBasicInformation", x => x.Id);
table.ForeignKey(
name: "FK_CustomerBasicInformation_Province_CustomerAddress_ProvinceId",
column: x => x.CustomerAddress_ProvinceId,
principalSchema: "dbo",
principalTable: "Province",
principalColumn: "Id",
onDelete: ReferentialAction.Restrict);
table.ForeignKey(
name: "FK_CustomerBasicInformation_Language_CustomerLanguageId",
column: x => x.CustomerLanguageId,
principalSchema: "dbo",
principalTable: "Language",
principalColumn: "Id",
onDelete: ReferentialAction.Restrict);
table.ForeignKey(
name: "FK_CustomerBasicInformation_MarketType_MarketTypeId",
column: x => x.MarketTypeId,
principalSchema: "dbo",
principalTable: "MarketType",
principalColumn: "Id",
onDelete: ReferentialAction.Restrict);
table.ForeignKey(
name: "FK_CustomerBasicInformation_Project_ProjectId",
column: x => x.ProjectId,
principalSchema: "dbo",
principalTable: "Project",
principalColumn: "Id",
onDelete: ReferentialAction.Cascade);
});
```
### Further technical details
EF Core version: 3.0.0
Database provider: Microsoft.EntityFrameworkCore.SqlServer
Target framework: .NET Core 3.1
Operating system: Windows
IDE: none, I used the dotnet ef CLI
| non_test | hasalternatekey migration generation create new column i added hasalternatekey on one property and it created a new column and but use the column has was expecting to be projectid to add the unique constraint the should not have been created why that is there we decided to use hasalternatekey because it can be use like the primary key to find the entity steps to reproduce c dbcontext onmodelcreating modelbuilder entity ownsone cbi cbi customeraddress modelbuilder entity hasalternatekey cbi cbi projectid c public class customerbasicinformation baseentity public guid projectid get set public virtual project project get set other properties migrations generated c protected override void up migrationbuilder migrationbuilder migrationbuilder dropforeignkey name fk customerbasicinformation project projectid schema dbo table customerbasicinformation migrationbuilder dropindex name ix customerbasicinformation projectid schema dbo table customerbasicinformation migrationbuilder addcolumn name schema dbo table customerbasicinformation nullable false defaultvalue new guid migrationbuilder adduniqueconstraint name ak customerbasicinformation projectid schema dbo table customerbasicinformation column projectid migrationbuilder createindex name ix customerbasicinformation schema dbo table customerbasicinformation column migrationbuilder addforeignkey name fk customerbasicinformation project schema dbo table customerbasicinformation column principalschema dbo principaltable project principalcolumn id ondelete referentialaction cascade here the previous migration that created the table c migrationbuilder createtable name customerbasicinformation schema dbo columns table new id table column nullable false isdeleted table column nullable false projectid table column nullable false opportunitynumber table column nullable true customerlanguageid table column nullable true customername table column nullable true customeraddress numberandstreet table column nullable true 
customeraddress city table column nullable true customeraddress provinceid table column nullable true customeraddress postalcode table column nullable true customeraddress unitfloorsuite table column nullable true markettypeid table column nullable true mainbusinessphonenumber table column nullable true contactgroupadmin table column nullable true groupadminphonenumber table column nullable true contactgroupadminemail table column nullable true createdat table column nullable false defaultvaluesql sysutcdatetime createdby table column nullable false defaultvaluesql suser sname updatedat table column nullable false defaultvaluesql sysutcdatetime updatedby table column nullable false defaultvaluesql suser sname constraints table table primarykey pk customerbasicinformation x x id table foreignkey name fk customerbasicinformation province customeraddress provinceid column x x customeraddress provinceid principalschema dbo principaltable province principalcolumn id ondelete referentialaction restrict table foreignkey name fk customerbasicinformation language customerlanguageid column x x customerlanguageid principalschema dbo principaltable language principalcolumn id ondelete referentialaction restrict table foreignkey name fk customerbasicinformation markettype markettypeid column x x markettypeid principalschema dbo principaltable markettype principalcolumn id ondelete referentialaction restrict table foreignkey name fk customerbasicinformation project projectid column x x projectid principalschema dbo principaltable project principalcolumn id ondelete referentialaction cascade further technical details ef core version database provider microsoft entityframeworkcore sqlserver target framework net core operating system windows ide none i used the dotnet ef cli | 0 |
8,656 | 11,796,228,753 | IssuesEvent | 2020-03-18 10:24:13 | TOMP-WG/TOMP-API | https://api.github.com/repos/TOMP-WG/TOMP-API | closed | A high-level roadmap for TO-MP API developments | process | In discussion with Ross we came to the conclusion that it would be beneficial to have a sort of roadmap that shows our development ideas (visually) and also shows, for example, data sharing standardization efforts that we are at least considering and might be undertaking.
| 1.0 | A high-level roadmap for TO-MP API developments - In discussion with Ross we came to the conclusion that it would be beneficial to have a sort of roadmap that shows our development ideas (visually) and also shows, for example, data sharing standardization efforts that we are at least considering and might be undertaking.
| non_test | a high level roadmap for to mp api developments in discussion with ross we came to the conclusion that it would be beneficial to have a sort of roadmap that shows our development ideas visually and also shows for example data sharing standardization efforts that are at least considering and might be undertaking | 0 |
274,180 | 23,817,389,997 | IssuesEvent | 2022-09-05 08:07:57 | Uuvana-Studios/longvinter-windows-client | https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client | closed | Inventory Disappearing Bug | Bug Not Tested | **Describe the bug**
A clear and concise description of what the bug is.
This has happened twice now: my inventory will have items in it, and then it suddenly disappears, and I'm unsure why. I check the ground to make sure it didn't somehow randomly drop, but it vanishes.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See an error
I don't know how to reproduce this bug; I was playing on Uuvana East 6 and it would happen at seemingly random times.
**Expected behavior**
A clear and concise description of what you expected to happen.
I expected my inventory to be there when I opened it, but everything was gone.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. Windows]
- Game Version [e.g. 1.0]
- Steam Version [e.g. 1.0]
Windows 1.0.8b
**Additional context**
Add any other context about the problem here.
| 1.0 | Inventory Disappearing Bug - **Describe the bug**
A clear and concise description of what the bug is.
This has happened twice now: my inventory will have items in it, and then it suddenly disappears, and I'm unsure why. I check the ground to make sure it didn't somehow randomly drop, but it vanishes.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See an error
I don't know how to reproduce this bug; I was playing on Uuvana East 6 and it would happen at seemingly random times.
**Expected behavior**
A clear and concise description of what you expected to happen.
I expected my inventory to be there when I opened it, but everything was gone.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. Windows]
- Game Version [e.g. 1.0]
- Steam Version [e.g. 1.0]
Windows 1.0.8b
**Additional context**
Add any other context about the problem here.
| test | inventory disappearing bug describe the bug a clear and concise description of what the bug is this has happened twice now my inventory will have items in it and then it disappears suddenly and i m unsure as to why i check the ground to make sure that it didn t somehow randomly drop but it vanishes to reproduce steps to reproduce the behavior go to click on scroll down to see an error i don t know how to reproduce this bug i was playing on uuvana east and it would happen seemingly at random times expected behavior a clear and concise description of what you expected to happen i expected my inventory to be there when i opened it but everything was gone screenshots if applicable add screenshots to help explain your problem desktop please complete the following information os game version steam version windows additional context add any other context about the problem here | 1 |
277,341 | 30,620,459,476 | IssuesEvent | 2023-07-24 07:54:15 | DEV-REPO-URIEL/TKA-5156 | https://api.github.com/repos/DEV-REPO-URIEL/TKA-5156 | opened | xercesImpl-2.8.0.jar: 3 vulnerabilities (highest severity is: 7.5) | Mend: dependency security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p></summary>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xercesImpl-2.8.0.jar,/home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/DEV-REPO-URIEL/TKA-5156/commit/f0b626911693d1589d6d7e5509dd3697961785f0">f0b626911693d1589d6d7e5509dd3697961785f0</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (xercesImpl version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2012-0881](https://www.mend.io/vulnerability-database/CVE-2012-0881) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | xercesImpl-2.8.0.jar | Direct | 2.12.0 | ✅ |
| [CVE-2013-4002](https://www.mend.io/vulnerability-database/CVE-2013-4002) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.9 | xercesImpl-2.8.0.jar | Direct | 2.9.0 | ✅ |
| [CVE-2009-2625](https://www.mend.io/vulnerability-database/CVE-2009-2625) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | xercesImpl-2.8.0.jar | Direct | 2.9.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2012-0881</summary>
### Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xercesImpl-2.8.0.jar,/home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **xercesImpl-2.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DEV-REPO-URIEL/TKA-5156/commit/f0b626911693d1589d6d7e5509dd3697961785f0">f0b626911693d1589d6d7e5509dd3697961785f0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Xerces2 Java Parser before 2.12.0 allows remote attackers to cause a denial of service (CPU consumption) via a crafted message to an XML service, which triggers hash table collisions.
<p>Publish Date: 2017-10-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2012-0881>CVE-2012-0881</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-0881">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-0881</a></p>
<p>Release Date: 2017-10-30</p>
<p>Fix Resolution: 2.12.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2013-4002</summary>
### Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xercesImpl-2.8.0.jar,/home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **xercesImpl-2.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DEV-REPO-URIEL/TKA-5156/commit/f0b626911693d1589d6d7e5509dd3697961785f0">f0b626911693d1589d6d7e5509dd3697961785f0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names.
<p>Publish Date: 2013-07-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-4002>CVE-2013-4002</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002</a></p>
<p>Release Date: 2013-07-23</p>
<p>Fix Resolution: 2.9.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2009-2625</summary>
### Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xercesImpl-2.8.0.jar,/home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **xercesImpl-2.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DEV-REPO-URIEL/TKA-5156/commit/f0b626911693d1589d6d7e5509dd3697961785f0">f0b626911693d1589d6d7e5509dd3697961785f0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
XMLScanner.java in Apache Xerces2 Java, as used in Sun Java Runtime Environment (JRE) in JDK and JRE 6 before Update 15 and JDK and JRE 5.0 before Update 20, and in other products, allows remote attackers to cause a denial of service (infinite loop and application hang) via malformed XML input, as demonstrated by the Codenomicon XML fuzzing framework.
<p>Publish Date: 2009-08-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2009-2625>CVE-2009-2625</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2009-2625">https://nvd.nist.gov/vuln/detail/CVE-2009-2625</a></p>
<p>Release Date: 2009-08-06</p>
<p>Fix Resolution: 2.9.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | True | xercesImpl-2.8.0.jar: 3 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p></summary>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xercesImpl-2.8.0.jar,/home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/DEV-REPO-URIEL/TKA-5156/commit/f0b626911693d1589d6d7e5509dd3697961785f0">f0b626911693d1589d6d7e5509dd3697961785f0</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (xercesImpl version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2012-0881](https://www.mend.io/vulnerability-database/CVE-2012-0881) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.5 | xercesImpl-2.8.0.jar | Direct | 2.12.0 | ✅ |
| [CVE-2013-4002](https://www.mend.io/vulnerability-database/CVE-2013-4002) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.9 | xercesImpl-2.8.0.jar | Direct | 2.9.0 | ✅ |
| [CVE-2009-2625](https://www.mend.io/vulnerability-database/CVE-2009-2625) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Medium | 5.3 | xercesImpl-2.8.0.jar | Direct | 2.9.0 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2012-0881</summary>
### Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xercesImpl-2.8.0.jar,/home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **xercesImpl-2.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DEV-REPO-URIEL/TKA-5156/commit/f0b626911693d1589d6d7e5509dd3697961785f0">f0b626911693d1589d6d7e5509dd3697961785f0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Xerces2 Java Parser before 2.12.0 allows remote attackers to cause a denial of service (CPU consumption) via a crafted message to an XML service, which triggers hash table collisions.
<p>Publish Date: 2017-10-30
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2012-0881>CVE-2012-0881</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-0881">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-0881</a></p>
<p>Release Date: 2017-10-30</p>
<p>Fix Resolution: 2.12.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2013-4002</summary>
### Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xercesImpl-2.8.0.jar,/home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **xercesImpl-2.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DEV-REPO-URIEL/TKA-5156/commit/f0b626911693d1589d6d7e5509dd3697961785f0">f0b626911693d1589d6d7e5509dd3697961785f0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
XMLscanner.java in Apache Xerces2 Java Parser before 2.12.0, as used in the Java Runtime Environment (JRE) in IBM Java 5.0 before 5.0 SR16-FP3, 6 before 6 SR14, 6.0.1 before 6.0.1 SR6, and 7 before 7 SR5 as well as Oracle Java SE 7u40 and earlier, Java SE 6u60 and earlier, Java SE 5.0u51 and earlier, JRockit R28.2.8 and earlier, JRockit R27.7.6 and earlier, Java SE Embedded 7u40 and earlier, and possibly other products allows remote attackers to cause a denial of service via vectors related to XML attribute names.
<p>Publish Date: 2013-07-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2013-4002>CVE-2013-4002</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-4002</a></p>
<p>Release Date: 2013-07-23</p>
<p>Fix Resolution: 2.9.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> CVE-2009-2625</summary>
### Vulnerable Library - <b>xercesImpl-2.8.0.jar</b></p>
<p>Xerces2 is the next generation of high performance, fully compliant XML parsers in the
Apache Xerces family. This new version of Xerces introduces the Xerces Native Interface (XNI),
a complete framework for building parser components and configurations that is extremely
modular and easy to program.</p>
<p>Library home page: <a href="http://xerces.apache.org/xerces2-j">http://xerces.apache.org/xerces2-j</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /target/easybuggy-1-SNAPSHOT/WEB-INF/lib/xercesImpl-2.8.0.jar,/home/wss-scanner/.m2/repository/xerces/xercesImpl/2.8.0/xercesImpl-2.8.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **xercesImpl-2.8.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DEV-REPO-URIEL/TKA-5156/commit/f0b626911693d1589d6d7e5509dd3697961785f0">f0b626911693d1589d6d7e5509dd3697961785f0</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
XMLScanner.java in Apache Xerces2 Java, as used in Sun Java Runtime Environment (JRE) in JDK and JRE 6 before Update 15 and JDK and JRE 5.0 before Update 20, and in other products, allows remote attackers to cause a denial of service (infinite loop and application hang) via malformed XML input, as demonstrated by the Codenomicon XML fuzzing framework.
<p>Publish Date: 2009-08-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2009-2625>CVE-2009-2625</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2009-2625">https://nvd.nist.gov/vuln/detail/CVE-2009-2625</a></p>
<p>Release Date: 2009-08-06</p>
<p>Fix Resolution: 2.9.0</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_test | xercesimpl jar vulnerabilities highest severity is vulnerable library xercesimpl jar is the next generation of high performance fully compliant xml parsers in the apache xerces family this new version of xerces introduces the xerces native interface xni a complete framework for building parser components and configurations that is extremely modular and easy to program library home page a href path to dependency file pom xml path to vulnerable library target easybuggy snapshot web inf lib xercesimpl jar home wss scanner repository xerces xercesimpl xercesimpl jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in xercesimpl version remediation available high xercesimpl jar direct medium xercesimpl jar direct medium xercesimpl jar direct details cve vulnerable library xercesimpl jar is the next generation of high performance fully compliant xml parsers in the apache xerces family this new version of xerces introduces the xerces native interface xni a complete framework for building parser components and configurations that is extremely modular and easy to program library home page a href path to dependency file pom xml path to vulnerable library target easybuggy snapshot web inf lib xercesimpl jar home wss scanner repository xerces xercesimpl xercesimpl jar dependency hierarchy x xercesimpl jar vulnerable library found in head commit a href found in base branch main vulnerability details apache java parser before allows remote attackers to cause a denial of service cpu consumption via a crafted message to an xml service which triggers hash table collisions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for 
more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library xercesimpl jar is the next generation of high performance fully compliant xml parsers in the apache xerces family this new version of xerces introduces the xerces native interface xni a complete framework for building parser components and configurations that is extremely modular and easy to program library home page a href path to dependency file pom xml path to vulnerable library target easybuggy snapshot web inf lib xercesimpl jar home wss scanner repository xerces xercesimpl xercesimpl jar dependency hierarchy x xercesimpl jar vulnerable library found in head commit a href found in base branch main vulnerability details xmlscanner java in apache java parser before as used in the java runtime environment jre in ibm java before before before and before as well as oracle java se and earlier java se and earlier java se and earlier jrockit and earlier jrockit and earlier java se embedded and earlier and possibly other products allows remote attackers to cause a denial of service via vectors related to xml attribute names publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library xercesimpl jar is the next generation of high performance fully compliant xml parsers in the apache xerces family this new version of xerces introduces the xerces native interface xni a complete framework for building parser components and configurations 
that is extremely modular and easy to program library home page a href path to dependency file pom xml path to vulnerable library target easybuggy snapshot web inf lib xercesimpl jar home wss scanner repository xerces xercesimpl xercesimpl jar dependency hierarchy x xercesimpl jar vulnerable library found in head commit a href found in base branch main vulnerability details xmlscanner java in apache java as used in sun java runtime environment jre in jdk and jre before update and jdk and jre before update and in other products allows remote attackers to cause a denial of service infinite loop and application hang via malformed xml input as demonstrated by the codenomicon xml fuzzing framework publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue | 0 |
340,731 | 10,277,818,745 | IssuesEvent | 2019-08-25 08:46:28 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | QGIS crashed when I use 3D View | 3D Bug Crash/Data Corruption Feedback High Priority | Author Name: **Wolfgang Figura** (Wolfgang Figura)
Original Redmine Issue: [22040](https://issues.qgis.org/issues/22040)
Affected QGIS version: 3.6.2
Redmine category:3d
---
When I use the 3D view, QGIS crashes reproducibly.
## User Feedback
When I use the 3D view, QGIS crashes reproducibly.
## Report Details
*Crash ID*: 343c12136d6018e5544d50f723e565c45088949d
*Stack Trace*
```
QgsWindow3DEngine::tr :
QgsFlatTerrainGenerator::writeXml :
QMetaObject::activate :
QgsMapLayer::repaintRequested :
QgsRasterLayer::refreshRendererIfNeeded :
QgsRasterLayerRenderer::QgsRasterLayerRenderer :
QgsRasterLayer::createMapRenderer :
QgsMapRendererJob::prepareJobs :
QgsMapRendererCustomPainterJob::start :
QgsMapRendererSequentialJob::start :
QgsTerrainGenerator::typeToString :
QgsFlatTerrainGenerator::writeXml :
QgsFlatTerrainGenerator::writeXml :
QgsCameraPose::setCenterPoint :
QgsWindow3DEngine::tr :
QgsWindow3DEngine::tr :
QMetaObject::activate :
QMetaObject::activate :
Qgs3DAlgorithms::qt_metacast :
QgsTerrainGenerator::typeToString :
QMetaObject::activate :
QMetaObject::activate :
QMetaObject::activate :
QFutureWatcherBase::event :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QCoreApplicationPrivate::sendPostedEvents :
qt_plugin_query_metadata :
QEventDispatcherWin32::processEvents :
TranslateMessageEx :
TranslateMessage :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QCoreApplication::exec :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
```
*QGIS Info*
QGIS Version: 3.6.2-Noosa
QGIS code revision: 656500e0c4
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 2.4.1
Running against GDAL: 2.4.1
*System Info*
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 6.1.7601
| 1.0 | QGIS crashed when I use 3D View - Author Name: **Wolfgang Figura** (Wolfgang Figura)
Original Redmine Issue: [22040](https://issues.qgis.org/issues/22040)
Affected QGIS version: 3.6.2
Redmine category:3d
---
When I use the 3D view, QGIS crashes reproducibly.
## User Feedback
When I use the 3D view, QGIS crashes reproducibly.
## Report Details
*Crash ID*: 343c12136d6018e5544d50f723e565c45088949d
*Stack Trace*
```
QgsWindow3DEngine::tr :
QgsFlatTerrainGenerator::writeXml :
QMetaObject::activate :
QgsMapLayer::repaintRequested :
QgsRasterLayer::refreshRendererIfNeeded :
QgsRasterLayerRenderer::QgsRasterLayerRenderer :
QgsRasterLayer::createMapRenderer :
QgsMapRendererJob::prepareJobs :
QgsMapRendererCustomPainterJob::start :
QgsMapRendererSequentialJob::start :
QgsTerrainGenerator::typeToString :
QgsFlatTerrainGenerator::writeXml :
QgsFlatTerrainGenerator::writeXml :
QgsCameraPose::setCenterPoint :
QgsWindow3DEngine::tr :
QgsWindow3DEngine::tr :
QMetaObject::activate :
QMetaObject::activate :
Qgs3DAlgorithms::qt_metacast :
QgsTerrainGenerator::typeToString :
QMetaObject::activate :
QMetaObject::activate :
QMetaObject::activate :
QFutureWatcherBase::event :
QApplicationPrivate::notify_helper :
QApplication::notify :
QgsApplication::notify :
QCoreApplication::notifyInternal2 :
QCoreApplicationPrivate::sendPostedEvents :
qt_plugin_query_metadata :
QEventDispatcherWin32::processEvents :
TranslateMessageEx :
TranslateMessage :
QEventDispatcherWin32::processEvents :
qt_plugin_query_metadata :
QEventLoop::exec :
QCoreApplication::exec :
main :
BaseThreadInitThunk :
RtlUserThreadStart :
```
*QGIS Info*
QGIS Version: 3.6.2-Noosa
QGIS code revision: 656500e0c4
Compiled against Qt: 5.11.2
Running against Qt: 5.11.2
Compiled against GDAL: 2.4.1
Running against GDAL: 2.4.1
*System Info*
CPU Type: x86_64
Kernel Type: winnt
Kernel Version: 6.1.7601
| non_test | qgis crashed when i use view author name wolfgang figura wolfgang figura original redmine issue affected qgis version redmine category when i use the view qgis crashes reproducibly user feedback when i use the view qgis crashes reproducibly report details crash id stack trace tr qgsflatterraingenerator writexml qmetaobject activate qgsmaplayer repaintrequested qgsrasterlayer refreshrendererifneeded qgsrasterlayerrenderer qgsrasterlayerrenderer qgsrasterlayer createmaprenderer qgsmaprendererjob preparejobs qgsmaprenderercustompainterjob start qgsmaprenderersequentialjob start qgsterraingenerator typetostring qgsflatterraingenerator writexml qgsflatterraingenerator writexml qgscamerapose setcenterpoint tr tr qmetaobject activate qmetaobject activate qt metacast qgsterraingenerator typetostring qmetaobject activate qmetaobject activate qmetaobject activate qfuturewatcherbase event qapplicationprivate notify helper qapplication notify qgsapplication notify qcoreapplication qcoreapplicationprivate sendpostedevents qt plugin query metadata processevents translatemessageex translatemessage processevents qt plugin query metadata qeventloop exec qcoreapplication exec main basethreadinitthunk rtluserthreadstart qgis info qgis version noosa qgis code revision compiled against qt running against qt compiled against gdal running against gdal system info cpu type kernel type winnt kernel version | 0 |
580,248 | 17,213,929,041 | IssuesEvent | 2021-07-19 09:04:01 | fitbenchmarking/fitbenchmarking | https://api.github.com/repos/fitbenchmarking/fitbenchmarking | closed | Hellinger nlls cost function -- Jacobian incorrect? | Bug Priority - high | It seems like the Jacobian for the Hellinger nlls cost function is incorrect. If we have

with Jacobian

then for the Hellinger residual:

we should have

However, the code actually calculates:
https://github.com/fitbenchmarking/fitbenchmarking/blob/c35a839af9dc75807c452ebcf2346c18fb4ce558/fitbenchmarking/jacobian/analytic_jacobian.py#L39-L41
This presumably explains the difference between accuracy in `scipy:2-point` and `analytic` Jacobian seen here:

| 1.0 | Hellinger nlls cost function -- Jacobian incorrect? - It seems like the Jacobian for the Hellinger nlls cost function is incorrect. If we have

with Jacobian

then for the Hellinger residual:

we should have

However, the code actually calculates:
https://github.com/fitbenchmarking/fitbenchmarking/blob/c35a839af9dc75807c452ebcf2346c18fb4ce558/fitbenchmarking/jacobian/analytic_jacobian.py#L39-L41
This presumably explains the difference between accuracy in `scipy:2-point` and `analytic` Jacobian seen here:

| non_test | hellinger nlls cost function jacobian incorrect it seems like the jacobian for the hellinger nlls cost function is incorrect if we have with jacobian then for the hellinger residual we should have however the code actually calculates this presumably explains the difference between accuracy in scipy point and analytic jacobian seen here | 0 |
53,478 | 6,331,339,815 | IssuesEvent | 2017-07-26 09:42:22 | manchesergit/material-ui | https://api.github.com/repos/manchesergit/material-ui | closed | Test Stepper produces ID | unit testing | Tests need to be added to prove that Stepper
- [x] produces a unique ID for itself if none is given
- [x] uses the ID given in the properties | 1.0 | Test Stepper produces ID - Tests need to be added to prove that Stepper
- [x] produces a unique ID for itself if none is given
- [x] uses the ID given in the properties | test | test stepper produces id tests need to be added to prove that stepper produces a unique id for itself if none is given uses the id given in the properties | 1 |
286,621 | 24,765,921,578 | IssuesEvent | 2022-10-22 14:20:09 | Thy-Vipe/BeastsOfBermuda-issues | https://api.github.com/repos/Thy-Vipe/BeastsOfBermuda-issues | opened | [Quality of life] Windowed Fullscreen & Double Monitor | Quality of life Graphics public_testing | _Originally written by **Kaesan | 76561198838417017**_
Game Version: 1.1.1615
*===== System Specs =====
CPU Brand: AMD Ryzen 7 3800X 8-Core Processor
Vendor: AuthenticAMD
GPU Brand: AMD Radeon RX 5600 XT
GPU Driver Info: Unknown
Num CPU Cores: 8
===================*
Context: **In general.**
Map: Islatitania
Even using windowed fullscreen/windowed borderless, it acts strange when i tab to my other monitor, the BoB tab instead of appearing normal shrinks to about 2/3 of my monitor size. | 1.0 | [Quality of life] Windowed Fullscreen & Double Monitor - _Originally written by **Kaesan | 76561198838417017**_
Game Version: 1.1.1615
*===== System Specs =====
CPU Brand: AMD Ryzen 7 3800X 8-Core Processor
Vendor: AuthenticAMD
GPU Brand: AMD Radeon RX 5600 XT
GPU Driver Info: Unknown
Num CPU Cores: 8
===================*
Context: **In general.**
Map: Islatitania
Even using windowed fullscreen/windowed borderless, it acts strange when i tab to my other monitor, the BoB tab instead of appearing normal shrinks to about 2/3 of my monitor size. | test | windowed fullscreen double monitor originally written by kaesan game version system specs cpu brand amd ryzen core processor vendor authenticamd gpu brand amd radeon rx xt gpu driver info unknown num cpu cores context in general map islatitania even using windowed fullscreen windowed borderless it acts strange when i tab to my other monitor the bob tab instead of appearing normal shrinks to about of my monitor size | 1 |
19,297 | 6,711,270,370 | IssuesEvent | 2017-10-13 02:39:38 | typeorm/typeorm | https://api.github.com/repos/typeorm/typeorm | closed | @UpdateDateColumn & @VersionColumn not updated when update through createQueryBuilder() | comp: query builder question | ```
::MyBaseEntity.ts::
import { BaseEntity, Column, CreateDateColumn, UpdateDateColumn, VersionColumn, Entity, JoinTable, ManyToMany, PrimaryGeneratedColumn, BeforeUpdate } from 'typeorm';
export abstract class MyBaseEntity extends BaseEntity {
@Column({ name: 'created_by' })
createdBy: number;
@CreateDateColumn({ name: 'created_at', nullable: false })
createdAt: Date;
@Column({ name: 'updated_by', nullable: true })
updatedBy: number;
@UpdateDateColumn({ name: 'updated_at', nullable: true })
updatedAt: Date;
@Column() orgId: number;
@VersionColumn() version: number;
}
```
```
::DemoParent.ts::
import { Column, Entity, JoinTable, OneToMany, ManyToOne, PrimaryGeneratedColumn } from 'typeorm';
import { MyBaseEntity } from '../MyBaseEntity';
@Entity()
export class DemoParent extends MyBaseEntity {
@PrimaryGeneratedColumn() id: number;
@Column({ length: 100 })
name: string;
@Column('text') description: string;
}
```
```
sample update code - 1st approach
//1. this approach won't auto update updated_at & version columns, follows: http://typeorm.io/#/update-query-builder
await getManager()
.createQueryBuilder()
.update(DemoParent)
.set({ name, description })
.where('id = :id', { id: 1 })
.execute();
```
```
sample update code - 2nd approach
//2. this approach will auto updates updated_at & version columns
const demoParentRepository = getRepository(DemoParent);
const latestParent = await demoParentRepository.findOneById(1);
if (latestParent) {
latestParent.name = name;
latestParent.description = description;
await demoParentRepository.save(latestParent);
}
```
Is it a bug in the 1st approach?
| 1.0 | @UpdateDateColumn & @VersionColumn not updated when update through createQueryBuilder() - ```
::MyBaseEntity.ts::
import { BaseEntity, Column, CreateDateColumn, UpdateDateColumn, VersionColumn, Entity, JoinTable, ManyToMany, PrimaryGeneratedColumn, BeforeUpdate } from 'typeorm';
export abstract class MyBaseEntity extends BaseEntity {
@Column({ name: 'created_by' })
createdBy: number;
@CreateDateColumn({ name: 'created_at', nullable: false })
createdAt: Date;
@Column({ name: 'updated_by', nullable: true })
updatedBy: number;
@UpdateDateColumn({ name: 'updated_at', nullable: true })
updatedAt: Date;
@Column() orgId: number;
@VersionColumn() version: number;
}
```
```
::DemoParent.ts::
import { Column, Entity, JoinTable, OneToMany, ManyToOne, PrimaryGeneratedColumn } from 'typeorm';
import { MyBaseEntity } from '../MyBaseEntity';
@Entity()
export class DemoParent extends MyBaseEntity {
@PrimaryGeneratedColumn() id: number;
@Column({ length: 100 })
name: string;
@Column('text') description: string;
}
```
```
sample update code - 1st approach
//1. this approach won't auto update updated_at & version columns, follows: http://typeorm.io/#/update-query-builder
await getManager()
.createQueryBuilder()
.update(DemoParent)
.set({ name, description })
.where('id = :id', { id: 1 })
.execute();
```
```
sample update code - 2nd approach
//2. this approach will auto updates updated_at & version columns
const demoParentRepository = getRepository(DemoParent);
const latestParent = await demoParentRepository.findOneById(1);
if (latestParent) {
latestParent.name = name;
latestParent.description = description;
await demoParentRepository.save(latestParent);
}
```
Is it a bug in the 1st approach?
| non_test | updatedatecolumn versioncolumn not updated when update through createquerybuilder mybaseentity ts import baseentity column createdatecolumn updatedatecolumn versioncolumn entity jointable manytomany primarygeneratedcolumn beforeupdate from typeorm export abstract class mybaseentity extends baseentity column name created by createdby number createdatecolumn name created at nullable false createdat date column name updated by nullable true updatedby number updatedatecolumn name updated at nullable true updatedat date column orgid number versioncolumn version number demoparent ts import column entity jointable onetomany manytoone primarygeneratedcolumn from typeorm import mybaseentity from mybaseentity entity export class demoparent extends mybaseentity primarygeneratedcolumn id number column length name string column text description string sample update code approach this approach wont auto update updated at version columns follows await getmanager createquerybuilder update demoparent set name description where id id id execute sample update code approach this approach will auto updates updated at version columns const demoparentrepository getrepository demoparent const latestparent await demoparentrepository findonebyid if latestparent latestparent name name latestparent description description await demoparentrepository save latestparent is it a bug on approach | 0 |
162,775 | 12,690,659,168 | IssuesEvent | 2020-06-21 13:18:25 | bitcoin/bitcoin | https://api.github.com/repos/bitcoin/bitcoin | closed | test: Remove sync_blocks global | Tests good first issue | sync_blocks has been made a member of the test framework in https://github.com/bitcoin/bitcoin/pull/15773, but the global helper has been left intact for now.
It has been suggested to move the implementation to the member function as well https://github.com/bitcoin/bitcoin/pull/15773#pullrequestreview-224235071
Implementing the functionality in a single place in the test framework allows the method to take account of contextual information, such as global timeout modifications. Also, having a global that does almost-but-not-quite the same thing is confusing for test writers and reviewers.
So the implementation should be moved, along with removing the global helper.
#### Useful skills:
Python3, basic understanding of the test framework
#### Want to work on this issue?
The purpose of the `good first issue` label is to highlight which issues are suitable for a new contributor without a deep understanding of the codebase.
You do not need to request permission to start working on this. You are encouraged to comment on the issue if you are planning to work on it. This will help other contributors monitor which issues are actively being addressed and is also an effective way to request assistance if and when you need it.
For guidance on contributing, please read [CONTRIBUTING.md](https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md) before opening your pull request.
| 1.0 | test: Remove sync_blocks global - sync_blocks has been made a member of the test framework in https://github.com/bitcoin/bitcoin/pull/15773, but the global helper has been left intact for now.
It has been suggested to move the implementation to the member function as well https://github.com/bitcoin/bitcoin/pull/15773#pullrequestreview-224235071
Implementing the functionality in a single place in the test framework allows the method to take account of contextual information, such as global timeout modifications. Also, having a global that does almost-but-not-quite the same thing is confusing for test writers and reviewers.
So the implementation should be moved, along with removing the global helper.
#### Useful skills:
Python3, basic understanding of the test framework
#### Want to work on this issue?
The purpose of the `good first issue` label is to highlight which issues are suitable for a new contributor without a deep understanding of the codebase.
You do not need to request permission to start working on this. You are encouraged to comment on the issue if you are planning to work on it. This will help other contributors monitor which issues are actively being addressed and is also an effective way to request assistance if and when you need it.
For guidance on contributing, please read [CONTRIBUTING.md](https://github.com/bitcoin/bitcoin/blob/master/CONTRIBUTING.md) before opening your pull request.
| test | test remove sync blocks global sync blocks has been made a member of the test framework in but the global helper has been left intact for now it has been suggested to move the implementation to the member function as well implementing the functionality in a single place in the test framework allows the method to take account of contextual information such as global timeout modifications also having a global that does almost but not quite the same thing is confusing for test writers and reviewers so the implementation should be moved along with removing the global helper useful skills basic understanding of the test framework want to work on this issue the purpose of the good first issue label is to highlight which issues are suitable for a new contributor without a deep understanding of the codebase you do not need to request permission to start working on this you are encouraged to comment on the issue if you are planning to work on it this will help other contributors monitor which issues are actively being addressed and is also an effective way to request assistance if and when you need it for guidance on contributing please read before opening your pull request | 1 |
313,298 | 26,915,617,633 | IssuesEvent | 2023-02-07 06:04:21 | sebastianbergmann/phpunit | https://api.github.com/repos/sebastianbergmann/phpunit | opened | Make it optional that suppressed E_DEPRECATED, E_NOTICE, E_STRICT, and E_WARNING errors are ignored | type/enhancement feature/test-runner | The [error handler](https://github.com/sebastianbergmann/phpunit/blob/b97bd1524a2db22d9caad27d078aca35cb77f596/src/Util/ErrorHandler.php#L45) used by PHPUnit's test runner currently ignores `E_DEPRECATED`, `E_NOTICE`, `E_STRICT`, and `E_WARNING` errors when they are suppressed using the error suppression operator (`@`):
```php
$suppressed = !($errorNumber & error_reporting());
if ($suppressed &&
in_array($errorNumber, [E_DEPRECATED, E_NOTICE, E_STRICT, E_WARNING], true)) {
return false;
}
```
Configuration options could be added to control whether suppressed `E_DEPRECATED`, `E_NOTICE`, `E_STRICT`, and `E_WARNING` errors should be ignored. | 1.0 | Make it optional that suppressed E_DEPRECATED, E_NOTICE, E_STRICT, and E_WARNING errors are ignored - The [error handler](https://github.com/sebastianbergmann/phpunit/blob/b97bd1524a2db22d9caad27d078aca35cb77f596/src/Util/ErrorHandler.php#L45) used by PHPUnit's test runner currently ignores `E_DEPRECATED`, `E_NOTICE`, `E_STRICT`, and `E_WARNING` errors when they are suppressed using the error suppression operator (`@`):
```php
$suppressed = !($errorNumber & error_reporting());
if ($suppressed &&
in_array($errorNumber, [E_DEPRECATED, E_NOTICE, E_STRICT, E_WARNING], true)) {
return false;
}
```
Configuration options could be added to control whether suppressed `E_DEPRECATED`, `E_NOTICE`, `E_STRICT`, and `E_WARNING` errors should be ignored. | test | make it optional that suppressed e deprecated e notice e strict and e warning errors are ignored the used by phpunit s test runner currently ignores e deprecated e notice e strict and e warning errors when they are suppressed using the error suppression operator php suppressed errornumber error reporting if suppressed in array errornumber true return false configuration options could be added to control whether suppressed e deprecated e notice e strict and e warning errors should be ignored | 1
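The suppression check quoted in the PHPUnit row above can be sketched as follows. This is a Python illustration of the same bitmask logic, not PHPUnit's actual code; the constant values are PHP's predefined error-level constants, and "suppressed" means the error's level bit is absent from the `error_reporting()` mask, which is what the `@` operator causes during the suppressed call.

```python
# Python sketch of the handler's early-return path (illustration only).
E_WARNING, E_NOTICE, E_STRICT, E_DEPRECATED = 2, 8, 2048, 8192  # PHP values

IGNORABLE = {E_DEPRECATED, E_NOTICE, E_STRICT, E_WARNING}

def should_ignore(error_number: int, error_reporting_mask: int) -> bool:
    """True when the handler returns false and the error is dropped."""
    suppressed = not (error_number & error_reporting_mask)
    return suppressed and error_number in IGNORABLE

print(should_ignore(E_WARNING, 0))          # suppressed warning -> True
print(should_ignore(E_WARNING, E_WARNING))  # reported warning -> False
print(should_ignore(1, 0))                  # E_ERROR is never ignored -> False
```

The proposed configuration options would, in effect, make membership in `IGNORABLE` configurable per error level rather than hard-coded.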
193,209 | 22,216,093,678 | IssuesEvent | 2022-06-08 01:55:17 | turkdevops/grafana | https://api.github.com/repos/turkdevops/grafana | opened | CVE-2022-29244 (Medium) detected in npm-6.14.6.tgz | security vulnerability | ## CVE-2022-29244 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-6.14.6.tgz</b></summary>
<p>a package manager for JavaScript</p>
<p>Library home page: <a href="https://registry.npmjs.org/npm/-/npm-6.14.6.tgz">https://registry.npmjs.org/npm/-/npm-6.14.6.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **npm-6.14.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/a1c271764655c7e3ff81126d5929b8dda6170bf4">a1c271764655c7e3ff81126d5929b8dda6170bf4</a></p>
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
npm pack ignores root-level .gitignore & .npmignore file exclusion directives when run in a workspace or with a workspace flag (ie. --workspaces, --workspace=<name>). Anyone who has run npm pack or npm publish with workspaces, as of v7.9.0 & v7.13.0 respectively, may be affected and have published files into the npm registry they did not intend to include. Users should upgrade to the patched version of npm (v8.11.0 or greater).
<p>Publish Date: 2022-04-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29244>CVE-2022-29244</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hj9c-8jmm-8c52">https://github.com/advisories/GHSA-hj9c-8jmm-8c52</a></p>
<p>Release Date: 2022-04-14</p>
<p>Fix Resolution: 8.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-29244 (Medium) detected in npm-6.14.6.tgz - ## CVE-2022-29244 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-6.14.6.tgz</b></summary>
<p>a package manager for JavaScript</p>
<p>Library home page: <a href="https://registry.npmjs.org/npm/-/npm-6.14.6.tgz">https://registry.npmjs.org/npm/-/npm-6.14.6.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **npm-6.14.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/grafana/commit/a1c271764655c7e3ff81126d5929b8dda6170bf4">a1c271764655c7e3ff81126d5929b8dda6170bf4</a></p>
<p>Found in base branch: <b>datasource-meta</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
npm pack ignores root-level .gitignore & .npmignore file exclusion directives when run in a workspace or with a workspace flag (ie. --workspaces, --workspace=<name>). Anyone who has run npm pack or npm publish with workspaces, as of v7.9.0 & v7.13.0 respectively, may be affected and have published files into the npm registry they did not intend to include. Users should upgrade to the patched version of npm (v8.11.0 or greater).
<p>Publish Date: 2022-04-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29244>CVE-2022-29244</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hj9c-8jmm-8c52">https://github.com/advisories/GHSA-hj9c-8jmm-8c52</a></p>
<p>Release Date: 2022-04-14</p>
<p>Fix Resolution: 8.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_test | cve medium detected in npm tgz cve medium severity vulnerability vulnerable library npm tgz a package manager for javascript library home page a href dependency hierarchy x npm tgz vulnerable library found in head commit a href found in base branch datasource meta vulnerability details npm pack ignores root level gitignore npmignore file exclusion directives when run in a workspace or with a workspace flag ie workspaces workspace anyone who has run npm pack or npm publish with workspaces as of respectively may be affected and have published files into the npm registry they did not intend to include users should upgrade to the patched version of npm or greater publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
307,370 | 26,526,929,648 | IssuesEvent | 2023-01-19 09:28:56 | ntop/ntopng | https://api.github.com/repos/ntop/ntopng | closed | Move Host Rules to a New Place | Enhancement Ready to Test | Host Rules are currently reported under the already crowded network interface menu.
This is to request to move such page under the left sidebar menu under "Hosts" | 1.0 | Move Host Rules to a New Place - Host Rules are currently reported under the already crowded network interface menu.
This is to request to move such page under the left sidebar menu under "Hosts" | test | move host rules to a new place host rules are currently reported under the already crowded network interface menu this is to request to move such page under the left sidebar menu under hosts | 1 |
27,080 | 4,273,825,440 | IssuesEvent | 2016-07-13 18:32:02 | phetsims/function-builder | https://api.github.com/repos/phetsims/function-builder | closed | Invisible stick figure | status:ready-to-test | @pixelzoom @amanda-phet if you apply Rainbow -> Warhol to the stick figure man, you get four empty squares. Is this another issue similar to #92 in that it really just comes down to colormaps and can't be solved in general?

For phetsims/tasks/issues/638. | 1.0 | Invisible stick figure - @pixelzoom @amanda-phet if you apply Rainbow -> Warhol to the stick figure man, you get four empty squares. Is this another issue similar to #92 in that it really just comes down to colormaps and can't be solved in general?

For phetsims/tasks/issues/638. | test | invisible stick figure pixelzoom amanda phet if you apply rainbow warhol to the stick figure man you get four empty squares is this another issue similar to in that it really just comes down to colormaps and can t be solved in general for phetsims tasks issues | 1 |
363,854 | 10,756,068,000 | IssuesEvent | 2019-10-31 10:25:49 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [TOPIC-GPIO]: Support for legacy interrupt configuration breaks new API contract | area: API area: GPIO bug has-pr priority: medium | In the original API development in #16648 and as merged to the topic branch `gpio_pin_configure()` was changed to not affect any interrupt configuration except `GPIO_INT_DEBOUNCE`.
In #19248 this decision was reverted, and `gpio_pin_configure()` was changed to include a call to `gpio_pin_interrupt_configure()`. https://github.com/zephyrproject-rtos/zephyr/pull/19248#discussion_r326809961 questioned this and was answered with the point that this is necessary to support legacy API that configures both pin and interrupt in the same call. It was described as to-be-removed-later, but there are no details/plans for how to accomplish that.
This breaks the promised new API: it is no longer possible to do things like reconfigure pull state without affecting the interrupt configuration, because presence of `GPIO_INT_DISABLED` cannot be distinguished from a set of flags that does not touch the interrupt configuration, so pin configuration while interrupts are enabled will disable (or reconfigure) the interrupt.
A potential solution is to change `GPIO_INT_DISABLED` to be a non-zero value, and so support a new `GPIO_INT_MODE_UNCHANGED` mode that has no effect. This would not work on legacy code that intentionally disables interrupts by not providing any interrupt configuration flags in the call to `gpio_pin_configure()`. | 1.0 | [TOPIC-GPIO]: Support for legacy interrupt configuration breaks new API contract - In the original API development in #16648 and as merged to the topic branch `gpio_pin_configure()` was changed to not affect any interrupt configuration except `GPIO_INT_DEBOUNCE`.
In #19248 this decision was reverted, and `gpio_pin_configure()` was changed to include a call to `gpio_pin_interrupt_configure()`. https://github.com/zephyrproject-rtos/zephyr/pull/19248#discussion_r326809961 questioned this and was answered with the point that this is necessary to support legacy API that configures both pin and interrupt in the same call. It was described as to-be-removed-later, but there are no details/plans for how to accomplish that.
This breaks the promised new API: it is no longer possible to do things like reconfigure pull state without affecting the interrupt configuration, because presence of `GPIO_INT_DISABLED` cannot be distinguished from a set of flags that does not touch the interrupt configuration, so pin configuration while interrupts are enabled will disable (or reconfigure) the interrupt.
A potential solution is to change `GPIO_INT_DISABLED` to be a non-zero value, and so support a new `GPIO_INT_MODE_UNCHANGED` mode that has no effect. This would not work on legacy code that intentionally disables interrupts by not providing any interrupt configuration flags in the call to `gpio_pin_configure()`. | non_test | support for legacy interrupt configuration breaks new api contract in the original api development in and as merged to the topic branch gpio pin configure was changed to not affect any interrupt configuration except gpio int debounce in this decision was reverted and gpio pin configure was changed to include a call to gpio pin interrupt configure questioned this and was answered with the point that this is necessary to support legacy api that configures both pin and interrupt in the same call it was described as to be removed later but there are no details plans for how to accomplish that this breaks the promised new api it is no longer possible to do things like reconfigure pull state without affecting the interrupt configuration because presence of gpio int disabled cannot be distinguished from a set of flags that does not touch the interrupt configuration so pin configuration while interrupts are enabled will disable or reconfigure the interrupt a potential solution is to change gpio int disabled to be a non zero value and so support a new gpio int mode unchanged mode that has no effect this would not work on legacy code that intentionally disables interrupts by not providing any interrupt configuration flags in the call to gpio pin configure | 0 |
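The flag-encoding change proposed in the Zephyr row above can be sketched like this. The names and bit positions here are hypothetical illustrations, not Zephyr's real API: the point is only that once `GPIO_INT_DISABLED` is a non-zero value, the absence of any interrupt flags (0) becomes a distinguishable "leave the interrupt configuration unchanged" request rather than an implicit disable.

```python
# Hypothetical flag scheme (illustration only, not Zephyr's actual header).
GPIO_INT_MODE_UNCHANGED = 0   # no interrupt flags present: no effect
GPIO_INT_DISABLED = 1 << 0    # explicit, distinguishable disable
GPIO_INT_ENABLE = 1 << 1

_INT_MASK = GPIO_INT_DISABLED | GPIO_INT_ENABLE
GPIO_PULL_UP = 1 << 4         # an unrelated pin-configuration flag

def interrupt_request(flags: int) -> str:
    """What a configure call would do to the interrupt under this scheme."""
    mode = flags & _INT_MASK
    if mode == GPIO_INT_MODE_UNCHANGED:
        return "unchanged"
    return "disable" if mode & GPIO_INT_DISABLED else "enable"

# Reconfiguring pull state alone no longer clobbers the interrupt setup:
print(interrupt_request(GPIO_PULL_UP))                      # unchanged
print(interrupt_request(GPIO_PULL_UP | GPIO_INT_DISABLED))  # disable
print(interrupt_request(GPIO_INT_ENABLE))                   # enable
```

As the issue notes, this scheme would still change behavior for legacy callers that disabled interrupts by passing no interrupt flags at all, which is the migration cost the proposal has to weigh.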