Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 844 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 12 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 248k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
163,907 | 13,929,632,140 | IssuesEvent | 2020-10-22 00:10:47 | aquiles23/mangahost_navbar_fix | https://api.github.com/repos/aquiles23/mangahost_navbar_fix | closed | criar Readme | documentation good first issue hacktoberfest português | estou realmente com preguiça de criar um readme, qualquer ajuda é bem vinda, e como se trata de um site pt-br é necessario que as instruções de uso do plugin sejam em português e opcionalmente seja traduzido para inglês. | 1.0 | criar Readme - estou realmente com preguiça de criar um readme, qualquer ajuda é bem vinda, e como se trata de um site pt-br é necessario que as instruções de uso do plugin sejam em português e opcionalmente seja traduzido para inglês. | non_priority | criar readme estou realmente com preguiça de criar um readme qualquer ajuda é bem vinda e como se trata de um site pt br é necessario que as instruções de uso do plugin sejam em português e opcionalmente seja traduzido para inglês | 0 |
164,853 | 13,961,396,384 | IssuesEvent | 2020-10-25 03:02:12 | patrick204nqh/linux | https://api.github.com/repos/patrick204nqh/linux | opened | linux command | documentation good first issue | > https://www.hostinger.com/tutorials/linux-commands
1. **pwd** (find out the path of the current working directory (folder) you’re in)
2. **cd**
3. **ls**
4. **cat**
5. **cp**
6. **mv**
7. **mkdir**
8. **rmdir** (rmdir only allows you to delete empty directories)
9. **rm**
10. **touch**
11. **locate** ( like the search command in Windows)
12. **find** (searches for files and directories)
13. **grep** (search through all the text in a given file)
14. **sudo**
15. **df** (report on the system’s disk space usage)
16. **du** (the disk usage summary will show disk block numbers instead of the usual size format)
17. **head** (view the first lines of any text file)
18. **tail** (display the last ten lines of a text file)
19. **diff** (compares the contents of two files line by line)
20. **tar** (archive multiple files into a tarball — a common Linux file format that is similar to zip format, with compression being optional)
21. **chown** (change or transfer the ownership of a file to the specified username)
22. **jobs** (display all current jobs along with their statuses)
23. **kill** (If you have an unresponsive program, you can terminate it manually)
24. **ping** (check your connectivity status to a server)
25. **wget** (you can even download files from the internet)
26. **uname** (will print detailed information about your Linux system like the machine name, operating system, kernel, and so on)
27. **top** (will display a list of running processes and how much CPU each process uses)
28. **chmod** (used to change the read, write, and execute permissions of files and directories)
29. **history** (you’ll quickly notice that you can run hundreds of commands every day)
30. man
31. **echo** (This command is used to move some data into a file)
32. **zip, unzip** (compress/extract from a zip archive)
33. **hostname** (If you want to know the name of your host/network simply type hostname. Adding a -I to the end will display the IP address of your network)
34. **useradd, userdel:**
Since Linux is a multi-user system, this means more than one person can interact with the same system at the same time. useradd is used to create a new user, while passwd is adding a password to that user’s account. To add a new person named John type, useradd John and then to add his password type, passwd 123456789.
To remove a user is very similar to adding a new user. To delete the users account type, userdel UserName
### Bonus Tips and Tricks
Use the **clear** command to clean out the terminal if it is getting cluttered with too many past commands.
Try the **TAB** button to autofill what you are typing. For example, if you need to type Documents, begin to type a command (let’s go with cd Docu, then hit the **TAB** key) and the terminal will fill in the rest, showing you cd Documents.
**Ctrl+C** and **Ctrl+Z** are used to stop any command that is currently working. **Ctrl+C** will stop and terminate the command, while **Ctrl+Z** will simply pause the command.
If you accidental freeze your terminal by using **Ctrl+S**, simply undo this with the unfreeze **Ctrl+Q**.
**Ctrl+A** moves you to the beginning of the line while **Ctrl+E** moves you to the end.
You can run multiple commands in one single command by using the “;” to separate them. For example Command1; Command2; Command3. Or use **&&** if you only want the next command to run when the first one is successful.
| 1.0 | linux command - > https://www.hostinger.com/tutorials/linux-commands
1. **pwd** (find out the path of the current working directory (folder) you’re in)
2. **cd**
3. **ls**
4. **cat**
5. **cp**
6. **mv**
7. **mkdir**
8. **rmdir** (rmdir only allows you to delete empty directories)
9. **rm**
10. **touch**
11. **locate** ( like the search command in Windows)
12. **find** (searches for files and directories)
13. **grep** (search through all the text in a given file)
14. **sudo**
15. **df** (report on the system’s disk space usage)
16. **du** (the disk usage summary will show disk block numbers instead of the usual size format)
17. **head** (view the first lines of any text file)
18. **tail** (display the last ten lines of a text file)
19. **diff** (compares the contents of two files line by line)
20. **tar** (archive multiple files into a tarball — a common Linux file format that is similar to zip format, with compression being optional)
21. **chown** (change or transfer the ownership of a file to the specified username)
22. **jobs** (display all current jobs along with their statuses)
23. **kill** (If you have an unresponsive program, you can terminate it manually)
24. **ping** (check your connectivity status to a server)
25. **wget** (you can even download files from the internet)
26. **uname** (will print detailed information about your Linux system like the machine name, operating system, kernel, and so on)
27. **top** (will display a list of running processes and how much CPU each process uses)
28. **chmod** (used to change the read, write, and execute permissions of files and directories)
29. **history** (you’ll quickly notice that you can run hundreds of commands every day)
30. man
31. **echo** (This command is used to move some data into a file)
32. **zip, unzip** (compress/extract from a zip archive)
33. **hostname** (If you want to know the name of your host/network simply type hostname. Adding a -I to the end will display the IP address of your network)
34. **useradd, userdel:**
Since Linux is a multi-user system, this means more than one person can interact with the same system at the same time. useradd is used to create a new user, while passwd is adding a password to that user’s account. To add a new person named John type, useradd John and then to add his password type, passwd 123456789.
To remove a user is very similar to adding a new user. To delete the users account type, userdel UserName
### Bonus Tips and Tricks
Use the **clear** command to clean out the terminal if it is getting cluttered with too many past commands.
Try the **TAB** button to autofill what you are typing. For example, if you need to type Documents, begin to type a command (let’s go with cd Docu, then hit the **TAB** key) and the terminal will fill in the rest, showing you cd Documents.
**Ctrl+C** and **Ctrl+Z** are used to stop any command that is currently working. **Ctrl+C** will stop and terminate the command, while **Ctrl+Z** will simply pause the command.
If you accidental freeze your terminal by using **Ctrl+S**, simply undo this with the unfreeze **Ctrl+Q**.
**Ctrl+A** moves you to the beginning of the line while **Ctrl+E** moves you to the end.
You can run multiple commands in one single command by using the “;” to separate them. For example Command1; Command2; Command3. Or use **&&** if you only want the next command to run when the first one is successful.
| non_priority | linux command pwd find out the path of the current working directory folder you’re in cd ls cat cp mv mkdir rmdir rmdir only allows you to delete empty directories rm touch locate like the search command in windows find searches for files and directories grep search through all the text in a given file sudo df report on the system’s disk space usage du the disk usage summary will show disk block numbers instead of the usual size format head view the first lines of any text file tail display the last ten lines of a text file diff compares the contents of two files line by line tar archive multiple files into a tarball — a common linux file format that is similar to zip format with compression being optional chown change or transfer the ownership of a file to the specified username jobs display all current jobs along with their statuses kill if you have an unresponsive program you can terminate it manually ping check your connectivity status to a server wget you can even download files from the internet uname will print detailed information about your linux system like the machine name operating system kernel and so on top will display a list of running processes and how much cpu each process uses chmod used to change the read write and execute permissions of files and directories history you’ll quickly notice that you can run hundreds of commands every day man echo this command is used to move some data into a file zip unzip compress extract from a zip archive hostname if you want to know the name of your host network simply type hostname adding a i to the end will display the ip address of your network useradd userdel since linux is a multi user system this means more than one person can interact with the same system at the same time useradd is used to create a new user while passwd is adding a password to that user’s account to add a new person named john type useradd john and then to add his password type passwd to remove a user is very similar to adding a new user to delete the users account type userdel username bonus tips and tricks use the clear command to clean out the terminal if it is getting cluttered with too many past commands try the tab button to autofill what you are typing for example if you need to type documents begin to type a command let’s go with cd docu then hit the tab key and the terminal will fill in the rest showing you cd documents ctrl c and ctrl z are used to stop any command that is currently working ctrl c will stop and terminate the command while ctrl z will simply pause the command if you accidental freeze your terminal by using ctrl s simply undo this with the unfreeze ctrl q ctrl a moves you to the beginning of the line while ctrl e moves you to the end you can run multiple commands in one single command by using the “ ” to separate them for example or use if you only want the next command to run when the first one is successful | 0 |
86,552 | 10,761,930,139 | IssuesEvent | 2019-10-31 22:00:31 | SharePoint/sp-dev-docs | https://api.github.com/repos/SharePoint/sp-dev-docs | closed | removeNavLink doesn't work | area:docs-comment area:other area:site-design status:answered | I create script with following two actions:
{
"verb": "removeNavLink",
"displayName": "Site contents",
"isWebRelative": true
},
{
"verb": "removeNavLink",
"displayName": "Pages",
"isWebRelative": true
}
And, right now there are no specific permissions to scripts, I have associated this with hub site and applied new hub settings, but when creating new sites this does not get applied, nav links still have "Pages" and "Site contents"
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 68267476-ee8d-7b95-446b-711ddb683da2
* Version Independent ID: aa099b9d-48db-5513-11c7-32c49a27e5be
* Content: [Site design JSON schema](https://docs.microsoft.com/en-us/sharepoint/dev/declarative-customization/site-design-json-schema)
* Content Source: [docs/declarative-customization/site-design-json-schema.md](https://github.com/SharePoint/sp-dev-docs/blob/master/docs/declarative-customization/site-design-json-schema.md)
* Product: **sharepoint**
* GitHub Login: @spdevdocs
* Microsoft Alias: **spdevdocs** | 1.0 | removeNavLink doesn't work - I create script with following two actions:
{
"verb": "removeNavLink",
"displayName": "Site contents",
"isWebRelative": true
},
{
"verb": "removeNavLink",
"displayName": "Pages",
"isWebRelative": true
}
And, right now there are no specific permissions to scripts, I have associated this with hub site and applied new hub settings, but when creating new sites this does not get applied, nav links still have "Pages" and "Site contents"
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 68267476-ee8d-7b95-446b-711ddb683da2
* Version Independent ID: aa099b9d-48db-5513-11c7-32c49a27e5be
* Content: [Site design JSON schema](https://docs.microsoft.com/en-us/sharepoint/dev/declarative-customization/site-design-json-schema)
* Content Source: [docs/declarative-customization/site-design-json-schema.md](https://github.com/SharePoint/sp-dev-docs/blob/master/docs/declarative-customization/site-design-json-schema.md)
* Product: **sharepoint**
* GitHub Login: @spdevdocs
* Microsoft Alias: **spdevdocs** | non_priority | removenavlink doesn t work i create script with following two actions verb removenavlink displayname site contents iswebrelative true verb removenavlink displayname pages iswebrelative true and right now there are no specific permissions to scripts i have associated this with hub site and applied new hub settings but when creating new sites this does not get applied nav links still have pages and site contents document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product sharepoint github login spdevdocs microsoft alias spdevdocs | 0 |
154,705 | 24,329,920,711 | IssuesEvent | 2022-09-30 18:22:06 | Opentrons/opentrons | https://api.github.com/repos/Opentrons/opentrons | closed | Update transfer volume tooltip copy to be inclusive of 1 to many transfers | protocol designer design | ## Background
Our current tooltip copy only works if a user is building a 1-to-many or an n:n transfer.

## Acceptance criteria
Update copy to read "Volume to dispense in each well in a 1-to-many or n-to-n transfer. Volume to aspirate from each well in a many-to-1 transfer." | 2.0 | Update transfer volume tooltip copy to be inclusive of 1 to many transfers - ## Background
Our current tooltip copy only works if a user is building a 1-to-many or an n:n transfer.

## Acceptance criteria
Update copy to read "Volume to dispense in each well in a 1-to-many or n-to-n transfer. Volume to aspirate from each well in a many-to-1 transfer." | non_priority | update transfer volume tooltip copy to be inclusive of to many transfers background our current tooltip copy only works if a user is building a to many or an n n transfer acceptance criteria update copy to read volume to dispense in each well in a to many or n to n transfer volume to aspirate from each well in a many to transfer | 0 |
230,354 | 17,613,741,939 | IssuesEvent | 2021-08-18 07:03:10 | ContextLab/davos | https://api.github.com/repos/ContextLab/davos | closed | add brief "How it works" section to the docs | documentation | - overview/basic/non-context-specific info
- context-dependent info
- google colab
- jupyter notebooks
- Pure (non-interactive) Python | 1.0 | add brief "How it works" section to the docs - - overview/basic/non-context-specific info
- context-dependent info
- google colab
- jupyter notebooks
- Pure (non-interactive) Python | non_priority | add brief how it works section to the docs overview basic non context specific info context dependent info google colab jupyter notebooks pure non interactive python | 0 |
20,480 | 4,552,925,143 | IssuesEvent | 2016-09-13 01:32:18 | dcjones/Gadfly.jl | https://api.github.com/repos/dcjones/Gadfly.jl | opened | Docs are down | documentation | Both https://dcjones.github.io/Gadfly.jl/latest and http://gadflyjl.org are stuck in redirect loops. Not much that I can do until @dcjones fixes this. In the mean time a somewhat functionally complete copy is available at http://tamasnagy.com/Gadfly.jl/
| 1.0 | Docs are down - Both https://dcjones.github.io/Gadfly.jl/latest and http://gadflyjl.org are stuck in redirect loops. Not much that I can do until @dcjones fixes this. In the mean time a somewhat functionally complete copy is available at http://tamasnagy.com/Gadfly.jl/
| non_priority | docs are down both and are stuck in redirect loops not much that i can do until dcjones fixes this in the mean time a somewhat functionally complete copy is available at | 0 |
280,736 | 30,849,849,509 | IssuesEvent | 2023-08-02 15:56:56 | autismdrive/autismdrive | https://api.github.com/repos/autismdrive/autismdrive | opened | Update backend (pypi) packages | Chore Security | Update backend (pypi) packages and fix any resulting breaking changes | True | Update backend (pypi) packages - Update backend (pypi) packages and fix any resulting breaking changes | non_priority | update backend pypi packages update backend pypi packages and fix any resulting breaking changes | 0 |
63,056 | 12,278,906,238 | IssuesEvent | 2020-05-08 10:59:31 | Pokecube-Development/Pokecube-Issues-and-Wiki | https://api.github.com/repos/Pokecube-Development/Pokecube-Issues-and-Wiki | closed | villages, trainers, scattered/ruin structures and mirage spots spawning in dimensions | 1.14.x 1.15.2 Bug - Code Bug - Resources Fixed | #### Issue Description:
I can find villages, trainers, scattered/ruin structures and mirage spots all spawn in the end and nether
Same for the Ultra wormhole, except they also have meteors spawning in there, might be spawning in the end and nether too, but I haven't seen any yet.
#### What happens:
I go into the end, and find villages, trainers, scattered/ruin structures and mirage spots,
for the nether I find the dungeon structure and trainers. For the Ultra wormhole, I can find meteors
#### What you expected to happen:
to not see any villages, trainers, scattered/ruin structures and mirage spots
in these dimensions
#### Steps to reproduce:
1. make a nether portal or end portal
2. go into that dimension
3. and if you search around you can find villages, trainers, scattered/ruin structures and mirage spots (for faster find do /locate)
...
____
#### Affected Versions (Do *not* use "latest"): Replace with a list of all mods you have in.
Affected versions
Pokecube AIO: 2.0.10, 2.0.11 and 2.0.12
My mods-
- Pokecube AIO: 2.0.12
- JustEnoughitems version 6.0.0.2
- Biomesoplenty 10.0.0.345
- Journeymap 5.7.0beta1
- Kiwi 2.6.5
- snow is real magic 1.7.4
- serene seasons 3.0.0.68
- thuttech 8.0.2
- mixinbootstrap 1.0.2
- Minecraft: 1.15.2
- Forge: 31.1.24
- Minecraft: 1.15.2
| 1.0 | villages, trainers, scattered/ruin structures and mirage spots spawning in dimensions - #### Issue Description:
I can find villages, trainers, scattered/ruin structures and mirage spots all spawn in the end and nether
Same for the Ultra wormhole, except they also have meteors spawning in there, might be spawning in the end and nether too, but I haven't seen any yet.
#### What happens:
I go into the end, and find villages, trainers, scattered/ruin structures and mirage spots,
for the nether I find the dungeon structure and trainers. For the Ultra wormhole, I can find meteors
#### What you expected to happen:
to not see any villages, trainers, scattered/ruin structures and mirage spots
in these dimensions
#### Steps to reproduce:
1. make a nether portal or end portal
2. go into that dimension
3. and if you search around you can find villages, trainers, scattered/ruin structures and mirage spots (for faster find do /locate)
...
____
#### Affected Versions (Do *not* use "latest"): Replace with a list of all mods you have in.
Affected versions
Pokecube AIO: 2.0.10, 2.0.11 and 2.0.12
My mods-
- Pokecube AIO: 2.0.12
- JustEnoughitems version 6.0.0.2
- Biomesoplenty 10.0.0.345
- Journeymap 5.7.0beta1
- Kiwi 2.6.5
- snow is real magic 1.7.4
- serene seasons 3.0.0.68
- thuttech 8.0.2
- mixinbootstrap 1.0.2
- Minecraft: 1.15.2
- Forge: 31.1.24
- Minecraft: 1.15.2
| non_priority | villages trainers scattered ruin structures and mirage spots spawning in dimensions issue description i can find villages trainers scattered ruin structures and mirage spots all spawn in the end and nether same for the ultra wormhole except they also have meteors spawning in there might be spawning in the end and nether too but i haven t seen any yet what happens i go into the end and find villages trainers scattered ruin structures and mirage spots for the nether i find the dungeon structure and trainers for the ultra wormhole i can find meteors what you expected to happen to not see any villages trainers scattered ruin structures and mirage spots in these dimensions steps to reproduce make a nether portal or end portal go into that dimension and if you search around you can find villages trainers scattered ruin structures and mirage spots for faster find do locate affected versions do not use latest replace with a list of all mods you have in affected versions pokecube aio and my mods pokecube aio justenoughitems version biomesoplenty journeymap kiwi snow is real magic serene seasons thuttech mixinbootstrap minecraft forge minecraft | 0 |
290,157 | 25,040,486,791 | IssuesEvent | 2022-11-04 20:10:38 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: restoreTPCCIncLatest/nodes=10 failed | C-test-failure O-robot O-roachtest T-disaster-recovery branch-release-22.2 | roachtest.restoreTPCCIncLatest/nodes=10 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6495720?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6495720?buildTab=artifacts#/restoreTPCCIncLatest/nodes=10) on release-22.2 @ [aac413cd4ca62f3392029b42219ebb2788979fb8](https://github.com/cockroachdb/cockroach/commits/aac413cd4ca62f3392029b42219ebb2788979fb8):
```
Wraps: (2) output in run_091015.901604956_n1_cockroach_sql
Wraps: (3) ./cockroach sql --insecure -e "
| RESTORE FROM '/2022/09/07-000000.00' IN
| 'gs://cockroach-fixtures/tpcc-incrementals-22.2?AUTH=implicit'
| AS OF SYSTEM TIME '2022-09-07 12:15:00'" returned
| stderr:
| ERROR: backup from version 1000022.1-70 is newer than current version 22.1-70
| Failed running "sql"
|
| stdout:
Wraps: (4) COMMAND_PROBLEM
Wraps: (5) Node 1. Command with error:
| ``````
| ./cockroach sql --insecure -e "
| RESTORE FROM '/2022/09/07-000000.00' IN
| 'gs://cockroach-fixtures/tpcc-incrementals-22.2?AUTH=implicit'
| AS OF SYSTEM TIME '2022-09-07 12:15:00'"
| ``````
Wraps: (6) exit status 1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.Cmd (5) *hintdetail.withDetail (6) *exec.ExitError
monitor.go:127,restore.go:474,test_runner.go:908: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerRestore.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/restore.go:474
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:908
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| main/pkg/cmd/roachtest/monitor.go:80
| runtime.doInit
| GOROOT/src/runtime/proc.go:6340
| runtime.main
| GOROOT/src/runtime/proc.go:233
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/bulk-io
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*restoreTPCCIncLatest/nodes=10.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-19677 | 2.0 | roachtest: restoreTPCCIncLatest/nodes=10 failed - roachtest.restoreTPCCIncLatest/nodes=10 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6495720?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/6495720?buildTab=artifacts#/restoreTPCCIncLatest/nodes=10) on release-22.2 @ [aac413cd4ca62f3392029b42219ebb2788979fb8](https://github.com/cockroachdb/cockroach/commits/aac413cd4ca62f3392029b42219ebb2788979fb8):
```
Wraps: (2) output in run_091015.901604956_n1_cockroach_sql
Wraps: (3) ./cockroach sql --insecure -e "
| RESTORE FROM '/2022/09/07-000000.00' IN
| 'gs://cockroach-fixtures/tpcc-incrementals-22.2?AUTH=implicit'
| AS OF SYSTEM TIME '2022-09-07 12:15:00'" returned
| stderr:
| ERROR: backup from version 1000022.1-70 is newer than current version 22.1-70
| Failed running "sql"
|
| stdout:
Wraps: (4) COMMAND_PROBLEM
Wraps: (5) Node 1. Command with error:
| ``````
| ./cockroach sql --insecure -e "
| RESTORE FROM '/2022/09/07-000000.00' IN
| 'gs://cockroach-fixtures/tpcc-incrementals-22.2?AUTH=implicit'
| AS OF SYSTEM TIME '2022-09-07 12:15:00'"
| ``````
Wraps: (6) exit status 1
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.Cmd (5) *hintdetail.withDetail (6) *exec.ExitError
monitor.go:127,restore.go:474,test_runner.go:908: monitor failure: monitor task failed: t.Fatal() was called
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerRestore.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/restore.go:474
| main.(*testRunner).runTest.func2
| main/pkg/cmd/roachtest/test_runner.go:908
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
Wraps: (4) monitor task failed
Wraps: (5) attached stack trace
-- stack trace:
| main.init
| main/pkg/cmd/roachtest/monitor.go:80
| runtime.doInit
| GOROOT/src/runtime/proc.go:6340
| runtime.main
| GOROOT/src/runtime/proc.go:233
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1594
Wraps: (6) t.Fatal() was called
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/bulk-io
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*restoreTPCCIncLatest/nodes=10.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-19677 | non_priority | roachtest restoretpccinclatest nodes failed roachtest restoretpccinclatest nodes with on release wraps output in run cockroach sql wraps cockroach sql insecure e restore from in gs cockroach fixtures tpcc incrementals auth implicit as of system time returned stderr error backup from version is newer than current version failed running sql stdout wraps command problem wraps node command with error cockroach sql insecure e restore from in gs cockroach fixtures tpcc incrementals auth implicit as of system time wraps exit status error types withstack withstack errutil withprefix cluster withcommanddetails errors cmd hintdetail withdetail exec exiterror monitor go restore go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitorimpl waite main pkg cmd roachtest monitor go main monitorimpl wait main pkg cmd roachtest monitor go github com cockroachdb cockroach pkg cmd roachtest tests registerrestore github com cockroachdb cockroach pkg cmd roachtest tests restore go main testrunner runtest main pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitorimpl wait main pkg cmd roachtest monitor go wraps monitor task failed wraps attached stack trace stack trace main init main pkg cmd roachtest monitor go runtime doinit goroot src runtime proc go runtime main goroot src runtime proc go runtime goexit goroot src runtime asm s wraps t fatal was called error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror parameters roachtest cloud gce roachtest cpu roachtest ssd help see see cc cockroachdb bulk io jira issue crdb | 0 |
101,323 | 21,649,777,303 | IssuesEvent | 2022-05-06 08:08:31 | ProteinsWebTeam/interpro7-client | https://api.github.com/repos/ProteinsWebTeam/interpro7-client | closed | Menu highlight issue in InterProScan result page | bug On code review | The menu for InterProScan search results (Overview / Entries /Sequence) makes the current selected tab name unreadable.

| 1.0 | Menu highlight issue in InterProScan result page - The menu for InterProScan search results (Overview / Entries /Sequence) makes the current selected tab name unreadable.

| non_priority | menu highlight issue in interproscan result page the menu for interproscan search results overview entries sequence makes the current selected tab name unreadable | 0 |
7,304 | 9,552,274,979 | IssuesEvent | 2019-05-02 16:14:34 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | reopened | incompatible_use_python_toolchains: The Python runtime is obtained from a toolchain rather than a flag | breaking-change-0.26 incompatible-change migration-0.25 team-Rules-Python | **Flag:** `--incompatible_use_python_toolchains`
**Available since:** 0.25
**Will be flipped in:** ???
**Feature tracking issue:** #7375
## Motivation
For background on toolchains, see [here](https://docs.bazel.build/versions/master/toolchains.html).
Previously, the Python runtime (i.e., the interpreter used to execute `py_binary` and `py_test` targets) could only be controlled globally, and required passing flags like `--python_top` to the bazel invocation. This is out-of-step with our ambitions for flagless builds and remote-execution-friendly toolchains. Using the toolchain mechanism means that each Python target can automatically select an appropriate runtime based on what target platform it is being built for.
## Change
Enabling this flag triggers the following changes.
1. Executable Python targets will retrieve their runtime from the new Python toolchain.
2. It is forbidden to set any of the legacy flags `--python_top`, `--python2_path`, or `--python3_path`. Note that the last two of those are already no-ops. It is also strongly discouraged to set `--python_path`, but this flag will be removed in a later cleanup due to #7901.
3. The `python_version` attribute of the [`py_runtime`](https://docs.bazel.build/versions/master/be/python.html#py_runtime) rule becomes mandatory. It must be either `"PY2"` or `"PY3"`, indicating which kind of runtime it is describing.
For builds that rely on a Python interpreter installed on the system, it is recommended that users (or platform rule authors) ensure that each platform has an appropriate Python toolchain definition.
If no Python toolchain is explicitly registered, on non-Windows platforms there is a new default toolchain that automatically detects and executes an interpreter (of the appropriate version) from `PATH`. This resolves longstanding issue #4815. A Windows version of this toolchain will come later (#7844).
## Migration
If you were relying on `--python_top`, and you want your whole build to continue to use the `py_runtime` you were pointing it to, you just need to follow the steps below to define a `py_runtime_pair` and `toolchain`, and register this toolchain in your workspace. So long as you don't add any platform constraints that would prevent your toolchain from matching, it will take precedence over the default toolchain described above.
If you were relying on `--python_path`, and you want your whole build to use the interpreter located at the absolute path you were passing in this flag, the steps are the same, except you also have to define a new `py_runtime` with the `interpreter_path` attribute set to that path.
Otherwise, if you were only relying on the default behavior that resolved `python` from `PATH`, just enjoy the new default behavior, which is:
1. First try `python2` or `python3` (depending on the target's version)
2. Then fall back on `python` if not found
3. Fail-fast if the interpreter that is found doesn't match the target's major Python version (`PY2` or `PY3`), as per the `python -V` flag.
On Windows the default behavior is currently unchanged (#7844).
Example toolchain definition:
```python
# In your BUILD file...
load("@bazel_tools//tools/python/toolchain.bzl", "py_runtime_pair")
py_runtime(
name = "my_py2_runtime",
interpreter_path = "/system/python2",
python_version = "PY2",
)
py_runtime(
name = "my_py3_runtime",
interpreter_path = "/system/python3",
python_version = "PY3",
)
py_runtime_pair(
name = "my_py_runtime_pair",
py2_runtime = ":my_py2_runtime",
py3_runtime = ":my_py3_runtime",
)
toolchain(
name = "my_toolchain",
target_compatible_with = [...], # optional platform constraints
toolchain = ":my_py_runtime_pair",
toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
```
```python
# In your WORKSPACE...
register_toolchains("//my_pkg:my_toolchain")
```
Of course, you can define and register many different toolchains and use platform constraints to restrict them to appropriate target platforms. It is recommended to use the constraint settings `@bazel_tools//tools/python:py2_interpreter_path` and `[...]:py3_interpreter_path` as the namespaces for constraints about where a platform's Python interpreters are located.
The new toolchain-related rules and default toolchain are implemented in Starlark under `@bazel_tools`. Their source code and documentation strings can be read [here](https://github.com/bazelbuild/bazel/blob/master/tools/python/toolchain.bzl). | True | incompatible_use_python_toolchains: The Python runtime is obtained from a toolchain rather than a flag - **Flag:** `--incompatible_use_python_toolchains`
**Available since:** 0.25
**Will be flipped in:** ???
**Feature tracking issue:** #7375
## Motivation
For background on toolchains, see [here](https://docs.bazel.build/versions/master/toolchains.html).
Previously, the Python runtime (i.e., the interpreter used to execute `py_binary` and `py_test` targets) could only be controlled globally, and required passing flags like `--python_top` to the bazel invocation. This is out-of-step with our ambitions for flagless builds and remote-execution-friendly toolchains. Using the toolchain mechanism means that each Python target can automatically select an appropriate runtime based on what target platform it is being built for.
## Change
Enabling this flag triggers the following changes.
1. Executable Python targets will retrieve their runtime from the new Python toolchain.
2. It is forbidden to set any of the legacy flags `--python_top`, `--python2_path`, or `--python3_path`. Note that the last two of those are already no-ops. It is also strongly discouraged to set `--python_path`, but this flag will be removed in a later cleanup due to #7901.
3. The `python_version` attribute of the [`py_runtime`](https://docs.bazel.build/versions/master/be/python.html#py_runtime) rule becomes mandatory. It must be either `"PY2"` or `"PY3"`, indicating which kind of runtime it is describing.
For builds that rely on a Python interpreter installed on the system, it is recommended that users (or platform rule authors) ensure that each platform has an appropriate Python toolchain definition.
If no Python toolchain is explicitly registered, on non-Windows platforms there is a new default toolchain that automatically detects and executes an interpreter (of the appropriate version) from `PATH`. This resolves longstanding issue #4815. A Windows version of this toolchain will come later (#7844).
## Migration
If you were relying on `--python_top`, and you want your whole build to continue to use the `py_runtime` you were pointing it to, you just need to follow the steps below to define a `py_runtime_pair` and `toolchain`, and register this toolchain in your workspace. So long as you don't add any platform constraints that would prevent your toolchain from matching, it will take precedence over the default toolchain described above.
If you were relying on `--python_path`, and you want your whole build to use the interpreter located at the absolute path you were passing in this flag, the steps are the same, except you also have to define a new `py_runtime` with the `interpreter_path` attribute set to that path.
Otherwise, if you were only relying on the default behavior that resolved `python` from `PATH`, just enjoy the new default behavior, which is:
1. First try `python2` or `python3` (depending on the target's version)
2. Then fall back on `python` if not found
3. Fail-fast if the interpreter that is found doesn't match the target's major Python version (`PY2` or `PY3`), as per the `python -V` flag.
On Windows the default behavior is currently unchanged (#7844).
Example toolchain definition:
```python
# In your BUILD file...
load("@bazel_tools//tools/python/toolchain.bzl", "py_runtime_pair")
py_runtime(
name = "my_py2_runtime",
interpreter_path = "/system/python2",
python_version = "PY2",
)
py_runtime(
name = "my_py3_runtime",
interpreter_path = "/system/python3",
python_version = "PY3",
)
py_runtime_pair(
name = "my_py_runtime_pair",
py2_runtime = ":my_py2_runtime",
py3_runtime = ":my_py3_runtime",
)
toolchain(
name = "my_toolchain",
target_compatible_with = [...], # optional platform constraints
toolchain = ":my_py_runtime_pair",
toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
```
```python
# In your WORKSPACE...
register_toolchains("//my_pkg:my_toolchain")
```
Of course, you can define and register many different toolchains and use platform constraints to restrict them to appropriate target platforms. It is recommended to use the constraint settings `@bazel_tools//tools/python:py2_interpreter_path` and `[...]:py3_interpreter_path` as the namespaces for constraints about where a platform's Python interpreters are located.
The new toolchain-related rules and default toolchain are implemented in Starlark under `@bazel_tools`. Their source code and documentation strings can be read [here](https://github.com/bazelbuild/bazel/blob/master/tools/python/toolchain.bzl). | non_priority | incompatible use python toolchains the python runtime is obtained from a toolchain rather than a flag flag incompatible use python toolchains available since will be flipped in feature tracking issue motivation for background on toolchains see previously the python runtime i e the interpreter used to execute py binary and py test targets could only be controlled globally and required passing flags like python top to the bazel invocation this is out of step with our ambitions for flagless builds and remote execution friendly toolchains using the toolchain mechanism means that each python target can automatically select an appropriate runtime based on what target platform it is being built for change enabling this flag triggers the following changes executable python targets will retrieve their runtime from the new python toolchain it is forbidden to set any of the legacy flags python top path or path note that the last two of those are already no ops it is also strongly discouraged to set python path but this flag will be removed in a later cleanup due to the python version attribute of the rule becomes mandatory it must be either or indicating which kind of runtime it is describing for builds that rely on a python interpreter installed on the system it is recommended that users or platform rule authors ensure that each platform has an appropriate python toolchain definition if no python toolchain is explicitly registered on non windows platforms there is a new default toolchain that automatically detects and executes an interpreter of the appropriate version from path this resolves longstanding issue a windows version of this toolchain will come later migration if you were relying on python top and you want your whole build to continue to use the py runtime you were pointing it to you just need to follow the steps below to define a py runtime pair and toolchain and register this toolchain in your workspace so long as you don t add any platform constraints that would prevent your toolchain from matching it will take precedence over the default toolchain described above if you were relying on python path and you want your whole build to use the interpreter located at the absolute path you were passing in this flag the steps are the same except you also have to define a new py runtime with the interpreter path attribute set to that path otherwise if you were only relying on the default behavior that resolved python from path just enjoy the new default behavior which is first try or depending on the target s version then fall back on python if not found fail fast if the interpreter that is found doesn t match the target s major python version or as per the python v flag on windows the default behavior is currently unchanged example toolchain definition python in your build file load bazel tools tools python toolchain bzl py runtime pair py runtime name my runtime interpreter path system python version py runtime name my runtime interpreter path system python version py runtime pair name my py runtime pair runtime my runtime runtime my runtime toolchain name my toolchain target compatible with optional platform constraints toolchain my py runtime pair toolchain type bazel tools tools python toolchain type python in your workspace register toolchains my pkg my toolchain of course you can define and register many different toolchains and use platform constraints to restrict them to appropriate target platforms it is recommended to use the constraint settings bazel tools tools python interpreter path and interpreter path as the namespaces for constraints about where a platform s python interpreters are located the new toolchain related rules and default toolchain are implemented in starlark under bazel tools their source code and documentation strings can be read | 0 |
237,085 | 18,152,327,099 | IssuesEvent | 2021-09-26 13:32:34 | AlirizaSari/4S_Handleidingen | https://api.github.com/repos/AlirizaSari/4S_Handleidingen | closed | Pagina van een Merk aanpassen. (Ticket_2a) | documentation | alle type nummers en product nummers in 1 lange lijst zie en dat maakt niet handig uit met scherm grote hebben wil graag zien dat het in een grid zie. | 1.0 | Pagina van een Merk aanpassen. (Ticket_2a) - alle type nummers en product nummers in 1 lange lijst zie en dat maakt niet handig uit met scherm grote hebben wil graag zien dat het in een grid zie. | non_priority | pagina van een merk aanpassen ticket alle type nummers en product nummers in lange lijst zie en dat maakt niet handig uit met scherm grote hebben wil graag zien dat het in een grid zie | 0 |
101,371 | 16,507,858,938 | IssuesEvent | 2021-05-25 21:52:11 | idonthaveafifaaddiction/ember-css-modules | https://api.github.com/repos/idonthaveafifaaddiction/ember-css-modules | opened | CVE-2021-23337 (High) detected in lodash-4.17.20.tgz | security vulnerability | ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: ember-css-modules/package.json</p>
<p>Path to vulnerable library: ember-css-modules/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- @my-namespace/dummy-addon-0.0.0.tgz (Root Library)
- ember-cli-babel-7.23.1.tgz
- core-7.12.10.tgz
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/idonthaveafifaaddiction/ember-css-modules/commit/140ae4c7138e7a09febc3739ecdf40a5b752aa77">140ae4c7138e7a09febc3739ecdf40a5b752aa77</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.20","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@my-namespace/dummy-addon:0.0.0;ember-cli-babel:7.23.1;@babel/core:7.12.10;lodash:4.17.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23337","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-23337 (High) detected in lodash-4.17.20.tgz - ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: ember-css-modules/package.json</p>
<p>Path to vulnerable library: ember-css-modules/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- @my-namespace/dummy-addon-0.0.0.tgz (Root Library)
- ember-cli-babel-7.23.1.tgz
- core-7.12.10.tgz
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/idonthaveafifaaddiction/ember-css-modules/commit/140ae4c7138e7a09febc3739ecdf40a5b752aa77">140ae4c7138e7a09febc3739ecdf40a5b752aa77</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.20","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@my-namespace/dummy-addon:0.0.0;ember-cli-babel:7.23.1;@babel/core:7.12.10;lodash:4.17.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.21"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23337","vulnerabilityDetails":"Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337","cvss3Severity":"high","cvss3Score":"7.2","cvss3Metrics":{"A":"High","AC":"Low","PR":"High","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file ember css modules package json path to vulnerable library ember css modules node modules lodash package json dependency hierarchy my namespace dummy addon tgz root library ember cli babel tgz core tgz x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree my namespace dummy addon ember cli babel babel core lodash isminimumfixversionavailable true minimumfixversion lodash basebranches vulnerabilityidentifier cve vulnerabilitydetails lodash versions prior to are vulnerable to command injection via the template function vulnerabilityurl | 0 |
68,840 | 29,738,078,871 | IssuesEvent | 2023-06-14 03:50:43 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | reopened | Kubectl apply command is throwing an error | triaged cxp product-issue Pri2 awaiting-customer-response azure-kubernetes-service/svc |
[Enter feedback here]
The command:
kubectl apply -f nvidia-device-plugin-ds.yaml
returns an error when using AKS with version 1.25.6:
error: unable to decode "nvidia-device-plugin-ds.yaml": no kind "DaemonSet" is registered for version "apps/v1"
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: deaac79e-bd40-2d95-7288-e15371726e91
* Version Independent ID: 9fd80908-20f1-7970-3bda-5a4283e4af21
* Content: [Use GPUs on Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://learn.microsoft.com/en-us/azure/aks/gpu-cluster)
* Content Source: [articles/aks/gpu-cluster.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/gpu-cluster.md)
* Service: **azure-kubernetes-service**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte** | 1.0 | Kubectl apply command is throwing an error -
[Enter feedback here]
The command:
kubectl apply -f nvidia-device-plugin-ds.yaml
returns an error when using AKS with version 1.25.6:
error: unable to decode "nvidia-device-plugin-ds.yaml": no kind "DaemonSet" is registered for version "apps/v1"
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: deaac79e-bd40-2d95-7288-e15371726e91
* Version Independent ID: 9fd80908-20f1-7970-3bda-5a4283e4af21
* Content: [Use GPUs on Azure Kubernetes Service (AKS) - Azure Kubernetes Service](https://learn.microsoft.com/en-us/azure/aks/gpu-cluster)
* Content Source: [articles/aks/gpu-cluster.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/aks/gpu-cluster.md)
* Service: **azure-kubernetes-service**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte** | non_priority | kubectl apply command is throwing an error the command kubectl apply f nvidia device plugin ds yaml returns an error when using aks with version error unable to decode nvidia device plugin ds yaml no kind daemonset is registered for version apps document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service azure kubernetes service github login mgoedtel microsoft alias magoedte | 0 |
110,524 | 11,704,992,818 | IssuesEvent | 2020-03-07 13:12:41 | lkno0705/MatrixMultiplication | https://api.github.com/repos/lkno0705/MatrixMultiplication | opened | New test runs | documentation | Just to track. Whenever you have time @lkno0705
Needed are:
- [ ] x64 NASM re-run
- [ ] SIMD C++ | 1.0 | New test runs - Just to track. Whenever you have time @lkno0705
Needed are:
- [ ] x64 NASM re-run
- [ ] SIMD C++ | non_priority | new test runs just to track whenever you have time needed are nasm re run simd c | 0 |
30,896 | 4,226,657,419 | IssuesEvent | 2016-07-02 16:16:45 | berkmancenter/bookanook | https://api.github.com/repos/berkmancenter/bookanook | opened | Add filter for Nook/location related reservation statistics | design feature | Admin will be able to choose the nooks/locations for showing the statistics. | 1.0 | Add filter for Nook/location related reservation statistics - Admin will be able to choose the nooks/locations for showing the statistics. | non_priority | add filter for nook location related reservation statistics admin will be able to choose the nooks locations for showing the statistics | 0 |
9,818 | 13,962,451,298 | IssuesEvent | 2020-10-25 09:35:13 | ISISScientificComputing/autoreduce | https://api.github.com/repos/ISISScientificComputing/autoreduce | closed | Tidy up run Summary page for skipped runs | :exclamation: Release requirement :key: WebApp | Issue raised by: [developer]
### What?
From #844 originally
When a run is skipped the run summary page should:
- Start time: Not run
- End time: Not run
- Duration: Not run
- Rerun section: Not visible
- Logs: Not shown
### Where?
`WebApp/autoreduce_webapp/templates/run_summary.html`
### How?
Smoke testing
### Reproducible?
[Yes]
View a skipped run to see the current state
### How to test the issue is resolved
The criteria above should be true when viewing a skipped run
| 1.0 | Tidy up run Summary page for skipped runs - Issue raised by: [developer]
### What?
From #844 originally
When a run is skipped the run summary page should:
- Start time: Not run
- End time: Not run
- Duration: Not run
- Rerun section: Not visible
- Logs: Not shown
### Where?
`WebApp/autoreduce_webapp/templates/run_summary.html`
### How?
Smoke testing
### Reproducible?
[Yes]
View a skipped run to see the current state
### How to test the issue is resolved
The criteria above should be true when viewing a skipped run
| non_priority | tidy up run summary page for skipped runs issue raised by what from originally when a run is skipped the run summary page should start time not run end time not run duration not run rerun section not visible logs not shown where webapp autoreduce webapp templates run summary html how smoke testing reproducible view a skipped run to see the current state how to test the issue is resolved the criteria above should be true when viewing a skipped run | 0 |
55,155 | 7,963,857,473 | IssuesEvent | 2018-07-13 19:07:18 | wedeploy/wedeploy.com | https://api.github.com/repos/wedeploy/wedeploy.com | closed | Add documentation on the new approach of assigning custom domains to services | documentation | @ipeychev commented on [Sat Nov 04 2017](https://github.com/wedeploy/ideas/issues/181)
Once we apply the changes to all repositories, we have to update the documentation too.
---
@jonnilundy commented on [Tue Jan 30 2018](https://github.com/wedeploy/ideas/issues/181#issuecomment-361785704)
Hey @ipeychev, is this issue completed?
If not, can you move it to the appropriate repo and link to [this epic](https://github.com/wedeploy/epics/issues/3)? Thanks!
| 1.0 | Add documentation on the new approach of assigning custom domains to services - @ipeychev commented on [Sat Nov 04 2017](https://github.com/wedeploy/ideas/issues/181)
Once we apply the changes to all repositories, we have to update the documentation too.
---
@jonnilundy commented on [Tue Jan 30 2018](https://github.com/wedeploy/ideas/issues/181#issuecomment-361785704)
Hey @ipeychev, is this issue completed?
If not, can you move it to the appropriate repo and link to [this epic](https://github.com/wedeploy/epics/issues/3)? Thanks!
| non_priority | add documentation on the new approach of assigning custom domains to services ipeychev commented on once we apply the changes to all repositories we have to update the documentation too jonnilundy commented on hey ipeychev is this issue completed if not can you move it to the appropriate repo and link to thanks | 0 |
42,913 | 5,546,738,706 | IssuesEvent | 2017-03-23 02:15:36 | UniGDC/unigdc.github.io | https://api.github.com/repos/UniGDC/unigdc.github.io | closed | Columned designing | Design | Using simple column grid defined in _sass/_layout.scss, columned website should be implemented.
Columned website might allow us to make the website not only more organized but also more beautiful.
2:1 for index page.
| 1.0 | Columned designing - Using simple column grid defined in _sass/_layout.scss, columned website should be implemented.
Columned website might allow us to make the website not only more organized but also more beautiful.
2:1 for index page.
| non_priority | columned designing using simple column grid defined in sass layout scss columned website should be implemented columned website might allow us to make the website not only more organized but also more beautiful for index page | 0 |
9,476 | 7,992,605,333 | IssuesEvent | 2018-07-20 02:33:59 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | opened | Minor documentation errors | interface/infrastructure | A few files have syntax errors in the XML documentation. This is another dummy issue after I accidentally resolved the previous one. | 1.0 | Minor documentation errors - A few files have syntax errors in the XML documentation. This is another dummy issue after I accidentally resolved the previous one. | non_priority | minor documentation errors a few files have syntax errors in the xml documentation this is another dummy issue after i accidentally resolved the previous one | 0 |
64,155 | 8,713,020,767 | IssuesEvent | 2018-12-07 00:34:56 | Microsoft/WindowsTemplateStudio | https://api.github.com/repos/Microsoft/WindowsTemplateStudio | closed | docs/mvvmbasic.md Not sure what this means | Documentation | "You can see examples of this being used in many of the pages that can be included as part of project generation"
I am not sure what this project generation refers to, this needs to be more clear.
| 1.0 | docs/mvvmbasic.md Not sure what this means - "You can see examples of this being used in many of the pages that can be included as part of project generation"
I am not sure what this project generation refers to, this needs to be more clear.
| non_priority | docs mvvmbasic md not sure what this means you can see examples of this being used in many of the pages that can be included as part of project generation i am not sure what this project generation refers to this needs to be more clear | 0 |
47,335 | 12,015,614,785 | IssuesEvent | 2020-04-10 14:22:18 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | Image Classification not working in Local System | type:build/install | **System information**
**### - OS Platform and Distribution (MacOs Catalina):**
### **- TensorFlow version:2.1.0**
### **- Python version: 3.6.9**
### **- Platform - docker with jupyter notebook**
### **During The Image Classification, I got An error like this,
It working properly in google colab notebook**
```
import os
import zipfile
local_zip = '/tf/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tf/tmp/horse-or-human')
zip_ref.close()
train_horse_dir = os.path.join('/tf/tmp/horse-or-human/horses')
train_human_dir = os.path.join('/tf/tmp/horse-or-human/humans')
train_horse_names = os.listdir(train_horse_dir)
print(train_horse_names[:10])
train_human_names = os.listdir(train_human_dir)
print(train_human_names[:10])
print('total training horse images:', len(os.listdir(train_horse_dir)))
print('total training human images:', len(os.listdir(train_human_dir)))
get_ipython().run_line_magic('matplotlib', 'inline')
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
nrows = 4
ncols = 4
pic_index = 0
# Set up matplotlib fig, and size it to fit 4x4 pics
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)
pic_index += 8
next_horse_pix = [os.path.join(train_horse_dir, fname)
for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
for fname in train_human_names[pic_index-8:pic_index]]
for i, img_path in enumerate(next_horse_pix+next_human_pix):
# Set up subplot; subplot indices start at 1
sp = plt.subplot(nrows, ncols, i + 1)
sp.axis('Off') # Don't show axes (or gridlines)
img = mpimg.imread(img_path)
plt.imshow(img)
plt.show()
import tensorflow as tf
print(tf.__version__)
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['accuracy'])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'/tf/tmp/horse-or-human/', # This is the source directory for training images
target_size=(300, 300), # All images will be resized to 150x150
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# In[18]:
history = model.fit(train_generator,epochs=15,steps_per_epoch=8,verbose=1)
```
`
and got an error
### **ImportErrorTraceback (most recent call last)
<ipython-input-18-8d2593ec774f> in <module>
----> 1 history = model.fit(train_generator,epochs=15,steps_per_epoch=8,verbose=1)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
817 max_queue_size=max_queue_size,
818 workers=workers,
--> 819 use_multiprocessing=use_multiprocessing)
820
821 def evaluate(self,
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
233 max_queue_size=max_queue_size,
234 workers=workers,
--> 235 use_multiprocessing=use_multiprocessing)
236
237 total_samples = _get_total_number_of_samples(training_data_adapter)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
591 max_queue_size=max_queue_size,
592 workers=workers,
--> 593 use_multiprocessing=use_multiprocessing)
594 val_adapter = None
595 if validation_data:
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
704 max_queue_size=max_queue_size,
705 workers=workers,
--> 706 use_multiprocessing=use_multiprocessing)
707
708 return adapter
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, standardize_function, shuffle, workers, use_multiprocessing, max_queue_size, **kwargs)
950 use_multiprocessing=use_multiprocessing,
951 max_queue_size=max_queue_size,
--> 952 **kwargs)
953
954 @staticmethod
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, standardize_function, workers, use_multiprocessing, max_queue_size, **kwargs)
745 # Since we have to know the dtype of the python generator when we build the
746 # dataset, we have to look at a batch to infer the structure.
--> 747 peek, x = self._peek_and_restore(x)
748 assert_not_namedtuple(peek)
749
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/data_adapter.py in _peek_and_restore(x)
954 @staticmethod
955 def _peek_and_restore(x):
--> 956 return x[0], x
957
958 def _make_callable(self, x, workers, use_multiprocessing, max_queue_size):
/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/iterator.py in __getitem__(self, idx)
63 index_array = self.index_array[self.batch_size * idx:
64 self.batch_size * (idx + 1)]
---> 65 return self._get_batches_of_transformed_samples(index_array)
66
67 def __len__(self):
/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/iterator.py in _get_batches_of_transformed_samples(self, index_array)
228 color_mode=self.color_mode,
229 target_size=self.target_size,
--> 230 interpolation=self.interpolation)
231 x = img_to_array(img, data_format=self.data_format)
232 # Pillow images should be closed after `load_img`,
/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/utils.py in load_img(path, grayscale, color_mode, target_size, interpolation)
106 color_mode = 'grayscale'
107 if pil_image is None:
--> 108 raise ImportError('Could not import PIL.Image. '
109 'The use of `load_img` requires PIL.')
110 img = pil_image.open(path)
ImportError: Could not import PIL.Image. The use of `load_img` requires PIL.**
| 1.0 | Image Classification not working in Local System - **System information**
**### - OS Platform and Distribution (MacOs Catalina):**
### **- TensorFlow version:2.1.0**
### **- Python version: 3.6.9**
### **- Platform - docker with jupyter notebook**
### **During The Image Classification, I got An error like this,
It working properly in google colab notebook**
```
import os
import zipfile
local_zip = '/tf/tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tf/tmp/horse-or-human')
zip_ref.close()
train_horse_dir = os.path.join('/tf/tmp/horse-or-human/horses')
train_human_dir = os.path.join('/tf/tmp/horse-or-human/humans')
train_horse_names = os.listdir(train_horse_dir)
print(train_horse_names[:10])
train_human_names = os.listdir(train_human_dir)
print(train_human_names[:10])
print('total training horse images:', len(os.listdir(train_horse_dir)))
print('total training human images:', len(os.listdir(train_human_dir)))
get_ipython().run_line_magic('matplotlib', 'inline')
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
nrows = 4
ncols = 4
pic_index = 0
# Set up matplotlib fig, and size it to fit 4x4 pics
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)
pic_index += 8
next_horse_pix = [os.path.join(train_horse_dir, fname)
for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
for fname in train_human_names[pic_index-8:pic_index]]
for i, img_path in enumerate(next_horse_pix+next_human_pix):
# Set up subplot; subplot indices start at 1
sp = plt.subplot(nrows, ncols, i + 1)
sp.axis('Off') # Don't show axes (or gridlines)
img = mpimg.imread(img_path)
plt.imshow(img)
plt.show()
import tensorflow as tf
print(tf.__version__)
model = tf.keras.models.Sequential([
# Note the input shape is the desired size of the image 300x300 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.summary()
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['accuracy'])
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'/tf/tmp/horse-or-human/', # This is the source directory for training images
target_size=(300, 300), # All images will be resized to 150x150
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# In[18]:
history = model.fit(train_generator,epochs=15,steps_per_epoch=8,verbose=1)
```
`
and got an error
### **ImportErrorTraceback (most recent call last)
<ipython-input-18-8d2593ec774f> in <module>
----> 1 history = model.fit(train_generator,epochs=15,steps_per_epoch=8,verbose=1)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
817 max_queue_size=max_queue_size,
818 workers=workers,
--> 819 use_multiprocessing=use_multiprocessing)
820
821 def evaluate(self,
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
233 max_queue_size=max_queue_size,
234 workers=workers,
--> 235 use_multiprocessing=use_multiprocessing)
236
237 total_samples = _get_total_number_of_samples(training_data_adapter)
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_training_inputs(model, x, y, batch_size, epochs, sample_weights, class_weights, steps_per_epoch, validation_split, validation_data, validation_steps, shuffle, distribution_strategy, max_queue_size, workers, use_multiprocessing)
591 max_queue_size=max_queue_size,
592 workers=workers,
--> 593 use_multiprocessing=use_multiprocessing)
594 val_adapter = None
595 if validation_data:
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_v2.py in _process_inputs(model, mode, x, y, batch_size, epochs, sample_weights, class_weights, shuffle, steps, distribution_strategy, max_queue_size, workers, use_multiprocessing)
704 max_queue_size=max_queue_size,
705 workers=workers,
--> 706 use_multiprocessing=use_multiprocessing)
707
708 return adapter
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, standardize_function, shuffle, workers, use_multiprocessing, max_queue_size, **kwargs)
950 use_multiprocessing=use_multiprocessing,
951 max_queue_size=max_queue_size,
--> 952 **kwargs)
953
954 @staticmethod
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/data_adapter.py in __init__(self, x, y, sample_weights, standardize_function, workers, use_multiprocessing, max_queue_size, **kwargs)
745 # Since we have to know the dtype of the python generator when we build the
746 # dataset, we have to look at a batch to infer the structure.
--> 747 peek, x = self._peek_and_restore(x)
748 assert_not_namedtuple(peek)
749
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/data_adapter.py in _peek_and_restore(x)
954 @staticmethod
955 def _peek_and_restore(x):
--> 956 return x[0], x
957
958 def _make_callable(self, x, workers, use_multiprocessing, max_queue_size):
/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/iterator.py in __getitem__(self, idx)
63 index_array = self.index_array[self.batch_size * idx:
64 self.batch_size * (idx + 1)]
---> 65 return self._get_batches_of_transformed_samples(index_array)
66
67 def __len__(self):
/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/iterator.py in _get_batches_of_transformed_samples(self, index_array)
228 color_mode=self.color_mode,
229 target_size=self.target_size,
--> 230 interpolation=self.interpolation)
231 x = img_to_array(img, data_format=self.data_format)
232 # Pillow images should be closed after `load_img`,
/usr/local/lib/python3.6/dist-packages/keras_preprocessing/image/utils.py in load_img(path, grayscale, color_mode, target_size, interpolation)
106 color_mode = 'grayscale'
107 if pil_image is None:
--> 108 raise ImportError('Could not import PIL.Image. '
109 'The use of `load_img` requires PIL.')
110 img = pil_image.open(path)
ImportError: Could not import PIL.Image. The use of `load_img` requires PIL.**
| non_priority | image classification not working in local system system information os platform and distribution macos catalina tensorflow version python version platform docker with jupyter notebook during the image classification i got an error like this it working properly in google colab notebook import os import zipfile local zip tf tmp horse or human zip zip ref zipfile zipfile local zip r zip ref extractall tf tmp horse or human zip ref close train horse dir os path join tf tmp horse or human horses train human dir os path join tf tmp horse or human humans train horse names os listdir train horse dir print train horse names train human names os listdir train human dir print train human names print total training horse images len os listdir train horse dir print total training human images len os listdir train human dir get ipython run line magic matplotlib inline import matplotlib pyplot as plt import matplotlib image as mpimg nrows ncols pic index set up matplotlib fig and size it to fit pics fig plt gcf fig set size inches ncols nrows pic index next horse pix os path join train horse dir fname for fname in train horse names next human pix os path join train human dir fname for fname in train human names for i img path in enumerate next horse pix next human pix set up subplot subplot indices start at sp plt subplot nrows ncols i sp axis off don t show axes or gridlines img mpimg imread img path plt imshow img plt show import tensorflow as tf print tf version model tf keras models sequential note the input shape is the desired size of the image with bytes color this is the first convolution tf keras layers activation relu input shape tf keras layers the second convolution tf keras layers activation relu tf keras layers the third convolution tf keras layers activation relu tf keras layers the fourth convolution tf keras layers activation relu tf keras layers the fifth convolution tf keras layers activation relu tf keras layers flatten the results to feed into a dnn tf keras layers flatten neuron hidden layer tf keras layers dense activation relu only output neuron it will contain a value from where for class horses and for the other humans tf keras layers dense activation sigmoid model summary from tensorflow keras optimizers import rmsprop model compile loss binary crossentropy optimizer rmsprop lr metrics from tensorflow keras preprocessing image import imagedatagenerator from tensorflow keras preprocessing image import imagedatagenerator all images will be rescaled by train datagen imagedatagenerator rescale flow training images in batches of using train datagen generator train generator train datagen flow from directory tf tmp horse or human this is the source directory for training images target size all images will be resized to batch size since we use binary crossentropy loss we need binary labels class mode binary in history model fit train generator epochs steps per epoch verbose and got an error importerrortraceback most recent call last in history model fit train generator epochs steps per epoch verbose usr local lib dist packages tensorflow core python keras engine training py in fit self x y batch size epochs verbose callbacks validation split validation data shuffle class weight sample weight initial epoch steps per epoch validation steps validation freq max queue size workers use multiprocessing kwargs max queue size max queue size workers workers use multiprocessing use multiprocessing def evaluate self usr local lib dist packages tensorflow core python keras engine training py in fit self model x y batch size epochs verbose callbacks validation split validation data shuffle class weight sample weight initial epoch steps per epoch validation steps validation freq max queue size workers use multiprocessing kwargs max queue size max queue size workers workers use multiprocessing use multiprocessing total samples get total number of samples training data adapter usr local lib dist packages tensorflow core python keras engine training py in process training inputs model x y batch size epochs sample weights class weights steps per epoch validation split validation data validation steps shuffle distribution strategy max queue size workers use multiprocessing max queue size max queue size workers workers use multiprocessing use multiprocessing val adapter none if validation data usr local lib dist packages tensorflow core python keras engine training py in process inputs model mode x y batch size epochs sample weights class weights shuffle steps distribution strategy max queue size workers use multiprocessing max queue size max queue size workers workers use multiprocessing use multiprocessing return adapter usr local lib dist packages tensorflow core python keras engine data adapter py in init self x y sample weights standardize function shuffle workers use multiprocessing max queue size kwargs use multiprocessing use multiprocessing max queue size max queue size kwargs staticmethod usr local lib dist packages tensorflow core python keras engine data adapter py in init self x y sample weights standardize function workers use multiprocessing max queue size kwargs since we have to know the dtype of the python generator when we build the dataset we have to look at a batch to infer the structure peek x self peek and restore x assert not namedtuple peek usr local lib dist packages tensorflow core python keras engine data adapter py in peek and restore x staticmethod def peek and restore x return x x def make callable self x workers use multiprocessing max queue size usr local lib dist packages keras preprocessing image iterator py in getitem self idx index array self index array self batch size idx self batch size idx return self get batches of transformed samples index array def len self usr local lib dist packages keras preprocessing image iterator py in get batches of transformed samples self index array color mode self color mode target size self target size interpolation self interpolation x img to array img data format self data format pillow images should be closed after load img usr local lib dist packages keras preprocessing image utils py in load img path grayscale color mode target size interpolation color mode grayscale if pil image is none raise importerror could not import pil image the use of load img requires pil img pil image open path importerror could not import pil image the use of load img requires pil | 0 |
20,396 | 3,591,252,553 | IssuesEvent | 2016-02-01 10:50:29 | vigour-io/vjs | https://api.github.com/repos/vigour-io/vjs | closed | [Subscribe] Spec "downward" and "deep" subscribe API, and add tests | new:design | This would search downward to fulfil subscription: | 1.0 | [Subscribe] Spec "downward" and "deep" subscribe API, and add tests - This would search downward to fulfil subscription: | non_priority | spec downward and deep subscribe api and add tests this would search downward to fulfil subscription | 0 |
33,459 | 6,205,581,489 | IssuesEvent | 2017-07-06 16:26:47 | edamontology/edamontology | https://api.github.com/repos/edamontology/edamontology | closed | Improve license info on GitHub | documentation partially fixed | From BOSC 2017 review by @peterjc: "From the GitHub repository alone, it is not obvious how EDAM is licensed. Please add this to the README and/or as a top level LICENSE file (using .md etc as you prefer)." | 1.0 | Improve license info on GitHub - From BOSC 2017 review by @peterjc: "From the GitHub repository alone, it is not obvious how EDAM is licensed. Please add this to the README and/or as a top level LICENSE file (using .md etc as you prefer)." | non_priority | improve license info on github from bosc review by peterjc from the github repository alone it is not obvious how edam is licensed please add this to the readme and or as a top level license file using md etc as you prefer | 0 |
21,919 | 14,934,738,052 | IssuesEvent | 2021-01-25 10:56:23 | sanger/print_my_barcode | https://api.github.com/repos/sanger/print_my_barcode | closed | GPL-622 Move pmb into psd-deployment on TRAINING FCE | Infrastructure | Description
PMB is currently running in the old machines and the config changes need to be added manually. We would like to move the application into TRAINING FCE and configure it to use the printers.
Who the primary contacts are for this work
Harriet
Eduardo | 1.0 | GPL-622 Move pmb into psd-deployment on TRAINING FCE - Description
PMB is currently running in the old machines and the config changes need to be added manually. We would like to move the application into TRAINING FCE and configure it to use the printers.
Who the primary contacts are for this work
Harriet
Eduardo | non_priority | gpl move pmb into psd deployment on training fce description pmb is currently running in the old machines and the config changes need to be added manually we would like to move the application into training fce and configure it to use the printers who the primary contacts are for this work harriet eduardo | 0 |
99,829 | 12,479,667,933 | IssuesEvent | 2020-05-29 18:40:26 | hbersey/BenBonk-Game-Jam-1-Entry | https://api.github.com/repos/hbersey/BenBonk-Game-Jam-1-Entry | closed | Platformer Boiler Plate | Backend Game Design | - [x] NPC
- [x] Player Controls
- [x] Scoring and High Score
- [x] Health and Damage
- [ ] Player
- [ ] Enemies
- [ ] Example | 1.0 | Platformer Boiler Plate - - [x] NPC
- [x] Player Controls
- [x] Scoring and High Score
- [x] Health and Damage
- [ ] Player
- [ ] Enemies
- [ ] Example | non_priority | platformer boiler plate npc player controls scoring and high score health and damage player enemies example | 0 |
67,042 | 14,839,496,024 | IssuesEvent | 2021-01-16 01:00:09 | GooseWSS/gogs | https://api.github.com/repos/GooseWSS/gogs | opened | WS-2020-0163 (Medium) detected in marked-0.8.1.min.js | security vulnerability | ## WS-2020-0163 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.8.1.min.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.8.1/marked.min.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.8.1/marked.min.js</a></p>
<p>Path to vulnerable library: gogs/public/plugins/marked-0.8.1/marked.min.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.8.1.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (REDoS). rules.js have multiple unused capture groups which can lead to a Denial of Service.
<p>Publish Date: 2020-07-02
<p>URL: <a href=https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0>WS-2020-0163</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/v1.1.1">https://github.com/markedjs/marked/releases/tag/v1.1.1</a></p>
<p>Release Date: 2020-07-02</p>
<p>Fix Resolution: marked - 1.1.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"marked","packageVersion":"0.8.1","isTransitiveDependency":false,"dependencyTree":"marked:0.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"marked - 1.1.1"}],"vulnerabilityIdentifier":"WS-2020-0163","vulnerabilityDetails":"marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (REDoS). rules.js have multiple unused capture groups which can lead to a Denial of Service.","vulnerabilityUrl":"https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | WS-2020-0163 (Medium) detected in marked-0.8.1.min.js - ## WS-2020-0163 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.8.1.min.js</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/marked/0.8.1/marked.min.js">https://cdnjs.cloudflare.com/ajax/libs/marked/0.8.1/marked.min.js</a></p>
<p>Path to vulnerable library: gogs/public/plugins/marked-0.8.1/marked.min.js</p>
<p>
Dependency Hierarchy:
- :x: **marked-0.8.1.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (REDoS). rules.js have multiple unused capture groups which can lead to a Denial of Service.
<p>Publish Date: 2020-07-02
<p>URL: <a href=https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0>WS-2020-0163</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markedjs/marked/releases/tag/v1.1.1">https://github.com/markedjs/marked/releases/tag/v1.1.1</a></p>
<p>Release Date: 2020-07-02</p>
<p>Fix Resolution: marked - 1.1.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"marked","packageVersion":"0.8.1","isTransitiveDependency":false,"dependencyTree":"marked:0.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"marked - 1.1.1"}],"vulnerabilityIdentifier":"WS-2020-0163","vulnerabilityDetails":"marked before 1.1.1 is vulnerable to Regular Expression Denial of Service (REDoS). rules.js have multiple unused capture groups which can lead to a Denial of Service.","vulnerabilityUrl":"https://github.com/markedjs/marked/commit/bd4f8c464befad2b304d51e33e89e567326e62e0","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_priority | ws medium detected in marked min js ws medium severity vulnerability vulnerable library marked min js a markdown parser built for speed library home page a href path to vulnerable library gogs public plugins marked marked min js dependency hierarchy x marked min js vulnerable library vulnerability details marked before is vulnerable to regular expression denial of service redos rules js have multiple unused capture groups which can lead to a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution marked isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails marked before is vulnerable to regular expression denial of service redos rules js have multiple unused capture groups which can lead to a denial of service vulnerabilityurl | 0 |
175,863 | 14,543,407,383 | IssuesEvent | 2020-12-15 16:50:32 | thenewboston-developers/Website | https://api.github.com/repos/thenewboston-developers/Website | closed | Update Docs for Bank - Invalid Blocks | PR Reward - 1000 documentation engineering | Update documentation and sample code for Bank - Invalid Blocks
- ensure all available endpoints (`GET`, `POST`, `PATCH`, and `DELETE`) are documented
- update request and response examples from real API data (2 items for `GET` list responses)
- JSON should be formatted with 2 space indents
- ensure any/all params are documented in tables
- https://thenewboston.com/bank-api/invalid-blocks | 1.0 | Update Docs for Bank - Invalid Blocks - Update documentation and sample code for Bank - Invalid Blocks
- ensure all available endpoints (`GET`, `POST`, `PATCH`, and `DELETE`) are documented
- update request and response examples from real API data (2 items for `GET` list responses)
- JSON should be formatted with 2 space indents
- ensure any/all params are documented in tables
- https://thenewboston.com/bank-api/invalid-blocks | non_priority | update docs for bank invalid blocks update documentation and sample code for bank invalid blocks ensure all available endpoints get post patch and delete are documented update request and response examples from real api data items for get list responses json should be formatted with space indents ensure any all params are documented in tables | 0 |
93,321 | 8,408,981,572 | IssuesEvent | 2018-10-12 05:00:33 | nodejs/node | https://api.github.com/repos/nodejs/node | opened | Investigate flaky parallel/test-repl-tab-complete (debug build) | CI / flaky test repl | * **Version**: master
* **Platform**: x64 Linux Debug build
* **Subsystem**: repl
https://ci.nodejs.org/job/node-test-commit-linux-containered/7719/nodes=ubuntu1604_sharedlibs_debug_x64/console
```
14:10:10 not ok 1615 parallel/test-repl-tab-complete
14:10:10 ---
14:10:10 duration_ms: 62.152
14:10:10 severity: fail
14:10:10 stack: |-
14:10:10 ...
```
https://ci.nodejs.org/job/node-test-commit-linux-containered/7723/nodes=ubuntu1604_sharedlibs_debug_x64/console
```
23:58:41 not ok 1614 parallel/test-repl-tab-complete
23:58:41 ---
23:58:41 duration_ms: 86.658
23:58:41 severity: crashed
23:58:41 exitcode: -4
23:58:41 stack: |-
23:58:41
23:58:41
23:58:41 #
23:58:42 # Fatal error in ../deps/v8/src/execution.cc, line 107
23:58:42 # Check failed: AllowJavascriptExecution::IsAllowed(isolate).
23:58:42 #
23:58:42 #
23:58:42 #
23:58:42 #FailureMessage Object: 0x7fff26424a80
```
/cc @Trott just fyi because you recently modified this test | 1.0 | Investigate flaky parallel/test-repl-tab-complete (debug build) - * **Version**: master
* **Platform**: x64 Linux Debug build
* **Subsystem**: repl
https://ci.nodejs.org/job/node-test-commit-linux-containered/7719/nodes=ubuntu1604_sharedlibs_debug_x64/console
```
14:10:10 not ok 1615 parallel/test-repl-tab-complete
14:10:10 ---
14:10:10 duration_ms: 62.152
14:10:10 severity: fail
14:10:10 stack: |-
14:10:10 ...
```
https://ci.nodejs.org/job/node-test-commit-linux-containered/7723/nodes=ubuntu1604_sharedlibs_debug_x64/console
```
23:58:41 not ok 1614 parallel/test-repl-tab-complete
23:58:41 ---
23:58:41 duration_ms: 86.658
23:58:41 severity: crashed
23:58:41 exitcode: -4
23:58:41 stack: |-
23:58:41
23:58:41
23:58:41 #
23:58:42 # Fatal error in ../deps/v8/src/execution.cc, line 107
23:58:42 # Check failed: AllowJavascriptExecution::IsAllowed(isolate).
23:58:42 #
23:58:42 #
23:58:42 #
23:58:42 #FailureMessage Object: 0x7fff26424a80
```
/cc @Trott just fyi because you recently modified this test | non_priority | investigate flaky parallel test repl tab complete debug build version master platform linux debug build subsystem repl not ok parallel test repl tab complete duration ms severity fail stack not ok parallel test repl tab complete duration ms severity crashed exitcode stack fatal error in deps src execution cc line check failed allowjavascriptexecution isallowed isolate failuremessage object cc trott just fyi because you recently modified this test | 0 |
62,385 | 25,979,754,137 | IssuesEvent | 2022-12-19 17:42:29 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | aws_autoscaling_group - setting suspended processes to empty list attaches all processes instead | bug service/autoscaling stale | ### Terraform Version
v0.10.4
### Affected Resource(s)
Please list the resources as a list, for example:
- aws_autoscaling_group
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
### Terraform Configuration Files
```hcl
variable "load_balancers" {
default = ""
}
variable "target_group_arns" {
default = ""
}
variable "suspended_processes" {
default = ""
}
resource "aws_autoscaling_group" "autoscaling_group" {
[...]
load_balancers = ["${split(",",var.load_balancers)}"]
target_group_arns = ["${split(",",var.target_group_arns)}"]
suspended_processes = ["${split(",",var.suspended_processes)}"]
}
```
### Expected Behavior
terraform plan outputs the following:
```
aws_autoscaling_group.autoscaling_group
load_balancers.#: "0" => "1"
load_balancers.0: "" => ""
suspended_processes.#: "0" => "1"
suspended_processes.0: "" => ""
target_group_arns.#: "0" => "1"
target_group_arns.0: "" => ""
```
Supposedly an update would not change anything. Will not set load balancers, target groups or suspended processes
### Actual Behavior
rerunning terraform plan after applying outputs:
```
aws_autoscaling_group.autoscaling_group
load_balancers.#: "0" => "1"
load_balancers.0: "" => ""
suspended_processes.#: "9" => "1"
suspended_processes.0: "" => ""
suspended_processes.1107475368: "AddToLoadBalancer" => ""
suspended_processes.2115719875: "Launch" => ""
suspended_processes.2282213524: "ScheduledActions" => ""
suspended_processes.2954045122: "RemoveFromLoadBalancerLowPriority" => ""
suspended_processes.3905370587: "AlarmNotification" => ""
suspended_processes.3999558323: "Terminate" => ""
suspended_processes.4273030806: "ReplaceUnhealthy" => ""
suspended_processes.658532077: "AZRebalance" => ""
suspended_processes.997436260: "HealthCheck" => ""
target_group_arns.#: "0" => "1"
target_group_arns.0: "" => ""
```
As above, target groups and load balancers remain empty, but suspended processes have apparently added all possible processes in suspended_processes
### Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
1. `terraform plan`
2. `terraform apply`
3. `terraform plan`
| 1.0 | aws_autoscaling_group - setting suspended processes to empty list attaches all processes instead - ### Terraform Version
v0.10.4
### Affected Resource(s)
Please list the resources as a list, for example:
- aws_autoscaling_group
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
### Terraform Configuration Files
```hcl
variable "load_balancers" {
default = ""
}
variable "target_group_arns" {
default = ""
}
variable "suspended_processes" {
default = ""
}
resource "aws_autoscaling_group" "autoscaling_group" {
[...]
load_balancers = ["${split(",",var.load_balancers)}"]
target_group_arns = ["${split(",",var.target_group_arns)}"]
suspended_processes = ["${split(",",var.suspended_processes)}"]
}
```
### Expected Behavior
terraform plan outputs the following:
```
aws_autoscaling_group.autoscaling_group
load_balancers.#: "0" => "1"
load_balancers.0: "" => ""
suspended_processes.#: "0" => "1"
suspended_processes.0: "" => ""
target_group_arns.#: "0" => "1"
target_group_arns.0: "" => ""
```
Supposedly an update would not change anything. Will not set load balancers, target groups or suspended processes
### Actual Behavior
rerunning terraform plan after applying outputs:
```
aws_autoscaling_group.autoscaling_group
load_balancers.#: "0" => "1"
load_balancers.0: "" => ""
suspended_processes.#: "9" => "1"
suspended_processes.0: "" => ""
suspended_processes.1107475368: "AddToLoadBalancer" => ""
suspended_processes.2115719875: "Launch" => ""
suspended_processes.2282213524: "ScheduledActions" => ""
suspended_processes.2954045122: "RemoveFromLoadBalancerLowPriority" => ""
suspended_processes.3905370587: "AlarmNotification" => ""
suspended_processes.3999558323: "Terminate" => ""
suspended_processes.4273030806: "ReplaceUnhealthy" => ""
suspended_processes.658532077: "AZRebalance" => ""
suspended_processes.997436260: "HealthCheck" => ""
target_group_arns.#: "0" => "1"
target_group_arns.0: "" => ""
```
As above, target groups and load balancers remain empty, but suspended processes have apparently added all possible processes in suspended_processes
### Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
1. `terraform plan`
2. `terraform apply`
3. `terraform plan`
| non_priority | aws autoscaling group setting suspended processes to empty list attaches all processes instead terraform version affected resource s please list the resources as a list for example aws autoscaling group if this issue appears to affect multiple resources it may be an issue with terraform s core so please mention this terraform configuration files hcl variable load balancers default variable target group arns default variable suspended processes default resource aws autoscaling group autoscaling group load balancers target group arns suspended processes expected behavior terraform plan outputs the following aws autoscaling group autoscaling group load balancers load balancers suspended processes suspended processes target group arns target group arns supposedly an update would not change anything will not set load balancers target groups or suspended processes actual behavior rerunning terraform plan after applying outputs aws autoscaling group autoscaling group load balancers load balancers suspended processes suspended processes suspended processes addtoloadbalancer suspended processes launch suspended processes scheduledactions suspended processes removefromloadbalancerlowpriority suspended processes alarmnotification suspended processes terminate suspended processes replaceunhealthy suspended processes azrebalance suspended processes healthcheck target group arns target group arns as above target groups and load balancers remain empty but suspended processes have apparently added all possible processes in suspended processes steps to reproduce please list the steps required to reproduce the issue for example terraform plan terraform apply terraform plan | 0 |
1,006 | 2,569,635,093 | IssuesEvent | 2015-02-10 00:09:29 | webgme/webgme | https://api.github.com/repos/webgme/webgme | closed | Add documentation and script for setting up DSML repository. | Documentation Enhancement Major | The main ReadMe should have a new section and modify the src/bin/generate_config.js to put a configuration file and app.js file in the root (cwd) of the repository using webgme. | 1.0 | Add documentation and script for setting up DSML repository. - The main ReadMe should have a new section and modify the src/bin/generate_config.js to put a configuration file and app.js file in the root (cwd) of the repository using webgme. | non_priority | add documentation and script for setting up dsml repository the main readme should have a new section and modify the src bin generate config js to put a configuration file and app js file in the root cwd of the repository using webgme | 0 |
80,356 | 15,586,279,825 | IssuesEvent | 2021-03-18 01:34:50 | attesch/myretail | https://api.github.com/repos/attesch/myretail | opened | CVE-2020-5398 (High) detected in spring-web-5.0.4.RELEASE.jar | security vulnerability | ## CVE-2020-5398 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-5.0.4.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /myretail/build.gradle</p>
<p>Path to vulnerable library: myretail/build.gradle</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.0.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.0.RELEASE.jar
- :x: **spring-web-5.0.4.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework, versions 5.2.x prior to 5.2.3, versions 5.1.x prior to 5.1.13, and versions 5.0.x prior to 5.0.16, an application is vulnerable to a reflected file download (RFD) attack when it sets a "Content-Disposition" header in the response where the filename attribute is derived from user supplied input.
<p>Publish Date: 2020-01-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5398>CVE-2020-5398</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pivotal.io/security/cve-2020-5398">https://pivotal.io/security/cve-2020-5398</a></p>
<p>Release Date: 2020-01-17</p>
<p>Fix Resolution: org.springframework:spring-web:5.0.16.RELEASE,org.springframework:spring-web:5.1.13.RELEASE,org.springframework:spring-web:5.2.3.RELEASE</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-5398 (High) detected in spring-web-5.0.4.RELEASE.jar - ## CVE-2020-5398 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-web-5.0.4.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /myretail/build.gradle</p>
<p>Path to vulnerable library: myretail/build.gradle</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.0.RELEASE.jar (Root Library)
- spring-boot-starter-json-2.0.0.RELEASE.jar
- :x: **spring-web-5.0.4.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework, versions 5.2.x prior to 5.2.3, versions 5.1.x prior to 5.1.13, and versions 5.0.x prior to 5.0.16, an application is vulnerable to a reflected file download (RFD) attack when it sets a "Content-Disposition" header in the response where the filename attribute is derived from user supplied input.
<p>Publish Date: 2020-01-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-5398>CVE-2020-5398</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pivotal.io/security/cve-2020-5398">https://pivotal.io/security/cve-2020-5398</a></p>
<p>Release Date: 2020-01-17</p>
<p>Fix Resolution: org.springframework:spring-web:5.0.16.RELEASE,org.springframework:spring-web:5.1.13.RELEASE,org.springframework:spring-web:5.2.3.RELEASE</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in spring web release jar cve high severity vulnerability vulnerable library spring web release jar spring web library home page a href path to dependency file myretail build gradle path to vulnerable library myretail build gradle dependency hierarchy spring boot starter web release jar root library spring boot starter json release jar x spring web release jar vulnerable library vulnerability details in spring framework versions x prior to versions x prior to and versions x prior to an application is vulnerable to a reflected file download rfd attack when it sets a content disposition header in the response where the filename attribute is derived from user supplied input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring web release org springframework spring web release org springframework spring web release step up your open source security game with whitesource | 0 |
321,601 | 23,863,016,009 | IssuesEvent | 2022-09-07 08:42:42 | vercel/next.js | https://api.github.com/repos/vercel/next.js | opened | Docs: `output: "standalone"` conflicts with discouraged usage of Custom Servers | template: documentation | ### What is the improvement or update you wish to see?
The [Custom Server documentation](https://nextjs.org/docs/advanced-features/custom-server) highlights
> A custom server will remove important performance optimizations, like serverless functions and [Automatic Static Optimization](https://nextjs.org/docs/advanced-features/automatic-static-optimization).
However, `[output:"standalone"](https://nextjs.org/docs/advanced-features/output-file-tracing#automatically-copying-traced-files)` builds a custom server inside `.next/standalone/server.js` which is used in the [official Docker examples](https://github.com/vercel/next.js/blob/canary/examples/with-docker/next.config.js#L3).
I'm not sure if this is intended, a bug, or requires additional documentation, but this seems very confusing.
Why would a Next.js feature yield something that is a Next.js "anti-pattern"?
### Is there any context that might help us understand?
N/A
### Does the docs page already exist? Please link to it.
N/A | 1.0 | Docs: `output: "standalone"` conflicts with discouraged usage of Custom Servers - ### What is the improvement or update you wish to see?
The [Custom Server documentation](https://nextjs.org/docs/advanced-features/custom-server) highlights
> A custom server will remove important performance optimizations, like serverless functions and [Automatic Static Optimization](https://nextjs.org/docs/advanced-features/automatic-static-optimization).
However, `[output:"standalone"](https://nextjs.org/docs/advanced-features/output-file-tracing#automatically-copying-traced-files)` builds a custom server inside `.next/standalone/server.js` which is used in the [official Docker examples](https://github.com/vercel/next.js/blob/canary/examples/with-docker/next.config.js#L3).
I'm not sure if this is intended, a bug, or requires additional documentation, but this seems very confusing.
Why would a Next.js feature yield something that is a Next.js "anti-pattern"?
### Is there any context that might help us understand?
N/A
### Does the docs page already exist? Please link to it.
N/A | non_priority | docs output standalone conflicts with discouraged usage of custom servers what is the improvement or update you wish to see the highlights a custom server will remove important performance optimizations like serverless functions and however builds a custom server inside next standalone server js which is used in the i m not sure if this is intended a bug or requires additional documentation but this seems very confusing why would a next js feature yield something that is a next js anti pattern is there any context that might help us understand n a does the docs page already exist please link to it n a | 0 |
82,584 | 15,648,359,139 | IssuesEvent | 2021-03-23 05:33:40 | YJSoft/namuhub | https://api.github.com/repos/YJSoft/namuhub | closed | CVE-2019-16792 (High) detected in waitress-0.8.9.tar.gz - autoclosed | security vulnerability | ## CVE-2019-16792 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>waitress-0.8.9.tar.gz</b></p></summary>
<p>Waitress WSGI server</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ee/65/fc9dee74a909a1187ca51e4f15ad9c4d35476e4ab5813f73421505c48053/waitress-0.8.9.tar.gz">https://files.pythonhosted.org/packages/ee/65/fc9dee74a909a1187ca51e4f15ad9c4d35476e4ab5813f73421505c48053/waitress-0.8.9.tar.gz</a></p>
<p>Path to dependency file: namuhub/requirements.txt</p>
<p>Path to vulnerable library: namuhub/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **waitress-0.8.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/YJSoft/namuhub/commit/3e2e58c112b34d9d89ec2ec01b44226ffaaf7d87">3e2e58c112b34d9d89ec2ec01b44226ffaaf7d87</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Waitress through version 1.3.1 allows request smuggling by sending the Content-Length header twice. Waitress would header fold a double Content-Length header and due to being unable to cast the now comma separated value to an integer would set the Content-Length to 0 internally. If two Content-Length headers are sent in a single request, Waitress would treat the request as having no body, thereby treating the body of the request as a new request in HTTP pipelining. This issue is fixed in Waitress 1.4.0.
<p>Publish Date: 2020-01-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16792>CVE-2019-16792</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16792">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16792</a></p>
<p>Release Date: 2020-01-22</p>
<p>Fix Resolution: 1.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-16792 (High) detected in waitress-0.8.9.tar.gz - autoclosed - ## CVE-2019-16792 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>waitress-0.8.9.tar.gz</b></p></summary>
<p>Waitress WSGI server</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ee/65/fc9dee74a909a1187ca51e4f15ad9c4d35476e4ab5813f73421505c48053/waitress-0.8.9.tar.gz">https://files.pythonhosted.org/packages/ee/65/fc9dee74a909a1187ca51e4f15ad9c4d35476e4ab5813f73421505c48053/waitress-0.8.9.tar.gz</a></p>
<p>Path to dependency file: namuhub/requirements.txt</p>
<p>Path to vulnerable library: namuhub/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **waitress-0.8.9.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/YJSoft/namuhub/commit/3e2e58c112b34d9d89ec2ec01b44226ffaaf7d87">3e2e58c112b34d9d89ec2ec01b44226ffaaf7d87</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Waitress through version 1.3.1 allows request smuggling by sending the Content-Length header twice. Waitress would header fold a double Content-Length header and due to being unable to cast the now comma separated value to an integer would set the Content-Length to 0 internally. If two Content-Length headers are sent in a single request, Waitress would treat the request as having no body, thereby treating the body of the request as a new request in HTTP pipelining. This issue is fixed in Waitress 1.4.0.
<p>Publish Date: 2020-01-22
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16792>CVE-2019-16792</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16792">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16792</a></p>
<p>Release Date: 2020-01-22</p>
<p>Fix Resolution: 1.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in waitress tar gz autoclosed cve high severity vulnerability vulnerable library waitress tar gz waitress wsgi server library home page a href path to dependency file namuhub requirements txt path to vulnerable library namuhub requirements txt dependency hierarchy x waitress tar gz vulnerable library found in head commit a href found in base branch develop vulnerability details waitress through version allows request smuggling by sending the content length header twice waitress would header fold a double content length header and due to being unable to cast the now comma separated value to an integer would set the content length to internally if two content length headers are sent in a single request waitress would treat the request as having no body thereby treating the body of the request as a new request in http pipelining this issue is fixed in waitress publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
43,667 | 23,326,840,960 | IssuesEvent | 2022-08-08 22:18:00 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Regressions in System.Tests.Perf_UInt64 | area-System.Runtime tenet-performance tenet-performance-benchmarks refs/heads/main RunKind=micro Windows 10.0.19041 Regression CoreClr arm64 | ### Run Information
Architecture | arm64
-- | --
OS | Windows 10.0.19041
Baseline | [b92de6bf0351280cd36221f3232b2964a4e61e88](https://github.com/dotnet/runtime/commit/b92de6bf0351280cd36221f3232b2964a4e61e88)
Compare | [d4a9ade2dfbee1ef532e7793ea9c330c51b5c028](https://github.com/dotnet/runtime/commit/d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
Diff | [Diff](https://github.com/dotnet/runtime/compare/b92de6bf0351280cd36221f3232b2964a4e61e88...d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
### Regressions in System.Collections.ContainsKeyFalse<String, String>
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[SortedList - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.ContainsKeyFalse(String%2c%20String).SortedList(Size%3a%20512).html>) | 348.38 μs | 384.99 μs | 1.11 | 0.03 | False | | |
_1.png>)
[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_30_2022/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.ContainsKeyFalse(String,%20String).html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Collections.ContainsKeyFalse<String, String>*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-f2155b21-97f4-43d1-8ea0-e50a2087f79d0f29de5213c4333af/e46ff89c-808b-4db0-af40-ae793cbb2013.zip?sv=2021-06-08&se=2022-07-24T14%3A05%3A13Z&sr=c&sp=rl&sig=cE1HJSPlysJjjb4SJi%2F%2B1OppkzGCW2pl5nOyJPfjV7Q%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2e5ad7fb-ee28-49b3-b168-19d3fb9ca43426a03d146a6421b8d/9785ffd5-9cf7-4ecf-a7be-7bffbaa3ec43.zip?sv=2021-06-08&se=2022-07-25T04%3A32%3A22Z&sr=c&sp=rl&sig=MGAYaVRv7WcZ5GCBKQiHYoQJGKrhexYXQjEGs8Sv64Q%3D>)
### Histogram
#### System.Collections.ContainsKeyFalse<String, String>.SortedList(Size: 512)
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 384.9944918699187 > 366.0416569444444.
IsChangePoint: Marked as a change because one of 6/3/2022 8:17:54 AM, 6/6/2022 3:15:19 PM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -18.592276578561822 (T) = (0 -386239.64188622445) / Math.Sqrt((827784.6548322291 / (25)) + (73255526.55193321 / (18))) is less than -2.019540970439573 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (25) + (18) - 2, .025) and -0.10803726811218585 = (348580.00989829405 - 386239.64188622445) / 348580.00989829405 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
### Run Information
Architecture | arm64
-- | --
OS | Windows 10.0.19041
Baseline | [b92de6bf0351280cd36221f3232b2964a4e61e88](https://github.com/dotnet/runtime/commit/b92de6bf0351280cd36221f3232b2964a4e61e88)
Compare | [d4a9ade2dfbee1ef532e7793ea9c330c51b5c028](https://github.com/dotnet/runtime/commit/d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
Diff | [Diff](https://github.com/dotnet/runtime/compare/b92de6bf0351280cd36221f3232b2964a4e61e88...d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
### Regressions in System.Tests.Perf_UInt64
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[TryParseHex - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Tests.Perf_UInt64.TryParseHex(value%3a%20%220%22).html>) | 4.52 ns | 5.85 ns | 1.30 | 0.35 | False | | |

[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_30_2022/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Tests.Perf_UInt64.html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Tests.Perf_UInt64*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-f2155b21-97f4-43d1-8ea0-e50a2087f79d0f29de5213c4333af/e46ff89c-808b-4db0-af40-ae793cbb2013.zip?sv=2021-06-08&se=2022-07-24T14%3A05%3A13Z&sr=c&sp=rl&sig=cE1HJSPlysJjjb4SJi%2F%2B1OppkzGCW2pl5nOyJPfjV7Q%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2e5ad7fb-ee28-49b3-b168-19d3fb9ca43426a03d146a6421b8d/9785ffd5-9cf7-4ecf-a7be-7bffbaa3ec43.zip?sv=2021-06-08&se=2022-07-25T04%3A32%3A22Z&sr=c&sp=rl&sig=MGAYaVRv7WcZ5GCBKQiHYoQJGKrhexYXQjEGs8Sv64Q%3D>)
### Histogram
#### System.Tests.Perf_UInt64.TryParseHex(value: "0")
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 5.851650109031447 > 5.099529018869719.
IsChangePoint: Marked as a change because one of 3/7/2022 12:49:56 AM, 3/7/2022 10:45:01 PM, 3/12/2022 3:27:04 PM, 3/13/2022 11:50:20 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -22.045017915913398 (T) = (0 -5.800006004212765) / Math.Sqrt((0.03388441533165363 / (24)) + (0.012577560483055143 / (18))) is less than -2.0210753903043583 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (24) + (18) - 2, .025) and -0.2115580874843213 = (4.7872289939113815 - 5.800006004212765) / 4.7872289939113815 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
### Run Information
Architecture | arm64
-- | --
OS | Windows 10.0.19041
Baseline | [b92de6bf0351280cd36221f3232b2964a4e61e88](https://github.com/dotnet/runtime/commit/b92de6bf0351280cd36221f3232b2964a4e61e88)
Compare | [d4a9ade2dfbee1ef532e7793ea9c330c51b5c028](https://github.com/dotnet/runtime/commit/d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
Diff | [Diff](https://github.com/dotnet/runtime/compare/b92de6bf0351280cd36221f3232b2964a4e61e88...d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
### Regressions in System.Reflection.Attributes
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[IsDefinedMethodOverrideMiss - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedMethodOverrideMiss.html>) | 660.32 ns | 694.28 ns | 1.05 | 0.18 | False | | |
[IsDefinedClassMissInherit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedClassMissInherit.html>) | 1.11 μs | 1.21 μs | 1.09 | 0.13 | False | | |
[IsDefinedMethodOverrideHit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedMethodOverrideHit.html>) | 654.04 ns | 705.29 ns | 1.08 | 0.15 | False | | |
[IsDefinedMethodBaseHitInherit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedMethodBaseHitInherit.html>) | 660.85 ns | 701.58 ns | 1.06 | 0.16 | False | | |
[IsDefinedClassHit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedClassHit.html>) | 606.11 ns | 673.82 ns | 1.11 | 0.19 | False | | |





[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_30_2022/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Reflection.Attributes.html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Reflection.Attributes*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-f2155b21-97f4-43d1-8ea0-e50a2087f79d0f29de5213c4333af/e46ff89c-808b-4db0-af40-ae793cbb2013.zip?sv=2021-06-08&se=2022-07-24T14%3A05%3A13Z&sr=c&sp=rl&sig=cE1HJSPlysJjjb4SJi%2F%2B1OppkzGCW2pl5nOyJPfjV7Q%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2e5ad7fb-ee28-49b3-b168-19d3fb9ca43426a03d146a6421b8d/9785ffd5-9cf7-4ecf-a7be-7bffbaa3ec43.zip?sv=2021-06-08&se=2022-07-25T04%3A32%3A22Z&sr=c&sp=rl&sig=MGAYaVRv7WcZ5GCBKQiHYoQJGKrhexYXQjEGs8Sv64Q%3D>)
### Histogram
#### System.Reflection.Attributes.IsDefinedMethodOverrideMiss
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 694.281147282633 > 668.6631138733229.
IsChangePoint: Marked as a change because one of 3/17/2022 6:54:53 PM, 5/9/2022 6:32:22 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -8.451246977120563 (T) = (0 -708.0806406154733) / Math.Sqrt((495.03377189893916 / (25)) + (567.5766624145348 / (18))) is less than -2.019540970439573 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (25) + (18) - 2, .025) and -0.09351064462495992 = (647.5297191627457 - 708.0806406154733) / 647.5297191627457 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```#### System.Reflection.Attributes.IsDefinedClassMissInherit
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 1.2112155248820418 > 1.13694964638556.
IsChangePoint: Marked as a change because one of 5/9/2022 6:32:22 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -7.745527387390561 (T) = (0 -1213.9608926620335) / Math.Sqrt((1954.7879162499878 / (24)) + (1039.2474975435828 / (18))) is less than -2.0210753903043583 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (24) + (18) - 2, .025) and -0.08140105517635117 = (1122.581568467274 - 1213.9608926620335) / 1122.581568467274 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```#### System.Reflection.Attributes.IsDefinedMethodOverrideHit
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 705.2855175216763 > 693.5954236978833.
IsChangePoint: Marked as a change because one of 3/17/2022 6:54:53 PM, 5/9/2022 6:32:22 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -6.1016749220948405 (T) = (0 -709.1377152585584) / Math.Sqrt((647.7044138314708 / (25)) + (994.2426132288017 / (18))) is less than -2.019540970439573 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (25) + (18) - 2, .025) and -0.08402020507970037 = (654.1738907960857 - 709.1377152585584) / 654.1738907960857 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```#### System.Reflection.Attributes.IsDefinedMethodBaseHitInherit
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 701.575127475433 > 696.2912043948237.
IsChangePoint: Marked as a change because one of 3/17/2022 6:54:53 PM, 5/9/2022 6:32:22 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -6.276517735042958 (T) = (0 -709.118067717889) / Math.Sqrt((788.1459480483514 / (24)) + (809.2232784828515 / (18))) is less than -2.0210753903043583 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (24) + (18) - 2, .025) and -0.08468007639528252 = (653.7578067023238 - 709.118067717889) / 653.7578067023238 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```#### System.Reflection.Attributes.IsDefinedClassHit
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 673.8237605760315 > 640.9646320332743.
IsChangePoint: Marked as a change because one of 5/9/2022 6:32:22 AM, 6/22/2022 11:10:17 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -6.410882182847598 (T) = (0 -669.5355188242203) / Math.Sqrt((867.0339451611512 / (17)) + (709.075798026562 / (26))) is less than -2.019540970439573 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (17) + (26) - 2, .025) and -0.09255424522176141 = (612.8167290113098 - 669.5355188242203) / 612.8167290113098 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
| True | Regressions in System.Tests.Perf_UInt64 - ### Run Information
Architecture | arm64
-- | --
OS | Windows 10.0.19041
Baseline | [b92de6bf0351280cd36221f3232b2964a4e61e88](https://github.com/dotnet/runtime/commit/b92de6bf0351280cd36221f3232b2964a4e61e88)
Compare | [d4a9ade2dfbee1ef532e7793ea9c330c51b5c028](https://github.com/dotnet/runtime/commit/d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
Diff | [Diff](https://github.com/dotnet/runtime/compare/b92de6bf0351280cd36221f3232b2964a4e61e88...d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
### Regressions in System.Collections.ContainsKeyFalse<String, String>
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[SortedList - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Collections.ContainsKeyFalse(String%2c%20String).SortedList(Size%3a%20512).html>) | 348.38 μs | 384.99 μs | 1.11 | 0.03 | False | | |
_1.png>)
[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_30_2022/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Collections.ContainsKeyFalse(String,%20String).html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Collections.ContainsKeyFalse<String, String>*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-f2155b21-97f4-43d1-8ea0-e50a2087f79d0f29de5213c4333af/e46ff89c-808b-4db0-af40-ae793cbb2013.zip?sv=2021-06-08&se=2022-07-24T14%3A05%3A13Z&sr=c&sp=rl&sig=cE1HJSPlysJjjb4SJi%2F%2B1OppkzGCW2pl5nOyJPfjV7Q%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2e5ad7fb-ee28-49b3-b168-19d3fb9ca43426a03d146a6421b8d/9785ffd5-9cf7-4ecf-a7be-7bffbaa3ec43.zip?sv=2021-06-08&se=2022-07-25T04%3A32%3A22Z&sr=c&sp=rl&sig=MGAYaVRv7WcZ5GCBKQiHYoQJGKrhexYXQjEGs8Sv64Q%3D>)
### Histogram
#### System.Collections.ContainsKeyFalse<String, String>.SortedList(Size: 512)
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 384.9944918699187 > 366.0416569444444.
IsChangePoint: Marked as a change because one of 6/3/2022 8:17:54 AM, 6/6/2022 3:15:19 PM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -18.592276578561822 (T) = (0 -386239.64188622445) / Math.Sqrt((827784.6548322291 / (25)) + (73255526.55193321 / (18))) is less than -2.019540970439573 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (25) + (18) - 2, .025) and -0.10803726811218585 = (348580.00989829405 - 386239.64188622445) / 348580.00989829405 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
### Run Information
Architecture | arm64
-- | --
OS | Windows 10.0.19041
Baseline | [b92de6bf0351280cd36221f3232b2964a4e61e88](https://github.com/dotnet/runtime/commit/b92de6bf0351280cd36221f3232b2964a4e61e88)
Compare | [d4a9ade2dfbee1ef532e7793ea9c330c51b5c028](https://github.com/dotnet/runtime/commit/d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
Diff | [Diff](https://github.com/dotnet/runtime/compare/b92de6bf0351280cd36221f3232b2964a4e61e88...d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
### Regressions in System.Tests.Perf_UInt64
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[TryParseHex - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Tests.Perf_UInt64.TryParseHex(value%3a%20%220%22).html>) | 4.52 ns | 5.85 ns | 1.30 | 0.35 | False | | |

[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_30_2022/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Tests.Perf_UInt64.html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Tests.Perf_UInt64*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-f2155b21-97f4-43d1-8ea0-e50a2087f79d0f29de5213c4333af/e46ff89c-808b-4db0-af40-ae793cbb2013.zip?sv=2021-06-08&se=2022-07-24T14%3A05%3A13Z&sr=c&sp=rl&sig=cE1HJSPlysJjjb4SJi%2F%2B1OppkzGCW2pl5nOyJPfjV7Q%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2e5ad7fb-ee28-49b3-b168-19d3fb9ca43426a03d146a6421b8d/9785ffd5-9cf7-4ecf-a7be-7bffbaa3ec43.zip?sv=2021-06-08&se=2022-07-25T04%3A32%3A22Z&sr=c&sp=rl&sig=MGAYaVRv7WcZ5GCBKQiHYoQJGKrhexYXQjEGs8Sv64Q%3D>)
### Histogram
#### System.Tests.Perf_UInt64.TryParseHex(value: "0")
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 5.851650109031447 > 5.099529018869719.
IsChangePoint: Marked as a change because one of 3/7/2022 12:49:56 AM, 3/7/2022 10:45:01 PM, 3/12/2022 3:27:04 PM, 3/13/2022 11:50:20 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -22.045017915913398 (T) = (0 -5.800006004212765) / Math.Sqrt((0.03388441533165363 / (24)) + (0.012577560483055143 / (18))) is less than -2.0210753903043583 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (24) + (18) - 2, .025) and -0.2115580874843213 = (4.7872289939113815 - 5.800006004212765) / 4.7872289939113815 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
### Run Information
Architecture | arm64
-- | --
OS | Windows 10.0.19041
Baseline | [b92de6bf0351280cd36221f3232b2964a4e61e88](https://github.com/dotnet/runtime/commit/b92de6bf0351280cd36221f3232b2964a4e61e88)
Compare | [d4a9ade2dfbee1ef532e7793ea9c330c51b5c028](https://github.com/dotnet/runtime/commit/d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
Diff | [Diff](https://github.com/dotnet/runtime/compare/b92de6bf0351280cd36221f3232b2964a4e61e88...d4a9ade2dfbee1ef532e7793ea9c330c51b5c028)
### Regressions in System.Reflection.Attributes
Benchmark | Baseline | Test | Test/Base | Test Quality | Edge Detector | Baseline IR | Compare IR | IR Ratio | Baseline ETL | Compare ETL
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
[IsDefinedMethodOverrideMiss - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedMethodOverrideMiss.html>) | 660.32 ns | 694.28 ns | 1.05 | 0.18 | False | | |
[IsDefinedClassMissInherit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedClassMissInherit.html>) | 1.11 μs | 1.21 μs | 1.09 | 0.13 | False | | |
[IsDefinedMethodOverrideHit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedMethodOverrideHit.html>) | 654.04 ns | 705.29 ns | 1.08 | 0.15 | False | | |
[IsDefinedMethodBaseHitInherit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedMethodBaseHitInherit.html>) | 660.85 ns | 701.58 ns | 1.06 | 0.16 | False | | |
[IsDefinedClassHit - Duration of single invocation](<https://pvscmdupload.blob.core.windows.net/reports/allTestHistory/refs/heads/main_arm64_Windows 10.0.19041/System.Reflection.Attributes.IsDefinedClassHit.html>) | 606.11 ns | 673.82 ns | 1.11 | 0.19 | False | | |





[Test Report](<https://pvscmdupload.blob.core.windows.net/autofilereport/autofilereports/06_30_2022/refs/heads/main_arm64_Windows%2010.0.19041_Regression/System.Reflection.Attributes.html>)
### Repro
```cmd
git clone https://github.com/dotnet/performance.git
py .\performance\scripts\benchmarks_ci.py -f net6.0 --filter 'System.Reflection.Attributes*'
```
<details>
### Payloads
[Baseline](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-f2155b21-97f4-43d1-8ea0-e50a2087f79d0f29de5213c4333af/e46ff89c-808b-4db0-af40-ae793cbb2013.zip?sv=2021-06-08&se=2022-07-24T14%3A05%3A13Z&sr=c&sp=rl&sig=cE1HJSPlysJjjb4SJi%2F%2B1OppkzGCW2pl5nOyJPfjV7Q%3D>)
[Compare](<https://helixdi107v0xdeko0k025g8.blob.core.windows.net/helix-job-2e5ad7fb-ee28-49b3-b168-19d3fb9ca43426a03d146a6421b8d/9785ffd5-9cf7-4ecf-a7be-7bffbaa3ec43.zip?sv=2021-06-08&se=2022-07-25T04%3A32%3A22Z&sr=c&sp=rl&sig=MGAYaVRv7WcZ5GCBKQiHYoQJGKrhexYXQjEGs8Sv64Q%3D>)
### Histogram
#### System.Reflection.Attributes.IsDefinedMethodOverrideMiss
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 694.281147282633 > 668.6631138733229.
IsChangePoint: Marked as a change because one of 3/17/2022 6:54:53 PM, 5/9/2022 6:32:22 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -8.451246977120563 (T) = (0 -708.0806406154733) / Math.Sqrt((495.03377189893916 / (25)) + (567.5766624145348 / (18))) is less than -2.019540970439573 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (25) + (18) - 2, .025) and -0.09351064462495992 = (647.5297191627457 - 708.0806406154733) / 647.5297191627457 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```#### System.Reflection.Attributes.IsDefinedClassMissInherit
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 1.2112155248820418 > 1.13694964638556.
IsChangePoint: Marked as a change because one of 5/9/2022 6:32:22 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -7.745527387390561 (T) = (0 -1213.9608926620335) / Math.Sqrt((1954.7879162499878 / (24)) + (1039.2474975435828 / (18))) is less than -2.0210753903043583 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (24) + (18) - 2, .025) and -0.08140105517635117 = (1122.581568467274 - 1213.9608926620335) / 1122.581568467274 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```#### System.Reflection.Attributes.IsDefinedMethodOverrideHit
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 705.2855175216763 > 693.5954236978833.
IsChangePoint: Marked as a change because one of 3/17/2022 6:54:53 PM, 5/9/2022 6:32:22 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -6.1016749220948405 (T) = (0 -709.1377152585584) / Math.Sqrt((647.7044138314708 / (25)) + (994.2426132288017 / (18))) is less than -2.019540970439573 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (25) + (18) - 2, .025) and -0.08402020507970037 = (654.1738907960857 - 709.1377152585584) / 654.1738907960857 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```#### System.Reflection.Attributes.IsDefinedMethodBaseHitInherit
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 701.575127475433 > 696.2912043948237.
IsChangePoint: Marked as a change because one of 3/17/2022 6:54:53 PM, 5/9/2022 6:32:22 AM, 6/24/2022 5:32:42 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -6.276517735042958 (T) = (0 -709.118067717889) / Math.Sqrt((788.1459480483514 / (24)) + (809.2232784828515 / (18))) is less than -2.0210753903043583 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (24) + (18) - 2, .025) and -0.08468007639528252 = (653.7578067023238 - 709.118067717889) / 653.7578067023238 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```#### System.Reflection.Attributes.IsDefinedClassHit
```log
```
### Description of detection logic
```IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsRegressionBase: Marked as regression because the compare was 5% greater than the baseline, and the value was not too small.
IsRegressionChecked: Marked as regression because the three check build points were 0.05 greater than the baseline.
IsRegressionWindowed: Marked as regression because 673.8237605760315 > 640.9646320332743.
IsChangePoint: Marked as a change because one of 5/9/2022 6:32:22 AM, 6/22/2022 11:10:17 PM, 6/30/2022 12:21:07 PM falls between 6/21/2022 9:29:01 PM and 6/30/2022 12:21:07 PM.
IsRegressionStdDev: Marked as regression because -6.410882182847598 (T) = (0 -669.5355188242203) / Math.Sqrt((867.0339451611512 / (17)) + (709.075798026562 / (26))) is less than -2.019540970439573 = MathNet.Numerics.Distributions.StudentT.InvCDF(0, 1, (17) + (26) - 2, .025) and -0.09255424522176141 = (612.8167290113098 - 669.5355188242203) / 612.8167290113098 is less than -0.05.
IsImprovementBase: Marked as not an improvement because the compare was not 5% less than the baseline, or the value was too small.
IsChangeEdgeDetector: Marked not as a regression because Edge Detector said so.
```
### Docs
[Profiling workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/profiling-workflow-dotnet-runtime.md)
[Benchmarking workflow for dotnet/runtime repository](https://github.com/dotnet/performance/blob/master/docs/benchmarking-workflow-dotnet-runtime.md)
</details>
| non_priority | regressions in system tests perf run information architecture os windows baseline compare diff regressions in system collections containskeyfalse lt string string gt benchmark baseline test test base test quality edge detector baseline ir compare ir ir ratio baseline etl compare etl μs μs false repro cmd git clone py performance scripts benchmarks ci py f filter system collections containskeyfalse lt string string gt payloads histogram system collections containskeyfalse lt string string gt sortedlist size log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of am pm pm pm falls between pm and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so docs run information architecture os windows baseline compare diff regressions in system tests perf benchmark baseline test test base test quality edge detector baseline ir compare ir ir ratio baseline etl compare etl ns ns false repro cmd git clone py performance scripts benchmarks ci py f filter system tests perf payloads histogram system tests perf tryparsehex value log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of am pm pm am pm pm falls between pm and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so docs run information architecture os windows baseline compare diff regressions in system reflection attributes benchmark baseline test test base test quality edge detector baseline ir compare ir ir ratio baseline etl compare etl ns ns false μs μs false ns ns false ns ns false ns ns false repro cmd git clone py performance scripts benchmarks ci py f filter system reflection attributes payloads histogram system reflection attributes isdefinedmethodoverridemiss log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm am pm pm falls between pm and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so system reflection attributes isdefinedclassmissinherit log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of am pm pm falls between pm and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so system reflection attributes isdefinedmethodoverridehit log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm am pm pm falls between pm and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so system reflection attributes isdefinedmethodbasehitinherit log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of pm am pm pm falls between pm and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so system reflection attributes isdefinedclasshit log description of detection logic isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small isregressionbase marked as regression because the compare was greater than the baseline and the value was not too small isregressionchecked marked as regression because the three check build points were greater than the baseline isregressionwindowed marked as regression because ischangepoint marked as a change because one of am pm pm falls between pm and pm isregressionstddev marked as regression because t math sqrt is less than mathnet numerics distributions studentt invcdf and is less than isimprovementbase marked as not an improvement because the compare was not less than the baseline or the value was too small ischangeedgedetector marked not as a regression because edge detector said so docs | 0 |
48,121 | 13,301,491,813 | IssuesEvent | 2020-08-25 13:02:07 | rammatzkvosky/jdb | https://api.github.com/repos/rammatzkvosky/jdb | opened | CVE-2019-16335 (High) detected in jackson-databind-2.8.8.jar | security vulnerability | ## CVE-2019-16335 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/jdb/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/jdb/commit/9b00613d0ebc4ddf64e79f7e6c4ca43247c9e93c">9b00613d0ebc4ddf64e79f7e6c4ca43247c9e93c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.
<p>Publish Date: 2019-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335>CVE-2019-16335</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x">https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x</a></p>
<p>Release Date: 2019-09-15</p>
<p>Fix Resolution: 2.9.10</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.8","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.10"}],"vulnerabilityIdentifier":"CVE-2019-16335","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-16335 (High) detected in jackson-databind-2.8.8.jar - ## CVE-2019-16335 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/jdb/pom.xml</p>
<p>Path to vulnerable library: canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.8/jackson-databind-2.8.8.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/jdb/commit/9b00613d0ebc4ddf64e79f7e6c4ca43247c9e93c">9b00613d0ebc4ddf64e79f7e6c4ca43247c9e93c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.
<p>Publish Date: 2019-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335>CVE-2019-16335</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x">https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION-2.x</a></p>
<p>Release Date: 2019-09-15</p>
<p>Fix Resolution: 2.9.10</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.8","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.8.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.9.10"}],"vulnerabilityIdentifier":"CVE-2019-16335","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm jdb pom xml path to vulnerable library canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind before it is related to com zaxxer hikari hikaridatasource this is a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind before it is related to com zaxxer hikari hikaridatasource this is a different vulnerability than cve vulnerabilityurl | 0 |
334,854 | 29,993,598,322 | IssuesEvent | 2023-06-26 02:15:58 | TencentBlueKing/bk-cmdb | https://api.github.com/repos/TencentBlueKing/bk-cmdb | closed | 【3.10.24-alpha1 】删除业务时,删除成功提示未国际化及搜索框可搜索字段未国际化 | grayed tested | 一、前提条件
中英文切换中切换语言为英文或者中文
二 、重现步骤
如下图删除业务

确认删除成功提示中包含中文已彻底删除

搜索框可搜索字段未国际化

| 1.0 | 【3.10.24-alpha1 】删除业务时,删除成功提示未国际化及搜索框可搜索字段未国际化 - 一、前提条件
中英文切换中切换语言为英文或者中文
二 、重现步骤
如下图删除业务

确认删除成功提示中包含中文已彻底删除

搜索框可搜索字段未国际化

| non_priority | 【 】删除业务时,删除成功提示未国际化及搜索框可搜索字段未国际化 一、前提条件 中英文切换中切换语言为英文或者中文 二 、重现步骤 如下图删除业务 确认删除成功提示中包含中文已彻底删除 搜索框可搜索字段未国际化 | 0 |
242,780 | 20,263,068,937 | IssuesEvent | 2022-02-15 09:29:50 | elastic/elasticsearch | https://api.github.com/repos/elastic/elasticsearch | closed | [CI] KeyStoreWrapperTests fails on 6.8 with zulu8 and adoptopenjdk8 | >test-failure :Security/Security Team:Security v6.8.16 | **Build scan**:
zulu8: https://gradle-enterprise.elastic.co/s/gbxomp66nkfgq
adoptopenjdk8: https://gradle-enterprise.elastic.co/s/hixg5shxf5ehw
**Repro line**:
```
./gradlew ':server:unitTest' \
-Dtests.seed=AD757CB6ECE0EEDF \
-Dtests.class=org.elasticsearch.common.settings.KeyStoreWrapperTests \
-Dtests.method="testBackcompatV2" \
-Dtests.security.manager=true \
-Dtests.locale=ru \
-Dtests.timezone=Europe/Berlin \
-Dcompiler.java=11 \
-Druntime.java=8
```
**Reproduces locally?**:
I tried running this test locally with AdoptOpenJDK 1.8.0_292 as "Runtime JDK Version" but could not reproduce.
**Applicable branches**:
6.8
**Failure history**:
Sudden increase starting on April 23rd, approx
**Failure excerpt**:
```
java.security.KeyStoreException: Key protection algorithm not found: java.security.NoSuchAlgorithmException: unrecognized algorithm name: PBEWithMD5AndDES
at __randomizedtesting.SeedInfo.seed([BD1A136910B56F09:314EFEE1FED37C83]:0)
at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:694)
at sun.security.pkcs12.PKCS12KeyStore.engineSetEntry(PKCS12KeyStore.java:1413)
at java.security.KeyStore.setEntry(KeyStore.java:1557)
at org.elasticsearch.common.settings.KeyStoreWrapperTests.testBackcompatV1(KeyStoreWrapperTests.java:322)
```
| 1.0 | [CI] KeyStoreWrapperTests fails on 6.8 with zulu8 and adoptopenjdk8 - **Build scan**:
zulu8: https://gradle-enterprise.elastic.co/s/gbxomp66nkfgq
adoptopenjdk8: https://gradle-enterprise.elastic.co/s/hixg5shxf5ehw
**Repro line**:
```
./gradlew ':server:unitTest' \
-Dtests.seed=AD757CB6ECE0EEDF \
-Dtests.class=org.elasticsearch.common.settings.KeyStoreWrapperTests \
-Dtests.method="testBackcompatV2" \
-Dtests.security.manager=true \
-Dtests.locale=ru \
-Dtests.timezone=Europe/Berlin \
-Dcompiler.java=11 \
-Druntime.java=8
```
**Reproduces locally?**:
I tried running this test locally with AdoptOpenJDK 1.8.0_292 as "Runtime JDK Version" but could not reproduce.
**Applicable branches**:
6.8
**Failure history**:
Sudden increase starting on April 23rd, approx
**Failure excerpt**:
```
java.security.KeyStoreException: Key protection algorithm not found: java.security.NoSuchAlgorithmException: unrecognized algorithm name: PBEWithMD5AndDES
at __randomizedtesting.SeedInfo.seed([BD1A136910B56F09:314EFEE1FED37C83]:0)
at sun.security.pkcs12.PKCS12KeyStore.setKeyEntry(PKCS12KeyStore.java:694)
at sun.security.pkcs12.PKCS12KeyStore.engineSetEntry(PKCS12KeyStore.java:1413)
at java.security.KeyStore.setEntry(KeyStore.java:1557)
at org.elasticsearch.common.settings.KeyStoreWrapperTests.testBackcompatV1(KeyStoreWrapperTests.java:322)
```
| non_priority | keystorewrappertests fails on with and build scan repro line gradlew server unittest dtests seed dtests class org elasticsearch common settings keystorewrappertests dtests method dtests security manager true dtests locale ru dtests timezone europe berlin dcompiler java druntime java reproduces locally i tried running this test locally with adoptopenjdk as runtime jdk version but could not reproduce applicable branches failure history sudden increase starting on april approx failure excerpt java security keystoreexception key protection algorithm not found java security nosuchalgorithmexception unrecognized algorithm name at randomizedtesting seedinfo seed at sun security setkeyentry java at sun security enginesetentry java at java security keystore setentry keystore java at org elasticsearch common settings keystorewrappertests keystorewrappertests java | 0 |
439,231 | 30,685,945,037 | IssuesEvent | 2023-07-26 12:22:32 | kiran-evans/kreddit | https://api.github.com/repos/kiran-evans/kreddit | closed | Write README that documents the project | documentation | Write a README (using Markdown) that documents your project including:
Wireframes
Technologies used
Features
Future work
| 1.0 | Write README that documents the project - Write a README (using Markdown) that documents your project including:
Wireframes
Technologies used
Features
Future work
| non_priority | write readme that documents the project write a readme using markdown that documents your project including wireframes technologies used features future work | 0 |
4,729 | 2,871,346,919 | IssuesEvent | 2015-06-08 01:31:34 | tjchambers32/Embedded-Reflex-Test | https://api.github.com/repos/tjchambers32/Embedded-Reflex-Test | closed | Document Original Dev Board | documentation | It would be good to document the hardware platform this was originally designed on (the ECEn 330 Dev Board). We should get a picture at least. | 1.0 | Document Original Dev Board - It would be good to document the hardware platform this was originally designed on (the ECEn 330 Dev Board). We should get a picture at least. | non_priority | document original dev board it would be good to document the hardware platform this was originally designed on the ecen dev board we should get a picture at least | 0 |
300,542 | 22,685,832,915 | IssuesEvent | 2022-07-04 14:01:13 | nf-core/eager | https://api.github.com/repos/nf-core/eager | closed | Clarify "very short reads" in helptext of `clip_readlength` | documentation next-patch | The current helptext reads:
>Defines the minimum read length that is required for reads after merging to be considered for downstream analysis after read merging. Default is 30.
>Note that performing read length filtering at this step is not reliable for correct endogenous DNA calculation, when you have a large percentage of very short reads in your library - such as retrieved in single-stranded library protocols. When you have very few reads passing this length filter, it will artificially inflate your endogenous DNA by creating a very small denominator. In these cases it is recommended to set this to 0, and use `--bam_filter_minreadlength` instead, to filter out 'un-usable' short reads after mapping.
We should clarify what "very short reads in your library" means. To my understanding that would be a length distribution peak below 20bp. The added computational work to map all sequenced fragments is considerable, and this approach can be avoided when the length distribution peak is still within 20/25bp. In such cases I think users could lower the `clip_readlength` without actually setting it to 0 and avoid all the extra computation while still getting an Endo % that is comparable to that given with default settings. | 1.0 | Clarify "very short reads" in helptext of `clip_readlength` - The current helptext reads:
>Defines the minimum read length that is required for reads after merging to be considered for downstream analysis after read merging. Default is 30.
>Note that performing read length filtering at this step is not reliable for correct endogenous DNA calculation, when you have a large percentage of very short reads in your library - such as retrieved in single-stranded library protocols. When you have very few reads passing this length filter, it will artificially inflate your endogenous DNA by creating a very small denominator. In these cases it is recommended to set this to 0, and use `--bam_filter_minreadlength` instead, to filter out 'un-usable' short reads after mapping.
We should clarify what "very short reads in your library" means. To my understanding that would be a length distribution peak below 20bp. The added computational work to map all sequenced fragments is considerable, and this approach can be avoided when the length distribution peak is still within 20/25bp. In such cases I think users could lower the `clip_readlength` without actually setting it to 0 and avoid all the extra computation while still getting an Endo % that is comparable to that given with default settings. | non_priority | clarify very short reads in helptext of clip readlength the current helptext reads defines the minimum read length that is required for reads after merging to be considered for downstream analysis after read merging default is note that performing read length filtering at this step is not reliable for correct endogenous dna calculation when you have a large percentage of very short reads in your library such as retrieved in single stranded library protocols when you have very few reads passing this length filter it will artificially inflate your endogenous dna by creating a very small denominator in these cases it is recommended to set this to and use bam filter minreadlength instead to filter out un usable short reads after mapping we should clarify what very short reads in your library means to my understanding that would be a length distribution peak below the added computational work to map all sequenced fragments is considerable and this approach can be avoided when the length distribution peak is still within in such cases i think users could lower the clip readlength without actually setting it to and avoid all the extra computation while still getting an endo that is comparable to that given with default settings | 0 |
221,800 | 24,659,153,978 | IssuesEvent | 2022-10-18 04:21:29 | valtech-ch/microservice-kubernetes-cluster | https://api.github.com/repos/valtech-ch/microservice-kubernetes-cluster | reopened | CVE-2019-16335 (High) detected in jackson-databind-2.9.8.jar | security vulnerability | ## CVE-2019-16335 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /functions/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-function-web-3.2.7.jar (Root Library)
- spring-boot-starter-web-2.7.4.jar
- spring-boot-starter-json-2.7.4.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.
<p>Publish Date: 2019-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335>CVE-2019-16335</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2019-09-15</p>
<p>Fix Resolution: 2.9.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-16335 (High) detected in jackson-databind-2.9.8.jar - ## CVE-2019-16335 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /functions/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-starter-function-web-3.2.7.jar (Root Library)
- spring-boot-starter-web-2.7.4.jar
- spring-boot-starter-json-2.7.4.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/335a4047c89f52dfe860e93daefb32dc86a521a2">335a4047c89f52dfe860e93daefb32dc86a521a2</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind before 2.9.10. It is related to com.zaxxer.hikari.HikariDataSource. This is a different vulnerability than CVE-2019-14540.
<p>Publish Date: 2019-09-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16335>CVE-2019-16335</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2019-09-15</p>
<p>Fix Resolution: 2.9.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file functions build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring cloud starter function web jar root library spring boot starter web jar spring boot starter json jar x jackson databind jar vulnerable library found in head commit a href found in base branch develop vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind before it is related to com zaxxer hikari hikaridatasource this is a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend | 0 |
179,484 | 14,704,652,331 | IssuesEvent | 2021-01-04 16:49:18 | SketchUp/api-issue-tracker | https://api.github.com/repos/SketchUp/api-issue-tracker | closed | UI::Notification - The two buttons always appears | Ruby API SketchUp documentation | If you use **only one** button via _#on_accept_ or _#on_dismiss_, BOTH buttons always appear in the notification box, the second one being empty.

| 1.0 | UI::Notification - The two buttons always appears - If you use **only one** button via _#on_accept_ or _#on_dismiss_, BOTH buttons always appear in the notification box, the second one being empty.

| non_priority | ui notification the two buttons always appears if you use only one button via on accept or on dismiss both buttons always appear in the notification box the second one being empty | 0 |
17,936 | 5,535,189,948 | IssuesEvent | 2017-03-21 16:51:55 | phetsims/masses-and-springs | https://api.github.com/repos/phetsims/masses-and-springs | closed | Adjust approach to ToolboxPanel.js start event. | dev:code-review | As per the suggestion of @samreid, this code should be revised for clarity. It seems to be finding the parent ScreenView before starting the "start" event. It isn't clear whether this code is necessary and should be revised.
```js
if ( !timerParentScreenView2 ) {
var testNode = self;
while ( testNode !== null ) {
if ( testNode instanceof ScreenView ) {
timerParentScreenView2 = testNode;
break;
}
testNode = testNode.parents[ 0 ]; // move up the scene graph by one level
}
assert && assert( timerParentScreenView2, 'unable to find parent screen view' );
}
``` | 1.0 | Adjust approach to ToolboxPanel.js start event. - As per the suggestion of @samreid, this code should be revised for clarity. It seems to be finding the parent ScreenView before starting the "start" event. It isn't clear whether this code is necessary and should be revised.
```js
if ( !timerParentScreenView2 ) {
var testNode = self;
while ( testNode !== null ) {
if ( testNode instanceof ScreenView ) {
timerParentScreenView2 = testNode;
break;
}
testNode = testNode.parents[ 0 ]; // move up the scene graph by one level
}
assert && assert( timerParentScreenView2, 'unable to find parent screen view' );
}
``` | non_priority | adjust approach to toolboxpanel js start event as per the suggestion of samreid this code should be revised for clarity it seems to be finding the parent screenview before starting the start event it isn t clear whether this code is necessary and should be revised js if var testnode self while testnode null if testnode instanceof screenview testnode break testnode testnode parents move up the scene graph by one level assert assert unable to find parent screen view | 0 |
130,422 | 18,072,996,785 | IssuesEvent | 2021-09-21 06:27:14 | BitcoinDesign/Guide | https://api.github.com/repos/BitcoinDesign/Guide | opened | Add ⚡️ content to `Designing Bitcoin Products` > `Design resources` | copy Design Bitcoin Products | Add Lightning-related content to the [Design resources](https://bitcoin.design/guide/designing-products/design-resources/) page, as appropriate. This page likely needs only minor tweaks.
Secondary, do a general review of this page (check overlap with other pages, add relevant cross-links...) and check open issues for changes on this page and implement them. | 1.0 | Add ⚡️ content to `Designing Bitcoin Products` > `Design resources` - Add Lightning-related content to the [Design resources](https://bitcoin.design/guide/designing-products/design-resources/) page, as appropriate. This page likely needs only minor tweaks.
Secondary, do a general review of this page (check overlap with other pages, add relevant cross-links...) and check open issues for changes on this page and implement them. | non_priority | add ⚡️ content to designing bitcoin products design resources add lightning related content to the page as appropriate this page likely needs only minor tweaks secondary do a general review of this page check overlap with other pages add relevant cross links and check open issues for changes on this page and implement them | 0 |
6,680 | 3,040,446,607 | IssuesEvent | 2015-08-07 15:27:03 | emberjs/ember.js | https://api.github.com/repos/emberjs/ember.js | closed | Document `unbound` | Documentation Good for New Contributors | Changes to the behavior of `unbound` were made in: https://github.com/emberjs/ember.js/pull/11965
This helper has no docs, but we should add some now that it is roughly sane. | 1.0 | Document `unbound` - Changes to the behavior of `unbound` were made in: https://github.com/emberjs/ember.js/pull/11965
This helper has no docs, but we should add some now that it is roughly sane. | non_priority | document unbound changes to the behavior of unbound were made in this helper has no docs but we should add some now that it is roughly sane | 0 |
5,505 | 7,189,844,847 | IssuesEvent | 2018-02-02 15:22:02 | ga4gh/dockstore | https://api.github.com/repos/ga4gh/dockstore | opened | Refresh of source files should record last modification date | enhancement web service | ## Feature Request
### Desired behaviour
Source files should pull down modification dates, these are files for tools, workflows, dockerfiles, etc. | 1.0 | Refresh of source files should record last modification date - ## Feature Request
### Desired behaviour
Source files should pull down modification dates, these are files for tools, workflows, dockerfiles, etc. | non_priority | refresh of source files should record last modification date feature request desired behaviour source files should pull down modification dates these are files for tools workflows dockerfiles etc | 0 |
89,651 | 11,271,355,744 | IssuesEvent | 2020-01-14 12:53:35 | functional-streams-for-scala/fs2 | https://api.github.com/repos/functional-streams-for-scala/fs2 | closed | Improve Stream#observe | design/refactoring | Find the way how to improve current observe semantics and potentially its performance | 1.0 | Improve Stream#observe - Find the way how to improve current observe semantics and potentially its performance | non_priority | improve stream observe find the way how to improve current observe semantics and potentially its performance | 0 |
149,692 | 13,299,441,272 | IssuesEvent | 2020-08-25 09:44:34 | smsdigital/styleguide-variables | https://api.github.com/repos/smsdigital/styleguide-variables | closed | Proper readme | documentation help wanted | Create a proper readme file.
This should include examples of the output of the converted files but also usage-examples. | 1.0 | Proper readme - Create a proper readme file.
This should include examples of the output of the converted files but also usage-examples. | non_priority | proper readme create a proper readme file this should include examples of the output of the converted files but also usage examples | 0 |
12,250 | 14,767,694,787 | IssuesEvent | 2021-01-10 08:16:30 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Fork start method is susceptible to deadlocks | feature module: multiprocessing module: multithreading todo triaged | ## Versions
```
Ubuntu 16.04
Python 3.6.2
pytorch 0.1.12_2
```
## Issue description
```python
import torch
import torch.multiprocessing as mp
import torch.functional as f
import threading
import numpy as np
from timeit import timeit
def build(cuda=False):
nn = torch.nn.Sequential(
torch.nn.Linear(1024, 1024),
torch.nn.Linear(1024, 1)
)
return nn.cuda() if cuda else nn
def train(nn, X, y, epoch=100):
X = torch.autograd.Variable(X)
y = torch.autograd.Variable(y)
optim = torch.optim.SGD(nn.parameters(), lr=0.1)
for i in range(epoch):
yhat = nn(X)
loss = ((yhat - y) ** 2).mean()
loss.backward()
optim.step()
def data(cuda=False):
X = torch.Tensor(np.random.randn(10, 1024))
y = torch.Tensor(np.random.randn(10, 1))
return (X.cuda(), y.cuda()) if cuda else (X, y)
def cpu_run(i=None):
nn = build(cuda=False)
d = data(cuda=False)
train(nn, *d)
def seq_cpu_run():
for i in range(5):
cpu_run()
def multiprocess_cpu_run():
pool = torch.multiprocessing.Pool(processes=1)
result = pool.map(cpu_run, [() for i in range(1)])
pool.close()
pool.join()
return result
if __name__ == "__main__":
print(timeit(seq_cpu_run, number=1)) # 1
print(timeit(multiprocess_cpu_run, number=1)) # 2
```
#1 run okay alone.
#2 run okay alone.
#2 then #1 runs okay.
#1 then #2 never terminate.
where
#1 = seq_cpu_run, #2 = multiprocess_cpu_run
| 1.0 | Fork start method is susceptible to deadlocks - ## Versions
```
Ubuntu 16.04
Python 3.6.2
pytorch 0.1.12_2
```
## Issue description
```python
import torch
import torch.multiprocessing as mp
import torch.functional as f
import threading
import numpy as np
from timeit import timeit
def build(cuda=False):
nn = torch.nn.Sequential(
torch.nn.Linear(1024, 1024),
torch.nn.Linear(1024, 1)
)
return nn.cuda() if cuda else nn
def train(nn, X, y, epoch=100):
X = torch.autograd.Variable(X)
y = torch.autograd.Variable(y)
optim = torch.optim.SGD(nn.parameters(), lr=0.1)
for i in range(epoch):
yhat = nn(X)
loss = ((yhat - y) ** 2).mean()
loss.backward()
optim.step()
def data(cuda=False):
X = torch.Tensor(np.random.randn(10, 1024))
y = torch.Tensor(np.random.randn(10, 1))
return (X.cuda(), y.cuda()) if cuda else (X, y)
def cpu_run(i=None):
nn = build(cuda=False)
d = data(cuda=False)
train(nn, *d)
def seq_cpu_run():
for i in range(5):
cpu_run()
def multiprocess_cpu_run():
pool = torch.multiprocessing.Pool(processes=1)
result = pool.map(cpu_run, [() for i in range(1)])
pool.close()
pool.join()
return result
if __name__ == "__main__":
print(timeit(seq_cpu_run, number=1)) # 1
print(timeit(multiprocess_cpu_run, number=1)) # 2
```
#1 run okay alone.
#2 run okay alone.
#2 then #1 runs okay.
#1 then #2 never terminate.
where
#1 = seq_cpu_run, #2 = multiprocess_cpu_run
| non_priority | fork start method is susceptible to deadlocks versions ubuntu python pytorch issue description python import torch import torch multiprocessing as mp import torch functional as f import threading import numpy as np from timeit import timeit def build cuda false nn torch nn sequential torch nn linear torch nn linear return nn cuda if cuda else nn def train nn x y epoch x torch autograd variable x y torch autograd variable y optim torch optim sgd nn parameters lr for i in range epoch yhat nn x loss yhat y mean loss backward optim step def data cuda false x torch tensor np random randn y torch tensor np random randn return x cuda y cuda if cuda else x y def cpu run i none nn build cuda false d data cuda false train nn d def seq cpu run for i in range cpu run def multiprocess cpu run pool torch multiprocessing pool processes result pool map cpu run pool close pool join return result if name main print timeit seq cpu run number print timeit multiprocess cpu run number run okay alone run okay alone then runs okay then never terminate where seq cpu run multiprocess cpu run | 0 |
72,601 | 19,345,901,426 | IssuesEvent | 2021-12-15 10:45:58 | icsharpcode/AvalonEdit | https://api.github.com/repos/icsharpcode/AvalonEdit | closed | Fix Unit Test for .NET Core 3.1 & .NET 6.0 | Issue-Bug Area-Build | ```
Log level is set to Informational (Default).
Connected to test environment '< Local Windows Environment >'
Test data store opened in 0.018 sec.
========== Starting test discovery ==========
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
========== Test discovery aborted: 374 Tests found in 6.1 sec ==========
========== Starting test discovery ==========
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit\bin\Debug\net40\ICSharpCode.AvalonEdit.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
========== Test discovery finished: 187 Tests found in 3.8 sec ==========
========== Starting test run ==========
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net40\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net45\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 374 Tests (374 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
========== Starting test run ==========
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net40\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net45\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 374 Tests (374 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
Building Test Projects
========== Starting test run ==========
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 0 Tests (0 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
Building Test Projects
========== Starting test run ==========
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 0 Tests (0 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
Test data store opened in 0.092 sec.
========== Starting test discovery ==========
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
========== Test discovery aborted: 374 Tests found in 4.9 sec ==========
========== Starting test discovery ==========
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit\bin\Debug\net40\ICSharpCode.AvalonEdit.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
========== Test discovery finished: 187 Tests found in 3.7 sec ==========
========== Starting test run ==========
NUnit Adapter 3.13.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net40\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor converted 187 of 187 NUnit test cases
NUnit Adapter 3.13.0.0: Test execution complete
NUnit Adapter 3.13.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net45\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor converted 187 of 187 NUnit test cases
NUnit Adapter 3.13.0.0: Test execution complete
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 374 Tests (374 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
========== Starting test discovery ==========
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
========== Test discovery aborted: 374 Tests found in 3.9 sec ==========
Starting test discovery for requested test run
========== Starting test discovery ==========
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit\bin\Debug\net40\ICSharpCode.AvalonEdit.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
========== Test discovery finished: 187 Tests found in 3.9 sec ==========
========== Starting test run ==========
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net40\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net45\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 374 Tests (374 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
``` | 1.0 | Fix Unit Test for .NET Core 3.1 & .NET 6.0 - ```
Log level is set to Informational (Default).
Connected to test environment '< Local Windows Environment >'
Test data store opened in 0.018 sec.
========== Starting test discovery ==========
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
========== Test discovery aborted: 374 Tests found in 6.1 sec ==========
========== Starting test discovery ==========
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit\bin\Debug\net40\ICSharpCode.AvalonEdit.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
========== Test discovery finished: 187 Tests found in 3.8 sec ==========
========== Starting test run ==========
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net40\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net45\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 374 Tests (374 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
========== Starting test run ==========
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net40\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net45\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 374 Tests (374 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
Building Test Projects
========== Starting test run ==========
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 0 Tests (0 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
Building Test Projects
========== Starting test run ==========
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 0 Tests (0 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
Test data store opened in 0.092 sec.
========== Starting test discovery ==========
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
========== Test discovery aborted: 374 Tests found in 4.9 sec ==========
========== Starting test discovery ==========
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 3.13.0.0: Test discovery starting
NUnit Adapter 3.13.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit\bin\Debug\net40\ICSharpCode.AvalonEdit.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
========== Test discovery finished: 187 Tests found in 3.7 sec ==========
========== Starting test run ==========
NUnit Adapter 3.13.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net40\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor converted 187 of 187 NUnit test cases
NUnit Adapter 3.13.0.0: Test execution complete
NUnit Adapter 3.13.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net45\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor converted 187 of 187 NUnit test cases
NUnit Adapter 3.13.0.0: Test execution complete
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 374 Tests (374 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
========== Starting test discovery ==========
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
========== Test discovery aborted: 374 Tests found in 3.9 sec ==========
Starting test discovery for requested test run
========== Starting test discovery ==========
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Sample\bin\Debug\net472\ICSharpCode.AvalonEdit.Sample.exe. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
Microsoft.VisualStudio.TestPlatform.ObjectModel.TestPlatformException: Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostPath(String runtimeConfigDevPath, String depsFilePath, String sourceDirectory)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Hosting.DotnetTestHostManager.GetTestHostProcessStartInfo(IEnumerable`1 sources, IDictionary`2 environmentVariables, TestRunnerConnectionInfo connectionInfo)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyOperationManager.SetupChannel(IEnumerable`1 sources, String runSettings)
at Microsoft.VisualStudio.TestPlatform.CrossPlatEngine.Client.ProxyDiscoveryManager.DiscoverTests(DiscoveryCriteria discoveryCriteria, ITestDiscoveryEventsHandler2 eventHandler)
NUnit Adapter 4.1.0.0: Test discovery starting
NUnit Adapter 4.1.0.0: Test discovery complete
No test is available in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit\bin\Debug\net40\ICSharpCode.AvalonEdit.dll. Make sure that test discoverer & executors are registered and platform & framework version settings are appropriate and try again.
========== Test discovery finished: 187 Tests found in 3.9 sec ==========
========== Starting test run ==========
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net40\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
NUnit Adapter 4.1.0.0: Test execution started
Running selected tests in D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net45\ICSharpCode.AvalonEdit.Tests.dll
NUnit3TestExecutor discovered 187 of 187 NUnit test cases using Current Discovery mode, Non-Explicit run
NUnit Adapter 4.1.0.0: Test execution complete
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\net6.0-windows\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
Unable to find D:\GitWorkspace\AvalonEdit\ICSharpCode.AvalonEdit.Tests\bin\Debug\netcoreapp3.1\testhost.dll. Please publish your test project and retry.
========== Test run aborted: 374 Tests (374 Passed, 0 Failed, 0 Skipped) run in < 1 ms ==========
``` | non_priority | fix unit test for net core net log level is set to informational default connected to test environment test data store opened in sec starting test discovery nunit adapter test discovery starting nunit adapter test discovery complete no test is available in d gitworkspace avalonedit icsharpcode avalonedit sample bin debug icsharpcode avalonedit sample exe make sure that test discoverer executors are registered and platform framework version settings are appropriate and try again nunit adapter test discovery starting nunit adapter test discovery complete nunit adapter test discovery starting nunit adapter test discovery complete microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler test discovery aborted tests found in sec starting test discovery microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler nunit adapter test discovery starting nunit adapter test discovery complete nunit adapter test discovery starting nunit adapter test discovery complete no test is available in d gitworkspace avalonedit icsharpcode avalonedit sample bin debug icsharpcode avalonedit sample exe make sure that test discoverer executors are registered and platform framework version settings are appropriate and try again microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler nunit adapter test discovery starting nunit adapter test discovery complete no test is available in d gitworkspace avalonedit icsharpcode avalonedit bin debug icsharpcode avalonedit dll make sure that test discoverer executors are registered and platform framework version settings are appropriate and try again test discovery finished tests found in sec starting test run nunit adapter test execution started running selected tests in d gitworkspace avalonedit icsharpcode avalonedit tests bin debug icsharpcode avalonedit tests dll discovered of nunit test cases using current discovery mode non explicit run nunit adapter test execution complete nunit adapter test execution started running selected tests in d gitworkspace avalonedit icsharpcode avalonedit tests bin debug icsharpcode avalonedit tests dll discovered of nunit test cases using current discovery mode non explicit run nunit adapter test execution complete unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry test run aborted tests passed failed skipped run in ms starting test run nunit adapter test execution started running selected tests in d gitworkspace avalonedit icsharpcode avalonedit tests bin debug icsharpcode avalonedit tests dll discovered of nunit test cases using current discovery mode non explicit run nunit adapter test execution complete nunit adapter test execution started running selected tests in d gitworkspace avalonedit icsharpcode avalonedit tests bin debug icsharpcode avalonedit tests dll discovered of nunit test cases using current discovery mode non explicit run nunit adapter test execution complete unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry test run aborted tests passed failed skipped run in ms building test projects starting test run unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry test run aborted tests passed failed skipped run in ms building test projects starting test run unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry test run aborted tests passed failed skipped run in ms test data store opened in sec starting test discovery nunit adapter test discovery starting nunit adapter test discovery complete no test is available in d gitworkspace avalonedit icsharpcode avalonedit sample bin debug icsharpcode avalonedit sample exe make sure that test discoverer executors are registered and platform framework version settings are appropriate and try again nunit adapter test discovery starting nunit adapter test discovery complete nunit adapter test discovery starting nunit adapter test discovery complete microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler test discovery aborted tests found in sec starting test discovery microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler nunit adapter test discovery starting nunit adapter test discovery complete nunit adapter test discovery starting nunit adapter test discovery complete no test is available in d gitworkspace avalonedit icsharpcode avalonedit sample bin debug icsharpcode avalonedit sample exe make sure that test discoverer executors are registered and platform framework version settings are appropriate and try again microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler nunit adapter test discovery starting nunit adapter test discovery complete no test is available in d gitworkspace avalonedit icsharpcode avalonedit bin debug icsharpcode avalonedit dll make sure that test discoverer executors are registered and platform framework version settings are appropriate and try again test discovery finished tests found in sec starting test run nunit adapter test execution started running selected tests in d gitworkspace avalonedit icsharpcode avalonedit tests bin debug icsharpcode avalonedit tests dll converted of nunit test cases nunit adapter test execution complete nunit adapter test execution started running selected tests in d gitworkspace avalonedit icsharpcode avalonedit tests bin debug icsharpcode avalonedit tests dll converted of nunit test cases nunit adapter test execution complete unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry test run aborted tests passed failed skipped run in ms starting test discovery nunit adapter test discovery starting nunit adapter test discovery complete no test is available in d gitworkspace avalonedit icsharpcode avalonedit sample bin debug icsharpcode avalonedit sample exe make sure that test discoverer executors are registered and platform framework version settings are appropriate and try again nunit adapter test discovery starting nunit adapter test discovery complete nunit adapter test discovery starting nunit adapter test discovery complete microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler test discovery aborted tests found in sec starting test discovery for requested test run starting test discovery nunit adapter test discovery starting nunit adapter test discovery complete no test is available in d gitworkspace avalonedit icsharpcode avalonedit sample bin debug icsharpcode avalonedit sample exe make sure that test discoverer executors are registered and platform framework version settings are appropriate and try again nunit adapter test discovery starting nunit adapter test discovery complete microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler microsoft visualstudio testplatform objectmodel testplatformexception unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostpath string runtimeconfigdevpath string depsfilepath string sourcedirectory at microsoft visualstudio testplatform crossplatengine hosting dotnettesthostmanager gettesthostprocessstartinfo ienumerable sources idictionary environmentvariables testrunnerconnectioninfo connectioninfo at microsoft visualstudio testplatform crossplatengine client proxyoperationmanager setupchannel ienumerable sources string runsettings at microsoft visualstudio testplatform crossplatengine client proxydiscoverymanager discovertests discoverycriteria discoverycriteria eventhandler nunit adapter test discovery starting nunit adapter test discovery complete no test is available in d gitworkspace avalonedit icsharpcode avalonedit bin debug icsharpcode avalonedit dll make sure that test discoverer executors are registered and platform framework version settings are appropriate and try again test discovery finished tests found in sec starting test run nunit adapter test execution started running selected tests in d gitworkspace avalonedit icsharpcode avalonedit tests bin debug icsharpcode avalonedit tests dll discovered of nunit test cases using current discovery mode non explicit run nunit adapter test execution complete nunit adapter test execution started running selected tests in d gitworkspace avalonedit icsharpcode avalonedit tests bin debug icsharpcode avalonedit tests dll discovered of nunit test cases using current discovery mode non explicit run nunit adapter test execution complete unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug windows testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry unable to find d gitworkspace avalonedit icsharpcode avalonedit tests bin debug testhost dll please publish your test project and retry test run aborted tests passed failed skipped run in ms | 0 |
260,985 | 19,692,932,654 | IssuesEvent | 2022-01-12 09:11:13 | dreemurrs-embedded/Pine64-Arch | https://api.github.com/repos/dreemurrs-embedded/Pine64-Arch | closed | The Barebones Image Quick Start wiki page should be updated for the Pinephone keyboard | documentation | The Pinephone Keyboard is a third option besides a USB keyboard or SSH.
[The page in question](https://github.com/dreemurrs-embedded/Pine64-Arch/wiki/Barebone-Image-Quick-Start) | 1.0 | The Barebones Image Quick Start wiki page should be updated for the Pinephone keyboard - The Pinephone Keyboard is a third option besides a USB keyboard or SSH.
[The page in question](https://github.com/dreemurrs-embedded/Pine64-Arch/wiki/Barebone-Image-Quick-Start) | non_priority | the barebones image quick start wiki page should be updated for the pinephone keyboard the pinephone keyboard is a third option besides a usb keyboard or ssh | 0 |
66,429 | 20,194,757,055 | IssuesEvent | 2022-02-11 09:38:17 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Settings.parseRetainCommentsBetweenQueries doesn't work for the last comment | T: Defect P: Medium E: All Editions C: Parser | The new feature https://github.com/jOOQ/jOOQ/issues/12538 implemented in jOOQ 3.16 doesn't work for trailing comments:
```sql
-- ==========================================
-- header
-- created 2000-01-01
-- ==========================================
-- Change Request #14813
-- ------------------------------------------
CREATE TABLE x123 (i int);
CREATE TABLE x456 (j int);
-- Special table
CREATE TABLE x789 (k int);
-- Change Request #153819
-- ------------------------------------------
ALTER TABLE x789 ADD x int;
-- Trailing comment
```
That last trailing comment is lost. | 1.0 | Settings.parseRetainCommentsBetweenQueries doesn't work for the last comment - The new feature https://github.com/jOOQ/jOOQ/issues/12538 implemented in jOOQ 3.16 doesn't work for trailing comments:
```sql
-- ==========================================
-- header
-- created 2000-01-01
-- ==========================================
-- Change Request #14813
-- ------------------------------------------
CREATE TABLE x123 (i int);
CREATE TABLE x456 (j int);
-- Special table
CREATE TABLE x789 (k int);
-- Change Request #153819
-- ------------------------------------------
ALTER TABLE x789 ADD x int;
-- Trailing comment
```
That last trailing comment is lost. | non_priority | settings parseretaincommentsbetweenqueries doesn t work for the last comment the new feature implemented in jooq doesn t work for trailing comments sql header created change request create table i int create table j int special table create table k int change request alter table add x int trailing comment that last trailing comment is lost | 0 |
55,708 | 6,489,357,128 | IssuesEvent | 2017-08-21 01:09:03 | FireFly-WoW/FireFly-IssueTracker | https://api.github.com/repos/FireFly-WoW/FireFly-IssueTracker | closed | Icemist village bug | Status: Needs Testing | **Description:**
Theres an Star's Rest Sentinel following me around in Icemist Village.
**Current behaviour:**
This one keeps following me around here. It also keeps evading my attacks and its lvl 75 elite.
**Expected behaviour:**
It shouldnt happen.
**Steps to reproduce the problem:**
Go to Icemist Village
2. Go inside and near the buildings
3. Soon it will start showing up
**Screenshots:**
http://imgur.com/a/GngaD
| 1.0 | Icemist village bug - **Description:**
Theres an Star's Rest Sentinel following me around in Icemist Village.
**Current behaviour:**
This one keeps following me around here. It also keeps evading my attacks and its lvl 75 elite.
**Expected behaviour:**
It shouldnt happen.
**Steps to reproduce the problem:**
Go to Icemist Village
2. Go inside and near the buildings
3. Soon it will start showing up
**Screenshots:**
http://imgur.com/a/GngaD
| non_priority | icemist village bug description theres an star s rest sentinel following me around in icemist village current behaviour this one keeps following me around here it also keeps evading my attacks and its lvl elite expected behaviour it shouldnt happen steps to reproduce the problem go to icemist village go inside and near the buildings soon it will start showing up screenshots | 0 |
89,887 | 10,618,334,719 | IssuesEvent | 2019-10-13 03:27:48 | sharyuwu/optimum-tilt-of-solar-panels | https://api.github.com/repos/sharyuwu/optimum-tilt-of-solar-panels | closed | SRS Review: Internal Document Links Not Working | bug documentation | Some of the references in the document aren't being called correctly in the LaTeX file, which causes them to show up as ?? in the pdf.

| 1.0 | SRS Review: Internal Document Links Not Working - Some of the references in the document aren't being called correctly in the LaTeX file, which causes them to show up as ?? in the pdf.

| non_priority | srs review internal document links not working some of the references in the document aren t being called correctly in the latex file which causes them to show up as in the pdf | 0 |
96,388 | 16,129,628,621 | IssuesEvent | 2021-04-29 01:05:55 | RG4421/ampere-centos-kernel | https://api.github.com/repos/RG4421/ampere-centos-kernel | opened | CVE-2019-19079 (High) detected in linuxv5.2 | security vulnerability | ## CVE-2019-19079 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>ampere-centos-kernel/net/qrtr/tun.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>ampere-centos-kernel/net/qrtr/tun.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the qrtr_tun_write_iter() function in net/qrtr/tun.c in the Linux kernel before 5.3 allows attackers to cause a denial of service (memory consumption), aka CID-a21b7f0cff19.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19079>CVE-2019-19079</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19079">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19079</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.3</p>
</p>
</details>
<p></p>
| True | CVE-2019-19079 (High) detected in linuxv5.2 - ## CVE-2019-19079 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>ampere-centos-kernel/net/qrtr/tun.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>ampere-centos-kernel/net/qrtr/tun.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the qrtr_tun_write_iter() function in net/qrtr/tun.c in the Linux kernel before 5.3 allows attackers to cause a denial of service (memory consumption), aka CID-a21b7f0cff19.
<p>Publish Date: 2019-11-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19079>CVE-2019-19079</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19079">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-19079</a></p>
<p>Release Date: 2019-11-18</p>
<p>Fix Resolution: v5.3</p>
</p>
</details>
<p></p>
| non_priority | cve high detected in cve high severity vulnerability vulnerable library linux kernel source tree library home page a href found in base branch amp centos kernel vulnerable source files ampere centos kernel net qrtr tun c ampere centos kernel net qrtr tun c vulnerability details a memory leak in the qrtr tun write iter function in net qrtr tun c in the linux kernel before allows attackers to cause a denial of service memory consumption aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
83,755 | 16,361,719,124 | IssuesEvent | 2021-05-14 10:30:14 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | [4.0] Unused modal ? | No Code Attached Yet | Can anyone tell me where `administrator\components\com_fields\tmpl\field\modal.php` is used | 1.0 | [4.0] Unused modal ? - Can anyone tell me where `administrator\components\com_fields\tmpl\field\modal.php` is used | non_priority | unused modal can anyone tell me where administrator components com fields tmpl field modal php is used | 0 |
84,485 | 10,542,113,533 | IssuesEvent | 2019-10-02 12:29:36 | Automattic/woocommerce-services | https://api.github.com/repos/Automattic/woocommerce-services | closed | [Labels UI] UI for selecting rates from multiple carriers | Shipping Labels Shipping Rates [Status] Needs Design | In the "Rates" step of the "Purchase Label" modal, for each package we'll show *all* the available services that the merchant can use. Obviously there won't be rates from all the carriers (if there's a US domestic shipment I doubt we'll get rates from New Zealand Post), but there will be more than now.
We need to figure out how to show those rates to the user. The simplest option would be grouping them by carrier, using `<optgroup>` tags.
Side note: Maybe if the user selects rates from different carriers for different packages we should show a warning. I doubt that going to 2 postal offices for a single shipment is a desired use case 90% of the time. | 1.0 | [Labels UI] UI for selecting rates from multiple carriers - In the "Rates" step of the "Purchase Label" modal, for each package we'll show *all* the available services that the merchant can use. Obviously there won't be rates from all the carriers (if there's a US domestic shipment I doubt we'll get rates from New Zealand Post), but there will be more than now.
We need to figure out how to show those rates to the user. The simplest option would be grouping them by carrier, using `<optgroup>` tags.
Side note: Maybe if the user selects rates from different carriers for different packages we should show a warning. I doubt that going to 2 postal offices for a single shipment is a desired use case 90% of the time. | non_priority | ui for selecting rates from multiple carriers in the rates step of the purchase label modal for each package we ll show all the available services that the merchant can use obviously there won t be rates from all the carriers if there s a us domestic shipment i doubt we ll get rates from new zealand post but there will be more than now we need to figure out how to show those rates to the user the simplest option would be grouping them by carrier using tags side note maybe if the user selects rates from different carriers for different packages we should show a warning i doubt that going to postal offices for a single shipment is a desired use case of the time | 0 |
40,473 | 8,793,715,758 | IssuesEvent | 2018-12-21 21:10:13 | PegaSysEng/artemis | https://api.github.com/repos/PegaSysEng/artemis | closed | member variables should not be public | code style 💅 | ### Description
use accessors/mutators when appropriate
| 1.0 | member variables should not be public - ### Description
use accessors/mutators when appropriate
| non_priority | member variables should not be public description use accessors mutators when appropriate | 0 |
34,581 | 7,457,533,031 | IssuesEvent | 2018-03-30 05:14:29 | kerdokullamae/test_koik_issued | https://api.github.com/repos/kerdokullamae/test_koik_issued | closed | Viga RDFide genereerimisel | C: AIS P: highest R: fixed T: defect | **Reported by sven syld on 23 Apr 2014 08:43 UTC**
Start aggregate RDF generation for DESCRIPTION_UNIT, FNS EAA.2502
[ Catchable Fatal Error: Argument 1 passed to Dira\OpendataBundle\Component\P
uriResolver\PuriResolver::getPuriUrls() must implement interface Dira\Opend
ataBundle\Component\Behavior\Puriable, null given, called in /var/www/ais2/
src/Dira/OpendataBundle/Component/RdfGenerator/RdfGeneratorDescriptionUnit.
php on line 451 and defined in /var/www/ais2/src/Dira/OpendataBundle/Compon
ent/PuriResolver/PuriResolver.php line 48
dira:opendata:aggregate-rdf [--fonds[="..."](ErrorException]
)] [[--log-file[="..."](--rewrite-cache[="..."]])] [... [object-typeN](object-type1])
Paistab, et KÜ valdkond viitab vigasele valdkonnale. | 1.0 | Viga RDFide genereerimisel - **Reported by sven syld on 23 Apr 2014 08:43 UTC**
Start aggregate RDF generation for DESCRIPTION_UNIT, FNS EAA.2502
[ Catchable Fatal Error: Argument 1 passed to Dira\OpendataBundle\Component\P
uriResolver\PuriResolver::getPuriUrls() must implement interface Dira\Opend
ataBundle\Component\Behavior\Puriable, null given, called in /var/www/ais2/
src/Dira/OpendataBundle/Component/RdfGenerator/RdfGeneratorDescriptionUnit.
php on line 451 and defined in /var/www/ais2/src/Dira/OpendataBundle/Compon
ent/PuriResolver/PuriResolver.php line 48
dira:opendata:aggregate-rdf [--fonds[="..."](ErrorException]
)] [[--log-file[="..."](--rewrite-cache[="..."]])] [... [object-typeN](object-type1])
Paistab, et KÜ valdkond viitab vigasele valdkonnale. | non_priority | viga rdfide genereerimisel reported by sven syld on apr utc start aggregate rdf generation for description unit fns eaa catchable fatal error argument passed to dira opendatabundle component p uriresolver puriresolver getpuriurls must implement interface dira opend atabundle component behavior puriable null given called in var www src dira opendatabundle component rdfgenerator rdfgeneratordescriptionunit php on line and defined in var www src dira opendatabundle compon ent puriresolver puriresolver php line dira opendata aggregate rdf errorexception rewrite cache object paistab et kü valdkond viitab vigasele valdkonnale | 0 |
19,337 | 11,200,350,352 | IssuesEvent | 2020-01-03 21:29:22 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | aws_cloudwatch_event_rule is_enabled flag is not working | bug service/cloudwatch service/cloudwatchevents | <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
Terraform v0.12.9
+ provider.archive v1.2.2
+ provider.aws v2.30.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_cloudwatch_event_rule
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_cloudwatch_event_rule" "retract_alarms" {
name = "${local.namespace}-schedule-every-8-hours"
description = "Triggers function once every 8 hours"
schedule_expression = "cron(0 */8 * * ? *)"
tags = local.tags
is_enabled = false
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
cloudwatch event rule should be disabled
### Actual Behavior
cloudwatch event rule is enabled
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
This was working in TF 0.12.6
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #0000
| 2.0 | aws_cloudwatch_event_rule is_enabled flag is not working - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
Terraform v0.12.9
+ provider.archive v1.2.2
+ provider.aws v2.30.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_cloudwatch_event_rule
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_cloudwatch_event_rule" "retract_alarms" {
name = "${local.namespace}-schedule-every-8-hours"
description = "Triggers function once every 8 hours"
schedule_expression = "cron(0 */8 * * ? *)"
tags = local.tags
is_enabled = false
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
cloudwatch event rule should be disabled
### Actual Behavior
cloudwatch event rule is enabled
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
This was working in TF 0.12.6
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #0000
| non_priority | aws cloudwatch event rule is enabled flag is not working please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version terraform provider archive provider aws affected resource s aws cloudwatch event rule terraform configuration files hcl resource aws cloudwatch event rule retract alarms name local namespace schedule every hours description triggers function once every hours schedule expression cron tags local tags is enabled false debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the panic output expected behavior cloudwatch event rule should be disabled actual behavior cloudwatch event rule is enabled steps to reproduce terraform apply important factoids this was working in tf references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor documentation for example | 0 |
9,334 | 6,848,317,137 | IssuesEvent | 2017-11-13 18:05:25 | explosion/spaCy | https://api.github.com/repos/explosion/spaCy | closed | Tokenizer: -ing contraction parsed incorrectly | help wanted help wanted (easy) language / english performance | Spacy doesn't properly tokenize words with contracted '-ing' ending:
```python
import en_core_web_sm
nlp = en_core_web_sm.load()
doc = nlp("I'm lovin' it")
print(doc[1])
# 'm – CORRECT!
print(doc[1].lemma_)
# be – CORRECT!
print(doc[2])
# lovin – INCORRECT!
print(doc[2].lemma_)
# lovin – INCORRECT!
print(doc[2].pos_)
# ADJ – INCORRECT!
```
## Info about spaCy
* **spaCy version:** 1.9.0
* **Platform:** Darwin-16.4.0-x86_64-i386-64bit
* **Python version:** 3.6.2
* **Installed models:** en_core_web_sm
Is there a workaround?
| True | Tokenizer: -ing contraction parsed incorrectly - Spacy doesn't properly tokenize words with contracted '-ing' ending:
```python
import en_core_web_sm
nlp = en_core_web_sm.load()
doc = nlp("I'm lovin' it")
print(doc[1])
# 'm – CORRECT!
print(doc[1].lemma_)
# be – CORRECT!
print(doc[2])
# lovin – INCORRECT!
print(doc[2].lemma_)
# lovin – INCORRECT!
print(doc[2].pos_)
# ADJ – INCORRECT!
```
## Info about spaCy
* **spaCy version:** 1.9.0
* **Platform:** Darwin-16.4.0-x86_64-i386-64bit
* **Python version:** 3.6.2
* **Installed models:** en_core_web_sm
Is there a workaround?
| non_priority | tokenizer ing contraction parsed incorrectly spacy doesn t properly tokenize words with contracted ing ending python import en core web sm nlp en core web sm load doc nlp i m lovin it print doc m – correct print doc lemma be – correct print doc lovin – incorrect print doc lemma lovin – incorrect print doc pos adj – incorrect info about spacy spacy version platform darwin python version installed models en core web sm is there a workaround | 0 |
79,990 | 7,735,746,275 | IssuesEvent | 2018-05-27 18:30:56 | kubernetes/test-infra | https://api.github.com/repos/kubernetes/test-infra | closed | Prow job status icons are not showing up correctly in Firefox for Android on Marshmallow | area/prow kind/bug sig/testing | Firefox version: 59.0.2
Android version: 6.0.1

/area prow
/kind bug
@kubernetes/sig-testing-bugs | 1.0 | Prow job status icons are not showing up correctly in Firefox for Android on Marshmallow - Firefox version: 59.0.2
Android version: 6.0.1

/area prow
/kind bug
@kubernetes/sig-testing-bugs | non_priority | prow job status icons are not showing up correctly in firefox for android on marshmallow firefox version android version area prow kind bug kubernetes sig testing bugs | 0 |
312,796 | 26,877,146,689 | IssuesEvent | 2023-02-05 06:39:03 | sebastianbergmann/phpunit | https://api.github.com/repos/sebastianbergmann/phpunit | opened | Allow test runner extensions to disable default progress and result printing | type/enhancement feature/test-runner feature/events | Starting with [PHPUnit 10](https://phpunit.de/announcements/phpunit-10.html) and its [event system](https://github.com/sebastianbergmann/phpunit/issues/4676), the [test runner can be extended](https://phpunit.readthedocs.io/en/10.0/extending-phpunit.html#extending-the-test-runner).
To extend PHPUnit's test runner, you implement the [`PHPUnit\Runner\Extension\Extension`](https://github.com/sebastianbergmann/phpunit/blob/10.0/src/Runner/Extension/Extension.php) interface. This interface requires a `bootstrap()` method to implemented. This method is called by PHPUnit's test runner to bootstrap a configured extension. An extension is configured in PHPUnit's XML configuration file.
For test runner extensions that intend to change how the test runner presents progress information or test result information, the default progress printer and the default result printer currently have to be disabled using the `--no-output`, `--no-progress`, or `--no-results` CLI options.
This ticket proposes the addition of another interface, `PHPUnit\Runner\Extension\OutputReplacing`:
```php
namespace PHPUnit\Runner\Extension;
interface OutputReplacing
{
public function replacesDefaultProgressPrinter(): bool;
public function replacesDefaultResultPrinter(): bool;
}
```
When implemented by a test runner extension's bootstrap class in addition to `PHPUnit\Runner\Extension\Extension`, the PHPUnit test runner will query the extension using these methods on whether or not the extension intends to replace the test runner's default progress printer, the test runner's default result printer, or both.
When multiple test runner extensions are configured (and therefore bootstrapped), only one may replace the test runner's default progress printer or the test runner's default result printer, respectively. The test runner will error out when more than one test runner extension intends to replace a default printer. | 1.0 | Allow test runner extensions to disable default progress and result printing - Starting with [PHPUnit 10](https://phpunit.de/announcements/phpunit-10.html) and its [event system](https://github.com/sebastianbergmann/phpunit/issues/4676), the [test runner can be extended](https://phpunit.readthedocs.io/en/10.0/extending-phpunit.html#extending-the-test-runner).
To extend PHPUnit's test runner, you implement the [`PHPUnit\Runner\Extension\Extension`](https://github.com/sebastianbergmann/phpunit/blob/10.0/src/Runner/Extension/Extension.php) interface. This interface requires a `bootstrap()` method to implemented. This method is called by PHPUnit's test runner to bootstrap a configured extension. An extension is configured in PHPUnit's XML configuration file.
For test runner extensions that intend to change how the test runner presents progress information or test result information, the default progress printer and the default result printer currently have to be disabled using the `--no-output`, `--no-progress`, or `--no-results` CLI options.
This ticket proposes the addition of another interface, `PHPUnit\Runner\Extension\OutputReplacing`:
```php
namespace PHPUnit\Runner\Extension;
interface OutputReplacing
{
public function replacesDefaultProgressPrinter(): bool;
public function replacesDefaultResultPrinter(): bool;
}
```
When implemented by a test runner extension's bootstrap class in addition to `PHPUnit\Runner\Extension\Extension`, the PHPUnit test runner will query the extension using these methods on whether or not the extension intends to replace the test runner's default progress printer, the test runner's default result printer, or both.
When multiple test runner extensions are configured (and therefore bootstrapped), only one may replace the test runner's default progress printer or the test runner's default result printer, respectively. The test runner will error out when more than one test runner extension intends to replace a default printer. | non_priority | allow test runner extensions to disable default progress and result printing starting with and its the to extend phpunit s test runner you implement the interface this interface requires a bootstrap method to implemented this method is called by phpunit s test runner to bootstrap a configured extension an extension is configured in phpunit s xml configuration file for test runner extensions that intend to change how the test runner presents progress information or test result information the default progress printer and the default result printer currently have to be disabled using the no output no progress or no results cli options this ticket proposes the addition of another interface phpunit runner extension outputreplacing php namespace phpunit runner extension interface outputreplacing public function replacesdefaultprogressprinter bool public function replacesdefaultresultprinter bool when implemented by a test runner extension s bootstrap class in addition to phpunit runner extension extension the phpunit test runner will query the extension using these methods on whether or not the extension intends to replace the test runner s default progress printer the test runner s default result printer or both when multiple test runner extensions are configured and therefore bootstrapped only one may replace the test runner s default progress printer or the test runner s default result printer respectively the test runner will error out when more than one test runner extension intends to replace a default printer | 0 |
148,566 | 19,534,403,093 | IssuesEvent | 2021-12-31 01:34:29 | panasalap/linux-4.1.15 | https://api.github.com/repos/panasalap/linux-4.1.15 | opened | CVE-2015-8787 (High) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2015-8787 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_nat_redirect.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_nat_redirect.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The nf_nat_redirect_ipv4 function in net/netfilter/nf_nat_redirect.c in the Linux kernel before 4.4 allows remote attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by sending certain IPv4 packets to an incompletely configured interface, a related issue to CVE-2003-1604.
<p>Publish Date: 2016-02-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8787>CVE-2015-8787</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-8787">https://nvd.nist.gov/vuln/detail/CVE-2015-8787</a></p>
<p>Release Date: 2016-02-08</p>
<p>Fix Resolution: 4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2015-8787 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2015-8787 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_nat_redirect.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/netfilter/nf_nat_redirect.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The nf_nat_redirect_ipv4 function in net/netfilter/nf_nat_redirect.c in the Linux kernel before 4.4 allows remote attackers to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by sending certain IPv4 packets to an incompletely configured interface, a related issue to CVE-2003-1604.
<p>Publish Date: 2016-02-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-8787>CVE-2015-8787</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-8787">https://nvd.nist.gov/vuln/detail/CVE-2015-8787</a></p>
<p>Release Date: 2016-02-08</p>
<p>Fix Resolution: 4.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in base branch master vulnerable source files net netfilter nf nat redirect c net netfilter nf nat redirect c vulnerability details the nf nat redirect function in net netfilter nf nat redirect c in the linux kernel before allows remote attackers to cause a denial of service null pointer dereference and system crash or possibly have unspecified other impact by sending certain packets to an incompletely configured interface a related issue to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
404,629 | 27,491,957,622 | IssuesEvent | 2023-03-04 18:28:34 | ItKlubBozoLagan/kontestis | https://api.github.com/repos/ItKlubBozoLagan/kontestis | opened | Improve documentation | documentation | - [ ] Make a nicer front page
- [ ] Document all new features
- [ ] Add more technical details | 1.0 | Improve documentation - - [ ] Make a nicer front page
- [ ] Document all new features
- [ ] Add more technical details | non_priority | improve documentation make a nicer front page document all new features add more technical details | 0 |
59,031 | 11,939,567,557 | IssuesEvent | 2020-04-02 15:23:35 | magento/magento2-phpstorm-plugin | https://api.github.com/repos/magento/magento2-phpstorm-plugin | opened | Code generation. Module XML template description | code generation distributed-contribution-day-2020 good first issue | ### Description (*)
Describe the purpose of the module.xml file template and containing variables.
See the example here #68

| 1.0 | Code generation. Module XML template description - ### Description (*)
Describe the purpose of the module.xml file template and containing variables.
See the example here #68

| non_priority | code generation module xml template description description describe the purpose of the module xml file template and containing variables see the example here | 0 |
208,814 | 23,655,800,150 | IssuesEvent | 2022-08-26 11:05:31 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | [Security Solution] Error "Invalid IP" occurs if each decimal number of the IP address starts with zero and the end is eight or nine | bug impact:low Team: SecuritySolution Team:Onboarding and Lifecycle Mgt v8.5.0 | **Description:**
Error "Invalid IP" occurs if each decimal number of the IP address starts with zero and the end is eight or nine
**Build Details:**
```
VERSION: 8.5.0
BUILD: 55792
COMMIT: 2f65e138d75bf940d12318008b21f440da90fecf
ARTIFACT PAGE: https://snapshots.elastic.co/8.5.0-17b8a62d/summary-8.5.0-SNAPSHOT.html
```
**Browser Details:**
All
**Preconditions:**
1. Kibana user should be logged in
**Steps to Reproduce:**
1. Navigate to the `Host isolation exceptions` tab under the Manage section from the left-hand side navigation bar
2. Click on `add host isolation exception` button.
3. Provide name under `name` field.
4. Enter IP address starts with zero and the end is eight or nine ( let say : **10.09.89.87** `OR` **10.67.019.67** `OR` **10.23.45.048** )
**Impacted Test case:**
N/A
**Actual Result:**
Error "Invalid IP" occurs if each decimal number of the IP address starts with zero and the end is eight or nine
**Expected Result:**
No error should occurred if each decimal number of the IP address starts with zero and the end is eight or nine.
**What's Working**
- This issue is not occurring if each decimal number of the IP address starts with zero and the end is without eight or nine.
https://user-images.githubusercontent.com/69579402/186888343-b2f6f9aa-73b3-401b-9e2e-167fa4e2f0f7.mp4
**What's Not Working**
- N/A
**ScreenCast**
https://user-images.githubusercontent.com/69579402/186888279-a067777b-b0b5-4485-ad23-8fe82d91e4c5.mp4
**Logs:**
N/A
| True | [Security Solution] Error "Invalid IP" occurs if each decimal number of the IP address starts with zero and the end is eight or nine - **Description:**
Error "Invalid IP" occurs if each decimal number of the IP address starts with zero and the end is eight or nine
**Build Details:**
```
VERSION: 8.5.0
BUILD: 55792
COMMIT: 2f65e138d75bf940d12318008b21f440da90fecf
ARTIFACT PAGE: https://snapshots.elastic.co/8.5.0-17b8a62d/summary-8.5.0-SNAPSHOT.html
```
**Browser Details:**
All
**Preconditions:**
1. Kibana user should be logged in
**Steps to Reproduce:**
1. Navigate to the `Host isolation exceptions` tab under the Manage section from the left-hand side navigation bar
2. Click on `add host isolation exception` button.
3. Provide name under `name` field.
4. Enter IP address starts with zero and the end is eight or nine ( let say : **10.09.89.87** `OR` **10.67.019.67** `OR` **10.23.45.048** )
**Impacted Test case:**
N/A
**Actual Result:**
Error "Invalid IP" occurs if each decimal number of the IP address starts with zero and the end is eight or nine
**Expected Result:**
No error should occurred if each decimal number of the IP address starts with zero and the end is eight or nine.
**What's Working**
- This issue is not occurring if each decimal number of the IP address starts with zero and the end is without eight or nine.
https://user-images.githubusercontent.com/69579402/186888343-b2f6f9aa-73b3-401b-9e2e-167fa4e2f0f7.mp4
**What's Not Working**
- N/A
**ScreenCast**
https://user-images.githubusercontent.com/69579402/186888279-a067777b-b0b5-4485-ad23-8fe82d91e4c5.mp4
**Logs:**
N/A
| non_priority | error invalid ip occurs if each decimal number of the ip address starts with zero and the end is eight or nine description error invalid ip occurs if each decimal number of the ip address starts with zero and the end is eight or nine build details version build commit artifact page browser details all preconditions kibana user should be logged in steps to reproduce navigate to the host isolation exceptions tab under the manage section from the left hand side navigation bar click on add host isolation exception button provide name under name field enter ip address starts with zero and the end is eight or nine let say or or impacted test case n a actual result error invalid ip occurs if each decimal number of the ip address starts with zero and the end is eight or nine expected result no error should occurred if each decimal number of the ip address starts with zero and the end is eight or nine what s working this issue is not occurring if each decimal number of the ip address starts with zero and the end is without eight or nine what s not working n a screencast logs n a | 0 |
26,525 | 20,192,701,893 | IssuesEvent | 2022-02-11 07:38:34 | DestinyItemManager/DIM | https://api.github.com/repos/DestinyItemManager/DIM | opened | Automatically run yarn i18n | Infrastructure: i18n | For PRs, it'd be handy if we had a workflow that detected changes to `config/i18n.json`, ran `yarn i18n`, and then committed to the branch if there were any changes. This is especially useful when committing suggestion changes. | 1.0 | Automatically run yarn i18n - For PRs, it'd be handy if we had a workflow that detected changes to `config/i18n.json`, ran `yarn i18n`, and then committed to the branch if there were any changes. This is especially useful when committing suggestion changes. | non_priority | automatically run yarn for prs it d be handy if we had a workflow that detected changes to config json ran yarn and then committed to the branch if there were any changes this is especially useful when committing suggestion changes | 0 |
241,256 | 20,113,131,358 | IssuesEvent | 2022-02-07 16:48:00 | ChainSafe/gossamer | https://api.github.com/repos/ChainSafe/gossamer | closed | lib/trie: refactor and add tests for trie and database related code | tests Type: Chore w3f approved | ## Task summary
- [x] Database code fully tested; then
- [x] Database code refactored
- [x] Trie code (lib/trie/trie.go) fully tested
- [ ] Trie code refactored | 1.0 | lib/trie: refactor and add tests for trie and database related code - ## Task summary
- [x] Database code fully tested; then
- [x] Database code refactored
- [x] Trie code (lib/trie/trie.go) fully tested
- [ ] Trie code refactored | non_priority | lib trie refactor and add tests for trie and database related code task summary database code fully tested then database code refactored trie code lib trie trie go fully tested trie code refactored | 0 |
67,108 | 14,853,869,177 | IssuesEvent | 2021-01-18 10:31:52 | kadirselcuk/sanity-nuxt-events | https://api.github.com/repos/kadirselcuk/sanity-nuxt-events | closed | CVE-2020-8175 (Medium) detected in jpeg-js-0.3.4.tgz - autoclosed | security vulnerability | ## CVE-2020-8175 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jpeg-js-0.3.4.tgz</b></p></summary>
<p>A pure javascript JPEG encoder and decoder</p>
<p>Library home page: <a href="https://registry.npmjs.org/jpeg-js/-/jpeg-js-0.3.4.tgz">https://registry.npmjs.org/jpeg-js/-/jpeg-js-0.3.4.tgz</a></p>
<p>Path to dependency file: sanity-nuxt-events/web/package.json</p>
<p>Path to vulnerable library: sanity-nuxt-events/web/node_modules/jpeg-js/package.json</p>
<p>
Dependency Hierarchy:
- pwa-2.6.0.tgz (Root Library)
- icon-2.6.0.tgz
- jimp-0.5.6.tgz
- types-0.5.4.tgz
- jpeg-0.5.4.tgz
- :x: **jpeg-js-0.3.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kadirselcuk/sanity-nuxt-events/commit/9d410b8811b780d227a7f7db9665b9d77d241c9d">9d410b8811b780d227a7f7db9665b9d77d241c9d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Uncontrolled resource consumption in `jpeg-js` before 0.4.0 may allow attacker to launch denial of service attacks using specially a crafted JPEG image.
<p>Publish Date: 2020-07-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8175>CVE-2020-8175</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8175">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8175</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 0.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-8175 (Medium) detected in jpeg-js-0.3.4.tgz - autoclosed - ## CVE-2020-8175 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jpeg-js-0.3.4.tgz</b></p></summary>
<p>A pure javascript JPEG encoder and decoder</p>
<p>Library home page: <a href="https://registry.npmjs.org/jpeg-js/-/jpeg-js-0.3.4.tgz">https://registry.npmjs.org/jpeg-js/-/jpeg-js-0.3.4.tgz</a></p>
<p>Path to dependency file: sanity-nuxt-events/web/package.json</p>
<p>Path to vulnerable library: sanity-nuxt-events/web/node_modules/jpeg-js/package.json</p>
<p>
Dependency Hierarchy:
- pwa-2.6.0.tgz (Root Library)
- icon-2.6.0.tgz
- jimp-0.5.6.tgz
- types-0.5.4.tgz
- jpeg-0.5.4.tgz
- :x: **jpeg-js-0.3.4.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/kadirselcuk/sanity-nuxt-events/commit/9d410b8811b780d227a7f7db9665b9d77d241c9d">9d410b8811b780d227a7f7db9665b9d77d241c9d</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Uncontrolled resource consumption in `jpeg-js` before 0.4.0 may allow attacker to launch denial of service attacks using specially a crafted JPEG image.
<p>Publish Date: 2020-07-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8175>CVE-2020-8175</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8175">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8175</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 0.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in jpeg js tgz autoclosed cve medium severity vulnerability vulnerable library jpeg js tgz a pure javascript jpeg encoder and decoder library home page a href path to dependency file sanity nuxt events web package json path to vulnerable library sanity nuxt events web node modules jpeg js package json dependency hierarchy pwa tgz root library icon tgz jimp tgz types tgz jpeg tgz x jpeg js tgz vulnerable library found in head commit a href found in base branch main vulnerability details uncontrolled resource consumption in jpeg js before may allow attacker to launch denial of service attacks using specially a crafted jpeg image publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
53,065 | 10,981,407,735 | IssuesEvent | 2019-11-30 21:35:15 | AgileVentures/sfn-client | https://api.github.com/repos/AgileVentures/sfn-client | opened | Adding a how it works section to home page | code design enhancement feature help wanted | <!--- Provide a general summary of the issue in the Title above -->
We need to add a how it works section on the home page how Sing for Needs works. Optimistically with little illustrations showing the process what the charity does, what the fans are getting and how the artist is helping within.
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Illustration shows how Sing for Needs works in a separate section above the how to get started. | 1.0 | Adding a how it works section to home page - <!--- Provide a general summary of the issue in the Title above -->
We need to add a how it works section on the home page how Sing for Needs works. Optimistically with little illustrations showing the process what the charity does, what the fans are getting and how the artist is helping within.
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
Illustration shows how Sing for Needs works in a separate section above the how to get started. | non_priority | adding a how it works section to home page we need to add a how it works section on the home page how sing for needs works optimistically with little illustrations showing the process what the charity does what the fans are getting and how the artist is helping within expected behavior illustration shows how sing for needs works in a separate section above the how to get started | 0 |
267,859 | 20,248,206,443 | IssuesEvent | 2022-02-14 15:32:06 | Xcov19/mycovidconnect | https://api.github.com/repos/Xcov19/mycovidconnect | closed | @codecakes Outline and refine Docker setup and add documentation and Gotchas steps; | bug documentation good first issue up-for-grabs | _Originally posted by @jayeclark in https://github.com/Xcov19/mycovidconnect/issues/233#issuecomment-987194703_
First timers to the project get confused when running the project. It is better to document and outline which **scopes** of the project they can run locally (for now)
Documentation is required to limit functionality and explicitly mention it:
- [x] The contributor tries to run normally with docker but need to scope out what things will work on local. For instance, trying to login won't work. For that you need to setup your own auth0 account, for web client only and register a client. This will be your own client. Their starter tutorial for this should suffice in setting up the client and using that instead of our .env vars;
- [x] react-scripts not found. Is create-react even installed locally? Try `npm install` first
- [ ] Installing with `npm install` throws error: There is a peer dependency issue with jest module that can be fixed by installing @types/jest with the right version.
Multiple PRs accpeted to close this issue; | 1.0 | @codecakes Outline and refine Docker setup and add documentation and Gotchas steps; - _Originally posted by @jayeclark in https://github.com/Xcov19/mycovidconnect/issues/233#issuecomment-987194703_
First timers to the project get confused when running the project. It is better to document and outline which **scopes** of the project they can run locally (for now)
Documentation is required to limit functionality and explicitly mention it:
- [x] The contributor tries to run normally with docker but need to scope out what things will work on local. For instance, trying to login won't work. For that you need to setup your own auth0 account, for web client only and register a client. This will be your own client. Their starter tutorial for this should suffice in setting up the client and using that instead of our .env vars;
- [x] react-scripts not found. Is create-react even installed locally? Try `npm install` first
- [ ] Installing with `npm install` throws error: There is a peer dependency issue with jest module that can be fixed by installing @types/jest with the right version.
Multiple PRs accpeted to close this issue; | non_priority | codecakes outline and refine docker setup and add documentation and gotchas steps originally posted by jayeclark in first timers to the project get confused when running the project it is better to document and outline which scopes of the project they can run locally for now documentation is required to limit functionality and explicitly mention it the contributor tries to run normally with docker but need to scope out what things will work on local for instance trying to login won t work for that you need to setup your own account for web client only and register a client this will be your own client their starter tutorial for this should suffice in setting up the client and using that instead of our env vars react scripts not found is create react even installed locally try npm install first installing with npm install throws error there is a peer dependency issue with jest module that can be fixed by installing types jest with the right version multiple prs accpeted to close this issue | 0 |
148,810 | 13,248,081,257 | IssuesEvent | 2020-08-19 18:20:40 | zillow/luminaire | https://api.github.com/repos/zillow/luminaire | closed | Add documentation link to the "About" section of this repo | documentation | This is the one consistent location people usually find links to docs; we should link to it there as well. | 1.0 | Add documentation link to the "About" section of this repo - This is the one consistent location people usually find links to docs; we should link to it there as well. | non_priority | add documentation link to the about section of this repo this is the one consistent location people usually find links to docs we should link to it there as well | 0 |
125,246 | 16,749,466,356 | IssuesEvent | 2021-06-11 20:24:47 | woocommerce/woocommerce-android | https://api.github.com/repos/woocommerce/woocommerce-android | opened | Login: can the background color of the the system indicator be the same as the rest of the screen? | category: design feature: login good first issue type: enhancement | > Is it possible to make the background color of the system indicator the same as the rest of the screen? (Gray 0)
<sup>— from @Garance91540 as part of 6.8 beta testing 👍</sup>
<img src="https://user-images.githubusercontent.com/1119271/121737117-b2a54900-cab5-11eb-97fa-03367113031a.png" width="270" alt="image">
<sup>(internal reference: p5T066-2mv-p2#comment-8598)</sup>
Testing note: we should check this in light theme and dark theme. | 1.0 | Login: can the background color of the the system indicator be the same as the rest of the screen? - > Is it possible to make the background color of the system indicator the same as the rest of the screen? (Gray 0)
<sup>— from @Garance91540 as part of 6.8 beta testing 👍</sup>
<img src="https://user-images.githubusercontent.com/1119271/121737117-b2a54900-cab5-11eb-97fa-03367113031a.png" width="270" alt="image">
<sup>(internal reference: p5T066-2mv-p2#comment-8598)</sup>
Testing note: we should check this in light theme and dark theme. | non_priority | login can the background color of the the system indicator be the same as the rest of the screen is it possible to make the background color of the system indicator the same as the rest of the screen gray — from as part of beta testing 👍 internal reference comment testing note we should check this in light theme and dark theme | 0 |
15,501 | 5,969,786,588 | IssuesEvent | 2017-05-30 21:03:55 | dotnet/buildtools | https://api.github.com/repos/dotnet/buildtools | closed | Add property for specifying XUnit method/class to test | 0 - Backlog area-buildtools-code | To specify a method or class to run when running tests for a project using the BuildTools `Test` target, one needs to specify `"/p:XunitOptions=-method <method_name>"` or `"/p:XunitOptions=-class <class_name>"`. As mentioned in https://github.com/dotnet/buildtools/pull/1272#discussion_r93084987, this is unintuitive.
In addition, specifying it using `XunitOptions` overwrites some of the options used by the `Test` target code, most notably `-xml $(XunitResultsFileName)`, meaning that no XML results file is produced when running a single method or class in this way.
Using explicit properties such as `XunitMethodName` and `XunitClassName` would make the usage clearer and solve the overwrite issue. | 1.0 | Add property for specifying XUnit method/class to test - To specify a method or class to run when running tests for a project using the BuildTools `Test` target, one needs to specify `"/p:XunitOptions=-method <method_name>"` or `"/p:XunitOptions=-class <class_name>"`. As mentioned in https://github.com/dotnet/buildtools/pull/1272#discussion_r93084987, this is unintuitive.
In addition, specifying it using `XunitOptions` overwrites some of the options used by the `Test` target code, most notably `-xml $(XunitResultsFileName)`, meaning that no XML results file is produced when running a single method or class in this way.
Using explicit properties such as `XunitMethodName` and `XunitClassName` would make the usage clearer and solve the overwrite issue. | non_priority | add property for specifying xunit method class to test to specify a method or class to run when running tests for a project using the buildtools test target one needs to specify p xunitoptions method or p xunitoptions class as mentioned in this is unintuitive in addition specifying it using xunitoptions overwrites some of the options used by the test target code most notably xml xunitresultsfilename meaning that no xml results file is produced when running a single method or class in this way using explicit properties such as xunitmethodname and xunitclassname would make the usage clearer and solve the overwrite issue | 0 |
12,515 | 7,895,627,820 | IssuesEvent | 2018-06-29 04:38:48 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | Using Dataset api with Estimator in MirroredStrategy, Non-DMA-safe string tensor error | type:bug/performance |
### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**:
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: centos
- **TensorFlow installed from (source or binary)**: pip install tensorflow-gpu
- **TensorFlow version (use command below)**:1.8.0
- **Python version**:
- **Bazel version (if compiling from source)**:
- **GCC/Compiler version (if compiling from source)**:4.8.5
- **CUDA/cuDNN version**: 9.0
- **GPU model and memory**:GeForce GTX 1080Ti * 4
- **Exact command to reproduce**:
### Describe the problem
Using mutilple gpu by MirroredStrategy, Get ' Non-DMA-safe string tensor may not be copied from/to a GPU.' error
### Source code / logs




| True | Using Dataset api with Estimator in MirroredStrategy, Non-DMA-safe string tensor error -
### System information
- **Have I written custom code (as opposed to using a stock example script provided in TensorFlow)**:
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: centos
- **TensorFlow installed from (source or binary)**: pip install tensorflow-gpu
- **TensorFlow version (use command below)**:1.8.0
- **Python version**:
- **Bazel version (if compiling from source)**:
- **GCC/Compiler version (if compiling from source)**:4.8.5
- **CUDA/cuDNN version**: 9.0
- **GPU model and memory**:GeForce GTX 1080Ti * 4
- **Exact command to reproduce**:
### Describe the problem
Using mutilple gpu by MirroredStrategy, Get ' Non-DMA-safe string tensor may not be copied from/to a GPU.' error
### Source code / logs




| non_priority | using dataset api with estimator in mirroredstrategy non dma safe string tensor error system information have i written custom code as opposed to using a stock example script provided in tensorflow os platform and distribution e g linux ubuntu centos tensorflow installed from source or binary pip install tensorflow gpu tensorflow version use command below python version bazel version if compiling from source gcc compiler version if compiling from source cuda cudnn version gpu model and memory geforce gtx exact command to reproduce describe the problem using mutilple gpu by mirroredstrategy get non dma safe string tensor may not be copied from to a gpu error source code logs | 0 |
145,447 | 22,689,569,137 | IssuesEvent | 2022-07-04 18:01:19 | Joystream/atlas | https://api.github.com/repos/Joystream/atlas | reopened | Improve "No node connection banner" | design ux discussion | When I sign out and have no connection - I can see only a notification saying that there is no node connection but I don't have a feeling that I'm waiting for something - currently it looks like the screen is frozen

| 1.0 | Improve "No node connection banner" - When I sign out and have no connection - I can see only a notification saying that there is no node connection but I don't have a feeling that I'm waiting for something - currently it looks like the screen is frozen

| non_priority | improve no node connection banner when i sign out and have no connection i can see only a notification saying that there is no node connection but i don t have a feeling that i m waiting for something currently it looks like the screen is frozen | 0 |
357,287 | 25,176,360,019 | IssuesEvent | 2022-11-11 09:36:45 | RichDom2185/pe | https://api.github.com/repos/RichDom2185/pe | opened | Inconsistent formatting of dates in UG | type.DocumentationBug severity.Low | 
Moreover, the latter formatting does not provide additional value (i.e. what does `2022-11-07` mean in this case? is it a date? or something else?)
<!--session: 1668154065008-52adc948-2a6b-43f0-ae21-9d8349f474cf-->
<!--Version: Web v3.4.4--> | 1.0 | Inconsistent formatting of dates in UG - 
Moreover, the latter formatting does not provide additional value (i.e. what does `2022-11-07` mean in this case? is it a date? or something else?)
<!--session: 1668154065008-52adc948-2a6b-43f0-ae21-9d8349f474cf-->
<!--Version: Web v3.4.4--> | non_priority | inconsistent formatting of dates in ug moreover the latter formatting does not provide additional value i e what does mean in this case is it a date or something else | 0 |
101,506 | 12,691,573,827 | IssuesEvent | 2020-06-21 17:44:38 | zinc-collective/support | https://api.github.com/repos/zinc-collective/support | closed | Using Support. | design documentation question | Some questions!
It looks like a license is required to use Support. Do you offer a free trial? How does an interested party know that this product is ideal for their use case? (On the website, sending an email to Zinc is mentioned. I'm guessing this is so you can chat further to discuss whether the product is a good fit for them.)
FAQ section is good and I like how you list out the perks for each paid tier. What made you decide on one inbox for hobbyist edition and four inboxes for indie edition? I'm guessing it's about scale and number of clients but wondered more about that. For the franchise edition, does that mean you create your own inboxes and maintain that? Is a script provided for that?
Are there any testimonials? Or users weighing in on pain points? | 1.0 | Using Support. - Some questions!
It looks like a license is required to use Support. Do you offer a free trial? How does an interested party know that this product is ideal for their use case? (On the website, sending an email to Zinc is mentioned. I'm guessing this is so you can chat further to discuss whether the product is a good fit for them.)
FAQ section is good and I like how you list out the perks for each paid tier. What made you decide on one inbox for hobbyist edition and four inboxes for indie edition? I'm guessing it's about scale and number of clients but wondered more about that. For the franchise edition, does that mean you create your own inboxes and maintain that? Is a script provided for that?
Are there any testimonials? Or users weighing in on pain points? | non_priority | using support some questions it looks like a license is required to use support do you offer a free trial how does an interested party know that this product is ideal for their use case on the website sending an email to zinc is mentioned i m guessing this is so you can chat further to discuss whether the product is a good fit for them faq section is good and i like how you list out the perks for each paid tier what made you decide on one inbox for hobbyist edition and four inboxes for indie edition i m guessing it s about scale and number of clients but wondered more about that for the franchise edition does that mean you create your own inboxes and maintain that is a script provided for that are there any testimonials or users weighing in on pain points | 0 |
216,094 | 24,225,573,847 | IssuesEvent | 2022-09-26 14:11:37 | fish-shell/fish-shell | https://api.github.com/repos/fish-shell/fish-shell | closed | [fish_config] Listen on non-localhost interfaces | enhancement good first issue security | I use fish on a lot of headless machines and for this usecase it would be a lot easier if I could configure fish_config to listen on a public interface
| True | [fish_config] Listen on non-localhost interfaces - I use fish on a lot of headless machines and for this usecase it would be a lot easier if I could configure fish_config to listen on a public interface
| non_priority | listen on non localhost interfaces i use fish on a lot of headless machines and for this usecase it would be a lot easier if i could configure fish config to listen on a public interface | 0 |
78,879 | 15,088,852,307 | IssuesEvent | 2021-02-06 02:31:13 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | opened | Allow ProjectItem to define how SourceDocument should be generated | area-razor.compiler feature-code-generation | To support generating a Razor source document from any target source (such as a Roslyn source text), we need to improve the abstractions that exist in the Razor compiler for generating a document from a target. Currently, we only support reading the document from a Stream.
| 1.0 | Allow ProjectItem to define how SourceDocument should be generated - To support generating a Razor source document from any target source (such as a Roslyn source text), we need to improve the abstractions that exist in the Razor compiler for generating a document from a target. Currently, we only support reading the document from a Stream.
| non_priority | allow projectitem to define how sourcedocument should be generated to support generating a razor source document from any target source such as a roslyn source text we need to improve the abstractions that exist in the razor compiler for generating a document from a target currently we only support reading the document from a stream | 0 |
129,877 | 27,580,830,149 | IssuesEvent | 2023-03-08 16:06:25 | aiken-lang/aiken | https://api.github.com/repos/aiken-lang/aiken | opened | Road to alpha | typechecking tooling code formatting | - [ ] `aiken check -m` should not generate UPLC for tests that are discarded.
- [ ] do not force newlines on `|>`
Ideally, we want to _preserve as much as possible_ of the original formatting. So if things are written as single line, we keep that (unless the line goes beyond an accepted length).
- [ ] Add type information on type-holes
For extra debugging capabilities
- [ ] Have `todo` properly displays the expected type
Instead of only showing a warning saying there's a todo left in code
- [ ] Forbid casting `String` to `Data` in the type-checker
- [ ] #394
| 1.0 | Road to alpha - - [ ] `aiken check -m` should not generate UPLC for tests that are discarded.
- [ ] do not force newlines on `|>`
Ideally, we want to _preserve as much as possible_ of the original formatting. So if things are written as single line, we keep that (unless the line goes beyond an accepted length).
- [ ] Add type information on type-holes
For extra debugging capabilities
- [ ] Have `todo` properly displays the expected type
Instead of only showing a warning saying there's a todo left in code
- [ ] Forbid casting `String` to `Data` in the type-checker
- [ ] #394
| non_priority | road to alpha aiken check m should not generate uplc for tests that are discarded do not force newlines on ideally we want to preserve as much as possible of the original formatting so if things are written as single line we keep that unless the line goes beyond an accepted length add type information on type holes for extra debugging capabilities have todo properly displays the expected type instead of only showing a warning saying there s a todo left in code forbid casting string to data in the type checker | 0 |
112,058 | 17,067,340,548 | IssuesEvent | 2021-07-07 08:59:42 | AlexMekkering/esphome-config | https://api.github.com/repos/AlexMekkering/esphome-config | closed | CVE-2021-33503 (High) detected in urllib3-1.26.3-py2.py3-none-any.whl | security vulnerability | ## CVE-2021-33503 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.26.3-py2.py3-none-any.whl</b></p></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/23/fc/8a49991f7905261f9ca9df5aa9b58363c3c821ce3e7f671895442b7100f2/urllib3-1.26.3-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/23/fc/8a49991f7905261f9ca9df5aa9b58363c3c821ce3e7f671895442b7100f2/urllib3-1.26.3-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: esphome-config/requirements.txt</p>
<p>Path to vulnerable library: esphome-config/requirements.txt</p>
<p>
Dependency Hierarchy:
- requests-2.25.1-py2.py3-none-any.whl (Root Library)
- :x: **urllib3-1.26.3-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexMekkering/esphome-config/commit/2abaf7889d0686737563006a9339f24db24ee5f2">2abaf7889d0686737563006a9339f24db24ee5f2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in urllib3 before 1.26.5. When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect.
<p>Publish Date: 2021-06-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33503>CVE-2021-33503</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg">https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg</a></p>
<p>Release Date: 2021-05-22</p>
<p>Fix Resolution: urllib3 - 1.26.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-33503 (High) detected in urllib3-1.26.3-py2.py3-none-any.whl - ## CVE-2021-33503 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>urllib3-1.26.3-py2.py3-none-any.whl</b></p></summary>
<p>HTTP library with thread-safe connection pooling, file post, and more.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/23/fc/8a49991f7905261f9ca9df5aa9b58363c3c821ce3e7f671895442b7100f2/urllib3-1.26.3-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/23/fc/8a49991f7905261f9ca9df5aa9b58363c3c821ce3e7f671895442b7100f2/urllib3-1.26.3-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: esphome-config/requirements.txt</p>
<p>Path to vulnerable library: esphome-config/requirements.txt</p>
<p>
Dependency Hierarchy:
- requests-2.25.1-py2.py3-none-any.whl (Root Library)
- :x: **urllib3-1.26.3-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexMekkering/esphome-config/commit/2abaf7889d0686737563006a9339f24db24ee5f2">2abaf7889d0686737563006a9339f24db24ee5f2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in urllib3 before 1.26.5. When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect.
<p>Publish Date: 2021-06-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33503>CVE-2021-33503</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg">https://github.com/urllib3/urllib3/security/advisories/GHSA-q2q7-5pp4-w6pg</a></p>
<p>Release Date: 2021-05-22</p>
<p>Fix Resolution: urllib3 - 1.26.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in none any whl cve high severity vulnerability vulnerable library none any whl http library with thread safe connection pooling file post and more library home page a href path to dependency file esphome config requirements txt path to vulnerable library esphome config requirements txt dependency hierarchy requests none any whl root library x none any whl vulnerable library found in head commit a href found in base branch master vulnerability details an issue was discovered in before when provided with a url containing many characters in the authority component the authority regular expression exhibits catastrophic backtracking causing a denial of service if a url were passed as a parameter or redirected to via an http redirect publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
151,803 | 19,665,390,103 | IssuesEvent | 2022-01-10 21:50:02 | rsoreq/grafana | https://api.github.com/repos/rsoreq/grafana | opened | WS-2022-0008 (Medium) detected in node-forge-0.9.0.tgz | security vulnerability | ## WS-2022-0008 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.9.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.9.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.9.0.tgz</a></p>
<p>Path to dependency file: /yarn.lock</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.2.1.tgz (Root Library)
- selfsigned-1.10.7.tgz
- :x: **node-forge-0.9.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.
<p>Publish Date: 2022-01-08
<p>URL: <a href=https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562>WS-2022-0008</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5rrq-pxf6-6jx5">https://github.com/advisories/GHSA-5rrq-pxf6-6jx5</a></p>
<p>Release Date: 2022-01-08</p>
<p>Fix Resolution: node-forge - 1.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-forge","packageVersion":"0.9.0","packageFilePaths":["/yarn.lock"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:3.2.1;selfsigned:1.10.7;node-forge:0.9.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"node-forge - 1.0.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2022-0008","vulnerabilityDetails":"The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.","vulnerabilityUrl":"https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | WS-2022-0008 (Medium) detected in node-forge-0.9.0.tgz - ## WS-2022-0008 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.9.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.9.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.9.0.tgz</a></p>
<p>Path to dependency file: /yarn.lock</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.2.1.tgz (Root Library)
- selfsigned-1.10.7.tgz
- :x: **node-forge-0.9.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.
<p>Publish Date: 2022-01-08
<p>URL: <a href=https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562>WS-2022-0008</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5rrq-pxf6-6jx5">https://github.com/advisories/GHSA-5rrq-pxf6-6jx5</a></p>
<p>Release Date: 2022-01-08</p>
<p>Fix Resolution: node-forge - 1.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-forge","packageVersion":"0.9.0","packageFilePaths":["/yarn.lock"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:3.2.1;selfsigned:1.10.7;node-forge:0.9.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"node-forge - 1.0.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2022-0008","vulnerabilityDetails":"The forge.debug API had a potential prototype pollution issue if called with untrusted input. The API was only used for internal debug purposes in a safe way and never documented or advertised. It is suspected that uses of this API, if any exist, would likely not have used untrusted inputs in a vulnerable way.","vulnerabilityUrl":"https://github.com/digitalbazaar/forge/commit/51228083550dde97701ac8e06c629a5184117562","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_priority | ws medium detected in node forge tgz ws medium severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file yarn lock path to vulnerable library node modules node forge package json dependency hierarchy webpack dev server tgz root library selfsigned tgz x node forge tgz vulnerable library found in base branch master vulnerability details the forge debug api had a potential prototype pollution issue if called with untrusted input the api was only used for internal debug purposes in a safe way and never documented or advertised it is suspected that uses of this api if any exist would likely not have used untrusted inputs in a vulnerable way publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree webpack dev server selfsigned node forge isminimumfixversionavailable true minimumfixversion node forge isbinary false basebranches vulnerabilityidentifier ws vulnerabilitydetails the forge debug api had a potential prototype pollution issue if called with untrusted input the api was only used for internal debug purposes in a safe way and never documented or advertised it is suspected that uses of this api if any exist would likely not have used untrusted inputs in a vulnerable way vulnerabilityurl | 0 |
1,616 | 2,612,369,402 | IssuesEvent | 2015-02-27 14:06:00 | BigDataProteomics/bigdataproteomics.github.io | https://api.github.com/repos/BigDataProteomics/bigdataproteomics.github.io | closed | About the language Scala Java, functional programming | documentation enhancement question | @ruiwanguk I have been reading about functional programming, Scala, Java 8, etc. I know we decided to work with Java but some minor comments should be take into account:
The aim of the project and the target audience is mainly BigData analyst in the field of proteomics and also developers. The functional programming is really popular in the science and research community due the easy way to implement functions and task in through the data, like R or Python they are really popular to easy interaction between the developers or analysts and the data. For example, imagine
something like this in the future in the Spark cluster:
scala> import org.bdp.hubble
scala> var spectra = read.mgffile(file.txt)
scala> spectra.filterlowquality()
scala> var PSMs = spectra.searchDataBase(uniprot, MSGGplust)
scala> var proteins = PSMs.proteinInference(psmFDR == 1%)
Looks great? A complete pipeline for protein identification and they can switch the databased Uniprot, or msgplust, and filters etc ... functional programming is great for that and the algorithms will run the the Spark cluster.
| 1.0 | About the language Scala Java, functional programming - @ruiwanguk I have been reading about functional programming, Scala, Java 8, etc. I know we decided to work with Java but some minor comments should be take into account:
The aim of the project and the target audience is mainly BigData analyst in the field of proteomics and also developers. The functional programming is really popular in the science and research community due the easy way to implement functions and task in through the data, like R or Python they are really popular to easy interaction between the developers or analysts and the data. For example, imagine
something like this in the future in the Spark cluster:
scala> import org.bdp.hubble
scala> var spectra = read.mgffile(file.txt)
scala> spectra.filterlowquality()
scala> var PSMs = spectra.searchDataBase(uniprot, MSGGplust)
scala> var proteins = PSMs.proteinInference(psmFDR == 1%)
Looks great? A complete pipeline for protein identification and they can switch the databased Uniprot, or msgplust, and filters etc ... functional programming is great for that and the algorithms will run the the Spark cluster.
| non_priority | about the language scala java functional programming ruiwanguk i have been reading about functional programming scala java etc i know we decided to work with java but some minor comments should be take into account the aim of the project and the target audience is mainly bigdata analyst in the field of proteomics and also developers the functional programming is really popular in the science and research community due the easy way to implement functions and task in through the data like r or python they are really popular to easy interaction between the developers or analysts and the data for example imagine something like this in the future in the spark cluster scala import org bdp hubble scala var spectra read mgffile file txt scala spectra filterlowquality scala var psms spectra searchdatabase uniprot msggplust scala var proteins psms proteininference psmfdr looks great a complete pipeline for protein identification and they can switch the databased uniprot or msgplust and filters etc functional programming is great for that and the algorithms will run the the spark cluster | 0 |
60,506 | 14,861,653,670 | IssuesEvent | 2021-01-18 23:29:17 | lightningnetwork/lnd | https://api.github.com/repos/lightningnetwork/lnd | closed | Security: Verification of the Docker Image | docker feature request golang/build system verification | ### Background
LND makes a great effort to verify the source code and build images
with `gpg` and `timestamps`
But the new docker images hosted on docker hub is not easily verifiable.
because of the multi-strage build, of the container site, the binaries are just copied inside.
https://hub.docker.com/layers/lightninglabs/lnd/v0.12.0-beta.rc5/images/sha256-cf51e6989e832a928cfd82e7d77341b8bb9ef29578bb6d54a3e58bcac186aaf8?context=explore
How secure is an Docker-Hub account?
### Expected behaviour
Some mechanism to verify the container image itself or at least the binaries inside the container.
I made view month ago a docker repo found here: https://github.com/Zetanova/docker-lnd
The dockerfile is downloading the binaries and verifying them.
Instead of compiling them without verification.
The final stage of the current dockerfile could make the same.
Maybe there a better way to sign a container-image-hash
| 1.0 | Security: Verification of the Docker Image - ### Background
LND makes a great effort to verify the source code and build images
with `gpg` and `timestamps`
But the new docker images hosted on docker hub is not easily verifiable.
because of the multi-strage build, of the container site, the binaries are just copied inside.
https://hub.docker.com/layers/lightninglabs/lnd/v0.12.0-beta.rc5/images/sha256-cf51e6989e832a928cfd82e7d77341b8bb9ef29578bb6d54a3e58bcac186aaf8?context=explore
How secure is an Docker-Hub account?
### Expected behaviour
Some mechanism to verify the container image itself or at least the binaries inside the container.
I made view month ago a docker repo found here: https://github.com/Zetanova/docker-lnd
The dockerfile is downloading the binaries and verifying them.
Instead of compiling them without verification.
The final stage of the current dockerfile could make the same.
Maybe there a better way to sign a container-image-hash
| non_priority | security verification of the docker image background lnd makes a great effort to verify the source code and build images with gpg and timestamps but the new docker images hosted on docker hub is not easily verifiable because of the multi strage build of the container site the binaries are just copied inside how secure is an docker hub account expected behaviour some mechanism to verify the container image itself or at least the binaries inside the container i made view month ago a docker repo found here the dockerfile is downloading the binaries and verifying them instead of compiling them without verification the final stage of the current dockerfile could make the same maybe there a better way to sign a container image hash | 0 |
193,490 | 22,216,166,344 | IssuesEvent | 2022-06-08 02:02:33 | ConnectionMaster/create-probot-app | https://api.github.com/repos/ConnectionMaster/create-probot-app | opened | CVE-2022-29244 (Medium) detected in npm-6.14.11.tgz | security vulnerability | ## CVE-2022-29244 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-6.14.11.tgz</b></p></summary>
<p>a package manager for JavaScript</p>
<p>Library home page: <a href="https://registry.npmjs.org/npm/-/npm-6.14.11.tgz">https://registry.npmjs.org/npm/-/npm-6.14.11.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/package.json</p>
<p>
Dependency Hierarchy:
- :x: **npm-6.14.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ConnectionMaster/create-probot-app/commit/885809abccc4313bd892be901d1adc0141fa9f71">885809abccc4313bd892be901d1adc0141fa9f71</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
npm pack ignores root-level .gitignore & .npmignore file exclusion directives when run in a workspace or with a workspace flag (ie. --workspaces, --workspace=<name>). Anyone who has run npm pack or npm publish with workspaces, as of v7.9.0 & v7.13.0 respectively, may be affected and have published files into the npm registry they did not intend to include. Users should upgrade to the patched version of npm (v8.11.0 or greater).
<p>Publish Date: 2022-04-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29244>CVE-2022-29244</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hj9c-8jmm-8c52">https://github.com/advisories/GHSA-hj9c-8jmm-8c52</a></p>
<p>Release Date: 2022-04-14</p>
<p>Fix Resolution: 8.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-29244 (Medium) detected in npm-6.14.11.tgz - ## CVE-2022-29244 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-6.14.11.tgz</b></p></summary>
<p>a package manager for JavaScript</p>
<p>Library home page: <a href="https://registry.npmjs.org/npm/-/npm-6.14.11.tgz">https://registry.npmjs.org/npm/-/npm-6.14.11.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/package.json</p>
<p>
Dependency Hierarchy:
- :x: **npm-6.14.11.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ConnectionMaster/create-probot-app/commit/885809abccc4313bd892be901d1adc0141fa9f71">885809abccc4313bd892be901d1adc0141fa9f71</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
npm pack ignores root-level .gitignore & .npmignore file exclusion directives when run in a workspace or with a workspace flag (ie. --workspaces, --workspace=<name>). Anyone who has run npm pack or npm publish with workspaces, as of v7.9.0 & v7.13.0 respectively, may be affected and have published files into the npm registry they did not intend to include. Users should upgrade to the patched version of npm (v8.11.0 or greater).
<p>Publish Date: 2022-04-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-29244>CVE-2022-29244</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hj9c-8jmm-8c52">https://github.com/advisories/GHSA-hj9c-8jmm-8c52</a></p>
<p>Release Date: 2022-04-14</p>
<p>Fix Resolution: 8.11.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in npm tgz cve medium severity vulnerability vulnerable library npm tgz a package manager for javascript library home page a href path to dependency file package json path to vulnerable library node modules npm package json dependency hierarchy x npm tgz vulnerable library found in head commit a href found in base branch master vulnerability details npm pack ignores root level gitignore npmignore file exclusion directives when run in a workspace or with a workspace flag ie workspaces workspace anyone who has run npm pack or npm publish with workspaces as of respectively may be affected and have published files into the npm registry they did not intend to include users should upgrade to the patched version of npm or greater publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
20,490 | 11,455,242,723 | IssuesEvent | 2020-02-06 18:41:47 | HPI-de/hpi-cloud | https://api.github.com/repos/HPI-de/hpi-cloud | closed | Use ISO_DATE as format for dates | C: common C: service-food C: service-food-crawler T: refactor | <!--
Thanks for taking the time to file an issue!
Please select the component label (C: abc) this feature is related to from the right (if applicable).
-->
**Description**
<!-- A clear and concise description of the problem or missing capability -->
Format YYYY-MM-DD should be preferred (instead of UTC millis).
| 2.0 | Use ISO_DATE as format for dates - <!--
Thanks for taking the time to file an issue!
Please select the component label (C: abc) this feature is related to from the right (if applicable).
-->
**Description**
<!-- A clear and concise description of the problem or missing capability -->
Format YYYY-MM-DD should be preferred (instead of UTC millis).
| non_priority | use iso date as format for dates thanks for taking the time to file an issue please select the component label c abc this feature is related to from the right if applicable description format yyyy mm dd should be preferred instead of utc millis | 0 |
405,922 | 27,541,273,441 | IssuesEvent | 2023-03-07 08:43:18 | road86/bahis-data | https://api.github.com/repos/road86/bahis-data | opened | Document installation on a readme file | documentation | Make a documentation of the installation process in a readme file for running the data pipeline. Include the dependencies. | 1.0 | Document installation on a readme file - Make a documentation of the installation process in a readme file for running the data pipeline. Include the dependencies. | non_priority | document installation on a readme file make a documentation of the installation process in a readme file for running the data pipeline include the dependencies | 0 |
35,312 | 17,023,476,294 | IssuesEvent | 2021-07-03 02:13:37 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | The performance of the saved model and the performance of the trained model seem to be different | TF 2.4 stalled stat:awaiting response type:performance | I use tensorflow2.4 and generate images by using the GAN method.
During the training stage, the generated model at one certain step is good. And then I save it by .tf. However, when I load the model again and plan to generate new images, these images are pretty terrible. (I have tried many times). This should not be a common issue for the GAN method. So could you please tell me the reason why there are huge differences after saving the model? | True | The performance of the saved model and the performance of the trained model seem to be different - I use tensorflow2.4 and generate images by using the GAN method.
During the training stage, the generated model at one certain step is good. And then I save it by .tf. However, when I load the model again and plan to generate new images, these images are pretty terrible. (I have tried many times). This should not be a common issue for the GAN method. So could you please tell me the reason why there are huge differences after saving the model? | non_priority | the performance of the saved model and the performance of the trained model seem to be different i use and generate images by using the gan method during the training stage the generated model at one certain step is good and then i save it by tf however when i load the model again and plan to generate new images these images are pretty terrible i have tried many times this should not be a common issue for the gan method so could you please tell me the reason why there are huge differences after saving the model | 0 |
108,608 | 16,796,200,194 | IssuesEvent | 2021-06-16 04:09:47 | Techini/WebGoat | https://api.github.com/repos/Techini/WebGoat | opened | CVE-2021-21343 (High) detected in xstream-1.4.5.jar | security vulnerability | ## CVE-2021-21343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Path to dependency file: WebGoat/webgoat-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,canner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p>
<p>
Dependency Hierarchy:
- webgoat-server-v8.0.0-SNAPSHOT.jar (Root Library)
- vulnerable-components-v8.0.0-SNAPSHOT.jar
- :x: **xstream-1.4.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Techini/WebGoat/commit/d33cc0e32a0d1b949ff1b85af16890cd452276f8">d33cc0e32a0d1b949ff1b85af16890cd452276f8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability where the processed stream at unmarshalling time contains type information to recreate the formerly written objects. XStream creates therefore new instances based on these type information. An attacker can manipulate the processed input stream and replace or inject objects, that result in the deletion of a file on the local host. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21343>CVE-2021-21343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-74cv-f58x-f9wf">https://github.com/x-stream/xstream/security/advisories/GHSA-74cv-f58x-f9wf</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-21343 (High) detected in xstream-1.4.5.jar - ## CVE-2021-21343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.5.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Path to dependency file: WebGoat/webgoat-server/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,canner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar,/home/wss-scanner/.m2/repository/com/thoughtworks/xstream/xstream/1.4.5/xstream-1.4.5.jar</p>
<p>
Dependency Hierarchy:
- webgoat-server-v8.0.0-SNAPSHOT.jar (Root Library)
- vulnerable-components-v8.0.0-SNAPSHOT.jar
- :x: **xstream-1.4.5.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Techini/WebGoat/commit/d33cc0e32a0d1b949ff1b85af16890cd452276f8">d33cc0e32a0d1b949ff1b85af16890cd452276f8</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability where the processed stream at unmarshalling time contains type information to recreate the formerly written objects. XStream creates therefore new instances based on these type information. An attacker can manipulate the processed input stream and replace or inject objects, that result in the deletion of a file on the local host. No user is affected, who followed the recommendation to setup XStream's security framework with a whitelist limited to the minimal required types. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21343>CVE-2021-21343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-74cv-f58x-f9wf">https://github.com/x-stream/xstream/security/advisories/GHSA-74cv-f58x-f9wf</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in xstream jar cve high severity vulnerability vulnerable library xstream jar xstream is a serialization library from java objects to xml and back path to dependency file webgoat webgoat server pom xml path to vulnerable library home wss scanner repository com thoughtworks xstream xstream xstream jar canner repository com thoughtworks xstream xstream xstream jar home wss scanner repository com thoughtworks xstream xstream xstream jar dependency hierarchy webgoat server snapshot jar root library vulnerable components snapshot jar x xstream jar vulnerable library found in head commit a href vulnerability details xstream is a java library to serialize objects to xml and back again in xstream before version there is a vulnerability where the processed stream at unmarshalling time contains type information to recreate the formerly written objects xstream creates therefore new instances based on these type information an attacker can manipulate the processed input stream and replace or inject objects that result in the deletion of a file on the local host no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types if you rely on xstream s default blacklist of the security framework you will have to use at least version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream step up your open source security game with whitesource | 0 |
18,330 | 5,623,552,949 | IssuesEvent | 2017-04-04 15:11:28 | OneiricArts/CustomizeNewTab | https://api.github.com/repos/OneiricArts/CustomizeNewTab | closed | remove bookmark bar files | code enhancement | `bookmarksBar.js`, `BookmarksBar.handlebars`
should not be in master in current form. put in separate branch later (with permissions and stuff) | 1.0 | remove bookmark bar files - `bookmarksBar.js`, `BookmarksBar.handlebars`
should not be in master in current form. put in separate branch later (with permissions and stuff) | non_priority | remove bookmark bar files bookmarksbar js bookmarksbar handlebars should not be in master in current form put in separate branch later with permissions and stuff | 0 |
146,601 | 11,740,902,537 | IssuesEvent | 2020-03-11 20:34:10 | openshift/openshift-azure | https://api.github.com/repos/openshift/openshift-azure | closed | Public IP limit of 20 frequently reached per region | kind/test-flake | /kind test-flake
e.g.
```
time="2020-03-10T16:57:57Z" level=warning msg="deployment failed: &azure.ServiceError{Code:\"DeploymentFailed\", Message:\"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.\", Target:(*string)(nil), Details:[]map[string]interface {}{map[string]interface {}{\"code\":\"BadRequest\", \"message\":\"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"PublicIPCountLimitReached\\\",\\r\\n \\\"message\\\": \\\"Cannot create more than 20 public IP addresses for this subscription in this region.\\\",\\r\\n \\\"details\\\": []\\r\\n }\\r\\n}\"}, map[string]interface {}{\"code\":\"BadRequest\", \"message\":\"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"PublicIPCountLimitReached\\\",\\r\\n \\\"message\\\": \\\"Cannot create more than 20 public IP addresses for this subscription in this region.\\\",\\r\\n \\\"details\\\": []\\r\\n }\\r\\n}\"}}, InnerError:map[string]interface {}(nil), AdditionalInfo:[]map[string]interface {}(nil)}" func="pkg/fakerp.GetDeployer.func1()" file="pkg/fakerp/fakerp.go:104"
```
https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_openshift-azure/2246/pull-ci-azure-master-upgrade-v14.1/214/build-log.txt
https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_openshift-azure/2246/pull-ci-azure-master-upgrade-v15.0/25/build-log.txt
https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_openshift-azure/2246/pull-ci-azure-master-e2e-create-20191027-public/870/build-log.txt | 1.0 | Public IP limit of 20 frequently reached per region - /kind test-flake
e.g.
```
time="2020-03-10T16:57:57Z" level=warning msg="deployment failed: &azure.ServiceError{Code:\"DeploymentFailed\", Message:\"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.\", Target:(*string)(nil), Details:[]map[string]interface {}{map[string]interface {}{\"code\":\"BadRequest\", \"message\":\"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"PublicIPCountLimitReached\\\",\\r\\n \\\"message\\\": \\\"Cannot create more than 20 public IP addresses for this subscription in this region.\\\",\\r\\n \\\"details\\\": []\\r\\n }\\r\\n}\"}, map[string]interface {}{\"code\":\"BadRequest\", \"message\":\"{\\r\\n \\\"error\\\": {\\r\\n \\\"code\\\": \\\"PublicIPCountLimitReached\\\",\\r\\n \\\"message\\\": \\\"Cannot create more than 20 public IP addresses for this subscription in this region.\\\",\\r\\n \\\"details\\\": []\\r\\n }\\r\\n}\"}}, InnerError:map[string]interface {}(nil), AdditionalInfo:[]map[string]interface {}(nil)}" func="pkg/fakerp.GetDeployer.func1()" file="pkg/fakerp/fakerp.go:104"
```
https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_openshift-azure/2246/pull-ci-azure-master-upgrade-v14.1/214/build-log.txt
https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_openshift-azure/2246/pull-ci-azure-master-upgrade-v15.0/25/build-log.txt
https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_openshift-azure/2246/pull-ci-azure-master-e2e-create-20191027-public/870/build-log.txt | non_priority | public ip limit of frequently reached per region kind test flake e g time level warning msg deployment failed azure serviceerror code deploymentfailed message at least one resource deployment operation failed please list deployment operations for details please see for usage details target string nil details map interface map interface code badrequest message r n error r n code publicipcountlimitreached r n message cannot create more than public ip addresses for this subscription in this region r n details r n r n map interface code badrequest message r n error r n code publicipcountlimitreached r n message cannot create more than public ip addresses for this subscription in this region r n details r n r n innererror map interface nil additionalinfo map interface nil func pkg fakerp getdeployer file pkg fakerp fakerp go | 0 |
337,399 | 24,538,223,993 | IssuesEvent | 2022-10-11 23:25:37 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | opened | "How to Contribute Back" documentation | documentation ops and shared services | We'd like to create a standard way for teams to share cool tools and libraries and stuff with each other, so we will document how teams can share this stuff and make sure their tools get added to a list of community-provided stuff.
- demo
- stackoverflow
- rocketchat
- email newsletter
- added to https://beta-docs.developer.gov.bc.ca/reusable-services-list/
they can also request that we review the content, if they want. | 1.0 | "How to Contribute Back" documentation - We'd like to create a standard way for teams to share cool tools and libraries and stuff with each other, so we will document how teams can share this stuff and make sure their tools get added to a list of community-provided stuff.
- demo
- stackoverflow
- rocketchat
- email newsletter
- added to https://beta-docs.developer.gov.bc.ca/reusable-services-list/
they can also request that we review the content, if they want. | non_priority | how to contribute back documentation we d like to create a standard way for teams to share cool tools and libraries and stuff with each other so we will document how teams can share this stuff and make sure their tools get added to a list of community provided stuff demo stackoverflow rocketchat email newsletter added to they can also request that we review the content if they want | 0 |
43,457 | 7,046,831,659 | IssuesEvent | 2018-01-02 10:14:19 | usharesoft/hammr | https://api.github.com/repos/usharesoft/hammr | closed | RFE: add information on how to use supervision in hammr authentication | documentation good first issue | For authenticating in hammr using supervision mode (i.e. if the user with login userA has the entitlement to supervise, and wants to supervise userB), the doc does not mention the syntax to use.
The way to do it is to use for the user name
"userA\userB"
This information should appear in the following doc pages:
http://docs.usharesoft.com/projects/hammr/en/latest/pages/authentication/overview.html
and/or
http://docs.usharesoft.com/projects/hammr/en/latest/pages/getting-started/launch-hammr.html
| 1.0 | RFE: add information on how to use supervision in hammr authentication - For authenticating in hammr using supervision mode (i.e. if the user with login userA has the entitlement to supervise, and wants to supervise userB), the doc does not mention the syntax to use.
The way to do it is to use for the user name
"userA\userB"
This information should appear in the following doc pages:
http://docs.usharesoft.com/projects/hammr/en/latest/pages/authentication/overview.html
and/or
http://docs.usharesoft.com/projects/hammr/en/latest/pages/getting-started/launch-hammr.html
| non_priority | rfe add information on how to use supervision in hammr authentication for authenticating in hammr using supervision mode i e if the user with login usera has the entitlement to supervise and wants to supervise userb the doc does not mention the syntax to use the way to do it is to use for the user name usera userb this information should appear in the following doc pages and or | 0 |
319,501 | 27,376,923,367 | IssuesEvent | 2023-02-28 06:59:57 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | closed | Core e2e test framework should not import sub e2e test frameworks | kind/cleanup help wanted sig/testing lifecycle/frozen area/e2e-test-framework needs-triage | **What happened**:
As https://kubernetes.slack.com/archives/C9NK9KFFW/p1565372337078100 Core e2e test framework(`test/e2e/framework`) should not import sub e2e test frameworks (e.g. `test/e2e/framework/auth`) for avoiding circular dependency.
This issue is for managing such issues.
The following files are in e2e core framework:
- [x] cleanup.go (no dependency to sub packages)
- [x] exec_util.go (no dependency to sub packages)
- [x] expect.go (no dependency to sub packages)
- [x] flake_reporting_util.go (no dependency to sub packages)
- [ ] framework.go (https://github.com/kubernetes/kubernetes/pull/86679 is a part, but still we should have more PRs)
- [x] get-kubemark-resource-usage.go
- [x] google_compute.go (no dependency to sub packages)
- [ ] log.go
- [ ] log_size_monitoring.go
- [ ] nodes_util.go
- [ ] pods.go
- [x] profile_gatherer.go (https://github.com/kubernetes/kubernetes/pull/85304)
- [x] provider.go (no dependency to sub packages)
- [ ] psp.go
- [x] rc_util.go (no dependency to sub packages)
- [ ] resource_usage_gatherer.go
- [x] size.go (no dependency to sub packages)
- [x] skip.go (https://github.com/kubernetes/kubernetes/pull/87031 and https://github.com/kubernetes/kubernetes/issues/87047)
- [x] suites.go (https://github.com/kubernetes/kubernetes/pull/85235)
- [x] test_context.go (no dependency to sub packages)
- [ ] util.go
| 2.0 | Core e2e test framework should not import sub e2e test frameworks - **What happened**:
As https://kubernetes.slack.com/archives/C9NK9KFFW/p1565372337078100 Core e2e test framework(`test/e2e/framework`) should not import sub e2e test frameworks (e.g. `test/e2e/framework/auth`) for avoiding circular dependency.
This issue is for managing such issues.
The following files are in e2e core framework:
- [x] cleanup.go (no dependency to sub packages)
- [x] exec_util.go (no dependency to sub packages)
- [x] expect.go (no dependency to sub packages)
- [x] flake_reporting_util.go (no dependency to sub packages)
- [ ] framework.go (https://github.com/kubernetes/kubernetes/pull/86679 is a part, but still we should have more PRs)
- [x] get-kubemark-resource-usage.go
- [x] google_compute.go (no dependency to sub packages)
- [ ] log.go
- [ ] log_size_monitoring.go
- [ ] nodes_util.go
- [ ] pods.go
- [x] profile_gatherer.go (https://github.com/kubernetes/kubernetes/pull/85304)
- [x] provider.go (no dependency to sub packages)
- [ ] psp.go
- [x] rc_util.go (no dependency to sub packages)
- [ ] resource_usage_gatherer.go
- [x] size.go (no dependency to sub packages)
- [x] skip.go (https://github.com/kubernetes/kubernetes/pull/87031 and https://github.com/kubernetes/kubernetes/issues/87047)
- [x] suites.go (https://github.com/kubernetes/kubernetes/pull/85235)
- [x] test_context.go (no dependency to sub packages)
- [ ] util.go
| non_priority | core test framework should not import sub test frameworks what happened as core test framework test framework should not import sub test frameworks e g test framework auth for avoiding circular dependency this issue is for managing such issues the following files are in core framework cleanup go no dependency to sub packages exec util go no dependency to sub packages expect go no dependency to sub packages flake reporting util go no dependency to sub packages framework go is a part but still we should have more prs get kubemark resource usage go google compute go no dependency to sub packages log go log size monitoring go nodes util go pods go profile gatherer go provider go no dependency to sub packages psp go rc util go no dependency to sub packages resource usage gatherer go size go no dependency to sub packages skip go and suites go test context go no dependency to sub packages util go | 0 |
179,928 | 30,323,512,195 | IssuesEvent | 2023-07-10 21:16:02 | hicommonwealth/commonwealth | https://api.github.com/repos/hicommonwealth/commonwealth | closed | Design System: Upvotes | 3 design system refinement | Users can upvote posts as an interaction to show positive feedback

#### Design requirements
- This component can either indicate that a post **has been upvoted (`Upvoted=Yes`)** or has not been upvoted (`Upvoted=No`)**
- If `Upvote=No`, there are only 3 states`Default`, `Hover`, and `Disabled`
- If `Upvote=Yes`, there are 4 states`Default`, `Hover`, `Active`, and `Disabled`
- Max width for the container with numbers is 32px **(no overflow, use "k" or "m" to signify thousands or millions)**
#### Design resources

- 🎨 [Figma file](https://www.figma.com/file/eIVp33a1oCu0AtcLwSbGjr/%F0%9F%9A%A7-Components-and-Patterns?type=design&node-id=618%3A5101&t=ugnPgbpoK44T4NwU-1)
## Product Requirements
- Add component to Storybook with a story to be audited via Chromatic
- Replace current "Heart" system within the new Upvote component. | 1.0 | Design System: Upvotes - Users can upvote posts as an interaction to show positive feedback

#### Design requirements
- This component can either indicate that a post **has been upvoted (`Upvoted=Yes`)** or has not been upvoted (`Upvoted=No`)**
- If `Upvote=No`, there are only 3 states`Default`, `Hover`, and `Disabled`
- If `Upvote=Yes`, there are 4 states`Default`, `Hover`, `Active`, and `Disabled`
- Max width for the container with numbers is 32px **(no overflow, use "k" or "m" to signify thousands or millions)**
#### Design resources

- 🎨 [Figma file](https://www.figma.com/file/eIVp33a1oCu0AtcLwSbGjr/%F0%9F%9A%A7-Components-and-Patterns?type=design&node-id=618%3A5101&t=ugnPgbpoK44T4NwU-1)
## Product Requirements
- Add component to Storybook with a story to be audited via Chromatic
- Replace current "Heart" system within the new Upvote component. | non_priority | design system upvotes users can upvote posts as an interaction to show positive feedback design requirements this component can either indicate that a post has been upvoted upvoted yes or has not been upvoted upvoted no if upvote no there are only states default hover and disabled if upvote yes there are states default hover active and disabled max width for the container with numbers is no overflow use k or m to signify thousands or millions design resources 🎨 product requirements add component to storybook with a story to be audited via chromatic replace current heart system within the new upvote component | 0 |
10,019 | 2,920,879,805 | IssuesEvent | 2015-06-24 21:14:40 | IBM-Watson/design-library | https://api.github.com/repos/IBM-Watson/design-library | closed | Update Library to Guide | cleanup design guide ready for review runner | There are a bunch of places where we use `library` and it should be `guide`. We should fix that.
- [x] Changelog (#313)
- [x] Contributing Guidelines (#313)
- [x] Readme (#313)
- [x] Bower.json (#313)
- [x] Wiki (#329)
- [x] Labels
- [x] Library folder
- [ ] Repo itself | 1.0 | Update Library to Guide - There are a bunch of places where we use `library` and it should be `guide`. We should fix that.
- [x] Changelog (#313)
- [x] Contributing Guidelines (#313)
- [x] Readme (#313)
- [x] Bower.json (#313)
- [x] Wiki (#329)
- [x] Labels
- [x] Library folder
- [ ] Repo itself | non_priority | update library to guide there are a bunch of places where we use library and it should be guide we should fix that changelog contributing guidelines readme bower json wiki labels library folder repo itself | 0 |
10,046 | 8,788,584,560 | IssuesEvent | 2018-12-20 22:47:59 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | Changing encrypted in aws_redshift_cluster triggers ForceNew | enhancement service/redshift | Changing `encrypted` in `aws_redshift_cluster` from `false` to `true` triggers a ForceNew. Previously this was an unsupported change (See https://github.com/terraform-providers/terraform-provider-aws/issues/1119). The AWS docs now indicate that this is a supported change and I was able to do this manually to an existing cluster. The ForceNew flag should be removed from `encrypted`.
Reference Doc: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
Terraform v0.11.10
+ provider.aws v1.52.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_redshift_cluster
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_redshift_cluster" "cluster" {
cluster_identifier = "${var.name}"
node_type = "${var.node_type}"
database_name = "${var.database}"
master_username = "${var.user}"
master_password = "${var.pass}
number_of_nodes = 1
availability_zone = "${var.zone}"
allow_version_upgrade = true
automated_snapshot_retention_period = 1
cluster_public_key = "${var.public_key}"
cluster_version = "1.0"
cluster_type = "single-node"
encrypted = false
enhanced_vpc_routing = false
vpc_security_group_ids = ["${aws_security_group.redshift.id}"]
port = 5439
preferred_maintenance_window = "tue:07:00-tue:07:30"
publicly_accessible = true
skip_final_snapshot = true
cluster_parameter_group_name = "${var.parameter_group}"
cluster_subnet_group_name = "${var.subnet_group}"
logging {
enable = true
bucket_name = "${var.bucket}"
s3_key_prefix = "${var.s3_prefix}"
}
lifecycle {
ignore_changes = [
"cluster_public_key",
]
}
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
```
~ module.redshift.aws_redshift_cluster.cluster
encrypted: "false" => "true"
```
### Actual Behavior
```
-/+ module.redshift.aws_redshift_cluster.cluster
encrypted: "false" => "true" (forces new resource)
```
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. Create aws_redshift_cluster with encrypted = false
2. `terraform apply`
3. Change encrypted to true
4. `terraform plan`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
### References
| 1.0 | Changing encrypted in aws_redshift_cluster triggers ForceNew - Changing `encrypted` in `aws_redshift_cluster` from `false` to `true` triggers a ForceNew. Previously this was an unsupported change (See https://github.com/terraform-providers/terraform-provider-aws/issues/1119). The AWS docs now indicate that this is a supported change and I was able to do this manually to an existing cluster. The ForceNew flag should be removed from `encrypted`.
Reference Doc: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
Terraform v0.11.10
+ provider.aws v1.52.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_redshift_cluster
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_redshift_cluster" "cluster" {
cluster_identifier = "${var.name}"
node_type = "${var.node_type}"
database_name = "${var.database}"
master_username = "${var.user}"
master_password = "${var.pass}
number_of_nodes = 1
availability_zone = "${var.zone}"
allow_version_upgrade = true
automated_snapshot_retention_period = 1
cluster_public_key = "${var.public_key}"
cluster_version = "1.0"
cluster_type = "single-node"
encrypted = false
enhanced_vpc_routing = false
vpc_security_group_ids = ["${aws_security_group.redshift.id}"]
port = 5439
preferred_maintenance_window = "tue:07:00-tue:07:30"
publicly_accessible = true
skip_final_snapshot = true
cluster_parameter_group_name = "${var.parameter_group}"
cluster_subnet_group_name = "${var.subnet_group}"
logging {
enable = true
bucket_name = "${var.bucket}"
s3_key_prefix = "${var.s3_prefix}"
}
lifecycle {
ignore_changes = [
"cluster_public_key",
]
}
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
```
~ module.redshift.aws_redshift_cluster.cluster
encrypted: "false" => "true"
```
### Actual Behavior
```
-/+ module.redshift.aws_redshift_cluster.cluster
encrypted: "false" => "true" (forces new resource)
```
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. Create aws_redshift_cluster with encrypted = false
2. `terraform apply`
3. Change encrypted to true
4. `terraform plan`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
### References
| non_priority | changing encrypted in aws redshift cluster triggers forcenew changing encrypted in aws redshift cluster from false to true triggers a forcenew previously this was an unsupported change see the aws docs now indicate that this is a supported change and i was able to do this manually to an existing cluster the forcenew flag should be removed from encrypted reference doc community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version terraform provider aws affected resource s aws redshift cluster terraform configuration files hcl resource aws redshift cluster cluster cluster identifier var name node type var node type database name var database master username var user master password var pass number of nodes availability zone var zone allow version upgrade true automated snapshot retention period cluster public key var public key cluster version cluster type single node encrypted false enhanced vpc routing false vpc security group ids port preferred maintenance window tue tue publicly accessible true skip final snapshot true cluster parameter group name var parameter group cluster subnet group name var subnet group logging enable true bucket name var bucket key prefix var prefix lifecycle ignore changes cluster public key debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the panic output expected behavior module redshift aws redshift cluster cluster encrypted false true actual behavior module redshift aws redshift cluster cluster encrypted false true forces new resource steps to reproduce create aws redshift cluster with encrypted false terraform apply change encrypted to true terraform plan important factoids references | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.