Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
47,192 | 13,056,050,184 | IssuesEvent | 2020-07-30 03:30:13 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | vectors in trayinfo printouts aren't legible (Trac #114) | IceTray Migrated from Trac defect | <class icetray.vector_double instance at 0x4434343>
can't see contents
Migrated from https://code.icecube.wisc.edu/ticket/114
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:56",
"description": " <class icetray.vector_double instance at 0x4434343>\n\ncan't see contents ",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1416713876900096",
"component": "IceTray",
"summary": "vectors in trayinfo printouts aren't legible",
"priority": "major",
"keywords": "",
"time": "2008-08-21T12:36:41",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
| 1.0 | vectors in trayinfo printouts aren't legible (Trac #114) - <class icetray.vector_double instance at 0x4434343>
can't see contents
Migrated from https://code.icecube.wisc.edu/ticket/114
```json
{
"status": "closed",
"changetime": "2014-11-23T03:37:56",
"description": " <class icetray.vector_double instance at 0x4434343>\n\ncan't see contents ",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1416713876900096",
"component": "IceTray",
"summary": "vectors in trayinfo printouts aren't legible",
"priority": "major",
"keywords": "",
"time": "2008-08-21T12:36:41",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
| defect | vectors in trayinfo printouts aren t legible trac can t see contents migrated from json status closed changetime description n ncan t see contents reporter troy cc resolution fixed ts component icetray summary vectors in trayinfo printouts aren t legible priority major keywords time milestone owner troy type defect | 1 |
24,868 | 24,400,685,555 | IssuesEvent | 2022-10-05 00:52:31 | publishpress/PublishPress-Permissions | https://api.github.com/repos/publishpress/PublishPress-Permissions | closed | Pro: Disable statuses and other features by default | usability pro | We don't need this showing for new Pro users:
<img width="490" alt="Screen Shot 2021-08-06 at 9 12 04 AM" src="https://user-images.githubusercontent.com/3868028/128515742-163af5e8-b3e0-458b-b094-4d8b486c3e2f.png">
| True | Pro: Disable statuses and other features by default - We don't need this showing for new Pro users:
<img width="490" alt="Screen Shot 2021-08-06 at 9 12 04 AM" src="https://user-images.githubusercontent.com/3868028/128515742-163af5e8-b3e0-458b-b094-4d8b486c3e2f.png">
| non_defect | pro disable statuses and other features by default we don t need this showing for new pro users img width alt screen shot at am src | 0 |
341,559 | 30,592,407,326 | IssuesEvent | 2023-07-21 18:18:00 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: tpce/c=5000/nodes=3 failed | C-test-failure O-robot O-roachtest release-blocker branch-release-23.1 | roachtest.tpce/c=5000/nodes=3 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11002182?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11002182?buildTab=artifacts#/tpce/c=5000/nodes=3) on release-23.1 @ [043024b889af86e0b8f23f09619d8d3c2c9acd13](https://github.com/cockroachdb/cockroach/commits/043024b889af86e0b8f23f09619d8d3c2c9acd13):
```
(cluster.go:2249).Run: output in run_173244.121490028_n4_sudo-docker-run-cock: sudo docker run cockroachdb/tpc-e:latest --customers=5000 --racks=3 --init --hosts=10.142.0.247 returned: COMMAND_PROBLEM: exit status 1
(monitor.go:137).Wait: monitor failure: monitor task failed: t.Fatal() was called
test artifacts and logs in: /artifacts/tpce/c=5000/nodes=3/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=1</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tpce/c=5000/nodes=3.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 2.0 | roachtest: tpce/c=5000/nodes=3 failed - roachtest.tpce/c=5000/nodes=3 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11002182?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/11002182?buildTab=artifacts#/tpce/c=5000/nodes=3) on release-23.1 @ [043024b889af86e0b8f23f09619d8d3c2c9acd13](https://github.com/cockroachdb/cockroach/commits/043024b889af86e0b8f23f09619d8d3c2c9acd13):
```
(cluster.go:2249).Run: output in run_173244.121490028_n4_sudo-docker-run-cock: sudo docker run cockroachdb/tpc-e:latest --customers=5000 --racks=3 --init --hosts=10.142.0.247 returned: COMMAND_PROBLEM: exit status 1
(monitor.go:137).Wait: monitor failure: monitor task failed: t.Fatal() was called
test artifacts and logs in: /artifacts/tpce/c=5000/nodes=3/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=4</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=1</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*tpce/c=5000/nodes=3.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_defect | roachtest tpce c nodes failed roachtest tpce c nodes with on release cluster go run output in run sudo docker run cock sudo docker run cockroachdb tpc e latest customers racks init hosts returned command problem exit status monitor go wait monitor failure monitor task failed t fatal was called test artifacts and logs in artifacts tpce c nodes run parameters roachtest arch roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb test eng | 0 |
115,345 | 24,752,778,478 | IssuesEvent | 2022-10-21 14:59:34 | FerretDB/FerretDB | https://api.github.com/repos/FerretDB/FerretDB | closed | `explain`'s `queryPlanner` should be an object, not an array | code/bug | ### Versions
0.5.4
### What did you do?
`db.runCommand({ explain: { find: "objectidkeys", filter: { _id: ObjectId("000102030405060708091011") } } });`
### What did you expect to see?
Response with `queryPlanner` being an object, just like in MongoDB.
### What did you see instead?
`queryPlanner` being array with a single object.
We should remove that extra array.
And that's also a time to add an integration for `explain`. It should be skipped for Tigris because `explain` is not implemented for it yet; see #1253.
DoD:
- [x] Add a test that checks the format of `explain` command. | 1.0 | `explain`'s `queryPlanner` should be an object, not an array - ### Versions
0.5.4
### What did you do?
`db.runCommand({ explain: { find: "objectidkeys", filter: { _id: ObjectId("000102030405060708091011") } } });`
### What did you expect to see?
Response with `queryPlanner` being an object, just like in MongoDB.
### What did you see instead?
`queryPlanner` being array with a single object.
We should remove that extra array.
And that's also a time to add an integration for `explain`. It should be skipped for Tigris because `explain` is not implemented for it yet; see #1253.
DoD:
- [x] Add a test that checks the format of `explain` command. | non_defect | explain s queryplanner should be an object not an array versions what did you do db runcommand explain find objectidkeys filter id objectid what did you expect to see response with queryplanner being an object just like in mongodb what did you see instead queryplanner being array with a single object we should remove that extra array and that s also a time to add an integration for explain it should be skipped for tigris because explain is not implemented for it yet see dod add a test that checks the format of explain command | 0 |
252,719 | 19,061,044,054 | IssuesEvent | 2021-11-26 07:47:39 | openvinotoolkit/cvat | https://api.github.com/repos/openvinotoolkit/cvat | closed | run :docker-compose up -d error | question documentation | # run :docker-compose up -d error
Many thanks to the author for designing a convenient tool like CVAT, but when I tried to run the command "docker-compose up -d", some errors occurred, the details are as follows:
```
$ docker-compose up -d
Creating cvat_db ... done
Creating cvat_redis ... done
Creating cvat ... done
Creating cvat_ui ... done
Creating cvat_proxy ... error
ERROR: for cvat_proxy Cannot create container for service cvat_proxy: status code not OK but 500: {"Message":"Unhandled exception: Filesharing has been cancelled","StackTrace":" ▒▒ Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() λ▒▒ C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:▒к▒ 0\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() λ▒▒ C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:▒к▒ 47\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() λ▒▒ C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:▒к▒ 21\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ 
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}
```
| 1.0 | run :docker-compose up -d error - # run :docker-compose up -d error
Many thanks to the author for designing a convenient tool like CVAT, but when I tried to run the command "docker-compose up -d", some errors occurred, the details are as follows:
```
$ docker-compose up -d
Creating cvat_db ... done
Creating cvat_redis ... done
Creating cvat ... done
Creating cvat_ui ... done
Creating cvat_proxy ... error
ERROR: for cvat_proxy Cannot create container for service cvat_proxy: status code not OK but 500: {"Message":"Unhandled exception: Filesharing has been cancelled","StackTrace":" ▒▒ Docker.ApiServices.Mounting.FileSharing.<DoShareAsync>d__6.MoveNext() λ▒▒ C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:▒к▒ 0\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ Docker.ApiServices.Mounting.FileSharing.<ShareAsync>d__4.MoveNext() λ▒▒ C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.ApiServices\\Mounting\\FileSharing.cs:▒к▒ 47\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ Docker.HttpApi.Controllers.FilesharingController.<ShareDirectory>d__2.MoveNext() λ▒▒ C:\\workspaces\\stable-2.3.x\\src\\github.com\\docker\\pinata\\win\\src\\Docker.HttpApi\\Controllers\\FilesharingController.cs:▒к▒ 21\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ System.Threading.Tasks.TaskHelpersExtensions.<CastToObject>d__1`1.MoveNext()\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ System.Web.Http.Controllers.ApiControllerActionInvoker.<InvokeActionAsyncCore>d__1.MoveNext()\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ 
System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ System.Web.Http.Controllers.ActionFilterResult.<ExecuteAsync>d__5.MoveNext()\r\n--- ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ ---\r\n ▒▒ System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()\r\n ▒▒ System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)\r\n ▒▒ System.Web.Http.Dispatcher.HttpControllerDispatcher.<SendAsync>d__15.MoveNext()"}
```
| non_defect | run docker compose up d error run docker compose up d error many thanks to the author for designing a convenient tool like cvat but when i tried to run the command docker compose up d some errors occurred the details are as follows docker compose up d creating cvat db done creating cvat redis done creating cvat done creating cvat ui done creating cvat proxy error error for cvat proxy cannot create container for service cvat proxy status code not ok but message unhandled exception filesharing has been cancelled stacktrace ▒▒ docker apiservices mounting filesharing d movenext λ▒▒ c workspaces stable x src github com docker pinata win src docker apiservices mounting filesharing cs ▒к▒ r n ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ r n ▒▒ system runtime exceptionservices exceptiondispatchinfo throw r n ▒▒ system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task r n ▒▒ docker apiservices mounting filesharing d movenext λ▒▒ c workspaces stable x src github com docker pinata win src docker apiservices mounting filesharing cs ▒к▒ r n ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ r n ▒▒ system runtime exceptionservices exceptiondispatchinfo throw r n ▒▒ system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task r n ▒▒ docker httpapi controllers filesharingcontroller d movenext λ▒▒ c workspaces stable x src github com docker pinata win src docker httpapi controllers filesharingcontroller cs ▒к▒ r n ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ r n ▒▒ system runtime exceptionservices exceptiondispatchinfo throw r n ▒▒ system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task r n ▒▒ system threading tasks taskhelpersextensions d movenext r n ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ r n ▒▒ system runtime exceptionservices exceptiondispatchinfo throw r n ▒▒ system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task r n ▒▒ system web http controllers apicontrolleractioninvoker d movenext r n 
▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ r n ▒▒ system runtime exceptionservices exceptiondispatchinfo throw r n ▒▒ system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task r n ▒▒ system web http controllers actionfilterresult d movenext r n ▒▒▒▒▒쳣▒▒▒▒һλ▒▒▒ж▒ջ▒▒▒ٵ▒ĩβ r n ▒▒ system runtime exceptionservices exceptiondispatchinfo throw r n ▒▒ system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task r n ▒▒ system web http dispatcher httpcontrollerdispatcher d movenext | 0 |
64,732 | 18,850,135,829 | IssuesEvent | 2021-11-11 19:42:13 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Fedora Chat refused to send messages until I logged out and in again. I could see other people's messages | T-Defect | ### Steps to reproduce
1. Where are you starting? What can you see?
In the #fedora-meeting:fedora.im channel on https://chat.fedoraproject.org/
2. What do you click?
Edit message. Then after edit, "Enter" and "Save" are unresponsive.
This was today at around 19:02
Clicked cancel, and found I was unable to send any messages.
3. More steps…
Noticed that my phone was working fine. Logged out and in to web client again and now it works
### Outcome
#### What did you expect?
Messages sent
#### What happened instead?
Messages not sent.
### Operating system
Fedora Workstation
### Browser information
Firefox 94.0
### URL for webapp
https://chat.fedoraproject.org/
### Application version
Fedora Chat version: v1.9.4 Olm version: 3.2.3
### Homeserver
fedora.im
### Will you send logs?
Yes | 1.0 | Fedora Chat refused to send messages until I logged out and in again. I could see other people's messages - ### Steps to reproduce
1. Where are you starting? What can you see?
In the #fedora-meeting:fedora.im channel on https://chat.fedoraproject.org/
2. What do you click?
Edit message. Then after edit, "Enter" and "Save" are unresponsive.
This was today at around 19:02
Clicked cancel, and found I was unable to send any messages.
3. More steps…
Noticed that my phone was working fine. Logged out and in to web client again and now it works
### Outcome
#### What did you expect?
Messages sent
#### What happened instead?
Messages not sent.
### Operating system
Fedora Workstation
### Browser information
Firefox 94.0
### URL for webapp
https://chat.fedoraproject.org/
### Application version
Fedora Chat version: v1.9.4 Olm version: 3.2.3
### Homeserver
fedora.im
### Will you send logs?
Yes | defect | fedora chat refused to send messages until i logged out and in again i could see other people s messages steps to reproduce where are you starting what can you see in the fedora meeting fedora im channel on what do you click edit message then after edit enter and save are unresponsive this was today at around clicked cancel and found i was unable to send any messages more steps… noticed that my phone was working fine logged out and in to web client again and now it works outcome what did you expect messages sent what happened instead messages not sent operating system fedora workstation browser information firefox url for webapp application version fedora chat version olm version homeserver fedora im will you send logs yes | 1 |
34,081 | 7,342,121,323 | IssuesEvent | 2018-03-07 06:15:33 | comicpanda/log4jdbc-log4j2 | https://api.github.com/repos/comicpanda/log4jdbc-log4j2 | closed | Typo in DriverSpy | Priority-Medium Type-Defect auto-migrated | ```
What steps will reproduce the problem?
1. Just looking into net.sf.log4jdbc.sql.jdbcapi.DriverSpy
What is the expected output? What do you see instead?
During initialization, the class puts to debug typo: intialization instead of
initialization (twice).
Could you please fix it at modifying this file?
Thank you
```
Original issue reported on code.google.com by `D.Barabash` on 15 Apr 2015 at 9:36
| 1.0 | Typo in DriverSpy - ```
What steps will reproduce the problem?
1. Just looking into net.sf.log4jdbc.sql.jdbcapi.DriverSpy
What is the expected output? What do you see instead?
During initialization, the class puts to debug typo: intialization instead of
initialization (twice).
Could you please fix it at modifying this file?
Thank you
```
Original issue reported on code.google.com by `D.Barabash` on 15 Apr 2015 at 9:36
| defect | typo in driverspy what steps will reproduce the problem just looking into net sf sql jdbcapi driverspy what is the expected output what do you see instead during initialization the class puts to debug typo intialization instead of initialization twice could you please fix it at modifying this file thank you original issue reported on code google com by d barabash on apr at | 1 |
36,728 | 4,757,277,689 | IssuesEvent | 2016-10-24 16:08:27 | MozillaFoundation/Advocacy | https://api.github.com/repos/MozillaFoundation/Advocacy | reopened | Design fundraising snippet variant for testing (EU NN) | design Fundraising P1 snippet | @lovegushwa and I met with Stephen Horlander who is on the Firefox desktop UX team about the donation snippet design. We agreed to test a few variants that are less disruptive for users, as recommended by Stephen. Variants we should test if possible when we launch an EU fundraiser in the snippet on July 21:
1. text-only snippet with blue linked donation ask (Simple)
2. default snippet design from EOY (buttons with amounts + red "Donate" button)
3. some version of # 2 above but with more muted elements / subtler design treatment
Copywriting for the snippets is happening in #69
Next steps: @lovegushwa is going to chat with Stephen and mock up some options for # 3 above. NOTE: If time and capacity are too constrained, we can launch with a test of 1 and 2 variants and test a 3rd new variant when design capacity allows in the future. | 1.0 | Design fundraising snippet variant for testing (EU NN) - @lovegushwa and I met with Stephen Horlander who is on the Firefox desktop UX team about the donation snippet design. We agreed to test a few variants that are less disruptive for users, as recommended by Stephen. Variants we should test if possible when we launch an EU fundraiser in the snippet on July 21:
1. text-only snippet with blue linked donation ask (Simple)
2. default snippet design from EOY (buttons with amounts + red "Donate" button)
3. some version of # 2 above but with more muted elements / subtler design treatment
Copywriting for the snippets is happening in #69
Next steps: @lovegushwa is going to chat with Stephen and mock up some options for # 3 above. NOTE: If time and capacity are too constrained, we can launch with a test of 1 and 2 variants and test a 3rd new variant when design capacity allows in the future. | non_defect | design fundraising snippet variant for testing eu nn lovegushwa and i met with stephen horlander who is on the firefox desktop ux team about the donation snippet design we agreed to test a few variants that are less disruptive for users as recommended by stephen variants we should test if possible when we launch an eu fundraiser in the snippet on july text only snippet with blue linked donation ask simple default snippet design from eoy buttons with amounts red donate button some version of above but with more muted elements subtler design treatment copywriting for the snippets is happening in next steps lovegushwa is going to chat with stephen and mock up some options for above note if time and capacity are too constrained we can launch with a test of and variants and test a new variant when design capacity allows in the future | 0 |
65,703 | 19,661,498,137 | IssuesEvent | 2022-01-10 17:28:34 | pymc-devs/pymc | https://api.github.com/repos/pymc-devs/pymc | closed | `sample_prior_predictive(var_names=[...])` results in `KeyError` inside `to_inferencedata()` | defects | ## Description of your problem
**Please provide a minimal, self-contained, and reproducible example.**
```python
with pm.Model() as model:
x = pm.Normal("x")
y = pm.Normal("y", x, observed=5)
idata = pm.sample(tune=10, draws=20, chains=1, step=pm.Metropolis())
pm.sample_posterior_predictive(idata, var_names=["x"]) # 👈 this works fine
pm.sample_prior_predictive(var_names=["x"]) # 👈 this doesn't
```
**Please provide the full traceback.**
<details><summary>Complete error traceback</summary>
```python
KeyError Traceback (most recent call last)
<ipython-input-12-46d00c82b13d> in <module>
4 idata = pm.sample(tune=10, draws=20, chains=1, step=pm.Metropolis())
5 pm.sample_posterior_predictive(idata)
----> 6 pm.sample_prior_predictive(var_names=["x"])
c:\users\osthege\repos\pymc-main\pymc\sampling.py in sample_prior_predictive(samples, model, var_names, random_seed, mode, return_inferencedata, idata_kwargs)
2030 if idata_kwargs:
2031 ikwargs.update(idata_kwargs)
-> 2032 return pm.to_inference_data(prior=prior, **ikwargs)
2033
2034
c:\users\osthege\repos\pymc-main\pymc\backends\arviz.py in to_inference_data(trace, prior, posterior_predictive, log_likelihood, coords, dims, model, save_warmup, density_dist_obs)
587 return trace
588
--> 589 return InferenceDataConverter(
590 trace=trace,
591 prior=prior,
c:\users\osthege\repos\pymc-main\pymc\backends\arviz.py in to_inference_data(self)
523 "posterior_predictive": self.posterior_predictive_to_xarray(),
524 "predictions": self.predictions_to_xarray(),
--> 525 **self.priors_to_xarray(),
526 "observed_data": self.observed_data_to_xarray(),
527 }
c:\users\osthege\repos\pymc-main\pymc\backends\arviz.py in priors_to_xarray(self)
442 if var_names is None
443 else dict_to_dataset(
--> 444 {k: np.expand_dims(self.prior[k], 0) for k in var_names},
445 library=pymc,
446 coords=self.coords,
c:\users\osthege\repos\pymc-main\pymc\backends\arviz.py in <dictcomp>(.0)
442 if var_names is None
443 else dict_to_dataset(
--> 444 {k: np.expand_dims(self.prior[k], 0) for k in var_names},
445 library=pymc,
446 coords=self.coords,
KeyError: 'y'
```
</details>
## Versions and main components
* PyMC/PyMC3 Version: `main` | 1.0 | `sample_prior_predictive(var_names=[...])` results in `KeyError` inside `to_inferencedata()` - ## Description of your problem
**Please provide a minimal, self-contained, and reproducible example.**
```python
with pm.Model() as model:
x = pm.Normal("x")
y = pm.Normal("y", x, observed=5)
idata = pm.sample(tune=10, draws=20, chains=1, step=pm.Metropolis())
pm.sample_posterior_predictive(idata, var_names=["x"]) # 👈 this works fine
pm.sample_prior_predictive(var_names=["x"]) # 👈 this doesn't
```
**Please provide the full traceback.**
<details><summary>Complete error traceback</summary>
```python
KeyError Traceback (most recent call last)
<ipython-input-12-46d00c82b13d> in <module>
4 idata = pm.sample(tune=10, draws=20, chains=1, step=pm.Metropolis())
5 pm.sample_posterior_predictive(idata)
----> 6 pm.sample_prior_predictive(var_names=["x"])
c:\users\osthege\repos\pymc-main\pymc\sampling.py in sample_prior_predictive(samples, model, var_names, random_seed, mode, return_inferencedata, idata_kwargs)
2030 if idata_kwargs:
2031 ikwargs.update(idata_kwargs)
-> 2032 return pm.to_inference_data(prior=prior, **ikwargs)
2033
2034
c:\users\osthege\repos\pymc-main\pymc\backends\arviz.py in to_inference_data(trace, prior, posterior_predictive, log_likelihood, coords, dims, model, save_warmup, density_dist_obs)
587 return trace
588
--> 589 return InferenceDataConverter(
590 trace=trace,
591 prior=prior,
c:\users\osthege\repos\pymc-main\pymc\backends\arviz.py in to_inference_data(self)
523 "posterior_predictive": self.posterior_predictive_to_xarray(),
524 "predictions": self.predictions_to_xarray(),
--> 525 **self.priors_to_xarray(),
526 "observed_data": self.observed_data_to_xarray(),
527 }
c:\users\osthege\repos\pymc-main\pymc\backends\arviz.py in priors_to_xarray(self)
442 if var_names is None
443 else dict_to_dataset(
--> 444 {k: np.expand_dims(self.prior[k], 0) for k in var_names},
445 library=pymc,
446 coords=self.coords,
c:\users\osthege\repos\pymc-main\pymc\backends\arviz.py in <dictcomp>(.0)
442 if var_names is None
443 else dict_to_dataset(
--> 444 {k: np.expand_dims(self.prior[k], 0) for k in var_names},
445 library=pymc,
446 coords=self.coords,
KeyError: 'y'
```
</details>
## Versions and main components
* PyMC/PyMC3 Version: `main` | defect | sample prior predictive var names results in keyerror inside to inferencedata description of your problem please provide a minimal self contained and reproducible example python with pm model as model x pm normal x y pm normal y x observed idata pm sample tune draws chains step pm metropolis pm sample posterior predictive idata var names 👈 this works fine pm sample prior predictive var names 👈 this doesn t please provide the full traceback complete error traceback python keyerror traceback most recent call last in idata pm sample tune draws chains step pm metropolis pm sample posterior predictive idata pm sample prior predictive var names c users osthege repos pymc main pymc sampling py in sample prior predictive samples model var names random seed mode return inferencedata idata kwargs if idata kwargs ikwargs update idata kwargs return pm to inference data prior prior ikwargs c users osthege repos pymc main pymc backends arviz py in to inference data trace prior posterior predictive log likelihood coords dims model save warmup density dist obs return trace return inferencedataconverter trace trace prior prior c users osthege repos pymc main pymc backends arviz py in to inference data self posterior predictive self posterior predictive to xarray predictions self predictions to xarray self priors to xarray observed data self observed data to xarray c users osthege repos pymc main pymc backends arviz py in priors to xarray self if var names is none else dict to dataset k np expand dims self prior for k in var names library pymc coords self coords c users osthege repos pymc main pymc backends arviz py in if var names is none else dict to dataset k np expand dims self prior for k in var names library pymc coords self coords keyerror y versions and main components pymc version main | 1 |
292,349 | 21,963,986,010 | IssuesEvent | 2022-05-24 18:18:40 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [docs] document how to use a custom TLS certificate for Platform install on k8s | area/documentation priority/low | ### Description
We don't have document on how to customize the certs instead of using the default cert with the key on github | 1.0 | [docs] document how to use a custom TLS certificate for Platform install on k8s - ### Description
We don't have document on how to customize the certs instead of using the default cert with the key on github | non_defect | document how to use a custom tls certificate for platform install on description we don t have document on how to customize the certs instead of using the default cert with the key on github | 0 |
2,520 | 2,607,906,847 | IssuesEvent | 2015-02-26 00:16:00 | chrsmithdemos/zen-coding | https://api.github.com/repos/chrsmithdemos/zen-coding | closed | Zencoding in Aptana 3 | auto-migrated Priority-Medium Type-Defect | ```
Zencoding not working in Aptana 3
```
-----
Original issue reported on code.google.com by `dergache...@gmail.com` on 18 Dec 2010 at 4:02 | 1.0 | Zencoding in Aptana 3 - ```
Zencoding not working in Aptana 3
```
-----
Original issue reported on code.google.com by `dergache...@gmail.com` on 18 Dec 2010 at 4:02 | defect | zencoding in aptana zencoding not working in aptana original issue reported on code google com by dergache gmail com on dec at | 1 |
35,828 | 7,804,574,072 | IssuesEvent | 2018-06-11 07:58:18 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Incomplete Support annotation on some of the H2 style mergeInto overloads | C: Documentation P: Medium R: Fixed T: Defect | The H2 style `DSLContext.mergeInto()` methods do not have consistent `@Support` annotations. In particular, the ones that are not generated (taking `Field<?>...` or `Collection<? extends Field<?>>` arguments) are incomplete.
----
See also: https://www.jooq.org/doc/latest/manual/sql-building/sql-statements/merge-statement/#comment-3938157570 | 1.0 | Incomplete Support annotation on some of the H2 style mergeInto overloads - The H2 style `DSLContext.mergeInto()` methods do not have consistent `@Support` annotations. In particular, the ones that are not generated (taking `Field<?>...` or `Collection<? extends Field<?>>` arguments) are incomplete.
----
See also: https://www.jooq.org/doc/latest/manual/sql-building/sql-statements/merge-statement/#comment-3938157570 | defect | incomplete support annotation on some of the style mergeinto overloads the style dslcontext mergeinto methods do not have consistent support annotations in particular the ones that are not generated taking field or collection arguments are incomplete see also | 1 |
23,237 | 3,779,449,174 | IssuesEvent | 2016-03-18 08:22:14 | CostaLab/reg-gen | https://api.github.com/repos/CostaLab/reg-gen | closed | Test case does not work | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Download the source code
2. Install
3. Run the test code
What is the expected output? What do you see instead?
I get a Python file read error.
What version of the product are you using? On what operating system?
Python 2.7
Please provide any additional information below.
Your test case is not working. It reads the InputMatrix.txt file as if the
first line contains a file. However, the first line is only a header line. I
had to fix this problem by deleting the header line in the InputMatrix.txt file.
```
Original issue reported on code.google.com by `dxqu...@uci.edu` on 5 Aug 2014 at 6:51 | 1.0 | Test case does not work - ```
What steps will reproduce the problem?
1. Download the source code
2. Install
3. Run the test code
What is the expected output? What do you see instead?
I get a Python file read error.
What version of the product are you using? On what operating system?
Python 2.7
Please provide any additional information below.
Your test case is not working. It reads the InputMatrix.txt file as if the
first line contains a file. However, the first line is only a header line. I
had to fix this problem by deleting the header line in the InputMatrix.txt file.
```
Original issue reported on code.google.com by `dxqu...@uci.edu` on 5 Aug 2014 at 6:51 | defect | test case does not work what steps will reproduce the problem download the source code install run the test code what is the expected output what do you see instead i get a python file read error what version of the product are you using on what operating system python please provide any additional information below your test case is not working it reads the inputmatrix txt file as if the first line contains a file however the first line is only a header line i had to fix this problem by deleting the header line in the inputmatrix txt file original issue reported on code google com by dxqu uci edu on aug at | 1 |
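The fix described above (dropping the header line so the remaining lines of InputMatrix.txt can be read as file paths) can be sketched in Python; the function name `read_file_list` and the `has_header` flag are hypothetical, not part of reg-gen:

```python
# Hypothetical sketch of the workaround above: skip a header line before
# treating each remaining non-empty line of an input matrix file as a path.
# read_file_list and has_header are illustrative names, not reg-gen API.
def read_file_list(path, has_header=True):
    with open(path) as fh:
        lines = [line.strip() for line in fh if line.strip()]
    return lines[1:] if has_header else lines
```

With the header skipped, each returned entry can then be opened as a file, avoiding the read error on the header line.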
203,434 | 23,155,214,182 | IssuesEvent | 2022-07-29 12:22:35 | turkdevops/WordPress | https://api.github.com/repos/turkdevops/WordPress | closed | CVE-2021-44906 (Medium) detected in minimist-1.2.5.tgz - autoclosed | security vulnerability | ## CVE-2021-44906 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.5.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /wp-content/themes/twentytwenty/package.json</p>
<p>Path to vulnerable library: /wp-content/themes/twentytwenty/node_modules/minimist/package.json,/wp-content/themes/twentynineteen/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.14.1.tgz (Root Library)
- meow-3.7.0.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/WordPress/commit/a30d128bbb79f203ffa32cc8f88c681f2c014e5b">a30d128bbb79f203ffa32cc8f88c681f2c014e5b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution (minimist): 1.2.6</p>
<p>Direct dependency fix Resolution (node-sass): 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-44906 (Medium) detected in minimist-1.2.5.tgz - autoclosed - ## CVE-2021-44906 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.5.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /wp-content/themes/twentytwenty/package.json</p>
<p>Path to vulnerable library: /wp-content/themes/twentytwenty/node_modules/minimist/package.json,/wp-content/themes/twentynineteen/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- node-sass-4.14.1.tgz (Root Library)
- meow-3.7.0.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/WordPress/commit/a30d128bbb79f203ffa32cc8f88c681f2c014e5b">a30d128bbb79f203ffa32cc8f88c681f2c014e5b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution (minimist): 1.2.6</p>
<p>Direct dependency fix Resolution (node-sass): 6.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in minimist tgz autoclosed cve medium severity vulnerability vulnerable library minimist tgz parse argument options library home page a href path to dependency file wp content themes twentytwenty package json path to vulnerable library wp content themes twentytwenty node modules minimist package json wp content themes twentynineteen node modules minimist package json dependency hierarchy node sass tgz root library meow tgz x minimist tgz vulnerable library found in head commit a href found in base branch master vulnerability details minimist is vulnerable to prototype pollution via file index js function setkey lines publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution minimist direct dependency fix resolution node sass step up your open source security game with mend | 0 |
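The advisory above describes prototype pollution through an unguarded key-assignment routine (`setKey()` in minimist's index.js). The sketch below is not minimist's actual code; it is a minimal, self-contained illustration of the same bug class:

```javascript
// Illustrative sketch of the bug class behind the advisory above: walking a
// key path without guarding against "__proto__" lets an attacker-controlled
// argument write onto Object.prototype. This is NOT minimist's actual setKey().
function setKeyUnsafe(obj, keys, value) {
  let o = obj;
  for (const key of keys.slice(0, -1)) {
    if (o[key] === undefined) o[key] = {};
    o = o[key]; // "__proto__" steps onto Object.prototype here
  }
  o[keys[keys.length - 1]] = value;
}

setKeyUnsafe({}, ["__proto__", "polluted"], true);
console.log(({}).polluted); // true: every plain object now sees "polluted"
```

A patched version guards reserved keys such as `__proto__` before assigning, which is the approach the upstream fix takes.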
69,567 | 22,502,102,665 | IssuesEvent | 2022-06-23 12:44:22 | zed-industries/feedback | https://api.github.com/repos/zed-industries/feedback | closed | Clicking red button in traffic light does not prompt user to save | defect polish | **Is your feature request related to a problem? Please describe.**
Zed asks the user if they want to save a file (with changes) if they:
- Close the tab
- Quit Zed
- Close the window via the command palette ("workspace: close window") or via the file menu
but Zed does not prompt the user with a save dialogue if they click the red button in the traffic light in the upper left-hand corner.
Zed asks the user if they want to save a file (with changes) if they:
- Close the tab
- Quit Zed
- Close the window via the command palette ("workspace: close window") or via the file menu
but Zed does not prompt the user with a save dialogue if they click the red button in the traffic light in the upper left-hand corner.
47,949 | 13,264,868,063 | IssuesEvent | 2020-08-21 05:02:21 | uniquelyparticular/shipengine-request | https://api.github.com/repos/uniquelyparticular/shipengine-request | opened | WS-2020-0127 (Low) detected in npm-registry-fetch-3.9.0.tgz | security vulnerability | ## WS-2020-0127 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-registry-fetch-3.9.0.tgz</b></p></summary>
<p>Fetch-based http client for use with npm registry APIs</p>
<p>Library home page: <a href="https://registry.npmjs.org/npm-registry-fetch/-/npm-registry-fetch-3.9.0.tgz">https://registry.npmjs.org/npm-registry-fetch/-/npm-registry-fetch-3.9.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/shipengine-request/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/shipengine-request/node_modules/npm/node_modules/npm-registry-fetch/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-15.13.15.tgz (Root Library)
- npm-5.1.7.tgz
- npm-6.9.0.tgz
- :x: **npm-registry-fetch-3.9.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/uniquelyparticular/shipengine-request/commit/9fcea6cad97d59393155a130bd746b21aa833b23">9fcea6cad97d59393155a130bd746b21aa833b23</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
npm-registry-fetch before 4.0.5 and 8.1.1 is vulnerable to an information exposure vulnerability through log files.
<p>Publish Date: 2020-07-07
<p>URL: <a href=https://github.com/npm/npm-registry-fetch/commit/18bf9b97fb1deecdba01ffb05580370846255c88>WS-2020-0127</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1544">https://www.npmjs.com/advisories/1544</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: npm-registry-fetch - 4.0.5,8.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2020-0127 (Low) detected in npm-registry-fetch-3.9.0.tgz - ## WS-2020-0127 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>npm-registry-fetch-3.9.0.tgz</b></p></summary>
<p>Fetch-based http client for use with npm registry APIs</p>
<p>Library home page: <a href="https://registry.npmjs.org/npm-registry-fetch/-/npm-registry-fetch-3.9.0.tgz">https://registry.npmjs.org/npm-registry-fetch/-/npm-registry-fetch-3.9.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/shipengine-request/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/shipengine-request/node_modules/npm/node_modules/npm-registry-fetch/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-15.13.15.tgz (Root Library)
- npm-5.1.7.tgz
- npm-6.9.0.tgz
- :x: **npm-registry-fetch-3.9.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/uniquelyparticular/shipengine-request/commit/9fcea6cad97d59393155a130bd746b21aa833b23">9fcea6cad97d59393155a130bd746b21aa833b23</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
npm-registry-fetch before 4.0.5 and 8.1.1 is vulnerable to an information exposure vulnerability through log files.
<p>Publish Date: 2020-07-07
<p>URL: <a href=https://github.com/npm/npm-registry-fetch/commit/18bf9b97fb1deecdba01ffb05580370846255c88>WS-2020-0127</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1544">https://www.npmjs.com/advisories/1544</a></p>
<p>Release Date: 2020-07-14</p>
<p>Fix Resolution: npm-registry-fetch - 4.0.5,8.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | ws low detected in npm registry fetch tgz ws low severity vulnerability vulnerable library npm registry fetch tgz fetch based http client for use with npm registry apis library home page a href path to dependency file tmp ws scm shipengine request package json path to vulnerable library tmp ws scm shipengine request node modules npm node modules npm registry fetch package json dependency hierarchy semantic release tgz root library npm tgz npm tgz x npm registry fetch tgz vulnerable library found in head commit a href vulnerability details npm registry fetch before and is vulnerable to an information exposure vulnerability through log files publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution npm registry fetch step up your open source security game with whitesource | 0 |
978 | 2,594,399,734 | IssuesEvent | 2015-02-20 02:51:32 | BALL-Project/ball | https://api.github.com/repos/BALL-Project/ball | closed | BALLView crash when opening project with stored PDB file | C: VIEW P: major R: worksforme T: defect | **Reported by pthiel on 18 Dec 41453222 22:55 UTC**
Opening or downloading PDB files and subsequent saving of this project crashes when reopening the latter with a segmentation fault. I traced the problem to occur the first time after this commit http://ball-gitweb.bioinf.uni-sb.de/?p=BALL.git;a=commit;h=6b2a86c46eda3b016684e2f500c39cb9f5c31e45. Project files created with older versions could be opened properly, so it seems to be a problem with saving the project. The segfault seems to occur in class persistenceManager.C (line 310, obj->persistentRead(*this)). At this position I got stuck ... | 1.0 | BALLView crash when opening project with stored PDB file - **Reported by pthiel on 18 Dec 41453222 22:55 UTC**
Opening or downloading PDB files and subsequent saving of this project crashes when reopening the latter with a segmentation fault. I traced the problem to occur the first time after this commit http://ball-gitweb.bioinf.uni-sb.de/?p=BALL.git;a=commit;h=6b2a86c46eda3b016684e2f500c39cb9f5c31e45. Project files created with older versions could be opened properly, so it seems to be a problem with saving the project. The segfault seems to occur in class persistenceManager.C (line 310, obj->persistentRead(*this)). At this position I got stuck ... | defect | ballview crash when opening project with stored pdb file reported by pthiel on dec utc opening or downloading pdb files and subsequent saving of this project crashes when reopening the latter with a segmentation fault i traced the problem to occur the first time after this commit project files created with older versions could be opened properly so it seems to be a problem with saving the project the segfault seems to occur in class persistencemanager c line obj persistentread this at this position i got stuck | 1 |
15,132 | 2,850,047,325 | IssuesEvent | 2015-05-31 07:31:07 | c-rack/squid-ecap-gzip | https://api.github.com/repos/c-rack/squid-ecap-gzip | closed | Cannot build on Solaris 10 | auto-migrated Priority-Medium Type-Defect | ```
Libecap configured and built successfully.
But adapter is not:
root @ fhtagn /patch/tmp/squid-ecap-gzip # ./configure
'CXXFLAGS=-I/usr/sfw/include/c++/3.4.3/backward -L/usr/local/lib'
checking for a BSD-compatible install... /opt/csw/gnu/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /opt/csw/gnu/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking build system type... i386-pc-solaris2.10
checking host system type... i386-pc-solaris2.10
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for a sed that does not truncate output... /opt/csw/gnu/sed
checking for grep that handles long lines and -e... /opt/csw/gnu/grep
checking for egrep... /opt/csw/gnu/grep -E
checking for ld used by gcc... /usr/ccs/bin/ld
checking if the linker (/usr/ccs/bin/ld) is GNU ld... no
checking for /usr/ccs/bin/ld option to reload object files... -r
checking for BSD-compatible nm... /opt/csw/gnu/nm -B
checking whether ln -s works... yes
checking how to recognize dependent libraries... pass_all
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking how to run the C++ preprocessor... g++ -E
checking the maximum length of command line arguments... 786240
checking command to parse /opt/csw/gnu/nm -B output from gcc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... ranlib
checking for strip... strip
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC
checking if gcc PIC flag -fPIC works... yes
checking if gcc static flag -static works... no
checking if gcc supports -c -o file.o... yes
checking whether the gcc linker (/usr/ccs/bin/ld) supports shared libraries...
yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... solaris2.10 ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
configure: creating libtool
configuring libtool for CXX support
checking for ld used by g++... /usr/ccs/bin/ld
checking if the linker (/usr/ccs/bin/ld) is GNU ld... no
checking whether the g++ linker (/usr/ccs/bin/ld) supports shared libraries...
yes
checking for g++ option to produce PIC... -fPIC
checking if g++ PIC flag -fPIC works... yes
checking if g++ static flag -static works... no
checking if g++ supports -c -o file.o... yes
checking whether the g++ linker (/usr/ccs/bin/ld) supports shared libraries...
yes
checking dynamic linker characteristics... solaris2.10 ld.so
(cached) (cached) checking how to hardcode library paths into programs...
immediate
checking for ranlib... (cached) ranlib
checking for ar... /opt/csw/gnu/ar
remembering installation prefix as /usr/local
checking whether make sets $(MAKE)... (cached) yes
checking whether we are using the GNU C++ compiler... (cached) yes
checking whether g++ accepts -g... (cached) yes
checking dependency style of g++... (cached) gcc3
checking whether the C++ compiler (g++) is a C++ compiler... yes
checking iostream.h usability... yes
checking iostream.h presence... no
configure: WARNING: iostream.h: accepted by the compiler, rejected by the
preprocessor!
configure: WARNING: iostream.h: proceeding with the compiler's result
checking for iostream.h... yes
checking for ios::fmtflags... yes
checking for sstream::freeze... no
checking for set_new_handler... no
checking whether we are using the GNU C++ compiler... (cached) yes
checking whether g++ accepts -g... (cached) yes
checking dependency style of g++... (cached) gcc3
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking how to run the C preprocessor... gcc -E
checking whether ln -s works... yes
checking whether make sets $(MAKE)... (cached) yes
checking for ranlib... (cached) ranlib
checking for main in -lecap... yes
checking libecap/adapter/service.h usability... yes
checking libecap/adapter/service.h presence... yes
checking for libecap/adapter/service.h... yes
configure: creating ./config.status
config.status: creating ./Makefile
config.status: creating src/Makefile
config.status: creating src/autoconf.h
config.status: src/autoconf.h is unchanged
config.status: executing depfiles commands
root @ fhtagn /patch/tmp/squid-ecap-gzip # gmake
Making all in src
gmake[1]: Entering directory '/patch/tmp/squid-ecap-gzip/src'
gmake all-am
gmake[2]: Entering directory '/patch/tmp/squid-ecap-gzip/src'
/bin/bash ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I../src
-I/usr/sfw/include/c++/3.4.3/backward -L/usr/local/lib -MT adapter_gzip.lo -MD
-MP -MF .deps/adapter_gzip.Tpo -c -o adapter_gzip.lo adapter_gzip.cc
g++ -DHAVE_CONFIG_H -I../src -I/usr/sfw/include/c++/3.4.3/backward -L/usr/local/lib -MT adapter_gzip.lo -MD -MP -MF .deps/adapter_gzip.Tpo -c adapter_gzip.cc -fPIC -DPIC -o .libs/adapter_gzip.o
adapter_gzip.cc:65:32: error: 'Config' does not name a type
virtual void configure(const Config &cfg);
^
adapter_gzip.cc:66:34: error: 'Config' does not name a type
virtual void reconfigure(const Config &cfg);
^
adapter_gzip.cc:77:38: error: conflicting return type specified for 'virtual
libecap::adapter::Xaction*
Adapter::Service::makeXaction(libecap::host::Xaction*)'
virtual libecap::adapter::Xaction *makeXaction(libecap::host::Xaction *hostx);
^
In file included from adapter_gzip.cc:46:0:
/usr/local/include/libecap/adapter/service.h:41:30: error: overriding
'virtual libecap::adapter::Service::MadeXactionPointer
libecap::adapter::Service::makeXaction(libecap::host::Xaction*)'
virtual MadeXactionPointer makeXaction(host::Xaction *hostx) = 0;
^
adapter_gzip.cc:235:40: error: 'Config' does not name a type
void Adapter::Service::configure(const Config &) {
^
adapter_gzip.cc:239:42: error: 'Config' does not name a type
void Adapter::Service::reconfigure(const Config &) {
^
adapter_gzip.cc: In member function 'virtual libecap::adapter::Xaction*
Adapter::Service::makeXaction(libecap::host::Xaction*)':
adapter_gzip.cc:263:35: error: invalid new-expression of abstract class type
'Adapter::Xaction'
return new Adapter::Xaction(hostx);
^
adapter_gzip.cc:81:8: note: because the following virtual functions are pure
within 'Adapter::Xaction':
class Xaction: public libecap::adapter::Xaction {
^
In file included from /usr/local/include/libecap/adapter/xaction.h:7:0,
from adapter_gzip.cc:47:
/usr/local/include/libecap/common/options.h:21:22: note: virtual const
libecap::Area libecap::Options::option(const libecap::Name&) const
virtual const Area option(const Name &name) const = 0;
^
/usr/local/include/libecap/common/options.h:25:16: note: virtual void
libecap::Options::visitEachOption(libecap::NamedValueVisitor&) const
virtual void visitEachOption(NamedValueVisitor &visitor) const = 0;
^
adapter_gzip.cc: At global scope:
adapter_gzip.cc:613:33: error: 'RegisterService' is not a member of 'libecap'
static const bool Registered = (libecap::RegisterService(new Adapter::Service), true);
^
adapter_gzip.cc:613:71: error: invalid new-expression of abstract class type
'Adapter::Service'
static const bool Registered = (libecap::RegisterService(new Adapter::Service), true);
^
adapter_gzip.cc:56:8: note: because the following virtual functions are pure
within 'Adapter::Service':
class Service: public libecap::adapter::Service {
^
In file included from adapter_gzip.cc:46:0:
/usr/local/include/libecap/adapter/service.h:26:16: note: virtual void
libecap::adapter::Service::configure(const libecap::Options&)
virtual void configure(const Options &cfg) = 0;
^
/usr/local/include/libecap/adapter/service.h:27:16: note: virtual void
libecap::adapter::Service::reconfigure(const libecap::Options&)
virtual void reconfigure(const Options &cfg) = 0;
^
Makefile:318: recipe for target 'adapter_gzip.lo' failed
gmake[2]: *** [adapter_gzip.lo] Error 1
gmake[2]: Leaving directory '/patch/tmp/squid-ecap-gzip/src'
Makefile:215: recipe for target 'all' failed
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory '/patch/tmp/squid-ecap-gzip/src'
Makefile:252: recipe for target 'all-recursive' failed
gmake: *** [all-recursive] Error 1
```
Original issue reported on code.google.com by `yvoi...@gmail.com` on 16 Jan 2015 at 7:31 | 1.0 | Cannot build on Solaris 10 - ```
Libecap configured and built successfully.
But adapter is not:
root @ fhtagn /patch/tmp/squid-ecap-gzip # ./configure
'CXXFLAGS=-I/usr/sfw/include/c++/3.4.3/backward -L/usr/local/lib'
checking for a BSD-compatible install... /opt/csw/gnu/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /opt/csw/gnu/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of Makefiles... no
checking build system type... i386-pc-solaris2.10
checking host system type... i386-pc-solaris2.10
checking for style of include used by make... GNU
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking dependency style of gcc... gcc3
checking for a sed that does not truncate output... /opt/csw/gnu/sed
checking for grep that handles long lines and -e... /opt/csw/gnu/grep
checking for egrep... /opt/csw/gnu/grep -E
checking for ld used by gcc... /usr/ccs/bin/ld
checking if the linker (/usr/ccs/bin/ld) is GNU ld... no
checking for /usr/ccs/bin/ld option to reload object files... -r
checking for BSD-compatible nm... /opt/csw/gnu/nm -B
checking whether ln -s works... yes
checking how to recognize dependent libraries... pass_all
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking dependency style of g++... gcc3
checking how to run the C++ preprocessor... g++ -E
checking the maximum length of command line arguments... 786240
checking command to parse /opt/csw/gnu/nm -B output from gcc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... ranlib
checking for strip... strip
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fPIC
checking if gcc PIC flag -fPIC works... yes
checking if gcc static flag -static works... no
checking if gcc supports -c -o file.o... yes
checking whether the gcc linker (/usr/ccs/bin/ld) supports shared libraries...
yes
checking whether -lc should be explicitly linked in... no
checking dynamic linker characteristics... solaris2.10 ld.so
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
configure: creating libtool
configuring libtool for CXX support
checking for ld used by g++... /usr/ccs/bin/ld
checking if the linker (/usr/ccs/bin/ld) is GNU ld... no
checking whether the g++ linker (/usr/ccs/bin/ld) supports shared libraries...
yes
checking for g++ option to produce PIC... -fPIC
checking if g++ PIC flag -fPIC works... yes
checking if g++ static flag -static works... no
checking if g++ supports -c -o file.o... yes
checking whether the g++ linker (/usr/ccs/bin/ld) supports shared libraries...
yes
checking dynamic linker characteristics... solaris2.10 ld.so
(cached) (cached) checking how to hardcode library paths into programs...
immediate
checking for ranlib... (cached) ranlib
checking for ar... /opt/csw/gnu/ar
remembering installation prefix as /usr/local
checking whether make sets $(MAKE)... (cached) yes
checking whether we are using the GNU C++ compiler... (cached) yes
checking whether g++ accepts -g... (cached) yes
checking dependency style of g++... (cached) gcc3
checking whether the C++ compiler (g++) is a C++ compiler... yes
checking iostream.h usability... yes
checking iostream.h presence... no
configure: WARNING: iostream.h: accepted by the compiler, rejected by the
preprocessor!
configure: WARNING: iostream.h: proceeding with the compiler's result
checking for iostream.h... yes
checking for ios::fmtflags... yes
checking for sstream::freeze... no
checking for set_new_handler... no
checking whether we are using the GNU C++ compiler... (cached) yes
checking whether g++ accepts -g... (cached) yes
checking dependency style of g++... (cached) gcc3
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ISO C89... (cached) none needed
checking dependency style of gcc... (cached) gcc3
checking how to run the C preprocessor... gcc -E
checking whether ln -s works... yes
checking whether make sets $(MAKE)... (cached) yes
checking for ranlib... (cached) ranlib
checking for main in -lecap... yes
checking libecap/adapter/service.h usability... yes
checking libecap/adapter/service.h presence... yes
checking for libecap/adapter/service.h... yes
configure: creating ./config.status
config.status: creating ./Makefile
config.status: creating src/Makefile
config.status: creating src/autoconf.h
config.status: src/autoconf.h is unchanged
config.status: executing depfiles commands
root @ fhtagn /patch/tmp/squid-ecap-gzip # gmake
Making all in src
gmake[1]: Entering directory '/patch/tmp/squid-ecap-gzip/src'
gmake all-am
gmake[2]: Entering directory '/patch/tmp/squid-ecap-gzip/src'
/bin/bash ../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I../src
-I/usr/sfw/include/c++/3.4.3/backward -L/usr/local/lib -MT adapter_gzip.lo -MD
-MP -MF .deps/adapter_gzip.Tpo -c -o adapter_gzip.lo adapter_gzip.cc
g++ -DHAVE_CONFIG_H -I../src -I/usr/sfw/include/c++/3.4.3/backward -L/usr/local/lib -MT adapter_gzip.lo -MD -MP -MF .deps/adapter_gzip.Tpo -c adapter_gzip.cc -fPIC -DPIC -o .libs/adapter_gzip.o
adapter_gzip.cc:65:32: error: 'Config' does not name a type
virtual void configure(const Config &cfg);
^
adapter_gzip.cc:66:34: error: 'Config' does not name a type
virtual void reconfigure(const Config &cfg);
^
adapter_gzip.cc:77:38: error: conflicting return type specified for 'virtual
libecap::adapter::Xaction*
Adapter::Service::makeXaction(libecap::host::Xaction*)'
virtual libecap::adapter::Xaction *makeXaction(libecap::host::Xaction *hostx);
^
In file included from adapter_gzip.cc:46:0:
/usr/local/include/libecap/adapter/service.h:41:30: error: overriding
'virtual libecap::adapter::Service::MadeXactionPointer
libecap::adapter::Service::makeXaction(libecap::host::Xaction*)'
virtual MadeXactionPointer makeXaction(host::Xaction *hostx) = 0;
^
adapter_gzip.cc:235:40: error: 'Config' does not name a type
void Adapter::Service::configure(const Config &) {
^
adapter_gzip.cc:239:42: error: 'Config' does not name a type
void Adapter::Service::reconfigure(const Config &) {
^
adapter_gzip.cc: In member function 'virtual libecap::adapter::Xaction*
Adapter::Service::makeXaction(libecap::host::Xaction*)':
adapter_gzip.cc:263:35: error: invalid new-expression of abstract class type
'Adapter::Xaction'
return new Adapter::Xaction(hostx);
^
adapter_gzip.cc:81:8: note: because the following virtual functions are pure
within 'Adapter::Xaction':
class Xaction: public libecap::adapter::Xaction {
^
In file included from /usr/local/include/libecap/adapter/xaction.h:7:0,
from adapter_gzip.cc:47:
/usr/local/include/libecap/common/options.h:21:22: note: virtual const
libecap::Area libecap::Options::option(const libecap::Name&) const
virtual const Area option(const Name &name) const = 0;
^
/usr/local/include/libecap/common/options.h:25:16: note: virtual void
libecap::Options::visitEachOption(libecap::NamedValueVisitor&) const
virtual void visitEachOption(NamedValueVisitor &visitor) const = 0;
^
adapter_gzip.cc: At global scope:
adapter_gzip.cc:613:33: error: 'RegisterService' is not a member of 'libecap'
static const bool Registered = (libecap::RegisterService(new Adapter::Service), true);
^
adapter_gzip.cc:613:71: error: invalid new-expression of abstract class type
'Adapter::Service'
static const bool Registered = (libecap::RegisterService(new Adapter::Service), true);
^
adapter_gzip.cc:56:8: note: because the following virtual functions are pure
within 'Adapter::Service':
class Service: public libecap::adapter::Service {
^
In file included from adapter_gzip.cc:46:0:
/usr/local/include/libecap/adapter/service.h:26:16: note: virtual void
libecap::adapter::Service::configure(const libecap::Options&)
virtual void configure(const Options &cfg) = 0;
^
/usr/local/include/libecap/adapter/service.h:27:16: note: virtual void
libecap::adapter::Service::reconfigure(const libecap::Options&)
virtual void reconfigure(const Options &cfg) = 0;
^
Makefile:318: recipe for target 'adapter_gzip.lo' failed
gmake[2]: *** [adapter_gzip.lo] Error 1
gmake[2]: Leaving directory '/patch/tmp/squid-ecap-gzip/src'
Makefile:215: recipe for target 'all' failed
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory '/patch/tmp/squid-ecap-gzip/src'
Makefile:252: recipe for target 'all-recursive' failed
gmake: *** [all-recursive] Error 1
```
Original issue reported on code.google.com by `yvoi...@gmail.com` on 16 Jan 2015 at 7:31 | defect | cannot build on solaris libecap configured and built successfully but adapter is not root fhtagn patch tmp squid ecap gzip configure cxxflags i usr sfw include c backward l usr local lib checking for a bsd compatible install opt csw gnu install c checking whether build environment is sane yes checking for a thread safe mkdir p opt csw gnu mkdir p checking for gawk gawk checking whether make sets make yes checking whether to enable maintainer specific portions of makefiles no checking build system type pc checking host system type pc checking for style of include used by make gnu checking for gcc gcc checking whether the c compiler works yes checking for c compiler default output file name a out checking for suffix of executables checking whether we are cross compiling no checking for suffix of object files o checking whether we are using the gnu c compiler yes checking whether gcc accepts g yes checking for gcc option to accept iso none needed checking dependency style of gcc checking for a sed that does not truncate output opt csw gnu sed checking for grep that handles long lines and e opt csw gnu grep checking for egrep opt csw gnu grep e checking for ld used by gcc usr ccs bin ld checking if the linker usr ccs bin ld is gnu ld no checking for usr ccs bin ld option to reload object files r checking for bsd compatible nm opt csw gnu nm b checking whether ln s works yes checking how to recognize dependent libraries pass all checking how to run the c preprocessor gcc e checking for ansi c header files yes checking for sys types h yes checking for sys stat h yes checking for stdlib h yes checking for string h yes checking for memory h yes checking for strings h yes checking for inttypes h yes checking for stdint h yes checking for unistd h yes checking dlfcn h usability yes checking dlfcn h presence yes checking for dlfcn h yes checking for g g checking whether we are 
using the gnu c compiler yes checking whether g accepts g yes checking dependency style of g checking how to run the c preprocessor g e checking the maximum length of command line arguments checking command to parse opt csw gnu nm b output from gcc object ok checking for objdir libs checking for ar ar checking for ranlib ranlib checking for strip strip checking if gcc supports fno rtti fno exceptions no checking for gcc option to produce pic fpic checking if gcc pic flag fpic works yes checking if gcc static flag static works no checking if gcc supports c o file o yes checking whether the gcc linker usr ccs bin ld supports shared libraries yes checking whether lc should be explicitly linked in no checking dynamic linker characteristics ld so checking how to hardcode library paths into programs immediate checking whether stripping libraries is possible yes checking if libtool supports shared libraries yes checking whether to build shared libraries yes checking whether to build static libraries yes configure creating libtool configuring libtool for cxx support checking for ld used by g usr ccs bin ld checking if the linker usr ccs bin ld is gnu ld no checking whether the g linker usr ccs bin ld supports shared libraries yes checking for g option to produce pic fpic checking if g pic flag fpic works yes checking if g static flag static works no checking if g supports c o file o yes checking whether the g linker usr ccs bin ld supports shared libraries yes checking dynamic linker characteristics ld so cached cached checking how to hardcode library paths into programs immediate checking for ranlib cached ranlib checking for ar opt csw gnu ar remembering installation prefix as usr local checking whether make sets make cached yes checking whether we are using the gnu c compiler cached yes checking whether g accepts g cached yes checking dependency style of g cached checking whether the c compiler g is a c compiler yes checking iostream h usability yes checking iostream h 
presence no configure warning iostream h accepted by the compiler rejected by the preprocessor configure warning iostream h proceeding with the compiler s result checking for iostream h yes checking for ios fmtflags yes checking for sstream freeze no checking for set new handler no checking whether we are using the gnu c compiler cached yes checking whether g accepts g cached yes checking dependency style of g cached checking for gcc cached gcc checking whether we are using the gnu c compiler cached yes checking whether gcc accepts g cached yes checking for gcc option to accept iso cached none needed checking dependency style of gcc cached checking how to run the c preprocessor gcc e checking whether ln s works yes checking whether make sets make cached yes checking for ranlib cached ranlib checking for main in lecap yes checking libecap adapter service h usability yes checking libecap adapter service h presence yes checking for libecap adapter service h yes configure creating config status config status creating makefile config status creating src makefile config status creating src autoconf h config status src autoconf h is unchanged config status executing depfiles commands root fhtagn patch tmp squid ecap gzip gmake making all in src gmake entering directory patch tmp squid ecap gzip src gmake all am gmake entering directory patch tmp squid ecap gzip src bin bash libtool tag cxx mode compile g dhave config h i src i usr sfw include c backward l usr local lib mt adapter gzip lo md mp mf deps adapter gzip tpo c o adapter gzip lo adapter gzip cc g dhave config h i src i usr sfw include c backward l usr local lib mt adapter gzip lo md mp mf deps adapter gzip tpo c adapter gzip cc fpic dpic o libs adapter gzip o adapter gzip cc error config does not name a type virtual void configure const config cfg adapter gzip cc error config does not name a type virtual void reconfigure const config cfg adapter gzip cc error conflicting return type specified for virtual libecap 
adapter xaction adapter service makexaction libecap host xaction virtual libecap adapter xaction makexaction libecap host xaction hostx in file included from adapter gzip cc usr local include libecap adapter service h error overriding virtual libecap adapter service madexactionpointer libecap adapter service makexaction libecap host xaction virtual madexactionpointer makexaction host xaction hostx adapter gzip cc error config does not name a type void adapter service configure const config adapter gzip cc error config does not name a type void adapter service reconfigure const config adapter gzip cc in member function virtual libecap adapter xaction adapter service makexaction libecap host xaction adapter gzip cc error invalid new expression of abstract class type adapter xaction return new adapter xaction hostx adapter gzip cc note because the following virtual functions are pure within adapter xaction class xaction public libecap adapter xaction in file included from usr local include libecap adapter xaction h from adapter gzip cc usr local include libecap common options h note virtual const libecap area libecap options option const libecap name const virtual const area option const name name const usr local include libecap common options h note virtual void libecap options visiteachoption libecap namedvaluevisitor const virtual void visiteachoption namedvaluevisitor visitor const adapter gzip cc at global scope adapter gzip cc error registerservice is not a member of libecap static const bool registered libecap registerservice new adapter service true adapter gzip cc error invalid new expression of abstract class type adapter service static const bool registered libecap registerservice new adapter service true adapter gzip cc note because the following virtual functions are pure within adapter service class service public libecap adapter service in file included from adapter gzip cc usr local include libecap adapter service h note virtual void libecap adapter 
service configure const libecap options virtual void configure const options cfg usr local include libecap adapter service h note virtual void libecap adapter service reconfigure const libecap options virtual void reconfigure const options cfg makefile recipe for target adapter gzip lo failed gmake error gmake leaving directory patch tmp squid ecap gzip src makefile recipe for target all failed gmake error gmake leaving directory patch tmp squid ecap gzip src makefile recipe for target all recursive failed gmake error original issue reported on code google com by yvoi gmail com on jan at | 1 |
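The row above embeds a long configure/gmake transcript; when triaging such reports it helps to reduce the log to its compiler errors. A small illustrative Python filter — the sample lines are taken from the log above, while the function and regex are this sketch's own, not part of any project mentioned here:

```python
import re

# Match gcc/g++ diagnostics of the form "file:line[:col]: error: message".
ERROR_RE = re.compile(
    r"^(?P<file>[^:\s]+):(?P<line>\d+):(?:(?P<col>\d+):)?\s*error:\s*(?P<msg>.*)"
)

def error_lines(log: str):
    """Return (file, line, message) tuples for every compiler error in the log."""
    hits = []
    for raw in log.splitlines():
        m = ERROR_RE.match(raw.strip())
        if m:
            hits.append((m.group("file"), int(m.group("line")), m.group("msg")))
    return hits

# Excerpt taken from the build log in the row above.
sample = """\
adapter_gzip.cc:65:32: error: 'Config' does not name a type
 virtual void configure(const Config &cfg);
adapter_gzip.cc:613:33: error: 'RegisterService' is not a member of 'libecap'
gmake: *** [all-recursive] Error 1
"""

summary = error_lines(sample)
```

With the excerpt above, `summary` contains just the two `error:` diagnostics, dropping the context and gmake lines.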
13,451 | 2,757,492,513 | IssuesEvent | 2015-04-27 15:10:35 | Parisson/Telemeta | https://api.github.com/repos/Parisson/Telemeta | closed | Date tag defect | defect | In the Dublin Core export there are incomplete tags; for example, the closing tag of the "Date" field is incomplete.
For instance, on this record a first title field is filled in, followed by a second, empty title tag, and likewise for the date:
http://archives.crem-cnrs.fr/oai/?verb=GetRecord&identifier=:crem-cnrs:items:3829&metadataPrefix=oai_dc
Thanks | 1.0 | Date tag defect - In the Dublin Core export there are incomplete tags; for example, the closing tag of the "Date" field is incomplete.
For instance, on this record a first title field is filled in, followed by a second, empty title tag, and likewise for the date:
http://archives.crem-cnrs.fr/oai/?verb=GetRecord&identifier=:crem-cnrs:items:3829&metadataPrefix=oai_dc
Thanks | defect | date tag defect in the dublin core export there are incomplete tags for example the closing tag of the date field is incomplete for instance on this record a first title field is filled in followed by a second empty title tag and likewise for the date thanks | 1
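The report above describes an OAI-PMH Dublin Core export that emits empty duplicate elements (a filled title followed by an empty one, and likewise for the date). A sketch of how such records could be checked with Python's standard library — the sample record and function name are illustrative; only the element names come from the report:

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"

def empty_dc_fields(xml_text: str):
    """Return the local names of Dublin Core elements that are present but empty."""
    root = ET.fromstring(xml_text)
    empties = []
    for el in root.iter():
        # ElementTree expands namespaced tags to "{uri}localname".
        if el.tag.startswith("{" + DC + "}") and not (el.text or "").strip():
            empties.append(el.tag.split("}", 1)[1])
    return empties

# Made-up record reproducing the symptom: filled element, then an empty twin.
sample = (
    '<record xmlns:dc="http://purl.org/dc/elements/1.1/">'
    "<dc:title>Some title</dc:title>"
    "<dc:title/>"
    "<dc:date>1932</dc:date>"
    "<dc:date/>"
    "</record>"
)

problems = empty_dc_fields(sample)
```

Running this against each harvested record would surface exactly the empty `title`/`date` twins the reporter saw.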
66,759 | 20,620,417,154 | IssuesEvent | 2022-03-07 16:53:37 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Pending messages appear in the threads list | T-Defect X-Regression S-Minor O-Occasional A-Threads | ### Steps to reproduce
1. Open the threads list
2. Send a message in the main timeline
### Outcome
#### What did you expect?
The message should not appear in the threads list since it is not a part of any threads.
#### What happened instead?
The message briefly appeared at the bottom of the threads list. (regression of https://github.com/vector-im/element-web/issues/21146)
### Operating system
NixOS unstable
### Browser information
Firefox 97.0.1
### URL for webapp
develop.element.io
### Application version
Element version: 31702e9a26a1-react-9379be0189b8-js-2ce1e7e6ef11 Olm version: 3.2.8
### Homeserver
Synapse 1.54.0rc1
### Will you send logs?
No | 1.0 | Pending messages appear in the threads list - ### Steps to reproduce
1. Open the threads list
2. Send a message in the main timeline
### Outcome
#### What did you expect?
The message should not appear in the threads list since it is not a part of any threads.
#### What happened instead?
The message briefly appeared at the bottom of the threads list. (regression of https://github.com/vector-im/element-web/issues/21146)
### Operating system
NixOS unstable
### Browser information
Firefox 97.0.1
### URL for webapp
develop.element.io
### Application version
Element version: 31702e9a26a1-react-9379be0189b8-js-2ce1e7e6ef11 Olm version: 3.2.8
### Homeserver
Synapse 1.54.0rc1
### Will you send logs?
No | defect | pending messages appear in the threads list steps to reproduce open the threads list send a message in the main timeline outcome what did you expect the message should not appear in the threads list since it is not a part of any threads what happened instead the message briefly appeared at the bottom of the threads list regression of operating system nixos unstable browser information firefox url for webapp develop element io application version element version react js olm version homeserver synapse will you send logs no | 1 |
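The expectation stated above — a freshly sent main-timeline message carries no thread relation and therefore must not appear in the thread panel — can be sketched as a filter over Matrix-style event JSON. The `rel_type` value `m.thread` is the real Matrix thread relation; the rest of the event shape below is simplified for illustration:

```python
def thread_panel_events(events):
    """Keep only events that relate to some thread root via rel_type m.thread."""
    return [
        ev for ev in events
        if ev.get("content", {}).get("m.relates_to", {}).get("rel_type") == "m.thread"
    ]

# A threaded reply and a pending main-timeline message (no relation at all).
timeline = [
    {"event_id": "$threaded",
     "content": {"m.relates_to": {"rel_type": "m.thread", "event_id": "$root"}}},
    {"event_id": "$pending-main-timeline", "content": {"body": "hi"}},
]

panel = thread_panel_events(timeline)
```

Applying this predicate before rendering keeps local echoes of ordinary messages out of the list.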
2,491 | 2,607,905,109 | IssuesEvent | 2015-02-26 00:15:16 | chrsmithdemos/zen-coding | https://api.github.com/repos/chrsmithdemos/zen-coding | closed | ID variable doesn't work | auto-migrated Milestone-0.7 Priority-Medium Type-Defect | ```
Investigate this problem:
I add the following snippet to the file my_zen_setting.js:
'djb': '{% block ${id} %}\n\t${child}|\n{% endblock %}'
I.e., the same one as in the video.
With this, the string djb expands into:
{{{
{% block ${id} %}
{% endblock %}
}}}
Attempting to expand djb#content leads to
nothing at all.
```
-----
Original issue reported on code.google.com by `serge....@gmail.com` on 2 Jun 2010 at 8:38 | 1.0 | ID variable doesn't work - ```
Investigate this problem:
I add the following snippet to the file my_zen_setting.js:
'djb': '{% block ${id} %}\n\t${child}|\n{% endblock %}'
I.e., the same one as in the video.
With this, the string djb expands into:
{{{
{% block ${id} %}
{% endblock %}
}}}
Attempting to expand djb#content leads to
nothing at all.
```
-----
Original issue reported on code.google.com by `serge....@gmail.com` on 2 Jun 2010 at 8:38 | defect | id variable doesn t work investigate this problem i add the following snippet to the file my zen setting js djb block id n t child n endblock i e the same one as in the video with this the string djb expands into block id endblock attempting to expand djb content leads to nothing at all original issue reported on code google com by serge gmail com on jun at | 1
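The expansion behaviour the reporter expected from the snippet above can be illustrated with a toy re-implementation of the placeholder semantics (in zen-coding's snippet syntax, ${id} receives the part after `#`, ${child} the wrapped content, and `|` marks the caret). This is only a sketch, not zen-coding's actual engine:

```python
# The snippet definition from the report, as a plain Python string
# (the \n and \t escapes become a real newline and tab here).
SNIPPETS = {"djb": "{% block ${id} %}\n\t${child}|\n{% endblock %}"}

def expand(abbr: str, child: str = "") -> str:
    """Expand 'name#id' using the snippet table; drop the caret marker."""
    name, _, elem_id = abbr.partition("#")
    body = SNIPPETS[name]
    body = body.replace("${id}", elem_id)
    body = body.replace("${child}", child)
    return body.replace("|", "")  # caret marker removed for plain-text output

result = expand("djb#content")
```

With placeholder substitution working, `djb#content` yields a `{% block content %}` pair, which is what the issue says failed to happen.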
33,536 | 7,155,161,887 | IssuesEvent | 2018-01-26 11:31:22 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Support precision on TIME and TIMESTAMP data types | C: Functionality P: Medium R: Fixed T: Defect | For example, in MySQL 5.7 you can use `datetime(6)`, which fails in the jOOQ parser | 1.0 | Support precision on TIME and TIMESTAMP data types - For example, in MySQL 5.7 you can use `datetime(6)`, which fails in the jOOQ parser | defect | support precision on time and timestamp data types for example in mysql you can use datetime which fails in jooq parser | 1
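The row above is about a SQL type specifier that carries a fractional-seconds precision, e.g. MySQL's DATETIME(6). Accepting an optional precision on temporal types can be sketched as a small grammar fragment; this regex is illustrative and is not jOOQ's actual parser:

```python
import re

# Temporal base type, optionally followed by "(digits)" for precision.
TEMPORAL_RE = re.compile(r"(?i)^(time|timestamp|datetime)(?:\((\d+)\))?$")

def parse_temporal(type_spec: str):
    """Return (base_type, precision-or-None), or raise ValueError."""
    m = TEMPORAL_RE.match(type_spec.strip())
    if not m:
        raise ValueError(f"not a temporal type: {type_spec!r}")
    base, prec = m.group(1).upper(), m.group(2)
    return base, (int(prec) if prec is not None else None)

parsed = parse_temporal("datetime(6)")
```

The point of the sketch is simply that the precision suffix must be part of the type grammar, not rejected as trailing garbage.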
22,264 | 3,619,707,938 | IssuesEvent | 2016-02-08 17:00:21 | miracle091/transmission-remote-dotnet | https://api.github.com/repos/miracle091/transmission-remote-dotnet | closed | Torrent file is not deleted | Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Check "Delete torrent when adding" option
2. Add file by double clicking, drag-n-drop, "+" sign on toolbar, or quick add
from file menu
What is the expected output? What do you see instead?
Expect the torrent file on the local system to be deleted. It is not, and there
is no warning (checked the error log with and without the show-debug-log option enabled).
What version of the products are you using?
OS: Vista
Transmission: 3.24 (build 0)
Remote: 2.13 on DD-WRT (rev XXX)
Please provide any additional information below. Feel free to
attach screenshots or sample code which demonstrates the issue being
described.
```
Original issue reported on code.google.com by `paulroun...@gmail.com` on 20 Mar 2011 at 5:48 | 1.0 | Torrent file is not deleted - ```
What steps will reproduce the problem?
1. Check "Delete torrent when adding" option
2. Add file by double clicking, drag-n-drop, "+" sign on toolbar, or quick add
from file menu
What is the expected output? What do you see instead?
Expect the torrent file on the local system to be deleted. It is not, and there
is no warning (checked the error log with and without the show-debug-log option enabled).
What version of the products are you using?
OS: Vista
Transmission: 3.24 (build 0)
Remote: 2.13 on DD-WRT (rev XXX)
Please provide any additional information below. Feel free to
attach screenshots or sample code which demonstrates the issue being
described.
```
Original issue reported on code.google.com by `paulroun...@gmail.com` on 20 Mar 2011 at 5:48 | defect | torrent file is not deleted what steps will reproduce the problem check delete torrent when adding option add file by double clicking drag n drop sign on toolbar or quick add from file menu what is the expected output what do you see instead expect file on torrent file on local system to be deleted it is not and there is no warning check error log with and without show debug log enabled what version of the products are you using os vista transmission build remote on dd wrt rev xxx please provide any additional information below feel free to attach screenshots or sample code which demonstrates the issue being described original issue reported on code google com by paulroun gmail com on mar at | 1 |
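The option discussed above promises that the local .torrent file is deleted once it has been handed to the client, and the reporter also notes the failure was silent. A dependency-free sketch of that contract — delete after a successful add, and surface (rather than swallow) a failed deletion; `add_and_delete` and the `add_to_client` callback are stand-ins, not this project's API:

```python
import os
import tempfile

def add_and_delete(path: str, add_to_client):
    """Add the torrent, then delete the local file; return warnings instead of staying silent."""
    warnings = []
    add_to_client(path)          # only delete once the client has accepted it
    try:
        os.remove(path)
    except OSError as exc:
        warnings.append(f"could not delete {path}: {exc}")
    return warnings

# Demo on a throwaway file standing in for a real .torrent.
with tempfile.NamedTemporaryFile(suffix=".torrent", delete=False) as fh:
    torrent_path = fh.name
added = []
warns = add_and_delete(torrent_path, added.append)
still_there = os.path.exists(torrent_path)
```

Returning (or logging) the warning list addresses the "no warning" half of the report even when deletion fails.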
159,848 | 12,493,727,217 | IssuesEvent | 2020-06-01 09:48:27 | ForgottenGlory/Living-Skyrim-2 | https://api.github.com/repos/ForgottenGlory/Living-Skyrim-2 | opened | Install Error Due to Symbol in Rayek's End Filenames | bug need testers | **LS Version**
LS2 Beta 1
**Describe the bug**
During install, the installer may fail due to Rayek's End having a degree symbol in one of its filenames.
**To Reproduce**
Have a Japanese version of Windows, likely. (I run in English and so does the OS, but the mesh named with a degree symbol, Meshes\Oaristys\Clutter\Mirror・nif, shows a Japanese dot rather than the correct Mirror°.nif as in the file.)
**Expected behavior**
It doesn't do that.
**Additional context**
Suggest nixing mod since there were comments from you about liking the place but not being happy there's no price and it's just a free house. There's plenty of housing in the list as is and this mod likely causes issues for anyone on the other side of the globe with Asian fonts installed as part of their OS. (Assumption.) | 1.0 | Install Error Due to Symbol in Rayek's End Filenames - **LS Version**
LS2 Beta 1
**Describe the bug**
During install, the installer may fail due to Rayek's End having a degree symbol in one of its filenames.
**To Reproduce**
Have a Japanese version of Windows, likely. (I run in English and so does the OS, but the mesh named with a degree symbol, Meshes\Oaristys\Clutter\Mirror・nif, shows a Japanese dot rather than the correct Mirror°.nif as in the file.)
**Expected behavior**
It doesn't do that.
**Additional context**
Suggest nixing mod since there were comments from you about liking the place but not being happy there's no price and it's just a free house. There's plenty of housing in the list as is and this mod likely causes issues for anyone on the other side of the globe with Asian fonts installed as part of their OS. (Assumption.) | non_defect | install error due to symbol in rayek s end filenames ls version beta describe the bug during install the installer may fail due to rayek s end having a degree symbol in one of its filenames to reproduce have a japanese version of windows likely i run in english and so does the os but the degree symbol named mesh in meshes oaristys clutter mirror・nif has a japanese dot rather than the correct mirror° nif as in the file expected behavior it doesn t do that additional context suggest nixing mod since there were comments from you about liking the place but not being happy there s no price and it s just a free house there s plenty of housing in the list as is and this mod likely causes issues for anyone on the other side of the globe with asian fonts installed as part of their os assumption | 0 |
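The failure above comes down to a single non-ASCII character in a shipped filename (Mirror°.nif) being mangled into a Japanese middle dot on some systems. A pre-flight check that flags archive entries whose names leave ASCII is one way to catch this class of problem early; the paths below come from the report, and the function itself is illustrative:

```python
import unicodedata

def risky_names(paths):
    """Return (path, Unicode names of non-ASCII characters) for risky entries."""
    flagged = []
    for p in paths:
        bad = [c for c in p if ord(c) > 127]
        if bad:
            flagged.append((p, [unicodedata.name(c, "UNKNOWN") for c in bad]))
    return flagged

entries = [
    "Meshes/Oaristys/Clutter/Mirror°.nif",
    "Meshes/Oaristys/Clutter/Plain.nif",
]
flags = risky_names(entries)
```

Such a check would have flagged the DEGREE SIGN in the mesh name before it ever hit a codepage-sensitive installer.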
35,727 | 7,800,102,362 | IssuesEvent | 2018-06-09 04:50:11 | StrikeNP/trac_test | https://api.github.com/repos/StrikeNP/trac_test | closed | Document DISNAN in CLUBB README (Trac #493) | Migrated from Trac clubb_src defect dschanen@uwm.edu | Hugh Morrison compiled CLUBB today, and the one snag he encountered was that his version of LAPACK didn't have disnan and sisnan.
Could you please add a sentence or two to CLUBB's main README on how to change the compile flags for older versions of LAPACK? We just need to explain to new users what's described in, e.g., r5525.
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/493
```json
{
"status": "closed",
"changetime": "2012-07-02T20:22:34",
"description": "Hugh Morrison compiled CLUBB today, and the one snag he encountered was that his version of LAPACK didn't have disnan and sisnan.\n\nCould you please add a sentence or two to CLUBB's main README on how to change the compile flags for older versions of LAPACK? We just need to explain to new users what's described in, e.g., r5525.",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1341260554958391",
"component": "clubb_src",
"summary": "Document DISNAN in CLUBB README",
"priority": "trivial",
"keywords": "",
"time": "2012-02-01T21:24:19",
"milestone": "",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
| 1.0 | Document DISNAN in CLUBB README (Trac #493) - Hugh Morrison compiled CLUBB today, and the one snag he encountered was that his version of LAPACK didn't have disnan and sisnan.
Could you please add a sentence or two to CLUBB's main README on how to change the compile flags for older versions of LAPACK? We just need to explain to new users what's described in, e.g., r5525.
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/493
```json
{
"status": "closed",
"changetime": "2012-07-02T20:22:34",
"description": "Hugh Morrison compiled CLUBB today, and the one snag he encountered was that his version of LAPACK didn't have disnan and sisnan.\n\nCould you please add a sentence or two to CLUBB's main README on how to change the compile flags for older versions of LAPACK? We just need to explain to new users what's described in, e.g., r5525.",
"reporter": "vlarson@uwm.edu",
"cc": "vlarson@uwm.edu",
"resolution": "Verified by V. Larson",
"_ts": "1341260554958391",
"component": "clubb_src",
"summary": "Document DISNAN in CLUBB README",
"priority": "trivial",
"keywords": "",
"time": "2012-02-01T21:24:19",
"milestone": "",
"owner": "dschanen@uwm.edu",
"type": "defect"
}
```
| defect | document disnan in clubb readme trac hugh morrison compiled clubb today and the one snag he encountered was that his version of lapack didn t have disnan and sisnan could you please add a sentence or two to clubb s main readme on how to change the compile flags for older versions of lapack we just need to explain to new users what s described in e g attachments migrated from json status closed changetime description hugh morrison compiled clubb today and the one snag he encountered was that his version of lapack didn t have disnan and sisnan n ncould you please add a sentence or two to clubb s main readme on how to change the compile flags for older versions of lapack we just need to explain to new users what s described in e g reporter vlarson uwm edu cc vlarson uwm edu resolution verified by v larson ts component clubb src summary document disnan in clubb readme priority trivial keywords time milestone owner dschanen uwm edu type defect | 1 |
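For context on the row above: LAPACK's DISNAN/SISNAN simply report whether a double/single-precision value is NaN, and codes targeting older LAPACKs without these routines usually fall back on the self-comparison trick, since under IEEE 754 NaN is the only value unequal to itself. The same predicate, sketched in Python:

```python
def isnan_portable(x: float) -> bool:
    """True iff x is NaN: under IEEE 754, x != x holds only for NaN."""
    return x != x

nan = float("nan")
```

This is the semantic equivalent of what a `-DNO_LAPACK_ISNAN`-style compile flag would switch a Fortran code over to when DISNAN is unavailable (the flag name here is hypothetical).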
56,662 | 15,266,713,356 | IssuesEvent | 2021-02-22 09:09:58 | SAP/fundamental-ngx | https://api.github.com/repos/SAP/fundamental-ngx | opened | Bug: (Core) Slider is showing Negative value in tooltip wrong when RTL applied | Defect Hunting bug | #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
On switching to RTL, a negative value is shown as "10-", but it should be "-10".
#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
Angular 10
#### If this is a bug, please provide steps for reproducing it.
https://fundamental-ngx.netlify.app/fundamental-ngx#/core/slider

#### Please provide relevant source code if applicable.
#### Is there anything else we should know?
chrome on Mac | 1.0 | Bug: (Core) Slider is showing Negative value in tooltip wrong when RTL applied - #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
On switching to RTL, a negative value is shown as "10-", but it should be "-10".
#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
Angular 10
#### If this is a bug, please provide steps for reproducing it.
https://fundamental-ngx.netlify.app/fundamental-ngx#/core/slider

#### Please provide relevant source code if applicable.
#### Is there anything else we should know?
chrome on Mac | defect | bug core slider is showing negative value in tooltip wrong when rtl applied is this a bug enhancement or feature request bug briefly describe your proposal on switching to rtl negative value is hown as but it should be which versions of angular and fundamental library for angular are affected if this is a feature request use current version angular if this is a bug please provide steps for reproducing it please provide relevant source code if applicable is there anything else we should know chrome on mac | 1 |
103,058 | 8,876,402,599 | IssuesEvent | 2019-01-12 14:44:56 | frickler24/RechteDB | https://api.github.com/repos/frickler24/RechteDB | reopened | Die User aus der ersten Stufe fehlenm in der aktuellen DB | Test & quality bug help wanted | Konflikt zwischen Mainfrix- und f4s-DB-Inhalten | 1.0 | Die User aus der ersten Stufe fehlenm in der aktuellen DB - Konflikt zwischen Mainfrix- und f4s-DB-Inhalten | non_defect | die user aus der ersten stufe fehlenm in der aktuellen db konflikt zwischen mainfrix und db inhalten | 0 |
734,755 | 25,361,760,488 | IssuesEvent | 2022-11-20 23:57:05 | containrrr/watchtower | https://api.github.com/repos/containrrr/watchtower | opened | Compatibility issue with docker compose abort-on-container-exit flag | Type: Bug Priority: Medium Status: Available | ### Describe the bug
Watchtower causes an issue for me in combination with the `--abort-on-container-exit` flag for dockers `compose up` subcommand.
Docker compose removed the container on its own after the container was stopped. This leads to an error when watchtower attempts to remove the container.
`Error: No such container: 425173...`
The error is thrown by this [ContainerRemove](https://github.com/containrrr/watchtower/blob/d744c3488667d6cc4bb8258bb4b351bfd80e1602/pkg/container/client.go#L194) call.
### Steps to reproduce
1. Run container with `docker compose up --abort-on-container-exit`
2. Update image with watchtower
### Expected behavior
I expect for watchtower to be able to skip the container removal, either automatically if the container is already gone or with an explicit option.
### Screenshots
_No response_
### Environment
- Debian 11 amd64
- Docker 20.10.21
### Your logs
```text
...
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=debug msg="Found a match"
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=debug msg="No pull needed. Skipping image."
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=debug msg="No new images found for /traefik"
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=debug msg="This is the watchtower container /watchtower"
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=info msg="Stopping /website (425173d9c76c) with SIGTERM"
Nov 20 23:09:38 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:38+01:00" level=debug msg="Removing container 425173d9c76c"
Nov 20 23:09:38 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:38+01:00" level=error msg="Error: No such container: 425173d9c76c6f794674ac37f4999ce0817967ec6819c2ba21c61572bed4f383"
Nov 20 23:09:38 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:38+01:00" level=info msg="Session done" Failed=1 Scanned=5 Updated=0 notify=no
Nov 20 23:33:25 hub.lol systemd[1]: Stopping watchtower (Docker Compose)...
```
```
### Additional context
_No response_ | 1.0 | Compatibility issue with docker compose abort-on-container-exit flag - ### Describe the bug
Watchtower causes an issue for me in combination with the `--abort-on-container-exit` flag for dockers `compose up` subcommand.
Docker compose removed the container on its own after the container was stopped. This leads to an error when watchtower attempts to remove the container.
`Error: No such container: 425173...`
The error is thrown by this [ContainerRemove](https://github.com/containrrr/watchtower/blob/d744c3488667d6cc4bb8258bb4b351bfd80e1602/pkg/container/client.go#L194) call.
### Steps to reproduce
1. Run container with `docker compose up --abort-on-container-exit`
2. Update image with watchtower
### Expected behavior
I expect for watchtower to be able to skip the container removal, either automatically if the container is already gone or with an explicit option.
### Screenshots
_No response_
### Environment
- Debian 11 amd64
- Docker 20.10.21
### Your logs
```text
...
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=debug msg="Found a match"
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=debug msg="No pull needed. Skipping image."
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=debug msg="No new images found for /traefik"
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=debug msg="This is the watchtower container /watchtower"
Nov 20 23:09:37 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:37+01:00" level=info msg="Stopping /website (425173d9c76c) with SIGTERM"
Nov 20 23:09:38 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:38+01:00" level=debug msg="Removing container 425173d9c76c"
Nov 20 23:09:38 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:38+01:00" level=error msg="Error: No such container: 425173d9c76c6f794674ac37f4999ce0817967ec6819c2ba21c61572bed4f383"
Nov 20 23:09:38 hub.lol docker[17234]: watchtower | time="2022-11-20T23:09:38+01:00" level=info msg="Session done" Failed=1 Scanned=5 Updated=0 notify=no
Nov 20 23:33:25 hub.lol systemd[1]: Stopping watchtower (Docker Compose)...
```
```
### Additional context
_No response_ | non_defect | compatibility issue with docker compose abort on container exit flag describe the bug watchtower causes an issue for me in combination with the abort on container exit flag for dockers compose up subcommand docker compose removed the container on its own after the container was stopped this leads to an error when watchtower attempts to remove the container error no such container the error is thrown by this call steps to reproduce run container with docker compose up abort on container exit update image with watchtower expected behavior i expect for watchtower to be able to skip the container removal either automatically if the container is already gone or with an explicit option screenshots no response environment debian docker your logs text nov hub lol docker watchtower time level debug msg found a match nov hub lol docker watchtower time level debug msg no pull needed skipping image nov hub lol docker watchtower time level debug msg no new images found for traefik nov hub lol docker watchtower time level debug msg this is the watchtower container watchtower nov hub lol docker watchtower time level info msg stopping website with sigterm nov hub lol docker watchtower time level debug msg removing container nov hub lol docker watchtower time level error msg error no such container nov hub lol docker watchtower time level info msg session done failed scanned updated notify no nov hub lol systemd stopping watchtower docker compose additional context no response | 0 |
53,560 | 13,261,913,999 | IssuesEvent | 2020-08-20 20:45:54 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | Hessian can not be evaluated using Python in photonics-service (Trac #1684) | Migrated from Trac combo reconstruction defect | Calling the Hessian from python gives random values for the hessian matrix (for both the Quantiles and MeanExpectedCharge). The Gradient works.
Only sometimes (maybe once out of 10 times), the hessian matrix seems to be what is really returned by the corresponding c function, given the fact that it is similar in these cases. However, it is not symmetric, which speaks for a wrong matrix from the beginning.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1684">https://code.icecube.wisc.edu/projects/icecube/ticket/1684</a>, reported by icecubeand owned by andrii.terliuk</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:10",
"_ts": "1550067190995086",
"description": "Calling the Hessian from python gives random values for the hessian matrix (for both the Quantiles and MeanExpectedCharge). The Gradient works. \n\nOnly sometimes (maybe once out of 10 times), the hessian matrix seems to be what is really returned by the corresponding c function, given the fact that it is similar in these cases. However, it is not symmetric, which speaks for a wrong matrix from the beginning.",
"reporter": "icecube",
"cc": "gluesenkamp",
"resolution": "fixed",
"time": "2016-05-03T11:31:13",
"component": "combo reconstruction",
"summary": "Hessian can not be evaluated using Python in photonics-service",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "andrii.terliuk",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Hessian can not be evaluated using Python in photonics-service (Trac #1684) - Calling the Hessian from python gives random values for the hessian matrix (for both the Quantiles and MeanExpectedCharge). The Gradient works.
Only sometimes (maybe once out of 10 times), the hessian matrix seems to be what is really returned by the corresponding c function, given the fact that it is similar in these cases. However, it is not symmetric, which speaks for a wrong matrix from the beginning.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1684">https://code.icecube.wisc.edu/projects/icecube/ticket/1684</a>, reported by icecubeand owned by andrii.terliuk</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:10",
"_ts": "1550067190995086",
"description": "Calling the Hessian from python gives random values for the hessian matrix (for both the Quantiles and MeanExpectedCharge). The Gradient works. \n\nOnly sometimes (maybe once out of 10 times), the hessian matrix seems to be what is really returned by the corresponding c function, given the fact that it is similar in these cases. However, it is not symmetric, which speaks for a wrong matrix from the beginning.",
"reporter": "icecube",
"cc": "gluesenkamp",
"resolution": "fixed",
"time": "2016-05-03T11:31:13",
"component": "combo reconstruction",
"summary": "Hessian can not be evaluated using Python in photonics-service",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "andrii.terliuk",
"type": "defect"
}
```
</p>
</details>
| defect | hessian can not be evaluated using python in photonics service trac calling the hessian from python gives random values for the hessian matrix for both the quantiles and meanexpectedcharge the gradient works only sometimes maybe once out of times the hessian matrix seems to be what is really returned by the corresponding c function given the fact that it is similar in these cases however it is not symmetric which speaks for a wrong matrix from the beginning migrated from json status closed changetime ts description calling the hessian from python gives random values for the hessian matrix for both the quantiles and meanexpectedcharge the gradient works n nonly sometimes maybe once out of times the hessian matrix seems to be what is really returned by the corresponding c function given the fact that it is similar in these cases however it is not symmetric which speaks for a wrong matrix from the beginning reporter icecube cc gluesenkamp resolution fixed time component combo reconstruction summary hessian can not be evaluated using python in photonics service priority normal keywords milestone owner andrii terliuk type defect | 1 |
105,256 | 4,233,310,585 | IssuesEvent | 2016-07-05 07:16:01 | DigitalCampus/oppia-mobile-android | https://api.github.com/repos/DigitalCampus/oppia-mobile-android | closed | ListPreference requires an entries array and an entryValues array. | bug High priority | Where
Native Method
Short Stacktrace
0 java.lang.IllegalStateException: ListPreference requires an entries array and an entryValues array.
1 at android.preference.ListPreference.onPrepareDialogBuilder(ListPreference.java:240)
2 at android.preference.DialogPreference.showDialog(DialogPreference.java:307)
3 at android.preference.DialogPreference.onClick(DialogPreference.java:278)
4 at android.preference.Preference.performClick(Preference.java:1052)
5 at android.preference.PreferenceScreen.onItemClick(PreferenceScreen.java:229)
6 at android.widget.AdapterView.performItemClick(AdapterView.java:308)
7 at android.widget.AbsListView.performItemClick(AbsListView.java:1513)
8 at android.widget.AbsListView$PerformClick.run(AbsListView.java:3471)
9 at android.widget.AbsListView$3.run(AbsListView.java:4838)
10 at android.os.Handler.handleCallback(Handler.java:733)
11 at android.os.Handler.dispatchMessage(Handler.java:95)
12 at android.os.Looper.loop(Looper.java:146)
13 at android.app.ActivityThread.main(ActivityThread.java:5641)
14 at java.lang.reflect.Method.invokeNative(Native Method)
15 at java.lang.reflect.Method.invoke(Method.java:515)
16 at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1288)
17 at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1104)
18 at dalvik.system.NativeStart.main(Native Method)
Tag
log
Affected App Versions
5.5
Affected OS Versions
4.4.2 | 1.0 | ListPreference requires an entries array and an entryValues array. - Where
Native Method
Short Stacktrace
0 java.lang.IllegalStateException: ListPreference requires an entries array and an entryValues array.
1 at android.preference.ListPreference.onPrepareDialogBuilder(ListPreference.java:240)
2 at android.preference.DialogPreference.showDialog(DialogPreference.java:307)
3 at android.preference.DialogPreference.onClick(DialogPreference.java:278)
4 at android.preference.Preference.performClick(Preference.java:1052)
5 at android.preference.PreferenceScreen.onItemClick(PreferenceScreen.java:229)
6 at android.widget.AdapterView.performItemClick(AdapterView.java:308)
7 at android.widget.AbsListView.performItemClick(AbsListView.java:1513)
8 at android.widget.AbsListView$PerformClick.run(AbsListView.java:3471)
9 at android.widget.AbsListView$3.run(AbsListView.java:4838)
10 at android.os.Handler.handleCallback(Handler.java:733)
11 at android.os.Handler.dispatchMessage(Handler.java:95)
12 at android.os.Looper.loop(Looper.java:146)
13 at android.app.ActivityThread.main(ActivityThread.java:5641)
14 at java.lang.reflect.Method.invokeNative(Native Method)
15 at java.lang.reflect.Method.invoke(Method.java:515)
16 at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1288)
17 at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1104)
18 at dalvik.system.NativeStart.main(Native Method)
Tag
log
Affected App Versions
5.5
Affected OS Versions
4.4.2 | non_defect | listpreference requires an entries array and an entryvalues array where native method short stacktrace java lang illegalstateexception listpreference requires an entries array and an entryvalues array at android preference listpreference onpreparedialogbuilder listpreference java at android preference dialogpreference showdialog dialogpreference java at android preference dialogpreference onclick dialogpreference java at android preference preference performclick preference java at android preference preferencescreen onitemclick preferencescreen java at android widget adapterview performitemclick adapterview java at android widget abslistview performitemclick abslistview java at android widget abslistview performclick run abslistview java at android widget abslistview run abslistview java at android os handler handlecallback handler java at android os handler dispatchmessage handler java at android os looper loop looper java at android app activitythread main activitythread java at java lang reflect method invokenative native method at java lang reflect method invoke method java at com android internal os zygoteinit methodandargscaller run zygoteinit java at com android internal os zygoteinit main zygoteinit java at dalvik system nativestart main native method tag log affected app versions affected os versions | 0 |
69,448 | 22,355,767,862 | IssuesEvent | 2022-06-15 15:30:59 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Known issues for VIRTUAL and STORED client side computed columns | T: Defect C: Functionality E: Professional Edition E: Enterprise Edition | jOOQ 3.17 introduced a big feature called client side computed columns (https://github.com/jOOQ/jOOQ/issues/9879). There are a few open issues including:
### Tasks
The following changes are needed:
- [x] Code generation
- [ ] The combination with `name`, `converter`, or `binding` must be tested as well
- [ ] The combinations above that aren't yet supported should be logged
- [x] Runtime
- [ ] Settings to enable / disable the feature at runtime
- [x] Tests
- [ ] A `VIRTUAL` computed column that depends on a `STORED` computed column
- [ ] `VIRTUAL` computed columns contained in embeddables
- [ ] with `replacesFields = false`
- [ ] with `replacesFields = true`
- [ ] `STORED` computed columns
- [ ] `INSERT` with aliased table
- [ ] Reorder computed columns again to prevent interdependency side effects, if any (in MySQL)
- [ ] `INSERT .. DEFAULT VALUES`
- [ ] `UPDATE`
- [ ] `UPDATE .. FROM`
- [ ] `MERGE`
- [ ] This is extremely hard to emulate correctly. I guess we can't ship support for the feature in `MERGE` statements for now, unless the `MERGE` statement is generated by jOOQ's internals (e.g. `ON CONFLICT` emulation)
- [ ] Combine client side computed columns with readonly columns
- [ ] Combine with `Settings.readonlyInsert` and other settings
- [ ] Combine with embeddables
- [ ] The SQL transformation should have a simple and sophisticated version
- [ ] Simple is for `Param` types (i.e. constants) or deterministic expressions based on `Param` types ("effectively" constants), in case the query shouldn't be changed drastically
- [ ] `STORED` computed columns contained in embeddables
- [ ] with `replacesFields = false`
- [ ] with `replacesFields = true`
### Related work / follow up work:
- https://github.com/jOOQ/jOOQ/issues/13411
- https://github.com/jOOQ/jOOQ/issues/13418
### Caveats:
- [ ] Some queries cannot enforce computed columns in all dialects, if any given syntax cannot be emulated. This requires
- [ ] Sufficiently clear error handling
- [ ] Documentation
- [ ] Tests that the computation either works, or fails with an error. It's important we don't silently execute the query with a wrong result
- [x] MySQL `UPDATE` doesn't run the `SET` clause atomically, see https://twitter.com/lukaseder/status/1507019364800688134, https://stackoverflow.com/q/37649/521799. This has 2 implications:
- [x] Computed columns must be calculated lexically after all user-provided `SET` clauses
- [ ] We have to resolve the inter dependency tree of computational expressions to emulate standard atomic computation behaviour. Until that is available, the behaviour in case of interdependent expressions is undefined.
- [ ] `Field.getDataType()` currently produces the underlying data type from the schema, not the one from the computational expression, which may differ (e.g. have different nullability or a specific row type in case of `MULTISET` or `ROW`). We should make the correct type available statically.
| 1.0 | Known issues for VIRTUAL and STORED client side computed columns - jOOQ 3.17 introduced a big feature called client side computed columns (https://github.com/jOOQ/jOOQ/issues/9879). There are a few open issues including:
### Tasks
The following changes are needed:
- [x] Code generation
- [ ] The combination with `name`, `converter`, or `binding` must be tested as well
- [ ] The combinations above that aren't yet supported should be logged
- [x] Runtime
- [ ] Settings to enable / disable the feature at runtime
- [x] Tests
- [ ] A `VIRTUAL` computed column that depends on a `STORED` computed column
- [ ] `VIRTUAL` computed columns contained in embeddables
- [ ] with `replacesFields = false`
- [ ] with `replacesFields = true`
- [ ] `STORED` computed columns
- [ ] `INSERT` with aliased table
- [ ] Reorder computed columns again to prevent interdependency side effects, if any (in MySQL)
- [ ] `INSERT .. DEFAULT VALUES`
- [ ] `UPDATE`
- [ ] `UPDATE .. FROM`
- [ ] `MERGE`
- [ ] This is extremely hard to emulate correctly. I guess we can't ship support for the feature in `MERGE` statements for now, unless the `MERGE` statement is generated by jOOQ's internals (e.g. `ON CONFLICT` emulation)
- [ ] Combine client side computed columns with readonly columns
- [ ] Combine with `Settings.readonlyInsert` and other settings
- [ ] Combine with embeddables
- [ ] The SQL transformation should have a simple and sophisticated version
- [ ] Simple is for `Param` types (i.e. constants) or deterministic expressions based on `Param` types ("effectively" constants), in case the query shouldn't be changed drastically
- [ ] `STORED` computed columns contained in embeddables
- [ ] with `replacesFields = false`
- [ ] with `replacesFields = true`
### Related work / follow up work:
- https://github.com/jOOQ/jOOQ/issues/13411
- https://github.com/jOOQ/jOOQ/issues/13418
### Caveats:
- [ ] Some queries cannot enforce computed columns in all dialects, if any given syntax cannot be emulated. This requires
- [ ] Sufficiently clear error handling
- [ ] Documentation
- [ ] Tests that the computation either works, or fails with an error. It's important we don't silently execute the query with a wrong result
- [x] MySQL `UPDATE` doesn't run the `SET` clause atomically, see https://twitter.com/lukaseder/status/1507019364800688134, https://stackoverflow.com/q/37649/521799. This has 2 implications:
- [x] Computed columns must be calculated lexically after all user-provided `SET` clauses
- [ ] We have to resolve the inter dependency tree of computational expressions to emulate standard atomic computation behaviour. Until that is available, the behaviour in case of interdependent expressions is undefined.
- [ ] `Field.getDataType()` currently produces the underlying data type from the schema, not the one from the computational expression, which may differ (e.g. have different nullability or a specific row type in case of `MULTISET` or `ROW`). We should make the correct type available statically.
| defect | known issues for virtual and stored client side computed columns jooq introduced a big feature called client side computed columns there are a few open issues including tasks the following changes are needed code generation the combination with name converter or binding must be tested as well the combinations above that aren t yet supported should be logged runtime settings to enable disable the feature at runtime tests a virtual computed column that depends on a stored computed column virtual computed columns contained in embeddables with replacesfields false with replacesfields true stored computed columns insert with aliased table reorder computed columns again to prevent interdependency side effects if any in mysql insert default values update update from merge this is extremely hard to emulate correctly i guess we can t ship support for the feature in merge statements for now unless the merge statement is generated by jooq s internals e g on conflict emulation combine client side computed columns with readonly columns combine with settings readonlyinsert and other settings combine with embeddables the sql transformation should have a simple and sophisticated version simple is for param types i e constants or deterministic expressions based on param types effectively constants in case the query shouldn t be changed drastically stored computed columns contained in embeddables with replacesfields false with replacesfields true related work follow up work caveats some queries cannot enforce computed columns in all dialects if any given syntax cannot be emulated this requires sufficiently clear error handling documentation tests that the computation either works or fails with an error it s important we don t silently execute the query with a wrong result mysql update doesn t run the set clause atomically see this has implications computed columns must be calculated lexically after all user provided set clauses we have to resolve the inter dependency tree of computational expressions to emulate standard atomic computation behaviour until that is available the behaviour in case of interdependent expressions is undefined field getdatatype currently produces the underlying data type from the schema not the one from the computational expression which may differ e g have different nullability or a specific row type in case of multiset or row we should make the correct type available statically | 1 |
16,676 | 2,928,579,317 | IssuesEvent | 2015-06-27 09:45:49 | CocoaPods/Xcodeproj | https://api.github.com/repos/CocoaPods/Xcodeproj | reopened | Figure out how to correctly initialise Xcode on 7 beta 2 and onwards | d2:moderate s2:confirmed t2:defect | Initialisation of Xcode frameworks fails right now for 7 beta 2, see #275. We have modified Xcodeproj to more gracefully handle this case, but it will now use XML writing on that version. We should figure out how to properly initialise Xcode so that ASCII plist writing can be done. | 1.0 | Figure out how to correctly initialise Xcode on 7 beta 2 and onwards - Initialisation of Xcode frameworks fails right now for 7 beta 2, see #275. We have modified Xcodeproj to more gracefully handle this case, but it will now use XML writing on that version. We should figure out how to properly initialise Xcode so that ASCII plist writing can be done. | defect | figure out how to correctly initialise xcode on beta and onwards initialisation of xcode frameworks fails right now for beta see we have modified xcodeproj to more gracefully handle this case but it will now use xml writing on that version we should figure out how to properly initialise xcode so that ascii plist writing can be done | 1 |
77,667 | 27,101,933,878 | IssuesEvent | 2023-02-15 09:18:54 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | End user cannot configure widgets | T-Defect | ### Steps to reproduce
The end user is unable to configure widgets by using Element, except in a limited capacity by using /devtools, /addwidget, or using the integration manager. The integration manager is only usable for a small number of 1p widgets, and the other methods are too difficult for a normal user. Addwidget does not allow changing the name or icon of the widget, and only works for rooms, not for account scoped widgets.
This means that it's impossible for a widget developer to ship a widget to end users who are using Element. As a result, widget development is effectively futile, so long as most users use Element.
### Outcome
Widget developers should have a way to ship widgets that end users can install in rooms and their account.
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
Yes | 1.0 | End user cannot configure widgets - ### Steps to reproduce
The end user is unable to configure widgets by using Element, except in a limited capacity by using /devtools, /addwidget, or using the integration manager. The integration manager is only usable for a small number of 1p widgets, and the other methods are too difficult for a normal user. Addwidget does not allow changing the name or icon of the widget, and only works for rooms, not for account scoped widgets.
This means that it's impossible for a widget developer to ship a widget to end users who are using Element. As a result, widget development is effectively futile, so long as most users use Element.
### Outcome
Widget developers should have a way to ship widgets that end users can install in rooms and their account.
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
Yes | defect | end user cannot configure widgets steps to reproduce the end user is unable to configure widgets by using element except in a limited capacity by using devtools addwidget or using the integration manager the integration manager is only usable for a small number of widgets and the other methods are too difficult for a normal user addwidget does not allow changing the name or icon of the widget and only works for rooms not for account scoped widgets this means that it s impossible for a widget developer to ship a widget to end users who are using element as a result widget development is effectively futile so long as most users use element outcome widget developers should have a way to ship widgets that end users can install in rooms and their account operating system no response browser information no response url for webapp no response application version no response homeserver no response will you send logs yes | 1 |
165,517 | 20,592,225,203 | IssuesEvent | 2022-03-05 01:22:43 | n-devs/full-stack-react-profile | https://api.github.com/repos/n-devs/full-stack-react-profile | opened | CVE-2022-0691 (High) detected in url-parse-1.4.7.tgz | security vulnerability | ## CVE-2022-0691 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /full-stack-react-profile/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.7.2.tgz (Root Library)
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.9.
<p>Publish Date: 2022-02-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0691>CVE-2022-0691</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691</a></p>
<p>Release Date: 2022-02-21</p>
<p>Fix Resolution (url-parse): 1.5.9</p>
<p>Direct dependency fix Resolution (webpack-dev-server): 3.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-0691 (High) detected in url-parse-1.4.7.tgz - ## CVE-2022-0691 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /full-stack-react-profile/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.7.2.tgz (Root Library)
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in NPM url-parse prior to 1.5.9.
<p>Publish Date: 2022-02-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0691>CVE-2022-0691</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0691</a></p>
<p>Release Date: 2022-02-21</p>
<p>Fix Resolution (url-parse): 1.5.9</p>
<p>Direct dependency fix Resolution (webpack-dev-server): 3.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in url parse tgz cve high severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file full stack react profile package json path to vulnerable library node modules url parse package json dependency hierarchy webpack dev server tgz root library sockjs client tgz x url parse tgz vulnerable library vulnerability details authorization bypass through user controlled key in npm url parse prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse direct dependency fix resolution webpack dev server step up your open source security game with whitesource | 0 |
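A note on the vulnerability class named in the record above ("authorization bypass through user-controlled key" in a URL parser): the danger is that an access-control check and the actual request can disagree about a URL's hostname. The sketch below is not the url-parse bug itself — it uses Python's `urllib.parse` and a deliberately naive splitter purely to illustrate how such a disagreement enables a bypass:

```python
from urllib.parse import urlparse

def naive_host(url):
    # Buggy "parser": assumes the host is whatever follows the last '@'.
    return url.split("@")[-1].split("/")[0]

def parsed_host(url):
    # A spec-conformant parser ends the authority at '/', '?' or '#'.
    return urlparse(url).hostname

url = "http://attacker.com#@trusted.com/"
print(naive_host(url))   # trusted.com  (allow-list check passes)
print(parsed_host(url))  # attacker.com (where a real client would connect)
```

Any authorization check that trusts the naive answer approves `trusted.com` while the request actually targets `attacker.com`; upgrading to a fixed parser (here, url-parse 1.5.9) removes that disagreement.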
226,348 | 18,013,303,987 | IssuesEvent | 2021-09-16 11:09:17 | elastic/kibana | https://api.github.com/repos/elastic/kibana | reopened | Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/formula·ts - lens app lens formula should insert single quotes and escape when needed to create valid KQL | blocker Team:VisEditors failed-test v8.0.0 skipped-test Feature:Lens | A test failed on a tracked branch
```
Error: expected 'count(kql=\'Men\\\'s Clothing \'count\n(kql=Men\'s Clothing)' to equal 'count(kql=\'Men\\\'s Clothing\')'
at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/node_modules/@kbn/expect/expect.js:100:11)
at Assertion.equal (/dev/shm/workspace/parallel/3/kibana/node_modules/@kbn/expect/expect.js:227:8)
at Context.<anonymous> (test/functional/apps/lens/formula.ts:88:49)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at Object.apply (/dev/shm/workspace/parallel/3/kibana/node_modules/@kbn/test/src/functional_test_runner/lib/mocha/wrap_function.js:73:16)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/14676/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/formula·ts","test.name":"lens app lens formula should insert single quotes and escape when needed to create valid KQL","test.failCount":7}} --> | 2.0 | Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/formula·ts - lens app lens formula should insert single quotes and escape when needed to create valid KQL - A test failed on a tracked branch
```
Error: expected 'count(kql=\'Men\\\'s Clothing \'count\n(kql=Men\'s Clothing)' to equal 'count(kql=\'Men\\\'s Clothing\')'
at Assertion.assert (/dev/shm/workspace/parallel/3/kibana/node_modules/@kbn/expect/expect.js:100:11)
at Assertion.equal (/dev/shm/workspace/parallel/3/kibana/node_modules/@kbn/expect/expect.js:227:8)
at Context.<anonymous> (test/functional/apps/lens/formula.ts:88:49)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at Object.apply (/dev/shm/workspace/parallel/3/kibana/node_modules/@kbn/test/src/functional_test_runner/lib/mocha/wrap_function.js:73:16)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/14676/)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/lens/formula·ts","test.name":"lens app lens formula should insert single quotes and escape when needed to create valid KQL","test.failCount":7}} --> | non_defect | failing test chrome x pack ui functional tests x pack test functional apps lens formula·ts lens app lens formula should insert single quotes and escape when needed to create valid kql a test failed on a tracked branch error expected count kql men s clothing count n kql men s clothing to equal count kql men s clothing at assertion assert dev shm workspace parallel kibana node modules kbn expect expect js at assertion equal dev shm workspace parallel kibana node modules kbn expect expect js at context test functional apps lens formula ts at runmicrotasks at processticksandrejections internal process task queues js at object apply dev shm workspace parallel kibana node modules kbn test src functional test runner lib mocha wrap function js first failure | 0 |
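The escaping behavior this failing test asserts — wrap the KQL value in single quotes and backslash-escape embedded quotes — can be sketched in a few lines. Python is used here only for illustration; the real implementation lives in Lens's TypeScript formula code:

```python
def quote_kql_value(value: str) -> str:
    # Escape backslashes first so we don't double-escape the quote escapes.
    escaped = value.replace("\\", "\\\\").replace("'", "\\'")
    return f"'{escaped}'"

def count_formula(value: str) -> str:
    return f"count(kql={quote_kql_value(value)})"

print(count_formula("Men's Clothing"))  # count(kql='Men\'s Clothing')
```

The expected string in the assertion above, `count(kql='Men\'s Clothing')`, is exactly this output; the failure shows the editor inserting a second, unescaped copy of the clause instead.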
243,622 | 26,286,969,068 | IssuesEvent | 2023-01-07 23:45:59 | BrianMcDonaldWS/keymaster | https://api.github.com/repos/BrianMcDonaldWS/keymaster | opened | CVE-2021-31525 (Medium) detected in github.com/golang/net/http/httpguts-e086a090c8fdb9982880f0fb6e3db47af1856533 | security vulnerability | ## CVE-2021-31525 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/net/http/httpguts-e086a090c8fdb9982880f0fb6e3db47af1856533</b></summary>
<p>[mirror] Go supplementary network libraries</p>
<p>
Dependency Hierarchy:
- github.com/cloudflare/cfssl/revoke-v1.4.1 (Root Library)
- github.com/cloudflare/cfssl/helpers-v1.4.1
- github.com/google/certificate-transparency-go-v1.1.0
- github.com/etcd-io/etcd-v3.4.5
- github.com/etcd-io/etcd-v3.4.5
- github.com/etcd-io/etcd/etcdserver/api/v2v3-v3.4.5
- github.com/etcd-io/etcd-v3.4.5
- github.com/etcd-io/etcd/clientv3/balancer/picker-v3.4.5
- github.com/grpc/grpc-go-v1.28.0
- github.com/grpc/grpc-go-v1.28.0
- github.com/grpc/grpc-go-v1.28.0
- github.com/golang/net/http2-e086a090c8fdb9982880f0fb6e3db47af1856533
- :x: **github.com/golang/net/http/httpguts-e086a090c8fdb9982880f0fb6e3db47af1856533** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
net/http in Go before 1.15.12 and 1.16.x before 1.16.4 allows remote attackers to cause a denial of service (panic) via a large header to ReadRequest or ReadResponse. Server, Transport, and Client can each be affected in some configurations.
<p>Publish Date: 2021-05-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-31525>CVE-2021-31525</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1958341">https://bugzilla.redhat.com/show_bug.cgi?id=1958341</a></p>
<p>Release Date: 2021-05-27</p>
<p>Fix Resolution: golang - v1.15.12,v1.16.4,v1.17.0</p>
</p>
</details>
<p></p>
| True | CVE-2021-31525 (Medium) detected in github.com/golang/net/http/httpguts-e086a090c8fdb9982880f0fb6e3db47af1856533 - ## CVE-2021-31525 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/golang/net/http/httpguts-e086a090c8fdb9982880f0fb6e3db47af1856533</b></summary>
<p>[mirror] Go supplementary network libraries</p>
<p>
Dependency Hierarchy:
- github.com/cloudflare/cfssl/revoke-v1.4.1 (Root Library)
- github.com/cloudflare/cfssl/helpers-v1.4.1
- github.com/google/certificate-transparency-go-v1.1.0
- github.com/etcd-io/etcd-v3.4.5
- github.com/etcd-io/etcd-v3.4.5
- github.com/etcd-io/etcd/etcdserver/api/v2v3-v3.4.5
- github.com/etcd-io/etcd-v3.4.5
- github.com/etcd-io/etcd/clientv3/balancer/picker-v3.4.5
- github.com/grpc/grpc-go-v1.28.0
- github.com/grpc/grpc-go-v1.28.0
- github.com/grpc/grpc-go-v1.28.0
- github.com/golang/net/http2-e086a090c8fdb9982880f0fb6e3db47af1856533
- :x: **github.com/golang/net/http/httpguts-e086a090c8fdb9982880f0fb6e3db47af1856533** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
net/http in Go before 1.15.12 and 1.16.x before 1.16.4 allows remote attackers to cause a denial of service (panic) via a large header to ReadRequest or ReadResponse. Server, Transport, and Client can each be affected in some configurations.
<p>Publish Date: 2021-05-27
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-31525>CVE-2021-31525</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1958341">https://bugzilla.redhat.com/show_bug.cgi?id=1958341</a></p>
<p>Release Date: 2021-05-27</p>
<p>Fix Resolution: golang - v1.15.12,v1.16.4,v1.17.0</p>
</p>
</details>
<p></p>
| non_defect | cve medium detected in github com golang net http httpguts cve medium severity vulnerability vulnerable library github com golang net http httpguts go supplementary network libraries dependency hierarchy github com cloudflare cfssl revoke root library github com cloudflare cfssl helpers github com google certificate transparency go github com etcd io etcd github com etcd io etcd github com etcd io etcd etcdserver api github com etcd io etcd github com etcd io etcd balancer picker github com grpc grpc go github com grpc grpc go github com grpc grpc go github com golang net x github com golang net http httpguts vulnerable library vulnerability details net http in go before and x before allows remote attackers to cause a denial of service panic via a large header to readrequest or readresponse server transport and client can each be affected in some configurations publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution golang | 0 |
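The mitigation pattern behind this fix — bound the header block's size before doing any parsing work — can be sketched as below. The 1 MB cap mirrors Go's documented `http.DefaultMaxHeaderBytes`; the parsing step is a toy stand-in, not Go's actual implementation:

```python
# Cap chosen to mirror Go net/http's DefaultMaxHeaderBytes (1 MB).
MAX_HEADER_BYTES = 1 << 20

def read_header_block(raw_bytes: bytes):
    """Reject oversized header blocks before any per-line parsing."""
    if len(raw_bytes) > MAX_HEADER_BYTES:
        raise ValueError("header block exceeds limit")
    # Only split into header lines once the size is known to be bounded.
    return [line for line in raw_bytes.split(b"\r\n") if line]
```

Checking the size up front keeps an attacker-supplied giant header from driving memory use (or a panic) inside the parser itself.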
39,298 | 2,853,218,830 | IssuesEvent | 2015-06-01 17:26:59 | GoogleCloudPlatform/kubernetes | https://api.github.com/repos/GoogleCloudPlatform/kubernetes | closed | Insert a layer in DNS hierarchy, tune search path | priority/P1 team/cluster | Instead of svc.ns.kubernetes.local, which leaves no room for non-service name stuff, let's insert literal "services" - svc.ns.services.kubernetes.local. Adjust the search path to ns.services.kubernetes.local, services.kubernetes.local, kubernetes.local
Not so important on its own, but hard to fix once we go to 1.0, and easy. | 1.0 | Insert a layer in DNS hierarchy, tune search path - Instead of svc.ns.kubernetes.local, which leaves no room for non-service name stuff, let's insert literal "services" - svc.ns.services.kubernetes.local. Adjust the search path to ns.services.kubernetes.local, services.kubernetes.local, kubernetes.local
Not so important on its own, but hard to fix once we go to 1.0, and easy. | non_defect | insert a layer in dns hierarchy tune search path instead of svc ns kubernetes local which leaves no room for non service name stuff let s insert literal services svc ns services kubernetes local adjust the search path to ns services kubernetes local services kubernetes local kubernetes local not so important on its own but hard to fix once we go to and easy | 0 |
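The search-path mechanics this issue relies on can be sketched as a resolver-style name expansion (simplified, glibc-style; real resolvers also honor `ndots` settings, and Kubernetes ultimately settled on `svc.cluster.local` naming — the domains below follow the proposal in the issue):

```python
def candidate_fqdns(name, search_domains, ndots=1):
    """Names the resolver will try, in order, for a given lookup."""
    if name.endswith("."):          # already fully qualified: no expansion
        return [name]
    expanded = [f"{name}.{d}." for d in search_domains]
    if name.count(".") >= ndots:    # "dotty" names try the absolute form first
        return [name + "."] + expanded
    return expanded + [name + "."]

search = ["ns.services.kubernetes.local",
          "services.kubernetes.local",
          "kubernetes.local"]
for fqdn in candidate_fqdns("myservice", search):
    print(fqdn)
```

With that search path, a bare `myservice` resolves through the namespace's service domain first, then the cluster-wide service domain — which is why inserting the literal `services` label is cheap before 1.0 but breaking after.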
25,935 | 4,532,502,262 | IssuesEvent | 2016-09-08 08:20:22 | buildo/react-components | https://api.github.com/repos/buildo/react-components | closed | LoadingSpinner style should be called loadingSpinner.scss | breaking defect in review | ## description
`LoadingSpinner` style is called `style.scss` instead of `loadingSpinner.scss`
## how to reproduce
- {optional: describe steps to reproduce defect}
## specs
rename it to `loadingSpinner.scss`
## misc
{optional: other useful info}
| 1.0 | LoadingSpinner style should be called loadingSpinner.scss - ## description
`LoadingSpinner` style is called `style.scss` instead of `loadingSpinner.scss`
## how to reproduce
- {optional: describe steps to reproduce defect}
## specs
rename it to `loadingSpinner.scss`
## misc
{optional: other useful info}
| defect | loadingspinner style should be called loadingspinner scss description loadingspinner style is called style scss instead of loadingspinner scss how to reproduce optional describe steps to reproduce defect specs rename it to loadingspinner scss misc optional other useful info | 1 |
5,586 | 20,160,087,706 | IssuesEvent | 2022-02-09 20:30:24 | Azure/missionlz | https://api.github.com/repos/Azure/missionlz | opened | Use AZ CLI to clean up Terraform nightly deployments | core dev-automation :zap: | ## Benefit/Result/Outcome
So that nightly deployments of Terraform run with fewer spurious errors.
## Description
The nightly Terraform build is often failing due to errors running the `terraform destroy` command. The errors are due to either issues in the Azure RM provider for Terraform, Terraform itself, or Azure. The errors are not related to Mission Landing Zone code, so rather than try to fix them we can use AZ CLI commands to clean up the build artifacts rather than `terraform destroy`.
Refer to the [Bicep cleanup instructions](https://github.com/Azure/missionlz/blob/main/docs/deployment-guide-bicep.md#cleanup) for how to delete all MLZ resources using the AZ CLI.
This is related to #609
## Acceptance Criteria
- The nightly Terraform deployments use AZ CLI commands to clean up deployed resources.
| 1.0 | Use AZ CLI to clean up Terraform nightly deployments - ## Benefit/Result/Outcome
So that nightly deployments of Terraform run with fewer spurious errors.
## Description
The nightly Terraform build is often failing due to errors running the `terraform destroy` command. The errors are due to either issues in the Azure RM provider for Terraform, Terraform itself, or Azure. The errors are not related to Mission Landing Zone code, so rather than try to fix them we can use AZ CLI commands to clean up the build artifacts rather than `terraform destroy`.
Refer to the [Bicep cleanup instructions](https://github.com/Azure/missionlz/blob/main/docs/deployment-guide-bicep.md#cleanup) for how to delete all MLZ resources using the AZ CLI.
This is related to #609
## Acceptance Criteria
- The nightly Terraform deployments use AZ CLI commands to clean up deployed resources.
| non_defect | use az cli to clean up terraform nightly deployments benefit result outcome so that nightly deployments of terraform run with fewer spurious errors description the nightly terraform build is often failing due to errors running the terraform destroy command the errors are due to either issues in the azure rm provider for terraform terraform itself or azure the errors are not related to mission landing zone code so rather than try to fix them we can use az cli commands to clean up the build artifacts rather than terraform destroy refer to the for how to delete all mlz resources using the az cli this is related to acceptance criteria the nightly terraform deployments use az cli commands to clean up deployed resources | 0 |
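One low-risk way to wire this up is to generate the `az group delete` invocations from a small script and only ever target resource groups matching the deployment's naming prefix. The `mlz-` prefix below is a placeholder — substitute whatever naming convention the nightly job actually uses:

```python
import subprocess

MLZ_PREFIX = "mlz-"  # hypothetical prefix; match the real naming scheme

def cleanup_commands(resource_groups):
    """Build one `az group delete` call per MLZ-prefixed resource group."""
    return [
        ["az", "group", "delete", "--name", rg, "--yes", "--no-wait"]
        for rg in sorted(resource_groups)
        if rg.startswith(MLZ_PREFIX)
    ]

def run_cleanup(resource_groups, dry_run=True):
    for cmd in cleanup_commands(resource_groups):
        if dry_run:
            print(" ".join(cmd))      # show what would be deleted
        else:
            subprocess.run(cmd, check=True)
```

`--yes` skips the interactive confirmation and `--no-wait` returns without blocking on the long-running deletion, so the nightly job is not hostage to Terraform's destroy-time provider errors.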
379,126 | 11,216,019,854 | IssuesEvent | 2020-01-07 04:38:17 | openmsupply/mobile | https://api.github.com/repos/openmsupply/mobile | opened | Quantities not updating | Docs: not needed Effort: small Module: dispensary Priority: high | ## Describe the bug
When navigating to the previous tag from the summary and editing quantities, the quantities are not reflected in the summary of a prescription
### To reproduce
Dispensing development
### Expected behaviour
These values should be updated
### Proposed Solution
N/A
### Version and device info
Dispensing feature development
### Additional context
N/A
| 1.0 | Quantities not updating - ## Describe the bug
When navigating to the previous tag from the summary and editing quantities, the quantities are not reflected in the summary of a prescription
### To reproduce
Dispensing development
### Expected behaviour
These values should be updated
### Proposed Solution
N/A
### Version and device info
Dispensing feature development
### Additional context
N/A
| non_defect | quantities not updating describe the bug when navigating to the previous tag from the summary and editing quantities the quantities are not reflected in the summary of a prescription to reproduce dispensing development expected behaviour these values should be updated proposed solution n a version and device info dispensing feature development additional context n a | 0 |
79,596 | 28,439,917,085 | IssuesEvent | 2023-04-15 19:35:10 | thomasleplus/tinkerit | https://api.github.com/repos/thomasleplus/tinkerit | closed | temperature code | Priority-Medium auto-migrated Type-Defect | ```
What steps will reproduce the problem?
1. upload the sketch
What is the expected output? What do you see instead?
a serial output at about 21000 (21°C), but I get something at 249400
(249,4°C?!)
What version of the product are you using? On what operating system?
Arduino Duemilanove with a ATMEGA 328p-pu
Please provide any additional information below.
Don't know what to add here :)
```
Original issue reported on code.google.com by `thopie...@gmail.com` on 25 Aug 2009 at 3:40
| 1.0 | temperature code - ```
What steps will reproduce the problem?
1. upload the sketch
What is the expected output? What do you see instead?
a serial output at about 21000 (21°C), but I get something at 249400
(249,4°C?!)
What version of the product are you using? On what operating system?
Arduino Duemilanove with a ATMEGA 328p-pu
Please provide any additional information below.
Don't know what to add here :)
```
Original issue reported on code.google.com by `thopie...@gmail.com` on 25 Aug 2009 at 3:40
| defect | temperature code what steps will reproduce the problem upload the scretch what is the expected output what do you see instead a serial output at about °c but i get something at °c what version of the product are you using on what operating system arduino duemilanove with a atmega pu please provide any additional information below don t know what to add here original issue reported on code google com by thopie gmail com on aug at | 1 |
487,647 | 14,049,830,188 | IssuesEvent | 2020-11-02 10:49:59 | opencrvs/opencrvs-core | https://api.github.com/repos/opencrvs/opencrvs-core | closed | Clicking Review for a Duplicate application is not opening the application | Priority: high 👹Bug | **Describe the bug**
When a duplicate application is flagged, clicking the flagged application goes to a loading screen and then immediately returns to the Ready for Review page
**To Reproduce**
Steps to reproduce the behaviour:
1. Go to 'Ready for Review'
2. Click on any application with the duplicate flag
3. The page loads back to Ready for review page
**Expected behaviour**
The application should be opened and an option to Keep or Duplicate should be shown
**Screenshots**


**Desktop:**
- OS: Windows 10
- Browser: Chrome
- Version 84.0.4147.105 | 1.0 | Clicking Review for a Duplicate application is not opening the application - **Describe the bug**
When a duplicate application is flagged, clicking the flagged application goes to a loading screen and then immediately returns to the Ready for Review page
**To Reproduce**
Steps to reproduce the behaviour:
1. Go to 'Ready for Review'
2. Click on any application with the duplicate flag
3. The page loads back to Ready for review page
**Expected behaviour**
The application should be opened and an option to Keep or Duplicate should be shown
**Screenshots**


**Desktop:**
- OS: Windows 10
- Browser: Chrome
- Version 84.0.4147.105 | non_defect | clicking review for a duplicate application is not opening the application describe the bug when a duplicate application is flagged clicking the flagged application goes to a loading screen and then immediately returns to the ready for review page to reproduce steps to reproduce the behaviour go to ready for review click on any application with the duplicate flag the page loads back to ready for review page expected behaviour the application should be opened and an option to keep or duplicate should be shown screenshots desktop os windows browser chrome version | 0 |
74,133 | 24,962,771,900 | IssuesEvent | 2022-11-01 16:49:07 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Wrong transformation for transformPatternsTrivialPredicates when DISTINCT predicate operand is NULL | T: Defect C: Functionality P: Medium E: Professional Edition E: Enterprise Edition | The `QOM.IsDistinctFrom` and `QOM.IsNotDistinctFrom` predicates extend `CompareCondition`, which is the only check done for `Settings.transformPatternsTrivialPredicates` to decide whether a condition (e.g. `a = null`) is reduced to a `nullCondition()`.
This means the following wrong transformation is made:
```sql
-- Input
SELECT a IS DISTINCT FROM NULL, a IS NOT DISTINCT FROM NULL
-- Output
SELECT NULL, NULL
``` | 1.0 | Wrong transformation for transformPatternsTrivialPredicates when DISTINCT predicate operand is NULL - The `QOM.IsDistinctFrom` and `QOM.IsNotDistinctFrom` predicates extend `CompareCondition`, which is the only check done for `Settings.transformPatternsTrivialPredicates` to decide whether a condition (e.g. `a = null`) is reduced to a `nullCondition()`.
This means the following wrong transformation is made:
```sql
-- Input
SELECT a IS DISTINCT FROM NULL, a IS NOT DISTINCT FROM NULL
-- Output
SELECT NULL, NULL
``` | defect | wrong transformation for transformpatternstrivialpredicates when distinct predicate operand is null the qom isdistinctfrom and qom isnotdistinctfrom predicates extend comparecondition which is the only check done for settings transformpatternstrivialpredicates to decide whether a condition e g a null is reduced to a nullcondition this means the following wrong transformation is made sql input select a is distinct from null a is not distinct from null output select null null | 1 |
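Why the fold above is wrong: `IS [NOT] DISTINCT FROM` uses null-safe two-valued logic and never yields NULL, so the correct reductions are `a IS DISTINCT FROM NULL ⇒ a IS NOT NULL` and `a IS NOT DISTINCT FROM NULL ⇒ a IS NULL`. A sketch of the semantics, with Python's `None` standing in for SQL NULL:

```python
def is_distinct_from(a, b):
    """SQL's null-safe inequality: always TRUE or FALSE, never NULL."""
    if a is None or b is None:
        # Distinct unless both operands are NULL.
        return not (a is None and b is None)
    return a != b

assert is_distinct_from(7, None) is True      # folds to: a IS NOT NULL
assert is_distinct_from(None, None) is False  # folds to: a IS NULL
assert not is_distinct_from(7, 7)
```

Because the predicate can never evaluate to NULL, replacing it with a `nullCondition()` whenever one operand is NULL — the `CompareCondition` shortcut — changes query results.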
209,070 | 7,164,897,845 | IssuesEvent | 2018-01-29 12:51:34 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | Chart axis title rotate is missing from MVC Wrappers | Bug C: Chart Kendo1 Priority 1 S: Wrappers (ASP.NET MVC) SEV: Low | ### Bug report
Ticket with ID 1146349.
### Reproduction of the problem
ChartAxisTitleBuilder.cs is missing the ChartAxisTitle.cs Rotation property.
### Expected/desired behavior
[valueAxis.title.rotation](https://docs.telerik.com/kendo-ui/api/javascript/dataviz/ui/chart#configuration-valueAxis.title.rotation) should be made available to the UI for ASP.NET MVC wrappers.
We should be able to call it like this:
```
.ValueAxis(axis => axis
.Logarithmic()
.Title(t =>
{
t.Position(ChartAxisTitlePosition.Center);
t.Text("comeon");
t.Rotation(60);
})
```
### Workaround
When the Kendo UI Chart is initialized, set the option through JavaScript and refresh the chart:
```
$(document).ready(function () {
var chart = $("#chart").data("kendoChart");
chart.options.valueAxis.title.rotation = 60;
chart.refresh();
});
```
### Environment
* **Kendo UI version:** 2017.3.1026 | 1.0 | Chart axis title rotate is missing from MVC Wrappers - ### Bug report
Ticket with ID 1146349.
### Reproduction of the problem
ChartAxisTitleBuilder.cs is missing the ChartAxisTitle.cs Rotation property.
### Expected/desired behavior
[valueAxis.title.rotation](https://docs.telerik.com/kendo-ui/api/javascript/dataviz/ui/chart#configuration-valueAxis.title.rotation) should be made available to the UI for ASP.NET MVC wrappers.
We should be able to call it like this:
```
.ValueAxis(axis => axis
.Logarithmic()
.Title(t =>
{
t.Position(ChartAxisTitlePosition.Center);
t.Text("comeon");
t.Rotation(60);
})
```
### Workaround
When the Kendo UI Chart is initialized, set the option through JavaScript and refresh the chart:
```
$(document).ready(function () {
var chart = $("#chart").data("kendoChart");
chart.options.valueAxis.title.rotation = 60;
chart.refresh();
});
```
### Environment
* **Kendo UI version:** 2017.3.1026 | non_defect | chart axis title rotate is missing from mvc wrappers bug report ticket with id reproduction of the problem chartaxistitlebuilder cs is missing the chartaxistitle cs rotation property expected desired behavior should be made available to the ui for asp net mvc wrappers we should be able to call it like this valueaxis axis axis logarithmic title t t position chartaxistitleposition center t text comeon t rotation workaround when the kendo ui chart is initialized set the option through javascript and refresh the chart document ready function var chart chart data kendochart chart options valueaxis title rotation chart refresh environment kendo ui version | 0 |
202,984 | 15,326,087,349 | IssuesEvent | 2021-02-26 02:48:08 | nucypher/nucypher | https://api.github.com/repos/nucypher/nucypher | closed | (Complete) Contract mocking for Tests | Enhancement Test 🔍 | The vast majority of test runtime consists of blockchain transactions, namely deployment, and time travel.
Is there a way we can isolate a subset of our tests to use a mocked contract deployment?
eth-tester offers compatibility with a `MockBackend`, which can be used in lieu of `PyEVMBackend`, however no computations are performed. (<https://github.com/ethereum/eth-tester#mockbackend>) | 1.0 | (Complete) Contract mocking for Tests - The vast majority of test runtime consists of blockchain transactions, namely deployment, and time travel.
Is there a way we can isolate a subset of our tests to use a mocked contract deployment?
eth-tester offers compatibility with a `MockBackend`, which can be used in lieu of `PyEVMBackend`, however no computations are performed. (<https://github.com/ethereum/eth-tester#mockbackend>) | non_defect | complete contract mocking for tests the vast majority of test runtime consists of blockchain transactions namely deployment and time travel is there a way we can isolate a subset of our tests to use a mocked contract deployment eth tester offers compatibility with a mockbackend which can be used in lieu of pyevmbackend however no computations are performed | 0 |
9,413 | 2,615,148,274 | IssuesEvent | 2015-03-01 06:24:40 | chrsmith/html5rocks | https://api.github.com/repos/chrsmith/html5rocks | closed | Can't use on iPhone | auto-migrated Milestone-X Priority-Medium Slides Type-Defect | ```
Requires keyboard. You should allow touch-screen access.
```
Original issue reported on code.google.com by `albel...@gmail.com` on 27 Oct 2010 at 12:13 | 1.0 | Can't use on iPhone - ```
Requires keyboard. You should allow touch-screen access.
```
Original issue reported on code.google.com by `albel...@gmail.com` on 27 Oct 2010 at 12:13 | defect | can t use on iphone requires keyboard you should allow touch screen access original issue reported on code google com by albel gmail com on oct at | 1 |
492,363 | 14,201,146,024 | IssuesEvent | 2020-11-16 07:08:31 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | closed | TypeScript definition for Grid/GridOptions does not have "search" | Enhancement Kendo1 Priority 2 SEV: Medium TypeScript | ### Current behavior
Unable to use the "search" property on `kendo.ui.GridOptions` in TypeScript.
```
interface GridOptions {
name?: string;
allowCopy?: boolean | GridAllowCopy;
altRowTemplate?: string|Function;
autoBind?: boolean;
columnResizeHandleWidth?: number;
columns?: GridColumn[];
columnMenu?: boolean | GridColumnMenu;
dataSource?: any|any|kendo.data.DataSource;
detailTemplate?: string|Function;
editable?: boolean | "inline" | "incell" | "popup" | GridEditable;
excel?: GridExcel;
filterable?: boolean | GridFilterable;
groupable?: boolean | GridGroupable;
height?: number|string;
messages?: GridMessages;
mobile?: boolean|string;
navigatable?: boolean;
noRecords?: boolean | GridNoRecords;
pageable?: boolean | GridPageable;
pdf?: GridPdf;
persistSelection?: boolean;
reorderable?: boolean;
resizable?: boolean;
rowTemplate?: string|Function;
scrollable?: boolean | GridScrollable;
selectable?: boolean|string;
sortable?: boolean | GridSortable;
toolbar?: string | Function | (string | GridToolbarItem)[];
```
### Expected/desired behavior
Let me set `search` without TS throwing an error.
### Environment
* **Kendo UI version:** 2020.1.219
* **@types/kendo-ui:** 2020.1.0
* **jQuery version:** 3.4.1
* **Browser:** all
| 1.0 | TypeScript definition for Grid/GridOptions does not have "search" - ### Current behavior
Unable to use the "search" property on `kendo.ui.GridOptions` in TypeScript.
```
interface GridOptions {
name?: string;
allowCopy?: boolean | GridAllowCopy;
altRowTemplate?: string|Function;
autoBind?: boolean;
columnResizeHandleWidth?: number;
columns?: GridColumn[];
columnMenu?: boolean | GridColumnMenu;
dataSource?: any|any|kendo.data.DataSource;
detailTemplate?: string|Function;
editable?: boolean | "inline" | "incell" | "popup" | GridEditable;
excel?: GridExcel;
filterable?: boolean | GridFilterable;
groupable?: boolean | GridGroupable;
height?: number|string;
messages?: GridMessages;
mobile?: boolean|string;
navigatable?: boolean;
noRecords?: boolean | GridNoRecords;
pageable?: boolean | GridPageable;
pdf?: GridPdf;
persistSelection?: boolean;
reorderable?: boolean;
resizable?: boolean;
rowTemplate?: string|Function;
scrollable?: boolean | GridScrollable;
selectable?: boolean|string;
sortable?: boolean | GridSortable;
toolbar?: string | Function | (string | GridToolbarItem)[];
```
### Expected/desired behavior
Let me set `search` without TS throwing an error.
### Environment
* **Kendo UI version:** 2020.1.219
* **@types/kendo-ui:** 2020.1.0
* **jQuery version:** 3.4.1
* **Browser:** all
| non_defect | typescript definition for grid gridoptions does not have search current behavior unable to use the search property on kendo ui gridoptions in typescript interface gridoptions name string allowcopy boolean gridallowcopy altrowtemplate string function autobind boolean columnresizehandlewidth number columns gridcolumn columnmenu boolean gridcolumnmenu datasource any any kendo data datasource detailtemplate string function editable boolean inline incell popup grideditable excel gridexcel filterable boolean gridfilterable groupable boolean gridgroupable height number string messages gridmessages mobile boolean string navigatable boolean norecords boolean gridnorecords pageable boolean gridpageable pdf gridpdf persistselection boolean reorderable boolean resizable boolean rowtemplate string function scrollable boolean gridscrollable selectable boolean string sortable boolean gridsortable toolbar string function string gridtoolbaritem expected desired behavior let me set search without ts throwing an error environment kendo ui version types kendo ui jquery version browser all | 0 |
233,135 | 17,855,615,931 | IssuesEvent | 2021-09-05 00:41:23 | takobouzu/BOAT_RACE_DB | https://api.github.com/repos/takobouzu/BOAT_RACE_DB | reopened | How to search race results related to a set replacement | documentation | ## Purpose
Organize how to search race results related to a set replacement.
- [ ] Race results immediately after a set replacement
- [ ] Number of final-race appearances after a set replacement
- [ ] Parts-replacement trends after a set replacement
## What is a set replacement?
Replacing the cylinder case, two pistons, and four piston rings together as a single batch of parts.
<img width="766" alt="直前情報|BOAT RACE オフィシャルウェブサイト 2021-05-02 23-19-08" src="https://user-images.githubusercontent.com/24547343/116816448-ef1f8400-ab9c-11eb-8a8d-91399b6780be.png">
| 1.0 | セット交換における競走成績を検索する方法 - ## 目的
Organize how to search race results related to a set replacement.
- [ ] Race results immediately after a set replacement
- [ ] Number of final-race appearances after a set replacement
- [ ] Parts-replacement trends after a set replacement
## What is a set replacement?
Replacing the cylinder case, two pistons, and four piston rings together as a single batch of parts.
<img width="766" alt="直前情報|BOAT RACE オフィシャルウェブサイト 2021-05-02 23-19-08" src="https://user-images.githubusercontent.com/24547343/116816448-ef1f8400-ab9c-11eb-8a8d-91399b6780be.png">
 | non_defect | how to search race results related to a set replacement purpose organize how to search race results related to a set replacement race results immediately after a set replacement number of final race appearances after a set replacement parts replacement trends after a set replacement what is a set replacement replacing the cylinder case two pistons and four piston rings together img width alt 直前情報|boat race オフィシャルウェブサイト src | 0
72,732 | 13,912,645,254 | IssuesEvent | 2020-10-20 19:11:29 | EdenServer/community | https://api.github.com/repos/EdenServer/community | reopened | Vrtra Drop Pool | in-code-review | Vrtra should have a 100% drop rate on 2 thread and 2 wool, but we only got 1 thread and 2 wool. plz halp =( | 1.0 | Vrtra Drop Pool - Vrtra should have a 100% drop rate on 2 thread and 2 wool, but we only got 1 thread and 2 wool. plz halp =( | non_defect | vrtra drop pool vrtra should have a drop rate on thread and wool but we only got thread and wool plz halp | 0 |
68,439 | 9,186,879,136 | IssuesEvent | 2019-03-06 00:28:31 | elm-street-technology/elevate-ui | https://api.github.com/repos/elm-street-technology/elevate-ui | opened | Documentation - Fullscreen Loader is not referenced | :book: Documentation 🔥 Enhancement 🛠 Chore | We have a full screen loader component that is not mentioned in documentation. | 1.0 | Documentation - Fullscreen Loader is not referenced - We have a full screen loader component that is not mentioned in documentation. | non_defect | documentation fullscreen loader is not referenced we have a full screen loader component that is not mentioned in documentation | 0
444,887 | 12,822,490,401 | IssuesEvent | 2020-07-06 09:54:36 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | Possible deadlock in RemoveExternalConnectivityWatcher | kind/bug lang/core priority/P1 | Internal reference - b/159559055
The issue is that inside the mutex protected critical region of RemoveExternalConnectivityWatcher, we call Cancel on the ExternalConnectivityWatcher. This Cancel call jumps into WorkSerializer which is unsafe since the lambda executed by the WorkSerializer might end up calling RemoveExternalConnectivityWatcher again leading to a deadlock. | 1.0 | Possible deadlock in RemoveExternalConnectivityWatcher - Internal reference - b/159559055
The issue is that inside the mutex protected critical region of RemoveExternalConnectivityWatcher, we call Cancel on the ExternalConnectivityWatcher. This Cancel call jumps into WorkSerializer which is unsafe since the lambda executed by the WorkSerializer might end up calling RemoveExternalConnectivityWatcher again leading to a deadlock. | non_defect | possible deadlock in removeexternalconnectivitywatcher internal reference b the issue is that inside the mutex protected critical region of removeexternalconnectivitywatcher we call cancel on the externalconnectivitywatcher this cancel call jumps into workserializer which is unsafe since the lambda executed by the workserializer might end up calling removeexternalconnectivitywatcher again leading to a deadlock | 0 |
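The hazard described in the gRPC issue above — invoking a cancel callback while still holding the mutex it may re-acquire — can be sketched in a few lines of Python. The `WatcherList` class is illustrative, not gRPC's actual C++ type; running the callback inside the `with` block instead would be the deadlock-prone shape, since `threading.Lock` is non-reentrant:

```python
import threading

class WatcherList:
    """Illustrative sketch of the hazard above: cancelling a watcher
    must not happen while the list's own mutex is held, because the
    cancel callback may re-enter remove() on this non-reentrant lock."""

    def __init__(self):
        self._mu = threading.Lock()
        self._watchers = {}

    def add(self, name, on_cancel):
        with self._mu:
            self._watchers[name] = on_cancel

    def remove(self, name):
        # Safe shape: pop under the lock, but run the callback only
        # after the lock is released, so re-entrant calls cannot block.
        with self._mu:
            cb = self._watchers.pop(name, None)
        if cb:
            cb()

watchers = WatcherList()
# The cancel callback re-enters remove(); with the lock already
# released this is harmless (the second pop simply finds nothing).
watchers.add("a", on_cancel=lambda: watchers.remove("a"))
watchers.remove("a")
print("no deadlock")
```

The fix shape is the usual one for callback-under-lock bugs: snapshot or pop the callback inside the critical region, release the lock, then invoke it.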
102,984 | 12,835,831,130 | IssuesEvent | 2020-07-07 13:29:50 | COVID19Tracking/website | https://api.github.com/repos/COVID19Tracking/website | closed | About page - contributors | DESIGN | The [About page](https://covidtracking.com/about) is ok for vp >768, however for vp < 768 it gets too close to the screen edge on the right


 | 1.0 | About page - contributors - The [About page](https://covidtracking.com/about) is ok for vp >768, however for vp < 768 it gets too close to the screen edge on the right


| non_defect | about page contributors the is ok for vp however for vp it get s to close the the screen on the right | 0 |
43,008 | 5,561,013,292 | IssuesEvent | 2017-03-24 21:08:18 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | [C# feature request] define own operators | Area-Language Design Discussion | Hello all.
I would like to propose the ability to define custom operators, in addition to the existing possibility of overloading built-in operators.
For example, when searching for similar Persons it would be useful:
Persons.Where(Person => Person.FullName ~= "Max Power");
That would open the possibility of defining the operator ~= with custom-built similarity-search logic.
This case is already possible with extension methods:
Persons.Where(Person => Person.FullName.SimilarTo("Max Power")); // SimilarTo contains the search logic
but a user-defined operator would be easier to write and more readable.
Thanks
Richard
| 1.0 | [C# feature request] define own operators - Hello all.
I would like to propose the ability to define custom operators, in addition to the existing possibility of overloading built-in operators.
For example, when searching for similar Persons it would be useful:
Persons.Where(Person => Person.FullName ~= "Max Power");
That would open the possibility of defining the operator ~= with custom-built similarity-search logic.
This case is already possible with extension methods:
Persons.Where(Person => Person.FullName.SimilarTo("Max Power")); // SimilarTo contains the search logic
but a user-defined operator would be easier to write and more readable.
Thanks
Richard
| non_defect | define own operators hello all i would like to propose the functionality to define own operators additionally to the existing posibility to override existing operators for example in searching similar persons it would be usefull persons where person person fullname max power in that case it would open the posibility to define the operator with a custom build similarity search logic this case is now possible with extensions persons where person person fullname similarto max power similarto contains search logic but an user defined operator would make it easier to write and more readable thanks richard | 0 |
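Python, like C#, only lets you overload a fixed set of existing operators rather than invent new ones, so the extension-method workaround in the Roslyn request above maps onto repurposing an existing operator. A hedged sketch — the "similarity" test here is a trivial case-insensitive comparison standing in for real fuzzy-matching logic:

```python
class Name(str):
    """Repurpose the existing % operator as a crude 'similar to' check,
    since new operators like ~= cannot be defined in the language."""

    def __mod__(self, other):
        # Placeholder similarity logic: case-insensitive equality.
        return self.lower() == str(other).lower()

people = [Name("Max Power"), Name("Homer Simpson")]
matches = [p for p in people if p % "max power"]
print(matches)  # ['Max Power']
```

This illustrates why the reporter finds the extension-method form workable but less readable: the operator spelling is constrained to whatever symbols the language already reserves.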
61,097 | 17,023,600,669 | IssuesEvent | 2021-07-03 02:51:37 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Merkaartor: recent incompatibility with Qt < 4.5 | Component: merkaartor Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 9.09pm, Tuesday, 1st June 2010]**
It may be intentional or not, but some recent changes brought an incompatibility with Qt versions smaller than 4.5.
The project file does not check for a specific version, so I assume it was by accident.
The function calls of the style:
```
QColorDialog::getColor(BgColor, this, tr("Select Color"), QColorDialog::ShowAlphaChannel);
```
were introduced with Qt 4.5, according to
http://doc.trolltech.com/4.5/qcolordialog.html#getColor
Earlier versions need the simpler alternative function with less arguments.
The attached patch fixes the issue for me with Qt 4.4.3. | 1.0 | Merkaartor: recent incompatibility with Qt < 4.5 - **[Submitted to the original trac issue database at 9.09pm, Tuesday, 1st June 2010]**
It may be intentional or not, but some recent changes brought an incompatibility with Qt versions smaller than 4.5.
The project file does not check for a specific version, so I assume it was by accident.
The function calls of the style:
```
QColorDialog::getColor(BgColor, this, tr("Select Color"), QColorDialog::ShowAlphaChannel);
```
were introduced with Qt 4.5, according to
http://doc.trolltech.com/4.5/qcolordialog.html#getColor
Earlier versions need the simpler alternative function with less arguments.
The attached patch fixes the issue for me with Qt 4.4.3. | defect | merkaartor recent incompatibility with qt it may be intentional or not but some recent changes brought an incompatibility with qt versions smaller than the project file does not check for a specific version so i assume it was by accident the function calls of the style qcolordialog getcolor bgcolor this tr select color qcolordialog showalphachannel were introduced with qt according to earlier versions need the simpler alternative function with less arguments the attached patch fixes the issue for me with qt | 1 |
504,760 | 14,620,939,543 | IssuesEvent | 2020-12-22 20:39:52 | googleapis/elixir-google-api | https://api.github.com/repos/googleapis/elixir-google-api | closed | Synthesis failed for Testing | autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate Testing. :broken_heart:
Here's the output from running `synth.py`:
```
ts/testing/lib/google_api/testing/v1/model/apk_detail.ex.
Writing ApkManifest to clients/testing/lib/google_api/testing/v1/model/apk_manifest.ex.
Writing AppBundle to clients/testing/lib/google_api/testing/v1/model/app_bundle.ex.
Writing CancelTestMatrixResponse to clients/testing/lib/google_api/testing/v1/model/cancel_test_matrix_response.ex.
Writing ClientInfo to clients/testing/lib/google_api/testing/v1/model/client_info.ex.
Writing ClientInfoDetail to clients/testing/lib/google_api/testing/v1/model/client_info_detail.ex.
Writing Date to clients/testing/lib/google_api/testing/v1/model/date.ex.
Writing DeviceFile to clients/testing/lib/google_api/testing/v1/model/device_file.ex.
Writing DeviceIpBlock to clients/testing/lib/google_api/testing/v1/model/device_ip_block.ex.
Writing DeviceIpBlockCatalog to clients/testing/lib/google_api/testing/v1/model/device_ip_block_catalog.ex.
Writing Distribution to clients/testing/lib/google_api/testing/v1/model/distribution.ex.
Writing Environment to clients/testing/lib/google_api/testing/v1/model/environment.ex.
Writing EnvironmentMatrix to clients/testing/lib/google_api/testing/v1/model/environment_matrix.ex.
Writing EnvironmentVariable to clients/testing/lib/google_api/testing/v1/model/environment_variable.ex.
Writing FileReference to clients/testing/lib/google_api/testing/v1/model/file_reference.ex.
Writing GetApkDetailsResponse to clients/testing/lib/google_api/testing/v1/model/get_apk_details_response.ex.
Writing GoogleAuto to clients/testing/lib/google_api/testing/v1/model/google_auto.ex.
Writing GoogleCloudStorage to clients/testing/lib/google_api/testing/v1/model/google_cloud_storage.ex.
Writing IntentFilter to clients/testing/lib/google_api/testing/v1/model/intent_filter.ex.
Writing IosDevice to clients/testing/lib/google_api/testing/v1/model/ios_device.ex.
Writing IosDeviceCatalog to clients/testing/lib/google_api/testing/v1/model/ios_device_catalog.ex.
Writing IosDeviceFile to clients/testing/lib/google_api/testing/v1/model/ios_device_file.ex.
Writing IosDeviceList to clients/testing/lib/google_api/testing/v1/model/ios_device_list.ex.
Writing IosModel to clients/testing/lib/google_api/testing/v1/model/ios_model.ex.
Writing IosRuntimeConfiguration to clients/testing/lib/google_api/testing/v1/model/ios_runtime_configuration.ex.
Writing IosTestLoop to clients/testing/lib/google_api/testing/v1/model/ios_test_loop.ex.
Writing IosTestSetup to clients/testing/lib/google_api/testing/v1/model/ios_test_setup.ex.
Writing IosVersion to clients/testing/lib/google_api/testing/v1/model/ios_version.ex.
Writing IosXcTest to clients/testing/lib/google_api/testing/v1/model/ios_xc_test.ex.
Writing LauncherActivityIntent to clients/testing/lib/google_api/testing/v1/model/launcher_activity_intent.ex.
Writing Locale to clients/testing/lib/google_api/testing/v1/model/locale.ex.
Writing ManualSharding to clients/testing/lib/google_api/testing/v1/model/manual_sharding.ex.
Writing NetworkConfiguration to clients/testing/lib/google_api/testing/v1/model/network_configuration.ex.
Writing NetworkConfigurationCatalog to clients/testing/lib/google_api/testing/v1/model/network_configuration_catalog.ex.
Writing ObbFile to clients/testing/lib/google_api/testing/v1/model/obb_file.ex.
Writing Orientation to clients/testing/lib/google_api/testing/v1/model/orientation.ex.
Writing ProvidedSoftwareCatalog to clients/testing/lib/google_api/testing/v1/model/provided_software_catalog.ex.
Writing RegularFile to clients/testing/lib/google_api/testing/v1/model/regular_file.ex.
Writing ResultStorage to clients/testing/lib/google_api/testing/v1/model/result_storage.ex.
Writing RoboDirective to clients/testing/lib/google_api/testing/v1/model/robo_directive.ex.
Writing RoboStartingIntent to clients/testing/lib/google_api/testing/v1/model/robo_starting_intent.ex.
Writing Shard to clients/testing/lib/google_api/testing/v1/model/shard.ex.
Writing ShardingOption to clients/testing/lib/google_api/testing/v1/model/sharding_option.ex.
Writing StartActivityIntent to clients/testing/lib/google_api/testing/v1/model/start_activity_intent.ex.
Writing SystraceSetup to clients/testing/lib/google_api/testing/v1/model/systrace_setup.ex.
Writing TestDetails to clients/testing/lib/google_api/testing/v1/model/test_details.ex.
Writing TestEnvironmentCatalog to clients/testing/lib/google_api/testing/v1/model/test_environment_catalog.ex.
Writing TestExecution to clients/testing/lib/google_api/testing/v1/model/test_execution.ex.
Writing TestMatrix to clients/testing/lib/google_api/testing/v1/model/test_matrix.ex.
Writing TestSetup to clients/testing/lib/google_api/testing/v1/model/test_setup.ex.
Writing TestSpecification to clients/testing/lib/google_api/testing/v1/model/test_specification.ex.
Writing TestTargetsForShard to clients/testing/lib/google_api/testing/v1/model/test_targets_for_shard.ex.
Writing ToolResultsExecution to clients/testing/lib/google_api/testing/v1/model/tool_results_execution.ex.
Writing ToolResultsHistory to clients/testing/lib/google_api/testing/v1/model/tool_results_history.ex.
Writing ToolResultsStep to clients/testing/lib/google_api/testing/v1/model/tool_results_step.ex.
Writing TrafficRule to clients/testing/lib/google_api/testing/v1/model/traffic_rule.ex.
Writing UniformSharding to clients/testing/lib/google_api/testing/v1/model/uniform_sharding.ex.
Writing XcodeVersion to clients/testing/lib/google_api/testing/v1/model/xcode_version.ex.
Writing ApplicationDetailService to clients/testing/lib/google_api/testing/v1/api/application_detail_service.ex.
Writing Projects to clients/testing/lib/google_api/testing/v1/api/projects.ex.
Writing TestEnvironmentCatalog to clients/testing/lib/google_api/testing/v1/api/test_environment_catalog.ex.
Writing connection.ex.
Writing metadata.ex.
Writing mix.exs
Writing README.md
Writing LICENSE
Writing .gitignore
Writing config/config.exs
Writing test/test_helper.exs
15:00:54.044 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR.
fixing file permissions
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 252, in __exit__
self.observer.stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop
self.on_thread_stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 361, in on_thread_stop
self.unschedule_all()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 357, in unschedule_all
self._clear_emitters()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 231, in _clear_emitters
emitter.stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop
self.on_thread_stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify.py", line 121, in on_thread_stop
self._inotify.close()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 50, in close
self.stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop
self.on_thread_stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 46, in on_thread_stop
self._inotify.close()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 277, in close
os.close(self._inotify_fd)
OSError: [Errno 9] Bad file descriptor
2020-12-18 07:00:57,171 autosynth [ERROR] > Synthesis failed
2020-12-18 07:00:57,171 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 291, in _inner_main
).synthesize(synth_log_path / "sponge_log.log")
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/testing/synth.metadata', 'synth.py', '--', 'Testing']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/3a455424-7540-46a9-bb77-8265c8f04c06/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
| 1.0 | Synthesis failed for Testing - Hello! Autosynth couldn't regenerate Testing. :broken_heart:
Here's the output from running `synth.py`:
```
ts/testing/lib/google_api/testing/v1/model/apk_detail.ex.
Writing ApkManifest to clients/testing/lib/google_api/testing/v1/model/apk_manifest.ex.
Writing AppBundle to clients/testing/lib/google_api/testing/v1/model/app_bundle.ex.
Writing CancelTestMatrixResponse to clients/testing/lib/google_api/testing/v1/model/cancel_test_matrix_response.ex.
Writing ClientInfo to clients/testing/lib/google_api/testing/v1/model/client_info.ex.
Writing ClientInfoDetail to clients/testing/lib/google_api/testing/v1/model/client_info_detail.ex.
Writing Date to clients/testing/lib/google_api/testing/v1/model/date.ex.
Writing DeviceFile to clients/testing/lib/google_api/testing/v1/model/device_file.ex.
Writing DeviceIpBlock to clients/testing/lib/google_api/testing/v1/model/device_ip_block.ex.
Writing DeviceIpBlockCatalog to clients/testing/lib/google_api/testing/v1/model/device_ip_block_catalog.ex.
Writing Distribution to clients/testing/lib/google_api/testing/v1/model/distribution.ex.
Writing Environment to clients/testing/lib/google_api/testing/v1/model/environment.ex.
Writing EnvironmentMatrix to clients/testing/lib/google_api/testing/v1/model/environment_matrix.ex.
Writing EnvironmentVariable to clients/testing/lib/google_api/testing/v1/model/environment_variable.ex.
Writing FileReference to clients/testing/lib/google_api/testing/v1/model/file_reference.ex.
Writing GetApkDetailsResponse to clients/testing/lib/google_api/testing/v1/model/get_apk_details_response.ex.
Writing GoogleAuto to clients/testing/lib/google_api/testing/v1/model/google_auto.ex.
Writing GoogleCloudStorage to clients/testing/lib/google_api/testing/v1/model/google_cloud_storage.ex.
Writing IntentFilter to clients/testing/lib/google_api/testing/v1/model/intent_filter.ex.
Writing IosDevice to clients/testing/lib/google_api/testing/v1/model/ios_device.ex.
Writing IosDeviceCatalog to clients/testing/lib/google_api/testing/v1/model/ios_device_catalog.ex.
Writing IosDeviceFile to clients/testing/lib/google_api/testing/v1/model/ios_device_file.ex.
Writing IosDeviceList to clients/testing/lib/google_api/testing/v1/model/ios_device_list.ex.
Writing IosModel to clients/testing/lib/google_api/testing/v1/model/ios_model.ex.
Writing IosRuntimeConfiguration to clients/testing/lib/google_api/testing/v1/model/ios_runtime_configuration.ex.
Writing IosTestLoop to clients/testing/lib/google_api/testing/v1/model/ios_test_loop.ex.
Writing IosTestSetup to clients/testing/lib/google_api/testing/v1/model/ios_test_setup.ex.
Writing IosVersion to clients/testing/lib/google_api/testing/v1/model/ios_version.ex.
Writing IosXcTest to clients/testing/lib/google_api/testing/v1/model/ios_xc_test.ex.
Writing LauncherActivityIntent to clients/testing/lib/google_api/testing/v1/model/launcher_activity_intent.ex.
Writing Locale to clients/testing/lib/google_api/testing/v1/model/locale.ex.
Writing ManualSharding to clients/testing/lib/google_api/testing/v1/model/manual_sharding.ex.
Writing NetworkConfiguration to clients/testing/lib/google_api/testing/v1/model/network_configuration.ex.
Writing NetworkConfigurationCatalog to clients/testing/lib/google_api/testing/v1/model/network_configuration_catalog.ex.
Writing ObbFile to clients/testing/lib/google_api/testing/v1/model/obb_file.ex.
Writing Orientation to clients/testing/lib/google_api/testing/v1/model/orientation.ex.
Writing ProvidedSoftwareCatalog to clients/testing/lib/google_api/testing/v1/model/provided_software_catalog.ex.
Writing RegularFile to clients/testing/lib/google_api/testing/v1/model/regular_file.ex.
Writing ResultStorage to clients/testing/lib/google_api/testing/v1/model/result_storage.ex.
Writing RoboDirective to clients/testing/lib/google_api/testing/v1/model/robo_directive.ex.
Writing RoboStartingIntent to clients/testing/lib/google_api/testing/v1/model/robo_starting_intent.ex.
Writing Shard to clients/testing/lib/google_api/testing/v1/model/shard.ex.
Writing ShardingOption to clients/testing/lib/google_api/testing/v1/model/sharding_option.ex.
Writing StartActivityIntent to clients/testing/lib/google_api/testing/v1/model/start_activity_intent.ex.
Writing SystraceSetup to clients/testing/lib/google_api/testing/v1/model/systrace_setup.ex.
Writing TestDetails to clients/testing/lib/google_api/testing/v1/model/test_details.ex.
Writing TestEnvironmentCatalog to clients/testing/lib/google_api/testing/v1/model/test_environment_catalog.ex.
Writing TestExecution to clients/testing/lib/google_api/testing/v1/model/test_execution.ex.
Writing TestMatrix to clients/testing/lib/google_api/testing/v1/model/test_matrix.ex.
Writing TestSetup to clients/testing/lib/google_api/testing/v1/model/test_setup.ex.
Writing TestSpecification to clients/testing/lib/google_api/testing/v1/model/test_specification.ex.
Writing TestTargetsForShard to clients/testing/lib/google_api/testing/v1/model/test_targets_for_shard.ex.
Writing ToolResultsExecution to clients/testing/lib/google_api/testing/v1/model/tool_results_execution.ex.
Writing ToolResultsHistory to clients/testing/lib/google_api/testing/v1/model/tool_results_history.ex.
Writing ToolResultsStep to clients/testing/lib/google_api/testing/v1/model/tool_results_step.ex.
Writing TrafficRule to clients/testing/lib/google_api/testing/v1/model/traffic_rule.ex.
Writing UniformSharding to clients/testing/lib/google_api/testing/v1/model/uniform_sharding.ex.
Writing XcodeVersion to clients/testing/lib/google_api/testing/v1/model/xcode_version.ex.
Writing ApplicationDetailService to clients/testing/lib/google_api/testing/v1/api/application_detail_service.ex.
Writing Projects to clients/testing/lib/google_api/testing/v1/api/projects.ex.
Writing TestEnvironmentCatalog to clients/testing/lib/google_api/testing/v1/api/test_environment_catalog.ex.
Writing connection.ex.
Writing metadata.ex.
Writing mix.exs
Writing README.md
Writing LICENSE
Writing .gitignore
Writing config/config.exs
Writing test/test_helper.exs
15:00:54.044 [info] Found only discovery_revision and/or formatting changes. Not significant enough for a PR.
fixing file permissions
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 252, in __exit__
self.observer.stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop
self.on_thread_stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 361, in on_thread_stop
self.unschedule_all()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 357, in unschedule_all
self._clear_emitters()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/api.py", line 231, in _clear_emitters
emitter.stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop
self.on_thread_stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify.py", line 121, in on_thread_stop
self._inotify.close()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 50, in close
self.stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/utils/__init__.py", line 81, in stop
self.on_thread_stop()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_buffer.py", line 46, in on_thread_stop
self._inotify.close()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/watchdog/observers/inotify_c.py", line 277, in close
os.close(self._inotify_fd)
OSError: [Errno 9] Bad file descriptor
2020-12-18 07:00:57,171 autosynth [ERROR] > Synthesis failed
2020-12-18 07:00:57,171 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 354, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 189, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 291, in _inner_main
).synthesize(synth_log_path / "sponge_log.log")
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/testing/synth.metadata', 'synth.py', '--', 'Testing']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/3a455424-7540-46a9-bb77-8265c8f04c06/targets/github%2Fsynthtool;config=default/tests;query=elixir-google-api;failed=false).
| non_defect | synthesis failed for testing hello autosynth couldn t regenerate testing broken heart here s the output from running synth py ts testing lib google api testing model apk detail ex writing apkmanifest to clients testing lib google api testing model apk manifest ex writing appbundle to clients testing lib google api testing model app bundle ex writing canceltestmatrixresponse to clients testing lib google api testing model cancel test matrix response ex writing clientinfo to clients testing lib google api testing model client info ex writing clientinfodetail to clients testing lib google api testing model client info detail ex writing date to clients testing lib google api testing model date ex writing devicefile to clients testing lib google api testing model device file ex writing deviceipblock to clients testing lib google api testing model device ip block ex writing deviceipblockcatalog to clients testing lib google api testing model device ip block catalog ex writing distribution to clients testing lib google api testing model distribution ex writing environment to clients testing lib google api testing model environment ex writing environmentmatrix to clients testing lib google api testing model environment matrix ex writing environmentvariable to clients testing lib google api testing model environment variable ex writing filereference to clients testing lib google api testing model file reference ex writing getapkdetailsresponse to clients testing lib google api testing model get apk details response ex writing googleauto to clients testing lib google api testing model google auto ex writing googlecloudstorage to clients testing lib google api testing model google cloud storage ex writing intentfilter to clients testing lib google api testing model intent filter ex writing iosdevice to clients testing lib google api testing model ios device ex writing iosdevicecatalog to clients testing lib google api testing model ios device catalog ex writing 
iosdevicefile to clients testing lib google api testing model ios device file ex writing iosdevicelist to clients testing lib google api testing model ios device list ex writing iosmodel to clients testing lib google api testing model ios model ex writing iosruntimeconfiguration to clients testing lib google api testing model ios runtime configuration ex writing iostestloop to clients testing lib google api testing model ios test loop ex writing iostestsetup to clients testing lib google api testing model ios test setup ex writing iosversion to clients testing lib google api testing model ios version ex writing iosxctest to clients testing lib google api testing model ios xc test ex writing launcheractivityintent to clients testing lib google api testing model launcher activity intent ex writing locale to clients testing lib google api testing model locale ex writing manualsharding to clients testing lib google api testing model manual sharding ex writing networkconfiguration to clients testing lib google api testing model network configuration ex writing networkconfigurationcatalog to clients testing lib google api testing model network configuration catalog ex writing obbfile to clients testing lib google api testing model obb file ex writing orientation to clients testing lib google api testing model orientation ex writing providedsoftwarecatalog to clients testing lib google api testing model provided software catalog ex writing regularfile to clients testing lib google api testing model regular file ex writing resultstorage to clients testing lib google api testing model result storage ex writing robodirective to clients testing lib google api testing model robo directive ex writing robostartingintent to clients testing lib google api testing model robo starting intent ex writing shard to clients testing lib google api testing model shard ex writing shardingoption to clients testing lib google api testing model sharding option ex writing startactivityintent to 
clients testing lib google api testing model start activity intent ex writing systracesetup to clients testing lib google api testing model systrace setup ex writing testdetails to clients testing lib google api testing model test details ex writing testenvironmentcatalog to clients testing lib google api testing model test environment catalog ex writing testexecution to clients testing lib google api testing model test execution ex writing testmatrix to clients testing lib google api testing model test matrix ex writing testsetup to clients testing lib google api testing model test setup ex writing testspecification to clients testing lib google api testing model test specification ex writing testtargetsforshard to clients testing lib google api testing model test targets for shard ex writing toolresultsexecution to clients testing lib google api testing model tool results execution ex writing toolresultshistory to clients testing lib google api testing model tool results history ex writing toolresultsstep to clients testing lib google api testing model tool results step ex writing trafficrule to clients testing lib google api testing model traffic rule ex writing uniformsharding to clients testing lib google api testing model uniform sharding ex writing xcodeversion to clients testing lib google api testing model xcode version ex writing applicationdetailservice to clients testing lib google api testing api application detail service ex writing projects to clients testing lib google api testing api projects ex writing testenvironmentcatalog to clients testing lib google api testing api test environment catalog ex writing connection ex writing metadata ex writing mix exs writing readme md writing license writing gitignore writing config config exs writing test test helper exs found only discovery revision and or formatting changes not significant enough for a pr fixing file permissions traceback most recent call last file home kbuilder pyenv versions lib runpy py 
line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file tmpfs src github synthtool synthtool metadata py line in exit self observer stop file tmpfs src github synthtool env lib site packages watchdog utils init py line in stop self on thread stop file tmpfs src github synthtool env lib site packages watchdog observers api py line in on thread stop self unschedule all file tmpfs src github synthtool env lib site packages watchdog observers api py line in unschedule all self clear emitters file tmpfs src github synthtool env lib site packages watchdog observers api py line in clear emitters emitter stop file tmpfs src github synthtool env lib site packages watchdog utils init py line in stop self on thread stop file tmpfs src github synthtool env lib site packages watchdog observers inotify py line in on thread stop self inotify close file tmpfs src github synthtool env lib site packages watchdog observers inotify buffer py line in close self stop file tmpfs src github synthtool env lib site packages watchdog utils init py line in stop self on thread stop file tmpfs src github synthtool env lib site packages watchdog observers inotify buffer py line in on thread stop self inotify close file tmpfs src github synthtool env lib site packages watchdog observers inotify c py line in 
close os close self inotify fd oserror bad file descriptor autosynth synthesis failed autosynth running git clean fdx removing pycache traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize synth log path sponge log log file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log | 0 |
469,847 | 13,526,656,446 | IssuesEvent | 2020-09-15 14:30:03 | abpframework/abp | https://api.github.com/repos/abpframework/abp | closed | Allow to select Blazor UI while creating a new solution | abp-cli abp-io effort-13 feature priority:high ui-blazor | - [x] abp cli
- [x] abp.io web site (direct download)
- [x] abp suite | 1.0 | Allow to select Blazor UI while creating a new solution - - [x] abp cli
- [x] abp.io web site (direct download)
- [x] abp suite | non_defect | allow to select blazor ui while creating a new solution abp cli abp io web site direct download abp suite | 0 |
7,667 | 5,115,443,542 | IssuesEvent | 2017-01-06 21:50:15 | coreos/bugs | https://api.github.com/repos/coreos/bugs | closed | Can't downgrade to a lower version on the same channel | area/usability component/update-engine kind/question team/os | # Issue Report #
I am using https://coreos.com/os/docs/latest/manual-rollbacks.html#performing-a-manual-rollback article to lower my version on the same channel, i.e. `stable` however I keep booting to the same version.
### CoreOS Version ###
1185.5.0
### Environment ###
DigitalOcean
### Expected Behavior ###
I follow the docs and set `COREOS_RELEASE_VERSION=1122.3.0` after copying the release file to /tmp/release and run `sudo systemctl restart update-engine; update_engine_client -update` then I expect to be in lower version. I also run the one-liner command to revert to the other partition.
### Actual Behavior ###
It doesn't get me to a lower version, whatever I did I am still in the same partition.
### Reproduction Steps ###
```
# cgpt show /dev/vda*
start size part contents
0 1 Hybrid MBR
1 1 Pri GPT header
2 32 Pri GPT table
4096 262144 1 Label: "EFI-SYSTEM"
Type: EFI System Partition
UUID: 340A53DF-DE0F-4B1B-9F3D-86028CE53D13
Attr: Legacy BIOS Bootable
266240 4096 2 Label: "BIOS-BOOT"
Type: BIOS Boot Partition
UUID: 25F85B89-95C9-42B5-BB88-374B44B8F712
270336 2097152 3 Label: "USR-A"
Type: Alias for coreos-rootfs
UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
Attr: priority=1 tries=0 successful=0
2367488 2097152 4 Label: "USR-B"
Type: Alias for coreos-rootfs
UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
Attr: priority=2 tries=0 successful=1
4464640 262144 6 Label: "OEM"
Type: Alias for linux-data
UUID: 7FFB25F1-A3AB-406B-8C74-F5D9FE3FB5A8
4726784 131072 7 Label: "OEM-CONFIG"
Type: CoreOS reserved
UUID: 95A635FE-2C8A-4245-A29C-9E743A2E2A7D
4857856 37085151 9 Label: "ROOT"
Type: CoreOS auto-resize
UUID: B2F7CC0E-C179-476B-80CA-82EBCB295334
41943007 32 Sec GPT table
41943039 1 Sec GPT header
```
active partition:
```
# findmnt --noheadings --raw --output=source --target=/usr
/dev/vda4
```
Then I do the following steps:
1. `cp /usr/share/coreos/release /tmp`
2. edit /tmp/release with `COREOS_RELEASE_VERSION=1122.3.0` because I want that
3. `sudo mount -o bind /tmp/release /usr/share/coreos/release`
4. `sudo systemctl restart update-engine`
5. wait a few secs
6. `update_engine_client -update`
7. `reboot`
8. ssh again
9. I'm on the existing version (1185.5.0) again.
<details>
<summary>"update_engine_client -update" output</summary>
<pre>
core core # update_engine_client -update
I0105 01:24:17.417250 1725 update_engine_client.cc:247] Initiating update check and install.
I0105 01:24:17.426007 1725 update_engine_client.cc:252] Waiting for update to complete.
LAST_CHECKED_TIME=1483579457
PROGRESS=0.000000
CURRENT_OP=UPDATE_STATUS_UPDATE_AVAILABLE
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.000000
CURRENT_OP=UPDATE_STATUS_UPDATE_AVAILABLE
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.040074
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.100192
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.160311
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.230449
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.400786
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.450884
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.490963
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.551082
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.741458
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.801576
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.911794
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.000000
CURRENT_OP=UPDATE_STATUS_FINALIZING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
Broadcast message from locksmithd at 2017-01-05 01:25:31.215990388 +0000 UTC:
System reboot in 5 minutes!
LAST_CHECKED_TIME=1483579457
PROGRESS=0.000000
CURRENT_OP=UPDATE_STATUS_UPDATED_NEED_REBOOT
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
I0105 01:25:32.699391 1725 update_engine_client.cc:194] Update succeeded -- reboot needed.
</pre>
</details>
<details>
<summary>"cgpt show" output after "-update"</summary>
<pre>
$ cgpt show /dev/vda*
start size part contents
0 1 Hybrid MBR
1 1 Pri GPT header
2 32 Pri GPT table
4096 262144 1 Label: "EFI-SYSTEM"
Type: EFI System Partition
UUID: 340A53DF-DE0F-4B1B-9F3D-86028CE53D13
Attr: Legacy BIOS Bootable
266240 4096 2 Label: "BIOS-BOOT"
Type: BIOS Boot Partition
UUID: 25F85B89-95C9-42B5-BB88-374B44B8F712
270336 2097152 3 Label: "USR-A"
Type: Alias for coreos-rootfs
UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
Attr: priority=2 tries=1 successful=0
2367488 2097152 4 Label: "USR-B"
Type: Alias for coreos-rootfs
UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
Attr: priority=1 tries=0 successful=1
4464640 262144 6 Label: "OEM"
Type: Alias for linux-data
UUID: 7FFB25F1-A3AB-406B-8C74-F5D9FE3FB5A8
4726784 131072 7 Label: "OEM-CONFIG"
Type: CoreOS reserved
UUID: 95A635FE-2C8A-4245-A29C-9E743A2E2A7D
4857856 37085151 9 Label: "ROOT"
Type: CoreOS auto-resize
UUID: B2F7CC0E-C179-476B-80CA-82EBCB295334
41943007 32 Sec GPT table
41943039 1 Sec GPT header
</pre>
</details>
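The priority/tries/successful changes between the two `cgpt show` listings feed a Chrome OS-style GPT selection rule: a USR partition is a boot candidate if it has booted successfully before or still has tries left, and the candidate with the highest priority is tried first. A hedged sketch of that rule (an illustration, not CoreOS's actual bootloader code) applied to the attributes shown above:

```python
# Hedged sketch of the GPT priority scheme: candidates are partitions
# that either booted successfully before (successful == 1) or still
# have boot attempts left (tries > 0); highest priority wins.

def pick_boot_partition(partitions):
    candidates = [p for p in partitions
                  if p["successful"] == 1 or p["tries"] > 0]
    return max(candidates, key=lambda p: p["priority"])["label"]

# Attributes from the first `cgpt show` listing (before -update):
before = [
    {"label": "USR-A", "priority": 1, "tries": 0, "successful": 0},
    {"label": "USR-B", "priority": 2, "tries": 0, "successful": 1},
]
# Attributes from the listing after `update_engine_client -update`:
after = [
    {"label": "USR-A", "priority": 2, "tries": 1, "successful": 0},
    {"label": "USR-B", "priority": 1, "tries": 0, "successful": 1},
]

print(pick_boot_partition(before))  # USR-B
print(pick_boot_partition(after))   # USR-A
```

Under this rule, `-update` did promote the passive partition (USR-A has the highest priority and a try remaining after the update), so no manual swap should be needed; if the same version boots afterwards, a likelier explanation is that the payload served was not actually the older release. The `NEW_VERSION=0.0.0.0` in the client output appears to be a placeholder reported by the update service rather than the real target version.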
----
From reading the doc what's not clear to me is
1. Does `-update` prioritize the passive partition so that after `reboot` it is picked up?
2. Do I have to swap the boot partition myself manually in this case?
3. Why does `-update` output show NEW_VERSION=0.0.0? That's confusing
Either I'm missing something or it's not straightforward to downgrade a coreos version on the same channel? | True | Can't downgrade to a lower version on the same channel - # Issue Report #
I am using https://coreos.com/os/docs/latest/manual-rollbacks.html#performing-a-manual-rollback article to lower my version on the same channel, i.e. `stable` however I keep booting to the same version.
### CoreOS Version ###
1185.5.0
### Environment ###
DigitalOcean
### Expected Behavior ###
I follow the docs and set `COREOS_RELEASE_VERSION=1122.3.0` after copying the release file to /tmp/release and run `sudo systemctl restart update-engine; update_engine_client -update` then I expect to be in lower version. I also run the one-liner command to revert to the other partition.
### Actual Behavior ###
It doesn't get me to a lower version, whatever I did I am still in the same partition.
### Reproduction Steps ###
```
# cgpt show /dev/vda*
start size part contents
0 1 Hybrid MBR
1 1 Pri GPT header
2 32 Pri GPT table
4096 262144 1 Label: "EFI-SYSTEM"
Type: EFI System Partition
UUID: 340A53DF-DE0F-4B1B-9F3D-86028CE53D13
Attr: Legacy BIOS Bootable
266240 4096 2 Label: "BIOS-BOOT"
Type: BIOS Boot Partition
UUID: 25F85B89-95C9-42B5-BB88-374B44B8F712
270336 2097152 3 Label: "USR-A"
Type: Alias for coreos-rootfs
UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
Attr: priority=1 tries=0 successful=0
2367488 2097152 4 Label: "USR-B"
Type: Alias for coreos-rootfs
UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
Attr: priority=2 tries=0 successful=1
4464640 262144 6 Label: "OEM"
Type: Alias for linux-data
UUID: 7FFB25F1-A3AB-406B-8C74-F5D9FE3FB5A8
4726784 131072 7 Label: "OEM-CONFIG"
Type: CoreOS reserved
UUID: 95A635FE-2C8A-4245-A29C-9E743A2E2A7D
4857856 37085151 9 Label: "ROOT"
Type: CoreOS auto-resize
UUID: B2F7CC0E-C179-476B-80CA-82EBCB295334
41943007 32 Sec GPT table
41943039 1 Sec GPT header
```
active partition:
```
# findmnt --noheadings --raw --output=source --target=/usr
/dev/vda4
```
Then I do the following steps:
1. `cp /usr/share/coreos/release /tmp`
2. edit /tmp/release with `COREOS_RELEASE_VERSION=1122.3.0` because I want that
3. `sudo mount -o bind /tmp/release /usr/share/coreos/release`
4. `sudo systemctl restart update-engine`
5. wait a few secs
6. `update_engine_client -update`
7. `reboot`
8. ssh again
9. I'm on the existing version (1185.5.0) again.
<details>
<summary>"update_engine_client -update" output</summary>
<pre>
core core # update_engine_client -update
I0105 01:24:17.417250 1725 update_engine_client.cc:247] Initiating update check and install.
I0105 01:24:17.426007 1725 update_engine_client.cc:252] Waiting for update to complete.
LAST_CHECKED_TIME=1483579457
PROGRESS=0.000000
CURRENT_OP=UPDATE_STATUS_UPDATE_AVAILABLE
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.000000
CURRENT_OP=UPDATE_STATUS_UPDATE_AVAILABLE
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.040074
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.100192
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.160311
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.230449
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.400786
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.450884
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.490963
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.551082
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.741458
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.801576
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.911794
CURRENT_OP=UPDATE_STATUS_DOWNLOADING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
LAST_CHECKED_TIME=1483579457
PROGRESS=0.000000
CURRENT_OP=UPDATE_STATUS_FINALIZING
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
Broadcast message from locksmithd at 2017-01-05 01:25:31.215990388 +0000 UTC:
System reboot in 5 minutes!
LAST_CHECKED_TIME=1483579457
PROGRESS=0.000000
CURRENT_OP=UPDATE_STATUS_UPDATED_NEED_REBOOT
NEW_VERSION=0.0.0.0
NEW_SIZE=264897038
I0105 01:25:32.699391 1725 update_engine_client.cc:194] Update succeeded -- reboot needed.
</pre>
</details>
<details>
<summary>"cgpt show" output after "-update"</summary>
<pre>
$ cgpt show /dev/vda*
start size part contents
0 1 Hybrid MBR
1 1 Pri GPT header
2 32 Pri GPT table
4096 262144 1 Label: "EFI-SYSTEM"
Type: EFI System Partition
UUID: 340A53DF-DE0F-4B1B-9F3D-86028CE53D13
Attr: Legacy BIOS Bootable
266240 4096 2 Label: "BIOS-BOOT"
Type: BIOS Boot Partition
UUID: 25F85B89-95C9-42B5-BB88-374B44B8F712
270336 2097152 3 Label: "USR-A"
Type: Alias for coreos-rootfs
UUID: 7130C94A-213A-4E5A-8E26-6CCE9662F132
Attr: priority=2 tries=1 successful=0
2367488 2097152 4 Label: "USR-B"
Type: Alias for coreos-rootfs
UUID: E03DD35C-7C2D-4A47-B3FE-27F15780A57C
Attr: priority=1 tries=0 successful=1
4464640 262144 6 Label: "OEM"
Type: Alias for linux-data
UUID: 7FFB25F1-A3AB-406B-8C74-F5D9FE3FB5A8
4726784 131072 7 Label: "OEM-CONFIG"
Type: CoreOS reserved
UUID: 95A635FE-2C8A-4245-A29C-9E743A2E2A7D
4857856 37085151 9 Label: "ROOT"
Type: CoreOS auto-resize
UUID: B2F7CC0E-C179-476B-80CA-82EBCB295334
41943007 32 Sec GPT table
41943039 1 Sec GPT header
</pre>
</details>
----
From reading the doc what's not clear to me is
1. Does `-update` prioritize the passive partition so that after `reboot` it is picked up?
2. Do I have to swap the boot partition myself manually in this case?
3. Why does `-update` output show NEW_VERSION=0.0.0? That's confusing
Either I'm missing something or it's not straightforward to downgrade a coreos version on the same channel? | non_defect | can t downgrade to a lower version on the same channel issue report i am using article to lower my version on the same channel i e stable however i keep booting to the same version coreos version environment digitalocean expected behavior i follow the docs and set coreos release version after copying the release file to tmp release and run sudo systemctl restart update engine update engine client update then i expect to be in lower version i also run the one liner command to revert to the other partition actual behavior it doesn t get me to a lower version whatever i did i am still in the same partition reproduction steps cgpt show dev vda start size part contents hybrid mbr pri gpt header pri gpt table label efi system type efi system partition uuid attr legacy bios bootable label bios boot type bios boot partition uuid label usr a type alias for coreos rootfs uuid attr priority tries successful label usr b type alias for coreos rootfs uuid attr priority tries successful label oem type alias for linux data uuid label oem config type coreos reserved uuid label root type coreos auto resize uuid sec gpt table sec gpt header active partition findmnt noheadings raw output source target usr dev then i do the following steps cp usr share coreos release tmp edit tmp release with coreos release version because i want that sudo mount o bind tmp release usr share coreos release sudo systemctl restart update engine wait a few secs update engine client update reboot ssh again i m on the existing version again update engine client update output core core update engine client update update engine client cc initiating update check and install update engine client cc waiting for update to complete last checked time progress current op update status update available new version new size last checked time progress current op update status update available new 
version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status downloading new version new size last checked time progress current op update status finalizing new version new size broadcast message from locksmithd at utc system reboot in minutes last checked time progress current op update status updated need reboot new version new size update engine client cc update succeeded reboot needed cgpt show output after update cgpt show dev vda start size part contents hybrid mbr pri gpt header pri gpt table label efi system type efi system partition uuid attr legacy bios bootable label bios boot type bios boot partition uuid label usr a type alias for coreos rootfs uuid attr priority tries successful label usr b type alias for coreos rootfs uuid attr priority tries successful label oem type alias for linux data uuid label oem config type coreos reserved uuid label root type coreos auto resize uuid sec gpt table sec gpt header from reading the doc what s not clear to me is does update prioritize the passive partition so that after reboot it is picked up do i have to swap the boot partition myself manually in this case why does 
update output show new version that s confusing either i m missing something or it s not straightforward to downgrade a coreos version on the same channel | 0 |
245,286 | 18,778,697,238 | IssuesEvent | 2021-11-08 01:48:45 | A-S-T-U-C-E/STudio4Education | https://api.github.com/repos/A-S-T-U-C-E/STudio4Education | closed | Undocumented icons | documentation | The icon at the top right of the blocks area is not documented in the blue banner.
The icon at the bottom right (a square with a diagonal arrow) is not documented in the blue banner.
These two icons do not trigger any action (for now?) | 1.0 | Undocumented icons - The icon at the top right of the blocks area is not documented in the blue banner.
The icon at the bottom right (a square with a diagonal arrow) is not documented in the blue banner.
These two icons do not trigger any action (for now?) | non_defect | undocumented icons the icon at the top right of the blocks area is not documented in the blue banner the icon at the bottom right square with a diagonal arrow is not documented in the blue banner these two icons do not trigger any action for now | 0
1,467 | 2,603,966,093 | IssuesEvent | 2015-02-24 18:59:02 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | Shenyang: what causes small bumps on the foreskin | auto-migrated Priority-Medium Type-Defect | ```
Shenyang: what causes small bumps on the foreskin 〓 Shenyang Military Region Political Department Hospital, STD department 〓 TEL: 024-31023308 〓 Founded in 1946, devoted for 68 years to the research and treatment of sexually transmitted diseases[…] located at No. 32 Erwei Road, Shenhe District, Shenyang. A long-established hospital that grew up alongside New China, with excellent equipment, authoritative techniques and a gathering of experts; a comprehensive hospital integrating prevention, health care, treatment, scientific research and rehabilitation. One of the first state-run public[…] military hospitals and one of the first nationally designated standardized medical units, and a teaching hospital of the Fourth Military Medical University,[…] Southeast University and other well-known institutions of higher education. It was rated an advanced unit for health work by the Health Department of the PLA Air Force Logistics Department, and has twice been awarded a collective[…] second-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:02 | 1.0 | 沈阳包皮小颗粒怎么回事 - ```
Shenyang: what causes small bumps on the foreskin 〓 Shenyang Military Region Political Department Hospital, STD department 〓 TEL: 024-31023308 〓 Founded in 1946, devoted for 68 years to the research and treatment of sexually transmitted diseases[…] located at No. 32 Erwei Road, Shenhe District, Shenyang. A long-established hospital that grew up alongside New China, with excellent equipment, authoritative techniques and a gathering of experts; a comprehensive hospital integrating prevention, health care, treatment, scientific research and rehabilitation. One of the first state-run public[…] military hospitals and one of the first nationally designated standardized medical units, and a teaching hospital of the Fourth Military Medical University,[…] Southeast University and other well-known institutions of higher education. It was rated an advanced unit for health work by the Health Department of the PLA Air Force Logistics Department, and has twice been awarded a collective[…] second-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:02 | defect | 沈阳包皮小颗粒怎么回事 沈阳包皮小颗粒怎么回事〓沈陽軍區政治部醫院性病〓tel: 〓 , � �� 。是一所與新中國同建立共輝� ��的歷史悠久、設備精良、技術權威、專家云集,是預防、保 健、醫療、科研康復為一體的綜合性醫院。是國家首批公立�� �等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學� ��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍 空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集�� �二等功。 original issue reported on code google com by gmail com on jun at | 1 |
72,996 | 24,398,404,063 | IssuesEvent | 2022-10-04 21:41:07 | dkfans/keeperfx | https://api.github.com/repos/dkfans/keeperfx | opened | Pathfinding slowdown on 14-Sleepiburgh | Type-Defect Priority-High | Reported by multiple users on r3009, the game slows down immensely at some point of the level.
Load the save in that version to experience slowdowns: [fx1g0001.zip](https://github.com/dkfans/keeperfx/files/9711057/fx1g0001.zip)
Note that in the latest master version the slowdown is much less severe.
The heroes on the north side of the map slow the game down due to pathfinding troubles, even though they are locked inside a room and trapped across lava.
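A plausible mechanism for this kind of slowdown (an illustration, not KeeperFX's actual pathfinder): when a creature's goal is unreachable, for example blocked off across lava, an uninformed search must expand every reachable tile before it can report failure, and it pays that worst case again each time the creature re-plans.

```python
# Illustration of why unreachable goals are the pathfinding worst case:
# a BFS must exhaust the entire reachable region before giving up.
from collections import deque

def expansions_until_done(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    seen, queue, expanded = {start}, deque([start]), 0
    while queue:
        r, c = queue.popleft()
        expanded += 1
        if (r, c) == goal:
            return expanded          # path exists
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == "." and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return expanded                  # goal unreachable: searched everything

open_map = ["." * 20] * 20
# A wall (think lava) sealing off the bottom half, goal on the far side:
walled = ["." * 20] * 9 + ["#" * 20] + ["." * 20] * 10

print(expansions_until_done(open_map, (0, 0), (19, 19)))  # goal found
print(expansions_until_done(walled, (0, 0), (19, 19)))    # 180: every reachable tile
```

Caching the "unreachable" verdict instead of re-searching every replan is one common way engines reduce exactly this cost.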
| 1.0 | Pathfinding slowdown on 14-Sleepiburgh - Reported by multiple users on r3009, the game slows down immensely at some point of the level.
Load the save in that version to experience slowdowns: [fx1g0001.zip](https://github.com/dkfans/keeperfx/files/9711057/fx1g0001.zip)
Note that in the latest master version the slowdown is much less severe.
The heroes on the north side of the map slow the game down due to pathfinding troubles, even though they are locked inside a room and trapped across lava.
| defect | pathfinding slowdown on sleepiburgh reported by multiple users on the game slows down immensely at some point of the level load the save in that version to experience slowdowns note that in the latest master version the slowdown is much less severe the heroes on the north side of the map slow the game down due to pathfinding troubles even though they are locked inside a room and trapped across lava | 1 |
171,259 | 6,485,739,071 | IssuesEvent | 2017-08-19 13:23:32 | JetBoom/noxiousnet-issues | https://api.github.com/repos/JetBoom/noxiousnet-issues | closed | [ZS] Zombie vision can't be toggled when you're dead | bug priority low zombie survival | This is a minor annoyance and a call for enhancement.
1. Zombie vision isn't able to be toggled on/off if you're dead, would like to be able to use it in spectator free roam.
2. I'm pretty sure Zombie vision is delayed when the server is under heavy load. | 1.0 | [ZS] Zombie vision can't be toggled when you're dead - This is a minor annoyance and a call for enhancement.
1. Zombie vision isn't able to be toggled on/off if you're dead, would like to be able to use it in spectator free roam.
2. I'm pretty sure Zombie vision is delayed when the server is under heavy load. | non_defect | zombie vision can t be toggled when you re dead this is a minor annoyance and a call for enhancement zombie vision isn t able to be toggled on off if you re dead would like to be able to use it in spectator free roam i m pretty sure zombie vision is delayed when the server is under heavy load | 0 |
257,097 | 27,561,768,923 | IssuesEvent | 2023-03-07 22:45:12 | samqws-marketing/coursera_naptime | https://api.github.com/repos/samqws-marketing/coursera_naptime | closed | CVE-2020-9546 (High) detected in multiple libraries - autoclosed | Mend: dependency security vulnerability | ## CVE-2020-9546 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.3.3.jar</b>, <b>jackson-databind-2.8.11.4.jar</b>, <b>jackson-databind-2.9.0.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.3.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.3.3.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- webjars-locator-0.26.jar
- :x: **jackson-databind-2.3.3.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.11.4.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.9.0.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- play-json_2.12-2.6.14.jar
- jackson-datatype-jdk8-2.8.11.jar
- :x: **jackson-databind-2.9.0.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9546>CVE-2020-9546</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.8.11.6</p>
<p>Direct dependency fix Resolution (com.typesafe.play:play-ehcache_2.12): 2.7.0</p>
</p>
</details>
<p></p>
| True | CVE-2020-9546 (High) detected in multiple libraries - autoclosed - ## CVE-2020-9546 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.3.3.jar</b>, <b>jackson-databind-2.8.11.4.jar</b>, <b>jackson-databind-2.9.0.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.3.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.3.3.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- webjars-locator-0.26.jar
- :x: **jackson-databind-2.3.3.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.11.4.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.9.0.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- play-json_2.12-2.6.14.jar
- jackson-datatype-jdk8-2.8.11.jar
- :x: **jackson-databind-2.9.0.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.hadoop.shaded.com.zaxxer.hikari.HikariConfig (aka shaded hikari-config).
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-9546>CVE-2020-9546</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9546</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.8.11.6</p>
<p>Direct dependency fix Resolution (com.typesafe.play:play-ehcache_2.12): 2.7.0</p>
</p>
</details>
<p></p>
| non_defect | cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy sbt plugin jar root library sbt js engine jar npm jar webjars locator jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy play ehcache jar root library play jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy play ehcache jar root library play jar play json jar jackson datatype jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache hadoop shaded com zaxxer hikari hikariconfig aka shaded hikari config publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind direct dependency fix resolution com typesafe play play ehcache | 0
29,039 | 5,512,038,709 | IssuesEvent | 2017-03-17 08:01:17 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | Paginators not in sync for DataTable, DataList and DataGrid | defect | <!--
- IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING.
- IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours.
-->
**I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
http://plnkr.co/edit/966XeMZYMs2IDq0L4IOC
**Current behavior**
When the number of rows is changed from 5 to 10 in the upper paginator, the number of page buttons changes correctly from two to one in the upper paginator, but the lower paginator still has two buttons.
**Expected behavior**
The number of buttons in the upper and lower paginator should be the same when the number of rows per page option is changed in one of the paginators.
**Minimal reproduction of the problem with instructions**
- start the above plunker that contains a datalist with two paginators and six items.
- change the number of rows/page from 5 to 10 in the upper paginator.
- the datalist correctly shows all six items in one page, but the lower paginator still has 2 page buttons.
By the way, the same thing happens if the number of rows/page is changed in the lower paginator. The number buttons in the upper paginator is wrong.
**What is the motivation / use case for changing the behavior?**
<!-- Describe the motivation or the concrete use case -->
**Please tell us about your environment:**
Window 8.1, IE 11
* **Angular version:** 2.0.4
* **PrimeNG version:** 2.0.3
* **Browser:** [IE11, Firefox ]
* **Language:** [TypeScript 2.2.1]
* **Node (for AoT issues):** `node --version` =
| 1.0 | Paginators not in sync for DataTable, DataList and DataGrid - <!--
- IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING.
- IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours.
-->
**I'm submitting a ...** (check one with "x")
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
http://plnkr.co/edit/966XeMZYMs2IDq0L4IOC
**Current behavior**
When the number of rows is changed from 5 to 10 in the upper paginator, the number of page buttons changes correctly from two to one in the upper paginator, but the lower paginator still has two buttons.
**Expected behavior**
The number of buttons in the upper and lower paginator should be the same when the number of rows per page option is changed in one of the paginators.
**Minimal reproduction of the problem with instructions**
- start the above plunker that contains a datalist with two paginators and six items.
- change the number of rows/page from 5 to 10 in the upper paginator.
- the datalist correctly shows all six items in one page, but the lower paginator still has 2 page buttons.
By the way, the same thing happens if the number of rows/page is changed in the lower paginator. The number buttons in the upper paginator is wrong.
**What is the motivation / use case for changing the behavior?**
<!-- Describe the motivation or the concrete use case -->
**Please tell us about your environment:**
Window 8.1, IE 11
* **Angular version:** 2.0.4
* **PrimeNG version:** 2.0.3
* **Browser:** [IE11, Firefox ]
* **Language:** [TypeScript 2.2.1]
* **Node (for AoT issues):** `node --version` =
| defect | paginators not in sync for datatable datalist and datagrid if you don t fill out the following information we might close your issue without investigating if you d like to secure our response you may consider primeng pro support where support is provided within hours i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports current behavior when the number of rows is changed from to in the upper paginator the number of page buttons changes correctly from two to one in the upper paginator but the lower paginator still has two buttons expected behavior the number of buttons in the upper and lower paginator should be the same when the number of rows per page option is changed in one of the paginators minimal reproduction of the problem with instructions start the above plunker that contains a datalist with two paginators and six items change the number of rows page from to in the upper paginator the datalist correctly shows all six items in one page but the lower paginator still has page buttons by the way the same thing happens if the number of rows page is changed in the lower paginator the number buttons in the upper paginator is wrong what is the motivation use case for changing the behavior please tell us about your environment window ie angular version primeng version browser language node for aot issues node version | 1 |
10,548 | 2,622,171,906 | IssuesEvent | 2015-03-04 00:14:52 | byzhang/rapidjson | https://api.github.com/repos/byzhang/rapidjson | closed | Document leaks memory when stored as Value. | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. rapidjson::Value* value = new rapidjson::Document;
2. delete value;
3. // leak memory.
What is the expected output? What do you see instead?
Expected all memory to be freed, but Document::stack_ is not freed.
What version of the product are you using? On what operating system?
Tip of the tree. Linux.
Please provide any additional information below.
rapidjson::Value has a non-virtual destructor. rapidjson::Document does not
have a destructor at all. When a pointer to rapidjson::Value is deleted, the
destructor for rapidjson::Document does not get called. This causes the leak of
Document::stack_. The solution is to mark the destructor virtual in Value and
also add a virtual destructor to Document.
```
Original issue reported on code.google.com by `rous...@chromium.org` on 19 May 2014 at 8:00 | 1.0 | Document leaks memory when stored as Value. - ```
What steps will reproduce the problem?
1. rapidjson::Value* value = new rapidjson::Document;
2. delete value;
3. // leak memory.
What is the expected output? What do you see instead?
Expected all memory to be freed, but Document::stack_ is not freed.
What version of the product are you using? On what operating system?
Tip of the tree. Linux.
Please provide any additional information below.
rapidjson::Value has a non-virtual destructor. rapidjson::Document does not
have a destructor at all. When a pointer to rapidjson::Value is deleted, the
destructor for rapidjson::Document does not get called. This causes the leak of
Document::stack_. The solution is to mark the destructor virtual in Value and
also add a virtual destructor to Document.
```
Original issue reported on code.google.com by `rous...@chromium.org` on 19 May 2014 at 8:00 | defect | document leaks memory when stored as value what steps will reproduce the problem rapidjson value value new rapidjson document delete value leak memory what is the expected output what do you see instead expected all memory to be freed but document stack is not freed what version of the product are you using on what operating system tip of the tree linux please provide any additional information below rapidjson value has a non virtual destructor rapidjson document does not have a destructor at all when a pointer to rapidjson value is deleted the destructor for rapidjson document does not get called this causes the leak of document stack the solution is to mark the destructor virtual in value and also add a virtual destructor to document original issue reported on code google com by rous chromium org on may at | 1 |
36,211 | 7,868,322,431 | IssuesEvent | 2018-06-23 20:09:54 | StrikeNP/trac_test | https://api.github.com/repos/StrikeNP/trac_test | closed | Case ARM 3 year crashes due to floating-point overflow with the current code (Trac #681) | Migrated from Trac betlej@uwm.edu clubb_src defect | **Description**
We implemented a 3 year long simulation in CLUBB standalone years ago to test the code. Recently, I attempted to run this case to see if our statistics code works correctly with infrequent (one every simulated week) output. To my surprise, this case no longer works. It appears that temperatures close to the surface become unrealistically cold and cause a floating-point error. I think this case would run at one time.
One configuration that runs for at least a few weeks is:
128 level grid
Morrison microphysics
dt_main = 6sec
dt_rad = 60sec
The temperatures still seem very low and the profile of total water and theta both look unstable. This case is probably not of much interest at this point, but I wonder if we changed something in how we interface with the microphysics since the case was added? It crashes in less than a week in the default configuration.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/681
```json
{
"status": "closed",
"changetime": "2014-08-13T19:55:10",
"description": "'''Description'''\nWe implemented a 3 year long simulation in CLUBB standalone years ago to test the code. Recently, I attempted to run this case to see if our statistics code works correctly with infrequent (one every simulated week) output. To my surprise, this case no longer works. It appears that temperatures close to the surface become unrealistically cold and cause a floating-point error. I think this case would run at one time.\nOne configuration that runs for at least a few weeks is:\n128 level grid\nMorrison microphysics\ndt_main = 6sec\ndt_rad = 60sec\nThe temperatures still seem very low and the profile of total water and theta both look unstable. This case is probably not of much interest at this point, but I wonder if we changed something in how we interface with the microphysics since the case was added? It crashes in less than a week in the default configuration.",
"reporter": "dschanen@uwm.edu",
"cc": "vlarson@uwm.edu, raut@uwm.edu",
"resolution": "fixed",
"_ts": "1407959710652186",
"component": "clubb_src",
"summary": "Case ARM 3 year crashes due to floating-point overflow with the current code",
"priority": "minor",
"keywords": "",
"time": "2014-05-05T22:32:36",
"milestone": "",
"owner": "betlej@uwm.edu",
"type": "defect"
}
```
| 1.0 | Case ARM 3 year crashes due to floating-point overflow with the current code (Trac #681) - **Description**
We implemented a 3 year long simulation in CLUBB standalone years ago to test the code. Recently, I attempted to run this case to see if our statistics code works correctly with infrequent (one every simulated week) output. To my surprise, this case no longer works. It appears that temperatures close to the surface become unrealistically cold and cause a floating-point error. I think this case would run at one time.
One configuration that runs for at least a few weeks is:
128 level grid
Morrison microphysics
dt_main = 6sec
dt_rad = 60sec
The temperatures still seem very low and the profile of total water and theta both look unstable. This case is probably not of much interest at this point, but I wonder if we changed something in how we interface with the microphysics since the case was added? It crashes in less than a week in the default configuration.
Attachments:
[plot_explicit_ta_configs.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_explicit_ta_configs.maff)
[plot_new_pdf_config_1_plot_2.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_config_1_plot_2.maff)
[plot_combo_pdf_run_3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_combo_pdf_run_3.maff)
[plot_input_fields_rtp3_thlp3_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_input_fields_rtp3_thlp3_1.maff)
[plot_new_pdf_20180522_test_1.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_new_pdf_20180522_test_1.maff)
[plot_attempts_8_10.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempts_8_10.maff)
[plot_attempt_8_only.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_attempt_8_only.maff)
[plot_beta_1p3.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3.maff)
[plot_beta_1p3_all.maff](https://github.com/larson-group/trac_attachment_archive/blob/master/trac_test/822/plot_beta_1p3_all.maff)
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/681
```json
{
"status": "closed",
"changetime": "2014-08-13T19:55:10",
"description": "'''Description'''\nWe implemented a 3 year long simulation in CLUBB standalone years ago to test the code. Recently, I attempted to run this case to see if our statistics code works correctly with infrequent (one every simulated week) output. To my surprise, this case no longer works. It appears that temperatures close to the surface become unrealistically cold and cause a floating-point error. I think this case would run at one time.\nOne configuration that runs for at least a few weeks is:\n128 level grid\nMorrison microphysics\ndt_main = 6sec\ndt_rad = 60sec\nThe temperatures still seem very low and the profile of total water and theta both look unstable. This case is probably not of much interest at this point, but I wonder if we changed something in how we interface with the microphysics since the case was added? It crashes in less than a week in the default configuration.",
"reporter": "dschanen@uwm.edu",
"cc": "vlarson@uwm.edu, raut@uwm.edu",
"resolution": "fixed",
"_ts": "1407959710652186",
"component": "clubb_src",
"summary": "Case ARM 3 year crashes due to floating-point overflow with the current code",
"priority": "minor",
"keywords": "",
"time": "2014-05-05T22:32:36",
"milestone": "",
"owner": "betlej@uwm.edu",
"type": "defect"
}
```
| defect | case arm year crashes due to floating point overflow with the current code trac description we implemented a year long simulation in clubb standalone years ago to test the code recently i attempted to run this case to see if our statistics code works correctly with infrequent one every simulated week output to my surprise this case no longer works it appears that temperatures close to the surface become unrealistically cold and cause a floating point error i think this case would run at one time one configuration that runs for at least a few weeks is level grid morrison microphysics dt main dt rad the temperatures still seem very low and the profile of total water and theta both look unstable this case is probably not of much interest at this point but i wonder if we changed something in how we interface with the microphysics since the case was added it crashes in less than a week in the default configuration attachments migrated from json status closed changetime description description nwe implemented a year long simulation in clubb standalone years ago to test the code recently i attempted to run this case to see if our statistics code works correctly with infrequent one every simulated week output to my surprise this case no longer works it appears that temperatures close to the surface become unrealistically cold and cause a floating point error i think this case would run at one time none configuration that runs for at least a few weeks is level grid nmorrison microphysics ndt main ndt rad nthe temperatures still seem very low and the profile of total water and theta both look unstable this case is probably not of much interest at this point but i wonder if we changed something in how we interface with the microphysics since the case was added it crashes in less than a week in the default configuration reporter dschanen uwm edu cc vlarson uwm edu raut uwm edu resolution fixed ts component clubb src summary case arm year crashes due to floating point overflow with the current code priority minor keywords time milestone owner betlej uwm edu type defect | 1
12,638 | 20,605,627,672 | IssuesEvent | 2022-03-06 23:01:59 | DD2480-gr25/jabref | https://api.github.com/repos/DD2480-gr25/jabref | opened | Requirement R4 - Prompt user for shared database password on startup | assignment-4 requirement | When the shared database password is configured to be stored in memory, prompt the user for the password on startup. This way we can guarantee that it will always be available when needed.
In the case that credential manager password storing has been implemented and in-memory passwords are used as fallback, then prompting the user for the password immediately on startup will make them aware that the fallback is active (and they might have to take action). | 1.0 | Requirement R4 - Prompt user for shared database password on startup - When the shared database password is configured to be stored in memory, prompt the user for the password on startup. This way we can guarantee that it will always be available when needed.
In the case that credential manager password storing has been implemented and in-memory passwords are used as fallback, then prompting the user for the password immediately on startup will make them aware that the fallback is active (and they might have to take action). | non_defect | requirement prompt user for shared database password on startup when the shared database password is configured to be stored in memory prompt the user for the password on startup this way we can guarantee that it will always be available when needed in the case that credential manager password storing has been implemented and in memory passwords are used as fallback then prompting the user for the password immediately on startup will make them aware that the fallback is active and they might have to take action | 0 |
82,361 | 7,839,031,202 | IssuesEvent | 2018-06-18 12:28:22 | octavian-paraschiv/protone-suite | https://api.github.com/repos/octavian-paraschiv/protone-suite | reopened | DVD operation in Windows 7 causes switching to the Basic theme | Category-Player OS-All Priority-P2 Regression_Test_Selected ReportSource-DevQA ReportSource-EndUser Type-Defect | ```
Launch the Open DVD dialog and select a DVD to be played\r\nPress OK\r\nNotice
error message given by Windoes 7 and theme switched to Basic.\r\n\r\nSame
behavoir if the player is launched having a DVD item in the playlist.
```
Original issue reported on code.google.com by `octavian...@gmail.com` on 18 Apr 2013 at 7:07
| 1.0 | DVD operation in Windows 7 causes switching to the Basic theme - ```
Launch the Open DVD dialog and select a DVD to be played\r\nPress OK\r\nNotice
error message given by Windoes 7 and theme switched to Basic.\r\n\r\nSame
behavoir if the player is launched having a DVD item in the playlist.
```
Original issue reported on code.google.com by `octavian...@gmail.com` on 18 Apr 2013 at 7:07
| non_defect | dvd operation in windows causes switching to the basic theme launch the open dvd dialog and select a dvd to be played r npress ok r nnotice error message given by windoes and theme switched to basic r n r nsame behavoir if the player is launched having a dvd item in the playlist original issue reported on code google com by octavian gmail com on apr at | 0 |
38,740 | 8,952,884,793 | IssuesEvent | 2019-01-25 17:46:39 | svigerske/ipopt-donotuse | https://api.github.com/repos/svigerske/ipopt-donotuse | closed | configure: error: Please obtain the threadsafe (newer) version of MA27 | Ipopt MA27 defect | Issue created by migration from Trac.
Original creator: guest
Original creation time: 2012-12-30 09:07:56
Assignee: ipopt-team
Version: 3.10
I have tried to compile Ipopt-3.10.3 on Mac OS X 10.8, but the "configure" terminated by the following messages:
checking whether MA27 is threadsafe... no
configure: error: Please obtain the threadsafe (newer) version of MA27
configure: error: /bin/sh '../../../ThirdParty/HSL/configure' failed for ThirdParty/HSL
Any advice as to how I can get around this?
Thanks | 1.0 | configure: error: Please obtain the threadsafe (newer) version of MA27 - Issue created by migration from Trac.
Original creator: guest
Original creation time: 2012-12-30 09:07:56
Assignee: ipopt-team
Version: 3.10
I have tried to compile Ipopt-3.10.3 on Mac OS X 10.8, but the "configure" terminated by the following messages:
checking whether MA27 is threadsafe... no
configure: error: Please obtain the threadsafe (newer) version of MA27
configure: error: /bin/sh '../../../ThirdParty/HSL/configure' failed for ThirdParty/HSL
Any advice as to how I can get around this?
Thanks | defect | configure error please obtain the threadsafe newer version of issue created by migration from trac original creator guest original creation time assignee ipopt team version i have tried to compile ipopt on mac os x but the configure terminated by the following messages checking whether is threadsafe no configure error please obtain the threadsafe newer version of configure error bin sh thirdparty hsl configure failed for thirdparty hsl any advice as to how i can get around this thanks | 1 |
326,064 | 27,975,350,589 | IssuesEvent | 2023-03-25 14:12:45 | dudykr/stc | https://api.github.com/repos/dudykr/stc | opened | Fix unit test for `tests/pass-only/typeNarrowing/.do-while-1.ts` | tsc-unit-test |
---
Related test: https://github.com/dudykr/stc/blob/main/crates/stc_ts_file_analyzer/tests/pass-only/typeNarrowing/.do-while-1.ts
---
This issue is created by sync script.
| 1.0 | Fix unit test for `tests/pass-only/typeNarrowing/.do-while-1.ts` -
---
Related test: https://github.com/dudykr/stc/blob/main/crates/stc_ts_file_analyzer/tests/pass-only/typeNarrowing/.do-while-1.ts
---
This issue is created by sync script.
| non_defect | fix unit test for tests pass only typenarrowing do while ts related test this issue is created by sync script | 0 |
69,862 | 22,700,681,719 | IssuesEvent | 2022-07-05 10:21:08 | vector-im/element-ios | https://api.github.com/repos/vector-im/element-ios | opened | Rage-Shake not working: Request failed: internal server error (500) | T-Defect | ### Steps to reproduce
1. Activate rage shake by shaking your device
2. [some steps I cannot recall]
3. Enter error message (I use german localisation and entered the following text:)
Tilt the device, then hold it upright again. The chat is then empty; you no longer see any messages. You have to switch out of the chat and back in to see the chat history again.
4. Click on "Sende Protokolldaten" or "Sende Bildschirmfoto" (does not matter whether these are ticked or not)
5. Click on "Senden"
### Outcome
#### What did you expect?
A rage-shake issue is opened
#### What happened instead?
An error message appears saying:
Request failed: internal server error (500)
### Your phone model
iPhone12
### Operating system version
iOS15.5(19F77)
### Application version
1.18.9 (20220615113437)
### Homeserver
Synapse 1.61.1
### Will you send logs?
No | 1.0 | Rage-Shake not working: Request failed: internal server error (500) - ### Steps to reproduce
1. Activate rage shake by shaking your device
2. [some steps I cannot recall]
3. Enter error message (I use german localisation and entered the following text:)
Tilt the device, then hold it upright again. The chat is then empty; you no longer see any messages. You have to switch out of the chat and back in to see the chat history again.
4. Click on "Sende Protokolldaten" or "Sende Bildschirmfoto" (does not matter whether these are ticked or not)
5. Click on "Senden"
### Outcome
#### What did you expect?
A rage-shake issue is opened
#### What happened instead?
An error message appears saying:
Request failed: internal server error (500)
### Your phone model
iPhone12
### Operating system version
iOS15.5(19F77)
### Application version
1.18.9 (20220615113437)
### Homeserver
Synapse 1.61.1
### Will you send logs?
No | defect | rage shake not working request failed internal server error steps to reproduce activate rage shake by shaking your device enter error message i use german localisation and entered the following text tilt the device then hold it upright again the chat is then empty you no longer see any messages you have to switch out of the chat and back in to see the chat history again click on sende protokolldaten or sende bildschirmfoto does not matter whether these are ticked or not click on senden outcome what did you expect a rage shake issue is opened what happened instead an error message appears saying request failed internal server error your phone model operating system version application version homeserver synapse will you send logs no | 1
233,084 | 17,850,638,791 | IssuesEvent | 2021-09-04 02:01:46 | intelligent-environments-lab/utx000 | https://api.github.com/repos/intelligent-environments-lab/utx000 | closed | UTx000 Graphic | documentation no-issue-activity | It would be in my best interest to create a graphic that highlights my research with UTx000. | 1.0 | UTx000 Graphic - It would be in my best interest to create a graphic that highlights my research with UTx000. | non_defect | graphic it would be in my best interest to create a graphic that highlights my research with | 0 |
271,459 | 29,505,398,953 | IssuesEvent | 2023-06-03 08:35:38 | Azure/PSRule.Rules.Azure | https://api.github.com/repos/Azure/PSRule.Rules.Azure | closed | Azure PostgreSQL single and flexible servers should have AAD authentication configured | rule: postgresql pillar: security | ### Existing rule
_No response_
### Suggested rule
AAD authentication improves security by ensuring that Azure PostgreSQL databases can be accessed by Azure Active Directory identities, which are far more secure than local authentication methods.
This is supported for both single and flexible servers deployment model.
### Pillar
Security
### Additional context
- [Use modern password protection](https://learn.microsoft.com/azure/architecture/framework/security/design-identity-authentication#use-modern-password-protection)
- [Azure Active Directory Authentication with PostgreSQL Flexible Server](https://learn.microsoft.com/azure/postgresql/flexible-server/concepts-azure-ad-authentication#how-azure-ad-works-in-flexible-server)
- [Use Azure AD for authentication with Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/azure/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication)
- [Use Azure AD for authentication with Azure Database for PostgreSQL - Single Server](https://learn.microsoft.com/azure/postgresql/single-server/how-to-configure-sign-in-azure-ad-authentication)
- [Azure Active Directory Authentication (Single Server VS Flexible Server)](https://learn.microsoft.com/azure/postgresql/flexible-server/concepts-azure-ad-authentication#azure-active-directory-authentication-single-server-vs-flexible-server)
- [Azure security baseline for Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/security/benchmark/azure/baselines/azure-database-for-postgresql-flexible-server-security-baseline)
- [Azure security baseline for Azure Database for PostgreSQL - Single Server](https://learn.microsoft.com/security/benchmark/azure/baselines/postgresql-security-baseline)
- [IM-1: Use centralized identity and authentication system](https://learn.microsoft.com/security/benchmark/azure/baselines/azure-database-for-postgresql-flexible-server-security-baseline#im-1-use-centralized-identity-and-authentication-system)
- [Azure deployment reference Flexible Server](https://learn.microsoft.com/azure/templates/microsoft.dbforpostgresql/flexibleservers/administrators)
- [Azure deployment reference Single Server](https://learn.microsoft.com/azure/templates/microsoft.dbforpostgresql/servers/administrators) | True | Azure PostgreSQL single and flexible servers should have AAD authentication configured - ### Existing rule
_No response_
### Suggested rule
AAD authentication improves security by ensuring that Azure PostgreSQL databases can be accessed by Azure Active Directory identities, which are far more secure than local authentication methods.
This is supported for both single and flexible servers deployment model.
### Pillar
Security
### Additional context
- [Use modern password protection](https://learn.microsoft.com/azure/architecture/framework/security/design-identity-authentication#use-modern-password-protection)
- [Azure Active Directory Authentication with PostgreSQL Flexible Server](https://learn.microsoft.com/azure/postgresql/flexible-server/concepts-azure-ad-authentication#how-azure-ad-works-in-flexible-server)
- [Use Azure AD for authentication with Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/azure/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication)
- [Use Azure AD for authentication with Azure Database for PostgreSQL - Single Server](https://learn.microsoft.com/azure/postgresql/single-server/how-to-configure-sign-in-azure-ad-authentication)
- [Azure Active Directory Authentication (Single Server VS Flexible Server)](https://learn.microsoft.com/azure/postgresql/flexible-server/concepts-azure-ad-authentication#azure-active-directory-authentication-single-server-vs-flexible-server)
- [Azure security baseline for Azure Database for PostgreSQL - Flexible Server](https://learn.microsoft.com/security/benchmark/azure/baselines/azure-database-for-postgresql-flexible-server-security-baseline)
- [Azure security baseline for Azure Database for PostgreSQL - Single Server](https://learn.microsoft.com/security/benchmark/azure/baselines/postgresql-security-baseline)
- [IM-1: Use centralized identity and authentication system](https://learn.microsoft.com/security/benchmark/azure/baselines/azure-database-for-postgresql-flexible-server-security-baseline#im-1-use-centralized-identity-and-authentication-system)
- [Azure deployment reference Flexible Server](https://learn.microsoft.com/azure/templates/microsoft.dbforpostgresql/flexibleservers/administrators)
- [Azure deployment reference Single Server](https://learn.microsoft.com/azure/templates/microsoft.dbforpostgresql/servers/administrators) | non_defect | azure postgresql single and flexible servers should have aad authentication configured existing rule no response suggested rule aad authentication improves security by ensuring that azure postgresql databases can be accessed by azure active directory identities which are far more secure than local methods this is supported for both single and flexible servers deployment model pillar security additional context | 0 |
132,130 | 18,522,641,074 | IssuesEvent | 2021-10-20 16:35:07 | hibernate/hibernate-reactive | https://api.github.com/repos/hibernate/hibernate-reactive | opened | Add withTransaction(Session) | design | Should we overload `SessionFactory#withTransaction` so that it can only accept the session?
```
<T> Uni<T> withTransaction(Function<Session, Uni<T>> work);
<T> Uni<T> withStatelessTransaction(Function<StatelessSession, Uni<T>> work);
```
Most of the time, when I use `withTransaction`, I don't really care about the tx object.
Overloading the methods will make the code simpler in some cases:
For example:
```
sf.withTransaction( (session, tx) -> session.persist( entity ) );
```
becomes
```
sf.withTransaction( session -> session.persist( entity ) );
```
It will also make it easier to use method reference when the transaction parameter is not useful.
(Similar for `Stage.SessionFactory`)
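(Editorial aside, not part of the original issue report.) The convenience overload requested here can simply delegate to the existing two-argument form and discard the transaction handle. The sketch below illustrates that delegation pattern in Python for brevity — `run_with_transaction` is a hypothetical stand-in for the real Mutiny-based runner, not actual Hibernate Reactive API:

```python
def run_with_transaction(work):
    """Hypothetical stand-in for the existing two-argument runner.

    The real SessionFactory#withTransaction would open a session, begin a
    transaction, invoke work(session, tx), then commit or roll back.
    """
    session, tx = "session", object()  # dummy session and transaction handle
    return work(session, tx)

def with_transaction(work):
    # The proposed convenience overload: callers that do not need the
    # transaction handle pass a one-argument callback, which we adapt.
    return run_with_transaction(lambda session, tx: work(session))

result = with_transaction(lambda session: session.upper())
print(result)  # -> SESSION
```

The same shape would work for `withStatelessTransaction`; in Java the adaptation is a single lambda, with `withTransaction(work)` delegating to `withTransaction((s, tx) -> work.apply(s))`.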
| 1.0 | Add withTransaction(Session) - Should we overload `SessionFactory#withTransaction` so that it can only accept the session?
```
<T> Uni<T> withTransaction(Function<Session, Uni<T>> work);
<T> Uni<T> withStatelessTransaction(Function<StatelessSession, Uni<T>> work);
```
Most of the time, when I use `withTransaction`, I don't really care about the tx object.
Overloading the methods will make the code simpler in some cases:
For example:
```
sf.withTransaction( (session, tx) -> session.persist( entity ) );
```
becomes
```
sf.withTransaction( session -> session.persist( entity ) );
```
It will also make it easier to use method reference when the transaction parameter is not useful.
(Similar for `Stage.SessionFactory`)
| non_defect | add withtransaction session should we overload sessionfactory withtransaction so that it can only accept the session uni withtransaction function work uni withstatelesstransaction function work most of the time when i use withtransaction i don t really care about the tx object overloading the methods will make the code simpler in some cases for example sf withtransaction session tx session persist entity becomes sf withtransaction session session persist entity it will also make it easier to use method reference when the transaction parameter is not useful similar for stage sessionfactory | 0
74,689 | 25,262,624,793 | IssuesEvent | 2022-11-16 00:32:42 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | opened | linux6.0.7-rt-xanmod does not build ZFS module | Type: Defect | <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Void Linux
Distribution Version | Rolling release
Kernel Version | 6.0.8
Architecture |x86_64
OpenZFS Version |2.1.6
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Xanmod rt kernel versions do not build the OpenZFS module; non-rt versions build normally. The same problem occurs on Ubuntu.
### Describe how to reproduce the problem
In Void Linux you need to use this repository to create kernel:
https://notabug.org/Marcoapc/voidxanmodK
On Ubuntu you need to follow the steps from the official Xanmod website:
https://xanmod.org/
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
**Building DKMS module: zfs-2.1.6... FAILED!**
**Generating kernel module dependency lists... done.**
**Executing post-install kernel hook: 20-initramfs ...**
**grep: warning: stray \ before /**
**grep: warning: stray \ before /**
**dracut-install: Failed to find module 'zfs'**
**dracut: FAILED: /usr/lib/dracut/dracut-install -D /var/tmp/dracut.b6n1va/initramfs --kerneldir /lib/modules/6.0.7-rt11-xanmod1_1/ -m zfs**
| 1.0 | linux6.0.7-rt-xanmod does not build ZFS module - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Void Linux
Distribution Version | Rolling release
Kernel Version | 6.0.8
Architecture |x86_64
OpenZFS Version |2.1.6
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Xanmod rt kernel versions do not build the OpenZFS module; non-rt versions build normally. The same problem occurs on Ubuntu.
### Describe how to reproduce the problem
In Void Linux you need to use this repository to create kernel:
https://notabug.org/Marcoapc/voidxanmodK
On Ubuntu you need to follow the steps from the official Xanmod website:
https://xanmod.org/
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
**Building DKMS module: zfs-2.1.6... FAILED!**
**Generating kernel module dependency lists... done.**
**Executing post-install kernel hook: 20-initramfs ...**
**grep: warning: stray \ before /**
**grep: warning: stray \ before /**
**dracut-install: Failed to find module 'zfs'**
**dracut: FAILED: /usr/lib/dracut/dracut-install -D /var/tmp/dracut.b6n1va/initramfs --kerneldir /lib/modules/6.0.7-rt11-xanmod1_1/ -m zfs**
| defect | rt xanmod does not build zfs module thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name void linux distribution version rolling release kernel version architecture openzfs version command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing xanmod rt versions do not compile openzfs module non rt versions compile normally the same problem occurs on ubuntu describe how to reproduce the problem in void linux you need to use this repository to create kernel on ubuntu you need to follow the steps from the official xanmod website include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with building dkms module zfs failed generating kernel module dependency lists done executing post install kernel hook initramfs grep warning stray before grep warning stray before dracut install failed to find module zfs dracut failed usr lib dracut dracut install d var tmp dracut initramfs kerneldir lib modules m zfs | 1 |
158,088 | 24,784,812,627 | IssuesEvent | 2022-10-24 08:55:14 | npocccties/chiloportal | https://api.github.com/repos/npocccties/chiloportal | closed | CHiLO-Portal personas | design | ### University
The university registers the digital badges for the online courses it wants to offer to teachers in the portal, organized by attributes such as category, level, and target audience (perhaps with tags?).
### Teachers
There are two triggers for a teacher to start online training:
1. To solve a problem they themselves face or to achieve a goal (self-directed)
2. Because the board of education told them to (externally prompted)
Ideally we aim for 1, but that requires some kind of mechanism, so for the time being we expect it to be 2.
Possible ways for teachers to select courses include the following (to be discussed):
- Choose from the badge attributes classified by the university (analogous to searching a central wholesale market)
- Choose from the teacher-development indicators mapped by the board of education (analogous to searching a retail store)
- Search by free-text keywords
- Choose by ease of completion, e.g. the type of final assignment or the video playback time
- Choose by the number of training sessions the badge can substitute for
### Board of Education
The board of education specifies which of its own teacher-development indicators each badge registered by the university corresponds to.
The board of education instructs teachers to take online courses and earn badges.
<img width="976" alt="image" src="https://user-images.githubusercontent.com/9005132/183550516-d616558a-61ef-4720-9c9c-cce0491c4f46.png">
| 1.0 | CHiLO-Portal personas - ### University
The university registers the digital badges for the online courses it wants to offer to teachers in the portal, organized by attributes such as category, level, and target audience (perhaps with tags?).
### Teachers
There are two triggers for a teacher to start online training:
1. To solve a problem they themselves face or to achieve a goal (self-directed)
2. Because the board of education told them to (externally prompted)
Ideally we aim for 1, but that requires some kind of mechanism, so for the time being we expect it to be 2.
Possible ways for teachers to select courses include the following (to be discussed):
- Choose from the badge attributes classified by the university (analogous to searching a central wholesale market)
- Choose from the teacher-development indicators mapped by the board of education (analogous to searching a retail store)
- Search by free-text keywords
- Choose by ease of completion, e.g. the type of final assignment or the video playback time
- Choose by the number of training sessions the badge can substitute for
### Board of Education
The board of education specifies which of its own teacher-development indicators each badge registered by the university corresponds to.
The board of education instructs teachers to take online courses and earn badges.
<img width="976" alt="image" src="https://user-images.githubusercontent.com/9005132/183550516-d616558a-61ef-4720-9c9c-cce0491c4f46.png">
| non_defect | chilo portal personas university the university registers the digital badges for the online courses it wants to offer to teachers in the portal by attributes such as category level and target audience with tags teachers to solve a problem they themselves face or to achieve a goal self directed because the board of education told them to externally prompted that requires some kind of mechanism possible ways for teachers to select courses include the following to be discussed choose from the badge attributes classified by the university analogous to searching a central wholesale market choose from the teacher development indicators mapped by the board of education analogous to searching a retail store search by free text keywords choose by ease of completion such as the type of final assignment or the video playback time choose by the number of training sessions the badge can substitute for board of education the board of education specifies which of its own teacher development indicators each badge registered by the university corresponds to the board of education instructs teachers to take online courses and earn badges img width alt image src | 0
77,450 | 26,996,270,903 | IssuesEvent | 2023-02-10 01:28:43 | SeleniumHQ/selenium | https://api.github.com/repos/SeleniumHQ/selenium | closed | [🐛 Bug]: Error when creating CDP connection with IE Driver | I-defect needs-triaging | ### What happened?
I'm trying to intercept network logs while using IE driver.
When I call the following code:
```javascript
const connection = await driver.createCDPConnection('page');
```
I get the following error:
```
Uncaught TypeError: debuggerAddress.match is not a function
```
### How can we reproduce the issue?
```javascript
const { Builder, By, Key, until } = require('selenium-webdriver');
const { Options } = require('selenium-webdriver/ie');
(async () => {
let ieOptions = new Options();
ieOptions.setEdgeChromium(true);
ieOptions.setEdgePath('C:/Program Files (x86)/Microsoft/Edge/Application/msedge.exe');
ieOptions.initialBrowserUrl('http://www.bing.com');
ieOptions.ignoreZoomSetting(true);
ieOptions.requireWindowFocus(true);
let driver = await new Builder().forBrowser('ie').setIeOptions(ieOptions).build();
try {
const connection = await driver.createCDPConnection('page');
let httpResponse = new HttpResponse();
await driver.onIntercept(connection, httpResponse, async function () {
console.log('http response: ', httpResponse);
});
await driver.get('https://www.google.com')
} finally {
await driver.quit();
}
})();
```
### Relevant log output
```shell
TypeError: debuggerAddress.match is not a function
at WebDriver.getWsUrl (C:\Users\$USER$\Projects\a\testProject\node_modules\selenium-webdriver\lib\webdriver.js:1297:25)
at WebDriver.createCDPConnection (C:\Users\$USER$\Projects\a\testProject\node_modules\selenium-webdriver\lib\webdriver.js:1226:30)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async C:\Users\$USER$\Projects\a\testProject\index.js:18:24
```
### Operating System
Windows 11
### Selenium version
4.8.0.0
### What are the browser(s) and version(s) where you see this issue?
Internet Explorer 11
### What are the browser driver(s) and version(s) where you see this issue?
IE Driver 4.8.0.0
### Are you using Selenium Grid?
_No response_ | 1.0 | [🐛 Bug]: Error when creating CDP connection with IE Driver - ### What happened?
I'm trying to intercept network logs while using IE driver.
When I call the following code:
```javascript
const connection = await driver.createCDPConnection('page');
```
I get the following error:
```
Uncaught TypeError: debuggerAddress.match is not a function
```
### How can we reproduce the issue?
```javascript
const { Builder, By, Key, until } = require('selenium-webdriver');
const { Options } = require('selenium-webdriver/ie');
(async () => {
let ieOptions = new Options();
ieOptions.setEdgeChromium(true);
ieOptions.setEdgePath('C:/Program Files (x86)/Microsoft/Edge/Application/msedge.exe');
ieOptions.initialBrowserUrl('http://www.bing.com');
ieOptions.ignoreZoomSetting(true);
ieOptions.requireWindowFocus(true);
let driver = await new Builder().forBrowser('ie').setIeOptions(ieOptions).build();
try {
const connection = await driver.createCDPConnection('page');
let httpResponse = new HttpResponse();
await driver.onIntercept(connection, httpResponse, async function () {
console.log('http response: ', httpResponse);
});
await driver.get('https://www.google.com')
} finally {
await driver.quit();
}
})();
```
### Relevant log output
```shell
TypeError: debuggerAddress.match is not a function
at WebDriver.getWsUrl (C:\Users\$USER$\Projects\a\testProject\node_modules\selenium-webdriver\lib\webdriver.js:1297:25)
at WebDriver.createCDPConnection (C:\Users\$USER$\Projects\a\testProject\node_modules\selenium-webdriver\lib\webdriver.js:1226:30)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async C:\Users\$USER$\Projects\a\testProject\index.js:18:24
```
### Operating System
Windows 11
### Selenium version
4.8.0.0
### What are the browser(s) and version(s) where you see this issue?
Internet Explorer 11
### What are the browser driver(s) and version(s) where you see this issue?
IE Driver 4.8.0.0
### Are you using Selenium Grid?
_No response_ | defect | error when creating cdp connection with ie driver what happened i m trying to intercept network logs while using ie driver when i call the following code javascript const connection await driver createcdpconnection page i get the following error uncaught typeerror debuggeraddress match is not a function how can we reproduce the issue javascript const builder by key until require selenium webdriver const options require selenium webdriver ie async let ieoptions new options ieoptions setedgechromium true ieoptions setedgepath c program files microsoft edge application msedge exe ieoptions initialbrowserurl ieoptions ignorezoomsetting true ieoptions requirewindowfocus true let driver await new builder forbrowser ie setieoptions ieoptions build try const connection await driver createcdpconnection page let httpresponse new httpresponse await driver onintercept connection httpresponse async function console log http response httpresponse await driver get finally await driver quit relevant log output shell typeerror debuggeraddress match is not a function at webdriver getwsurl c users user projects a testproject node modules selenium webdriver lib webdriver js at webdriver createcdpconnection c users user projects a testproject node modules selenium webdriver lib webdriver js at processticksandrejections node internal process task queues at async c users user projects a testproject index js operating system windows selenium version what are the browser s and version s where you see this issue internet explorer what are the browser driver s and version s where you see this issue ie driver are you using selenium grid no response | 1 |
98,987 | 16,389,589,169 | IssuesEvent | 2021-05-17 14:35:42 | Thanraj/linux-1 | https://api.github.com/repos/Thanraj/linux-1 | opened | CVE-2020-14386 (High) detected in linuxv5.0 | security vulnerability | ## CVE-2020-14386 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.0</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-1/commits/9738d89d33cb0f3ac708908509b82eafc007d557">9738d89d33cb0f3ac708908509b82eafc007d557</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-1/net/packet/af_packet.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel before 5.9-rc4. Memory corruption can be exploited to gain root privileges from unprivileged processes. The highest threat from this vulnerability is to data confidentiality and integrity.
<p>Publish Date: 2020-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14386>CVE-2020-14386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14386</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: v5.9-rc4,v5.4.64</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-14386 (High) detected in linuxv5.0 - ## CVE-2020-14386 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.0</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-1/commits/9738d89d33cb0f3ac708908509b82eafc007d557">9738d89d33cb0f3ac708908509b82eafc007d557</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-1/net/packet/af_packet.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel before 5.9-rc4. Memory corruption can be exploited to gain root privileges from unprivileged processes. The highest threat from this vulnerability is to data confidentiality and integrity.
<p>Publish Date: 2020-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14386>CVE-2020-14386</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14386">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14386</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: v5.9-rc4,v5.4.64</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in cve high severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch master vulnerable source files linux net packet af packet c vulnerability details a flaw was found in the linux kernel before memory corruption can be exploited to gain root privileges from unprivileged processes the highest threat from this vulnerability is to data confidentiality and integrity publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
11,287 | 2,648,822,739 | IssuesEvent | 2015-03-14 09:04:57 | channingwalton/ergo | https://api.github.com/repos/channingwalton/ergo | closed | for eclipse menu without keybinding . Some are logged, some not. | auto-migrated Priority-Medium Type-Defect | ```
for example, "Help-->About eclipse SDK" is logged while "Help-->Cheat
sheet " is not logged.
IS it possible to log all menu(with and without key binding) event.
```
Original issue reported on code.google.com by `xufengb...@gmail.com` on 26 Feb 2008 at 2:34 | 1.0 | for eclipse menu without keybinding . Some are logged, some not. - ```
for example, "Help-->About eclipse SDK" is logged while "Help-->Cheat
sheet " is not logged.
IS it possible to log all menu(with and without key binding) event.
```
Original issue reported on code.google.com by `xufengb...@gmail.com` on 26 Feb 2008 at 2:34 | defect | for eclipse menu without keybinding some are logged some not for example help about eclipse sdk is logged while help cheat sheet is not logged is it possible to log all menu with and without key binding event original issue reported on code google com by xufengb gmail com on feb at | 1 |
69,195 | 22,271,593,251 | IssuesEvent | 2022-06-10 12:54:05 | scipy/scipy | https://api.github.com/repos/scipy/scipy | opened | BUG: scipy.signal.max_len_seq / _max_len_seq_inner incorrect taps | defect | ### Describe your issue.
Incorrect sequences obtained from scipy.signal.max_len_seq / _max_len_seq_inner.
Looking through the code, the problem appears to be with the taps.
Line 19 in _max_len_seq_inner uses variable "feedback" that is initialised to a_0 where a_0 is defined in Figure 1 on the wikipedia page https://en.wikipedia.org/wiki/Maximum_length_sequence
The effect is that there is an additional 0th tap. This allows there to be two 0th taps and always at least one 0th tap!
I believe the fix is to change lines 16 and 17 from
(16) feedback = state[idx]
(17) seq[i] = feedback
to
(16) feedback = 0
(17) seq[i] = state[idx]
but this needs testing. I suggest testing against the two examples given on slide 21 of http://pages.hmc.edu/harris/class/e11/lect7.pdf (beware the notation and ordering of taps is different! See reproducing code below for clarity.
_max_len_seq_inner is here:
https://github.com/scipy/scipy/blob/b80267e9b44169c1ae4ba691bce1e60b66104cbc/scipy/signal/_max_len_seq_inner.py#L19
### Reproducing Code Example
```python
# see slide 21 of http://pages.hmc.edu/harris/class/e11/lect7.pdf for these examples
# when these assertions fail the code is incorrect:
import numpy as np
from scipy.signal import max_len_seq
s1 = [1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1]
s2 = [1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0]
assert np.all(max_len_seq(5, state=[1,0,0,0,0], taps=[0, 1, 2, 3])[0] == s1)
assert np.all(max_len_seq(5, state=[1,0,0,0,0], taps=[0, 2])[0] == s2)
# removing the 0th tap currently leads to equality, when it shouldn't do!
assert not np.all(max_len_seq(5, state=[1,0,0,0,0], taps=[1, 2, 3])[0] == s1)
assert not np.all(max_len_seq(5, state=[1,0,0,0,0], taps=[2])[0] == s2)
```
### Error message
```shell
None
```
### SciPy/NumPy/Python version information
1.8.1 1.22.4 sys.version_info(major=3, minor=9, micro=10, releaselevel='final', serial=0) | 1.0 | BUG: scipy.signal.max_len_seq / _max_len_seq_inner incorrect taps - ### Describe your issue.
Incorrect sequences obtained from scipy.signal.max_len_seq / _max_len_seq_inner.
Looking through the code, the problem appears to be with the taps.
Line 19 in _max_len_seq_inner uses variable "feedback" that is initialised to a_0 where a_0 is defined in Figure 1 on the wikipedia page https://en.wikipedia.org/wiki/Maximum_length_sequence
The effect is that there is an additional 0th tap. This allows there to be two 0th taps and always at least one 0th tap!
I believe the fix is to change lines 16 and 17 from
(16) feedback = state[idx]
(17) seq[i] = feedback
to
(16) feedback = 0
(17) seq[i] = state[idx]
but this needs testing. I suggest testing against the two examples given on slide 21 of http://pages.hmc.edu/harris/class/e11/lect7.pdf (beware the notation and ordering of taps is different! See reproducing code below for clarity.
_max_len_seq_inner is here:
https://github.com/scipy/scipy/blob/b80267e9b44169c1ae4ba691bce1e60b66104cbc/scipy/signal/_max_len_seq_inner.py#L19
### Reproducing Code Example
```python
# see slide 21 of http://pages.hmc.edu/harris/class/e11/lect7.pdf for these examples
# when these assertions fail the code is incorrect:
import numpy as np
from scipy.signal import max_len_seq
s1 = [1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1]
s2 = [1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0]
assert np.all(max_len_seq(5, state=[1,0,0,0,0], taps=[0, 1, 2, 3])[0] == s1)
assert np.all(max_len_seq(5, state=[1,0,0,0,0], taps=[0, 2])[0] == s2)
# removing the 0th tap currently leads to equality, when it shouldn't do!
assert not np.all(max_len_seq(5, state=[1,0,0,0,0], taps=[1, 2, 3])[0] == s1)
assert not np.all(max_len_seq(5, state=[1,0,0,0,0], taps=[2])[0] == s2)
```
### Error message
```shell
None
```
### SciPy/NumPy/Python version information
1.8.1 1.22.4 sys.version_info(major=3, minor=9, micro=10, releaselevel='final', serial=0) | defect | bug scipy signal max len seq max len seq inner incorrect taps describe your issue incorrect sequences obtained from scipy signal max len seq max len seq inner looking through the code the problem appears to be with the taps line in max len seq inner uses variable feedback that is initialised to a where a is defined in figure on the wikipedia page the effect is that there is an additional tap this allows there to be two taps and always at least one tap i believe the fix is to change lines and from feedback state seq feedback to feedback seq state but this needs testing i suggest testing against the two examples given on slide of beware the notation and ordering of taps is different see reproducing code below for clarity max len seq inner is here reproducing code example python see slide of for these examples when these assertions fail the code is incorrect import numpy as np from scipy signal import max len seq assert np all max len seq state taps assert np all max len seq state taps removing the tap currently leads to equality when it shouldn t do assert not np all max len seq state taps assert not np all max len seq state taps error message shell none scipy numpy python version information sys version info major minor micro releaselevel final serial | 1 |
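The tap dispute in the scipy record above comes down to which register cells feed the XOR. As a standalone sanity check, a Fibonacci LFSR can be sketched in a few lines of plain Python — this is an illustrative sketch with its own explicitly stated cell numbering, not scipy's `_max_len_seq_inner`, and the function name is hypothetical:

```python
def lfsr_bits(nbits, taps, length, seed=None):
    """Fibonacci LFSR sketch. Cells are numbered 1..nbits from the
    feedback end, so taps=(3, 5) implements a[k] = a[k-3] ^ a[k-5],
    i.e. the primitive polynomial x^5 + x^2 + 1 for nbits=5, whose
    output repeats with maximal period 2**5 - 1 = 31."""
    state = list(seed) if seed is not None else [1] * nbits
    assert any(state), "the all-zero state is a fixed point"
    out = []
    for _ in range(length):
        out.append(state[-1])      # the oldest cell is the output bit
        fb = 0
        for t in taps:             # XOR of the tapped cells only --
            fb ^= state[t - 1]     # no implicit extra 0th tap
        state = [fb] + state[:-1]  # shift; newest cell lands at index 0
    return out

bits = lfsr_bits(5, (3, 5), 62)
assert bits[:31] == bits[31:]      # the state returns after 31 steps
assert sum(bits[:31]) == 16        # m-sequence balance: 16 ones, 15 zeros
```

Because 31 is prime, any irreducible degree-5 polynomial is primitive, so the period and balance checks hold for any nonzero seed; they make a convention-independent reference when comparing tap orderings like the ones debated in the report.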
75,551 | 25,912,313,942 | IssuesEvent | 2022-12-15 14:51:09 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | opened | Sometimes excessive delay truncating rpmdb.sqlite-shm | Type: Defect | <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Fedora
Distribution Version | 36
Kernel Version | 5.15.82
Architecture |x86_64
OpenZFS Version |2.1.7
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Usually `rpm` commands execute quickly, but pretty often (10% of the cases) there is 4 second extra delay:
```
16:22:55.694445 openat(AT_FDCWD, "/usr/lib/sysimage/rpm/rpmdb.sqlite-shm", O_RDWR|O_CREAT|O_NOFOLLOW|O_CLOEXEC, 0644) = 6 <0.000086>
16:22:55.694563 newfstatat(6, "", {st_dev=makedev(0, 0x1e), st_ino=27231358, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=131072, st_blocks=18, st_size=32768, st_atime=1671114175 /* 2022-12-15T16:22:55.105109260+0200 */, st_atime_nsec=105109260, st_mtime=1671114175 /* 2022-12-15T16:22:55.106109257+0200 */, st_mtime_nsec=106109257, st_ctime=1671114175 /* 2022-12-15T16:22:55.106109257+0200 */, st_ctime_nsec=106109257}, AT_EMPTY_PATH) = 0 <0.000011>
16:22:55.694692 geteuid() = 0 <0.000010>
16:22:55.694759 fchown(6, 0, 0) = 0 <0.000047>
16:22:55.694855 fcntl(6, F_GETLK, {l_type=F_UNLCK, l_whence=SEEK_SET, l_start=128, l_len=1, l_pid=0}) = 0 <0.000010>
16:22:55.694909 fcntl(6, F_SETLK, {l_type=F_WRLCK, l_whence=SEEK_SET, l_start=128, l_len=1}) = 0 <0.000009>
16:22:55.694953 ftruncate(6, 3) = 0 <4.372489>
```
I have ZFS as root fs running on top of **LUKS2**. NVMe Corsair MP510 (Max Random Write QD32 IOMeter: Up to _440K_ IOPS) with practically zero other IO operations when I run `rpm`.
### Describe how to reproduce the problem
Just try rpm again.
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
zpool show no errors and:
```
NAME PROPERTY VALUE SOURCE
tank type filesystem -
tank creation Wed Nov 10 15:40 2021 -
tank used 1.39T -
tank available 296G -
tank referenced 96K -
tank compressratio 1.29x -
tank mounted no -
tank quota none default
tank reservation none default
tank recordsize 128K default
tank mountpoint /tank default
tank sharenfs off default
tank checksum on default
tank compression zstd-3 local
tank atime on default
tank devices on default
tank exec on default
tank setuid on default
tank readonly off default
tank zoned off default
tank snapdir hidden default
tank aclmode discard default
tank aclinherit restricted default
tank createtxg 1 -
tank canmount off local
tank xattr sa local
tank copies 1 default
tank version 5 -
tank utf8only on -
tank normalization formD -
tank casesensitivity sensitive -
tank vscan off default
tank nbmand off default
tank sharesmb off default
tank refquota none default
tank refreservation none default
tank guid 8575689710526589949 -
tank primarycache all default
tank secondarycache all default
tank usedbysnapshots 0B -
tank usedbydataset 96K -
tank usedbychildren 1.39T -
tank usedbyrefreservation 0B -
tank logbias latency default
tank objsetid 54 -
tank dedup off default
tank mlslabel none default
tank sync standard default
tank dnodesize auto local
tank refcompressratio 1.00x -
tank written 0 -
tank logicalused 1.77T -
tank logicalreferenced 42K -
tank volmode default default
tank filesystem_limit none default
tank snapshot_limit none default
tank filesystem_count none default
tank snapshot_count none default
tank snapdev hidden default
tank acltype posix local
tank context none default
tank fscontext none default
tank defcontext none default
tank rootcontext none default
tank relatime on local
tank redundant_metadata all default
tank overlay on default
tank encryption off default
tank keylocation none default
tank keyformat none default
tank pbkdf2iters 0 default
tank special_small_blocks 0 default
```
| 1.0 | Sometimes excessive delay truncating rpmdb.sqlite-shm - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Fedora
Distribution Version | 36
Kernel Version | 5.15.82
Architecture |x86_64
OpenZFS Version |2.1.7
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
Usually `rpm` commands execute quickly, but pretty often (10% of the cases) there is 4 second extra delay:
```
16:22:55.694445 openat(AT_FDCWD, "/usr/lib/sysimage/rpm/rpmdb.sqlite-shm", O_RDWR|O_CREAT|O_NOFOLLOW|O_CLOEXEC, 0644) = 6 <0.000086>
16:22:55.694563 newfstatat(6, "", {st_dev=makedev(0, 0x1e), st_ino=27231358, st_mode=S_IFREG|0644, st_nlink=1, st_uid=0, st_gid=0, st_blksize=131072, st_blocks=18, st_size=32768, st_atime=1671114175 /* 2022-12-15T16:22:55.105109260+0200 */, st_atime_nsec=105109260, st_mtime=1671114175 /* 2022-12-15T16:22:55.106109257+0200 */, st_mtime_nsec=106109257, st_ctime=1671114175 /* 2022-12-15T16:22:55.106109257+0200 */, st_ctime_nsec=106109257}, AT_EMPTY_PATH) = 0 <0.000011>
16:22:55.694692 geteuid() = 0 <0.000010>
16:22:55.694759 fchown(6, 0, 0) = 0 <0.000047>
16:22:55.694855 fcntl(6, F_GETLK, {l_type=F_UNLCK, l_whence=SEEK_SET, l_start=128, l_len=1, l_pid=0}) = 0 <0.000010>
16:22:55.694909 fcntl(6, F_SETLK, {l_type=F_WRLCK, l_whence=SEEK_SET, l_start=128, l_len=1}) = 0 <0.000009>
16:22:55.694953 ftruncate(6, 3) = 0 <4.372489>
```
I have ZFS as root fs running on top of **LUKS2**. NVMe Corsair MP510 (Max Random Write QD32 IOMeter: Up to _440K_ IOPS) with practically zero other IO operations when I run `rpm`.
### Describe how to reproduce the problem
Just try rpm again.
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
zpool show no errors and:
```
NAME PROPERTY VALUE SOURCE
tank type filesystem -
tank creation Wed Nov 10 15:40 2021 -
tank used 1.39T -
tank available 296G -
tank referenced 96K -
tank compressratio 1.29x -
tank mounted no -
tank quota none default
tank reservation none default
tank recordsize 128K default
tank mountpoint /tank default
tank sharenfs off default
tank checksum on default
tank compression zstd-3 local
tank atime on default
tank devices on default
tank exec on default
tank setuid on default
tank readonly off default
tank zoned off default
tank snapdir hidden default
tank aclmode discard default
tank aclinherit restricted default
tank createtxg 1 -
tank canmount off local
tank xattr sa local
tank copies 1 default
tank version 5 -
tank utf8only on -
tank normalization formD -
tank casesensitivity sensitive -
tank vscan off default
tank nbmand off default
tank sharesmb off default
tank refquota none default
tank refreservation none default
tank guid 8575689710526589949 -
tank primarycache all default
tank secondarycache all default
tank usedbysnapshots 0B -
tank usedbydataset 96K -
tank usedbychildren 1.39T -
tank usedbyrefreservation 0B -
tank logbias latency default
tank objsetid 54 -
tank dedup off default
tank mlslabel none default
tank sync standard default
tank dnodesize auto local
tank refcompressratio 1.00x -
tank written 0 -
tank logicalused 1.77T -
tank logicalreferenced 42K -
tank volmode default default
tank filesystem_limit none default
tank snapshot_limit none default
tank filesystem_count none default
tank snapshot_count none default
tank snapdev hidden default
tank acltype posix local
tank context none default
tank fscontext none default
tank defcontext none default
tank rootcontext none default
tank relatime on local
tank redundant_metadata all default
tank overlay on default
tank encryption off default
tank keylocation none default
tank keyformat none default
tank pbkdf2iters 0 default
tank special_small_blocks 0 default
```
| defect | sometimes excessive delay truncating rpmdb sqlite shm thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name fedora distribution version kernel version architecture openzfs version command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing usually rpm commands execute quickly but pretty often of the cases there is second extra delay openat at fdcwd usr lib sysimage rpm rpmdb sqlite shm o rdwr o creat o nofollow o cloexec newfstatat st dev makedev st ino st mode s ifreg st nlink st uid st gid st blksize st blocks st size st atime st atime nsec st mtime st mtime nsec st ctime st ctime nsec at empty path geteuid fchown fcntl f getlk l type f unlck l whence seek set l start l len l pid fcntl f setlk l type f wrlck l whence seek set l start l len ftruncate i have zfs as root fs running on top of nvme corsair max random write iometer up to iops with practically zero other io operations when i run rpm describe how to reproduce the problem just try rpm again include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with zpool show no errors and name property value source tank type filesystem tank creation wed nov tank used tank available tank referenced tank compressratio tank mounted no tank quota none default tank reservation none default tank recordsize default tank mountpoint tank default tank sharenfs off default tank checksum on default tank compression zstd local tank atime on default tank devices on default 
tank exec on default tank setuid on default tank readonly off default tank zoned off default tank snapdir hidden default tank aclmode discard default tank aclinherit restricted default tank createtxg tank canmount off local tank xattr sa local tank copies default tank version tank on tank normalization formd tank casesensitivity sensitive tank vscan off default tank nbmand off default tank sharesmb off default tank refquota none default tank refreservation none default tank guid tank primarycache all default tank secondarycache all default tank usedbysnapshots tank usedbydataset tank usedbychildren tank usedbyrefreservation tank logbias latency default tank objsetid tank dedup off default tank mlslabel none default tank sync standard default tank dnodesize auto local tank refcompressratio tank written tank logicalused tank logicalreferenced tank volmode default default tank filesystem limit none default tank snapshot limit none default tank filesystem count none default tank snapshot count none default tank snapdev hidden default tank acltype posix local tank context none default tank fscontext none default tank defcontext none default tank rootcontext none default tank relatime on local tank redundant metadata all default tank overlay on default tank encryption off default tank keylocation none default tank keyformat none default tank default tank special small blocks default | 1 |
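The strace excerpt in the OpenZFS record above pins the stall on a single `ftruncate(2)` call. The same open-write-truncate pattern can be timed from Python to check whether a given filesystem shows the delay (an illustrative sketch, not rpm's actual code path; the 4 s figure is specific to the reporter's ZFS-on-LUKS2 setup):

```python
import os
import tempfile
import time

# Mirror the syscall pattern from the trace: create a 32 KiB file
# (the size of rpmdb.sqlite-shm above), then truncate it to 3 bytes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * 32768)
    f.flush()
    t0 = time.monotonic()
    os.ftruncate(f.fileno(), 3)    # the call that took 4.37 s in the trace
    elapsed = time.monotonic() - t0

print(f"ftruncate took {elapsed:.6f} s")
assert os.path.getsize(f.name) == 3
os.unlink(f.name)
```

Run on the affected dataset, an intermittent multi-second `elapsed` would match the reported behavior; on a healthy filesystem it should be microseconds.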
2,899 | 2,607,965,166 | IssuesEvent | 2015-02-26 00:42:00 | chrsmithdemos/leveldb | https://api.github.com/repos/chrsmithdemos/leveldb | closed | leveldb::DestroyDB fails on Windows | auto-migrated Priority-Medium Type-Defect | ```
I'm using Chromium's Env for Windows, but this should apply to any Env
implementation on that OS.
In db_impl.cc's DestroyDB:
const std::string lockname = LockFileName(dbname);
This is defined to be dbname + "/LOCK" in filename.cc
Later on:
if (ParseFileName(filenames[i], &number, &type) &&
filenames[i] != lockname) { // Lock file will be deleted at end
Status del = env->DeleteFile(dbname + "/" + filenames[i]);
if (result.ok() && !del.ok()) {
result = del;
}
}
filenames[i] contains just "LOCK", but lockname will be the full path, so the
expression filenames[i] != lockname will always be false. Therefore, DeleteFile
will always be called, and possibly fail. If the Env can't delete the LOCK file
(e.g. Windows file semantics), the DestroyDB will fail.
Two possible fixes:
for (size_t i = 0; i < filenames.size(); i++) {
- if (ParseFileName(filenames[i], &number, &type) &&
- filenames[i] != lockname) { // Lock file will be deleted at end
- Status del = env->DeleteFile(dbname + "/" + filenames[i]);
- if (result.ok() && !del.ok()) {
- result = del;
+ if (ParseFileName(filenames[i], &number, &type)) {
+ const std::string filepath = dbname + "/" + filenames[i];
+ if (filepath != lockname) { // Lock file will be deleted at end
+ Status del = env->DeleteFile(filepath);
+ if (result.ok() && !del.ok()) {
+ result = del;
+ }
}
Or what I assume is preferable:
if (ParseFileName(filenames[i], &number, &type) &&
- filenames[i] != lockname) { // Lock file will be deleted at end
+ type != kDBLockFile) { // Lock file will be deleted at end
Status del = env->DeleteFile(dbname + "/" + filenames[i]);
```
-----
Original issue reported on code.google.com by `jsbell@chromium.org` on 8 Mar 2012 at 7:51 | 1.0 | leveldb::DestroyDB fails on Windows - ```
I'm using Chromium's Env for Windows, but this should apply to any Env
implementation on that OS.
In db_impl.cc's DestroyDB:
const std::string lockname = LockFileName(dbname);
This is defined to be dbname + "/LOCK" in filename.cc
Later on:
if (ParseFileName(filenames[i], &number, &type) &&
filenames[i] != lockname) { // Lock file will be deleted at end
Status del = env->DeleteFile(dbname + "/" + filenames[i]);
if (result.ok() && !del.ok()) {
result = del;
}
}
filenames[i] contains just "LOCK", but lockname will be the full path, so the
expression filenames[i] != lockname will always be false. Therefore, DeleteFile
will always be called, and possibly fail. If the Env can't delete the LOCK file
(e.g. Windows file semantics), the DestroyDB will fail.
Two possible fixes:
for (size_t i = 0; i < filenames.size(); i++) {
- if (ParseFileName(filenames[i], &number, &type) &&
- filenames[i] != lockname) { // Lock file will be deleted at end
- Status del = env->DeleteFile(dbname + "/" + filenames[i]);
- if (result.ok() && !del.ok()) {
- result = del;
+ if (ParseFileName(filenames[i], &number, &type)) {
+ const std::string filepath = dbname + "/" + filenames[i];
+ if (filepath != lockname) { // Lock file will be deleted at end
+ Status del = env->DeleteFile(filepath);
+ if (result.ok() && !del.ok()) {
+ result = del;
+ }
}
Or what I assume is preferable:
if (ParseFileName(filenames[i], &number, &type) &&
- filenames[i] != lockname) { // Lock file will be deleted at end
+ type != kDBLockFile) { // Lock file will be deleted at end
Status del = env->DeleteFile(dbname + "/" + filenames[i]);
```
-----
Original issue reported on code.google.com by `jsbell@chromium.org` on 8 Mar 2012 at 7:51 | defect | leveldb destroydb fails on windows i m using chromium s env for windows but this should apply to any env implementation on that os in db impl cc s destroydb const std string lockname lockfilename dbname this is defined to be dbname lock in filename cc later on if parsefilename filenames number type filenames lockname lock file will be deleted at end status del env deletefile dbname filenames if result ok del ok result del filenames contains just lock but lockname will be the full path so the expression filenames lockname will always be false therefore deletefile will always be called and possibly fail if the env can t delete the lock file e g windows file semantics the destroydb will fail two possible fixes for size t i i filenames size i if parsefilename filenames number type filenames lockname lock file will be deleted at end status del env deletefile dbname filenames if result ok del ok result del if parsefilename filenames number type const std string filepath dbname filenames if filepath lockname lock file will be deleted at end status del env deletefile filepath if result ok del ok result del or what i assume is preferable if parsefilename filenames number type filenames lockname lock file will be deleted at end type kdblockfile lock file will be deleted at end status del env deletefile dbname filenames original issue reported on code google com by jsbell chromium org on mar at | 1 |
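The core of the leveldb report above is a basename-versus-full-path comparison that can never match. The predicate and the proposed fix are easy to model in a few lines (a Python sketch of the logic, not leveldb's actual C++; the file list is made up for illustration):

```python
import posixpath

dbname = "/tmp/testdb"
lockname = posixpath.join(dbname, "LOCK")       # mirrors LockFileName(dbname)
filenames = ["LOCK", "000003.log", "CURRENT"]   # directory-listing basenames

# Buggy guard: a bare basename never equals the full lock path,
# so "LOCK" is never skipped and DeleteFile is attempted on it.
assert all(name != lockname for name in filenames)

# Fixed guard: build the full path before comparing, as in the first patch.
kept_for_deletion = [name for name in filenames
                     if posixpath.join(dbname, name) != lockname]
assert kept_for_deletion == ["000003.log", "CURRENT"]
```

The issue's second variant sidesteps string comparison entirely by checking the parsed file type (`type != kDBLockFile`), which avoids any path-normalization pitfalls.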
298,805 | 9,201,454,324 | IssuesEvent | 2019-03-07 19:40:41 | ArctosDB/arctos | https://api.github.com/repos/ArctosDB/arctos | closed | DISCUSS: User Management | Function-Users Priority-Critical | ```
What steps will reproduce the problem?
1. Let admin users grant anyone their rights
- What is the expected output?
Users awesomifying collections after having carefully read the documentation.
- What do you see instead?
Users apparently hurling fingers in the general direction of their keyboard.
Please use labels and text to provide additional information.
There are dangerous forms in shared nodes (eg, containers) and there are users
who create extremely unexpected error messages from these forms. I suggest the
AC should take a more active role in monitoring access to shared nodes, or
limit grant access, or provide a test before allowing access (Gordon's best
idea ever!), or SOMETHING. It seems only a matter of time before someone who
should perhaps not yet have admin access finds a way to cause significant
damage to an unrelated collection.
```
Original issue reported on code.google.com by `dust...@gmail.com` on 18 Aug 2015 at 3:05
| 1.0 | DISCUSS: User Management - ```
What steps will reproduce the problem?
1. Let admin users grant anyone their rights
- What is the expected output?
Users awesomifying collections after having carefully read the documentation.
- What do you see instead?
Users apparently hurling fingers in the general direction of their keyboard.
Please use labels and text to provide additional information.
There are dangerous forms in shared nodes (eg, containers) and there are users
who create extremely unexpected error messages from these forms. I suggest the
AC should take a more active role in monitoring access to shared nodes, or
limit grant access, or provide a test before allowing access (Gordon's best
idea ever!), or SOMETHING. It seems only a matter of time before someone who
should perhaps not yet have admin access finds a way to cause significant
damage to an unrelated collection.
```
Original issue reported on code.google.com by `dust...@gmail.com` on 18 Aug 2015 at 3:05
| non_defect | discuss user management what steps will reproduce the problem let admin users grant anyone their rights what is the expected output users awesomifying collections after having carefully read the documentation what do you see instead users apparently hurling fingers in the general direction of their keyboard please use labels and text to provide additional information there are dangerous forms in shared nodes eg containers and there are users who create extremely unexpected error messages from these forms i suggest the ac should take a more active role in monitoring access to shared nodes or limit grant access or provide a test before allowing access gordon s best idea ever or something it seems only a matter of time before someone who should perhaps not yet have admin access finds a way to cause significant damage to an unrelated collection original issue reported on code google com by dust gmail com on aug at | 0 |
151,994 | 19,672,068,508 | IssuesEvent | 2022-01-11 08:31:22 | swagger-api/swagger-ui | https://api.github.com/repos/swagger-api/swagger-ui | closed | CVE-2022-0122 (Medium) detected in node-forge-0.10.0.tgz | security vulnerability | ## CVE-2022-0122 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.11.3.tgz (Root Library)
- selfsigned-1.10.11.tgz
- :x: **node-forge-0.10.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
forge is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2022-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0122>CVE-2022-0122</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/41852c50-3c6d-4703-8c55-4db27164a4ae/">https://huntr.dev/bounties/41852c50-3c6d-4703-8c55-4db27164a4ae/</a></p>
<p>Release Date: 2022-01-06</p>
<p>Fix Resolution: forge - v1.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-forge","packageVersion":"0.10.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:3.11.3;selfsigned:1.10.11;node-forge:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"forge - v1.0.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-0122","vulnerabilityDetails":"forge is vulnerable to URL Redirection to Untrusted Site","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0122","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2022-0122 (Medium) detected in node-forge-0.10.0.tgz - ## CVE-2022-0122 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.11.3.tgz (Root Library)
- selfsigned-1.10.11.tgz
- :x: **node-forge-0.10.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
forge is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2022-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0122>CVE-2022-0122</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/41852c50-3c6d-4703-8c55-4db27164a4ae/">https://huntr.dev/bounties/41852c50-3c6d-4703-8c55-4db27164a4ae/</a></p>
<p>Release Date: 2022-01-06</p>
<p>Fix Resolution: forge - v1.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-forge","packageVersion":"0.10.0","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:3.11.3;selfsigned:1.10.11;node-forge:0.10.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"forge - v1.0.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-0122","vulnerabilityDetails":"forge is vulnerable to URL Redirection to Untrusted Site","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0122","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_defect | cve medium detected in node forge tgz cve medium severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file package json path to vulnerable library node modules node forge package json dependency hierarchy webpack dev server tgz root library selfsigned tgz x node forge tgz vulnerable library found in base branch master vulnerability details forge is vulnerable to url redirection to untrusted site publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution forge isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree webpack dev server selfsigned 
node forge isminimumfixversionavailable true minimumfixversion forge isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails forge is vulnerable to url redirection to untrusted site vulnerabilityurl | 0 |
49,224 | 13,185,306,424 | IssuesEvent | 2020-08-12 21:07:52 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | Steamshovel in Python 3 (Trac #1005) | Incomplete Migration Migrated from Trac combo core defect | <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/1005
, reported by moriah.tobin and owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-10-12T14:11:44",
"description": "{{{\n[ 86%] Building CXX object steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp: In\n static member function \u2018static void * scripting::shovelart\n ::QStringConversion::convertible(PyObject *)\u2019:\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp:210:\n error: \u2018PyString_Check\u2019 was not declared in this scope\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp: In\n static member function \u2018static void scripting::shovelart\n ::QStringConversion::construct(\n PyObject *, boost::python::converter::rvalue_from_python_stage1_data\n *)\u2019:\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp:215:\n error: \u2018PyString_AsString\u2019 was not declared in this scope\nmake[3]: *** [steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o] Error 1\nmake[2]: *** [steamshovel/CMakeFiles/shovelart-pybindings.dir/all] Error 2\nmake[1]: *** [steamshovel/CMakeFiles/steamshovel.dir/rule] Error 2\nmake: *** [steamshovel] Error 2\n}}}",
"reporter": "moriah.tobin",
"cc": "david.schultz",
"resolution": "fixed",
"_ts": "1444659104505675",
"component": "combo core",
"summary": "Steamshovel in Python 3",
"priority": "blocker",
"keywords": "",
"time": "2015-05-28T20:57:28",
"milestone": "Long-Term Future",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Steamshovel in Python 3 (Trac #1005) - <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/1005
, reported by moriah.tobin and owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-10-12T14:11:44",
"description": "{{{\n[ 86%] Building CXX object steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp: In\n static member function \u2018static void * scripting::shovelart\n ::QStringConversion::convertible(PyObject *)\u2019:\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp:210:\n error: \u2018PyString_Check\u2019 was not declared in this scope\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp: In\n static member function \u2018static void scripting::shovelart\n ::QStringConversion::construct(\n PyObject *, boost::python::converter::rvalue_from_python_stage1_data\n *)\u2019:\n/home/mntobin/IceRec/src/steamshovel/private/shovelart/pybindings/Types.cpp:215:\n error: \u2018PyString_AsString\u2019 was not declared in this scope\nmake[3]: *** [steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o] Error 1\nmake[2]: *** [steamshovel/CMakeFiles/shovelart-pybindings.dir/all] Error 2\nmake[1]: *** [steamshovel/CMakeFiles/steamshovel.dir/rule] Error 2\nmake: *** [steamshovel] Error 2\n}}}",
"reporter": "moriah.tobin",
"cc": "david.schultz",
"resolution": "fixed",
"_ts": "1444659104505675",
"component": "combo core",
"summary": "Steamshovel in Python 3",
"priority": "blocker",
"keywords": "",
"time": "2015-05-28T20:57:28",
"milestone": "Long-Term Future",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
| defect | steamshovel in python trac migrated from reported by moriah tobin and owned by hdembinski json status closed changetime description n building cxx object steamshovel cmakefiles shovelart pybindings dir private shovelart pybindings types cpp o n home mntobin icerec src steamshovel private shovelart pybindings types cpp in n static member function void scripting shovelart n qstringconversion convertible pyobject n home mntobin icerec src steamshovel private shovelart pybindings types cpp n error check was not declared in this scope n home mntobin icerec src steamshovel private shovelart pybindings types cpp in n static member function void scripting shovelart n qstringconversion construct n pyobject boost python converter rvalue from python data n n home mntobin icerec src steamshovel private shovelart pybindings types cpp n error asstring was not declared in this scope nmake error nmake error nmake error nmake error n reporter moriah tobin cc david schultz resolution fixed ts component combo core summary steamshovel in python priority blocker keywords time milestone long term future owner hdembinski type defect | 1 |
73,464 | 24,645,786,029 | IssuesEvent | 2022-10-17 14:46:23 | scipy/scipy | https://api.github.com/repos/scipy/scipy | opened | BUG: Scipy installs and uses 2 copies of libopenblas (indirectly) | defect | ### Describe your issue.
Scipy requires numpy.
But both include (at least in their Windows wheels) a form of libopenblas.
So when you want to redistribute an app that is using scipy you are forced to redistribute 2 copies of libopenblas (each 35 MB) that both get loaded into memory.
A simple solution would be to publish libopenblas as its own library on PyPI (and make scipy and numpy depend on it).
An alternative solution would be for scipy to use the numpy installed libopenblas.
### Reproducing Code Example
```python
pip install scipy
```
### Error message
```shell
No error message
```
### SciPy/NumPy/Python version information
1.91/1.23.4/3.10.8 | 1.0 | BUG: Scipy installs and uses 2 copies of libopenblas (indirectly) - ### Describe your issue.
Scipy requires numpy.
But both include (at least in their Windows wheels) a form of libopenblas.
So when you want to redistribute an app that is using scipy you are forced to redistribute 2 copies of libopenblas (each 35 MB) that both get loaded into memory.
A simple solution would be to publish libopenblas as its own library on PyPI (and make scipy and numpy depend on it).
An alternative solution would be for scipy to use the numpy installed libopenblas.
### Reproducing Code Example
```python
pip install scipy
```
### Error message
```shell
No error message
```
### SciPy/NumPy/Python version information
1.91/1.23.4/3.10.8 | defect | bug scipy installs and uses copies of libopenblas indirectly describe your issue scipy requires numpy but both include at least in their windows wheels a form of libopenblas so when you want to redistribute an app that is using scipy you are forced to redistribute copies of libopenblas each mb that both get loaded into memory a simple solution would be to publish libopenblas as an own library on pypi and make scipy and numpy depending on it an alternative solution would be for scipy to use the numpy installed libopenblas reproducing code example python pip install scipy error message shell no error message scipy numpy python version information | 1 |
31,873 | 6,652,134,637 | IssuesEvent | 2017-09-28 23:13:41 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | should_execute in TestHarness not doing the right thing | C: TestHarness P: normal T: defect | ## Rationale
It looks like the `should_execute` option for tests in the test harness is not doing the right thing. It looks like it's skipping execution of the _entire_ test. It should only skip execution of the executable.
My suspicion is that this got lost in the latest refactor.
## Description
The idea behind this option is that you can run once and output multiple things then have several downstream tests that Exodiff and CSVDiff those outputs. What it's doing right now is just silently passing any test that has `should_execute = false`.
## Impact
Currently a bug in the TestHarness... but it's not used too much. | 1.0 | should_execute in TestHarness not doing the right thing - ## Rationale
It looks like the `should_execute` option for tests in the test harness is not doing the right thing. It looks like it's skipping execution of the _entire_ test. It should only skip execution of the executable.
My suspicion is that this got lost in the latest refactor.
## Description
The idea behind this option is that you can run once and output multiple things then have several downstream tests that Exodiff and CSVDiff those outputs. What it's doing right now is just silently passing any test that has `should_execute = false`.
## Impact
Currently a bug in the TestHarness... but it's not used too much. | defect | should execute in testharness not doing the right thing rationale it looks like the should execute option for tests in the test harness is not doing the right thing it looks like it s skipping execution of the entire test it should only skip execution of the executable my suspicion is that this got lost in the latest refactor description the idea behind this option is that you can run once and output multiple things then have several downstream tests that exodiff and csvdiff those outputs what it s doing right now is just silently passing any test that has should execute false impact currently a bug in the testharness but it s not used too much | 1 |
30,319 | 14,517,837,705 | IssuesEvent | 2020-12-13 21:11:52 | ExchangeUnion/xud | https://api.github.com/repos/ExchangeUnion/xud | closed | Classify peers as `offline`, long-dead peers as `dead` | P2 p2p performance | ### Background
Currently logs of an "older" xud environment look like [this](https://paste.ubuntu.com/p/NhwB7FY9s5/) when starting/restarting xud. Busy for minutes with a decent load on the tor process trying to connect to *very* dead peers. It takes a while (in my case 6 minutes) until xud finally reconnected to its previous peers.
### Your environment
* version of `xud`: `1.2.0-d4587458`
* which operating system (`uname -a` on *Nix):
* version of `lndbtc`, `lndltc`, `connext` and others:
* any other relevant environment details:
### Steps to reproduce
Restart xud which has 100+ dead peers in the db.
### Expected behaviour
xud immediately connects to previously connected peers, leaves classified *dead* peers' reconnection attempts to the regular scheduled attempts at runtime, but **not** on startup.
### Actual behaviour
xud tries to connect to all peers on startup, leading to behavior described above. | True | Classify peers as `offline`, long-dead peers as `dead` - ### Background
Currently logs of an "older" xud environment look like [this](https://paste.ubuntu.com/p/NhwB7FY9s5/) when starting/restarting xud. Busy for minutes with a decent load on the tor process trying to connect to *very* dead peers. It takes a while (in my case 6 minutes) until xud finally reconnected to its previous peers.
### Your environment
* version of `xud`: `1.2.0-d4587458`
* which operating system (`uname -a` on *Nix):
* version of `lndbtc`, `lndltc`, `connext` and others:
* any other relevant environment details:
### Steps to reproduce
Restart xud which has 100+ dead peers in the db.
### Expected behaviour
xud immediately connects to previously connected peers, leaves classified *dead* peers' reconnection attempts to the regular scheduled attempts at runtime, but **not** on startup.
### Actual behaviour
xud tries to connect to all peers on startup, leading to behavior described above. | non_defect | classify peers as offline long dead peers as dead background currently logs of an older xud environment look like when starting restarting xud busy for minutes with a decent load on the tor process trying to connect to very dead peers it takes a while in my case minutes until xud finally reconnected to its previous peers your environment version of xud which operating system uname a on nix version of lndbtc lndltc connext and others any other relevant environment details steps to reproduce restart xud which has dead peers in the db expected behaviour xud immediately connects to previosly connected peers leaves classified dead peers reconnection attempts to the regular scheduled attempts at runtime but not on startup actual behaviour xud tries to connect to all peers on startup leading to behavior described above | 0 |
51,881 | 27,289,716,900 | IssuesEvent | 2023-02-23 15:48:02 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | storage: MVCCGarbageCollect allocations regression | C-performance A-storage T-storage | From the sneak peek of 23.1 microbenchmark results (#96960): https://docs.google.com/spreadsheets/d/1kagTm9WXv9CSn78nDg3iD0bsnFXhc-r4a0L29hP_-No/edit#gid=5
There looks like an allocation regression here. | True | storage: MVCCGarbageCollect allocations regression - From the sneak peak of 23.1 microbenchmark results (#96960): https://docs.google.com/spreadsheets/d/1kagTm9WXv9CSn78nDg3iD0bsnFXhc-r4a0L29hP_-No/edit#gid=5
There looks like an allocation regression here. | non_defect | storage mvccgarbagecollect allocations regression from the sneak peek of microbenchmark results there looks like an allocation regression here | 0
54,318 | 13,562,244,176 | IssuesEvent | 2020-09-18 06:29:33 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Try to prevent unnecessary arithmetic in LIMIT emulations | C: Functionality E: All Editions P: Medium T: Defect | When emulating things like this in e.g. PostgreSQL:
```sql
SELECT a, b
FROM t
ORDER BY x
FETCH FIRST 10 ROWS WITH TIES
```
We'll get some unnecessary clauses and arithmetic, such as:
```sql
SELECT
v0 AS a,
v1 AS b
FROM (
SELECT
a AS v0,
b AS v1,
rank() OVER (ORDER BY x) AS rn
FROM t
) x
WHERE rn > 0
AND rn <= (0 + 10)
ORDER BY rn
```
It's not necessary to filter for `rn > 0`, as it can never be `0`. Also, the arithmetic `0 + 10` is an unnecessary artifact, which should be avoided. | 1.0 | Try to prevent unnecessary arithmetic in LIMIT emulations - When emulating things like this in e.g. PostgreSQL:
```sql
SELECT a, b
FROM t
ORDER BY x
FETCH FIRST 10 ROWS WITH TIES
```
We'll get some unnecessary clauses and arithmetic, such as:
```sql
SELECT
v0 AS a,
v1 AS b
FROM (
SELECT
a AS v0,
b AS v1,
rank() OVER (ORDER BY x) AS rn
FROM t
) x
WHERE rn > 0
AND rn <= (0 + 10)
ORDER BY rn
```
It's not necessary to filter for `rn > 0`, as it can never be `0`. Also, the arithmetic `0 + 10` is an unnecessary artifact, which should be avoided. | defect | try to prevent unnecessary arithmetic in limit emulations when emulating things like this in e g postgresql sql select a b from t order by x fetch first rows with ties we ll get some unnecessary clauses and arithmetic such as sql select as a as b from select a as b as rank over order by x as rn from t x where rn and rn order by rn it s not necessary to filter for rn as it can never be also the arithmetic is an unnecessary artifact which should be avoided | 1 |
161,982 | 13,880,786,607 | IssuesEvent | 2020-10-17 20:27:12 | TRemigi/Roll-a-Jazz | https://api.github.com/repos/TRemigi/Roll-a-Jazz | closed | Code clean-up!! | documentation refactor | - remove code that isn't being used
- remove imported "components" that aren't being used.
- add comments where needed in the code | 1.0 | Code clean-up!! - - remove code that isn't being used
- remove imported "components" that aren't being used.
- add comments where needed in the code | non_defect | code clean up remove code that isn t being used remove imported components that aren t being used add comments where needed in the code | 0 |
36,214 | 7,868,398,963 | IssuesEvent | 2018-06-23 21:13:59 | jccastillo0007/eFacturaT | https://api.github.com/repos/jccastillo0007/eFacturaT | opened | Optibelt CCE - ValorUnitarioAduana and ValorDolares with 2 decimals | bug defect | When the original XML is in pesos, you have to convert to USD according to the exchange rate.
It already does that, but now it has this other issue.
At the merchandise node level inside the CCE, valorUnitarioAduana and valorDolares must be reported with 2 decimals.
Once all the products of the invoice are inside the CCE, the sum of valorDolares is computed to report the TotalUSD (the one that goes in the CCE header).
It throws an error because right now 13 decimals are being reported... haha
This also applies to the standard CCE. | 1.0 | Optibelt CCE - ValorUnitarioAduana and ValorDolares with 2 decimals - When the original XML is in pesos, you have to convert to USD according to the exchange rate.
It already does that, but now it has this other issue.
At the merchandise node level inside the CCE, valorUnitarioAduana and valorDolares must be reported with 2 decimals.
Once all the products of the invoice are inside the CCE, the sum of valorDolares is computed to report the TotalUSD (the one that goes in the CCE header).
It throws an error because right now 13 decimals are being reported... haha
This also applies to the standard CCE. | defect | optibelt cce valorunitarioaduana and valordolares with decimals when the original xml is in pesos you have to convert to usd according to the exchange rate it already does that but now it has this other issue at the merchandise node level inside the cce valorunitarioaduana and valordolares must be reported with decimals once all the products of the invoice are inside the cce the sum of valordolares is computed to report the totalusd the one that goes in the cce header it throws an error because right now decimals are being reported haha this also applies to the standard cce | 1
429,574 | 30,083,751,304 | IssuesEvent | 2023-06-29 06:57:43 | Klantinteractie-Servicesysteem/KISS-frontend | https://api.github.com/repos/Klantinteractie-Servicesysteem/KISS-frontend | closed | Clean up GitHub | documentation ready | Clean up GitHub
- close discussions that are no longer relevant,
- delete questions and comments that are no longer relevant, or indicate in some other way that they are no longer relevant | 1.0 | Clean up GitHub - Clean up GitHub
- close discussions that are no longer relevant,
- delete questions and comments that are no longer relevant, or indicate in some other way that they are no longer relevant | non_defect | clean up github clean up github close discussions that are no longer relevant delete questions and comments that are no longer relevant or indicate in some other way that they are no longer relevant | 0
21,834 | 3,563,478,032 | IssuesEvent | 2016-01-25 03:58:11 | zealdocs/zeal | https://api.github.com/repos/zealdocs/zeal | closed | Zeal crashes when removing active docset | Component: Docset Registry Type: Defect | Specifics:
1. active docset: Puppet (some query result is displayed)
1. click remove
1. crash | 1.0 | Zeal crashes when removing active docset - Specifics:
1. active docset: Puppet (some query result is displayed)
1. click remove
1. crash | defect | zeal crashes when removing active docset specifics active docset puppet some query result is displayed click remove crash | 1 |
311,901 | 9,540,197,509 | IssuesEvent | 2019-04-30 18:53:21 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | closed | Re-enable JS web service test components of basin_reservoir.js and mall_motor_vehicle.js | Category: Core Priority: Medium Status: Defined Type: Maintenance | Refs PR #3035
Refs #3044
Pending the required fixes as mentioned in PR 3035, re-enable the disabled parts of basin_reservoir.js and mall_motor_vehicle.js.
basin_reservoir: translation from TDSv61 to OSM using IWT=5, reordering of tags results in IWT=1 when translating back
~~mall_motor_vehicle: Assignment order of FFN, FFN2, etc. affects translation result (#3044)~~ Done | 1.0 | Re-enable JS web service test components of basin_reservoir.js and mall_motor_vehicle.js - Refs PR #3035
Refs #3044
Pending the required fixes as mentioned in PR 3035, re-enable the disabled parts of basin_reservoir.js and mall_motor_vehicle.js.
basin_reservoir: translation from TDSv61 to OSM using IWT=5, reordering of tags results in IWT=1 when translating back
~~mall_motor_vehicle: Assignment order of FFN, FFN2, etc. affects translation result (#3044)~~ Done | non_defect | re enable js web service test components of basin reservoir js and mall motor vehicle js refs pr refs pending the required fixes as mentioned in pr re enable the disabled parts of basin reservoir js and mall motor vehicle js basin reservoir translation from to osm using iwt reordering of tags results in iwt when translating back mall motor vehicle assignment order of ffn etc affects translation result done | 0 |
66,335 | 20,155,650,553 | IssuesEvent | 2022-02-09 16:13:39 | vector-im/element-ios | https://api.github.com/repos/vector-im/element-ios | closed | Element iOS green spinner + gray spinner + cannot send messages | T-Defect A-Startup S-Major O-Occasional | ### Steps to reproduce
1. Where are you starting? What can you see?
When launching the app, it stays on the green spinner forever about 80% of the time. If it does launch properly, it refuses to send messages or calls, or stays on the gray spinner when entering a room.
2. What do you click?
App icon
3. More steps…
### Outcome
#### What did you expect?
The app launching properly, and being able to send messages.
#### What happened instead?
It stays on the green spinner forever about 80% of the time. If it does launch properly, it refuses to send messages or calls, or stays on the gray spinner when entering a room.
If I uninstall and install again, it works... until I close the app or reboot the phone. Then, the problems happen again.
Please note that I can receive messages with no issues but cannot receive calls.
Also, please note that the app worked perfectly for a year until February 2nd mid-day in Asia.
I can still send messages with the same account from a pc.
I also tried making a new account, but the same problems persist.
No update seems to have been performed on February 2nd, whether it's Element or iOS.
### Your phone model
iPhone 6
### Operating system version
iOS 12.5.5
### Application version
Element version 1.7.0
### Homeserver
matrix.org
### Will you send logs?
No | 1.0 | Element iOS green spinner + gray spinner + cannot send messages - ### Steps to reproduce
1. Where are you starting? What can you see?
When launching the app, it stays on the green spinner forever about 80% of the time. If it does launch properly, it refuses to send messages or calls, or stays on the gray spinner when entering a room.
2. What do you click?
App icon
3. More steps…
### Outcome
#### What did you expect?
The app launching properly, and being able to send messages.
#### What happened instead?
It stays on the green spinner forever about 80% of the time. If it does launch properly, it refuses to send messages or calls, or stays on the gray spinner when entering a room.
If I uninstall and install again, it works... until I close the app or reboot the phone. Then, the problems happen again.
Please note that I can receive messages with no issues but cannot receive calls.
Also, please note that the app worked perfectly for a year until February 2nd mid-day in Asia.
I can still send messages with the same account from a pc.
I also tried making a new account, but the same problems persist.
No update seems to have been performed on February 2nd, whether it's Element or iOS.
### Your phone model
Iphone 6
### Operating system version
iOS 12.5.5
### Application version
Element version 1.7.0
### Homeserver
matrix.org
### Will you send logs?
No | defect | element ios green spinner gray spinner cannot send messages steps to reproduce where are you starting what can you see when launching the app it stays on the green spinner forever about of the time if it does launch properly it refuses to send messages or calls or stays on the gray spinner when entering a room what do you click app icon more steps… outcome what did you expect the app launching properly and being able to send messages what happened instead it stays on the green spinner forever about of the time if it does launch properly it refuses to send messages or calls or stays on the gray spinner when entering a room if i uninstall and install again it works until i close the app or reboot the phone then the problems happen again please note that i can receive messages with no issues but cannot receive calls also please note that the app worked perfectly for a year until february mid day in asia i can still send messages with the same account from a pc i also tried making a new account but the same problems persist no update seems to have been performed on february whether it s element or ios your phone model iphone operating system version ios application version element version homeserver matrix org will you send logs no | 1 |
43,839 | 11,860,245,195 | IssuesEvent | 2020-03-25 14:37:14 | mestrade/jx-go-hello | https://api.github.com/repos/mestrade/jx-go-hello | opened | My Finding | security/defectDojo | *My Finding*
*Severity:* Info
*Cve:*
*Product/Engagement:* fake2 product / Ad Hoc Engagement
*Systems*:
*Description*:
My description
*Mitigation*:
my mitigation
*Impact*:
impact
*References*:No references given | 1.0 | My Finding - *My Finding*
*Severity:* Info
*Cve:*
*Product/Engagement:* fake2 product / Ad Hoc Engagement
*Systems*:
*Description*:
My description
*Mitigation*:
my mitigation
*Impact*:
impact
*References*:No references given | defect | my finding my finding severity info cve product engagement product ad hoc engagement systems description my description mitigation my mitigation impact impact references no references given | 1 |
605,951 | 18,752,401,075 | IssuesEvent | 2021-11-05 05:07:47 | ballerina-platform/nballerina-cpp | https://api.github.com/repos/ballerina-platform/nballerina-cpp | closed | Global variables are not initialized | Type/NewFeature Priority/High | **Description:**
Global variable initialization is happening in the `..<init>` functions, which codegen is currently skipping. Thus all global variables are left uninitialized.
**Suggested Labels:**
Type/Feature, Priority/High
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
#134 | 1.0 | Global variables are not initialized - **Description:**
Global variable initialization is happening in the `..<init>` functions, which codegen is currently skipping. Thus all global variables are left uninitialized.
**Suggested Labels:**
Type/Feature, Priority/High
**Suggested Assignees:**
<!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees-->
**Affected Product Version:**
**OS, DB, other environment details and versions:**
**Steps to reproduce:**
**Related Issues:**
#134 | non_defect | global variables are not initialized description global variable initialization is happening in the functions which codegen is currently skipping thus all global variables are left uninitialized suggested labels type feature priority high suggested assignees affected product version os db other environment details and versions steps to reproduce related issues | 0
50,576 | 13,187,592,410 | IssuesEvent | 2020-08-13 03:55:31 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | closed | slc-veto: ‘GetMostEnergeticPrimary’ is not a member of ‘I3MCTreeUtils’ (Trac #976) | Migrated from Trac defect other | While building slc-veto, from http://code.icecube.wisc.edu/svn/sandbox/koskinen/slc-veto in rev. 132417, these two errors appear:
```text
/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx: In
member function ‘virtual void Q_Box::Physics(I3FramePtr)’:
/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx:960:
error: ‘GetMostEnergeticPrimary’ is not a member of ‘I3MCTreeUtils’
```
```text
/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx:961:
error: no match for ‘operator!=’ in ‘iprimary != TreeBase::Tree<
I3Particle, I3ParticleID, hash<I3ParticleID> >::end() const()’
```
I hope Chang Hyon and Jason won't mind that I cc-ed them.
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/976
, reported by chraab and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-05-15T08:01:24",
"description": "While building slc-veto, from http://code.icecube.wisc.edu/svn/sandbox/koskinen/slc-veto in rev. 132417, these two errors appear:\n\n{{{\n/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx: In\n member function \u2018virtual void Q_Box::Physics(I3FramePtr)\u2019:\n/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx:960:\n error: \u2018GetMostEnergeticPrimary\u2019 is not a member of \u2018I3MCTreeUtils\u2019\n}}}\n{{{\n/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx:961:\n error: no match for \u2018operator!=\u2019 in \u2018iprimary != TreeBase::Tree<\n I3Particle, I3ParticleID, hash<I3ParticleID> >::end() const()\u2019\n\n}}}\n\nI hope Chang Hyon and Jason won't mind that I cc-ed them.",
"reporter": "chraab",
"cc": "koskinen, cuh136",
"resolution": "fixed",
"_ts": "1431676884131754",
"component": "other",
"summary": "slc-veto: \u2018GetMostEnergeticPrimary\u2019 is not a member of \u2018I3MCTreeUtils\u2019",
"priority": "normal",
"keywords": "slc-veto,I3MCTreeUtils,operator,sandbox",
"time": "2015-05-13T15:13:33",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| 1.0 | slc-veto: ‘GetMostEnergeticPrimary’ is not a member of ‘I3MCTreeUtils’ (Trac #976) - While building slc-veto, from http://code.icecube.wisc.edu/svn/sandbox/koskinen/slc-veto in rev. 132417, these two errors appear:
```text
/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx: In
member function ‘virtual void Q_Box::Physics(I3FramePtr)’:
/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx:960:
error: ‘GetMostEnergeticPrimary’ is not a member of ‘I3MCTreeUtils’
```
```text
/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx:961:
error: no match for ‘operator!=’ in ‘iprimary != TreeBase::Tree<
I3Particle, I3ParticleID, hash<I3ParticleID> >::end() const()’
```
I hope Chang Hyon and Jason won't mind that I cc-ed them.
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/976
, reported by chraab and owned by </em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-05-15T08:01:24",
"description": "While building slc-veto, from http://code.icecube.wisc.edu/svn/sandbox/koskinen/slc-veto in rev. 132417, these two errors appear:\n\n{{{\n/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx: In\n member function \u2018virtual void Q_Box::Physics(I3FramePtr)\u2019:\n/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx:960:\n error: \u2018GetMostEnergeticPrimary\u2019 is not a member of \u2018I3MCTreeUtils\u2019\n}}}\n{{{\n/data/user/chraab/icerec-trunk/src/slc-veto/private/slc-veto/Q_Box.cxx:961:\n error: no match for \u2018operator!=\u2019 in \u2018iprimary != TreeBase::Tree<\n I3Particle, I3ParticleID, hash<I3ParticleID> >::end() const()\u2019\n\n}}}\n\nI hope Chang Hyon and Jason won't mind that I cc-ed them.",
"reporter": "chraab",
"cc": "koskinen, cuh136",
"resolution": "fixed",
"_ts": "1431676884131754",
"component": "other",
"summary": "slc-veto: \u2018GetMostEnergeticPrimary\u2019 is not a member of \u2018I3MCTreeUtils\u2019",
"priority": "normal",
"keywords": "slc-veto,I3MCTreeUtils,operator,sandbox",
"time": "2015-05-13T15:13:33",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
| defect | slc veto ‘getmostenergeticprimary’ is not a member of ‘ ’ trac while building slc veto from in rev these two errors appear text data user chraab icerec trunk src slc veto private slc veto q box cxx in member function ‘virtual void q box physics ’ data user chraab icerec trunk src slc veto private slc veto q box cxx error ‘getmostenergeticprimary’ is not a member of ‘ ’ text data user chraab icerec trunk src slc veto private slc veto q box cxx error no match for ‘operator ’ in ‘iprimary treebase tree hash end const ’ i hope chang hyon and jason won t mind that i cc ed them migrated from reported by chraab and owned by json status closed changetime description while building slc veto from in rev these two errors appear n n n data user chraab icerec trunk src slc veto private slc veto q box cxx in n member function void q box physics n data user chraab icerec trunk src slc veto private slc veto q box cxx n error is not a member of n n n data user chraab icerec trunk src slc veto private slc veto q box cxx n error no match for in treebase tree end const n n n ni hope chang hyon and jason won t mind that i cc ed them reporter chraab cc koskinen resolution fixed ts component other summary slc veto is not a member of priority normal keywords slc veto operator sandbox time milestone owner type defect | 1 |
810,534 | 30,247,120,911 | IssuesEvent | 2023-07-06 17:22:36 | insightsengineering/tern | https://api.github.com/repos/insightsengineering/tern | closed | [Bug]: cryptic error message in tern::g_km | bug sme priority | ### What happened?
Reproducible example:
```
> library(tern)
> library(dplyr)
> library(ggplot2)
> library(survival)
> library(grid)
> library(nestcolor)
>
> df <- tern_ex_adtte %>%
+ filter(PARAMCD == "OS") %>%
+ mutate(is_event = CNSR == 0)
> variables <- list(tte = "AVAL", is_event = "is_event", arm = "ARMCD")
>
> df2 <- df[df$ARMCD == "ARM A", ]
>
> res <- g_km(df = df2, variables = variables, annot_coxph = TRUE)
Error in grid::unit(as.numeric(height)/4, grid::unitType(height)) :
'x' and 'units' must have length > 0
```
It makes sense that it fails: with only one group, a hazard ratio cannot be computed. However, the error message is not very helpful. Would you consider adding a small assertion to the function? Something that throws an informative error message if there is only one value in the `arm` variable and `annot_coxph = TRUE`.
thank you
### sessionInfo()
_No response_
### Relevant log output
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct.
### Contribution Guidelines
- [X] I agree to follow this project's Contribution Guidelines.
### Security Policy
- [X] I agree to follow this project's Security Policy. | 1.0 | [Bug]: cryptic error message in tern::g_km - ### What happened?
Reproducible example:
```
> library(tern)
> library(dplyr)
> library(ggplot2)
> library(survival)
> library(grid)
> library(nestcolor)
>
> df <- tern_ex_adtte %>%
+ filter(PARAMCD == "OS") %>%
+ mutate(is_event = CNSR == 0)
> variables <- list(tte = "AVAL", is_event = "is_event", arm = "ARMCD")
>
> df2 <- df[df$ARMCD == "ARM A", ]
>
> res <- g_km(df = df2, variables = variables, annot_coxph = TRUE)
Error in grid::unit(as.numeric(height)/4, grid::unitType(height)) :
'x' and 'units' must have length > 0
```
It makes sense that it fails: with only one group, a hazard ratio cannot be computed. However, the error message is not very helpful. Would you consider adding a small assertion to the function? Something that throws an informative error message if there is only one value in the `arm` variable and `annot_coxph = TRUE`.
thank you
### sessionInfo()
_No response_
### Relevant log output
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct.
### Contribution Guidelines
- [X] I agree to follow this project's Contribution Guidelines.
### Security Policy
- [X] I agree to follow this project's Security Policy. | non_defect | cryptic error message in tern g km what happened reproducible example library tern library dplyr library library survival library grid library nestcolor df filter paramcd os mutate is event cnsr variables list tte aval is event is event arm armcd df res g km df variables variables annot coxph true error in grid unit as numeric height grid unittype height x and units must have length it makes sense that it fails since if there is only one group we cannot compute a hazard ratio but the error message is not very helpful would you consider adding a little assertion in the function something that throws an error message if there is only one value in the arm variable and annot coxph true thank you sessioninfo no response relevant log output no response code of conduct i agree to follow this project s code of conduct contribution guidelines i agree to follow this project s contribution guidelines security policy i agree to follow this project s security policy | 0 |
175,925 | 27,997,683,415 | IssuesEvent | 2023-03-27 09:33:59 | Joystream/pioneer | https://api.github.com/repos/Joystream/pioneer | opened | Show proposals execution ETA before the gracing period | enhancement design scope:proposals | ## Use case
Currently the [Ephesus runtime upgrade proposal](https://pioneerapp.xyz/#/proposals/preview/175) is in its 3rd deciding stage. It would help with planning the release if the proposal page gave an idea of when the upgrade would actually happen (earliest/latest ETA).
## Proposal
Maybe a design inspired by #4006's [solution](https://www.figma.com/file/GlgN8uBRtvtMJtiOsdtDF7/Pioneer-design?node-id=11664%3A445586&t=btpGbvJr9qG6E7JF-1):
 | 1.0 | Show proposals execution ETA before the gracing period - ## Use case
Currently the [Ephesus runtime upgrade proposal](https://pioneerapp.xyz/#/proposals/preview/175) is in its 3rd deciding stage. It would help with planning the release if the proposal page gave an idea of when the upgrade would actually happen (earliest/latest ETA).
## Proposal
Maybe a design inspired by #4006's [solution](https://www.figma.com/file/GlgN8uBRtvtMJtiOsdtDF7/Pioneer-design?node-id=11664%3A445586&t=btpGbvJr9qG6E7JF-1):
 | non_defect | show proposals execution eta before the gracing period use case currently the is in it s deciding stage it would help planning the release if the proposal page would give an idea of when the actual upgrade would actually happen earliest latest eta proposal maybe a design inspired by s | 0 |