Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
28,966 | 5,447,905,018 | IssuesEvent | 2017-03-07 14:44:06 | PowerDNS/pdns | https://api.github.com/repos/PowerDNS/pdns | reopened | Crashed pdns while (bulk) creating zones via the api | auth defect | I was creating 225 zones in pdns using nsedit, which uses the API. That were quite a lot of requests fired at the API.
It seems (now) that during that import, pdns crashed twice. I didn't notice because it restarted quickly..
nsedit would have been doing the following requests:
- PUT on /servers/localhost/zones
- PATCH on /servers/localhost/zones/<zone> for each non-NS/SOA record (4 in this case)
- Rinse and repeat for next zone
I got the following trace:
```
Nov 6 14:05:38 nscache2 pdns[24922]: Got a signal 6, attempting to print trace:
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance() [0x652460]
Nov 6 14:05:38 nscache2 pdns[24922]: /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7f45d1c384a0]
Nov 6 14:05:38 nscache2 pdns[24922]: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x7f45d1c38425]
Nov 6 14:05:38 nscache2 pdns[24922]: /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b) [0x7f45d1c3bb8b]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(SSQLite3::~SSQLite3()+0xab) [0x72458b]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(SSQLite3::~SSQLite3()+0x9) [0x7245e9]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(GSQLBackend::~GSQLBackend()+0x2e) [0x56c36e]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(gSQLite3Backend::~gSQLite3Backend()+0x17) [0x584c67]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(cleanup_backends(UeberBackend*)+0x26) [0x654246]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(UeberBackend::cleanup()+0x78) [0x654878]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(UeberBackend::~UeberBackend()+0x1d) [0x6548bd]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance() [0x635423]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(boost::function2<void, HttpRequest*, HttpResponse*>::operator()(HttpRequest*, HttpResponse*) const+0x18) [0x648af8]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance() [0x644bcc]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(boost::detail::function::void_function_obj_invoker2<boost::_bi::bind_t<void, void (*)(boost::function<void (HttpRequest*, HttpResponse*)>, HttpRequest*, HttpResponse*), boost::_bi::list3<boost::_bi::value<boost::function<void (HttpRequest*, HttpResponse*)> >, boost::arg<1>, boost::arg<2> > >, void, HttpRequest*, HttpResponse*>::invoke(boost::detail::function::function_buffer&, HttpRequest*, HttpResponse*)+0x67) [0x6474f7]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(boost::function2<void, HttpRequest*, HttpResponse*>::operator()(HttpRequest*, HttpResponse*) const+0x18) [0x648af8]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(boost::detail::function::void_function_obj_invoker2<boost::_bi::bind_t<void, void (*)(boost::function<void (HttpRequest*, HttpResponse*)>, YaHTTP::Request*, YaHTTP::Response*), boost::_bi::list3<boost::_bi::value<boost::function<void (HttpRequest*, HttpResponse*)> >, boost::arg<1>, boost::arg<2> > >, void, YaHTTP::Request*, YaHTTP::Response*>::invoke(boost::detail::function::function_buffer&, YaHTTP::Request*, YaHTTP::Response*)+0x67) [0x6475d7]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(WebServer::handleRequest(HttpRequest)+0x239) [0x645289]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(WebServer::serveConnection(Socket*)+0x1a5) [0x646775]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance() [0x646d92]
Nov 6 14:05:38 nscache2 pdns[12820]: Our pdns instance (24922) exited after signal 6
```
I've got no clue where to look, maybe you guys do.
Running: pdns 0.0.20140723.4983.b6b24bc-1
| 1.0 | Crashed pdns while (bulk) creating zones via the api - I was creating 225 zones in pdns using nsedit, which uses the API. That were quite a lot of requests fired at the API.
It seems (now) that during that import, pdns crashed twice. I didn't notice because it restarted quickly..
nsedit would have been doing the following requests:
- PUT on /servers/localhost/zones
- PATCH on /servers/localhost/zones/<zone> for each non-NS/SOA record (4 in this case)
- Rinse and repeat for next zone
I got the following trace:
```
Nov 6 14:05:38 nscache2 pdns[24922]: Got a signal 6, attempting to print trace:
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance() [0x652460]
Nov 6 14:05:38 nscache2 pdns[24922]: /lib/x86_64-linux-gnu/libc.so.6(+0x364a0) [0x7f45d1c384a0]
Nov 6 14:05:38 nscache2 pdns[24922]: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x7f45d1c38425]
Nov 6 14:05:38 nscache2 pdns[24922]: /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b) [0x7f45d1c3bb8b]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(SSQLite3::~SSQLite3()+0xab) [0x72458b]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(SSQLite3::~SSQLite3()+0x9) [0x7245e9]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(GSQLBackend::~GSQLBackend()+0x2e) [0x56c36e]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(gSQLite3Backend::~gSQLite3Backend()+0x17) [0x584c67]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(cleanup_backends(UeberBackend*)+0x26) [0x654246]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(UeberBackend::cleanup()+0x78) [0x654878]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(UeberBackend::~UeberBackend()+0x1d) [0x6548bd]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance() [0x635423]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(boost::function2<void, HttpRequest*, HttpResponse*>::operator()(HttpRequest*, HttpResponse*) const+0x18) [0x648af8]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance() [0x644bcc]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(boost::detail::function::void_function_obj_invoker2<boost::_bi::bind_t<void, void (*)(boost::function<void (HttpRequest*, HttpResponse*)>, HttpRequest*, HttpResponse*), boost::_bi::list3<boost::_bi::value<boost::function<void (HttpRequest*, HttpResponse*)> >, boost::arg<1>, boost::arg<2> > >, void, HttpRequest*, HttpResponse*>::invoke(boost::detail::function::function_buffer&, HttpRequest*, HttpResponse*)+0x67) [0x6474f7]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(boost::function2<void, HttpRequest*, HttpResponse*>::operator()(HttpRequest*, HttpResponse*) const+0x18) [0x648af8]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(boost::detail::function::void_function_obj_invoker2<boost::_bi::bind_t<void, void (*)(boost::function<void (HttpRequest*, HttpResponse*)>, YaHTTP::Request*, YaHTTP::Response*), boost::_bi::list3<boost::_bi::value<boost::function<void (HttpRequest*, HttpResponse*)> >, boost::arg<1>, boost::arg<2> > >, void, YaHTTP::Request*, YaHTTP::Response*>::invoke(boost::detail::function::function_buffer&, YaHTTP::Request*, YaHTTP::Response*)+0x67) [0x6475d7]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(WebServer::handleRequest(HttpRequest)+0x239) [0x645289]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance(WebServer::serveConnection(Socket*)+0x1a5) [0x646775]
Nov 6 14:05:38 nscache2 pdns[24922]: /usr/sbin/pdns_server-instance() [0x646d92]
Nov 6 14:05:38 nscache2 pdns[12820]: Our pdns instance (24922) exited after signal 6
```
I've got no clue where to look, maybe you guys do.
Running: pdns 0.0.20140723.4983.b6b24bc-1
| defect | crashed pdns while bulk creating zones via the api i was creating zones in pdns using nsedit which uses the api that were quite a lot of requests fired at the api it seems now that during that import pdns crashed twice i didn t notice because it restarted quickly nsedit would have been doing the following requests put on servers localhost zones patch on servers localhost zones for each non ns soa record in this case rinse and repeat for next zone i got the following trace nov pdns got a signal attempting to print trace nov pdns usr sbin pdns server instance nov pdns lib linux gnu libc so nov pdns lib linux gnu libc so gsignal nov pdns lib linux gnu libc so abort nov pdns usr sbin pdns server instance nov pdns usr sbin pdns server instance nov pdns usr sbin pdns server instance gsqlbackend gsqlbackend nov pdns usr sbin pdns server instance nov pdns usr sbin pdns server instance cleanup backends ueberbackend nov pdns usr sbin pdns server instance ueberbackend cleanup nov pdns usr sbin pdns server instance ueberbackend ueberbackend nov pdns usr sbin pdns server instance nov pdns usr sbin pdns server instance boost operator httprequest httpresponse const nov pdns usr sbin pdns server instance nov pdns usr sbin pdns server instance boost detail function void function obj httprequest httpresponse boost bi boost arg boost arg void httprequest httpresponse invoke boost detail function function buffer httprequest httpresponse nov pdns usr sbin pdns server instance boost operator httprequest httpresponse const nov pdns usr sbin pdns server instance boost detail function void function obj yahttp request yahttp response boost bi boost arg boost arg void yahttp request yahttp response invoke boost detail function function buffer yahttp request yahttp response nov pdns usr sbin pdns server instance webserver handlerequest httprequest nov pdns usr sbin pdns server instance webserver serveconnection socket nov pdns usr sbin pdns server instance nov pdns our pdns 
instance exited after signal i ve got no clue where to look maybe you guys do running pdns | 1 |
245,762 | 18,793,223,889 | IssuesEvent | 2021-11-08 19:03:17 | Berat-Dzhevdetov/DeemZ | https://api.github.com/repos/Berat-Dzhevdetov/DeemZ | opened | LiteChat documentation | documentation | We are now ready with the project which is real-time communication between the lecturer and the students. And as follows the project must have documentation so just create a simple readme file. | 1.0 | LiteChat documentation - We are now ready with the project which is real-time communication between the lecturer and the students. And as follows the project must have documentation so just create a simple readme file. | non_defect | litechat documentation we are now ready with the project which is real time communication between the lecturer and the students and as follows the project must have documentation so just create a simple readme file | 0 |
104,299 | 16,613,588,391 | IssuesEvent | 2021-06-02 14:16:52 | Thanraj/linux-4.1.15 | https://api.github.com/repos/Thanraj/linux-4.1.15 | opened | CVE-2016-4557 (High) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2016-4557 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-4.1.15/commits/5e3fb3e332499e1ad10a0969e55582af1027b085">5e3fb3e332499e1ad10a0969e55582af1027b085</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/kernel/bpf/verifier.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/kernel/bpf/verifier.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The replace_map_fd_with_map_ptr function in kernel/bpf/verifier.c in the Linux kernel before 4.5.5 does not properly maintain an fd data structure, which allows local users to gain privileges or cause a denial of service (use-after-free) via crafted BPF instructions that reference an incorrect file descriptor.
<p>Publish Date: 2016-05-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4557>CVE-2016-4557</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-4557">https://nvd.nist.gov/vuln/detail/CVE-2016-4557</a></p>
<p>Release Date: 2016-05-23</p>
<p>Fix Resolution: 4.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-4557 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2016-4557 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-4.1.15/commits/5e3fb3e332499e1ad10a0969e55582af1027b085">5e3fb3e332499e1ad10a0969e55582af1027b085</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/kernel/bpf/verifier.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-4.1.15/kernel/bpf/verifier.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The replace_map_fd_with_map_ptr function in kernel/bpf/verifier.c in the Linux kernel before 4.5.5 does not properly maintain an fd data structure, which allows local users to gain privileges or cause a denial of service (use-after-free) via crafted BPF instructions that reference an incorrect file descriptor.
<p>Publish Date: 2016-05-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4557>CVE-2016-4557</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-4557">https://nvd.nist.gov/vuln/detail/CVE-2016-4557</a></p>
<p>Release Date: 2016-05-23</p>
<p>Fix Resolution: 4.5.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files linux kernel bpf verifier c linux kernel bpf verifier c vulnerability details the replace map fd with map ptr function in kernel bpf verifier c in the linux kernel before does not properly maintain an fd data structure which allows local users to gain privileges or cause a denial of service use after free via crafted bpf instructions that reference an incorrect file descriptor publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
7,976 | 7,176,304,247 | IssuesEvent | 2018-01-31 09:36:06 | VolleyManagement/volley-management | https://api.github.com/repos/VolleyManagement/volley-management | opened | Design Localization system | cmp: infrastructure pri: medium type: enhancement | - [ ] Backend localization
- [ ] UI localization
- [ ] Languages: en, ua, ru | 1.0 | Design Localization system - - [ ] Backend localization
- [ ] UI localization
- [ ] Languages: en, ua, ru | non_defect | design localization system backend localization ui localization languages en ua ru | 0 |
17,925 | 3,013,786,614 | IssuesEvent | 2015-07-29 11:13:24 | yawlfoundation/yawl | https://api.github.com/repos/yawlfoundation/yawl | closed | Editor 3.0 Analyse Specification Window does not terminate/close | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. open an existing spec in editor 3.0
2. choose "analyse specification" from menu bar
3. select "stop" on analyse specification window
4. try to close that window (via x)
What is the expected output? What do you see instead?
In my opinion the window should close immediately, but it stays opened until
the editor is closed, the "close" button can not be pushed.
What version of the product are you using? On what operating system?
editor 3.0
Please provide any additional information below.
the existing specification was made with old editor, and has been verified
there.
```
Original issue reported on code.google.com by `anwim...@gmail.com` on 10 Dec 2013 at 8:06 | 1.0 | Editor 3.0 Analyse Specification Window does not terminate/close - ```
What steps will reproduce the problem?
1. open an existing spec in editor 3.0
2. choose "analyse specification" from menu bar
3. select "stop" on analyse specification window
4. try to close that window (via x)
What is the expected output? What do you see instead?
In my opinion the window should close immediately, but it stays opened until
the editor is closed, the "close" button can not be pushed.
What version of the product are you using? On what operating system?
editor 3.0
Please provide any additional information below.
the existing specification was made with old editor, and has been verified
there.
```
Original issue reported on code.google.com by `anwim...@gmail.com` on 10 Dec 2013 at 8:06 | defect | editor analyse specification window does not terminate close what steps will reproduce the problem open an existing spec in editor choose analyse specification from menu bar select stop on analyse specification window try to close that window via x what is the expected output what do you see instead in my opinion the window should close immediately but it stays opened until the editor is closed the close button can not be pushed what version of the product are you using on what operating system editor please provide any additional information below the existing specification was made with old editor and has been verified there original issue reported on code google com by anwim gmail com on dec at | 1 |
189,306 | 15,184,802,655 | IssuesEvent | 2021-02-15 10:04:17 | argoproj/gitops-engine | https://api.github.com/repos/argoproj/gitops-engine | opened | docs: update FAQ | documentation | https://github.com/argoproj/gitops-engine/blob/master/docs/faq.md still refers a lot to the collaboration between Argo and Flux. In #126 I mostly just explained the current situation and how we got there. I didn't change any of the other text as I didn't want to speak for the Argo project and am not really qualified to write a good gitops-engine FAQ.
The page might need an update to reflect the current reality or aspirations. | 1.0 | docs: update FAQ - https://github.com/argoproj/gitops-engine/blob/master/docs/faq.md still refers a lot to the collaboration between Argo and Flux. In #126 I mostly just explained the current situation and how we got there. I didn't change any of the other text as I didn't want to speak for the Argo project and am not really qualified to write a good gitops-engine FAQ.
The page might need an update to reflect the current reality or aspirations. | non_defect | docs update faq still refers a lot to the collaboration between argo and flux in i mostly just explained the current situation and how we got there i didn t change any of the other text as i didn t want to speak for the argo project and am not really qualified to write a good gitops engine faq the page might need an update to reflect the current reality or aspirations | 0 |
66,402 | 6,997,846,331 | IssuesEvent | 2017-12-16 19:34:44 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | reopened | Test failure: System.Net.Http.Functional.Tests.ResponseStreamTest / ReadAsStreamAsync_InvalidServerResponse_ThrowsIOException | area-System.Net.Http os-windows test-run-core | Opened on behalf of @Jiayili1
The test `System.Net.Http.Functional.Tests.ResponseStreamTest/ReadAsStreamAsync_InvalidServerResponse_ThrowsIOException(transferType: ContentLength, transferError: ContentLengthTooLarge)` has failed.
Assert.Throws() Failure\r
Expected: typeof(System.IO.IOException)\r
Actual: typeof(System.Net.Http.HttpRequestException): An error occurred while sending the request.
Stack Trace:
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.HttpClient.<FinishSendAsyncUnbuffered>d__59.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.Functional.Tests.ResponseStreamTest.<ReadAsStreamHelper>d__9.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
Build : Master - 20171018.01 (Core Tests)
Failing configurations:
- Windows.7.Amd64-x86
- Release
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20171018.01/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.ResponseStreamTest~2FReadAsStreamAsync_InvalidServerResponse_ThrowsIOException(transferType:%20ContentLength,%20transferError:%20ContentLengthTooLarge) | 1.0 | Test failure: System.Net.Http.Functional.Tests.ResponseStreamTest / ReadAsStreamAsync_InvalidServerResponse_ThrowsIOException - Opened on behalf of @Jiayili1
The test `System.Net.Http.Functional.Tests.ResponseStreamTest/ReadAsStreamAsync_InvalidServerResponse_ThrowsIOException(transferType: ContentLength, transferError: ContentLengthTooLarge)` has failed.
Assert.Throws() Failure\r
Expected: typeof(System.IO.IOException)\r
Actual: typeof(System.Net.Http.HttpRequestException): An error occurred while sending the request.
Stack Trace:
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.HttpClient.<FinishSendAsyncUnbuffered>d__59.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Net.Http.Functional.Tests.ResponseStreamTest.<ReadAsStreamHelper>d__9.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
Build : Master - 20171018.01 (Core Tests)
Failing configurations:
- Windows.7.Amd64-x86
- Release
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20171018.01/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.ResponseStreamTest~2FReadAsStreamAsync_InvalidServerResponse_ThrowsIOException(transferType:%20ContentLength,%20transferError:%20ContentLengthTooLarge) | non_defect | test failure system net http functional tests responsestreamtest readasstreamasync invalidserverresponse throwsioexception opened on behalf of the test system net http functional tests responsestreamtest readasstreamasync invalidserverresponse throwsioexception transfertype contentlength transfererror contentlengthtoolarge has failed assert throws failure r expected typeof system io ioexception r actual typeof system net http httprequestexception an error occurred while sending the request stack trace at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system net http httpclient d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system net http functional tests responsestreamtest d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task build master core tests failing configurations windows release detail | 0 |
32,185 | 6,733,156,371 | IssuesEvent | 2017-10-18 14:01:03 | hazelcast/hazelcast-csharp-client | https://api.github.com/repos/hazelcast/hazelcast-csharp-client | closed | Leaking Handles when reconnecting | Type: Defect | Handle leaking occur when reconnecting to server
can be reproduced by continually trying to connect to a nonexisting server
> ClientConfig HazelConfig = null;
IHazelcastInstance instance = null;
HazelConfig = new ClientConfig();
HazelConfig.GetNetworkConfig().AddAddress("10.10.10.123");
bool connected = false;
while (!connected)
{
try
{
Program.instance = HazelcastClient.NewHazelcastClient(Program.HazelConfig);
// ConnectToHazelcast();
connected = true;
}
catch (Exception)
{
}
}
| 1.0 | Leaking Handles when reconnecting - Handle leaking occur when reconnecting to server
can be reproduced by continually trying to connect to a nonexisting server
> ClientConfig HazelConfig = null;
IHazelcastInstance instance = null;
HazelConfig = new ClientConfig();
HazelConfig.GetNetworkConfig().AddAddress("10.10.10.123");
bool connected = false;
while (!connected)
{
try
{
Program.instance = HazelcastClient.NewHazelcastClient(Program.HazelConfig);
// ConnectToHazelcast();
connected = true;
}
catch (Exception)
{
}
}
| defect | leaking handles when reconnecting handle leaking occur when reconnecting to server can be reproduced by continually trying to connect to a nonexisting server clientconfig hazelconfig null ihazelcastinstance instance null hazelconfig new clientconfig hazelconfig getnetworkconfig addaddress bool connected false while connected try program instance hazelcastclient newhazelcastclient program hazelconfig connecttohazelcast connected true catch exception | 1 |
75,844 | 26,096,188,149 | IssuesEvent | 2022-12-26 20:12:43 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Upgrade to Jooq 3.17.6 and Spring Boot 3.0.0 -> AbstractMethodError | T: Defect | ### Expected behavior
working as with previous version
### Actual behavior
_No response_
### Steps to reproduce the problem
022-12-26T20:58:48.774+01:00 ERROR 22584 --- [nio-8090-exec-2] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Handler dispatch failed: java.lang.AbstractMethodError: Receiver class org.springframework.boot.autoconfigure.jooq.JooqExceptionTranslator does not define or inherit an implementation of the resolved method 'abstract void end(org.jooq.ExecuteContext)' of interface org.jooq.ExecuteListener.] with root cause
java.lang.AbstractMethodError: Receiver class org.springframework.boot.autoconfigure.jooq.JooqExceptionTranslator does not define or inherit an implementation of the resolved method 'abstract void end(org.jooq.ExecuteContext)' of interface org.jooq.ExecuteListener.
at org.jooq.impl.ExecuteListeners.end(ExecuteListeners.java:270) ~[jooq-3.17.6.jar:na]
at org.jooq.impl.Tools.safeClose(Tools.java:3152) ~[jooq-3.17.6.jar:na]
at org.jooq.impl.Tools.safeClose(Tools.java:3115) ~[jooq-3.17.6.jar:na]
### jOOQ Version
3.17.6
### Database product and version
MySql 8
### Java Version
17
### OS Version
Version 10.0.22621.963
### JDBC driver name and version (include name if unofficial driver)
mysql:mysql-connector-java:8.0.13 | 1.0 | Upgrade to Jooq 3.17.6 and Spring Boot 3.0.0 -> AbstractMethodError | defect | 1
44,932 | 12,468,189,978 | IssuesEvent | 2020-05-28 18:24:28 | networkx/networkx | https://api.github.com/repos/networkx/networkx | closed | I think `nx.vote_rank` make wrong result. | Defect | As the `vote_rank` algorithm is described in the original paper [Identifying a set of influential spreaders in complex networks](https://www.nature.com/articles/srep27823), **the voting ability of already elected nodes has to be zero**.
But, in `networkx.vote_rank`, the voting ability of an already-elected node can become negative.
To check whether the voting ability can go negative, I copied the code of `nx.vote_rank` from the [networkx documentation](https://networkx.github.io/documentation/stable/_modules/networkx/algorithms/centrality/voterank_alg.html#voterank) and added a few lines that print any node with a negative value.
```python
import networkx as nx

def voterank(G, number_of_nodes=None, max_iter=10000):
    voterank = []
    if len(G) == 0:
        return voterank
    if number_of_nodes is None or number_of_nodes > len(G):
        number_of_nodes = len(G)
    avgDegree = sum(deg for _, deg in G.degree()) / float(len(G))
    # step 1 - initiate all nodes to (0,1) (score, voting ability)
    for _, v in G.nodes(data=True):
        v['voterank'] = [0, 1]
    # Repeat steps 1b to 4 until num_seeds are elected.
    for _ in range(max_iter):
        # step 1b - reset rank
        for _, v in G.nodes(data=True):
            v['voterank'][0] = 0
        ####################################
        ####################################
        # this code was added by me to check if voting ability is negative
        for n in G:
            node_voting_ability = G.nodes[n]['voterank'][1]
            if node_voting_ability < 0.0:
                print(n, node_voting_ability)
        ####################################
        ####################################
        # step 2 - vote
        for n, nbr in G.edges():
            G.nodes[n]['voterank'][0] += G.nodes[nbr]['voterank'][1]
            G.nodes[nbr]['voterank'][0] += G.nodes[n]['voterank'][1]
        for n in voterank:
            G.nodes[n]['voterank'][0] = 0
        # step 3 - select top node
        n, value = max(G.nodes(data=True), key=lambda x: x[1]['voterank'][0])
        if value['voterank'][0] == 0:
            return voterank
        voterank.append(n)
        if len(voterank) >= number_of_nodes:
            return voterank
        # weaken the selected node
        G.nodes[n]['voterank'] = [0, 0]
        # step 4 - update voterank properties
        for nbr in G.neighbors(n):
            G.nodes[nbr]['voterank'][1] -= 1 / avgDegree
    return voterank


N = 10
G = nx.scale_free_graph(n=N, seed=0)
# Digraph => Graph
G = nx.Graph(G)
# remove self-loops
G.remove_edges_from([(u, v) for u, v in G.edges() if u == v])
assert nx.is_connected(G)
voterank(G)
```
The output of the code execution is below. Some nodes clearly have negative values, and the nodes with negative voting ability are exactly the already-elected nodes.
Therefore, the result of `nx.vote_rank` can differ slightly from the original vote rank algorithm.
```
1 -0.3333333333333333
1 -0.3333333333333333
1 -0.6666666666666666
2 -0.3333333333333333
7 -0.3333333333333333
```
If I am wrong, please let me know.
Thanks, as always, for your support and for `networkx`.
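A minimal sketch of the fix the paper's rule suggests (illustrative only, not the actual networkx patch): floor the voting ability at zero when weakening a neighbour in step 4, so an elected node can never vote with negative weight.

```python
def weaken_neighbors(voting_ability, neighbors, avg_degree):
    """Reduce each neighbor's voting ability by 1/avg_degree, floored at 0."""
    for nbr in neighbors:
        voting_ability[nbr] = max(0.0, voting_ability[nbr] - 1.0 / avg_degree)

# node 1 would go negative (0.2 - 1/3) without the clamp
ability = {1: 0.2, 2: 1.0, 3: 0.0}
weaken_neighbors(ability, [1, 2, 3], avg_degree=3.0)
assert ability[1] == 0.0 and ability[3] == 0.0
assert abs(ability[2] - (1.0 - 1.0 / 3.0)) < 1e-12
```

With this clamp, the debug loop above would never print a node, because no voting ability can drop below zero.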
| 1.0 | I think `nx.vote_rank` make wrong result. | defect | 1
52,413 | 13,224,720,010 | IssuesEvent | 2020-08-17 19:42:31 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | Neutrino simulation event has a pulse width of zero (Trac #2180) | Incomplete Migration Migrated from Trac combo simulation defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2180">https://code.icecube.wisc.edu/projects/icecube/ticket/2180</a>, reported by lwille and owned by juancarlos</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-03-27T16:25:07",
"_ts": "1553703907985721",
"description": "I'm running into a Monopod error with about 10% of the files in Nancy's NuGen_new simulation set. The error is \"FATAL (millipede): Assertion failed: p->GetWidth() > 0 (MillipedeDOMCacheMap.cxx:379 in void MillipedeDOMCacheMap::UpdateData(const I3TimeWindow&, const I3RecoPulseSeriesMap&, const I3TimeWindowSeriesMap&, double, double, double, bool))\".\n\nLooking into this error, it appears that about 10% of the time, an event will have a single pulse with a width reported as 0. I'm uncertain if this is an issue with the simulation file processing or something else. An example of an event with a zero pulse width is here `/data/ana/Cscd/StartingEvents/NuGen_new/NuTau/medium_energy/IC86_flasher_p1=0.3_p2=0.0_domeff_081/l2/1/l2_00000248.i3.zst` Event 450.",
"reporter": "lwille",
"cc": "jvansanten, nwhitehorn",
"resolution": "duplicate",
"time": "2018-08-08T18:33:10",
"component": "combo simulation",
"summary": "Neutrino simulation event has a pulse width of zero",
"priority": "normal",
"keywords": "",
"milestone": "Vernal Equinox 2019",
"owner": "juancarlos",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Neutrino simulation event has a pulse width of zero (Trac #2180) | defect | 1
36,600 | 8,030,786,494 | IssuesEvent | 2018-07-27 21:03:06 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | PrimeFaces cannot find default message bundles when deployed as an OSGi bundle | defect | Getting messages from the default `ResourceBundle`s fails in an OSGi environment because the default `ResourceBundle`s cannot be found. For example, the following calls will return `null` when called from an OSGi WAB or bundle that depends on the PrimeFaces bundle:
```
MessageFactory.getMessage(UIData.ARIA_NEXT_PAGE_LABEL, new Object[]{});
MessageFactory.getMessage("javax.faces.converter.DateTimeConverter.DATE", FacesMessage.SEVERITY_ERROR, params);
```
The reason these calls return `null` is that `MessageFactory` uses the `Thread.currentThread().getContextClassLoader()` to look for the bundles. In OSGi, that `ClassLoader` is the loader of the calling WAB, but it does not have access to the default `ResourceBundle`s that are contained in PrimeFaces, Mojarra, and MyFaces.
In order to fix this problem, `MessageFactory` should look for `ResourceBundle`s using both the thread-context loader and the loader of the bundle containing the default resource (such as the PrimeFaces bundles' loader or Mojarra's loader). | 1.0 | PrimeFaces cannot find default message bundles when deployed as an OSGi bundle | defect | 1
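The proposed fix amounts to a fallback chain over class loaders: try the caller's loader first, then the loader that ships the default resources. A hedged, language-neutral sketch of that lookup order (shown in Python for brevity; the real fix is Java, and the bundle contents here are made up):

```python
def lookup_message(key, lookups):
    """Return the first non-None hit from an ordered chain of lookups."""
    for lookup in lookups:
        value = lookup(key)
        if value is not None:
            return value
    return None

caller_bundle = {}                              # thread-context loader: empty
primefaces_bundle = {"aria.next": "Next Page"}  # hypothetical default bundle

msg = lookup_message("aria.next", [caller_bundle.get, primefaces_bundle.get])
```

Because the chain only falls through on a miss, an application that overrides a message in its own bundle still shadows the framework default.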
30,668 | 7,241,653,828 | IssuesEvent | 2018-02-14 02:29:19 | elastic/logstash | https://api.github.com/repos/elastic/logstash | closed | Cleanup doc repos to remove duplicated content | code cleanup docs | Currently, the doc build is pulling files from both the logstash and logstash-doc repos:
For 5.0 and later, the build pulls:
- index files and static doc source files from the `logstash` repo
- plugin source files from the `logstash-docs` repo
The files in `logstash-docs` under `/docs/static` are no longer used by the build.
For 1.5 through 2.4, the build pulls:
- index files from the `logstash` repo.
- static docs and plugin docs from the `logstash-docs` repo.
Eventually, we want to clean up branches 1.5 through 2.4 so that we follow the same build strategy that we follow for 5.0+.
TO DO list:
- [ ] For version 1.5 through 2.4, make sure the `docs/static` folder in the `logstash` repo contains the authoritative version of the files. Right now there are a few doc changes that never got into the `logstash-docs`. I need to confirm that the opposite has not happened with the `logstash-docs` repo. It should not have changes that are not also in the `logstash` repo.
- [x] Update the build to pull the static doc source files from the `logstash` repo instead of `logstash-docs`.
- [ ] Delete the `/docs/static` folder from all branches of the `logstash-docs` repo starting with version 1.5 - done for all 5.x branches.
- [ ] Delete all index files from the `logstash-docs` repo.
| 1.0 | Cleanup doc repos to remove duplicated content | non_defect | 0
75,451 | 25,854,663,961 | IssuesEvent | 2022-12-13 12:55:17 | matrix-org/synapse | https://api.github.com/repos/matrix-org/synapse | closed | Complement `TestPartialStateJoin/Room_aliases_can_be_added_and_queried_during_a_resync` is flakey | A-Federated-Join T-Defect Z-Flake Z-Dev-Wishlist A-Testing | [TestPartialStateJoin/Room_aliases_can_be_added_and_queried_during_a_resync](https://github.com/matrix-org/synapse/actions/runs/3538420582/jobs/5939246922)
```
client.go:599: [CSAPI] POST hs1/_matrix/client/v3/register => 200 OK (32.809293ms)
client.go:599: [CSAPI] GET hs1/_matrix/client/v3/capabilities => 200 OK (8.09415ms)
server.go:170: Creating room !0-UceOsu61aV8dXGXSxM:host.docker.internal:33837 with version 9
federation_room_join_partial_state_test.go:3468: Registered state_ids handler for event $g53VZ818ZJcgniyOAKqjgOgqN7Z7aEPYJ_Cq9a5NbVE
federation_room_join_partial_state_test.go:3509: Registered /state handler for event $g53VZ818ZJcgniyOAKqjgOgqN7Z7aEPYJ_Cq9a5NbVE
client.go:599: [CSAPI] POST hs1/_matrix/client/v3/join/!0-UceOsu61aV8dXGXSxM:host.docker.internal:33837 => 502 Bad Gateway (6.328761ms)
federation_room_join_partial_state_test.go:3380: CSAPI.MustDoFunc POST http://localhost:49420/_matrix/client/v3/join/%210-UceOsu61aV8dXGXSxM:host.docker.internal:33837?server_name=host.docker.internal%3A33837 returned non-2xx code: 502 Bad Gateway - body: {"errcode":"M_UNKNOWN","error":"Failed to make_join via any server"}
```
Why did we get a 502?
```
2022-11-24T07:20:08.4197065Z nginx | 192.168.16.1 - - [24/Nov/2022:07:14:39 +0000] "POST /_matrix/client/v3/join/%210-UceOsu61aV8dXGXSxM:host.docker.internal:33837?server_name=host.docker.internal%3A33837 HTTP/1.1" 502 95 "-" "Go-http-client/1.1"
2022-11-24T07:20:08.4175716Z synapse_main | 2022-11-24 07:14:39,151 - synapse.http.servlet - 668 - WARNING - POST-516 - Unable to parse JSON from POST /_matrix/client/v3/join/%210-UceOsu61aV8dXGXSxM:host.docker.internal:33837?server_name=host.docker.internal%3A33837 response: Expecting value: line 1 column 1 (char 0) (b'')
2022-11-24T07:20:08.4176297Z synapse_main | 2022-11-24 07:14:39,155 - synapse.federation.federation_client - 850 - INFO - POST-516 - make_join: Not retrying server host.docker.internal:33837 because we tried it recently retry_last_ts=1669273939360 and we won't check for another retry_interval=600000ms.
2022-11-24T07:20:08.4177016Z synapse_main | 2022-11-24 07:14:39,156 - synapse.http.server - 108 - INFO - POST-516 - <SynapseRequest at 0x7f785e4a0c40 method='POST' uri='/_matrix/client/v3/join/%210-UceOsu61aV8dXGXSxM:host.docker.internal:33837?server_name=host.docker.internal%3A33837' clientproto='HTTP/1.0' site='8080'> SynapseError: 502 - Failed to make_join via any server
2022-11-24T07:20:08.4177822Z synapse_main | 2022-11-24 07:14:39,156 - synapse.access.http.8080 - 460 - INFO - POST-516 - ::ffff:127.0.0.1 - 8080 - {@t40alice:hs1} Processed request: 0.005sec/0.000sec (0.003sec, 0.000sec) (0.001sec/0.001sec/8) 84B 502 "POST /_matrix/client/v3/join/%210-UceOsu61aV8dXGXSxM:host.docker.internal:33837?server_name=host.docker.internal%3A33837 HTTP/1.0" "Go-http-client/1.1" [0 dbevts]
```
In particular:
> Unable to parse JSON from POST /_matrix/client/v3/join/%210-UceOsu61aV8dXGXSxM:host.docker.internal:33837?server_name=host.docker.internal%3A33837 response: Expecting value: line 1 column 1 (char 0) (b'')
erm wat. Is this a complement test bug that I wrote? | 1.0 | Complement `TestPartialStateJoin/Room_aliases_can_be_added_and_queried_during_a_resync` is flakey | defect | 1
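The second Synapse log line is the per-destination backoff: having failed recently, the server refuses to retry `host.docker.internal` until `retry_interval` has elapsed since `retry_last_ts`. A hedged sketch of that check (names are illustrative, not Synapse's actual code):

```python
import time

def should_retry(retry_last_ts_ms, retry_interval_ms, now_ms=None):
    """Retry a destination only once retry_interval has elapsed."""
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    return now_ms - retry_last_ts_ms >= retry_interval_ms

# with retry_interval=600000ms, a failure a moment ago blocks the next try
assert should_retry(650_000, 600_000, now_ms=700_000) is False
```

This would explain the flake: an earlier transient failure against the Complement homeserver trips the limiter, so the subsequent `make_join` is rejected without even being attempted, yielding the 502.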
52,272 | 13,731,722,334 | IssuesEvent | 2020-10-05 02:08:43 | mwilliams7197/bootstrap | https://api.github.com/repos/mwilliams7197/bootstrap | opened | WS-2019-0491 (High) detected in handlebars-4.0.12.tgz | security vulnerability | ## WS-2019-0491 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.12.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.12.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.12.tgz</a></p>
<p>Path to dependency file: bootstrap/package.json</p>
<p>Path to vulnerable library: bootstrap/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-2.0.4.tgz (Root Library)
- istanbul-api-2.0.6.tgz
- istanbul-reports-2.0.1.tgz
- :x: **handlebars-4.0.12.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 4.4.5 is vulnerable to Denial of Service. The package's parser may be forced into an endless loop while processing specially-crafted templates. This may allow attackers to exhaust system resources leading to Denial of Service.
<p>Publish Date: 2019-11-04
<p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-11-04</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.0.12","isTransitiveDependency":true,"dependencyTree":"karma-coverage-istanbul-reporter:2.0.4;istanbul-api:2.0.6;istanbul-reports:2.0.1;handlebars:4.0.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.4.5"}],"vulnerabilityIdentifier":"WS-2019-0491","vulnerabilityDetails":"handlebars before 4.4.5 is vulnerable to Denial of Service. The package\u0027s parser may be forced into an endless loop while processing specially-crafted templates. This may allow attackers to exhaust system resources leading to Denial of Service.","vulnerabilityUrl":"https://github.com/handlebars-lang/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | WS-2019-0491 (High) detected in handlebars-4.0.12.tgz - ## WS-2019-0491 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.12.tgz</b></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.12.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.12.tgz</a></p>
<p>Path to dependency file: bootstrap/package.json</p>
<p>Path to vulnerable library: bootstrap/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-2.0.4.tgz (Root Library)
- istanbul-api-2.0.6.tgz
- istanbul-reports-2.0.1.tgz
- :x: **handlebars-4.0.12.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 4.4.5 is vulnerable to Denial of Service. The package's parser may be forced into an endless loop while processing specially-crafted templates. This may allow attackers to exhaust system resources leading to Denial of Service.
<p>Publish Date: 2019-11-04
<p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b>WS-2019-0491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1300">https://www.npmjs.com/advisories/1300</a></p>
<p>Release Date: 2019-11-04</p>
<p>Fix Resolution: handlebars - 4.4.5</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.0.12","isTransitiveDependency":true,"dependencyTree":"karma-coverage-istanbul-reporter:2.0.4;istanbul-api:2.0.6;istanbul-reports:2.0.1;handlebars:4.0.12","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 4.4.5"}],"vulnerabilityIdentifier":"WS-2019-0491","vulnerabilityDetails":"handlebars before 4.4.5 is vulnerable to Denial of Service. The package\u0027s parser may be forced into an endless loop while processing specially-crafted templates. This may allow attackers to exhaust system resources leading to Denial of Service.","vulnerabilityUrl":"https://github.com/handlebars-lang/handlebars.js/commit/8d5530ee2c3ea9f0aee3fde310b9f36887d00b8b","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_defect | ws high detected in handlebars tgz ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file bootstrap package json path to vulnerable library bootstrap node modules handlebars package json dependency hierarchy karma coverage istanbul reporter tgz root library istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library vulnerability details handlebars before is vulnerable to denial of service the package s parser may be forced into an endless loop while processing specially crafted templates this may allow attackers to exhaust system resources leading to denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope 
unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails handlebars before is vulnerable to denial of service the package parser may be forced into an endless loop while processing specially crafted templates this may allow attackers to exhaust system resources leading to denial of service vulnerabilityurl | 0 |
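The advisory in the record above reduces to a plain version comparison: any handlebars release strictly below the fixed version 4.4.5 is affected. A minimal sketch of that check, assuming nothing about WhiteSource's actual detection logic (`parseVersion` and `isVulnerable` are hypothetical helper names):

```typescript
// Flag handlebars versions strictly below the fixed release 4.4.5.
// Illustrative only; not the scanner's real detection code.
const FIXED = [4, 4, 5];

function parseVersion(v: string): number[] {
  // "4.0.12" -> [4, 0, 12]; missing or malformed parts default to 0
  const parts = v.split(".").map((p) => parseInt(p, 10) || 0);
  while (parts.length < 3) parts.push(0);
  return parts;
}

function isVulnerable(version: string): boolean {
  const v = parseVersion(version);
  for (let i = 0; i < 3; i++) {
    if (v[i] < FIXED[i]) return true;   // strictly below the fix
    if (v[i] > FIXED[i]) return false;  // already past the fix
  }
  return false; // exactly 4.4.5 is the fixed release
}
```

Per the suggested fix in the record, upgrading the transitive dependency to 4.4.5 or later resolves the finding.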
53,903 | 23,098,841,535 | IssuesEvent | 2022-07-26 22:58:40 | microsoft/botbuilder-dotnet | https://api.github.com/repos/microsoft/botbuilder-dotnet | closed | FormatDateTime adaptive expression fails to parse if input is DateTimeOffset type | bug customer-reported Bot Services customer-replied-to | ## Version
4.16.1
## Describe the bug
If you set a memory scope variable to be a `DateTimeOffset` type, and then try and use that variable in an expression containing `formatDateTime`, the `formatDateTime` fails at the following line.
https://github.com/microsoft/botbuilder-dotnet/blob/402bc02b4cbbd2f4ec359134640e99211367e4a5/libraries/AdaptiveExpressions/BuiltinFunctions/FormatDateTime.cs#L50
It's because the parser only checks for `string` and `DateTime` but not `DateTimeOffset`.
## Expected behavior
Expect to be able to format a `DateTimeOffset` variable in the same way as `DateTime`.
## Screenshots
n/a
## Additional context
Please ping me for additional context, if needed. Thanks! | 1.0 | FormatDateTime adaptive expression fails to parse if input is DateTimeOffset type - ## Version
4.16.1
## Describe the bug
If you set a memory scope variable to be a `DateTimeOffset` type, and then try and use that variable in an expression containing `formatDateTime`, the `formatDateTime` fails at the following line.
https://github.com/microsoft/botbuilder-dotnet/blob/402bc02b4cbbd2f4ec359134640e99211367e4a5/libraries/AdaptiveExpressions/BuiltinFunctions/FormatDateTime.cs#L50
It's because the parser only checks for `string` and `DateTime` but not `DateTimeOffset`.
## Expected behavior
Expect to be able to format a `DateTimeOffset` variable in the same way as `DateTime`.
## Screenshots
n/a
## Additional context
Please ping me for additional context, if needed. Thanks! | non_defect | formatdatetime adaptive expression fails to parse if input is datetimeoffset type version describe the bug if you set a memory scope variable to be a datetimeoffset type and then try and use that variable in an expression containing formatdatetime the formatdatetime fails at the following line it s because the parser only checks for string and datetime but not datetimeoffset expected behavior expect to be able to format a datetimeoffset variable in the same way as datetime screenshots n a additional context please ping me for additional context if needed thanks | 0 |
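The bug in the record above is a type-check gap: the parser accepts `string` and `DateTime` but not `DateTimeOffset`. The original code is C#, but the same class of bug can be sketched in TypeScript, with `DateLike` standing in for the wrapper type the parser forgot to handle (all names here are hypothetical, not the AdaptiveExpressions API):

```typescript
// TypeScript analogue of the parsing gap: strings and Dates are recognized,
// but a wrapper type slips through unless checked explicitly.
interface DateLike {
  toDate(): Date;
}

function toDateValue(input: unknown): Date {
  if (typeof input === "string") return new Date(input);
  if (input instanceof Date) return input;
  // The fix: also accept wrapper objects that carry a date
  // (the analogue of adding a DateTimeOffset branch).
  if (typeof input === "object" && input !== null && "toDate" in input) {
    return (input as DateLike).toDate();
  }
  throw new Error("unsupported input type"); // the pre-fix failure mode
}
```

Without the third branch, the wrapper falls through to the error, which mirrors `formatDateTime` failing on a `DateTimeOffset` variable.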
55,575 | 14,563,446,930 | IssuesEvent | 2020-12-17 02:32:31 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | 508-defect-2 [COGNITION, FOCUS MANAGEMENT]: When focus moves to the map, the focus halo should be visible and content announced to screen reader | 508-defect-2 508-issue-cognition 508-issue-focus-mgmt 508/Accessibility vsa vsa-facilities | # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
<hr/>
## Point of Contact
**VFS Point of Contact:** Jennifer
## User Story or Problem Statement
As a user who navigates by keyboard, I expect to see a focus outline as a tab through the screen, so I may orient myself in the content.
## Details
When tabbing through the screen content, moving from the search results to the map, focus moves from the last search result item to the `<canvas>` element for the map, but there is no focus halo. Visible focus indicators help anyone who relies on the keyboard to operate the page, by letting them visually determine the component on which keyboard operations will interact at any point in time. People with attention limitations, short term memory limitations, or limitations in executive processes benefit by being able to discover where the focus is located.
Additionally, using the screen readers, the map is read only as "clickable" (if it is announced at all) when focus is on it also without the focus halo. It would be helpful to have an `aria-label="Search results map area"` on the `<canvas>` element.
## Acceptance Criteria
- [ ] When focus is moved to keyboard operable user interface element, the focus indicator is visible for sighted users.
- [ ] When focus is on the map, the screen reader hears "Search results map area"
## Environment
* Operating System: all
* Browser: all
* Screen reading device: all
* Server destination: staging
## Steps to Recreate
1. Enter `https://staging.va.gov/find-locations/?address=04101&context=Portland%2C%20Maine%2004101%2C%20United%20States&facilityType=health&location=43.66%2C-70.25&page=1&serviceType` in browser
2. Tab through the screen
3. At the last item in the search results, tab to the next, and notice it is not visible where focus is
4. Verify that the map is not announced to the screen reader
## WCAG or Vendor Guidance (optional)
* [WCAG 2.0 Level AA - 2.4.7 Focus Visible](https://www.w3.org/TR/UNDERSTANDING-WCAG20/navigation-mechanisms-focus-visible.html) - Any keyboard operable user interface has a mode of operation where the keyboard focus indicator is visible.
## Screenshots or Trace Logs
Using `document.activeElement` to uncover where focus is after the last link in the search results, "Mental health: 207-623-8411 x5515" reveals focus is on the `canvas.mapboxgl-canvas' and without a visible indicator of focus.

| 1.0 | 508-defect-2 [COGNITION, FOCUS MANAGEMENT]: When focus moves to the map, the focus halo should be visible and content announced to screen reader - # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
<hr/>
## Point of Contact
**VFS Point of Contact:** Jennifer
## User Story or Problem Statement
As a user who navigates by keyboard, I expect to see a focus outline as a tab through the screen, so I may orient myself in the content.
## Details
When tabbing through the screen content, moving from the search results to the map, focus moves from the last search result item to the `<canvas>` element for the map, but there is no focus halo. Visible focus indicators help anyone who relies on the keyboard to operate the page, by letting them visually determine the component on which keyboard operations will interact at any point in time. People with attention limitations, short term memory limitations, or limitations in executive processes benefit by being able to discover where the focus is located.
Additionally, using the screen readers, the map is read only as "clickable" (if it is announced at all) when focus is on it also without the focus halo. It would be helpful to have an `aria-label="Search results map area"` on the `<canvas>` element.
## Acceptance Criteria
- [ ] When focus is moved to keyboard operable user interface element, the focus indicator is visible for sighted users.
- [ ] When focus is on the map, the screen reader hears "Search results map area"
## Environment
* Operating System: all
* Browser: all
* Screen reading device: all
* Server destination: staging
## Steps to Recreate
1. Enter `https://staging.va.gov/find-locations/?address=04101&context=Portland%2C%20Maine%2004101%2C%20United%20States&facilityType=health&location=43.66%2C-70.25&page=1&serviceType` in browser
2. Tab through the screen
3. At the last item in the search results, tab to the next, and notice it is not visible where focus is
4. Verify that the map is not announced to the screen reader
## WCAG or Vendor Guidance (optional)
* [WCAG 2.0 Level AA - 2.4.7 Focus Visible](https://www.w3.org/TR/UNDERSTANDING-WCAG20/navigation-mechanisms-focus-visible.html) - Any keyboard operable user interface has a mode of operation where the keyboard focus indicator is visible.
## Screenshots or Trace Logs
Using `document.activeElement` to uncover where focus is after the last link in the search results, "Mental health: 207-623-8411 x5515" reveals focus is on the `canvas.mapboxgl-canvas' and without a visible indicator of focus.

| defect | defect when focus moves to the map the focus halo should be visible and content announced to screen reader feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements definition of done review and acknowledge feedback fix and or document decisions made accessibility specialist will close ticket after reviewing documented decisions validating fix point of contact vfs point of contact jennifer user story or problem statement as a user who navigates by keyboard i expect to see a focus outline as a tab through the screen so i may orient myself in the content details when tabbing through the screen content moving from the search results to the map focus moves from the last search result item to the element for the map but there is no focus halo visible focus indicators help anyone who relies on the keyboard to operate the page by letting them visually determine the component on which keyboard operations will interact at any point in time people with attention limitations short term memory limitations or limitations in executive processes benefit by being able to discover where the focus is located additionally using the screen readers the map is read only as clickable if it is announced at all when focus is on it also without the focus halo it would be helpful to have an aria label search results map area on the element acceptance criteria when focus is moved to keyboard operable user interface element the focus indicator is visible for sighted users when focus is on the map the screen reader hears search results map area environment operating system all browser all screen reading device all server destination staging steps to recreate enter in browser tab through the screen at the last item in the search results tab to the next and notice it is not visible where focus is verify that the map is not announced to the screen reader wcag or vendor guidance optional any keyboard 
operable user interface has a mode of operation where the keyboard focus indicator is visible screenshots or trace logs using document activeelement to uncover where focus is after the last link in the search results mental health reveals focus is on the canvas mapboxgl canvas and without a visible indicator of focus | 1 |
6,673 | 23,702,787,541 | IssuesEvent | 2022-08-29 20:44:05 | pulumi/pulumi | https://api.github.com/repos/pulumi/pulumi | closed | [Automation API] Multiple parallel calls to `runPulumiCommand` fail with empty stdout | kind/bug area/sdks language/javascript area/automation-api | ### What happened?
When writing reproduction tests for https://github.com/pulumi/pulumi/issues/5449, after roughly 5-6 successful calls to `pulumi version` when instantiating a `LocalWorkspace`, the calls begin to fail due to receiving empty stdout.
### Steps to reproduce
Asynchronously create multiple stacks in localworkspaces:
```typescript
const stackNames = Array.from(Array(10).keys()).map(_ => fullyQualifiedStackName(getTestOrg(), projectName, `int_test${getTestSuffix()}`));
const stacks = await Promise.all(stackNames.map(async stackName => LocalWorkspace.createStack({ stackName, projectName, program })));
```
### Expected Behavior
I would still expect these to fail once `up`'d, but the local initialization should succeed.
### Actual Behavior
Automation API fails to parse semver as stdout is empty.
### Output of `pulumi about`
_No response_
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
| 1.0 | [Automation API] Multiple parallel calls to `runPulumiCommand` fail with empty stdout - ### What happened?
When writing reproduction tests for https://github.com/pulumi/pulumi/issues/5449, after roughly 5-6 successful calls to `pulumi version` when instantiating a `LocalWorkspace`, the calls begin to fail due to receiving empty stdout.
### Steps to reproduce
Asynchronously create multiple stacks in localworkspaces:
```typescript
const stackNames = Array.from(Array(10).keys()).map(_ => fullyQualifiedStackName(getTestOrg(), projectName, `int_test${getTestSuffix()}`));
const stacks = await Promise.all(stackNames.map(async stackName => LocalWorkspace.createStack({ stackName, projectName, program })));
```
### Expected Behavior
I would still expect these to fail once `up`'d, but the local initialization should succeed.
### Actual Behavior
Automation API fails to parse semver as stdout is empty.
### Output of `pulumi about`
_No response_
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
| non_defect | multiple parallel calls to runpulumicommand fail with empty stdout what happened when writing reproduction tests for after roughly successful calls to pulumi version when instantiating a localworkspace the calls begin to fail due to receiving empty stdout steps to reproduce asynchronously create multiple stacks in localworkspaces typescript const stacknames array from array keys map fullyqualifiedstackname gettestorg projectname int test gettestsuffix const stacks await promise all stacknames map async stackname localworkspace createstack stackname projectname program expected behavior i would still expect these to fail once up d but the local initialization should succeed actual behavior automation api fails to parse semver as stdout is empty output of pulumi about no response additional context no response contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already | 0 |
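One workaround for the race described in the record above is to serialize workspace setup instead of firing every CLI invocation at once via `Promise.all`. A generic sketch of that pattern (`runSequentially` is a hypothetical helper, not part of the Automation API); each factory is awaited before the next one starts:

```typescript
// Run async task factories one at a time, collecting results in order.
// At most one underlying CLI call is in flight at any moment.
async function runSequentially<T>(factories: Array<() => Promise<T>>): Promise<T[]> {
  const results: T[] = [];
  for (const make of factories) {
    results.push(await make()); // wait for this one before starting the next
  }
  return results;
}
```

With the Automation API this would wrap the stack creation from the reproduction, e.g. `runSequentially(stackNames.map(stackName => () => LocalWorkspace.createStack({ stackName, projectName, program })))`, trading parallelism for reliable `pulumi version` output.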
14,085 | 2,789,890,600 | IssuesEvent | 2015-05-08 22:12:28 | google/google-visualization-api-issues | https://api.github.com/repos/google/google-visualization-api-issues | opened | Ctrl-Click on MacBook does not go up the treemap | Priority-Medium Type-Defect | Original [issue 442](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=442) created by orwant on 2010-10-23T11:13:37.000Z:
On a treemap, "the default behavior is ... to move back up the tree when a user right-clicks the graph." Right-click is emulated with Ctrl-Click on Macbooks. This doesn't seem to work (MacOSX 10.6.4, Chrome 7.0.517.41 or Safari 5.0.2).
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. Using a macbook, visit the treemap demo (http://code.google.com/apis/visualization/documentation/gallery/treemap.html)
2. Click on "Asia".
3. Try and get back to "World". I can't figure out how.
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
TreeMap
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
NO
<b>What operating system and browser are you using?</b>
MacOSX 10.6.4, Chrome 7.0.517.41 or Safari 5.0.2
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| 1.0 | Ctrl-Click on MacBook does not go up the treemap - Original [issue 442](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=442) created by orwant on 2010-10-23T11:13:37.000Z:
On a treemap, "the default behavior is ... to move back up the tree when a user right-clicks the graph." Right-click is emulated with Ctrl-Click on Macbooks. This doesn't seem to work (MacOSX 10.6.4, Chrome 7.0.517.41 or Safari 5.0.2).
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. Using a macbook, visit the treemap demo (http://code.google.com/apis/visualization/documentation/gallery/treemap.html)
2. Click on "Asia".
3. Try and get back to "World". I can't figure out how.
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
TreeMap
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
NO
<b>What operating system and browser are you using?</b>
MacOSX 10.6.4, Chrome 7.0.517.41 or Safari 5.0.2
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| defect | ctrl click on macbook does not go up the treemap original created by orwant on on a treemap quot the default behavior is to move back up the tree when a user right clicks the graph quot right click is emulated with ctrl click on macbooks this doesn t seem to work macosx chrome or safari what steps will reproduce the problem please provide a link to a demonstration page if at all possible or attach code using a macbook visit the treemap demo click on quot asia quot try and get back to quot world quot i can t figure out how what component is this issue related to piechart linechart datatable query etc treemap are you using the test environment version if you are not sure answer no no what operating system and browser are you using macosx chrome or safari for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved | 1 |
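The record above hinges on an input mapping: on a one-button Mac trackpad, "right-click" arrives either as a real secondary-button click or as Ctrl plus the primary button, and both should trigger the treemap's "go up" action. A sketch of the predicate (`shouldGoUp` is a hypothetical name, not part of the Visualization API):

```typescript
// Treat both a secondary-button click and a Ctrl+primary click as the
// "go up one level" gesture, matching macOS right-click emulation.
interface ClickInfo {
  button: number;   // 0 = primary button, 2 = secondary button
  ctrlKey: boolean; // true when Ctrl was held during the click
}

function shouldGoUp(ev: ClickInfo): boolean {
  return ev.button === 2 || (ev.button === 0 && ev.ctrlKey);
}
```

The reported behavior corresponds to only the first clause being handled, so Ctrl+click on a MacBook never reaches the "go up" branch.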
12,238 | 3,263,105,081 | IssuesEvent | 2015-10-22 01:27:18 | radare/radare2 | https://api.github.com/repos/radare/radare2 | closed | Regression on dll import function names bug for ws2_32.dll[XX] | bug regression test-attached | [ ] 1 import_names: dll import function names bug for ws2_32.dll[XX]
Command: /usr/local/bin/radare2 -e cfg.plugins=false -e scr.color=0 -N -q -i /tmp/r2-regressions//import_names-rad.6e2ISl ../../bins/pe/Lab05-01.dll > /tmp/r2-regressions//import_names-out.G76uC8 2> /tmp/r2-regressions//import_names-err.i3Yx9m
File: ../../bins/pe/Lab05-01.dll
Script: ii~&WS2_32,ordinal=052
Diff: --- /tmp/r2-regressions//import_names-exp.DU9RZn 2015-03-31 02:35:56.634967034 +0000
+++ /tmp/r2-regressions//import_names-out.G76uC8 2015-03-31 02:35:56.669965337 +0000
@@ -1 +1 @@
-ordinal=052 plt=0x000163cc bind=NONE type=FUNC name=WS2_32.dll_gethostbyname
+ordinal=052 plt=0x000163cc bind=NONE type=FUNC name=qqWS2_32.dll_Ordinal_52 | 1.0 | Regression on dll import function names bug for ws2_32.dll[XX] - [ ] 1 import_names: dll import function names bug for ws2_32.dll[XX]
Command: /usr/local/bin/radare2 -e cfg.plugins=false -e scr.color=0 -N -q -i /tmp/r2-regressions//import_names-rad.6e2ISl ../../bins/pe/Lab05-01.dll > /tmp/r2-regressions//import_names-out.G76uC8 2> /tmp/r2-regressions//import_names-err.i3Yx9m
File: ../../bins/pe/Lab05-01.dll
Script: ii~&WS2_32,ordinal=052
Diff: --- /tmp/r2-regressions//import_names-exp.DU9RZn 2015-03-31 02:35:56.634967034 +0000
+++ /tmp/r2-regressions//import_names-out.G76uC8 2015-03-31 02:35:56.669965337 +0000
@@ -1 +1 @@
-ordinal=052 plt=0x000163cc bind=NONE type=FUNC name=WS2_32.dll_gethostbyname
+ordinal=052 plt=0x000163cc bind=NONE type=FUNC name=qqWS2_32.dll_Ordinal_52 | non_defect | regression on dll import function names bug for dll import names dll import function names bug for dll command usr local bin e cfg plugins false e scr color n q i tmp regressions import names rad bins pe dll tmp regressions import names out tmp regressions import names err file bins pe dll script ii ordinal diff tmp regressions import names exp tmp regressions import names out ordinal plt bind none type func name dll gethostbyname ordinal plt bind none type func name dll ordinal | 0 |
58,327 | 16,486,808,045 | IssuesEvent | 2021-05-24 19:17:13 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | [KEYBOARD]: Navigation through the chat log must be easy and intuitive | 508-defect-2 508-issue-keyboard 508/Accessibility Virtual-Agent | # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. -->
**VFS Point of Contact:** _Trevor_
## User Story or Problem Statement
<!-- Example: As a user with cognitive considerations, I expect to see a label and input pairing consistently styled as throughout the rest of the site, with the label just above the text/email/search input or to the right of a radio/checkbox input, so that I am clearly able to understand what entry is expected. -->
* As a keyboard user, I want to be be able to tab to the chat log container, and use arrow keys to scroll, instead of jump from one message to the next. This feels like custom behavior that could interfere with screen readers.
* As a screen reader user, I do not want custom navigation to interfere with default navigation patterns
## Details
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
The screen reader instructions say to use "arrow keys" to navigate through messages, and sighted users are not offered any instructions to use the arrow keys to navigate. It would be better to remove the custom instructions to screen readers, and have the chat log container include a `tabindex="0"` so it will receive focus, and an `overflow-y: auto` declaration so users can scroll with up and down arrow keys. This is a default browser behavior.
## Acceptance Criteria
- [ ] Consistent navigation experience that relies as much on defaults (arrow keys, default screen reader patterns) as possible.
- [ ] Custom navigation patterns must include instructions and not interfere with assistive technology
## Environment
* https://staging.va.gov/virtual-agent/
## WCAG or Vendor Guidance (optional)
* https://www.w3.org/TR/UNDERSTANDING-WCAG20/keyboard-operation-keyboard-operable.html | 1.0 | [KEYBOARD]: Navigation through the chat log must be easy and intuitive - # [508-defect-2](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-2)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
## Point of Contact
<!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. -->
**VFS Point of Contact:** _Trevor_
## User Story or Problem Statement
<!-- Example: As a user with cognitive considerations, I expect to see a label and input pairing consistently styled as throughout the rest of the site, with the label just above the text/email/search input or to the right of a radio/checkbox input, so that I am clearly able to understand what entry is expected. -->
* As a keyboard user, I want to be be able to tab to the chat log container, and use arrow keys to scroll, instead of jump from one message to the next. This feels like custom behavior that could interfere with screen readers.
* As a screen reader user, I do not want custom navigation to interfere with default navigation patterns
## Details
<!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. -->
The screen reader instructions say to use "arrow keys" to navigate through messages, and sighted users are not offered any instructions to use the arrow keys to navigate. It would be better to remove the custom instructions to screen readers, and have the chat log container include a `tabindex="0"` so it will receive focus, and an `overflow-y: auto` declaration so users can scroll with up and down arrow keys. This is a default browser behavior.
## Acceptance Criteria
- [ ] Consistent navigation experience that relies as much on defaults (arrow keys, default screen reader patterns) as possible.
- [ ] Custom navigation patterns must include instructions and not interfere with assistive technology
## Environment
* https://staging.va.gov/virtual-agent/
## WCAG or Vendor Guidance (optional)
* https://www.w3.org/TR/UNDERSTANDING-WCAG20/keyboard-operation-keyboard-operable.html | defect | navigation through the chat log must be easy and intuitive feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements definition of done review and acknowledge feedback fix and or document decisions made accessibility specialist will close ticket after reviewing documented decisions validating fix point of contact vfs point of contact trevor user story or problem statement as a keyboard user i want to be be able to tab to the chat log container and use arrow keys to scroll instead of jump from one message to the next this feels like custom behavior that could interfere with screen readers as a screen reader user i do not want custom navigation to interfere with default navigation patterns details the screen reader instructions say to use arrow keys to navigate through messages and sighted users are not offered any instructions to use the arrow keys to navigate it would be better to remove the custom instructions to screen readers and have the chat log container include a tabindex so it will receive focus and an overflow y auto declaration so users can scroll with up and down arrow keys this is a default browser behavior acceptance criteria consistent navigation experience that relies as much on defaults arrow keys default screen reader patterns as possible custom navigation patterns must include instructions and not interfere with assistive technology environment wcag or vendor guidance optional | 1 |
21,053 | 3,454,158,542 | IssuesEvent | 2015-12-17 14:50:59 | bkper/bkper-issues | https://api.github.com/repos/bkper/bkper-issues | closed | Cannot open BKper from Google Sheet | auto-migrated Priority-Medium Type-Defect | ```
I cannot seem to open the Bkper side panel from Google Sheets anymore. Each
time I open from Google Sheets Add-on for Bkper under "open", the Bkper side
panel fires up but it is blank after that and not showing anything any more.
Before I could export transactions. Now, I cannot see anything at all.
Please see enclosed file.
Thanks.
```
Original issue reported on code.google.com by `tre...@glocalex.com` on 11 Jul 2015 at 1:02 | 1.0 | Cannot open BKper from Google Sheet - ```
I cannot seem to open the Bkper side panel from Google Sheets anymore. Each
time I open from Google Sheets Add-on for Bkper under "open", the Bkper side
panel fires up but it is blank after that and not showing anything any more.
Before I could export transactions. Now, I cannot see anything at all.
Please see enclosed file.
Thanks.
```
Original issue reported on code.google.com by `tre...@glocalex.com` on 11 Jul 2015 at 1:02 | defect | cannot open bkper from google sheet i cannot seem to open the bkper side panel from google sheets anymore each time i open from google sheets add on for bkper under open the bkper side panel fires up but it is blank after that and not showing anything any more before i could export transactions now i cannot see anything at all please see enclosed file thanks original issue reported on code google com by tre glocalex com on jul at | 1 |
53,111 | 13,260,910,652 | IssuesEvent | 2020-08-20 18:58:28 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | blas/lapack need check_language() and enable_language() (Trac #724) | Migrated from Trac cmake defect | ... especially for SYSTEM_TOOLS support
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/724">https://code.icecube.wisc.edu/projects/icecube/ticket/724</a>, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:22",
"_ts": "1550067082284240",
"description": "... especially for SYSTEM_TOOLS support",
"reporter": "nega",
"cc": "",
"resolution": "worksforme",
"time": "2014-04-14T19:30:53",
"component": "cmake",
"summary": "blas/lapack need check_language() and enable_language()",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| 1.0 | blas/lapack need check_language() and enable_language() (Trac #724) - ... especially for SYSTEM_TOOLS support
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/724">https://code.icecube.wisc.edu/projects/icecube/ticket/724</a>, reported by nega and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:22",
"_ts": "1550067082284240",
"description": "... especially for SYSTEM_TOOLS support",
"reporter": "nega",
"cc": "",
"resolution": "worksforme",
"time": "2014-04-14T19:30:53",
"component": "cmake",
"summary": "blas/lapack need check_language() and enable_language()",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "nega",
"type": "defect"
}
```
</p>
</details>
| defect | blas lapack need check language and enable language trac especially for system tools support migrated from json status closed changetime ts description especially for system tools support reporter nega cc resolution worksforme time component cmake summary blas lapack need check language and enable language priority normal keywords milestone owner nega type defect | 1 |
126,677 | 17,970,598,490 | IssuesEvent | 2021-09-14 01:07:47 | edgexfoundry/edgex-go | https://api.github.com/repos/edgexfoundry/edgex-go | reopened | Feature request: add a "make lint" target | enhancement security_audit | # 🚀 Feature Request
### Relevant Package
Global
### Description
Suggest adding a "make lint" target to the top-level Makefile to point out code quality and readability issues.
This will help enforce common coding practices in the project.
```
golangci-lint run
golangci-lint run -E bodyclose
golangci-lint run -E cyclop
golangci-lint run -E dupl
golangci-lint run -E gochecknoglobals
golangci-lint run -E gocognit
golangci-lint run -E misspell
golangci-lint run -E nilerr
golangci-lint run -E unparam
golangci-lint run -E unconvert
```
Also trial enable gosec and disposition results.
Enable the following linters:
- [ ] deadcode
- [ ] errcheck
- [ ] gosimple
- [ ] ineffassign
- [ ] staticcheck
- [ ] unused
- [ ] varcheck
- [ ] gosec | True | Feature request: add a "make lint" target - # 🚀 Feature Request
### Relevant Package
Global
### Description
Suggest adding a "make lint" target to the top-level Makefile to point out code quality and readability issues.
This will help enforce common coding practices in the project.
```
golangci-lint run
golangci-lint run -E bodyclose
golangci-lint run -E cyclop
golangci-lint run -E dupl
golangci-lint run -E gochecknoglobals
golangci-lint run -E gocognit
golangci-lint run -E misspell
golangci-lint run -E nilerr
golangci-lint run -E unparam
golangci-lint run -E unconvert
```
Also trial enable gosec and disposition results.
Enable the following linters:
- [ ] deadcode
- [ ] errcheck
- [ ] gosimple
- [ ] ineffassign
- [ ] staticcheck
- [ ] unused
- [ ] varcheck
- [ ] gosec | non_defect | feature request add a make lint target 🚀 feature request relevant package global description suggest adding a make lint target to the top level makefile to point out code quality and readability issues this will help enforce common coding practices in the project golangci lint run golangci lint run e bodyclose golangci lint run e cyclop golangci lint run e dupl golangci lint run e gochecknoglobals golangci lint run e gocognit golangci lint run e misspell golangci lint run e nilerr golangci lint run e unparam golangci lint run e unconvert also trial enable gosec and disposition results enable the following linters deadcode errcheck gosimple ineffassign staticcheck unused varcheck gosec | 0 |
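The golangci-lint invocations listed in that issue can be driven from one place. A hedged Python sketch of what a `make lint` target could call — the linter names come from the issue itself, while the `runner` hook and the failure-aggregation behavior are illustrative assumptions, and `golangci-lint` is assumed to be on `PATH`:

```python
import subprocess

# Optional linters the issue enables one at a time with -E.
EXTRA_LINTERS = ["bodyclose", "cyclop", "dupl", "gochecknoglobals",
                 "gocognit", "misspell", "nilerr", "unparam", "unconvert"]

def lint_commands(linters=EXTRA_LINTERS):
    """Build the invocation list: one plain run, then one per linter."""
    cmds = [["golangci-lint", "run"]]
    cmds += [["golangci-lint", "run", "-E", name] for name in linters]
    return cmds

def run_all(cmds, runner=subprocess.call):
    """Run every command; return the ones that exited non-zero.

    `runner` is injectable so the aggregation logic can be exercised
    without golangci-lint installed.
    """
    return [" ".join(cmd) for cmd in cmds if runner(cmd) != 0]
```

A `make lint` target could then invoke this script (or keep the shell loop the issue lists verbatim); the point of the sketch is that failing invocations get reported together instead of stopping at the first non-zero exit.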
61,463 | 7,469,806,619 | IssuesEvent | 2018-04-03 01:01:14 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | closed | Untranslatable on/off sliding checkbox | design i18n minor | With the current implementation, the on/off sliding checkboxes can't be translated, as the text is located in css pseudo-elements.
The question now is: should they be translatable? This could cause design issues depending on the language used.
The other question is: should we have text on those sliding checkboxes? Should we rely on button position and color only to indicate the state of the option? | 1.0 | Untranslatable on/off sliding checkbox - With the current implementation, the on/off sliding checkboxes can't be translated, as the text is located in css pseudo-elements.
The question now is: should they be translatable? This could cause design issues depending on the language used.
The other question is: should we have text on those sliding checkboxes? Should we rely on button position and color only to indicate the state of the option? | non_defect | untranslatable on off sliding checkbox with the current implementation the on off sliding checkboxes can t be translated as the text is located in css pseudo elements the question now is should they be translatable this could cause design issues depending on the language used the other question is should we have text on those sliding checkboxes should we rely on button position and color only to indicate the state of the option | 0 |
130,372 | 18,155,778,148 | IssuesEvent | 2021-09-27 01:13:41 | benlazarine/cas-overlay | https://api.github.com/repos/benlazarine/cas-overlay | opened | CVE-2020-36184 (High) detected in jackson-databind-2.9.5.jar | security vulnerability | ## CVE-2020-36184 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: cas-overlay/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- cas-server-support-oauth-webflow-5.3.7.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36184>CVE-2020-36184</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2998">https://github.com/FasterXML/jackson-databind/issues/2998</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-36184 (High) detected in jackson-databind-2.9.5.jar - ## CVE-2020-36184 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: cas-overlay/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.5/jackson-databind-2.9.5.jar</p>
<p>
Dependency Hierarchy:
- cas-server-support-oauth-webflow-5.3.7.jar (Root Library)
- :x: **jackson-databind-2.9.5.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.PerUserPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-36184>CVE-2020-36184</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2998">https://github.com/FasterXML/jackson-databind/issues/2998</a></p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file cas overlay pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy cas server support oauth webflow jar root library x jackson databind jar vulnerable library vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp datasources peruserpooldatasource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource | 0 |
217,057 | 16,834,186,570 | IssuesEvent | 2021-06-18 09:42:54 | ophirhan/cnsr-vlc-viewer-addon | https://api.github.com/repos/ophirhan/cnsr-vlc-viewer-addon | opened | System test behavior for bad input files and changing stuff mid playback | testing | We need to test behavior for bad input files and changing stuff mid playback such as:
- missing input files
- empty input files
- input files with bad timestamp formatting
- non-existent categories
- changing video file mid playback
- changing censor preferences mid playback
- etc.. | 1.0 | System test behavior for bad input files and changing stuff mid playback - We need to test behavior for bad input files and changing stuff mid playback such as:
- missing input files
- empty input files
- input files with bad timestamp formatting
- non-existent categories
- changing video file mid playback
- changing censor preferences mid playback
- etc.. | non_defect | system test behavior for bad input files and changing stuff mid playback we need to test behavior for bad input files and changing stuff mid playback such as missing input files empty input files input files with bad timestamp formatting non existent categories changing video file mid playback changing censor preferences mid playback etc | 0 |
256,945 | 8,130,608,303 | IssuesEvent | 2018-08-17 19:06:57 | grpc/grpc | https://api.github.com/repos/grpc/grpc | closed | ConcurrentObjectUseError with grpc + gevent | disposition/requires reporter action kind/question lang/Python priority/P2 | ### What version of gRPC and what language are you using?
grpcio 1.12.1 python
### What operating system (Linux, Windows, …) and version?
Centos 7.5, kernel 4.7.11
### What runtime / compiler are you using (e.g. python version or version of gcc)
python 2.7.10
### What did you do?
I have a gunicorn+gevent+grpc setup. I have gcloud libraries imported but not actually using them yet. I have also checked that gevent is using 'libev' and not 'libuv' event loop.
I am patching grpc in a custom gunicorn worker:
gevent_grpc_worker.py:
```
import gunicorn.workers.ggevent
class GeventGrpcWorker(gunicorn.workers.ggevent.GeventWorker):
def patch(self):
super(GeventGrpcWorker, self).patch()
# Patch grpc library to support gevent.
import grpc._cython.cygrpc
grpc._cython.cygrpc.init_grpc_gevent()
self.log.info('GeventGrpcWorker: patched grpc.')
```
gunicorn.conf:
```
bind = "unix:/tmp/gunicorn.sock"
workers = 129
worker_class = "gevent_grpc_worker.GeventGrpcWorker"
logerror = "/mnt/log/gunicorn-error.log"
max_requests = 100000
backlog = 2048
user = "dropcam"
group = "dropcam"
pidfile = "/var/run/gunicorn.pid"
timeout = 90
worker_connections = 25
```
I am using:
gunicorn==19.8.1
grpcio==1.12.1
gevent==1.3.5
greenlet==0.4.14
google-cloud-core==0.28.1
google-cloud-pubsub==0.35.4
google-cloud-storage==1.10.0
google-cloud-bigquery==1.4.0
google-api-core==1.3.0
google-api-python-client==1.7.4
google-apitools==0.5.23
grpc-google-iam-v1==0.11.4
I am getting the following stack trace in my logs. There is no request_id meaning that it's not tied to a specific operation/web-request, so it's not causing any issues yet, and I am not aware of what's triggering it. Regardless, it doesn't seem like the right thing is happening, seems like some kind of incompatibility with the google auth library and gevent.
```
2018-08-01 08:24:08,146 ERROR [root] request_id=None AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x152a040fe610>" raised exception!
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/grpc/_plugin_wrapping.py", line 77, in __call__
callback_state, callback))
File "/usr/lib/python2.7/site-packages/google/auth/transport/grpc.py", line 77, in __call__
callback(self._get_authorization_headers(context), None)
File "/usr/lib/python2.7/site-packages/google/auth/transport/grpc.py", line 65, in _get_authorization_headers
headers)
File "/usr/lib/python2.7/site-packages/google/auth/credentials.py", line 122, in before_request
self.refresh(request)
File "/usr/lib/python2.7/site-packages/google/oauth2/service_account.py", line 322, in refresh
request, self._token_uri, assertion)
File "/usr/lib/python2.7/site-packages/google/oauth2/_client.py", line 145, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/usr/lib/python2.7/site-packages/google/oauth2/_client.py", line 106, in _token_endpoint_request
method='POST', url=token_uri, headers=headers, body=body)
File "/usr/lib/python2.7/site-packages/google_auth_httplib2.py", line 116, in __call__
url, method=method, body=body, headers=headers, **kwargs)
File "/usr/lib64/python2.7/site-packages/newrelic-2.50.0.39/newrelic/api/external_trace.py", line 103, in dynamic_wrapper
return wrapped(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1609, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1351, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1307, in _conn_request
response = conn.getresponse()
File "/usr/lib64/python2.7/site-packages/newrelic-2.50.0.39/newrelic/hooks/external_httplib.py", line 65, in httplib_getresponse_wrapper
return wrapped(*args, **kwargs)
File "/usr/lib64/python2.7/httplib.py", line 1132, in getresponse
response.begin()
File "/usr/lib64/python2.7/httplib.py", line 453, in begin
version, status, reason = self._read_status()
File "/usr/lib64/python2.7/httplib.py", line 409, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib64/python2.7/socket.py", line 480, in readline
data = self._sock.recv(self._rbufsize)
File "/usr/lib64/python2.7/site-packages/gevent/_sslgte279.py", line 457, in recv
return self.read(buflen)
File "/usr/lib64/python2.7/site-packages/gevent/_sslgte279.py", line 318, in read
self._wait(self._read_event, timeout_exc=_SSLErrorReadTimeout)
File "src/gevent/_hub_primitives.py", line 265, in gevent.__hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 266, in gevent.__hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 245, in gevent.__hub_primitives._primitive_wait
ConcurrentObjectUseError: This socket is already used by another greenlet: <bound method Waiter.switch of <gevent.__waiter.Waiter object at 0x152a03bfe9f0>>
```
### What did you expect to see?
No errors around grpc + gevent
### What did you see instead?
ConcurrentObjectUseError: This socket is already used by another greenlet: <bound method Waiter.switch of <gevent.__waiter.Waiter object at 0x152a03bfe9f0>>
### Anything else we should know about your project / environment?
| 1.0 | ConcurrentObjectUseError with grpc + gevent - ### What version of gRPC and what language are you using?
grpcio 1.12.1 python
### What operating system (Linux, Windows, …) and version?
Centos 7.5, kernel 4.7.11
### What runtime / compiler are you using (e.g. python version or version of gcc)
python 2.7.10
### What did you do?
I have a gunicorn+gevent+grpc setup. I have gcloud libraries imported but not actually using them yet. I have also checked that gevent is using 'libev' and not 'libuv' event loop.
I am patching grpc in a custom gunicorn worker:
gevent_grpc_worker.py:
```
import gunicorn.workers.ggevent
class GeventGrpcWorker(gunicorn.workers.ggevent.GeventWorker):
def patch(self):
super(GeventGrpcWorker, self).patch()
# Patch grpc library to support gevent.
import grpc._cython.cygrpc
grpc._cython.cygrpc.init_grpc_gevent()
self.log.info('GeventGrpcWorker: patched grpc.')
```
gunicorn.conf:
```
bind = "unix:/tmp/gunicorn.sock"
workers = 129
worker_class = "gevent_grpc_worker.GeventGrpcWorker"
logerror = "/mnt/log/gunicorn-error.log"
max_requests = 100000
backlog = 2048
user = "dropcam"
group = "dropcam"
pidfile = "/var/run/gunicorn.pid"
timeout = 90
worker_connections = 25
```
I am using:
gunicorn==19.8.1
grpcio==1.12.1
gevent==1.3.5
greenlet==0.4.14
google-cloud-core==0.28.1
google-cloud-pubsub==0.35.4
google-cloud-storage==1.10.0
google-cloud-bigquery==1.4.0
google-api-core==1.3.0
google-api-python-client==1.7.4
google-apitools==0.5.23
grpc-google-iam-v1==0.11.4
I am getting the following stack trace in my logs. There is no request_id meaning that it's not tied to a specific operation/web-request, so it's not causing any issues yet, and I am not aware of what's triggering it. Regardless, it doesn't seem like the right thing is happening, seems like some kind of incompatibility with the google auth library and gevent.
```
2018-08-01 08:24:08,146 ERROR [root] request_id=None AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x152a040fe610>" raised exception!
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/grpc/_plugin_wrapping.py", line 77, in __call__
callback_state, callback))
File "/usr/lib/python2.7/site-packages/google/auth/transport/grpc.py", line 77, in __call__
callback(self._get_authorization_headers(context), None)
File "/usr/lib/python2.7/site-packages/google/auth/transport/grpc.py", line 65, in _get_authorization_headers
headers)
File "/usr/lib/python2.7/site-packages/google/auth/credentials.py", line 122, in before_request
self.refresh(request)
File "/usr/lib/python2.7/site-packages/google/oauth2/service_account.py", line 322, in refresh
request, self._token_uri, assertion)
File "/usr/lib/python2.7/site-packages/google/oauth2/_client.py", line 145, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/usr/lib/python2.7/site-packages/google/oauth2/_client.py", line 106, in _token_endpoint_request
method='POST', url=token_uri, headers=headers, body=body)
File "/usr/lib/python2.7/site-packages/google_auth_httplib2.py", line 116, in __call__
url, method=method, body=body, headers=headers, **kwargs)
File "/usr/lib64/python2.7/site-packages/newrelic-2.50.0.39/newrelic/api/external_trace.py", line 103, in dynamic_wrapper
return wrapped(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1609, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1351, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/usr/lib/python2.7/site-packages/httplib2/__init__.py", line 1307, in _conn_request
response = conn.getresponse()
File "/usr/lib64/python2.7/site-packages/newrelic-2.50.0.39/newrelic/hooks/external_httplib.py", line 65, in httplib_getresponse_wrapper
return wrapped(*args, **kwargs)
File "/usr/lib64/python2.7/httplib.py", line 1132, in getresponse
response.begin()
File "/usr/lib64/python2.7/httplib.py", line 453, in begin
version, status, reason = self._read_status()
File "/usr/lib64/python2.7/httplib.py", line 409, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib64/python2.7/socket.py", line 480, in readline
data = self._sock.recv(self._rbufsize)
File "/usr/lib64/python2.7/site-packages/gevent/_sslgte279.py", line 457, in recv
return self.read(buflen)
File "/usr/lib64/python2.7/site-packages/gevent/_sslgte279.py", line 318, in read
self._wait(self._read_event, timeout_exc=_SSLErrorReadTimeout)
File "src/gevent/_hub_primitives.py", line 265, in gevent.__hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 266, in gevent.__hub_primitives.wait_on_socket
File "src/gevent/_hub_primitives.py", line 245, in gevent.__hub_primitives._primitive_wait
ConcurrentObjectUseError: This socket is already used by another greenlet: <bound method Waiter.switch of <gevent.__waiter.Waiter object at 0x152a03bfe9f0>>
```
### What did you expect to see?
No errors around grpc + gevent
### What did you see instead?
ConcurrentObjectUseError: This socket is already used by another greenlet: <bound method Waiter.switch of <gevent.__waiter.Waiter object at 0x152a03bfe9f0>>
### Anything else we should know about your project / environment?
| non_defect | concurrentobjectuseerror with grpc gevent what version of grpc and what language are you using grpcio python what operating system linux windows … and version centos kernel what runtime compiler are you using e g python version or version of gcc python what did you do i have a gunicorn gevent grpc setup i have gcloud libraries imported but not actually using them yet i have also checked that gevent is using libev and not libuv event loop i am patching grpc in a custom gunicorn worker gevent grpc worker py import gunicorn workers ggevent class geventgrpcworker gunicorn workers ggevent geventworker def patch self super geventgrpcworker self patch patch grpc library to support gevent import grpc cython cygrpc grpc cython cygrpc init grpc gevent self log info geventgrpcworker patched grpc gunicorn conf bind unix tmp gunicorn sock workers worker class gevent grpc worker geventgrpcworker logerror mnt log gunicorn error log max requests backlog user dropcam group dropcam pidfile var run gunicorn pid timeout worker connections i am using gunicorn grpcio gevent greenlet google cloud core google cloud pubsub google cloud storage google cloud bigquery google api core google api python client google apitools grpc google iam i am getting the following stack trace in my logs there is no request id meaning that it s not tied to a specific operation web request so it s not causing any issues yet and i am not aware of what s triggering it regardless it doesn t seem like the right thing is happening seems like some kind of incompatibility with the google auth library and gevent error request id none authmetadataplugincallback raised exception traceback most recent call last file usr site packages grpc plugin wrapping py line in call callback state callback file usr lib site packages google auth transport grpc py line in call callback self get authorization headers context none file usr lib site packages google auth transport grpc py line in get authorization headers 
headers file usr lib site packages google auth credentials py line in before request self refresh request file usr lib site packages google service account py line in refresh request self token uri assertion file usr lib site packages google client py line in jwt grant response data token endpoint request request token uri body file usr lib site packages google client py line in token endpoint request method post url token uri headers headers body body file usr lib site packages google auth py line in call url method method body body headers headers kwargs file usr site packages newrelic newrelic api external trace py line in dynamic wrapper return wrapped args kwargs file usr lib site packages init py line in request response content self request conn authority uri request uri method body headers redirections cachekey file usr lib site packages init py line in request response content self conn request conn request uri method body headers file usr lib site packages init py line in conn request response conn getresponse file usr site packages newrelic newrelic hooks external httplib py line in httplib getresponse wrapper return wrapped args kwargs file usr httplib py line in getresponse response begin file usr httplib py line in begin version status reason self read status file usr httplib py line in read status line self fp readline maxline file usr socket py line in readline data self sock recv self rbufsize file usr site packages gevent py line in recv return self read buflen file usr site packages gevent py line in read self wait self read event timeout exc sslerrorreadtimeout file src gevent hub primitives py line in gevent hub primitives wait on socket file src gevent hub primitives py line in gevent hub primitives wait on socket file src gevent hub primitives py line in gevent hub primitives primitive wait concurrentobjectuseerror this socket is already used by another greenlet what did you expect to see no errors around grpc gevent what did you see instead 
concurrentobjectuseerror this socket is already used by another greenlet anything else we should know about your project environment | 0 |
55,255 | 14,344,906,419 | IssuesEvent | 2020-11-28 16:35:06 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | TreeTable column alignment problem | defect | I have used primefaces 3.2 in my project. Lately I found a bug in treetable. Data column in treetable is not in line with its column header. Please can you fix the bug. Since I am not a pro member, please tell me if there is a cost to fix this problem.
Thanks...
# Ahmad Hazairin

| 1.0 | TreeTable column alignment problem - I have used primefaces 3.2 in my project. Lately I found a bug in treetable. Data column in treetable is not in line with its column header. Please can you fix the bug. Since I am not a pro member, please tell me if there is a cost to fix this problem.
Thanks...
# Ahmad Hazairin

| defect | treetable column alignment problem i have used primefaces in my project lately i found a bug in treetable data column in treetable is not in line with its column header please can you fix the bug since i am not a pro member please tell me if there is a cost to fix this problem thanks ahmad hazairin | 1 |
38,709 | 8,952,501,953 | IssuesEvent | 2019-01-25 16:42:02 | svigerske/ipopt-donotuse | https://api.github.com/repos/svigerske/ipopt-donotuse | closed | C Interface: call to OptimizeTNLP does not return status if exception | Ipopt defect | Issue created by migration from Trac.
Original creator: hschilling
Original creation time: 2010-12-29 19:27:19
Assignee: ipopt-team
Version: 3.9
In the C Interface code (IpStdCInterface.cpp), if there is an exception thrown by OptimizeTNLP, the variable "status" is never set so the C caller gets a garbage value for status.
It would also be nice to pass some textual information about the exception back to the C calling function. | 1.0 | C Interface: call to OptimizeTNLP does not return status if exception - Issue created by migration from Trac.
Original creator: hschilling
Original creation time: 2010-12-29 19:27:19
Assignee: ipopt-team
Version: 3.9
In the C Interface code (IpStdCInterface.cpp), if there is an exception thrown by OptimizeTNLP, the variable "status" is never set so the C caller gets a garbage value for status.
It would also be nice to pass some textual information about the exception back to the C calling function. | defect | c interface call to optimizetnlp does not return status if exception issue created by migration from trac original creator hschilling original creation time assignee ipopt team version in the c interface code ipstdcinterface cpp if there is an exception thrown by optimizetnlp the variable status is never set so the c caller gets a garbage value for status it would also be nice to pass some textual information about the exception back to the c calling function | 1 |
223,166 | 17,570,642,592 | IssuesEvent | 2021-08-14 16:11:04 | cseelhoff/RimThreaded | https://api.github.com/repos/cseelhoff/RimThreaded | closed | Throught handler bug because of too many objects affecting at once (index array out of bounds) | Bug Reproducible Accepted For Testing 2.2.x | IMPORTANT: Please first search existing bugs to ensure you are not creating a duplicate bug report!
**Describe the bug**
Game starts to log exceptions if there are too many objects affecting the thought array of a pawn (can lead to crash and game destabilization)
Affecting: Call of Cthulhu - Cosmic Horrors and possibly Vanilla Social Interactions
**Steps to reproduce the behavior (VERY IMPORTANT)**
1. Start new world (or load "Testing" save)
2. Trap pawns on a box with god mode
3. Spawn ton of Deep Ones and Great Deep Ones and let the game run
4. See error (or load the save with it already triggered)
**Error Log**
[Player.log](https://github.com/cseelhoff/RimThreaded/files/6684668/Player.log)
https://gist.github.com/bb354b9760a9626153551e419db2c0eb
**Save file**
[Saves.zip](https://github.com/cseelhoff/RimThreaded/files/6684675/Saves.zip)
**Screenshots**
<img width="193" alt="RimWorldWin64_mTM7eOOMmi" src="https://user-images.githubusercontent.com/43865463/122721507-5de09b80-d236-11eb-9382-50b39065b46c.png">
<img width="451" alt="RimWorldWin64_gyD9JppvrC" src="https://user-images.githubusercontent.com/43865463/122721557-6fc23e80-d236-11eb-854d-8363b5418a7a.png">
**Mod list (Preferably a RimPy compatible list.)**
* Harmony
* Core
* Royalty (optional)
* HugsLib
* JecsTools (Unofficial)
* Call of Cthulhu - Cosmic Horrors (Continued)
* RimThreaded
| 1.0 | Throught handler bug because of too many objects affecting at once (index array out of bounds) - IMPORTANT: Please first search existing bugs to ensure you are not creating a duplicate bug report!
**Describe the bug**
Game starts to log exceptions if there are too many objects affecting the thought array of a pawn (can lead to crash and game destabilization)
Affecting: Call of Cthulhu - Cosmic Horrors and possibly Vanilla Social Interactions
**Steps to reproduce the behavior (VERY IMPORTANT)**
1. Start new world (or load "Testing" save)
2. Trap pawns on a box with god mode
3. Spawn ton of Deep Ones and Great Deep Ones and let the game run
4. See error (or load the save with it already triggered)
**Error Log**
[Player.log](https://github.com/cseelhoff/RimThreaded/files/6684668/Player.log)
https://gist.github.com/bb354b9760a9626153551e419db2c0eb
**Save file**
[Saves.zip](https://github.com/cseelhoff/RimThreaded/files/6684675/Saves.zip)
**Screenshots**
<img width="193" alt="RimWorldWin64_mTM7eOOMmi" src="https://user-images.githubusercontent.com/43865463/122721507-5de09b80-d236-11eb-9382-50b39065b46c.png">
<img width="451" alt="RimWorldWin64_gyD9JppvrC" src="https://user-images.githubusercontent.com/43865463/122721557-6fc23e80-d236-11eb-854d-8363b5418a7a.png">
**Mod list (Preferably a RimPy compatible list.)**
* Harmony
* Core
* Royalty (optional)
* HugsLib
* JecsTools (Unofficial)
* Call of Cthulhu - Cosmic Horrors (Continued)
* RimThreaded
| non_defect | throught handler bug because of too many objects affecting at once index array out of bounds important please first search existing bugs to ensure you are not creating a duplicate bug report describe the bug game starts to log exceptions if there is too many objects affecting the throught array of a pawn can lead to crash and game destabilization affecting call of cthulhu cosmic horrors and possibly vanilla social interactions steps to reproduce the behavior very important start new world or load testing save trap pawns on a box with god mode spawn ton of deep ones and great deep ones and let the game run see error or load the save with it already triggered error log save file screenshots img width alt src img width alt src mod list preferably a rimpy compatible list harmony core royalty optional hugslib jecstools unofficial call of cthulhu cosmic horrors continued rimthreaded | 0 |
132,268 | 10,739,417,454 | IssuesEvent | 2019-10-29 16:19:33 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Test: minimap scale | testplan-item | Refs: https://github.com/microsoft/vscode/issues/21773
- [x] any os @jrieken
- [x] any os @alexr00
1. In VS Code, turn on the minimap (`"editor.minimap.enabled": true`)
2. Verify that the minimap renders and lays out correctly, and at various scales (`editor.minimap.scale` with values 1, 2, and 3).
3. Turn the minimap into block rendering mode `"editor.minimap.renderCharacters": false` and do the same | 1.0 | Test: minimap scale - Refs: https://github.com/microsoft/vscode/issues/21773
- [x] any os @jrieken
- [x] any os @alexr00
1. In VS Code, turn on the minimap (`"editor.minimap.enabled": true`)
2. Verify that the minimap renders and lays out correctly, and at various scales (`editor.minimap.scale` with values 1, 2, and 3).
3. Turn the minimap into block rendering mode `"editor.minimap.renderCharacters": false` and do the same | non_defect | test minimap scale refs any os jrieken any os in vs code turn on the minimap editor minimap enabled true verify that the minimap renders and lays out correctly and at various scales editor minimap scale with values and turn the minimap into block rendering mode editor minimap rendercharacters false and do the same | 0 |
24,835 | 17,838,217,175 | IssuesEvent | 2021-09-03 06:20:14 | Budibase/budibase | https://api.github.com/repos/Budibase/budibase | closed | Budibase x Kubernetes - Infra | infrastructure | Bring up the budibase stack in an EKS kubernetes cluster in AWS eu-west-1.
**Infra**
- Proxy
- Worker
- Apps
- CouchDB
- S3 instead of MinIO
- Elasticache instead of redis
**CouchDB**
We will need an EFS volume for all the couchDB pods in the cluster to use and share. Each node will need to be registered as part of this cluster.
**Conditions of Satisfaction**
- We have fully configurable yaml/helm configuration for bringing up budibase in K8S
- The entire stack is up and running in us-east-1 and we can use it the same way we use our staging env
 | 1.0 | Budibase x Kubernetes - Infra - Bring up the budibase stack in an EKS kubernetes cluster in AWS eu-west-1.
**Infra**
- Proxy
- Worker
- Apps
- CouchDB
- S3 instead of MinIO
- Elasticache instead of redis
**CouchDB**
We will need an EFS volume for all the couchDB pods in the cluster to use and share. Each node will need to be registered as part of this cluster.
**Conditions of Satisfaction**
- We have fully configurable yaml/helm configuration for bringing up budibase in K8S
- The entire stack is up and running in us-east-1 and we can use it the same way we use our staging env
| non_defect | budibase x kubernetes infra bring up the budibase stack in a eks kubernetes cluster in aws eu west infra proxy worker apps couchdb instead of minio elasticache instead of redis couchdb we will need an efs volume for all the couchdb pods in the cluster to use and share each node will need to be registered as part of this cluster conditions of satisfaction we have fully configurable yaml helm configuration for bringing up budibase in the entire stack is up and running in us east and we can use it the same way we use our staging env | 0 |
122,430 | 4,835,352,840 | IssuesEvent | 2016-11-08 16:35:17 | bounswe/bounswe2016group4 | https://api.github.com/repos/bounswe/bounswe2016group4 | closed | <django.db.models.base.ModelState object at 0x03F092F0> is not JSON serializable | backend bug priority-high | when i try to call /get_a_food/1 which is
```
def get_food(req, food_id):
# no error handling
food_dict = db_retrieve_food(food_id).__dict__
print(food_dict)
food_json = json.dumps(food_dict)
print(food_json)
return render(req, 'kwue/food.html', food_json)
```
json.dumps gives error | 1.0 | <django.db.models.base.ModelState object at 0x03F092F0> is not JSON serializable - when i try to call /get_a_food/1 which is
```
def get_food(req, food_id):
# no error handling
food_dict = db_retrieve_food(food_id).__dict__
print(food_dict)
food_json = json.dumps(food_dict)
print(food_json)
return render(req, 'kwue/food.html', food_json)
```
json.dumps gives error | non_defect | is not json serializable when i try to call get a food which is def get food req food id no error handling food dict db retrieve food food id dict print food dict food json json dumps food dict print food json return render req kwue food html food json json dumps gives error | 0 |
36,794 | 12,425,543,772 | IssuesEvent | 2020-05-24 16:49:09 | gavarasana/IdentityServer | https://api.github.com/repos/gavarasana/IdentityServer | opened | CVE-2018-20677 (Medium) detected in bootstrap-3.3.5.min.js, bootstrap-3.3.5.js | security vulnerability | ## CVE-2018-20677 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.3.5.min.js</b>, <b>bootstrap-3.3.5.js</b></p></summary>
<p>
<details><summary><b>bootstrap-3.3.5.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: /IdentityServer/Ravi.Learn.IdentityServer/wwwroot/lib/bootstrap/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.min.js** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-3.3.5.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.js</a></p>
<p>Path to vulnerable library: /IdentityServer/Ravi.Learn.IdentityServer/wwwroot/lib/bootstrap/js/bootstrap.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gavarasana/IdentityServer/commit/e79ea81b3d1bdb66f4d8aa0a2f43bb55a4816674">e79ea81b3d1bdb66f4d8aa0a2f43bb55a4816674</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.0, XSS is possible in the affix configuration target property.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20677>CVE-2018-20677</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: Bootstrap - v3.4.0;NorDroN.AngularTemplate - 0.1.6;Dynamic.NET.Express.ProjectTemplates - 0.8.0;dotnetng.template - 1.0.0.4;ZNxtApp.Core.Module.Theme - 1.0.9-Beta;JMeter - 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-20677 (Medium) detected in bootstrap-3.3.5.min.js, bootstrap-3.3.5.js - ## CVE-2018-20677 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>bootstrap-3.3.5.min.js</b>, <b>bootstrap-3.3.5.js</b></p></summary>
<p>
<details><summary><b>bootstrap-3.3.5.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: /IdentityServer/Ravi.Learn.IdentityServer/wwwroot/lib/bootstrap/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.min.js** (Vulnerable Library)
</details>
<details><summary><b>bootstrap-3.3.5.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.5/js/bootstrap.js</a></p>
<p>Path to vulnerable library: /IdentityServer/Ravi.Learn.IdentityServer/wwwroot/lib/bootstrap/js/bootstrap.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.5.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/gavarasana/IdentityServer/commit/e79ea81b3d1bdb66f4d8aa0a2f43bb55a4816674">e79ea81b3d1bdb66f4d8aa0a2f43bb55a4816674</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.0, XSS is possible in the affix configuration target property.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20677>CVE-2018-20677</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20677</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: Bootstrap - v3.4.0;NorDroN.AngularTemplate - 0.1.6;Dynamic.NET.Express.ProjectTemplates - 0.8.0;dotnetng.template - 1.0.0.4;ZNxtApp.Core.Module.Theme - 1.0.9-Beta;JMeter - 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in bootstrap min js bootstrap js cve medium severity vulnerability vulnerable libraries bootstrap min js bootstrap js bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library identityserver ravi learn identityserver wwwroot lib bootstrap js bootstrap min js dependency hierarchy x bootstrap min js vulnerable library bootstrap js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to vulnerable library identityserver ravi learn identityserver wwwroot lib bootstrap js bootstrap js dependency hierarchy x bootstrap js vulnerable library found in head commit a href vulnerability details in bootstrap before xss is possible in the affix configuration target property publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap nordron angulartemplate dynamic net express projecttemplates dotnetng template znxtapp core module theme beta jmeter step up your open source security game with whitesource | 0 |
3,283 | 2,610,059,863 | IssuesEvent | 2015-02-26 18:17:39 | chrsmith/jsjsj122 | https://api.github.com/repos/chrsmith/jsjsj122 | opened | 临海不孕不育检查项目及费用 | auto-migrated Priority-Medium Type-Defect | ```
临海不孕不育检查项目及费用【台州五洲生殖医院】24小时健
康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:
台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104�
��108、118、198及椒江一金清公交车直达枫南小区,乘坐107、105
、109、112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:03 | 1.0 | 临海不孕不育检查项目及费用 - ```
临海不孕不育检查项目及费用【台州五洲生殖医院】24小时健
康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:
台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104�
��108、118、198及椒江一金清公交车直达枫南小区,乘坐107、105
、109、112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 7:03 | defect | 临海不孕不育检查项目及费用 临海不孕不育检查项目及费用【台州五洲生殖医院】 康咨询热线 微信号tzwzszyy 医院地址 (枫南大转盘旁)乘车线路 � �� 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at | 1 |
41,085 | 10,299,289,754 | IssuesEvent | 2019-08-28 12:13:01 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | CREATE TABLE fails with ParseWithMetaLookups.THROW_ON_FAILURE | C: Parser E: All Editions P: Medium R: Fixed T: Defect | When parsing a `CREATE TABLE X ...` statement while using the `ParseWithMetaLookups.THROW_ON_FAILURE` setting the parser will fail with an exception like the following:
```java
org.jooq.impl.ParserException: Unknown table identifier: [1:16] create table x [*]as select a = 1
at org.jooq.impl.ParserContext.exception(ParserImpl.java:10649)
at org.jooq.impl.ParserContext.lookupTable(ParserImpl.java:10801)
at org.jooq.impl.ParserImpl.parseTableName(ParserImpl.java:8751)
at org.jooq.impl.ParserImpl.parseCreateTable(ParserImpl.java:3101)
at org.jooq.impl.ParserImpl.parseCreate(ParserImpl.java:2137)
at org.jooq.impl.ParserImpl.parseQuery(ParserImpl.java:842)
at org.jooq.impl.ParserImpl.parse(ParserImpl.java:542)
at org.jooq.impl.ParserImpl.parse(ParserImpl.java:529)
```
This seems wrong as the table `X` is being created in this case and shouldn't have to exist already. | 1.0 | CREATE TABLE fails with ParseWithMetaLookups.THROW_ON_FAILURE - When parsing a `CREATE TABLE X ...` statement while using the `ParseWithMetaLookups.THROW_ON_FAILURE` setting the parser will fail with an exception like the following:
```java
org.jooq.impl.ParserException: Unknown table identifier: [1:16] create table x [*]as select a = 1
at org.jooq.impl.ParserContext.exception(ParserImpl.java:10649)
at org.jooq.impl.ParserContext.lookupTable(ParserImpl.java:10801)
at org.jooq.impl.ParserImpl.parseTableName(ParserImpl.java:8751)
at org.jooq.impl.ParserImpl.parseCreateTable(ParserImpl.java:3101)
at org.jooq.impl.ParserImpl.parseCreate(ParserImpl.java:2137)
at org.jooq.impl.ParserImpl.parseQuery(ParserImpl.java:842)
at org.jooq.impl.ParserImpl.parse(ParserImpl.java:542)
at org.jooq.impl.ParserImpl.parse(ParserImpl.java:529)
```
This seems wrong as the table `X` is being created in this case and shouldn't have to exist already. | defect | create table fails with parsewithmetalookups throw on failure when parsing a create table x statement while using the parsewithmetalookups throw on failure setting the parser will fail with an exception like the following java org jooq impl parserexception unknown table identifier create table x as select a at org jooq impl parsercontext exception parserimpl java at org jooq impl parsercontext lookuptable parserimpl java at org jooq impl parserimpl parsetablename parserimpl java at org jooq impl parserimpl parsecreatetable parserimpl java at org jooq impl parserimpl parsecreate parserimpl java at org jooq impl parserimpl parsequery parserimpl java at org jooq impl parserimpl parse parserimpl java at org jooq impl parserimpl parse parserimpl java this seems wrong as the table x is being created in this case and shouldn t have to exist already | 1 |
116,543 | 4,703,390,703 | IssuesEvent | 2016-10-13 07:48:52 | CS2103AUG2016-T16-C2/main | https://api.github.com/repos/CS2103AUG2016-T16-C2/main | closed | As a user I can add a task | priority.high type.epic | A user can add a task with or without its deadline and priority, meaning either a deadline task or floating task | 1.0 | As a user I can add a task - A user can add a task with or without its deadline and priority, meaning either a deadline task or floating task | non_defect | as a user i can add a task a user can add a task with or without its deadline and priority meaning either a deadline task or floating task | 0 |
37,454 | 8,404,421,024 | IssuesEvent | 2018-10-11 12:48:07 | sfepy/sfepy | https://api.github.com/repos/sfepy/sfepy | closed | appveyor.yml problem | defect tests | Appveyor now fails with:
```
Error parsing appveyor.yml: "version" cannot contain request path invalid characters: < > * % & : \ (Line: 14, Column: 10)
``` | 1.0 | appveyor.yml problem - Appveyor now fails with:
```
Error parsing appveyor.yml: "version" cannot contain request path invalid characters: < > * % & : \ (Line: 14, Column: 10)
``` | defect | appveyor yml problem appveyor now fails with error parsing appveyor yml version cannot contain request path invalid characters line column | 1 |
78,996 | 27,863,377,720 | IssuesEvent | 2023-03-21 08:30:41 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Flaky percy test Room Directory - filtered no results | T-Defect Z-Flaky-Test | ### Steps to reproduce
E.g. https://percy.io/dfde73bd/matrix-react-sdk/builds/26058610/changed/1452588982?browser=edge&browser_ids=33%2C34%2C35%2C36&subcategories=unreviewed%2Cchanges_requested&viewLayout=overlay&viewMode=original&width=1024&widths=516%2C1024%2C1920
This test failed because a spinner was visible.
### Outcome
.
### Operating system
.
### Browser information
.
### URL for webapp
.
### Application version
.
### Homeserver
.
### Will you send logs?
No | 1.0 | Flaky percy test Room Directory - filtered no results - ### Steps to reproduce
E.g. https://percy.io/dfde73bd/matrix-react-sdk/builds/26058610/changed/1452588982?browser=edge&browser_ids=33%2C34%2C35%2C36&subcategories=unreviewed%2Cchanges_requested&viewLayout=overlay&viewMode=original&width=1024&widths=516%2C1024%2C1920
This test failed because a spinner was visible.
### Outcome
.
### Operating system
.
### Browser information
.
### URL for webapp
.
### Application version
.
### Homeserver
.
### Will you send logs?
No | defect | flaky percy test room directory filtered no results steps to reproduce e g this test failed because a spinner was visible outcome operating system browser information url for webapp application version homeserver will you send logs no | 1 |
148,287 | 19,529,192,334 | IssuesEvent | 2021-12-30 13:41:37 | developerone12/WebGoat-WhiteSource-Bolt | https://api.github.com/repos/developerone12/WebGoat-WhiteSource-Bolt | opened | CVE-2016-3092 (High) detected in commons-fileupload-1.2.2.jar | security vulnerability | ## CVE-2016-3092 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.2.2.jar</b></p></summary>
<p>The FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /mons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-fileupload-1.2.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/developerone12/WebGoat-WhiteSource-Bolt/commit/c42e663814e4b88294ff90339ad577ca1afcf531">c42e663814e4b88294ff90339ad577ca1afcf531</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause a denial of service (CPU consumption) via a long boundary string.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3092>CVE-2016-3092</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092</a></p>
<p>Release Date: 2016-07-04</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M8,8.5.3,8.0.36,7.0.70,org.apache.tomcat:tomcat-coyote:9.0.0.M8,8.5.3,8.0.36,7.0.70,commons-fileupload:commons-fileupload:1.3.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-3092 (High) detected in commons-fileupload-1.2.2.jar - ## CVE-2016-3092 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.2.2.jar</b></p></summary>
<p>The FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /mons-fileupload/commons-fileupload/1.2.2/commons-fileupload-1.2.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-fileupload-1.2.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/developerone12/WebGoat-WhiteSource-Bolt/commit/c42e663814e4b88294ff90339ad577ca1afcf531">c42e663814e4b88294ff90339ad577ca1afcf531</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The MultipartStream class in Apache Commons Fileupload before 1.3.2, as used in Apache Tomcat 7.x before 7.0.70, 8.x before 8.0.36, 8.5.x before 8.5.3, and 9.x before 9.0.0.M7 and other products, allows remote attackers to cause a denial of service (CPU consumption) via a long boundary string.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-3092>CVE-2016-3092</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
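As a sanity check, the 7.5 above can be reproduced from the listed metrics. The sketch below implements the CVSS v3.0 base-score formula for the Scope: Unchanged case only, with weight tables taken from the CVSS v3.0 specification; the vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H is inferred from the metric list in this report, not quoted from the advisory itself.

```python
import math

# Metric weights from the CVSS v3.0 specification (Scope: Unchanged only;
# PR weights differ when Scope is Changed, which is not handled here).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},  # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                        # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},             # Privileges Required
    "UI": {"N": 0.85, "R": 0.62},                        # User Interaction
    "CIA": {"N": 0.00, "L": 0.22, "H": 0.56},            # C/I/A impact values
}

def roundup(x):
    # CVSS "roundup": smallest number to one decimal place that is >= x
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    w = WEIGHTS
    iss = 1 - (1 - w["CIA"][c]) * (1 - w["CIA"][i]) * (1 - w["CIA"][a])
    impact = 6.42 * iss  # Scope: Unchanged form
    exploitability = 8.22 * w["AV"][av] * w["AC"][ac] * w["PR"][pr] * w["UI"][ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# AV:N / AC:L / PR:N / UI:N / S:U / C:N / I:N / A:H -- the metrics listed above
print(base_score("N", "L", "N", "N", "N", "N", "H"))  # 7.5
```

The same function also reproduces the 6.1 (AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:H) and 6.2 (AV:L/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H) scores reported further down in this file.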
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3092</a></p>
<p>Release Date: 2016-07-04</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M8,8.5.3,8.0.36,7.0.70,org.apache.tomcat:tomcat-coyote:9.0.0.M8,8.5.3,8.0.36,7.0.70,commons-fileupload:commons-fileupload:1.3.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
248,489 | 7,931,775,465 | IssuesEvent | 2018-07-07 05:04:42 | wso2/testgrid | https://api.github.com/repos/wso2/testgrid | opened | Testgrid distribution should have Jenkins and tinkerer built-in | Priority/High Type/Improvement | **Description:**
ATM, we provide the testgrid core libraries and its runtime engine, jenkins, separately. This has led to some confusion where unnecessary internal information has been exposed to first-time users.
For example, users should not have to run generate-testplan and then iterate run-testplan. This should actually be part of the testgrid pipeline script, and the users should be executing that instead.
Secondly, there's a manual process involved to configure the tinkerer webapp. We need to analyze the effort required to automate the tinkerer configuration, and the ability to run it locally. Right now, the tinkerer agents cannot connect to tinkerer webapp when we run testgrid locally with aws deployments.
**Affected Product Version:**
m34
**OS, DB, other environment details and versions:**
local testgrid +
local tinkerer webapp +
aws product deployment
**Steps to reproduce:**
**Related Issues:**
<!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
53,954 | 13,262,553,298 | IssuesEvent | 2020-08-20 22:02:29 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | Segfault following replace() in I3MCTree (Trac #2385) | Migrated from Trac combo core defect | The following code leads to a segfault (py3v4.0.1 + combo/stable):
```text
from icecube import dataclasses as dc
tree = dc.I3MCTree()
p = dc.I3Particle()
tree.add_primary(p)
p2 = dc.I3Particle()
tree.replace(p.id, p2)
print(tree)
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2385">https://code.icecube.wisc.edu/projects/icecube/ticket/2385</a>, reported by chaack and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2020-06-24T12:31:42",
"_ts": "1593001902142004",
"description": "The follwing code leads to a segfault (py3v4.0.1 + combo/stable):\n{{{\nfrom icecube import dataclasses as dc \ntree = dc.I3MCTree() \np = dc.I3Particle() \ntree.add_primary(p) \np2 = dc.I3Particle() \ntree.replace(p.id, p2) \nprint(tree) \n}}}\n",
"reporter": "chaack",
"cc": "olivas",
"resolution": "fixed",
"time": "2019-12-13T17:10:17",
"component": "combo core",
"summary": "Segfault following replace() in I3MCTree",
"priority": "blocker",
"keywords": "",
"milestone": "Autumnal Equinox 2020",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
6,503 | 3,823,303,446 | IssuesEvent | 2016-03-30 07:26:59 | gpac/gpac | https://api.github.com/repos/gpac/gpac | closed | Regression tests on latest nightly builds (Android) | android build | Hi,
I have noticed that videos, animations and user interaction in a BIFS scene running on an Android device are very (very) slow compared to how it used to be a while ago. For example in a more complex scene the TouchSensor is not even triggered most of the time (e.g. clicking a button).
Here's a simple example from regression tests where some interpolators are used with Layer3D: _bifs-3D-positioning-layer3D-views.bt_.
I don't know how can I see the fps on the Android device but there are no more than 5 fps. Could you investigate this issue or at least give me some hints on how should I proceed to fix this?
Thank you!
240,507 | 20,034,732,291 | IssuesEvent | 2022-02-02 10:38:50 | Oldes/Rebol-issues | https://api.github.com/repos/Oldes/Rebol-issues | closed | protect does not protect 2nd arg of swap | Status.important Test.written Type.bug Protection CC.resolved | _Submitted by:_ _Sunanda_
Protect works to protect first arg of 'swap:
```rebol
swap a b ;; fails if a is protected
```
but not the 2nd:
```rebol
swap a b ;; succeeds even if b is protected
```
```rebol
fred: "abc"
doris: "zxy"
protect fred
swap fred doris ;; :-) triggers protection message
probe fred ;; :-) fred is unchanged
swap doris fred ;; :-( works
probe fred ;; :-( fred has been changed
```
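For comparison, the expected semantics can be sketched outside Rebol. The following Python toy model is an illustration only — the names and data layout are invented and this is not Rebol's implementation — but it shows a `swap` that checks protection on *both* arguments before mutating either, which is the behavior the ticket asks for:

```python
class ProtectError(Exception):
    """Stands in for Rebol's protection error."""

def swap(a, b):
    # Check BOTH series before mutating anything -- the reported bug is
    # that only the first argument's protection flag was honored.
    for series in (a, b):
        if series.get("protected"):
            raise ProtectError("series is protected")
    a["data"], b["data"] = b["data"], a["data"]

fred = {"data": "abc", "protected": True}
doris = {"data": "zxy", "protected": False}

try:
    swap(doris, fred)   # protected value in the SECOND position
except ProtectError:
    pass
print(fred["data"])     # abc -- fred stays unchanged
```

With this shape, `swap doris fred` fails the same way `swap fred doris` does, and `fred` keeps its original value.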
---
<sup>**Imported from:** **[CureCode](https://www.curecode.org/rebol3/ticket.rsp?id=695)** [ Version: alpha 42 Type: Bug Platform: All Category: n/a Reproduce: Always Fixed-in:alpha 46 ]</sup>
<sup>**Imported from**: https://github.com/rebol/rebol-issues/issues/695</sup>
Comments:
---
> **Rebolbot** commented on Apr 9, 2009:
_Submitted by:_ _Carl_
Good one Sunanda.
---
> **Rebolbot** added **Type.bug** and **Status.important** on Jan 12, 2016
---
96,551 | 3,969,582,211 | IssuesEvent | 2016-05-04 00:31:15 | washingtonstateuniversity/WSU-Web-Provisioner | https://api.github.com/repos/washingtonstateuniversity/WSU-Web-Provisioner | opened | epel repository error during provisioning | bug priority:low | ```
[INFO ] Executing command "repoquery --plugins --queryformat '%{NAME}_|-%{VERSION}_|-%{RELEASE}_|-%{ARCH}_|-%{REPOID}' --all --pkgnarrow=installed" in directory '/root'
[INFO ] Disabling repo 'epel'
/usr/lib/python2.6/site-packages/salt/modules/yumpkg.py:785: DeprecationWarning: "--disablerepo='epel'" is being deprecated in favor of "None"
[INFO ] Executing command ['yum', '-q', 'clean', 'expire-cache', "--disablerepo='epel'"] in directory '/root'
[ERROR ] Command ['yum', '-q', 'clean', 'expire-cache', "--disablerepo='epel'"] failed with return code: 1
[ERROR ] output:
Error getting repository data for 'epel', repository not found
[INFO ] Executing command ['yum', '-q', 'check-update', "--disablerepo='epel'"] in directory '/root'
[INFO ] Executing command "repoquery --plugins --queryformat '%{NAME}_|-%{VERSION}_|-%{RELEASE}_|-%{ARCH}_|-%{REPOID}' --disablerepo='epel' --pkgnarrow=available ca-certificates" in directory '/root'
```
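The "repository not found" error is consistent with a quoting problem: salt hands the argument list to an exec-style API, so the literal single quotes in `--disablerepo='epel'` are never stripped by a shell and yum ends up looking for a repo id that includes the quote characters. A minimal Python illustration of the mechanics (hypothetical, just to show the difference between the two invocation styles):

```python
import shlex

# Through a shell, the single quotes are stripped before yum sees the arg:
shell_argv = shlex.split("yum -q clean expire-cache --disablerepo='epel'")
print(shell_argv[-1])   # --disablerepo=epel

# With a pre-built argv list (exec-style, no shell), the quotes survive,
# so yum is asked to disable a repo whose id literally contains quotes:
raw_argv = ['yum', '-q', 'clean', 'expire-cache', "--disablerepo='epel'"]
print(raw_argv[-1])     # --disablerepo='epel'
```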
Not sure if this is important, but we should dig a bit.
40,708 | 10,134,378,598 | IssuesEvent | 2019-08-02 07:21:17 | CenturyLinkCloud/mdw | https://api.github.com/repos/CenturyLinkCloud/mdw | closed | Subprocesses Inspector tab for Microservice Orchestrator shows misleading start time | defect | When a delay is configured for the orchestrator subflow invocation, the delay is honored for async, sequential flow execution. However, in Hub's instance view the Inspector Subprocesses tab shows that all the flows started within milliseconds of each other. The subprocess instance also shows the same misleading start time. This may be the case for Multiple Subprocess Invoke as well.
4,554 | 11,348,404,413 | IssuesEvent | 2020-01-24 00:16:26 | TerriaJS/terriajs | https://api.github.com/repos/TerriaJS/terriajs | closed | Changing coord presentation fails in mobx | New Model Architecture T-Bug | Clicking on the coord bar in a mobx app doesn't do anything. Throws an error related to `toggleUseProjection` method.

6,588 | 5,536,756,548 | IssuesEvent | 2017-03-21 20:25:39 | golang/go | https://api.github.com/repos/golang/go | closed | bytes/strings: add optimized countByte | Performance | (migrated from #19402)
`bytes.Count` and `strings.Count` are commonly called with a single-byte separator, both independently and as part of a `Split` call. Currently they call `Index` and then `IndexByte` in a loop. `IndexByte` uses SSE tricks. I suspect that an assembly implementation of `countByte` using similar SSE tricks would run much more quickly than calling `IndexByte` in a loop. We could dispatch to `countByte` in `Count` when sep has length one.
@TocarIP @randall77
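The dispatch itself is simple; the idea can be sketched in Python (purely illustrative, not the proposed Go code): a `count` that special-cases single-byte separators, with the repeated-index loop as the general path.

```python
def count_index_loop(data: bytes, sep: bytes) -> int:
    """General path: repeated index lookups, as Count does today.

    Assumes a non-empty sep (Go's Count special-cases the empty separator).
    """
    n, start = 0, 0
    while True:
        i = data.find(sep, start)
        if i < 0:
            return n
        n += 1
        start = i + len(sep)

def count(data: bytes, sep: bytes) -> int:
    """Dispatch single-byte separators to a (stand-in) optimized path."""
    if len(sep) == 1:
        # bytes.count stands in here for an SSE-accelerated countByte
        return data.count(sep[0])
    return count_index_loop(data, sep)

data = b"abracadabra"
print(count(data, b"a"), count(data, b"ab"))  # 5 2
```

In the real proposal the single-byte branch would be the assembly `countByte`; both paths count non-overlapping occurrences, matching `Count`'s semantics.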
3,187 | 2,607,987,109 | IssuesEvent | 2015-02-26 00:52:13 | chrsmithdemos/zen-coding | https://api.github.com/repos/chrsmithdemos/zen-coding | opened | Notepad++ 'Enter Koan' ? | auto-migrated Priority-Medium Type-Defect | ```
I saw Sublime have the function 'Enter Koan', does Notepad++ have it also? How
to enable it?
Example of 'Enter Koan' https://tutsplus.com/lesson/creating-the-markup/
Thanks :)
```
-----
Original issue reported on code.google.com by `legendar...@gmail.com` on 6 Apr 2013 at 11:17
5,829 | 2,610,216,255 | IssuesEvent | 2015-02-26 19:08:56 | chrsmith/somefinders | https://api.github.com/repos/chrsmith/somefinders | opened | umdgen v 4 00 | auto-migrated Priority-Medium Type-Defect | ```
'''Bernard Zaitsev'''
Good day, I just can't find .umdgen v 4 00.
I've seen it somewhere before
'''Alvian Suvorov'''
Here, take the link http://bit.ly/1gf4sEB
'''Vladislav Ignatov'''
It asks you to enter a mobile number! Isn't that dangerous?
'''Adolf Burov'''
Nah, it's all fine, nothing was charged to me
'''Vlad Yermakov'''
Nah, it's all fine, nothing was charged to me
File information: umdgen v 4 00
Uploaded: this month
Times downloaded: 191
Rating: 489
Average download speed: 1226
Similar files: 19
```
-----
Original issue reported on code.google.com by `kondense...@gmail.com` on 16 Dec 2013 at 6:49
211,966 | 23,856,882,873 | IssuesEvent | 2022-09-07 01:13:03 | panasalap/linux-4.1.15 | https://api.github.com/repos/panasalap/linux-4.1.15 | reopened | CVE-2019-19332 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2019-19332 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.1.15/commit/aae4c2fa46027fd4c477372871df090c6b94f3f1">aae4c2fa46027fd4c477372871df090c6b94f3f1</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/cpuid.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/cpuid.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An out-of-bounds memory write issue was found in the Linux Kernel, version 3.13 through 5.4, in the way the Linux kernel's KVM hypervisor handled the 'KVM_GET_EMULATED_CPUID' ioctl(2) request to get CPUID features emulated by the KVM hypervisor. A user or process able to access the '/dev/kvm' device could use this flaw to crash the system, resulting in a denial of service.
<p>Publish Date: 2020-01-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19332>CVE-2019-19332</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-19332">https://www.linuxkernelcves.com/cves/CVE-2019-19332</a></p>
<p>Release Date: 2020-03-13</p>
<p>Fix Resolution: v5.5-rc1,v3.16.79,v4.14.159,v4.19.89,v4.4.207,v4.9.207,v5.3.16,v5.4.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
151,135 | 19,648,527,300 | IssuesEvent | 2022-01-10 01:57:12 | Kijacode/dotfiles | https://api.github.com/repos/Kijacode/dotfiles | opened | WS-2019-0605 (Medium) detected in CSS::Sassv3.4.11 | security vulnerability | ## WS-2019-0605 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>CSS::Sassv3.4.11</b></p></summary>
<p>
<p>Library home page: <a href=https://metacpan.org/pod/CSS::Sass>https://metacpan.org/pod/CSS::Sass</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/.vscode/extensions/vscjava.vscode-java-test-0.30.1/node_modules/node-sass/src/libsass/src/lexer.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Sass versions from 3.2.0 through 3.6.3 may read 1 byte outside an allocated buffer while parsing a specially crafted CSS rule.
<p>Publish Date: 2019-07-16
<p>URL: <a href=https://github.com/sass/libsass/commit/7a21c79e321927363a153dc5d7e9c492365faf9b>WS-2019-0605</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/OSV-2020-734">https://osv.dev/vulnerability/OSV-2020-734</a></p>
<p>Release Date: 2019-07-16</p>
<p>Fix Resolution: 3.6.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0605 (Medium) detected in CSS::Sassv3.4.11 - ## WS-2019-0605 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>CSS::Sassv3.4.11</b></p></summary>
<p>
<p>Library home page: <a href=https://metacpan.org/pod/CSS::Sass>https://metacpan.org/pod/CSS::Sass</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/.vscode/extensions/vscjava.vscode-java-test-0.30.1/node_modules/node-sass/src/libsass/src/lexer.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Sass versions from 3.2.0 through 3.6.3 may read 1 byte outside an allocated buffer while parsing a specially crafted CSS rule.
<p>Publish Date: 2019-07-16
<p>URL: <a href=https://github.com/sass/libsass/commit/7a21c79e321927363a153dc5d7e9c492365faf9b>WS-2019-0605</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/OSV-2020-734">https://osv.dev/vulnerability/OSV-2020-734</a></p>
<p>Release Date: 2019-07-16</p>
<p>Fix Resolution: 3.6.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | ws medium detected in css ws medium severity vulnerability vulnerable library css library home page a href found in base branch main vulnerable source files vscode extensions vscjava vscode java test node modules node sass src libsass src lexer cpp vulnerability details in sass versions between to may read byte outside an allocated buffer while parsing a specially crafted css rule publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
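The "CVSS 3 Score Details" block in the record above lists the base metrics (Attack Vector: Local, Attack Complexity: Low, Privileges Required/User Interaction: None, Scope: Unchanged, C/I: None, A: High) next to the 6.2 score. As a sanity check, those metrics can be fed through the CVSS v3.0 base-score equations; this is a hedged sketch using the weights from the public specification (scope-Unchanged case only), not part of the WhiteSource report:

```python
import math

# CVSS v3.0 metric weights from the specification (scope Unchanged only).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (S:U)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}               # C/I/A impact weights

def roundup(x):
    # CVSS "round up to one decimal place"
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss                      # scope Unchanged form
    if impact <= 0:
        return 0.0
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return roundup(min(impact + exploitability, 10))

# Metrics from the record above: Local/Low/None/None, C:N I:N A:H
print(base_score("L", "L", "N", "N", "N", "N", "H"))  # → 6.2
```

The same function reproduces the 7.5 score of the Tomcat CVE-2020-11996 record further down in this file, where the only metric that differs is the Network attack vector: `base_score("N", "L", "N", "N", "N", "N", "H")`.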
66,050 | 19,909,650,843 | IssuesEvent | 2022-01-25 15:59:11 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Error decrypting image | T-Defect | ### Steps to reproduce
1. Open an encrypted room. Note that encryption seems to be working for text (and other images).
2. Receive an encrypted image.
3. Wait for it to decrypt (noting the pulsing thumbnail)
4. "Error decrypting image"
### Outcome
#### What did you expect?
An image to be decrypted and visible.
#### What happened instead?
Error decrypting image
### Operating system
Arch Linux / Wayland w/ Sway
### Application version
Element version: 1.9.8 Olm version: 3.2.8
### How did you install the app?
Element version: 1.9.9 Olm version: 3.2.8
### Homeserver
Synapse 1.50.1
### Will you send logs?
Yes | 1.0 | Error decrypting image - ### Steps to reproduce
1. Open an encrypted room. Note that encryption seems to be working for text (and other images).
2. Receive an encrypted image.
3. Wait for it to decrypt (noting the pulsing thumbnail)
4. "Error decrypting image"
### Outcome
#### What did you expect?
An image to be decrypted and visible.
#### What happened instead?
Error decrypting image
### Operating system
Arch Linux / Wayland w/ Sway
### Application version
Element version: 1.9.8 Olm version: 3.2.8
### How did you install the app?
Element version: 1.9.9 Olm version: 3.2.8
### Homeserver
Synapse 1.50.1
### Will you send logs?
Yes | defect | error decrypting image steps to reproduce open an encrypted room note that encryption seems to be working for text and other images receive an encrypted image wait for it to decrypt noting the pulsing thumbnail error decrypting image outcome what did you expect an image to be decrypted and visible what happened instead error decrypting image operating system arch linux wayland w sway application version element version olm version how did you install the app element version olm version homeserver synapse will you send logs yes | 1 |
26,741 | 4,778,143,216 | IssuesEvent | 2016-10-27 18:24:06 | wheeler-microfluidics/microdrop | https://api.github.com/repos/wheeler-microfluidics/microdrop | closed | yesno dialogs should use main window as parent (Trac #98) | defect microdrop Migrated from Trac | This allows you to center them on the parent window as opposed to now, they appear randomly on the screen.
Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/98
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:43",
"description": "This allows you to center them on the parent window as opposed to now, they appear randomly on the screen.",
"reporter": "ryan",
"cc": "",
"resolution": "fixed",
"_ts": "1397763583719301",
"component": "microdrop",
"summary": "yesno dialogs should use main window as parent",
"priority": "minor",
"keywords": "",
"version": "0.1",
"time": "2012-04-13T21:44:54",
"milestone": "Microdrop 1.0",
"owner": "cfobel",
"type": "defect"
}
```
| 1.0 | yesno dialogs should use main window as parent (Trac #98) - This allows you to center them on the parent window as opposed to now, they appear randomly on the screen.
Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/98
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:43",
"description": "This allows you to center them on the parent window as opposed to now, they appear randomly on the screen.",
"reporter": "ryan",
"cc": "",
"resolution": "fixed",
"_ts": "1397763583719301",
"component": "microdrop",
"summary": "yesno dialogs should use main window as parent",
"priority": "minor",
"keywords": "",
"version": "0.1",
"time": "2012-04-13T21:44:54",
"milestone": "Microdrop 1.0",
"owner": "cfobel",
"type": "defect"
}
```
| defect | yesno dialogs should use main window as parent trac this allows you to center them on the parent window as opposed to now they appear randomly on the screen migrated from json status closed changetime description this allows you to center them on the parent window as opposed to now they appear randomly on the screen reporter ryan cc resolution fixed ts component microdrop summary yesno dialogs should use main window as parent priority minor keywords version time milestone microdrop owner cfobel type defect | 1 |
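The ticket above asks that yes/no dialogs be given the main window as their parent so they can be centered on it. In GTK this is normally achieved by making the main window the dialog's transient parent rather than positioning anything by hand; the arithmetic the toolkit then performs is essentially the following (a toolkit-agnostic sketch, not microdrop code):

```python
def center_on_parent(parent_x, parent_y, parent_w, parent_h, dialog_w, dialog_h):
    """Top-left corner that centers a dialog on its parent window."""
    x = parent_x + (parent_w - dialog_w) // 2
    y = parent_y + (parent_h - dialog_h) // 2
    return x, y

# Parent at (100, 100) sized 800x600; a 200x100 yes/no dialog lands at
# (400, 350) instead of at a random screen position.
print(center_on_parent(100, 100, 800, 600, 200, 100))  # → (400, 350)
```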
32,588 | 6,843,009,472 | IssuesEvent | 2017-11-12 10:20:23 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | minimize_cobyla broken if `disp=True` passed | defect scipy.optimize | 7819b7f removed the deprecated `iprint` argument from `optimize.cobyla`. However, it did not do so completely. If `disp=False` is passed to minimize, then some left-over code referencing `iprint` sets `iprint=0`. If `disp=True`, the converse is not true (iprint is left unbound). Later, `iprint` is unconditionally referenced.
c.f.
https://github.com/scipy/scipy/blob/master/scipy/optimize/cobyla.py#L200 sets iprint=0 if not disp. But https://github.com/scipy/scipy/blob/master/scipy/optimize/cobyla.py#L252 references iprint anyway.
### Reproducing code example:
```python
import scipy.optimize
def f(x): return sum(_**2 for _ in x)
scipy.optimize.minimize(f, [1], method="cobyla", options=dict(disp=True))
```
### Error message:
```python
>>> scipy.optimize.minimize(f, [1], method="cobyla", options=dict(disp=True))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/data/lmitche1/src/firedrake/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 492, in minimize
return _minimize_cobyla(fun, x0, args, constraints, **options)
File "/data/lmitche1/src/firedrake/lib/python3.5/site-packages/scipy/optimize/cobyla.py", line 252, in _minimize_cobyla
rhoend=rhoend, iprint=iprint, maxfun=maxfun,
UnboundLocalError: local variable 'iprint' referenced before assignment
>>>
```
### Scipy/Numpy/Python version information:
```python
>>> import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
1.0.0 1.13.3 sys.version_info(major=3, minor=5, micro=2, releaselevel='final', serial=0)
```
| 1.0 | minimize_cobyla broken if `disp=True` passed - 7819b7f removed the deprecated `iprint` argument from `optimize.cobyla`. However, it did not do so completely. If `disp=False` is passed to minimize, then some left-over code referencing `iprint` sets `iprint=0`. If `disp=True`, the converse is not true (iprint is left unbound). Later, `iprint` is unconditionally referenced.
c.f.
https://github.com/scipy/scipy/blob/master/scipy/optimize/cobyla.py#L200 sets iprint=0 if not disp. But https://github.com/scipy/scipy/blob/master/scipy/optimize/cobyla.py#L252 references iprint anyway.
### Reproducing code example:
```python
import scipy.optimize
def f(x): return sum(_**2 for _ in x)
scipy.optimize.minimize(f, [1], method="cobyla", options=dict(disp=True))
```
### Error message:
```python
>>> scipy.optimize.minimize(f, [1], method="cobyla", options=dict(disp=True))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/data/lmitche1/src/firedrake/lib/python3.5/site-packages/scipy/optimize/_minimize.py", line 492, in minimize
return _minimize_cobyla(fun, x0, args, constraints, **options)
File "/data/lmitche1/src/firedrake/lib/python3.5/site-packages/scipy/optimize/cobyla.py", line 252, in _minimize_cobyla
rhoend=rhoend, iprint=iprint, maxfun=maxfun,
UnboundLocalError: local variable 'iprint' referenced before assignment
>>>
```
### Scipy/Numpy/Python version information:
```python
>>> import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
1.0.0 1.13.3 sys.version_info(major=3, minor=5, micro=2, releaselevel='final', serial=0)
```
| defect | minimize cobyla broken if disp true passed removed the deprecated iprint argument from optimize cobyla however it did not do so completely if disp false is passed to minimize then some left over code referencing iprint sets iprint if disp true the converse is not true iprint is left unbound later iprint is unconditionally referenced c f sets iprint if not disp but references iprint anyway reproducing code example python import scipy optimize def f x return sum for in x scipy optimize minimize f method cobyla options dict disp true error message python scipy optimize minimize f method cobyla options dict disp true traceback most recent call last file line in file data src firedrake lib site packages scipy optimize minimize py line in minimize return minimize cobyla fun args constraints options file data src firedrake lib site packages scipy optimize cobyla py line in minimize cobyla rhoend rhoend iprint iprint maxfun maxfun unboundlocalerror local variable iprint referenced before assignment scipy numpy python version information python import sys scipy numpy print scipy version numpy version sys version info sys version info major minor micro releaselevel final serial | 1 |
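The traceback in the record above is the classic Python pattern of a name bound on only one branch. Stripped of the SciPy specifics, the bug and its one-line fix look like this (a hedged sketch of the pattern; the actual fix in `scipy/optimize/cobyla.py` may differ, and the verbose level `1` is a hypothetical value):

```python
def buggy(disp):
    # iprint is only bound when disp is falsy -- mirrors the report above
    if not disp:
        iprint = 0
    return iprint  # raises UnboundLocalError when disp=True

def fixed(disp):
    # bind iprint on every path; 1 stands in for a "verbose" level
    iprint = 1 if disp else 0
    return iprint

print(fixed(False), fixed(True))  # → 0 1
try:
    buggy(True)
except UnboundLocalError as exc:
    print(type(exc).__name__)  # → UnboundLocalError
```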
108,243 | 16,762,822,145 | IssuesEvent | 2021-06-14 03:17:30 | gms-ws-sandbox/nibrs-pr-test | https://api.github.com/repos/gms-ws-sandbox/nibrs-pr-test | opened | CVE-2020-11996 (High) detected in tomcat-embed-core-9.0.19.jar | security vulnerability | ## CVE-2020-11996 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.19.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs-pr-test/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.19/tomcat-embed-core-9.0.19.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.5.RELEASE.jar
- :x: **tomcat-embed-core-9.0.19.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs-pr-test/commit/860cc22f54e17594e32e303f0716fb065202fff5">860cc22f54e17594e32e303f0716fb065202fff5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A specially crafted sequence of HTTP/2 requests sent to Apache Tomcat 10.0.0-M1 to 10.0.0-M5, 9.0.0.M1 to 9.0.35 and 8.5.0 to 8.5.55 could trigger high CPU usage for several seconds. If a sufficient number of such requests were made on concurrent HTTP/2 connections, the server could become unresponsive.
<p>Publish Date: 2020-06-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996>CVE-2020-11996</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html">https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2020-06-26</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:10.0.0-M6,9.0.36,8.5.56,org.apache.tomcat.embed:org.apache.tomcat.embed:10.0.0-M6,9.0.36,8.5.56</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.19","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.1.5.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat-coyote:10.0.0-M6,9.0.36,8.5.56,org.apache.tomcat.embed:org.apache.tomcat.embed:10.0.0-M6,9.0.36,8.5.56"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-11996","vulnerabilityDetails":"A specially crafted sequence of HTTP/2 requests sent to Apache Tomcat 10.0.0-M1 to 10.0.0-M5, 9.0.0.M1 to 9.0.35 and 8.5.0 to 8.5.55 could trigger high CPU usage for several seconds. If a sufficient number of such requests were made on concurrent HTTP/2 connections, the server could become unresponsive.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-11996 (High) detected in tomcat-embed-core-9.0.19.jar - ## CVE-2020-11996 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.19.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nibrs-pr-test/tools/nibrs-summary-report-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.19/tomcat-embed-core-9.0.19.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.5.RELEASE.jar
- :x: **tomcat-embed-core-9.0.19.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gms-ws-sandbox/nibrs-pr-test/commit/860cc22f54e17594e32e303f0716fb065202fff5">860cc22f54e17594e32e303f0716fb065202fff5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A specially crafted sequence of HTTP/2 requests sent to Apache Tomcat 10.0.0-M1 to 10.0.0-M5, 9.0.0.M1 to 9.0.35 and 8.5.0 to 8.5.55 could trigger high CPU usage for several seconds. If a sufficient number of such requests were made on concurrent HTTP/2 connections, the server could become unresponsive.
<p>Publish Date: 2020-06-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996>CVE-2020-11996</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html">https://lists.apache.org/thread.html/r5541ef6b6b68b49f76fc4c45695940116da2bcbe0312ef204a00a2e0%40%3Cannounce.tomcat.apache.org%3E,http://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2020-06-26</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:10.0.0-M6,9.0.36,8.5.56,org.apache.tomcat.embed:org.apache.tomcat.embed:10.0.0-M6,9.0.36,8.5.56</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"9.0.19","packageFilePaths":["/tools/nibrs-summary-report-common/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:2.1.5.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:2.1.5.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:9.0.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat:tomcat-coyote:10.0.0-M6,9.0.36,8.5.56,org.apache.tomcat.embed:org.apache.tomcat.embed:10.0.0-M6,9.0.36,8.5.56"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-11996","vulnerabilityDetails":"A specially crafted sequence of HTTP/2 requests sent to Apache Tomcat 10.0.0-M1 to 10.0.0-M5, 9.0.0.M1 to 9.0.35 and 8.5.0 to 8.5.55 could trigger high CPU usage for several seconds. If a sufficient number of such requests were made on concurrent HTTP/2 connections, the server could become unresponsive.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11996","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in tomcat embed core jar cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file nibrs pr test tools nibrs summary report common pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details a specially crafted sequence of http requests sent to apache tomcat to to and to could trigger high cpu usage for several seconds if a sufficient number of such requests were made on concurrent http connections the server could become unresponsive publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat coyote org apache tomcat embed org apache tomcat embed isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org springframework boot spring boot starter web release org springframework boot spring boot starter tomcat release org apache tomcat embed tomcat embed core isminimumfixversionavailable true minimumfixversion org apache tomcat tomcat coyote org apache tomcat embed org apache tomcat embed basebranches vulnerabilityidentifier cve vulnerabilitydetails a specially crafted sequence of http requests sent to apache tomcat to to and to could trigger high cpu usage for several seconds if a sufficient number of such requests were made on concurrent http connections the server could become unresponsive vulnerabilityurl | 0 |
40,038 | 9,808,855,183 | IssuesEvent | 2019-06-12 16:31:08 | foxundermoon/vs-shell-format | https://api.github.com/repos/foxundermoon/vs-shell-format | closed | Cursor focus is stolen by update | defect | When the extension automatically updates `shellfmt`, focus is stolen by the Output window which shows up to report download process. I experienced this while writing a Rust file, and every time it reported the download progress percentage, my cursor was moved back to the output window, so I was forced to wait until the download finished. | 1.0 | Cursor focus is stolen by update - When the extension automatically updates `shellfmt`, focus is stolen by the Output window which shows up to report download process. I experienced this while writing a Rust file, and every time it reported the download progress percentage, my cursor was moved back to the output window, so I was forced to wait until the download finished. | defect | cursor focus is stolen by update when the extension automatically updates shellfmt focus is stolen by the output window which shows up to report download process i experienced this while writing a rust file and every time it reported the download progress percentage my cursor was moved back to the output window so i was forced to wait until the download finished | 1 |
47,387 | 13,056,158,848 | IssuesEvent | 2020-07-30 03:50:22 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | noise-generator and very late MCHits/MCPEs (Trac #473) | Migrated from Trac combo simulation defect | For maps with very late MCHits/MCPEs noise-generator will fill the RAM until the node dies (60gb and more). MCPE times are in the range of seconds to hours after the initial neutrino interaction. They can be generated by clsim in geant4 mode.
Migrated from https://code.icecube.wisc.edu/ticket/473
```json
{
"status": "closed",
"changetime": "2015-03-11T14:43:34",
"description": "For maps with very late MCHits/MCPEs noise-generator will fill the RAM until the node dies (60gb and more). MCPE times are in the range of seconds to hours after the initial neutrino interaction. They can be generated by clsim in geant4 mode.",
"reporter": "vehring",
"cc": "",
"resolution": "wontfix",
"_ts": "1426085014817363",
"component": "combo simulation",
"summary": "noise-generator and very late MCHits/MCPEs",
"priority": "normal",
"keywords": "",
"time": "2013-11-18T23:04:03",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| 1.0 | noise-generator and very late MCHits/MCPEs (Trac #473) - For maps with very late MCHits/MCPEs noise-generator will fill the RAM until the node dies (60gb and more). MCPE times are in the range of seconds to hours after the initial neutrino interaction. They can be generated by clsim in geant4 mode.
Migrated from https://code.icecube.wisc.edu/ticket/473
```json
{
"status": "closed",
"changetime": "2015-03-11T14:43:34",
"description": "For maps with very late MCHits/MCPEs noise-generator will fill the RAM until the node dies (60gb and more). MCPE times are in the range of seconds to hours after the initial neutrino interaction. They can be generated by clsim in geant4 mode.",
"reporter": "vehring",
"cc": "",
"resolution": "wontfix",
"_ts": "1426085014817363",
"component": "combo simulation",
"summary": "noise-generator and very late MCHits/MCPEs",
"priority": "normal",
"keywords": "",
"time": "2013-11-18T23:04:03",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| defect | noise generator and very late mchits mcpes trac for maps with very late mchits mcpes noise generator will fill the ram until the node dies and more mcpe times are in the range of seconds to hours after the initial neutrino interaction they can be generated by clsim in mode migrated from json status closed changetime description for maps with very late mchits mcpes noise generator will fill the ram until the node dies and more mcpe times are in the range of seconds to hours after the initial neutrino interaction they can be generated by clsim in mode reporter vehring cc resolution wontfix ts component combo simulation summary noise generator and very late mchits mcpes priority normal keywords time milestone owner olivas type defect | 1 |
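A rough back-of-the-envelope calculation makes the reported RAM blow-up plausible: uncorrelated dark noise is generated across the whole event time window, so a single MCPE arriving hours after the interaction stretches that window enormously. The numbers below (per-DOM noise rate, DOM count, bytes per MCPE) are illustrative assumptions for an IceCube-like detector, not values taken from the ticket:

```python
NOISE_RATE_HZ = 500        # assumed dark-noise rate per optical module
N_DOMS = 5160              # assumed number of optical modules
BYTES_PER_MCPE = 30        # assumed in-memory size of one MCPE

def noise_pe_count(window_s):
    """Expected number of noise PEs over a readout window of window_s seconds."""
    return NOISE_RATE_HZ * N_DOMS * window_s

# An event window stretched to one hour by a single late PE:
n_hour = noise_pe_count(3600)            # ~9.3e9 noise PEs
print(n_hour * BYTES_PER_MCPE / 1e9)     # → 278.64  (GB; consistent with ">60 GB")
```

Under these assumptions a one-hour window produces on the order of 10^9 noise PEs and hundreds of gigabytes of hit objects, which matches the "60gb and more" observation and explains the `wontfix` resolution: the behavior follows directly from generating noise over the full MCPE time range.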
59,516 | 17,023,149,485 | IssuesEvent | 2021-07-03 00:35:48 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Pedestrian ways look odd | Component: mapnik Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 10.24am, Monday, 19th March 2007]**
Look at this for example:
http://www.openstreetmap.org/index.html?lat=51.452401812718215&lon=-0.9620519709248013&zoom=14
That bright green really stands out far too much. The style I put in Osmarender seems much more subtle and conveys the idea of a pedestrian way better, IMO:
http://www.openstreetmap.org/index.html?lat=51.45577803154156&lon=-0.9715469887809586&zoom=17 | 1.0 | Pedestrian ways look odd - **[Submitted to the original trac issue database at 10.24am, Monday, 19th March 2007]**
Look at this for example:
http://www.openstreetmap.org/index.html?lat=51.452401812718215&lon=-0.9620519709248013&zoom=14
That bright green really stands out far too much. The style I put in Osmarender seems much more subtle and conveys the idea of a pedestrian way better, IMO:
http://www.openstreetmap.org/index.html?lat=51.45577803154156&lon=-0.9715469887809586&zoom=17 | defect | pedestrian ways look odd look at this for example that bright green really stands out far too much the style i put in osmarender seems much more subtle and conveys the idea of a pedestrian way better imo | 1 |
55,369 | 14,406,113,041 | IssuesEvent | 2020-12-03 19:43:17 | naev/naev | https://api.github.com/repos/naev/naev | opened | Auto-assigning of secondary weapons doesn't work properly | Priority-High Type-Defect | This is something I noticed some time ago but forgot to write an issue about. So I decided to look into it, and I couldn't figure out at a glance what causes this, so opening this now (since it's very important) and will probably do a git bisect later.
For some reason, automatic assignment of secondary weapons is broken. You can see this in the weapons screen, and as @nenau-again pointed out on the Discord server, this has also led to all AI ships not being able to use any weapon considered "secondary" (such as homing missiles and ion cannons).
What happens on the weapons screen is, primary weapons get assigned automatically properly, but secondary weapons instead become in a weird limbo state where they're unassigned, but the game also seems to think of them as assigned when determining whether or not the weapon set is active. This means if you want to use secondary weapons, you must configure your weapons manually (manual configuration works just fine).
Since the bug also occurs with AI ships, that would suggest that it has something to do with `pilot_weapSetAdd()` in pilot_weapon.c. That seems to have not changed in years, though, so it could be something within that, or maybe it could be a bug with the "level" argument somewhere. A git bisect should reveal this; I'll do one at some point when I have the spoons for it if no one else does.
🕵️ | 1.0 | Auto-assigning of secondary weapons doesn't work properly - This is something I noticed some time ago but forgot to write an issue about. So I decided to look into it, and I couldn't figure out at a glance what causes this, so opening this now (since it's very important) and will probably do a git bisect later.
For some reason, automatic assignment of secondary weapons is broken. You can see this in the weapons screen, and as @nenau-again pointed out on the Discord server, this has also led to all AI ships not being able to use any weapon considered "secondary" (such as homing missiles and ion cannons).
What happens on the weapons screen is, primary weapons get assigned automatically properly, but secondary weapons instead become in a weird limbo state where they're unassigned, but the game also seems to think of them as assigned when determining whether or not the weapon set is active. This means if you want to use secondary weapons, you must configure your weapons manually (manual configuration works just fine).
Since the bug also occurs with AI ships, that would suggest that it has something to do with `pilot_weapSetAdd()` in pilot_weapon.c. That seems to have not changed in years, though, so it could be something within that, or maybe it could be a bug with the "level" argument somewhere. A git bisect should reveal this; I'll do one at some point when I have the spoons for it if no one else does.
🕵️ | defect | auto assigning of secondary weapons doesn t work properly this is something i noticed some time ago but forgot to write an issue about so i decided to look into it and i couldn t figure out at a glance what causes this so opening this now since it s very important and will probably do a git bisect later for some reason automatic assignment of secondary weapons is broken you can see this in the weapons screen and as nenau again pointed out on the discord server this has also led to all ai ships not being able to use any weapon considered secondary such as homing missiles and ion cannons what happens on the weapons screen is primary weapons get assigned automatically properly but secondary weapons instead become in a weird limbo state where they re unassigned but the game also seems to think of them as assigned when determining whether or not the weapon set is active this means if you want to use secondary weapons you must configure your weapons manually manual configuration works just fine since the bug also occurs with ai ships that would suggest that it has something to do with pilot weapsetadd in pilot weapon c that seems to have not changed in years though so it could be something within that or maybe it could be a bug with the level argument somewhere a git bisect should reveal this i ll do one at some point when i have the spoons for it if no one else does 🕵️ | 1 |
40,025 | 8,718,271,803 | IssuesEvent | 2018-12-07 19:52:31 | rubberduck-vba/Rubberduck | https://api.github.com/repos/rubberduck-vba/Rubberduck | closed | False positive `Variable Foo is used but not assigned` when passed ByRef and assigned | code-path-analysis difficulty-03-duck enhancement resolver | Have the ability to pick up if a variable is passed `ByRef` and assigned where it's passed to.
```
Public Sub Test()
Dim counter As Long
If Foo(counter) Then
Dim i As Long
For i = 1 To counter
' ... Code that does stuff ...
Debug.Print i
Next
End If
End Sub
Private Function Foo(ByRef barCount As Long) As Boolean
barCount = 5
Foo = True
End Function
``` | 1.0 | False positive `Variable Foo is used but not assigned` when passed ByRef and assigned - Have the ability to pick up if a variable is passed `ByRef` and assigned where it's passed to.
```
Public Sub Test()
Dim counter As Long
If Foo(counter) Then
Dim i As Long
For i = 1 To counter
' ... Code that does stuff ...
Debug.Print i
Next
End If
End Sub
Private Function Foo(ByRef barCount As Long) As Boolean
barCount = 5
Foo = True
End Function
``` | non_defect | false positive variable foo is used but not assigned when passed byref and assigned have the ability to pick up if a variable is passed byref and assigned where it s passed to public sub test dim counter as long if foo counter then dim i as long for i to counter code that does stuff debug print i next end if end sub private function foo byref barcount as long as boolean barcount foo true end function | 0 |
35,486 | 7,753,424,746 | IssuesEvent | 2018-05-31 00:33:52 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | closed | IList methods do not throw exceptions for fixed size arrays | defect in-progress | Instead of throwing **NotSupportedException**:
- `IList.Insert` does nothing
- `IList.Add` does nothing and returns `-1`
- `IList.Remove` removes an element
- `IList.RemoveAt` throws `TypeError: list.System$Collections$IList$removeAt is not a function at Function.IsIList`
- `IList.Clear` assigns `null` to existing elements
### Steps To Reproduce
https://deck.net/30cb3c32279b4fbe587048a8aa63b975
https://dotnetfiddle.net/AtaSgF
```csharp
public class Program
{
public static void Main()
{
var arr = new int[] { 1, 2, 3 };
var ilist = (IList)arr;
try
{
ilist.Insert(0, 0);
Console.WriteLine("No Exception");
}
catch (Exception ex)
{
Console.WriteLine("[Error]: " + ex.Message);
}
}
}
```
### Expected Result
Console output:
```
[Error]: Collection was of a fixed size.
```
### Actual Result
Console output:
```
No Exception
```
### See also
- [#3583] IList.IsFixedSize returns incorrect result | 1.0 | IList methods do not throw exceptions for fixed size arrays - Instead of throwing **NotSupportedException**:
- `IList.Insert` does nothing
- `IList.Add` does nothing and returns `-1`
- `IList.Remove` removes an element
- `IList.RemoveAt` throws `TypeError: list.System$Collections$IList$removeAt is not a function at Function.IsIList`
- `IList.Clear` assigns `null` to existing elements
### Steps To Reproduce
https://deck.net/30cb3c32279b4fbe587048a8aa63b975
https://dotnetfiddle.net/AtaSgF
```csharp
public class Program
{
public static void Main()
{
var arr = new int[] { 1, 2, 3 };
var ilist = (IList)arr;
try
{
ilist.Insert(0, 0);
Console.WriteLine("No Exception");
}
catch (Exception ex)
{
Console.WriteLine("[Error]: " + ex.Message);
}
}
}
```
### Expected Result
Console output:
```
[Error]: Collection was of a fixed size.
```
### Actual Result
Console output:
```
No Exception
```
### See also
- [#3583] IList.IsFixedSize returns incorrect result | defect | ilist methods do not throw exceptions for fixed size arrays instead of throwing notsupportedexception ilist insert does nothing ilist add does nothing and returns ilist remove removes an element ilist removeat throws typeerror list system collections ilist removeat is not a function at function isilist ilist clear assigns null to existing elements steps to reproduce csharp public class program public static void main var arr new int var ilist ilist arr try ilist insert console writeline no exception catch exception ex console writeline ex message expected result console output collection was of a fixed size actual result console output no exception see also ilist isfixedsize returns incorrect result | 1 |
15,205 | 2,850,317,912 | IssuesEvent | 2015-05-31 13:34:12 | damonkohler/sl4a | https://api.github.com/repos/damonkohler/sl4a | opened | Leak found in FacadeManagerFactory | auto-migrated Priority-Medium Type-Defect | _From @GoogleCodeExporter on May 31, 2015 11:31_
```
What steps will reproduce the problem?
AndroidProxy constructor creates a FacadeManagerFactory and an instance of
JsonRpcServer(passing in the factory).
FacadeManagerFactory has a .create() method, which creates a new FacadeManger
adding the newly created object to a list (see
mFacadeManagers.add(facadeManager)), and returns the newly created FacadeManger
object.
There is no code to remove from the list.
The JsonRpcServer makes a call to FacadeManagerFactory .create method each time
it handles a connection.
The mFacadeManagers list grows each time an connection is handled, and the
objects in the list cannot be collected as a references to them is kept.
To reproduce you can makes calls in a loop to a facade and check in DDMS (or
MAT) the "Retained Heap" objects growing as FacadeManager objects number grows
without going down (see retained heap of FacadeManager).
```
Original issue reported on code.google.com by `anthony....@gmail.com` on 4 Jan 2013 at 1:24
_Copied from original issue: damonkohler/android-scripting#672_ | 1.0 | Leak found in FacadeManagerFactory - _From @GoogleCodeExporter on May 31, 2015 11:31_
```
What steps will reproduce the problem?
AndroidProxy constructor creates a FacadeManagerFactory and an instance of
JsonRpcServer(passing in the factory).
FacadeManagerFactory has a .create() method, which creates a new FacadeManger
adding the newly created object to a list (see
mFacadeManagers.add(facadeManager)), and returns the newly created FacadeManger
object.
There is no code to remove from the list.
The JsonRpcServer makes a call to FacadeManagerFactory .create method each time
it handles a connection.
The mFacadeManagers list grows each time an connection is handled, and the
objects in the list cannot be collected as a references to them is kept.
To reproduce you can makes calls in a loop to a facade and check in DDMS (or
MAT) the "Retained Heap" objects growing as FacadeManager objects number grows
without going down (see retained heap of FacadeManager).
```
Original issue reported on code.google.com by `anthony....@gmail.com` on 4 Jan 2013 at 1:24
_Copied from original issue: damonkohler/android-scripting#672_ | defect | leak found in facademanagerfactory from googlecodeexporter on may what steps will reproduce the problem androidproxy constructor creates a facademanagerfactory and an instance of jsonrpcserver passing in the factory facademanagerfactory has a create method which creates a new facademanger adding the newly created object to a list see mfacademanagers add facademanager and returns the newly created facademanger object there is no code to remove from the list the jsonrpcserver makes a call to facademanagerfactory create method each time it handles a connection the mfacademanagers list grows each time an connection is handled and the objects in the list cannot be collected as a references to them is kept to reproduce you can makes calls in a loop to a facade and check in ddms or mat the retained heap objects growing as facademanager objects number grows without going down see retained heap of facademanager original issue reported on code google com by anthony gmail com on jan at copied from original issue damonkohler android scripting | 1 |
225,938 | 7,496,513,895 | IssuesEvent | 2018-04-08 10:12:35 | CS2103JAN2018-W13-B4/main | https://api.github.com/repos/CS2103JAN2018-W13-B4/main | closed | Autoupdating priority math bug. | priority.high type.bug | A deadline set before the current date should be automatically bumped up to maximum priority level. | 1.0 | Autoupdating priority math bug. - A deadline set before the current date should be automatically bumped up to maximum priority level. | non_defect | autoupdating priority math bug a deadline set before the current date should be automatically bumped up to maximum priority level | 0 |
651,687 | 21,485,183,599 | IssuesEvent | 2022-04-26 22:14:23 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.1.0 beta release-105]Hide asphalt ramp via road tag | Priority: Low Category: UI Squad: Redwood | If you mouse over any road tile / craft tooptip you can see asphalt ramp item which should not be visible.
 | 1.0 | [0.9.1.0 beta release-105]Hide asphalt ramp via road tag - If you mouse over any road tile / craft tooptip you can see asphalt ramp item which should not be visible.
 | non_defect | hide asphalt ramp via road tag if you mouse over any road tile craft tooptip you can see asphalt ramp item which should not be visible | 0 |
59,212 | 17,016,517,343 | IssuesEvent | 2021-07-02 12:50:33 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | opened | Abbreviations for concatenated words is not working properly for Swedish | Component: nominatim Priority: minor Type: defect | **[Submitted to the original trac issue database at 7.38am, Thursday, 10th April 2014]**
In Swedish it is common to concatenate words into one longer word. One example is the street name "Kungsgatan", formed by the words "kungs" and "gatan". The correct abbreviation for "Kungsgatan" is "Kungsg.". Searching for kungsg in Nominatim yeilds no result since the rewrite for abbreviation for gatan separates the two words into "kungs g". Searching for "kungs g " works fine but is plain wrong in Swedish. | 1.0 | Abbreviations for concatenated words is not working properly for Swedish - **[Submitted to the original trac issue database at 7.38am, Thursday, 10th April 2014]**
In Swedish it is common to concatenate words into one longer word. One example is the street name "Kungsgatan", formed by the words "kungs" and "gatan". The correct abbreviation for "Kungsgatan" is "Kungsg.". Searching for kungsg in Nominatim yeilds no result since the rewrite for abbreviation for gatan separates the two words into "kungs g". Searching for "kungs g " works fine but is plain wrong in Swedish. | defect | abbreviations for concatenated words is not working properly for swedish in swedish it is common to concatenate words into one longer word one example is the street name kungsgatan formed by the words kungs and gatan the correct abbreviation for kungsgatan is kungsg searching for kungsg in nominatim yeilds no result since the rewrite for abbreviation for gatan separates the two words into kungs g searching for kungs g works fine but is plain wrong in swedish | 1 |
67,451 | 20,961,613,477 | IssuesEvent | 2022-03-27 21:49:40 | abedmaatalla/sipdroid | https://api.github.com/repos/abedmaatalla/sipdroid | closed | Sound is a mess in Motorola Photon | Priority-Medium Type-Defect auto-migrated | ```
Installs ok on Motorola Photon, integrates ok with the OS, BUT, in call sound
is all but clear. The user at the other side of the call hears more or less ok,
but sound in the Photon is only clicks and clicks.
```
Original issue reported on code.google.com by `glozano....@gmail.com` on 27 Oct 2011 at 11:26
| 1.0 | Sound is a mess in Motorola Photon - ```
Installs ok on Motorola Photon, integrates ok with the OS, BUT, in call sound
is all but clear. The user at the other side of the call hears more or less ok,
but sound in the Photon is only clicks and clicks.
```
Original issue reported on code.google.com by `glozano....@gmail.com` on 27 Oct 2011 at 11:26
| defect | sound is a mess in motorola photon installs ok on motorola photon integrates ok with the os but in call sound is all but clear the user at the other side of the call hears more or less ok but sound in the photon is only clicks and clicks original issue reported on code google com by glozano gmail com on oct at | 1 |
81,130 | 30,721,496,544 | IssuesEvent | 2023-07-27 16:14:34 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | Segfault when using refined meshes with mortar contact | T: defect P: normal | ## Bug Description
Trying to use RefineSidesetGenerator together with penalty mortar contact (refining the contact area). Segfault. And in dbg
```
No index 18446744073709551615 in ghosted vector.
Vector contains [0,33618)
And empty ghost array.
```
Tagging @lindsayad
## Steps to Reproduce
```
cd $MOOSE_DIR/modules/contact/test/tests/pdass_problems
../../../contact-opt -i cylinder_friction_penalty.i Mesh/refine/type=RefineSidesetGenerator Mesh/refine/boundaries=3 Mesh/refine/input=input_file Mesh/refine/refinement=1 Mesh/secondary/input=refine
```
## Impact
Prevents users from refining meshes along contact sidesets. | 1.0 | Segfault when using refined meshes with mortar contact - ## Bug Description
Trying to use RefineSidesetGenerator together with penalty mortar contact (refining the contact area). Segfault. And in dbg
```
No index 18446744073709551615 in ghosted vector.
Vector contains [0,33618)
And empty ghost array.
```
Tagging @lindsayad
## Steps to Reproduce
```
cd $MOOSE_DIR/modules/contact/test/tests/pdass_problems
../../../contact-opt -i cylinder_friction_penalty.i Mesh/refine/type=RefineSidesetGenerator Mesh/refine/boundaries=3 Mesh/refine/input=input_file Mesh/refine/refinement=1 Mesh/secondary/input=refine
```
## Impact
Prevents users from refining meshes along contact sidesets. | defect | segfault when using refined meshes with mortar contact bug description trying to use refinesidesetgenerator together with penalty mortar contact refining the contact area segfault and in dbg no index in ghosted vector vector contains and empty ghost array tagging lindsayad steps to reproduce cd moose dir modules contact test tests pdass problems contact opt i cylinder friction penalty i mesh refine type refinesidesetgenerator mesh refine boundaries mesh refine input input file mesh refine refinement mesh secondary input refine impact prevents users from refining meshes along contact sidesets | 1 |
47,364 | 13,056,143,395 | IssuesEvent | 2020-07-30 03:47:26 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | Handle leap seconds properly (Trac #421) | Migrated from Trac dataclasses defect | End of June contains a leap second.
Below is some information on how the DAQ is handling this and future leap seconds. They use this information to convert from the UTC reported by GPS to the "DAQ Time" (0.1 ms counts since Jan 1 0:0:0.
Our conversions back to UTC likely need similar treatments, since these are generally what is used for pointing, comparisons to other experiments, etc.
---------- Forwarded message ----------
From: Dave Glowacki <dave.glowacki@icecube.wisc.edu>
Date: Fri, Jun 15, 2012 at 12:10 PM
Subject: How DAQ is handling the June 30 leap second
To: Benedikt Riedel <briedel@icecube.wisc.edu>
On June 30, an extra second will be added to the end of the day, so
23:59:59 will be followed by 23:59:60 and then July 1 00:00:00. From
a DAQ point of view, this is unnoticed. It's counting the number of
seconds since January 1 00:00:00 and at the end of the year there will
simply be an extra second of data before DAQ time wraps back to 0
The only problem arises when translating DAQ seconds into UTC.
Normally DAQ times are translated to UTC like this:
:::: DAQ time :::: ::::::::::: UTC :::::::::::
157247999999999999 June 30 23:59:59.9999999999
157248000000000000 July 1 00:00:00.0000000000
157248010000000000 July 1 00:00:01.0000000000
Because of this year's June 30 leap second, DAQ times need to be
translated differently:
:::: DAQ time :::: ::::::::::: UTC :::::::::::
157247999999999999 June 30 23:59:59.9999999999
157248000000000000 June 30 23:59:60.0000000000
157248010000000000 July 1 00:00:00.0000000000
Inside DAQ, we're using the leapseconds file found at
ftp://tycho.usno.navy.mil/pub/ntp as the basis for our translation
from DAQ time into UTC. That file lists all the leap seconds since
1972, counting from second 0 on Jan 1 1900. It includes an expiration
date so software can automatically fetch new versions from that FTP
site without requiring human intervention.
(I would hope that system software would factor in this information,
but that doesn't appear to be the case.)
An older email:
Hi Guys,
Some of this may have already gone by so apologies if I missed
something in advance.
I'm not sure if it's enough information, but..
The gps clock has a month long warning of an impending leap second.
It should show up in the time string. The second can get added or
subtracted four times a year ( although they've only used June, Dec )
ntp_gettime ( linux system call ) that will give you utc offset from
tai for the current time.
You probably already know this so apologies if I missed a comment
going by.. ntpd uses a leap second definition file provided by nist.
The file is available in:
ftp://time.nist.gov/pub
It's called leap-seconds.<serial number>. The current one is called
ftp://time.nist.gov/pub/leap-seconds.3535228800
The format is in ntp timestamp and UTC - TAI in seconds.
Matt
Migrated from https://code.icecube.wisc.edu/ticket/421
```json
{
"status": "closed",
"changetime": "2012-06-28T16:16:55",
"description": "End of June contains a leap second. \n\nBelow is some information on how the DAQ is handling this and future leap seconds. They use this information to convert from the UTC reported by GPS to the \"DAQ Time\" (0.1 ms counts since Jan 1 0:0:0.\n\nOur conversions back to UTC likely need similar treatments, since these are generally what is used for pointing, comparisons to other experiments, etc.\n\n\n---------- Forwarded message ----------\nFrom: Dave Glowacki <dave.glowacki@icecube.wisc.edu>\nDate: Fri, Jun 15, 2012 at 12:10 PM\nSubject: How DAQ is handling the June 30 leap second\nTo: Benedikt Riedel <briedel@icecube.wisc.edu>\n\n\nOn June 30, an extra second will be added to the end of the day, so\n23:59:59 will be followed by 23:59:60 and then July 1 00:00:00. From\na DAQ point of view, this is unnoticed. It's counting the number of\nseconds since January 1 00:00:00 and at the end of the year there will\nsimply be an extra second of data before DAQ time wraps back to 0\n\nThe only problem arises when translating DAQ seconds into UTC.\nNormally DAQ times are translated to UTC like this:\n\n:::: DAQ time :::: ::::::::::: UTC :::::::::::\n157247999999999999 June 30 23:59:59.9999999999\n157248000000000000 July 1 00:00:00.0000000000\n157248010000000000 July 1 00:00:01.0000000000\n\nBecause of this year's June 30 leap second, DAQ times need to be\ntranslated differently:\n\n:::: DAQ time :::: ::::::::::: UTC :::::::::::\n157247999999999999 June 30 23:59:59.9999999999\n157248000000000000 June 30 23:59:60.0000000000\n157248010000000000 July 1 00:00:00.0000000000\n\nInside DAQ, we're using the leapseconds file found at\nftp://tycho.usno.navy.mil/pub/ntp as the basis for our translation\nfrom DAQ time into UTC. That file lists all the leap seconds since\n1972, counting from second 0 on Jan 1 1900. 
It includes an expiration\ndate so software can automatically fetch new versions from that FTP\nsite without requiring human intervention.\n\n(I would hope that system software would factor in this information,\nbut that doesn't appear to be the case.)\n\n\nAn older email:\n\nHi Guys,\n\nSome of this may have already gone by so apologies if I missed\nsomething in advance.\n\nI'm not sure if it's enough information, but..\n\nThe gps clock has a month long warning of an impending leap second.\nIt should show up in the time string. The second can get added or\nsubtracted four times a year ( although they've only used June, Dec )\n\nntp_gettime ( linux system call ) that will give you utc offset from\ntai for the current time.\n\nYou probably already know this so apologies if I missed a comment\ngoing by.. ntpd uses a leap second definition file provided by nist.\n\nThe file is available in:\n\nftp://time.nist.gov/pub\n\nIt's called leap-seconds.<serial number>. The current one is called\nftp://time.nist.gov/pub/leap-seconds.3535228800\n\nThe format is in ntp timestamp and UTC - TAI in seconds.\n\nMatt",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1340900215000000",
"component": "dataclasses",
"summary": "Handle leap seconds properly",
"priority": "normal",
"keywords": "",
"time": "2012-06-22T19:07:17",
"milestone": "",
"owner": "kjmeagher",
"type": "defect"
}
```
| 1.0 | Handle leap seconds properly (Trac #421) - End of June contains a leap second.
Below is some information on how the DAQ is handling this and future leap seconds. They use this information to convert from the UTC reported by GPS to the "DAQ Time" (0.1 ms counts since Jan 1 0:0:0.
Our conversions back to UTC likely need similar treatments, since these are generally what is used for pointing, comparisons to other experiments, etc.
---------- Forwarded message ----------
From: Dave Glowacki <dave.glowacki@icecube.wisc.edu>
Date: Fri, Jun 15, 2012 at 12:10 PM
Subject: How DAQ is handling the June 30 leap second
To: Benedikt Riedel <briedel@icecube.wisc.edu>
On June 30, an extra second will be added to the end of the day, so
23:59:59 will be followed by 23:59:60 and then July 1 00:00:00. From
a DAQ point of view, this is unnoticed. It's counting the number of
seconds since January 1 00:00:00 and at the end of the year there will
simply be an extra second of data before DAQ time wraps back to 0
The only problem arises when translating DAQ seconds into UTC.
Normally DAQ times are translated to UTC like this:
:::: DAQ time :::: ::::::::::: UTC :::::::::::
157247999999999999 June 30 23:59:59.9999999999
157248000000000000 July 1 00:00:00.0000000000
157248010000000000 July 1 00:00:01.0000000000
Because of this year's June 30 leap second, DAQ times need to be
translated differently:
:::: DAQ time :::: ::::::::::: UTC :::::::::::
157247999999999999 June 30 23:59:59.9999999999
157248000000000000 June 30 23:59:60.0000000000
157248010000000000 July 1 00:00:00.0000000000
Inside DAQ, we're using the leapseconds file found at
ftp://tycho.usno.navy.mil/pub/ntp as the basis for our translation
from DAQ time into UTC. That file lists all the leap seconds since
1972, counting from second 0 on Jan 1 1900. It includes an expiration
date so software can automatically fetch new versions from that FTP
site without requiring human intervention.
(I would hope that system software would factor in this information,
but that doesn't appear to be the case.)
An older email:
Hi Guys,
Some of this may have already gone by so apologies if I missed
something in advance.
I'm not sure if it's enough information, but..
The gps clock has a month long warning of an impending leap second.
It should show up in the time string. The second can get added or
subtracted four times a year ( although they've only used June, Dec )
ntp_gettime ( linux system call ) that will give you utc offset from
tai for the current time.
You probably already know this so apologies if I missed a comment
going by.. ntpd uses a leap second definition file provided by nist.
The file is available in:
ftp://time.nist.gov/pub
It's called leap-seconds.<serial number>. The current one is called
ftp://time.nist.gov/pub/leap-seconds.3535228800
The format is in ntp timestamp and UTC - TAI in seconds.
Matt
Migrated from https://code.icecube.wisc.edu/ticket/421
```json
{
"status": "closed",
"changetime": "2012-06-28T16:16:55",
"description": "End of June contains a leap second. \n\nBelow is some information on how the DAQ is handling this and future leap seconds. They use this information to convert from the UTC reported by GPS to the \"DAQ Time\" (0.1 ms counts since Jan 1 0:0:0.\n\nOur conversions back to UTC likely need similar treatments, since these are generally what is used for pointing, comparisons to other experiments, etc.\n\n\n---------- Forwarded message ----------\nFrom: Dave Glowacki <dave.glowacki@icecube.wisc.edu>\nDate: Fri, Jun 15, 2012 at 12:10 PM\nSubject: How DAQ is handling the June 30 leap second\nTo: Benedikt Riedel <briedel@icecube.wisc.edu>\n\n\nOn June 30, an extra second will be added to the end of the day, so\n23:59:59 will be followed by 23:59:60 and then July 1 00:00:00. From\na DAQ point of view, this is unnoticed. It's counting the number of\nseconds since January 1 00:00:00 and at the end of the year there will\nsimply be an extra second of data before DAQ time wraps back to 0\n\nThe only problem arises when translating DAQ seconds into UTC.\nNormally DAQ times are translated to UTC like this:\n\n:::: DAQ time :::: ::::::::::: UTC :::::::::::\n157247999999999999 June 30 23:59:59.9999999999\n157248000000000000 July 1 00:00:00.0000000000\n157248010000000000 July 1 00:00:01.0000000000\n\nBecause of this year's June 30 leap second, DAQ times need to be\ntranslated differently:\n\n:::: DAQ time :::: ::::::::::: UTC :::::::::::\n157247999999999999 June 30 23:59:59.9999999999\n157248000000000000 June 30 23:59:60.0000000000\n157248010000000000 July 1 00:00:00.0000000000\n\nInside DAQ, we're using the leapseconds file found at\nftp://tycho.usno.navy.mil/pub/ntp as the basis for our translation\nfrom DAQ time into UTC. That file lists all the leap seconds since\n1972, counting from second 0 on Jan 1 1900. 
It includes an expiration\ndate so software can automatically fetch new versions from that FTP\nsite without requiring human intervention.\n\n(I would hope that system software would factor in this information,\nbut that doesn't appear to be the case.)\n\n\nAn older email:\n\nHi Guys,\n\nSome of this may have already gone by so apologies if I missed\nsomething in advance.\n\nI'm not sure if it's enough information, but..\n\nThe gps clock has a month long warning of an impending leap second.\nIt should show up in the time string. The second can get added or\nsubtracted four times a year ( although they've only used June, Dec )\n\nntp_gettime ( linux system call ) that will give you utc offset from\ntai for the current time.\n\nYou probably already know this so apologies if I missed a comment\ngoing by.. ntpd uses a leap second definition file provided by nist.\n\nThe file is available in:\n\nftp://time.nist.gov/pub\n\nIt's called leap-seconds.<serial number>. The current one is called\nftp://time.nist.gov/pub/leap-seconds.3535228800\n\nThe format is in ntp timestamp and UTC - TAI in seconds.\n\nMatt",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"_ts": "1340900215000000",
"component": "dataclasses",
"summary": "Handle leap seconds properly",
"priority": "normal",
"keywords": "",
"time": "2012-06-22T19:07:17",
"milestone": "",
"owner": "kjmeagher",
"type": "defect"
}
```
| defect | Handle leap seconds properly. End of June contains a leap second. Below is some information on how the DAQ is handling this and future leap seconds; they use this information to convert from the UTC reported by GPS to the DAQ time (ms counts since Jan). Our conversions back to UTC likely need similar treatments, since these are generally what is used for pointing, comparisons to other experiments, etc.

Forwarded message — from: Dave Glowacki; date: Fri, Jun; subject: how DAQ is handling the June leap second; to: Benedikt Riedel.

In June an extra second will be added to the end of the day. From a DAQ point of view this is unnoticed: it is counting the number of seconds since January, and at the end of the year there will simply be an extra second of data before DAQ time wraps back. The only problem arises when translating DAQ seconds into UTC. Normally DAQ times are translated to UTC directly; because of this year's June leap second, DAQ times after the leap need to be translated differently. Inside DAQ we're using the leapseconds file found at ftp://tycho.usno.navy.mil/pub/ntp as the basis for our translation from DAQ time into UTC. That file lists all the leap seconds, counting from a reference second in January. It includes an expiration date, so software can automatically fetch new versions from that FTP site without requiring human intervention. I would hope that system software would factor in this information, but that doesn't appear to be the case.

An older email: Hi guys, some of this may have already gone by, so apologies if I missed something in advance. I'm not sure if it's enough information, but: the GPS clock has a month-long warning of an impending leap second; it should show up in the time string. The second can get added or subtracted four times a year, although they've only used June and December. ntp_gettime is a Linux system call that will give you the UTC offset from TAI for the current time. You probably already know this, so apologies if I missed a comment going by. ntpd uses a leap-second definition file provided by NIST; the file is available in ftp://time.nist.gov/pub and is called leap-seconds. The format is an NTP timestamp and UTC−TAI in seconds. — Matt

(Migrated from JSON; status: closed; reporter: blaufuss; resolution: fixed; component: dataclasses; summary: handle leap seconds properly; priority: normal; keywords: time; owner: kjmeagher; type: defect) | 1 |
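The DAQ-to-UTC translation described in that record can be sketched as a small table lookup: convert elapsed DAQ seconds naively, then subtract one second for each leap second already passed. This is a hypothetical illustration, not the actual DAQ code; the epoch, the single-entry leap-second table, and the function name are assumptions for the example.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical leap-second table: UTC instants after which one extra
# second has accumulated in the DAQ count (here: end of June 2012).
LEAP_SECONDS = [
    datetime(2012, 7, 1, tzinfo=timezone.utc),
]

# Assumed DAQ epoch: seconds counted since Jan 1 of the year.
DAQ_EPOCH = datetime(2012, 1, 1, tzinfo=timezone.utc)

def daq_to_utc(daq_seconds: float) -> datetime:
    """Convert DAQ seconds-since-epoch to a UTC timestamp.

    The naive conversion runs one second ahead after each leap second,
    so subtract one second per boundary already crossed. (The inserted
    23:59:60 second itself maps onto 23:59:59 in this simple sketch.)
    """
    utc = DAQ_EPOCH + timedelta(seconds=daq_seconds)
    for boundary in LEAP_SECONDS:
        if utc >= boundary:
            utc -= timedelta(seconds=1)
    return utc
```

For example, 15,724,800 DAQ seconds into 2012 (182 nominal days) lands on the leap second at the end of June 30 rather than on July 1, because the extra second has not yet been accounted for in the naive count.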
485,883 | 14,000,877,398 | IssuesEvent | 2020-10-28 12:57:02 | Ameelio/letters-mobile | https://api.github.com/repos/Ameelio/letters-mobile | opened | [Bug] Wrong tab icon is shown as active | low-priority | <!--- Provide a general summary of the issue in the Title above -->
Isis: After I tracked, I pressed Home, but it highlighted the Store icon while showing me the Home page. I pressed Home, then Store again, and it was normal. | 1.0 | non_defect | 0
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4450961173/jobs/7817046026" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4450961173/jobs/7817046026" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4450961173/jobs/7817046026" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4450961173/jobs/7817046026" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_functional/test_core/test_elementwise.py::test_square[cpu-ivy.functional.backends.jax-False-False]</summary>
2023-03-17T21:55:52.0451142Z E TypeError: Value ivy.array(-1.) with type <class 'ivy.array.array.Array'> is not a valid JAX type
2023-03-17T21:55:52.0457615Z E ivy.utils.exceptions.IvyBackendException: jax: stop_gradient: Value ivy.array(-1.) with type <class 'ivy.array.array.Array'> is not a valid JAX type
2023-03-17T21:55:52.0461650Z E ivy.utils.exceptions.IvyBackendException: jax: nested_map: jax: stop_gradient: Value ivy.array(-1.) with type <class 'ivy.array.array.Array'> is not a valid JAX type
2023-03-17T21:55:52.0465442Z E ivy.utils.exceptions.IvyBackendException: jax: nested_map: jax: nested_map: jax: stop_gradient: Value ivy.array(-1.) with type <class 'ivy.array.array.Array'> is not a valid JAX type
2023-03-17T21:55:52.0468970Z E ivy.utils.exceptions.IvyBackendException: jax: nested_map: jax: nested_map: jax: nested_map: jax: stop_gradient: Value ivy.array(-1.) with type <class 'ivy.array.array.Array'> is not a valid JAX type
2023-03-17T21:55:52.0473646Z E ivy.utils.exceptions.IvyBackendException: jax: nested_map: jax: nested_map: jax: nested_map: jax: nested_map: jax: stop_gradient: Value ivy.array(-1.) with type <class 'ivy.array.array.Array'> is not a valid JAX type
2023-03-17T21:55:52.0478359Z E ivy.utils.exceptions.IvyBackendException: jax: execute_with_gradients: jax: nested_map: jax: nested_map: jax: nested_map: jax: nested_map: jax: stop_gradient: Value ivy.array(-1.) with type <class 'ivy.array.array.Array'> is not a valid JAX type
2023-03-17T21:55:52.0478882Z E Falsifying example: test_square(
2023-03-17T21:55:52.0479240Z E dtype_and_x=(['float32'], [array(-1., dtype=float32)]),
2023-03-17T21:55:52.0479587Z E ground_truth_backend='tensorflow',
2023-03-17T21:55:52.0479871Z E fn_name='square',
2023-03-17T21:55:52.0480114Z E test_flags=FunctionTestFlags(
2023-03-17T21:55:52.0480370Z E num_positional_args=1,
2023-03-17T21:55:52.0480601Z E with_out=False,
2023-03-17T21:55:52.0668430Z E instance_method=False,
2023-03-17T21:55:52.0668723Z E test_gradients=True,
2023-03-17T21:55:52.0668992Z E test_compile=False,
2023-03-17T21:55:52.0669477Z E as_variable=[False],
2023-03-17T21:55:52.0669717Z E native_arrays=[False],
2023-03-17T21:55:52.0669947Z E container=[False],
2023-03-17T21:55:52.0670162Z E ),
2023-03-17T21:55:52.0670751Z E backend_fw=<module 'ivy.functional.backends.jax' from '/ivy/ivy/functional/backends/jax/__init__.py'>,
2023-03-17T21:55:52.0671136Z E on_device='cpu',
2023-03-17T21:55:52.0671347Z E )
2023-03-17T21:55:52.0671524Z E
2023-03-17T21:55:52.0672048Z E You can reproduce this example by temporarily adding @reproduce_failure('6.70.0', b'AXic42QAAkYGCGCEEwABBwAN') as a decorator on your test case
</details>
| 1.0 | non_defect | 0
57,558 | 15,860,692,749 | IssuesEvent | 2021-04-08 09:27:19 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | closed | Passing an empty array or collection to returningResult() does not match the behaviour of returning() | C: Functionality E: All Editions P: Medium T: Defect T: Incompatible change | ### Expected behavior
The `returningResult()` and `returning()` methods should return the same result.
### Actual behavior
The `returningResult()` method returns an empty result (wrong result).
The `returning()` method returns the expected result.
### Steps to reproduce the problem
I tried this query:
**RETURN THE TWO ROWS AS EXPECTED**
```Java
var inserted = ctx.insertInto(PRODUCTLINE,
PRODUCTLINE.PRODUCT_LINE, PRODUCTLINE.TEXT_DESCRIPTION, PRODUCTLINE.CODE)
.values("Master Vans", "This new line of master vans ...", 11983423L)
.values("Cool Cars", "This new line of cool cars ...", 11193384L)
.returning()
.fetch();
```
**RETURN AN EMPTY RESULT**
```Java
var inserted = ctx.insertInto(PRODUCTLINE,
PRODUCTLINE.PRODUCT_LINE, PRODUCTLINE.TEXT_DESCRIPTION, PRODUCTLINE.CODE)
.values("Master Vans", "This new line of master vans ...", 11983423L)
.values("Cool Cars", "This new line of cool cars ...", 11193384L)
.returningResult()
.fetch();
```
### Versions
- jOOQ: 3.14.7
- Java: 14
- Database (include vendor): MySQL, SQL Server, PostgreSQL, Oracle
- OS: Windows 10
- JDBC Driver (include name if unofficial driver): - | 1.0 | defect | 1
78,071 | 27,307,520,904 | IssuesEvent | 2023-02-24 09:30:22 | zed-industries/community | https://api.github.com/repos/zed-industries/community | opened | Restarting language server (that resolves code issues) leaves project issues icon | defect triage | ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
After adding a `go.work` file to a monorepo root and restarting the language server (so that gopls picks up the new workspace file and resolves the issues), the `project diagnostics` icon still shows the previous errors.
### Environment
Zed: 0.74.2
Go: 1.20
macos: 13.0.1 (intel)
### If applicable, add mockups / screenshots to help explain present your vision of the feature
<img width="352" alt="Screenshot 2023-02-24 at 09 28 36" src="https://user-images.githubusercontent.com/49863988/221142564-d94b9e74-c24e-4be6-bfac-9329cb849ee1.png">
<img width="88" alt="Screenshot 2023-02-24 at 09 28 30" src="https://user-images.githubusercontent.com/49863988/221142568-4faea1a8-8266-4680-b55e-781a828ad81c.png">
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
```
2023-02-24T09:21:13 [INFO] unhandled notification window/logMessage:
{
"type": 1,
"message": "2023/02/24 09:21:13 errors loading workspace: You are outside of a module and outside of $GOPATH/src.\nIf you are using modules, please open your editor to a directory in your module.\nIf you believe this warning is incorrect, please file an issue: https://github.com/golang/go/issues/new.\n\tsnapshot=6\n\tdirectory=file:///Users/*******/Documents/GitHub/*******\n"
}
2023-02-24T09:21:27 [ERROR] oneshot canceled
2023-02-24T09:21:27 [INFO] unhandled notification window/logMessage:
{
"type": 3,
"message": "2023/02/24 09:21:27 go env for /Users/*******/Documents/GitHub/synap\n(root /Users/*******/Documents/GitHub/*******)\n(go version go version go1.20 darwin/amd64)\n(valid build configuration = true)\n(build flags: [])\nGOMOD=/dev/null\nGOFLAGS=\nGOCACHE=/Users/*******/Library/Caches/go-build\nGOMODCACHE=/Users/*******/go/pkg/mod\nGOWORK=/Users/*******/Documents/GitHub/*******/go.work\nGO111MODULE=\nGOPRIVATE=\nGOPROXY=https://proxy.golang.org,direct\nGOINSECURE=\nGONOSUMDB=\nGOPATH=/Users/*******/go\nGOSUMDB=sum.golang.org\nGONOPROXY=\nGOROOT=/usr/local/go\n\n"
}
2023-02-24T09:21:28 [INFO] unhandled notification window/logMessage:
{
"type": 3,
"message": "2023/02/24 09:21:28 go/packages.Load #1\n\tsnapshot=0\n\tdirectory=file:///Users/*******/Documents/GitHub/*******\n\tquery=[builtin github.com/*******/*******/libraries/dev-hq-2/...]\n\tpackages=11\n"
}
2023-02-24T09:21:28 [INFO] unhandled notification window/logMessage:
{
"type": 3,
"message": "2023/02/24 09:21:28 go/packages.Load #1: updating metadata for 274 packages\n"
}
``` | 1.0 | defect | 1
100,984 | 11,210,289,296 | IssuesEvent | 2020-01-06 12:50:26 | vuetifyjs/vuetify | https://api.github.com/repos/vuetifyjs/vuetify | closed | [Documentation] update v-data-table options prop | T: documentation | ### Problem to solve
Vuetify 2 introduces new props for the data table, like page, items-per-page, sort-by and sort-desc. Devs will assume that those options have to be synced individually, which adds to the complexity of implementation. The options prop also allows sync (missing from the documentation) and can handle all of these props. The options prop is not documented (the docs just say refer to examples). I had to convert 20+ tables from Vuetify 1.5 and this made the conversion somewhat painful. If you are using server-side pagination you actually have to use the options prop, because you cannot easily watch all the individual sort and page props for changes (as they update in some random order when the user clicks on header sort options).
Document v-data-table options property and mention .sync modifier can be used.
<!-- generated by vuetify-issue-helper. DO NOT REMOVE --> | 1.0 | non_defect | 0
57,432 | 15,782,927,128 | IssuesEvent | 2021-04-01 13:24:09 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | p-table with save state (localstorage/sessionstorage) remember old value after delete from filter | defect | **I'm submitting a ...** (check one with "x")
```
[x ] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
Please demonstrate your case at stackblitz by using the issue template below. Issues without a test case have much less possibility to be reviewed in detail and assisted.
https://stackblitz.com/edit/primeng-tablestate-demo-uckb6g?file=src%2Fapp%2Fapp.component.html
**Current behavior**
If the user types some text into a filter and then deletes it, the text from the filter is still alive in localStorage/sessionStorage. I see that this issue exists when the p-table doesn't have [paginator]="true". When the p-table has [paginator]="true", the state works fine.
**Expected behavior**
When the p-table doesn't have [paginator]="true" and the user deletes the text from a filter, it should be deleted from storage.
**Minimal reproduction of the problem with instructions**
In the case section above I pasted a URL demonstrating the bug. On that page, press F12 and go to 'Application' -> 'Local Storage' -> the stackblitz URL; then in stackblitz put some text into a filter and you will see the filter value saved in storage. If you remove all characters from the filter, the storage will still hold the old value. If you add '[paginator]="true"' to the p-table and do the same steps, the value is deleted from storage.
**What is the motivation / use case for changing the behavior?**
I use a custom paginator in a production app, and after a page refresh the filters are filled with the old value. To remove the value, the user must clear the application storage.
* **Angular version:** 11.0.8
* **PrimeNG version:** 11.2.3
* **Browser:** all | 1.0 | defect | 1
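The storage bug in the record above boils down to "write the state only when it is non-empty, and never clear it". A minimal, language-agnostic sketch of that pattern and its fix (hypothetical names; not PrimeNG's actual implementation — a plain dict stands in for localStorage/sessionStorage):

```python
# Stand-in for the browser's localStorage/sessionStorage.
storage: dict = {}

def save_filter_state_buggy(key: str, value: str) -> None:
    # Only writes when there is text, so an emptied filter
    # leaves the previously saved value behind in storage.
    if value:
        storage[key] = value

def save_filter_state_fixed(key: str, value: str) -> None:
    # Writes non-empty values and removes the key when the
    # filter is cleared, so no stale value survives a refresh.
    if value:
        storage[key] = value
    else:
        storage.pop(key, None)
```

With the buggy variant, typing "abc" and then clearing the filter leaves "abc" in storage; the fixed variant removes the key, which matches the behavior the report observes when [paginator]="true" is set.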
69,942 | 30,507,373,181 | IssuesEvent | 2023-07-18 17:56:56 | hashicorp/terraform-provider-azurerm | https://api.github.com/repos/hashicorp/terraform-provider-azurerm | closed | Invalid value given for parameter StorageSizeGB | service/postgresql v/3.x | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Community Note
<!--- Please keep this note for the community --->
* Please vote on this issue by adding a :thumbsup: [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the [contribution guide](https://github.com/hashicorp/terraform-provider-azurerm/blob/main/contributing/README.md) to help.
<!--- Thank you for keeping this note for the community --->
### Terraform Version
1.15.3
### AzureRM Provider Version
3.65.0
### Affected Resource(s)/Data Source(s)
azurerm_postgresql_flexible_server
### Terraform Configuration Files
```hcl
provider "azurerm" {
features {
}
}
resource "azurerm_resource_group" "example" {
name = "da-demo-postgre"
location = "East US"
}
resource "azurerm_virtual_network" "example" {
name = "da-demo-postgre-vn"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "example" {
name = "da-demo-postgre-sn"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
service_endpoints = ["Microsoft.Storage"]
delegation {
name = "fs"
service_delegation {
name = "Microsoft.DBforPostgreSQL/flexibleServers"
actions = [
"Microsoft.Network/virtualNetworks/subnets/join/action",
]
}
}
}
resource "azurerm_private_dns_zone" "example" {
name = "example.postgres.database.azure.com"
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_private_dns_zone_virtual_network_link" "example" {
name = "exampleVnetZone.com"
private_dns_zone_name = azurerm_private_dns_zone.example.name
virtual_network_id = azurerm_virtual_network.example.id
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_postgresql_flexible_server" "example" {
name = "da-demo-psqlflexibleserver"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
version = "12"
delegated_subnet_id = azurerm_subnet.example.id
private_dns_zone_id = azurerm_private_dns_zone.example.id
administrator_login = "psqladmin"
administrator_password = "H@Sh1CoR3!"
zone = "1"
storage_mb = 33554432
sku_name = "GP_Standard_D4s_v3"
depends_on = [azurerm_private_dns_zone_virtual_network_link.example]
}
```
### Debug Output/Panic Output
```shell
Error: creating Flexible Server (Subscription: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
│ Resource Group Name: "da-demo-postgre"
│ Flexible Server Name: "da-demo-psqlflexibleserver"): performing Create: servers.ServersClient#Create: Failure sending request: StatusCode=0 -- Original Error: Code="InvalidParameterValue" Message="Invalid value given for parameter StorageSizeGB. Specify a valid parameter value."
```
### Expected Behaviour
The resource should be deployed with storage_mb set to 33554432 (32 TB).
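A quick sanity check of the MB-to-GB conversion behind the `StorageSizeGB` parameter shows the requested value is a clean power-of-two size, so the rejection is not a rounding problem. The tier list below is illustrative only, not the authoritative Azure list:

```python
# Sanity-check the storage_mb -> StorageSizeGB conversion.
# The service API takes whole gibibytes, while the provider exposes storage_mb.

def storage_mb_to_gb(storage_mb: int) -> int:
    """Convert the provider's storage_mb value to the GB value sent to Azure."""
    if storage_mb % 1024 != 0:
        raise ValueError(f"storage_mb must be a multiple of 1024, got {storage_mb}")
    return storage_mb // 1024

# Hypothetical allowed tiers (GB), for illustration only -- the real list
# lives in the Azure API and the provider's validation code.
ALLOWED_GB = [32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768]

requested = storage_mb_to_gb(33554432)
print(requested)                 # 32768 GB, i.e. 32 TB
print(requested in ALLOWED_GB)   # True under the assumed tier list
```

So the client-side math is fine; the question is whether the service-side list of accepted `StorageSizeGB` values includes 32768 for this SKU.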
### Actual Behaviour
_No response_
### Steps to Reproduce
- Build a Terraform AzureRM provider binary that contains the fix for https://github.com/hashicorp/terraform-provider-azurerm/issues/22573, since no release with that fix has been published yet.
- With that build, the client-side validation problem described in 22573 no longer occurs, so the request reaches Azure.
- Run `terraform apply`.
### Important Factoids
_No response_
### References
https://github.com/hashicorp/terraform-provider-azurerm/issues/22573
16,155 | 2,873,766,243 | IssuesEvent | 2015-06-08 18:49:31 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | Operation timeout mechanism is not working | Team: Core Type: Defect | When an operation is invoked and the remote side does not respond, the invocation should throw an OperationTimeoutException once the predefined timeout has passed.
With the current codebase, the invocation fails to determine whether the operation is still executing and instead loops infinitely.
Reproducer
```
@Test
public void testInvocationThrowsOperationTimeoutExceptionWhenTimeout() throws Exception {
    final Config config = new Config();
    config.setProperty(GroupProperties.PROP_OPERATION_CALL_TIMEOUT_MILLIS, "300");

    TestHazelcastInstanceFactory factory = createHazelcastInstanceFactory(2);
    HazelcastInstance local = factory.newHazelcastInstance(config);
    HazelcastInstance remote = factory.newHazelcastInstance(config);
    warmUpPartitions(local, remote);

    Set<Partition> partitions = remote.getPartitionService().getPartitions();
    Partition partition = partitions.iterator().next();

    OperationService service = getOperationService(local);
    Operation op = new TargetOperation();
    op.setPartitionId(partition.getPartitionId());
    Future f = service.createInvocationBuilder(MapService.SERVICE_NAME, op, partition.getPartitionId()).invoke();
    f.get();
}

/**
 * Operation sent to a specific member; it never returns a response.
 */
private static class TargetOperation extends AbstractOperation implements PartitionAwareOperation {

    @Override
    public void run() throws InterruptedException {
    }

    @Override
    public boolean returnsResponse() {
        return false;
    }
}
```
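The intended behaviour, as opposed to the infinite is-executing loop, can be sketched outside Hazelcast. This is a simplified model, not the real Invocation code: the waiter polls for a response, and a positive "is-executing" answer must not reset the clock, so once the configured call timeout has elapsed without a response the wait gives up with a timeout exception:

```python
import time

class OperationTimeoutException(Exception):
    """Raised when no response arrives within the configured call timeout."""

def await_response(poll, call_timeout_ms, is_still_running=lambda: True,
                   poll_interval_ms=50):
    """Wait for poll() to return a non-None response.

    Unlike the buggy behaviour in this report, the is-still-running check
    does not extend the deadline: once call_timeout_ms has elapsed with no
    response, OperationTimeoutException is raised even if the remote side
    still reports the operation as executing.
    """
    deadline = time.monotonic() + call_timeout_ms / 1000.0
    while time.monotonic() < deadline:
        response = poll()
        if response is not None:
            return response
        # Informational only; a fire-and-forget operation
        # (returnsResponse() == false) would otherwise block forever.
        is_still_running()
        time.sleep(poll_interval_ms / 1000.0)
    raise OperationTimeoutException(
        f"No response after {call_timeout_ms} ms; giving up")

# Like the TargetOperation reproducer, this "operation" never responds:
try:
    await_response(lambda: None, call_timeout_ms=300)
except OperationTimeoutException as e:
    print("timed out:", e)
```

With the 300 ms call timeout from the test, this sketch fails fast instead of logging "No response for ... ms" forever.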
Logs showing the infinite loop:
```
ASSERT_TRUE_EVENTUALLY_TIMEOUT = 120
Started Running Test: Invocation_initInvocationTargetTest.testInvocationThrowsOperationTimeoutExceptionWhenTimeout
21:48:25,670 INFO [OperationService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Backpressure is disabled
21:48:25,682 INFO [ClassicOperationExecutor] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Starting with 2 generic operation threads and 4 partition operation threads.
21:48:26,099 INFO [system] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Hazelcast 3.5-EA2-SNAPSHOT (20150608) starting at Address[127.0.0.1]:5001
21:48:26,100 INFO [system] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
21:48:26,103 INFO [LifecycleService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Address[127.0.0.1]:5001 is STARTING
21:48:26,225 INFO [TestNodeRegistry$MockJoiner] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5001 this
}
21:48:26,267 INFO [LifecycleService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Address[127.0.0.1]:5001 is STARTED
21:48:26,277 INFO [OperationService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Backpressure is disabled
21:48:26,279 INFO [ClassicOperationExecutor] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Starting with 2 generic operation threads and 4 partition operation threads.
21:48:26,284 INFO [system] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Hazelcast 3.5-EA2-SNAPSHOT (20150608) starting at Address[127.0.0.1]:5002
21:48:26,285 INFO [system] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
21:48:26,285 INFO [LifecycleService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Address[127.0.0.1]:5002 is STARTING
21:48:26,310 INFO [ClusterService] hz._hzInstance_1_dev.generic-operation.thread-1 - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT]
Members [2] {
Member [127.0.0.1]:5001 this
Member [127.0.0.1]:5002
}
21:48:26,311 INFO [ClusterService] hz._hzInstance_2_dev.generic-operation.thread-1 - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT]
Members [2] {
Member [127.0.0.1]:5001
Member [127.0.0.1]:5002 this
}
21:48:27,349 INFO [LifecycleService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Address[127.0.0.1]:5002 is STARTED
21:48:27,355 INFO [InternalPartitionService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Initializing cluster partition table first arrangement...
21:48:28,011 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:28,014 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:28,015 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:28,617 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 602 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:28,619 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:28,620 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:29,222 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:29,224 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:29,225 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:29,826 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 600 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:29,828 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:29,828 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:30,430 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:30,433 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:30,434 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:31,035 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 600 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:31,036 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:31,037 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:31,638 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:31,639 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:31,640 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:32,242 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:32,243 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:32,243 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:32,844 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 600 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:32,845 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:32,845 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:33,446 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:33,447 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:33,447 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:34,049 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:34,049 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:34,050 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:34,651 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:34,652 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:34,653 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:35,254 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:35,256 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:35,256 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:35,857 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:35,859 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:35,859 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:36,460 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 600 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:36,460 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
.....
.....
.....
``` | 1.0 | Operation timeout mechanism is not working - When an operation is invoked if the remote side is not responding it should throw an OperationTimeoutException after the predefined timeout passed.
With the current codebase, it fails to detect an operation still executing or not and loops infinitely.
Reproducer
```
@Test
public void testInvocationThrowsOperationTimeoutExceptionWhenTimeout() throws Exception {
    final Config config = new Config();
    config.setProperty(GroupProperties.PROP_OPERATION_CALL_TIMEOUT_MILLIS, "300");

    TestHazelcastInstanceFactory factory = createHazelcastInstanceFactory(2);
    HazelcastInstance local = factory.newHazelcastInstance(config);
    HazelcastInstance remote = factory.newHazelcastInstance(config);
    warmUpPartitions(local, remote);

    Set<Partition> partitions = remote.getPartitionService().getPartitions();
    Partition partition = partitions.iterator().next();

    OperationService service = getOperationService(local);
    Operation op = new TargetOperation();
    op.setPartitionId(partition.getPartitionId());

    // Expected: this get() fails with an OperationTimeoutException once the
    // 300 ms call timeout passes. Actual: it blocks forever.
    Future f = service.createInvocationBuilder(MapService.SERVICE_NAME, op, partition.getPartitionId()).invoke();
    f.get();
}

/**
 * Operation sent to a specific member; never sends a response.
 */
private static class TargetOperation extends AbstractOperation implements PartitionAwareOperation {
    @Override
    public void run() throws InterruptedException {
    }

    @Override
    public boolean returnsResponse() {
        return false;
    }
}
```
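The contract the test expects can be illustrated with plain JDK futures, independent of Hazelcast: a minimal sketch (an assumption for illustration, using `java.util.concurrent` rather than Hazelcast APIs) of how a blocking `get` on an operation that never responds should surface as a timeout to the caller instead of blocking forever.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutContractSketch {
    // Models the expected contract: a future whose underlying operation never
    // responds should fail with a timeout rather than block the caller forever.
    static boolean timesOut(long timeoutMillis) throws Exception {
        // A future that is never completed, standing in for the non-responding TargetOperation.
        CompletableFuture<Void> neverCompletes = new CompletableFuture<>();
        try {
            neverCompletes.get(timeoutMillis, TimeUnit.MILLISECONDS);
            return false; // never reached: the future is never completed
        } catch (TimeoutException expected) {
            // This is the behavior the test above expects from Hazelcast,
            // surfaced there as an OperationTimeoutException.
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(timesOut(300)); // prints "true"
    }
}
```

In the Hazelcast reproducer, `f.get()` is the analogue of `neverCompletes.get(...)`, except that no timeout is ever raised.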
Log excerpt showing the infinite loop:
```
ASSERT_TRUE_EVENTUALLY_TIMEOUT = 120
Started Running Test: Invocation_initInvocationTargetTest.testInvocationThrowsOperationTimeoutExceptionWhenTimeout
21:48:25,670 INFO [OperationService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Backpressure is disabled
21:48:25,682 INFO [ClassicOperationExecutor] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Starting with 2 generic operation threads and 4 partition operation threads.
21:48:26,099 INFO [system] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Hazelcast 3.5-EA2-SNAPSHOT (20150608) starting at Address[127.0.0.1]:5001
21:48:26,100 INFO [system] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
21:48:26,103 INFO [LifecycleService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Address[127.0.0.1]:5001 is STARTING
21:48:26,225 INFO [TestNodeRegistry$MockJoiner] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT]
Members [1] {
Member [127.0.0.1]:5001 this
}
21:48:26,267 INFO [LifecycleService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Address[127.0.0.1]:5001 is STARTED
21:48:26,277 INFO [OperationService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Backpressure is disabled
21:48:26,279 INFO [ClassicOperationExecutor] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Starting with 2 generic operation threads and 4 partition operation threads.
21:48:26,284 INFO [system] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Hazelcast 3.5-EA2-SNAPSHOT (20150608) starting at Address[127.0.0.1]:5002
21:48:26,285 INFO [system] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
21:48:26,285 INFO [LifecycleService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Address[127.0.0.1]:5002 is STARTING
21:48:26,310 INFO [ClusterService] hz._hzInstance_1_dev.generic-operation.thread-1 - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT]
Members [2] {
Member [127.0.0.1]:5001 this
Member [127.0.0.1]:5002
}
21:48:26,311 INFO [ClusterService] hz._hzInstance_2_dev.generic-operation.thread-1 - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT]
Members [2] {
Member [127.0.0.1]:5001
Member [127.0.0.1]:5002 this
}
21:48:27,349 INFO [LifecycleService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5002 [dev] [3.5-EA2-SNAPSHOT] Address[127.0.0.1]:5002 is STARTED
21:48:27,355 INFO [InternalPartitionService] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Initializing cluster partition table first arrangement...
21:48:28,011 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:28,014 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:28,015 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:28,617 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 602 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:28,619 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:28,620 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:29,222 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:29,224 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:29,225 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:29,826 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 600 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:29,828 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:29,828 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:30,430 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:30,433 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:30,434 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:31,035 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 600 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:31,036 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:31,037 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:31,638 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:31,639 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:31,640 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:32,242 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:32,243 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:32,243 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:32,844 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 600 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:32,845 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:32,845 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:33,446 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:33,447 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:33,447 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:34,049 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:34,049 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:34,050 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:34,651 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:34,652 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:34,653 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:35,254 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:35,256 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:35,256 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:35,857 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 601 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:35,859 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:35,859 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] 'is-executing': true -> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
21:48:36,460 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] No response for 600 ms. InvocationFuture{invocation=Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.spi.impl.operationservice.impl.Invocation_initInvocationTargetTest$TargetOperation{serviceName='hz:impl:mapService', partitionId=0, callId=9223372036854775807, invocationTime=1433789307410, waitTimeout=-1, callTimeout=300}, partitionId=0, replicaIndex=0, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeout=300, target=Address[127.0.0.1]:5001, backupsExpected=0, backupsCompleted=0}, response=null, done=false}
21:48:36,460 WARN [Invocation] testInvocationThrowsOperationTimeoutExceptionWhenTimeout - [127.0.0.1]:5001 [dev] [3.5-EA2-SNAPSHOT] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService@1b071c0
.....
.....
.....
```
backupscompleted response null done false warn testinvocationthrowsoperationtimeoutexceptionwhentimeout asking if operation execution has been started com hazelcast spi impl operationservice impl isstillrunningservice warn testinvocationthrowsoperationtimeoutexceptionwhentimeout is executing true com hazelcast spi impl operationservice impl isstillrunningservice warn testinvocationthrowsoperationtimeoutexceptionwhentimeout no response for ms invocationfuture invocation invocation servicename hz impl mapservice op com hazelcast spi impl operationservice impl invocation initinvocationtargettest targetoperation servicename hz impl mapservice partitionid callid invocationtime waittimeout calltimeout partitionid replicaindex trycount trypausemillis invokecount calltimeout target address backupsexpected backupscompleted response null done false warn testinvocationthrowsoperationtimeoutexceptionwhentimeout asking if operation execution has been started com hazelcast spi impl operationservice impl isstillrunningservice | 1 |
9,858 | 2,616,004,128 | IssuesEvent | 2015-03-02 00:48:54 | jasonhall/bwapi | https://api.github.com/repos/jasonhall/bwapi | closed | Wont load map xml file | auto-migrated Priority-Medium Type-Defect | ```
Revision: 188
I wanted to test whether the API is able to load the ICCup Python
1.3.scx.xml file, so I started a match on the exact same map.
The following logline appears in the log:
Exception in AI::onStart: Unable to load data file bwapi-data\\maps\.xml
Possible Explanation: Maybe I have another version of python 1.3, so that
it isnt able to match it to the xml -> can you send me your version of
python if you suspect that that might be the cause?
```
Original issue reported on code.google.com by `quietdeath@gmail.com` on 21 May 2008 at 8:54 | 1.0 | Wont load map xml file - ```
Revision: 188
I wanted to test whether the API is able to load the ICCup Python
1.3.scx.xml file, so I started a match on the exact same map.
The following logline appears in the log:
Exception in AI::onStart: Unable to load data file bwapi-data\\maps\.xml
Possible Explanation: Maybe I have another version of python 1.3, so that
it isnt able to match it to the xml -> can you send me your version of
python if you suspect that that might be the cause?
```
Original issue reported on code.google.com by `quietdeath@gmail.com` on 21 May 2008 at 8:54 | defect | wont load map xml file revision i wanted to test whether the api is able to load the iccup python scx xml file so i started a match on the exact same map the following logline appears in the log exception in ai onstart unable to load data file bwapi data maps xml possible explanation maybe i have another version of python so that it isnt able to match it to the xml can you send me your version of python if you suspect that that might be the cause original issue reported on code google com by quietdeath gmail com on may at | 1 |
33,378 | 7,702,689,621 | IssuesEvent | 2018-05-21 04:16:21 | threadly/threadly | https://api.github.com/repos/threadly/threadly | closed | Fix CI | non-code related | Our Jenkins setup has rotted even further. It's now completely broken and needs to be fixed. | 1.0 | Fix CI - Our Jenkins setup has rotted even further. It's now completely broken and needs to be fixed. | non_defect | fix ci our jenkins setup has rotted even further it s now completely broken and needs to be fixed | 0 |
2,971 | 2,607,968,084 | IssuesEvent | 2015-02-26 00:43:17 | chrsmithdemos/leveldb | https://api.github.com/repos/chrsmithdemos/leveldb | closed | hardcoded number of shards in ShardedLRUCache can cause file descriptor exhaustion | auto-migrated Priority-Medium Type-Defect | ```
If the minimum of 20 is used for options.max_open_files, each shard of the
table cache will only have one entry. This can lead to a quick exhaustion of
file descriptors when there are two L0 tables and many iterators opened at
once. Each iterator effectively uses two file descriptors because the cache is
constantly turning over.
I'm going to work around this problem in chrome by increasing max_open_files
but it seems like accommodating 20 in this case wouldn't require much change to
cache.cc.
There are more details in this comment:
https://code.google.com/p/chromium/issues/detail?id=227313#c11
I haven't yet looked into why the keys used by indexeddb end up in the same
shard. It might just be the same key for every iterator.
```
-----
Original issue reported on code.google.com by `dgrogan@chromium.org` on 22 Apr 2013 at 11:44 | 1.0 | hardcoded number of shards in ShardedLRUCache can cause file descriptor exhaustion - ```
If the minimum of 20 is used for options.max_open_files, each shard of the
table cache will only have one entry. This can lead to a quick exhaustion of
file descriptors when there are two L0 tables and many iterators opened at
once. Each iterator effectively uses two file descriptors because the cache is
constantly turning over.
I'm going to work around this problem in chrome by increasing max_open_files
but it seems like accommodating 20 in this case wouldn't require much change to
cache.cc.
There are more details in this comment:
https://code.google.com/p/chromium/issues/detail?id=227313#c11
I haven't yet looked into why the keys used by indexeddb end up in the same
shard. It might just be the same key for every iterator.
```
-----
Original issue reported on code.google.com by `dgrogan@chromium.org` on 22 Apr 2013 at 11:44 | defect | hardedcoded number of shards in shardedlrucache can cause file descriptor exhaustion if the minimum of is used for options max open files each shard of the table cache will only have one entry this can lead to a quick exhaustion of file descriptors when there are two tables and many iterators opened at once each iterator effectively uses two file descriptors because the cache is constantly turning over i m going to work around this problem in chrome by increasing max open files but it seems like accommodating in this case wouldn t require much change to cache cc there are more details in this comment i haven t yet looked into why the keys used by indexeddb end up in the same shard it might just be the same key for every iterator original issue reported on code google com by dgrogan chromium org on apr at | 1 |
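The capacity math in the LevelDB report above can be sketched in a few lines. This is an illustrative Python model, not LevelDB's actual C++ cache; the fixed shard count of 16 and all names here are assumptions made for the example. It shows how a small total capacity (`max_open_files = 20`) leaves each shard room for one entry, so two hot keys that hash to the same shard evict each other on every access.

```python
from collections import OrderedDict

NUM_SHARDS = 16  # fixed shard count, assumed for illustration

class ShardedLRUSketch:
    """Toy model of the issue's capacity math: a fixed shard count divides
    total capacity evenly, so max_open_files=20 leaves each shard room for
    a single entry, and two hot keys in one shard keep thrashing."""

    def __init__(self, total_capacity):
        self.per_shard = max(1, total_capacity // NUM_SHARDS)  # 20 // 16 -> 1
        self.shards = [OrderedDict() for _ in range(NUM_SHARDS)]

    def _shard(self, key):
        return self.shards[hash(key) % NUM_SHARDS]

    def get(self, key):
        shard = self._shard(key)
        if key in shard:
            shard.move_to_end(key)  # mark as most recently used
            return shard[key]
        return None

    def put(self, key, value):
        shard = self._shard(key)
        shard[key] = value
        shard.move_to_end(key)
        while len(shard) > self.per_shard:
            shard.popitem(last=False)  # evict the least recently used entry

cache = ShardedLRUSketch(total_capacity=20)
# Two keys that land in the same shard keep evicting each other:
cache.put(3, "table-a")
cache.put(3 + NUM_SHARDS, "table-b")   # evicts table-a
print(cache.get(3))                    # -> None
print(cache.get(3 + NUM_SHARDS))       # -> table-b
```

Each open iterator in the report effectively pays two file descriptors because its tables keep falling out of a one-entry shard like this.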
20,023 | 3,290,957,783 | IssuesEvent | 2015-10-30 04:35:14 | prettydiff/prettydiff | https://api.github.com/repos/prettydiff/prettydiff | closed | command line doesn't like certain files that work with the web app | Defect QA | Some files seem to make the command line version go crazy, but work when I paste them into the web app.
Running version 1.14.8, on, for example, the file `variables.less` from Bootstrap 3 (https://github.com/twbs/bootstrap/blob/master/less/variables.less) , I get an output full of things like:
```
//** Global textual link color.
@link - color : @brand - primary;
//** Link hover color set via `darken()` function.
@link - hover - color : darken(@link - color, 15 %);
//** Link hover decoration.
@link - hover - decoration : underline;
```
Where spaces have been inserted either side of hyphens in variable names and (not shown in the snippet above) indentation is messed up and comments and code become entangled.
Yet all the files I've had problems with work perfectly when pasted into the web app on prettydiff.com, which also reports it is version 1.14.8 | 1.0 | command line doesn't like certain files that work with the web app - Some files seem to make the command line version go crazy, but work when I paste them into the web app.
Running version 1.14.8, on, for example, the file `variables.less` from Bootstrap 3 (https://github.com/twbs/bootstrap/blob/master/less/variables.less) , I get an output full of things like:
```
//** Global textual link color.
@link - color : @brand - primary;
//** Link hover color set via `darken()` function.
@link - hover - color : darken(@link - color, 15 %);
//** Link hover decoration.
@link - hover - decoration : underline;
```
Where spaces have been inserted either side of hyphens in variable names and (not shown in the snippet above) indentation is messed up and comments and code become entangled.
Yet all the files I've had problems with work perfectly when pasted into the web app on prettydiff.com, which also reports it is version 1.14.8 | defect | command line doesn t like certain files that work with the web app some files seem to make the command line version go crazy but work when i paste them into the web app running version on for example the file variables less from bootstrap i get an output full of things like global textual link color link color brand primary link hover color set via darken function link hover color darken link color link hover decoration link hover decoration underline where spaces have been inserted either side of hyphens in variable names and not shown in the snippet above indentation is messed up and comments and code become entangled yet all the files i ve had problems with work perfectly when pasted into the web app on prettydiff com which also reports it is version | 1 |
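The hyphen-splitting behaviour described in the prettydiff report above has a simple mechanical explanation: a tokenizer that treats every `-` as a binary operator will space out LESS variable names like `@link-hover-color`. The snippet below is a minimal Python illustration of that failure mode (not prettydiff's actual parser); the identifier-aware regex is likewise only a sketch.

```python
import re

line = "@link-hover-color: darken(@link-color, 15%);"

# Failure mode from the report: treating every "-" as a binary operator.
as_operator = line.replace("-", " - ")
print(as_operator)
# -> @link - hover - color: darken(@link - color, 15%);

# Hyphens flanked by identifier characters are part of a LESS name and must
# be left alone; only a hyphen in an operator position (after whitespace,
# "(" or ",", before a number) would get spaced out.
as_identifier = re.sub(r"(?<=[\s(,])-(?=\s*\d)", " - ", line)
print(as_identifier)  # unchanged here: no operator-position hyphen
```

That the web app behaved correctly on the same input suggests the two front ends fed the formatter different language/lexer settings rather than different formatting engines.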
120,458 | 10,118,304,869 | IssuesEvent | 2019-07-31 08:46:44 | NuGet/Home | https://api.github.com/repos/NuGet/Home | closed | Convert async void test methods to async Task | Area:Test | Convert `async void` test methods to `async Task` to avoid potentially unobserved exceptions to crash the test runner process. | 1.0 | Convert async void test methods to async Task - Convert `async void` test methods to `async Task` to avoid potentially unobserved exceptions to crash the test runner process. | non_defect | convert async void test methods to async task convert async void test methods to async task to avoid potentially unobserved exceptions to crash the test runner process | 0 |
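The NuGet record above concerns C#'s `async void` vs `async Task` test methods. The same hazard can be illustrated in Python (an analogy only, not the C#/xUnit mechanics): when nothing awaits the returned awaitable, a failure inside it is never observed by the caller, whereas awaiting it lets the exception propagate to the runner.

```python
import asyncio

async def failing_test():
    raise AssertionError("boom")

# "async void" style: nothing awaits the coroutine, so the runner never
# sees the failure.
coro = failing_test()
coro.close()            # silently discard it; no exception propagates
print("void-style run: no failure observed")

# "async Task" style: the caller awaits the returned awaitable, so the
# failure propagates to whoever ran it.
try:
    asyncio.run(failing_test())
except AssertionError as e:
    print("task-style run observed:", e)
```

In C# the unobserved case is worse than silence: the exception is rethrown on a thread-pool context and can take down the test-runner process, which is the motivation stated in the issue.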
383,931 | 11,364,565,567 | IssuesEvent | 2020-01-27 08:38:09 | wso2/product-microgateway | https://api.github.com/repos/wso2/product-microgateway | closed | Token caches initialisation binds to default values | Priority/Normal Type/Bug | ### Description:
Due to defining the cache objects globally, and using those cache objects within handlers which are also used when defining global listeners. So due to this nature token cache objects get initialised with default values when defining handlers in order initialise the listeners
### Steps to reproduce:
### Affected Product Version:
<!-- Members can use Affected/*** labels -->
3.1.0-SNAPSHOT
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
| 1.0 | Token caches initialisation binds to default values - ### Description:
Due to defining the cache objects globally, and using those cache objects within handlers which are also used when defining global listeners. So due to this nature token cache objects get initialised with default values when defining handlers in order initialise the listeners
### Steps to reproduce:
### Affected Product Version:
<!-- Members can use Affected/*** labels -->
3.1.0-SNAPSHOT
### Environment details (with versions):
- OS:
- Client:
- Env (Docker/K8s):
---
### Optional Fields
#### Related Issues:
<!-- Any related issues from this/other repositories-->
#### Suggested Labels:
<!--Only to be used by non-members-->
#### Suggested Assignees:
<!--Only to be used by non-members-->
| non_defect | token caches initialisation binds to default values description due to defining the cache objects globally and using those cache objects within handlers which are also used when defining global listeners so due to this nature token cache objects get initialised with default values when defining handlers in order initialise the listeners steps to reproduce affected product version snapshot environment details with versions os client env docker optional fields related issues suggested labels suggested assignees | 0 |
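The initialisation-order problem described in the microgateway record above — globals constructed while handlers/listeners are being defined capture default configuration values — has a direct analogue in any language with module-level globals. The Python sketch below is illustrative only (the project itself is written in Ballerina; all names are hypothetical), contrasting eager construction at import time with lazy construction after the real configuration is loaded.

```python
# Defaults, standing in for configuration that is applied later.
config = {"cache_size": 100}

class TokenCache:
    def __init__(self):
        self.size = config["cache_size"]   # bound at construction time

token_cache = TokenCache()                 # global: created at import time

def load_real_config():
    config["cache_size"] = 10_000

load_real_config()
print(token_cache.size)                    # -> 100, still the default

# Lazy construction avoids binding to the default:
_lazy = None
def get_cache():
    global _lazy
    if _lazy is None:
        _lazy = TokenCache()               # created after config is loaded
    return _lazy

print(get_cache().size)                    # -> 10000
```

Deferring cache construction until first use (or until after configuration is applied) is the usual fix for this class of bug.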
48,632 | 12,225,260,065 | IssuesEvent | 2020-05-03 04:01:18 | Autodesk/arnold-usd | https://api.github.com/repos/Autodesk/arnold-usd | closed | Update testsuite scripts to match arnold | bug build | **Describe the bug**
The arnold-usd testsuite scripts have slightly derived from arnold core ones, which is causing issues when running the testsuite from arnold. The parameter `resave` should be called again `resaved`, and we should check whether it's a string or a boolean. When set to true, we assume the scene has to be resaved to .ass. This way, both test scripts will be similar again | 1.0 | Update testsuite scripts to match arnold - **Describe the bug**
The arnold-usd testsuite scripts have slightly derived from arnold core ones, which is causing issues when running the testsuite from arnold. The parameter `resave` should be called again `resaved`, and we should check whether it's a string or a boolean. When set to true, we assume the scene has to be resaved to .ass. This way, both test scripts will be similar again | non_defect | update testsuite scripts to match arnold describe the bug the arnold usd testsuite scripts have slightly derived from arnold core ones which is causing issues when running the testsuite from arnold the parameter resave should be called again resaved and we should check whether it s a string or a boolean when set to true we assume the scene has to be resaved to ass this way both test scripts will be similar again | 0 |
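The arnold-usd record above calls for a `resaved` parameter that must accept either a string or a boolean. A normalisation helper for such a flag can be sketched as follows; this is a hedged illustration in Python, and the function and parameter names are hypothetical, not the actual Arnold testsuite API.

```python
def wants_resave(params):
    """Return True when the scene should be re-saved to .ass.

    Accepts both spellings mentioned in the report and both value types,
    since the parameter may arrive as a string ("true") or a boolean.
    All names here are illustrative, not the actual script interface.
    """
    value = params.get("resaved", params.get("resave", False))
    if isinstance(value, str):
        return value.strip().lower() in ("true", "1", "yes")
    return bool(value)

print(wants_resave({"resaved": "True"}))   # -> True
print(wants_resave({"resave": True}))      # -> True (legacy spelling)
print(wants_resave({}))                    # -> False
```

Normalising at the boundary like this keeps the two test-script variants behaviourally identical regardless of how the flag was serialised.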
98,803 | 16,389,483,914 | IssuesEvent | 2021-05-17 14:29:23 | Thanraj/linux-1 | https://api.github.com/repos/Thanraj/linux-1 | opened | CVE-2020-11494 (Medium) detected in linuxv5.0 | security vulnerability | ## CVE-2020-11494 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.0</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-1/commits/9738d89d33cb0f3ac708908509b82eafc007d557">9738d89d33cb0f3ac708908509b82eafc007d557</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-1/drivers/net/can/slcan.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-1/drivers/net/can/slcan.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in slc_bump in drivers/net/can/slcan.c in the Linux kernel 3.16 through 5.6.2. It allows attackers to read uninitialized can_frame data, potentially containing sensitive information from kernel stack memory, if the configuration lacks CONFIG_INIT_STACK_ALL, aka CID-b9258a2cece4.
<p>Publish Date: 2020-04-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11494>CVE-2020-11494</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-11494">https://nvd.nist.gov/vuln/detail/CVE-2020-11494</a></p>
<p>Release Date: 2020-04-02</p>
<p>Fix Resolution: linux- v5.7-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11494 (Medium) detected in linuxv5.0 - ## CVE-2020-11494 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.0</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/Thanraj/linux-1/commits/9738d89d33cb0f3ac708908509b82eafc007d557">9738d89d33cb0f3ac708908509b82eafc007d557</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-1/drivers/net/can/slcan.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux-1/drivers/net/can/slcan.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in slc_bump in drivers/net/can/slcan.c in the Linux kernel 3.16 through 5.6.2. It allows attackers to read uninitialized can_frame data, potentially containing sensitive information from kernel stack memory, if the configuration lacks CONFIG_INIT_STACK_ALL, aka CID-b9258a2cece4.
<p>Publish Date: 2020-04-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11494>CVE-2020-11494</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-11494">https://nvd.nist.gov/vuln/detail/CVE-2020-11494</a></p>
<p>Release Date: 2020-04-02</p>
<p>Fix Resolution: linux- v5.7-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch master vulnerable source files linux drivers net can slcan c linux drivers net can slcan c vulnerability details an issue was discovered in slc bump in drivers net can slcan c in the linux kernel through it allows attackers to read uninitialized can frame data potentially containing sensitive information from kernel stack memory if the configuration lacks config init stack all aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux step up your open source security game with whitesource | 0 |
53,170 | 13,261,075,685 | IssuesEvent | 2020-08-20 19:15:19 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | GCD files from simprod are incorrect version. (Trac #870) | Migrated from Trac defect iceprod | Testing data from:
http://convey.icecube.wisc.edu/data/sim/IceCube/2013/generated/CORSIKA-in-ice/10649/00000-00999/
it has the wrong GCD file.
http://convey.icecube.wisc.edu/data/sim/IceCube/2013/generated/CORSIKA-in-ice/10649/00000-00999/GeoCalibDetectorStatus_IC86.55697_corrected_V2.i3.gz
Should be:
/data/sim/sim-new/downloads/GCD/GeoCalibDetectorStatus_2013.56429_V1.i3.gz `
Hat tip to dschultz for the correct location
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/870">https://code.icecube.wisc.edu/projects/icecube/ticket/870</a>, reported by blaufussand owned by ddelventhal</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:03",
"_ts": "1458335643235016",
"description": "Testing data from:\nhttp://convey.icecube.wisc.edu/data/sim/IceCube/2013/generated/CORSIKA-in-ice/10649/00000-00999/\n\nit has the wrong GCD file.\nhttp://convey.icecube.wisc.edu/data/sim/IceCube/2013/generated/CORSIKA-in-ice/10649/00000-00999/GeoCalibDetectorStatus_IC86.55697_corrected_V2.i3.gz\n\nShould be:\n/data/sim/sim-new/downloads/GCD/GeoCalibDetectorStatus_2013.56429_V1.i3.gz `\n\n\nHat tip to dschultz for the correct location",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2015-02-13T03:25:19",
"component": "iceprod",
"summary": "GCD files from simprod are incorrect version.",
"priority": "normal",
"keywords": "simprod",
"milestone": "",
"owner": "ddelventhal",
"type": "defect"
}
```
</p>
</details>
| 1.0 | GCD files from simprod are incorrect version. (Trac #870) - Testing data from:
http://convey.icecube.wisc.edu/data/sim/IceCube/2013/generated/CORSIKA-in-ice/10649/00000-00999/
it has the wrong GCD file.
http://convey.icecube.wisc.edu/data/sim/IceCube/2013/generated/CORSIKA-in-ice/10649/00000-00999/GeoCalibDetectorStatus_IC86.55697_corrected_V2.i3.gz
Should be:
/data/sim/sim-new/downloads/GCD/GeoCalibDetectorStatus_2013.56429_V1.i3.gz `
Hat tip to dschultz for the correct location
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/870">https://code.icecube.wisc.edu/projects/icecube/ticket/870</a>, reported by blaufussand owned by ddelventhal</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:03",
"_ts": "1458335643235016",
"description": "Testing data from:\nhttp://convey.icecube.wisc.edu/data/sim/IceCube/2013/generated/CORSIKA-in-ice/10649/00000-00999/\n\nit has the wrong GCD file.\nhttp://convey.icecube.wisc.edu/data/sim/IceCube/2013/generated/CORSIKA-in-ice/10649/00000-00999/GeoCalibDetectorStatus_IC86.55697_corrected_V2.i3.gz\n\nShould be:\n/data/sim/sim-new/downloads/GCD/GeoCalibDetectorStatus_2013.56429_V1.i3.gz `\n\n\nHat tip to dschultz for the correct location",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2015-02-13T03:25:19",
"component": "iceprod",
"summary": "GCD files from simprod are incorrect version.",
"priority": "normal",
"keywords": "simprod",
"milestone": "",
"owner": "ddelventhal",
"type": "defect"
}
```
</p>
</details>
| defect | gcd files from simprod are incorrect version trac testing data from it has the wrong gcd file should be data sim sim new downloads gcd geocalibdetectorstatus gz hat tip to dschultz for the correct location migrated from json status closed changetime ts description testing data from n has the wrong gcd file n be n data sim sim new downloads gcd geocalibdetectorstatus gz n n nhat tip to dschultz for the correct location reporter blaufuss cc resolution fixed time component iceprod summary gcd files from simprod are incorrect version priority normal keywords simprod milestone owner ddelventhal type defect | 1 |
49,343 | 13,186,623,262 | IssuesEvent | 2020-08-13 00:46:57 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | [DOMLauncher] FIXME: PMTResponseSimulatorTests: the 0.4 distance (Trac #1203) | Incomplete Migration Migrated from Trac combo simulation defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1203">https://code.icecube.wisc.edu/ticket/1203</a>, reported by david.schultz and owned by cweaver</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"description": "{{{\nprivate/test/PMTResponseSimulatorTests.cxx:\t//FIXME: The 0.4 distance is to silence the buildbots. This needs to be investigated.\n}}}",
"reporter": "david.schultz",
"cc": "",
"resolution": "wontfix",
"_ts": "1550067215093672",
"component": "combo simulation",
"summary": "[DOMLauncher] FIXME: PMTResponseSimulatorTests: the 0.4 distance",
"priority": "normal",
"keywords": "",
"time": "2015-08-19T19:05:49",
"milestone": "",
"owner": "cweaver",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [DOMLauncher] FIXME: PMTResponseSimulatorTests: the 0.4 distance (Trac #1203) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1203">https://code.icecube.wisc.edu/ticket/1203</a>, reported by david.schultz and owned by cweaver</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"description": "{{{\nprivate/test/PMTResponseSimulatorTests.cxx:\t//FIXME: The 0.4 distance is to silence the buildbots. This needs to be investigated.\n}}}",
"reporter": "david.schultz",
"cc": "",
"resolution": "wontfix",
"_ts": "1550067215093672",
"component": "combo simulation",
"summary": "[DOMLauncher] FIXME: PMTResponseSimulatorTests: the 0.4 distance",
"priority": "normal",
"keywords": "",
"time": "2015-08-19T19:05:49",
"milestone": "",
"owner": "cweaver",
"type": "defect"
}
```
</p>
</details>
| defect | fixme pmtresponsesimulatortests the distance trac migrated from json status closed changetime description nprivate test pmtresponsesimulatortests cxx t fixme the distance is to silence the buildbots this needs to be investigated n reporter david schultz cc resolution wontfix ts component combo simulation summary fixme pmtresponsesimulatortests the distance priority normal keywords time milestone owner cweaver type defect | 1 |
463,699 | 13,298,324,437 | IssuesEvent | 2020-08-25 08:05:29 | YatopiaMC/Yatopia | https://api.github.com/repos/YatopiaMC/Yatopia | closed | End crystals can't be destroyed by snowballs | bug normal priority | **Expected behavior**
When I throw a snowball to end crystal, this end crystal explodes.
**Observed behavior**
When I throw a snowball at the end crystal, it just crashes into it.
**Steps to reproduce (if known)**
Just throw a snowball to end crystal.
**Installed plugins**
No plugins
**Version**
 | 1.0 | End crystals can't be destroyed by snowballs - **Expected behavior**
When I throw a snowball to end crystal, this end crystal explodes.
**Observed behavior**
When I throw a snowball at the end crystal, it just crashes into it.
**Steps to reproduce (if known)**
Just throw a snowball to end crystal.
**Installed plugins**
No plugins
**Version**
 | non_defect | end crystals can t be destroyed by snowballs expected behavior when i throw a snowball to end crystal this end crystal explodes observed behavior when i throw a snowball at the end crystal it just crashes into it steps to reproduce if known just throw a snowball to end crystal installed plugins no plugins version | 0 |
18,099 | 3,023,833,466 | IssuesEvent | 2015-08-01 23:05:27 | WarGamesLabs/Jack | https://api.github.com/repos/WarGamesLabs/Jack | closed | Binary Mode misbehaves after receiving 20 zeroes at once | auto-migrated Priority-Medium Type-Defect | ```
I created this bug in order to easily track the status of the buffer overflow
issue recently discussed in the forums, here is the thread address:
http://dangerousprototypes.com/forum/viewtopic.php?f=28&t=2864
If the link above fails, just search for "Binary Mode misbehaves after
receiving 20 zeroes at once" in the forums.
```
Original issue reported on code.google.com by `rdiezmai...@yahoo.de` on 10 Oct 2011 at 8:40 | 1.0 | Binary Mode misbehaves after receiving 20 zeroes at once - ```
I created this bug in order to easily track the status of the buffer overflow
issue recently discussed in the forums, here is the thread address:
http://dangerousprototypes.com/forum/viewtopic.php?f=28&t=2864
If the link above fails, just search for "Binary Mode misbehaves after
receiving 20 zeroes at once" in the forums.
```
Original issue reported on code.google.com by `rdiezmai...@yahoo.de` on 10 Oct 2011 at 8:40 | defect | binary mode misbehaves after receiving zeroes at once i created this bug in order to easily track the status of the buffer overflow issue recently discussed in the forums here is the thread address if the link above fails just search for binary mode misbehaves after receiving zeroes at once in the forums original issue reported on code google com by rdiezmai yahoo de on oct at | 1 |
650,297 | 21,367,789,083 | IssuesEvent | 2022-04-20 04:58:58 | ppy/osu | https://api.github.com/repos/ppy/osu | closed | Attempting to enter an online screen when not logged in should show and focus the login dialog | type:UX good-first-issue priority:2 | The notification mentioned in the discussion should as a result be removed.
### Discussed in https://github.com/ppy/osu/discussions/17866
<div type='discussions-op-text'>
<sup>Originally posted by **RBQcat** April 18, 2022</sup>
Steps to reproduce
1. Make sure you are not logged in
2. Tap the multiplayer button

</div> | 1.0 | Attempting to enter an online screen when not logged in should show and focus the login dialog - The notification mentioned in the discussion should as a result be removed.
### Discussed in https://github.com/ppy/osu/discussions/17866
<div type='discussions-op-text'>
<sup>Originally posted by **RBQcat** April 18, 2022</sup>
Steps to reproduce
1. Make sure you are not logged in
2. Tap the multiplayer button

</div> | non_defect | attempting to enter an online screen when not logged in should show and focus the login dialog the notification mentioned in the discussion should as a result be removed discussed in originally posted by rbqcat april steps to reproduce make sure you are not logged in tap the multiplayer button | 0 |
114,200 | 9,692,759,093 | IssuesEvent | 2019-05-24 14:30:52 | italia/spid | https://api.github.com/repos/italia/spid | closed | Richiesta Validazione metadati SPID per il Comune di Fontanelle (TV) | metadata nuovo md test | Buongiorno,
per conto del Comune in oggetto si richiede la verifica e il deploy dei metadati esposti all'URL:
https://www.comune.fontanelle.tv.it/spid/metadata.xml
Grazie.
| 1.0 | Richiesta Validazione metadati SPID per il Comune di Fontanelle (TV) - Buongiorno,
per conto del Comune in oggetto si richiede la verifica e il deploy dei metadati esposti all'URL:
https://www.comune.fontanelle.tv.it/spid/metadata.xml
Grazie.
| non_defect | richiesta validazione metadati spid per il comune di fontanelle tv buongiorno per conto del comune in oggetto si richiede la verifica e il deploy dei metadati esposti all url grazie | 0 |
52,012 | 6,558,535,336 | IssuesEvent | 2017-09-06 21:51:33 | vavr-io/vavr | https://api.github.com/repos/vavr-io/vavr | closed | Either.sequence should return Either<Seq<L>, Seq<R>> | design/refactoring/improvement «vavr-controlx» | In the case of Either (as opposed to Try/Option) it makes sense to accumulate also the left values.
We recently added Either.sequence - it was forgotten. Because it is still not released, we are able to change the signature for the upcoming 0.9.1 patch release.
```java
// current signature
static <L,R> Either<L, Seq<R>> sequence(Iterable<? extends Either<? extends L, ? extends R>> values) { ... }
// new signature
static <L,R> Either<Seq<L>, Seq<R>> sequence(Iterable<? extends Either<? extends L, ? extends R>> values) { ... }
```
**Semantics:** `sequence` collects all right values if all given Eithers are Right, else it collects all Left instances (if one or more Eithers are Left).
**Usage example:**
```java
Either<Seq<L>, Seq<R>> result = Either.sequence(either1, either2, ...);
// perform ugly side-effects
result.orElseRun(errors -> ...);
``` | 1.0 | Either.sequence should return Either<Seq<L>, Seq<R>> - In the case of Either (as opposed to Try/Option) it makes sense to accumulate also the left values.
We recently added Either.sequence - it was forgotten. Because it is still not released, we are able to change the signature for the upcoming 0.9.1 patch release.
```java
// current signature
static <L,R> Either<L, Seq<R>> sequence(Iterable<? extends Either<? extends L, ? extends R>> values) { ... }
// new signature
static <L,R> Either<Seq<L>, Seq<R>> sequence(Iterable<? extends Either<? extends L, ? extends R>> values) { ... }
```
**Semantics:** `sequence` collects all right values if all given Eithers are Right, else it collects all Left instances (if one or more Eithers are Left).
**Usage example:**
```java
Either<Seq<L>, Seq<R>> result = Either.sequence(either1, either2, ...);
// perform ugly side-effects
result.orElseRun(errors -> ...);
``` | non_defect | either sequence should return either seq in the case of either as opposed to try option it makes sense to accumulate also the left values we recently added either sequence it was forgotten because it is still not released we are able to change the signature for the upcoming patch release java current signature static either sequence iterable values new signature static either seq sequence iterable values semantics sequence collects all right values if all given eithers are right else it collects all left instances if one or more eithers are left usage example java either seq result either sequence perform ugly side effects result orelserun errors | 0 |
16,660 | 2,925,078,811 | IssuesEvent | 2015-06-26 01:19:54 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | Missing UnsupportedOperation on (const {}).clear() | Area-Dart2JS Priority-Unassigned Triaged Type-Defect | Dart2JS generated code fails at 'Expect.throws(..)' in the following co19 test:
Language/12_Expressions/17_Getter_Invocation_A07_t02.
The method namedArguments should return a 'const {}' in this situation, and such a map should not be modified. ECMA-408 p57 says 'Attempting to mutate a constant map literal will result in a dynamic error' which presumably means that attempts to modify an object obtained by evaluating a constant map literal expression must throw such an error. If this had occurred, the 'Expect.throws(..)' would have succeeded.
| 1.0 | Missing UnsupportedOperation on (const {}).clear() - Dart2JS generated code fails at 'Expect.throws(..)' in the following co19 test:
Language/12_Expressions/17_Getter_Invocation_A07_t02.
The method namedArguments should return a 'const {}' in this situation, and such a map should not be modified. ECMA-408 p57 says 'Attempting to mutate a constant map literal will result in a dynamic error' which presumably means that attempts to modify an object obtained by evaluating a constant map literal expression must throw such an error. If this had occurred, the 'Expect.throws(..)' would have succeeded.
| defect | missing unsupportedoperation on const clear generated code fails at expect throws in the following test language expressions getter invocation the method namedarguments should return a const in this situation and such a map should not be modified ecma says attempting to mutate a constant map literal will result in a dynamic error which presumably means that attempts to modify an object obtained by evaluating a constant map literal expression must throw such an error if this had occurred the expect throws would have succeeded | 1 |
21,332 | 3,896,717,265 | IssuesEvent | 2016-04-16 00:40:07 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | Kubemark runs fails | priority/P0 team/test-infra | http://kubekins.dls.corp.google.com/view/Scalability/job/kubernetes-kubemark-5-gce/1508/consoleFull
```16:36:46 docker build -t gcr.io/k8s-jenkins-kubemark/kubemark .
16:36:46 make: docker: Command not found``` | 1.0 | Kubemark runs fails - http://kubekins.dls.corp.google.com/view/Scalability/job/kubernetes-kubemark-5-gce/1508/consoleFull
```16:36:46 docker build -t gcr.io/k8s-jenkins-kubemark/kubemark .
16:36:46 make: docker: Command not found``` | non_defect | kubemark runs fails docker build t gcr io jenkins kubemark kubemark make docker command not found | 0 |
713,932 | 24,544,009,076 | IssuesEvent | 2022-10-12 07:19:48 | thecyberworld/thecyberhub.org | https://api.github.com/repos/thecyberworld/thecyberhub.org | closed | [BUG] Sidebar | ✨ goal: improvement 🛠 goal: fix 🐛 bug 🟥 priority: critical good first issue hacktoberfest | ### Describe the bug
When we click on the sidebar-nav item, it automatically closes the sidebar and opens the new page.
whereas when we click on the dropdown nav-items, it doen;t close the sidebar automatically
under Learn navitem:
- navitem > learn
- navitem > learn > prep
<img src="https://user-images.githubusercontent.com/44284877/194046321-07e9b714-84e0-4c89-9c16-57a022e0874d.png" alt="drawing" width="200"/>
### To Reproduce
- navitem > events:
- works.
- navitem > learn > prep > quiz:
- does not work.
### Expected Behavior
When we click on:
- navitem > learn > prep > quiz:
it should work the same as events and blogs.
### Screenshot/ Video
screenshot:
<img src="https://user-images.githubusercontent.com/44284877/194046321-07e9b714-84e0-4c89-9c16-57a022e0874d.png" alt="drawing" width="200"/>
video:
https://user-images.githubusercontent.com/44284877/194047248-01af7233-7c4f-4140-9415-c4b791eef504.mp4
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | 1.0 | [BUG] Sidebar - ### Describe the bug
When we click on the sidebar-nav item, it automatically closes the sidebar and opens the new page.
whereas when we click on the dropdown nav-items, it doen;t close the sidebar automatically
under Learn navitem:
- navitem > learn
- navitem > learn > prep
<img src="https://user-images.githubusercontent.com/44284877/194046321-07e9b714-84e0-4c89-9c16-57a022e0874d.png" alt="drawing" width="200"/>
### To Reproduce
- navitem > events:
- works.
- navitem > learn > prep > quiz:
- does not work.
### Expected Behavior
When we click on:
- navitem > learn > prep > quiz:
it should work the same as events and blogs.
### Screenshot/ Video
screenshot:
<img src="https://user-images.githubusercontent.com/44284877/194046321-07e9b714-84e0-4c89-9c16-57a022e0874d.png" alt="drawing" width="200"/>
video:
https://user-images.githubusercontent.com/44284877/194047248-01af7233-7c4f-4140-9415-c4b791eef504.mp4
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | non_defect | sidebar describe the bug when we click on the sidebar nav item it automatically closes the sidebar and opens the new page whereas when we click on the dropdown nav items it doen t close the sidebar automatically under learn navitem navitem learn navitem learn prep to reproduce navitem events works navitem learn prep quiz does not work expected behavior when we click on navitem learn prep quiz it should work the same as events and blogs screenshot video screenshot video code of conduct i agree to follow this project s code of conduct | 0 |
1,294 | 5,475,329,541 | IssuesEvent | 2017-03-11 09:53:45 | BloodyBlade/Fairytale | https://api.github.com/repos/BloodyBlade/Fairytale | closed | Вынести сервис логирования в Common | architecture | Вынести текущую обертку над NLog в отдельный проект. В дальнейшем развить его до полноценного сервиса и отвязать прочие проекты от NLog. | 1.0 | Вынести сервис логирования в Common - Вынести текущую обертку над NLog в отдельный проект. В дальнейшем развить его до полноценного сервиса и отвязать прочие проекты от NLog. | non_defect | вынести сервис логирования в common вынести текущую обертку над nlog в отдельный проект в дальнейшем развить его до полноценного сервиса и отвязать прочие проекты от nlog | 0 |
262,271 | 27,881,876,733 | IssuesEvent | 2023-03-21 20:05:28 | MatBenfield/news | https://api.github.com/repos/MatBenfield/news | opened | [SecurityWeek] Verosint Launches Account Fraud Detection and Prevention Platform | SecurityWeek |
443ID is refocusing its solution to tackle account fraud detection and prevention, and has changed its name to Verosint.
The post [Verosint Launches Account Fraud Detection and Prevention Platform](https://www.securityweek.com/verosint-launches-account-fraud-detection-and-prevention-platform/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/verosint-launches-account-fraud-detection-and-prevention-platform/>
| True | [SecurityWeek] Verosint Launches Account Fraud Detection and Prevention Platform -
443ID is refocusing its solution to tackle account fraud detection and prevention, and has changed its name to Verosint.
The post [Verosint Launches Account Fraud Detection and Prevention Platform](https://www.securityweek.com/verosint-launches-account-fraud-detection-and-prevention-platform/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/verosint-launches-account-fraud-detection-and-prevention-platform/>
| non_defect | verosint launches account fraud detection and prevention platform is refocusing its solution to tackle account fraud detection and prevention and has changed its name to verosint the post appeared first on | 0 |
208,283 | 16,108,910,617 | IssuesEvent | 2021-04-27 18:20:48 | AJStacy/adze | https://api.github.com/repos/AJStacy/adze | closed | Add Filters Description to Config Docs | documentation | The config docs for Adze Configuration show's `filters` as a config property but doesn't document it in the description table below. | 1.0 | Add Filters Description to Config Docs - The config docs for Adze Configuration show's `filters` as a config property but doesn't document it in the description table below. | non_defect | add filters description to config docs the config docs for adze configuration show s filters as a config property but doesn t document it in the description table below | 0 |
42,958 | 9,344,800,619 | IssuesEvent | 2019-03-30 01:06:29 | EdenServer/community | https://api.github.com/repos/EdenServer/community | closed | Onion Sword desynth = 100 blacksmithing? | in-code-review | Apparently, you can use the desynth of an onion sword to reach 100 blacksmithing on DSP, have noticed a few people doing this.
https://ffxiclopedia.fandom.com/wiki/Onion_Sword
There is no level listed and I can't find any proof or indication of it anywhere, not in any old guides, anything. This strongly feels wrong.
This is copied from our own website::
Smithing (100)
--
Onion Sword x1
Normal: Bronze Ingot x1
HQ1: Copper Ingot x1
HQ2: Square Of Sheep Leather x1
HQ3: Square Of Sheep Leather x2
| 1.0 | Onion Sword desynth = 100 blacksmithing? - Apparently, you can use the desynth of an onion sword to reach 100 blacksmithing on DSP, have noticed a few people doing this.
https://ffxiclopedia.fandom.com/wiki/Onion_Sword
There is no level listed and I can't find any proof or indication of it anywhere, not in any old guides, anything. This strongly feels wrong.
This is copied from our own website::
Smithing (100)
--
Onion Sword x1
Normal: Bronze Ingot x1
HQ1: Copper Ingot x1
HQ2: Square Of Sheep Leather x1
HQ3: Square Of Sheep Leather x2
| non_defect | onion sword desynth blacksmithing apparently you can use the desynth of an onion sword to reach blacksmithing on dsp have noticed a few people doing this there is no level listed and i can t find any proof or indication of it anywhere not in any old guides anything this strongly feels wrong this is copied from our own website smithing onion sword normal bronze ingot copper ingot square of sheep leather square of sheep leather | 0 |