Column summary (dtype and observed value range / string length):

- `Unnamed: 0` — int64, 0 to 832k
- `id` — float64, 2.49B to 32.1B
- `type` — string, 1 class (`IssuesEvent`)
- `created_at` — string, length 19
- `repo` — string, length 5 to 112
- `repo_url` — string, length 34 to 141
- `action` — string, 3 classes
- `title` — string, length 1 to 757
- `labels` — string, length 4 to 664
- `body` — string, length 3 to 261k
- `index` — string, 10 classes
- `text_combine` — string, length 96 to 261k
- `label` — string, 2 classes
- `text` — string, length 96 to 232k
- `binary_label` — int64, 0 or 1

| Unnamed: 0 | id | type | created_at | repo | repo_url | action | title | labels | body | index | text_combine | label | text | binary_label |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
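The schema above pairs a string `label` column (`defect` / `non_defect`) with an integer `binary_label` (1 / 0). A minimal sketch of how such a frame could be assembled and the binary column derived, assuming pandas and using illustrative values only (the column names match the schema; the row values here are hypothetical):

```python
import pandas as pd

# Hypothetical miniature of the issue-event table; values are illustrative.
df = pd.DataFrame(
    {
        "type": ["IssuesEvent", "IssuesEvent"],
        "action": ["closed", "opened"],
        "title": ["Bad font rendering on Chrome", "CVE detected in a jar"],
        "label": ["defect", "non_defect"],
    }
)

# In the rows below, binary_label mirrors label: 1 for defect, 0 otherwise.
df["binary_label"] = (df["label"] == "defect").astype(int)

print(df["binary_label"].tolist())  # [1, 0]
```

The same one-liner works on the full dataset once it is loaded, since the mapping is purely a function of the `label` column.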
29,933
| 5,959,855,772
|
IssuesEvent
|
2017-05-29 12:26:40
|
contao/core-bundle
|
https://api.github.com/repos/contao/core-bundle
|
closed
|
[4.4.0-dev] Commit #8ecb142 results in bad font rendering on Chrome (Windows)
|
defect
|
At least on my Desktop the font renders almost unreadable.
Especially in light font width some chars are to thin and distorted (e.g. the letter "e").
Can someone else confirm this?

|
1.0
|
[4.4.0-dev] Commit #8ecb142 results in bad font rendering on Chrome (Windows) - At least on my Desktop the font renders almost unreadable.
Especially in light font width some chars are to thin and distorted (e.g. the letter "e").
Can someone else confirm this?

|
defect
|
commit results in bad font rendering on chrome windows at least on my desktop the font renders almost unreadable especially in light font width some chars are to thin and distorted e g the letter e can someone else confirm this
| 1
|
103,673
| 16,603,670,776
|
IssuesEvent
|
2021-06-01 23:36:09
|
hygieia/hygieia-common
|
https://api.github.com/repos/hygieia/hygieia-common
|
opened
|
CVE-2020-13943 (Medium) detected in tomcat-embed-core-9.0.14.jar
|
security vulnerability
|
## CVE-2020-13943 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.14.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: hygieia-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.14/tomcat-embed-core-9.0.14.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.2.RELEASE.jar
- :x: **tomcat-embed-core-9.0.14.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/hygieia/hygieia-common/commits/b8fbfc18552132520e52029d9b0fc0a1db09f115">b8fbfc18552132520e52029d9b0fc0a1db09f115</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
If an HTTP/2 client connecting to Apache Tomcat 10.0.0-M1 to 10.0.0-M7, 9.0.0.M1 to 9.0.37 or 8.5.0 to 8.5.57 exceeded the agreed maximum number of concurrent streams for a connection (in violation of the HTTP/2 protocol), it was possible that a subsequent request made on that connection could contain HTTP headers - including HTTP/2 pseudo headers - from a previous request rather than the intended headers. This could lead to users seeing responses for unexpected resources.
<p>Publish Date: 2020-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13943>CVE-2020-13943</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r4a390027eb27e4550142fac6c8317cc684b157ae314d31514747f307%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r4a390027eb27e4550142fac6c8317cc684b157ae314d31514747f307%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-10-12</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.58,9.0.38,10.0.0-M8;org.apache.tomcat.embed:tomcat-embed-core:8.5.58,9.0.38,10.0.0-M8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-13943 (Medium) detected in tomcat-embed-core-9.0.14.jar - ## CVE-2020-13943 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.14.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Path to dependency file: hygieia-common/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.14/tomcat-embed-core-9.0.14.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.2.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.2.RELEASE.jar
- :x: **tomcat-embed-core-9.0.14.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/hygieia/hygieia-common/commits/b8fbfc18552132520e52029d9b0fc0a1db09f115">b8fbfc18552132520e52029d9b0fc0a1db09f115</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
If an HTTP/2 client connecting to Apache Tomcat 10.0.0-M1 to 10.0.0-M7, 9.0.0.M1 to 9.0.37 or 8.5.0 to 8.5.57 exceeded the agreed maximum number of concurrent streams for a connection (in violation of the HTTP/2 protocol), it was possible that a subsequent request made on that connection could contain HTTP headers - including HTTP/2 pseudo headers - from a previous request rather than the intended headers. This could lead to users seeing responses for unexpected resources.
<p>Publish Date: 2020-10-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13943>CVE-2020-13943</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/r4a390027eb27e4550142fac6c8317cc684b157ae314d31514747f307%40%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/r4a390027eb27e4550142fac6c8317cc684b157ae314d31514747f307%40%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2020-10-12</p>
<p>Fix Resolution: org.apache.tomcat:tomcat-coyote:8.5.58,9.0.38,10.0.0-M8;org.apache.tomcat.embed:tomcat-embed-core:8.5.58,9.0.38,10.0.0-M8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in tomcat embed core jar cve medium severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation path to dependency file hygieia common pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch main vulnerability details if an http client connecting to apache tomcat to to or to exceeded the agreed maximum number of concurrent streams for a connection in violation of the http protocol it was possible that a subsequent request made on that connection could contain http headers including http pseudo headers from a previous request rather than the intended headers this could lead to users seeing responses for unexpected resources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat tomcat coyote org apache tomcat embed tomcat embed core step up your open source security game with whitesource
| 0
|
133,694
| 18,299,043,932
|
IssuesEvent
|
2021-10-05 23:55:46
|
bsbtd/Teste
|
https://api.github.com/repos/bsbtd/Teste
|
opened
|
CVE-2013-0248 (Medium) detected in commons-fileupload-1.2.jar
|
security vulnerability
|
## CVE-2013-0248 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.2.jar</b></p></summary>
<p>The FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Library home page: <a href="http://jakarta.apache.org/commons/fileupload/">http://jakarta.apache.org/commons/fileupload/</a></p>
<p>Path to vulnerable library: upload-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-fileupload-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The default configuration of javax.servlet.context.tempdir in Apache Commons FileUpload 1.0 through 1.2.2 uses the /tmp directory for uploaded files, which allows local users to overwrite arbitrary files via an unspecified symlink attack.
<p>Publish Date: 2013-03-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-0248>CVE-2013-0248</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0248">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0248</a></p>
<p>Release Date: 2013-03-15</p>
<p>Fix Resolution: 1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2013-0248 (Medium) detected in commons-fileupload-1.2.jar - ## CVE-2013-0248 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-fileupload-1.2.jar</b></p></summary>
<p>The FileUpload component provides a simple yet flexible means of adding support for multipart
file upload functionality to servlets and web applications.</p>
<p>Library home page: <a href="http://jakarta.apache.org/commons/fileupload/">http://jakarta.apache.org/commons/fileupload/</a></p>
<p>Path to vulnerable library: upload-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **commons-fileupload-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bsbtd/Teste/commit/64dde89c50c07496423c4d4a865f2e16b92399ad">64dde89c50c07496423c4d4a865f2e16b92399ad</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The default configuration of javax.servlet.context.tempdir in Apache Commons FileUpload 1.0 through 1.2.2 uses the /tmp directory for uploaded files, which allows local users to overwrite arbitrary files via an unspecified symlink attack.
<p>Publish Date: 2013-03-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2013-0248>CVE-2013-0248</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0248">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0248</a></p>
<p>Release Date: 2013-03-15</p>
<p>Fix Resolution: 1.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve medium detected in commons fileupload jar cve medium severity vulnerability vulnerable library commons fileupload jar the fileupload component provides a simple yet flexible means of adding support for multipart file upload functionality to servlets and web applications library home page a href path to vulnerable library upload jar dependency hierarchy x commons fileupload jar vulnerable library found in head commit a href vulnerability details the default configuration of javax servlet context tempdir in apache commons fileupload through uses the tmp directory for uploaded files which allows local users to overwrite arbitrary files via an unspecified symlink attack publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
167,331
| 14,108,336,983
|
IssuesEvent
|
2020-11-06 17:38:58
|
imsanjoykb/Chatbot-Application
|
https://api.github.com/repos/imsanjoykb/Chatbot-Application
|
opened
|
code sets up your registering in your firebase database.
|
documentation
|
As you might wonder, the Firebase database is a NoSQL which essentially means, you don’t need to query stuff using SQL.
Firebase uses reference method to retrieve and put the values.
So we have to decide on what and how to make our database structure. Here we also are going to make user sign up and sign in using a custom username and password, so we’ve to keep this in mind too.
The database structure that I’ve come up for this app is based on simplicity. We keep authentication details in a parent node named “users” and messages in another parent node named “messages”.
Inside messages we also keep track of who messaged whom by the order of the usernames.
|
1.0
|
code sets up your registering in your firebase database. - As you might wonder, the Firebase database is a NoSQL which essentially means, you don’t need to query stuff using SQL.
Firebase uses reference method to retrieve and put the values.
So we have to decide on what and how to make our database structure. Here we also are going to make user sign up and sign in using a custom username and password, so we’ve to keep this in mind too.
The database structure that I’ve come up for this app is based on simplicity. We keep authentication details in a parent node named “users” and messages in another parent node named “messages”.
Inside messages we also keep track of who messaged whom by the order of the usernames.
|
non_defect
|
code sets up your registering in your firebase database as you might wonder the firebase database is a nosql which essentially means you don’t need to query stuff using sql firebase uses reference method to retrieve and put the values so we have to decide on what and how to make our database structure here we also are going to make user sign up and sign in using a custom username and password so we’ve to keep this in mind too the database structure that i’ve come up for this app is based on simplicity we keep authentication details in a parent node named “users” and messages in another parent node named “messages” inside messages we also keep track of who messaged whom by the order of the usernames
| 0
|
559
| 2,570,824,562
|
IssuesEvent
|
2015-02-10 12:38:39
|
itm/testbed-runtime
|
https://api.github.com/repos/itm/testbed-runtime
|
closed
|
JSON representation of available devices empty
|
Defect
|
Link "Get JSON representation" in WiseGui returns empty array.
|
1.0
|
JSON representation of available devices empty - Link "Get JSON representation" in WiseGui returns empty array.
|
defect
|
json representation of available devices empty link get json representation in wisegui returns empty array
| 1
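The `text` column in each row is a cleaned version of `text_combine`: lowercased, with digits and punctuation stripped and whitespace collapsed. A sketch of a normalizer that reproduces this behavior on the row above, assuming a simple regex-based cleaning (the exact pipeline used to build the dataset is not shown, so this is an approximation):

```python
import re

def normalize(s: str) -> str:
    # Approximate the cleaning seen in the `text` column: lowercase,
    # replace anything that is not a letter or whitespace with a space,
    # then collapse runs of whitespace.
    s = s.lower()
    s = re.sub(r"[^a-z\s]", " ", s)
    return re.sub(r"\s+", " ", s).strip()

print(normalize('Link "Get JSON representation" in WiseGui returns empty array.'))
# link get json representation in wisegui returns empty array
```

Note that this also deletes digits entirely (e.g. `CVE-2020-13943` becomes `cve`), which matches what the `text` column shows for the vulnerability rows.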
|
38,832
| 8,967,397,515
|
IssuesEvent
|
2019-01-29 03:10:45
|
svigerske/Ipopt
|
https://api.github.com/repos/svigerske/Ipopt
|
closed
|
Error installing / configuring ASL
|
Ipopt defect
|
Issue created by migration from Trac.
Original creator: rajhanschinmay
Original creation time: 2017-09-15 14:29:35
Assignee: ipopt-team
Version: 3.12
When I am running ./get.ASL command in Linux terminal, it says:
Applying path for MinGW
/home/chinmay/CoinIpopt/ThirdParty/ASL/get.ASL: 57: /home/chinmay/CoinIpopt/ThirdParty/ASL/get.ASL: cannot open mingw.patch: No such file
How to solve this?
|
1.0
|
Error installing / configuring ASL - Issue created by migration from Trac.
Original creator: rajhanschinmay
Original creation time: 2017-09-15 14:29:35
Assignee: ipopt-team
Version: 3.12
When I am running ./get.ASL command in Linux terminal, it says:
Applying path for MinGW
/home/chinmay/CoinIpopt/ThirdParty/ASL/get.ASL: 57: /home/chinmay/CoinIpopt/ThirdParty/ASL/get.ASL: cannot open mingw.patch: No such file
How to solve this?
|
defect
|
error installing configuring asl issue created by migration from trac original creator rajhanschinmay original creation time assignee ipopt team version when i am running get asl command in linux terminal it says applying path for mingw home chinmay coinipopt thirdparty asl get asl home chinmay coinipopt thirdparty asl get asl cannot open mingw patch no such file how to solve this
| 1
|
67,267
| 20,961,597,574
|
IssuesEvent
|
2022-03-27 21:46:35
|
abedmaatalla/imsdroid
|
https://api.github.com/repos/abedmaatalla/imsdroid
|
closed
|
Can imsdroid support LAN? neither WLAN nor 3G.
|
Priority-Medium Type-Defect auto-migrated
|
```
What steps will reproduce the problem?
1.Install imsdroid in Android system by LAN accessing Network
2.Set Identity and Network
3.Imsdroid show "NO active network"
Now does imsdroid not support LAN? How can I solve this problem?
```
Original issue reported on code.google.com by `ldf198...@163.com` on 4 Jan 2011 at 2:41
|
1.0
|
Can imsdroid support LAN? neither WLAN nor 3G. - ```
What steps will reproduce the problem?
1.Install imsdroid in Android system by LAN accessing Network
2.Set Identity and Network
3.Imsdroid show "NO active network"
Now does imsdroid not support LAN? How can I solve this problem?
```
Original issue reported on code.google.com by `ldf198...@163.com` on 4 Jan 2011 at 2:41
|
defect
|
can imsdroid support lan neither wlan nor what steps will reproduce the problem install imsdroid in android system by lan accessing network set identity and network imsdroid show no active network now does imsdroid not support lan how can i solve this problem original issue reported on code google com by com on jan at
| 1
|
64,080
| 18,165,646,381
|
IssuesEvent
|
2021-09-27 14:22:01
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
opened
|
Can't enable message search
|
T-Defect
|
### Steps to reproduce
1. I used to have seshat-search enabled and working on sway
2. I logged out of sway and into gnome and use element in gnome
3. I log back into sway
4. search not available: sqlcypher error
5. reset matrix-org/matrix-react-sdk#5806
6. enable
### What happened?
### What did you expect?
start indexing, search becomes available
### What happened?
stuck spinning here

### Related
#14229
### Operating system
arch
### Application version
Element Nightly version: 2021092701 Olm version: 3.2.3
### How did you install the app?
aur/nightly-bin
### Homeserver
private
### Have you submitted a rageshake?
Yes
|
1.0
|
Can't enable message search - ### Steps to reproduce
1. I used to have seshat-search enabled and working on sway
2. I logged out of sway and into gnome and use element in gnome
3. I log back into sway
4. search not available: sqlcypher error
5. reset matrix-org/matrix-react-sdk#5806
6. enable
### What happened?
### What did you expect?
start indexing, search becomes available
### What happened?
stuck spinning here

### Related
#14229
### Operating system
arch
### Application version
Element Nightly version: 2021092701 Olm version: 3.2.3
### How did you install the app?
aur/nightly-bin
### Homeserver
private
### Have you submitted a rageshake?
Yes
|
defect
|
can t enable message search steps to reproduce i used to have seshat search enabled and working on sway i logged out of sway and into gnome and use element in gnome i log back into sway search not available sqlcypher error reset matrix org matrix react sdk enable what happened what did you expect start indexing search becomes available what happened stuck spinning here related operating system arch application version element nightly version olm version how did you install the app aur nightly bin homeserver private have you submitted a rageshake yes
| 1
|
29,733
| 5,846,049,814
|
IssuesEvent
|
2017-05-10 15:24:41
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
INSERT INTO .. SET Record statement should not take defaulted null values into consideration
|
C: Functionality P: Medium R: Invalid T: Defect
|
Similar to other API elements, the `INSERT INTO .. SET record` API should not explicitly set values that are:
- `null` (in Java)
- `NOT NULL` (in the database)
---
See also
- #2700
- #4161
- https://groups.google.com/forum/#!topic/jooq-user/8hwhDanETYs
|
1.0
|
INSERT INTO .. SET Record statement should not take defaulted null values into consideration - Similar to other API elements, the `INSERT INTO .. SET record` API should not explicitly set values that are:
- `null` (in Java)
- `NOT NULL` (in the database)
---
See also
- #2700
- #4161
- https://groups.google.com/forum/#!topic/jooq-user/8hwhDanETYs
|
defect
|
insert into set record statement should not take defaulted null values into consideration similar to other api elements the insert into set record api should not explicitly set values that are null in java not null in the database see also
| 1
|
53,739
| 13,262,213,922
|
IssuesEvent
|
2020-08-20 21:19:27
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
closed
|
clsim tablemaker deadlock (Trac #1984)
|
Migrated from Trac combo simulation defect
|
When running the clsim tablemaker, once in a while (maybe 5-10% of my jobs) get stuck at a random time and wait forever at 0% CPU.
Running on Ubuntu16.04 w/ Intel openCL runtime and Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz
GDB revealed that it gets stuck at:
```text
pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
185 ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.
```
backtrace:
```text
(gdb) backtrace
https://code.icecube.wisc.edu/ticket/0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fffe1d9a50f in I3CLSimTabulatorModule::DAQ(boost::shared_ptr<I3Frame>) () from /home/peller/build/simulation/trunk/lib/libclsim.so
#2 0x00007ffff583768e in I3Module::Process() () from /home/peller/build/simulation/trunk/lib/libicetray.so
#3 0x00007ffff583a689 in I3Module::Process_() () from /home/peller/build/simulation/trunk/lib/libicetray.so
#4 0x00007ffff58351ad in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
#5 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
#6 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
#7 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
#8 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
#9 0x00007ffff57d79a9 in I3Tray::Execute(unsigned int) () from /home/peller/build/simulation/trunk/lib/libicetray.so
#10 0x00007ffff592d56e in boost::python::objects::caller_py_function_impl<boost::python::detail::caller<void (I3Tray::*)(), boost::python::default_call_policies, boost::mpl::vector2<void, I3Tray&> > >::operator()(_object*, _object*) () from /home/peller/build/simulation/trunk/lib/libicetray.so
#11 0x00007ffff5248c8d in boost::python::objects::function::call(_object*, _object*) const ()
from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
#12 0x00007ffff5248e88 in boost::detail::function::void_function_ref_invoker0<boost::python::objects::(anonymous namespace)::bind_return, void>::invoke(boost::detail::function::function_buffer&) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
#13 0x00007ffff5250ee3 in boost::python::detail::exception_handler::operator()(boost::function0<void> const&) const ()
from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
#14 0x00007fffed72e263 in boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<not_found_exception, void (*)(not_found_exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(not_found_exception const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke(boost::detail::function::function_buffer&, boost::python::detail::exception_handler const&, boost::function0<void> const&) () from /home/peller/build/simulation/trunk/lib/icecube/dataclasses.so
#15 0x00007ffff5250c9d in boost::python::handle_exception_impl(boost::function0<void>) ()
from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
#16 0x00007ffff5246059 in function_call () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
#17 0x00007ffff7a27333 in PyObject_Call (func=func@entry=0x74c350, arg=arg@entry=0x7fffcfdb62d0, kw=kw@entry=0x0) at Objects/abstract.c:2529
#18 0x00007ffff7add212 in do_call (nk=<optimized out>, na=1, pp_stack=0x7fffffffc140, func=0x74c350) at Python/ceval.c:4253
#19 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc140) at Python/ceval.c:4058
#20 PyEval_EvalFrameEx (f=f@entry=0x781ff0, throwflag=throwflag@entry=0) at Python/ceval.c:2681
#21 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=1,
kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267
#22 0x00007ffff7adf05a in fast_function (nk=0, na=1, n=<optimized out>, pp_stack=0x7fffffffc330, func=0x7fffdad056e0) at Python/ceval.c:4131
#23 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc330) at Python/ceval.c:4056
#24 PyEval_EvalFrameEx (f=f@entry=0x685720, throwflag=throwflag@entry=0) at Python/ceval.c:2681
#25 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168,
args=args@entry=0x0, argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267
#26 0x00007ffff7ae0389 in PyEval_EvalCode (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168) at Python/ceval.c:669
#27 0x00007ffff7b0429a in run_mod (arena=0x673c60, flags=0x7fffffffc550, locals=0x7ffff7f59168, globals=0x7ffff7f59168,
filename=0x63fb30 "/home/peller/build/simulation/trunk/lib/libdataio.so", mod=<optimized out>) at Python/pythonrun.c:1371
#28 PyRun_FileExFlags (fp=fp@entry=0x63fb30, filename=filename@entry=0x7fffffffcb47 "generate_table.py", start=start@entry=257, globals=globals@entry=0x7ffff7f59168,
locals=locals@entry=0x7ffff7f59168, closeit=closeit@entry=1, flags=0x7fffffffc550) at Python/pythonrun.c:1357
#29 0x00007ffff7b05797 in PyRun_SimpleFileExFlags (fp=fp@entry=0x63fb30, filename=0x7fffffffcb47 "generate_table.py", closeit=1, flags=flags@entry=0x7fffffffc550)
at Python/pythonrun.c:949
#30 0x00007ffff7b05e53 in PyRun_AnyFileExFlags (fp=fp@entry=0x63fb30, filename=<optimized out>, closeit=<optimized out>, flags=flags@entry=0x7fffffffc550)
at Python/pythonrun.c:753
#31 0x00007ffff7b1c041 in Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:640
#32 0x00007ffff740e830 in __libc_start_main (main=0x4006b0 <main>, argc=8, argv=0x7fffffffc718, init=<optimized out>, fini=<optimized out>,
rtld_fini=<optimized out>, stack_end=0x7fffffffc708) at ../csu/libc-start.c:291
#33 0x00000000004006e9 in _start ()
```
full backtrace:
```text
(gdb) backtrace full
https://code.icecube.wisc.edu/ticket/0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1 0x00007fffe1d9a50f in I3CLSimTabulatorModule::DAQ(boost::shared_ptr<I3Frame>) () from /home/peller/build/simulation/trunk/lib/libclsim.so
No symbol table info available.
#2 0x00007ffff583768e in I3Module::Process() () from /home/peller/build/simulation/trunk/lib/libicetray.so
No symbol table info available.
#3 0x00007ffff583a689 in I3Module::Process_() () from /home/peller/build/simulation/trunk/lib/libicetray.so
No symbol table info available.
#4 0x00007ffff58351ad in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
No symbol table info available.
#5 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
No symbol table info available.
#6 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
No symbol table info available.
#7 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
No symbol table info available.
#8 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so
No symbol table info available.
#9 0x00007ffff57d79a9 in I3Tray::Execute(unsigned int) () from /home/peller/build/simulation/trunk/lib/libicetray.so
No symbol table info available.
#10 0x00007ffff592d56e in boost::python::objects::caller_py_function_impl<boost::python::detail::caller<void (I3Tray::*)(), boost::python::default_call_policies, boost::mpl::vector2<void, I3Tray&> > >::operator()(_object*, _object*) () from /home/peller/build/simulation/trunk/lib/libicetray.so
No symbol table info available.
#11 0x00007ffff5248c8d in boost::python::objects::function::call(_object*, _object*) const ()
from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
No symbol table info available.
#12 0x00007ffff5248e88 in boost::detail::function::void_function_ref_invoker0<boost::python::objects::(anonymous namespace)::bind_return, void>::invoke(boost::detail::function::function_buffer&) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
No symbol table info available.
#13 0x00007ffff5250ee3 in boost::python::detail::exception_handler::operator()(boost::function0<void> const&) const ()
from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
No symbol table info available.
#14 0x00007fffed72e263 in boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<not_found_exception, void (*)(not_found_exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(not_found_exception const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke(boost::detail::function::function_buffer&, boost::python::detail::exception_handler const&, boost::function0<void> const&) ()
from /home/peller/build/simulation/trunk/lib/icecube/dataclasses.so
No symbol table info available.
#15 0x00007ffff5250c9d in boost::python::handle_exception_impl(boost::function0<void>) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
No symbol table info available.
#16 0x00007ffff5246059 in function_call () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
No symbol table info available.
#17 0x00007ffff7a27333 in PyObject_Call (func=func@entry=0x74c350, arg=arg@entry=0x7fffcfdb62d0, kw=kw@entry=0x0) at Objects/abstract.c:2529
result = <optimized out>
call = 0x7ffff5245ff0 <function_call>
#18 0x00007ffff7add212 in do_call (nk=<optimized out>, na=1, pp_stack=0x7fffffffc140, func=0x74c350) at Python/ceval.c:4253
callargs = <optimized out>
kwdict = <optimized out>
result = 0x0
#19 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc140) at Python/ceval.c:4058
func = 0x74c350
w = <optimized out>
na = 1
nk = <optimized out>
n = <optimized out>
pfunc = 0x782178
x = <optimized out>
#20 PyEval_EvalFrameEx (f=f@entry=0x781ff0, throwflag=throwflag@entry=0) at Python/ceval.c:2681
sp = 0x782178
stack_pointer = <optimized out>
next_instr = <optimized out>
opcode = <optimized out>
oparg = <optimized out>
why = WHY_NOT
err = 0
x = <optimized out>
v = <optimized out>
w = <optimized out>
u = <optimized out>
t = <optimized out>
stream = 0x0
fastlocals = 0x782168
freevars = <optimized out>
retval = <optimized out>
tstate = <optimized out>
co = <optimized out>
instr_ub = -1
instr_lb = 0
instr_prev = -1
first_instr = <optimized out>
names = <optimized out>
consts = <optimized out>
#21 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=1,
kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267
f = 0x781ff0
retval = 0x0
fastlocals = 0x782168
freevars = 0x782178
tstate = 0x6020a0
x = <optimized out>
u = <optimized out>
#22 0x00007ffff7adf05a in fast_function (nk=0, na=1, n=<optimized out>, pp_stack=0x7fffffffc330, func=0x7fffdad056e0) at Python/ceval.c:4131
co = <optimized out>
nd = <optimized out>
globals = <optimized out>
argdefs = <optimized out>
d = <optimized out>
#23 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc330) at Python/ceval.c:4056
func = 0x7fffdad056e0
w = <optimized out>
na = 1
nk = 0
n = <optimized out>
pfunc = 0x685898
x = <optimized out>
#24 PyEval_EvalFrameEx (f=f@entry=0x685720, throwflag=throwflag@entry=0) at Python/ceval.c:2681
sp = 0x6858a0
stack_pointer = <optimized out>
next_instr = <optimized out>
opcode = <optimized out>
oparg = <optimized out>
why = WHY_NOT
err = 0
x = <optimized out>
v = <optimized out>
w = <optimized out>
u = <optimized out>
t = <optimized out>
stream = 0x0
fastlocals = 0x685898
freevars = <optimized out>
retval = <optimized out>
tstate = <optimized out>
co = <optimized out>
instr_ub = -1
instr_lb = 0
instr_prev = -1
first_instr = <optimized out>
names = <optimized out>
consts = <optimized out>
#25 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168, args=args@entry=0x0,
argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267
f = 0x685720
retval = 0x0
fastlocals = 0x685898
freevars = 0x685898
tstate = 0x6020a0
x = <optimized out>
u = <optimized out>
#26 0x00007ffff7ae0389 in PyEval_EvalCode (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168) at Python/ceval.c:669
No locals.
#27 0x00007ffff7b0429a in run_mod (arena=0x673c60, flags=0x7fffffffc550, locals=0x7ffff7f59168, globals=0x7ffff7f59168,
filename=0x63fb30 "/home/peller/build/simulation/trunk/lib/libdataio.so", mod=<optimized out>) at Python/pythonrun.c:1371
co = 0x7ffff7ece8b0
v = <optimized out>
#28 PyRun_FileExFlags (fp=fp@entry=0x63fb30, filename=filename@entry=0x7fffffffcb47 "generate_table.py", start=start@entry=257, globals=globals@entry=0x7ffff7f59168,
locals=locals@entry=0x7ffff7f59168, closeit=closeit@entry=1, flags=0x7fffffffc550) at Python/pythonrun.c:1357
mod = <optimized out>
arena = 0x673c60
#29 0x00007ffff7b05797 in PyRun_SimpleFileExFlags (fp=fp@entry=0x63fb30, filename=0x7fffffffcb47 "generate_table.py", closeit=1, flags=flags@entry=0x7fffffffc550)
at Python/pythonrun.c:949
m = 0x7ffff7f45be8
d = 0x7ffff7f59168
v = <optimized out>
ext = 0x7fffffffcb54 "e.py"
set_file_name = 1
len = <optimized out>
ret = -1
#30 0x00007ffff7b05e53 in PyRun_AnyFileExFlags (fp=fp@entry=0x63fb30, filename=<optimized out>, closeit=<optimized out>, flags=flags@entry=0x7fffffffc550) at Python/pythonrun.c:753
No locals.
#31 0x00007ffff7b1c041 in Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:640
c = <optimized out>
sts = <optimized out>
command = 0x0
filename = 0x7fffffffcb47 "generate_table.py"
module = 0x0
fp = 0x63fb30
p = <optimized out>
unbuffered = 0
skipfirstline = 0
stdin_is_interactive = 1
help = <optimized out>
version = <optimized out>
saw_unbuffered_flag = <optimized out>
cf = {cf_flags = 0}
#32 0x00007ffff740e830 in __libc_start_main (main=0x4006b0 <main>, argc=8, argv=0x7fffffffc718, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>,
stack_end=0x7fffffffc708) at ../csu/libc-start.c:291
result = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -52366050721878219, 4196032, 140737488340752, 0, 0, 52365500628250421, 52384425274421045}, mask_was_saved = 0}}, priv = {pad = {
              0x0, 0x0, 0x8, 0x4006b0 <main>}, data = {prev = 0x0, cleanup = 0x0, canceltype = 8}}}
not_first_call = <optimized out>
#33 0x00000000004006e9 in _start ()
No symbol table info available.
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1984">https://code.icecube.wisc.edu/projects/icecube/ticket/1984</a>, reported by peller</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"_ts": "1550067215093672",
"description": "When running the clsim tablemaker, once in a while (maybe 5-10% of my jobs) get stuck at a random time and wait forever at 0% CPU.\n\nRunning on Ubuntu16.04 w/ Intel openCL runtime and Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz\n\nGDB revealed that it gets stuck at:\n\n{{{\npthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185\n185 ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.\n}}}\n\n\nbacktrace:\n\n{{{\n(gdb) backtrace\n#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185\n#1 0x00007fffe1d9a50f in I3CLSimTabulatorModule::DAQ(boost::shared_ptr<I3Frame>) () from /home/peller/build/simulation/trunk/lib/libclsim.so\n#2 0x00007ffff583768e in I3Module::Process() () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#3 0x00007ffff583a689 in I3Module::Process_() () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#4 0x00007ffff58351ad in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#5 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#6 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#7 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#8 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#9 0x00007ffff57d79a9 in I3Tray::Execute(unsigned int) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#10 0x00007ffff592d56e in boost::python::objects::caller_py_function_impl<boost::python::detail::caller<void (I3Tray::*)(), boost::python::default_call_policies, boost::mpl::vector2<void, I3Tray&> > >::operator()(_object*, _object*) () from 
/home/peller/build/simulation/trunk/lib/libicetray.so\n#11 0x00007ffff5248c8d in boost::python::objects::function::call(_object*, _object*) const ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#12 0x00007ffff5248e88 in boost::detail::function::void_function_ref_invoker0<boost::python::objects::(anonymous namespace)::bind_return, void>::invoke(boost::detail::function::function_buffer&) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#13 0x00007ffff5250ee3 in boost::python::detail::exception_handler::operator()(boost::function0<void> const&) const ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#14 0x00007fffed72e263 in boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<not_found_exception, void (*)(not_found_exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(not_found_exception const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke(boost::detail::function::function_buffer&, boost::python::detail::exception_handler const&, boost::function0<void> const&) () from /home/peller/build/simulation/trunk/lib/icecube/dataclasses.so\n#15 0x00007ffff5250c9d in boost::python::handle_exception_impl(boost::function0<void>) ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#16 0x00007ffff5246059 in function_call () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#17 0x00007ffff7a27333 in PyObject_Call (func=func@entry=0x74c350, arg=arg@entry=0x7fffcfdb62d0, kw=kw@entry=0x0) at Objects/abstract.c:2529\n#18 0x00007ffff7add212 in do_call (nk=<optimized out>, na=1, pp_stack=0x7fffffffc140, func=0x74c350) at Python/ceval.c:4253\n#19 call_function (oparg=<optimized out>, 
pp_stack=0x7fffffffc140) at Python/ceval.c:4058\n#20 PyEval_EvalFrameEx (f=f@entry=0x781ff0, throwflag=throwflag@entry=0) at Python/ceval.c:2681\n#21 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=1,\n kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267\n#22 0x00007ffff7adf05a in fast_function (nk=0, na=1, n=<optimized out>, pp_stack=0x7fffffffc330, func=0x7fffdad056e0) at Python/ceval.c:4131\n#23 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc330) at Python/ceval.c:4056\n#24 PyEval_EvalFrameEx (f=f@entry=0x685720, throwflag=throwflag@entry=0) at Python/ceval.c:2681\n#25 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168,\n args=args@entry=0x0, argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267\n#26 0x00007ffff7ae0389 in PyEval_EvalCode (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168) at Python/ceval.c:669\n#27 0x00007ffff7b0429a in run_mod (arena=0x673c60, flags=0x7fffffffc550, locals=0x7ffff7f59168, globals=0x7ffff7f59168,\n filename=0x63fb30 \"/home/peller/build/simulation/trunk/lib/libdataio.so\", mod=<optimized out>) at Python/pythonrun.c:1371\n#28 PyRun_FileExFlags (fp=fp@entry=0x63fb30, filename=filename@entry=0x7fffffffcb47 \"generate_table.py\", start=start@entry=257, globals=globals@entry=0x7ffff7f59168,\n locals=locals@entry=0x7ffff7f59168, closeit=closeit@entry=1, flags=0x7fffffffc550) at Python/pythonrun.c:1357\n#29 0x00007ffff7b05797 in PyRun_SimpleFileExFlags (fp=fp@entry=0x63fb30, filename=0x7fffffffcb47 \"generate_table.py\", closeit=1, flags=flags@entry=0x7fffffffc550)\n at Python/pythonrun.c:949\n#30 0x00007ffff7b05e53 in PyRun_AnyFileExFlags (fp=fp@entry=0x63fb30, filename=<optimized 
out>, closeit=<optimized out>, flags=flags@entry=0x7fffffffc550)\n at Python/pythonrun.c:753\n#31 0x00007ffff7b1c041 in Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:640\n#32 0x00007ffff740e830 in __libc_start_main (main=0x4006b0 <main>, argc=8, argv=0x7fffffffc718, init=<optimized out>, fini=<optimized out>,\n rtld_fini=<optimized out>, stack_end=0x7fffffffc708) at ../csu/libc-start.c:291\n#33 0x00000000004006e9 in _start ()\n}}}\n\nfull backtrace:\n{{{\n(gdb) backtrace full\n#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185\nNo locals.\n#1 0x00007fffe1d9a50f in I3CLSimTabulatorModule::DAQ(boost::shared_ptr<I3Frame>) () from /home/peller/build/simulation/trunk/lib/libclsim.so\nNo symbol table info available.\n#2 0x00007ffff583768e in I3Module::Process() () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#3 0x00007ffff583a689 in I3Module::Process_() () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#4 0x00007ffff58351ad in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#5 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#6 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#7 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#8 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#9 0x00007ffff57d79a9 in I3Tray::Execute(unsigned int) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info 
available.\n#10 0x00007ffff592d56e in boost::python::objects::caller_py_function_impl<boost::python::detail::caller<void (I3Tray::*)(), boost::python::default_call_policies, boost::mpl::vector2<void, I3Tray&> > >::operator()(_object*, _object*) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#11 0x00007ffff5248c8d in boost::python::objects::function::call(_object*, _object*) const ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#12 0x00007ffff5248e88 in boost::detail::function::void_function_ref_invoker0<boost::python::objects::(anonymous namespace)::bind_return, void>::invoke(boost::detail::function::function_buffer&) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#13 0x00007ffff5250ee3 in boost::python::detail::exception_handler::operator()(boost::function0<void> const&) const ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#14 0x00007fffed72e263 in boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<not_found_exception, void (*)(not_found_exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(not_found_exception const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke(boost::detail::function::function_buffer&, boost::python::detail::exception_handler const&, boost::function0<void> const&) ()\n from /home/peller/build/simulation/trunk/lib/icecube/dataclasses.so\nNo symbol table info available.\n#15 0x00007ffff5250c9d in boost::python::handle_exception_impl(boost::function0<void>) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#16 
0x00007ffff5246059 in function_call () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#17 0x00007ffff7a27333 in PyObject_Call (func=func@entry=0x74c350, arg=arg@entry=0x7fffcfdb62d0, kw=kw@entry=0x0) at Objects/abstract.c:2529\n result = <optimized out>\n call = 0x7ffff5245ff0 <function_call>\n#18 0x00007ffff7add212 in do_call (nk=<optimized out>, na=1, pp_stack=0x7fffffffc140, func=0x74c350) at Python/ceval.c:4253\n callargs = <optimized out>\n kwdict = <optimized out>\n result = 0x0\n#19 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc140) at Python/ceval.c:4058\n func = 0x74c350\n w = <optimized out>\n na = 1\n nk = <optimized out>\n n = <optimized out>\n pfunc = 0x782178\n x = <optimized out>\n#20 PyEval_EvalFrameEx (f=f@entry=0x781ff0, throwflag=throwflag@entry=0) at Python/ceval.c:2681\n sp = 0x782178\n stack_pointer = <optimized out>\n next_instr = <optimized out>\n opcode = <optimized out>\n oparg = <optimized out>\n why = WHY_NOT\n err = 0\n x = <optimized out>\n v = <optimized out>\n w = <optimized out>\n u = <optimized out>\n t = <optimized out>\n stream = 0x0\n fastlocals = 0x782168\n freevars = <optimized out>\n retval = <optimized out>\n tstate = <optimized out>\n co = <optimized out>\n instr_ub = -1\n instr_lb = 0\n instr_prev = -1\n first_instr = <optimized out>\n names = <optimized out>\n consts = <optimized out>\n#21 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=1,\n kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267\n f = 0x781ff0\n retval = 0x0\n fastlocals = 0x782168\n freevars = 0x782178\n tstate = 0x6020a0\n x = <optimized out>\n u = <optimized out>\n#22 0x00007ffff7adf05a in fast_function (nk=0, na=1, n=<optimized out>, pp_stack=0x7fffffffc330, func=0x7fffdad056e0) at Python/ceval.c:4131\n co = <optimized 
out>\n nd = <optimized out>\n globals = <optimized out>\n argdefs = <optimized out>\n d = <optimized out>\n#23 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc330) at Python/ceval.c:4056\n func = 0x7fffdad056e0\n w = <optimized out>\n na = 1\n nk = 0\n n = <optimized out>\n pfunc = 0x685898\n x = <optimized out>\n#24 PyEval_EvalFrameEx (f=f@entry=0x685720, throwflag=throwflag@entry=0) at Python/ceval.c:2681\n sp = 0x6858a0\n stack_pointer = <optimized out>\n next_instr = <optimized out>\n opcode = <optimized out>\n oparg = <optimized out>\n why = WHY_NOT\n err = 0\n x = <optimized out>\n v = <optimized out>\n w = <optimized out>\n u = <optimized out>\n t = <optimized out>\n stream = 0x0\n fastlocals = 0x685898\n freevars = <optimized out>\n retval = <optimized out>\n tstate = <optimized out>\n co = <optimized out>\n instr_ub = -1\n instr_lb = 0\n instr_prev = -1\n first_instr = <optimized out>\n names = <optimized out>\n consts = <optimized out>\n#25 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168, args=args@entry=0x0,\n argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267\n f = 0x685720\n retval = 0x0\n fastlocals = 0x685898\n freevars = 0x685898\n tstate = 0x6020a0\n x = <optimized out>\n u = <optimized out>\n#26 0x00007ffff7ae0389 in PyEval_EvalCode (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168) at Python/ceval.c:669\nNo locals.\n#27 0x00007ffff7b0429a in run_mod (arena=0x673c60, flags=0x7fffffffc550, locals=0x7ffff7f59168, globals=0x7ffff7f59168,\n filename=0x63fb30 \"/home/peller/build/simulation/trunk/lib/libdataio.so\", mod=<optimized out>) at Python/pythonrun.c:1371\n co = 0x7ffff7ece8b0\n v = <optimized out>\n#28 PyRun_FileExFlags (fp=fp@entry=0x63fb30, filename=filename@entry=0x7fffffffcb47 \"generate_table.py\", start=start@entry=257, 
globals=globals@entry=0x7ffff7f59168,\n locals=locals@entry=0x7ffff7f59168, closeit=closeit@entry=1, flags=0x7fffffffc550) at Python/pythonrun.c:1357\n mod = <optimized out>\n arena = 0x673c60\n#29 0x00007ffff7b05797 in PyRun_SimpleFileExFlags (fp=fp@entry=0x63fb30, filename=0x7fffffffcb47 \"generate_table.py\", closeit=1, flags=flags@entry=0x7fffffffc550)\n---Type <return> to continue, or q <return> to quit---\n at Python/pythonrun.c:949\n m = 0x7ffff7f45be8\n d = 0x7ffff7f59168\n v = <optimized out>\n ext = 0x7fffffffcb54 \"e.py\"\n set_file_name = 1\n len = <optimized out>\n ret = -1\n#30 0x00007ffff7b05e53 in PyRun_AnyFileExFlags (fp=fp@entry=0x63fb30, filename=<optimized out>, closeit=<optimized out>, flags=flags@entry=0x7fffffffc550) at Python/pythonrun.c:753\nNo locals.\n#31 0x00007ffff7b1c041 in Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:640\n c = <optimized out>\n sts = <optimized out>\n command = 0x0\n filename = 0x7fffffffcb47 \"generate_table.py\"\n module = 0x0\n fp = 0x63fb30\n p = <optimized out>\n unbuffered = 0\n skipfirstline = 0\n stdin_is_interactive = 1\n help = <optimized out>\n version = <optimized out>\n saw_unbuffered_flag = <optimized out>\n cf = {cf_flags = 0}\n#32 0x00007ffff740e830 in __libc_start_main (main=0x4006b0 <main>, argc=8, argv=0x7fffffffc718, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>,\n stack_end=0x7fffffffc708) at ../csu/libc-start.c:291\n result = <optimized out>\n unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -52366050721878219, 4196032, 140737488340752, 0, 0, 52365500628250421, 52384425274421045}, mask_was_saved = 0}}, priv = {pad = {\n 0x0, 0x0, 0x8, 0x4006b0 <main>}, data = {prev = 0x0, cleanup = 0x0, canceltype = 8}}}\n not_first_call = <optimized out>\n#33 0x00000000004006e9 in _start ()\nNo symbol table info available.\n}}}",
"reporter": "peller",
"cc": "",
"resolution": "fixed",
"time": "2017-04-14T16:28:03",
"component": "combo simulation",
"summary": "clsim tablemaker deadlock",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
from /home/peller/build/simulation/trunk/lib/icecube/dataclasses.so
No symbol table info available.
#15 0x00007ffff5250c9d in boost::python::handle_exception_impl(boost::function0<void>) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
No symbol table info available.
#16 0x00007ffff5246059 in function_call () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0
No symbol table info available.
#17 0x00007ffff7a27333 in PyObject_Call (func=func@entry=0x74c350, arg=arg@entry=0x7fffcfdb62d0, kw=kw@entry=0x0) at Objects/abstract.c:2529
result = <optimized out>
call = 0x7ffff5245ff0 <function_call>
#18 0x00007ffff7add212 in do_call (nk=<optimized out>, na=1, pp_stack=0x7fffffffc140, func=0x74c350) at Python/ceval.c:4253
callargs = <optimized out>
kwdict = <optimized out>
result = 0x0
#19 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc140) at Python/ceval.c:4058
func = 0x74c350
w = <optimized out>
na = 1
nk = <optimized out>
n = <optimized out>
pfunc = 0x782178
x = <optimized out>
#20 PyEval_EvalFrameEx (f=f@entry=0x781ff0, throwflag=throwflag@entry=0) at Python/ceval.c:2681
sp = 0x782178
stack_pointer = <optimized out>
next_instr = <optimized out>
opcode = <optimized out>
oparg = <optimized out>
why = WHY_NOT
err = 0
x = <optimized out>
v = <optimized out>
w = <optimized out>
u = <optimized out>
t = <optimized out>
stream = 0x0
fastlocals = 0x782168
freevars = <optimized out>
retval = <optimized out>
tstate = <optimized out>
co = <optimized out>
instr_ub = -1
instr_lb = 0
instr_prev = -1
first_instr = <optimized out>
names = <optimized out>
consts = <optimized out>
#21 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=1,
kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267
f = 0x781ff0
retval = 0x0
fastlocals = 0x782168
freevars = 0x782178
tstate = 0x6020a0
x = <optimized out>
u = <optimized out>
#22 0x00007ffff7adf05a in fast_function (nk=0, na=1, n=<optimized out>, pp_stack=0x7fffffffc330, func=0x7fffdad056e0) at Python/ceval.c:4131
co = <optimized out>
nd = <optimized out>
globals = <optimized out>
argdefs = <optimized out>
d = <optimized out>
#23 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc330) at Python/ceval.c:4056
func = 0x7fffdad056e0
w = <optimized out>
na = 1
nk = 0
n = <optimized out>
pfunc = 0x685898
x = <optimized out>
#24 PyEval_EvalFrameEx (f=f@entry=0x685720, throwflag=throwflag@entry=0) at Python/ceval.c:2681
sp = 0x6858a0
stack_pointer = <optimized out>
next_instr = <optimized out>
opcode = <optimized out>
oparg = <optimized out>
why = WHY_NOT
err = 0
x = <optimized out>
v = <optimized out>
w = <optimized out>
u = <optimized out>
t = <optimized out>
stream = 0x0
fastlocals = 0x685898
freevars = <optimized out>
retval = <optimized out>
tstate = <optimized out>
co = <optimized out>
instr_ub = -1
instr_lb = 0
instr_prev = -1
first_instr = <optimized out>
names = <optimized out>
consts = <optimized out>
#25 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168, args=args@entry=0x0,
argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267
f = 0x685720
retval = 0x0
fastlocals = 0x685898
freevars = 0x685898
tstate = 0x6020a0
x = <optimized out>
u = <optimized out>
#26 0x00007ffff7ae0389 in PyEval_EvalCode (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168) at Python/ceval.c:669
No locals.
#27 0x00007ffff7b0429a in run_mod (arena=0x673c60, flags=0x7fffffffc550, locals=0x7ffff7f59168, globals=0x7ffff7f59168,
filename=0x63fb30 "/home/peller/build/simulation/trunk/lib/libdataio.so", mod=<optimized out>) at Python/pythonrun.c:1371
co = 0x7ffff7ece8b0
v = <optimized out>
#28 PyRun_FileExFlags (fp=fp@entry=0x63fb30, filename=filename@entry=0x7fffffffcb47 "generate_table.py", start=start@entry=257, globals=globals@entry=0x7ffff7f59168,
locals=locals@entry=0x7ffff7f59168, closeit=closeit@entry=1, flags=0x7fffffffc550) at Python/pythonrun.c:1357
mod = <optimized out>
arena = 0x673c60
#29 0x00007ffff7b05797 in PyRun_SimpleFileExFlags (fp=fp@entry=0x63fb30, filename=0x7fffffffcb47 "generate_table.py", closeit=1, flags=flags@entry=0x7fffffffc550)
---Type <return> to continue, or q <return> to quit---
at Python/pythonrun.c:949
m = 0x7ffff7f45be8
d = 0x7ffff7f59168
v = <optimized out>
ext = 0x7fffffffcb54 "e.py"
set_file_name = 1
len = <optimized out>
ret = -1
#30 0x00007ffff7b05e53 in PyRun_AnyFileExFlags (fp=fp@entry=0x63fb30, filename=<optimized out>, closeit=<optimized out>, flags=flags@entry=0x7fffffffc550) at Python/pythonrun.c:753
No locals.
#31 0x00007ffff7b1c041 in Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:640
c = <optimized out>
sts = <optimized out>
command = 0x0
filename = 0x7fffffffcb47 "generate_table.py"
module = 0x0
fp = 0x63fb30
p = <optimized out>
unbuffered = 0
skipfirstline = 0
stdin_is_interactive = 1
help = <optimized out>
version = <optimized out>
saw_unbuffered_flag = <optimized out>
cf = {cf_flags = 0}
#32 0x00007ffff740e830 in __libc_start_main (main=0x4006b0 <main>, argc=8, argv=0x7fffffffc718, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>,
stack_end=0x7fffffffc708) at ../csu/libc-start.c:291
result = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -52366050721878219, 4196032, 140737488340752, 0, 0, 52365500628250421, 52384425274421045}, mask_was_saved = 0}}, priv = {pad = {
              0x0, 0x0, 0x8, 0x4006b0 <main>}, data = {prev = 0x0, cleanup = 0x0, canceltype = 8}}}
not_first_call = <optimized out>
#33 0x00000000004006e9 in _start ()
No symbol table info available.
```
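Frame #1 parked forever in `pthread_cond_wait` at 0% CPU is the classic shape of a lost wakeup: the waiter blocks after the matching notification has already fired (or the predicate is never re-checked), so nothing ever wakes it. A minimal sketch of the predicate-guarded wait that avoids this, in Python rather than the actual clsim C++ code — `guarded_get` and the queue names here are illustrative, not part of icetray:

```python
import threading

def guarded_get(queue, cond, timeout=1.0):
    # Wait on the predicate, not on the bare condition: a plain wait()
    # issued after notify() has already fired would block forever (the
    # lost-wakeup hang seen in the I3CLSimTabulatorModule backtrace).
    # wait_for() re-checks the predicate and also bounds the wait.
    with cond:
        if not cond.wait_for(lambda: len(queue) > 0, timeout=timeout):
            raise TimeoutError("no item arrived within timeout")
        return queue.pop(0)

queue, cond = [], threading.Condition()
with cond:
    queue.append("photon-bunch")
    cond.notify()  # fires before any waiter exists -- would be "lost"

# Still returns, because the waiter checks the predicate first.
item = guarded_get(queue, cond)
```

A bounded wait (`pthread_cond_timedwait` in C) with a re-checked predicate also turns a silent hang into a diagnosable timeout, which is usually the pragmatic fix for intermittent stalls like the 5-10% job rate reported here.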
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1984">https://code.icecube.wisc.edu/projects/icecube/ticket/1984</a>, reported by peller</summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:13:35",
"_ts": "1550067215093672",
"description": "When running the clsim tablemaker, once in a while (maybe 5-10% of my jobs) get stuck at a random time and wait forever at 0% CPU.\n\nRunning on Ubuntu16.04 w/ Intel openCL runtime and Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz\n\nGDB revealed that it gets stuck at:\n\n{{{\npthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185\n185 ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S: No such file or directory.\n}}}\n\n\nbacktrace:\n\n{{{\n(gdb) backtrace\n#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185\n#1 0x00007fffe1d9a50f in I3CLSimTabulatorModule::DAQ(boost::shared_ptr<I3Frame>) () from /home/peller/build/simulation/trunk/lib/libclsim.so\n#2 0x00007ffff583768e in I3Module::Process() () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#3 0x00007ffff583a689 in I3Module::Process_() () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#4 0x00007ffff58351ad in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#5 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#6 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#7 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#8 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#9 0x00007ffff57d79a9 in I3Tray::Execute(unsigned int) () from /home/peller/build/simulation/trunk/lib/libicetray.so\n#10 0x00007ffff592d56e in boost::python::objects::caller_py_function_impl<boost::python::detail::caller<void (I3Tray::*)(), boost::python::default_call_policies, boost::mpl::vector2<void, I3Tray&> > >::operator()(_object*, _object*) () from 
/home/peller/build/simulation/trunk/lib/libicetray.so\n#11 0x00007ffff5248c8d in boost::python::objects::function::call(_object*, _object*) const ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#12 0x00007ffff5248e88 in boost::detail::function::void_function_ref_invoker0<boost::python::objects::(anonymous namespace)::bind_return, void>::invoke(boost::detail::function::function_buffer&) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#13 0x00007ffff5250ee3 in boost::python::detail::exception_handler::operator()(boost::function0<void> const&) const ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#14 0x00007fffed72e263 in boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<not_found_exception, void (*)(not_found_exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(not_found_exception const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke(boost::detail::function::function_buffer&, boost::python::detail::exception_handler const&, boost::function0<void> const&) () from /home/peller/build/simulation/trunk/lib/icecube/dataclasses.so\n#15 0x00007ffff5250c9d in boost::python::handle_exception_impl(boost::function0<void>) ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#16 0x00007ffff5246059 in function_call () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\n#17 0x00007ffff7a27333 in PyObject_Call (func=func@entry=0x74c350, arg=arg@entry=0x7fffcfdb62d0, kw=kw@entry=0x0) at Objects/abstract.c:2529\n#18 0x00007ffff7add212 in do_call (nk=<optimized out>, na=1, pp_stack=0x7fffffffc140, func=0x74c350) at Python/ceval.c:4253\n#19 call_function (oparg=<optimized out>, 
pp_stack=0x7fffffffc140) at Python/ceval.c:4058\n#20 PyEval_EvalFrameEx (f=f@entry=0x781ff0, throwflag=throwflag@entry=0) at Python/ceval.c:2681\n#21 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=1,\n kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267\n#22 0x00007ffff7adf05a in fast_function (nk=0, na=1, n=<optimized out>, pp_stack=0x7fffffffc330, func=0x7fffdad056e0) at Python/ceval.c:4131\n#23 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc330) at Python/ceval.c:4056\n#24 PyEval_EvalFrameEx (f=f@entry=0x685720, throwflag=throwflag@entry=0) at Python/ceval.c:2681\n#25 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168,\n args=args@entry=0x0, argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267\n#26 0x00007ffff7ae0389 in PyEval_EvalCode (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168) at Python/ceval.c:669\n#27 0x00007ffff7b0429a in run_mod (arena=0x673c60, flags=0x7fffffffc550, locals=0x7ffff7f59168, globals=0x7ffff7f59168,\n filename=0x63fb30 \"/home/peller/build/simulation/trunk/lib/libdataio.so\", mod=<optimized out>) at Python/pythonrun.c:1371\n#28 PyRun_FileExFlags (fp=fp@entry=0x63fb30, filename=filename@entry=0x7fffffffcb47 \"generate_table.py\", start=start@entry=257, globals=globals@entry=0x7ffff7f59168,\n locals=locals@entry=0x7ffff7f59168, closeit=closeit@entry=1, flags=0x7fffffffc550) at Python/pythonrun.c:1357\n#29 0x00007ffff7b05797 in PyRun_SimpleFileExFlags (fp=fp@entry=0x63fb30, filename=0x7fffffffcb47 \"generate_table.py\", closeit=1, flags=flags@entry=0x7fffffffc550)\n at Python/pythonrun.c:949\n#30 0x00007ffff7b05e53 in PyRun_AnyFileExFlags (fp=fp@entry=0x63fb30, filename=<optimized 
out>, closeit=<optimized out>, flags=flags@entry=0x7fffffffc550)\n at Python/pythonrun.c:753\n#31 0x00007ffff7b1c041 in Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:640\n#32 0x00007ffff740e830 in __libc_start_main (main=0x4006b0 <main>, argc=8, argv=0x7fffffffc718, init=<optimized out>, fini=<optimized out>,\n rtld_fini=<optimized out>, stack_end=0x7fffffffc708) at ../csu/libc-start.c:291\n#33 0x00000000004006e9 in _start ()\n}}}\n\nfull backtrace:\n{{{\n(gdb) backtrace full\n#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185\nNo locals.\n#1 0x00007fffe1d9a50f in I3CLSimTabulatorModule::DAQ(boost::shared_ptr<I3Frame>) () from /home/peller/build/simulation/trunk/lib/libclsim.so\nNo symbol table info available.\n#2 0x00007ffff583768e in I3Module::Process() () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#3 0x00007ffff583a689 in I3Module::Process_() () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#4 0x00007ffff58351ad in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#5 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#6 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#7 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#8 0x00007ffff5835269 in I3Module::Do(void (I3Module::*)()) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#9 0x00007ffff57d79a9 in I3Tray::Execute(unsigned int) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info 
available.\n#10 0x00007ffff592d56e in boost::python::objects::caller_py_function_impl<boost::python::detail::caller<void (I3Tray::*)(), boost::python::default_call_policies, boost::mpl::vector2<void, I3Tray&> > >::operator()(_object*, _object*) () from /home/peller/build/simulation/trunk/lib/libicetray.so\nNo symbol table info available.\n#11 0x00007ffff5248c8d in boost::python::objects::function::call(_object*, _object*) const ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#12 0x00007ffff5248e88 in boost::detail::function::void_function_ref_invoker0<boost::python::objects::(anonymous namespace)::bind_return, void>::invoke(boost::detail::function::function_buffer&) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#13 0x00007ffff5250ee3 in boost::python::detail::exception_handler::operator()(boost::function0<void> const&) const ()\n from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#14 0x00007fffed72e263 in boost::detail::function::function_obj_invoker2<boost::_bi::bind_t<bool, boost::python::detail::translate_exception<not_found_exception, void (*)(not_found_exception const&)>, boost::_bi::list3<boost::arg<1>, boost::arg<2>, boost::_bi::value<void (*)(not_found_exception const&)> > >, bool, boost::python::detail::exception_handler const&, boost::function0<void> const&>::invoke(boost::detail::function::function_buffer&, boost::python::detail::exception_handler const&, boost::function0<void> const&) ()\n from /home/peller/build/simulation/trunk/lib/icecube/dataclasses.so\nNo symbol table info available.\n#15 0x00007ffff5250c9d in boost::python::handle_exception_impl(boost::function0<void>) () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#16 
0x00007ffff5246059 in function_call () from /cvmfs/icecube.opensciencegrid.org/py2-v2/Ubuntu_16_x86_64/lib/libboost_python.so.1.57.0\nNo symbol table info available.\n#17 0x00007ffff7a27333 in PyObject_Call (func=func@entry=0x74c350, arg=arg@entry=0x7fffcfdb62d0, kw=kw@entry=0x0) at Objects/abstract.c:2529\n result = <optimized out>\n call = 0x7ffff5245ff0 <function_call>\n#18 0x00007ffff7add212 in do_call (nk=<optimized out>, na=1, pp_stack=0x7fffffffc140, func=0x74c350) at Python/ceval.c:4253\n callargs = <optimized out>\n kwdict = <optimized out>\n result = 0x0\n#19 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc140) at Python/ceval.c:4058\n func = 0x74c350\n w = <optimized out>\n na = 1\n nk = <optimized out>\n n = <optimized out>\n pfunc = 0x782178\n x = <optimized out>\n#20 PyEval_EvalFrameEx (f=f@entry=0x781ff0, throwflag=throwflag@entry=0) at Python/ceval.c:2681\n sp = 0x782178\n stack_pointer = <optimized out>\n next_instr = <optimized out>\n opcode = <optimized out>\n oparg = <optimized out>\n why = WHY_NOT\n err = 0\n x = <optimized out>\n v = <optimized out>\n w = <optimized out>\n u = <optimized out>\n t = <optimized out>\n stream = 0x0\n fastlocals = 0x782168\n freevars = <optimized out>\n retval = <optimized out>\n tstate = <optimized out>\n co = <optimized out>\n instr_ub = -1\n instr_lb = 0\n instr_prev = -1\n first_instr = <optimized out>\n names = <optimized out>\n consts = <optimized out>\n#21 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=<optimized out>, globals=<optimized out>, locals=locals@entry=0x0, args=<optimized out>, argcount=argcount@entry=1,\n kws=<optimized out>, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267\n f = 0x781ff0\n retval = 0x0\n fastlocals = 0x782168\n freevars = 0x782178\n tstate = 0x6020a0\n x = <optimized out>\n u = <optimized out>\n#22 0x00007ffff7adf05a in fast_function (nk=0, na=1, n=<optimized out>, pp_stack=0x7fffffffc330, func=0x7fffdad056e0) at Python/ceval.c:4131\n co = <optimized 
out>\n nd = <optimized out>\n globals = <optimized out>\n argdefs = <optimized out>\n d = <optimized out>\n#23 call_function (oparg=<optimized out>, pp_stack=0x7fffffffc330) at Python/ceval.c:4056\n func = 0x7fffdad056e0\n w = <optimized out>\n na = 1\n nk = 0\n n = <optimized out>\n pfunc = 0x685898\n x = <optimized out>\n#24 PyEval_EvalFrameEx (f=f@entry=0x685720, throwflag=throwflag@entry=0) at Python/ceval.c:2681\n sp = 0x6858a0\n stack_pointer = <optimized out>\n next_instr = <optimized out>\n opcode = <optimized out>\n oparg = <optimized out>\n why = WHY_NOT\n err = 0\n x = <optimized out>\n v = <optimized out>\n w = <optimized out>\n u = <optimized out>\n t = <optimized out>\n stream = 0x0\n fastlocals = 0x685898\n freevars = <optimized out>\n retval = <optimized out>\n tstate = <optimized out>\n co = <optimized out>\n instr_ub = -1\n instr_lb = 0\n instr_prev = -1\n first_instr = <optimized out>\n names = <optimized out>\n consts = <optimized out>\n#25 0x00007ffff7ae026c in PyEval_EvalCodeEx (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168, args=args@entry=0x0,\n argcount=argcount@entry=0, kws=kws@entry=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:3267\n f = 0x685720\n retval = 0x0\n fastlocals = 0x685898\n freevars = 0x685898\n tstate = 0x6020a0\n x = <optimized out>\n u = <optimized out>\n#26 0x00007ffff7ae0389 in PyEval_EvalCode (co=co@entry=0x7ffff7ece8b0, globals=globals@entry=0x7ffff7f59168, locals=locals@entry=0x7ffff7f59168) at Python/ceval.c:669\nNo locals.\n#27 0x00007ffff7b0429a in run_mod (arena=0x673c60, flags=0x7fffffffc550, locals=0x7ffff7f59168, globals=0x7ffff7f59168,\n filename=0x63fb30 \"/home/peller/build/simulation/trunk/lib/libdataio.so\", mod=<optimized out>) at Python/pythonrun.c:1371\n co = 0x7ffff7ece8b0\n v = <optimized out>\n#28 PyRun_FileExFlags (fp=fp@entry=0x63fb30, filename=filename@entry=0x7fffffffcb47 \"generate_table.py\", start=start@entry=257, 
globals=globals@entry=0x7ffff7f59168,\n locals=locals@entry=0x7ffff7f59168, closeit=closeit@entry=1, flags=0x7fffffffc550) at Python/pythonrun.c:1357\n mod = <optimized out>\n arena = 0x673c60\n#29 0x00007ffff7b05797 in PyRun_SimpleFileExFlags (fp=fp@entry=0x63fb30, filename=0x7fffffffcb47 \"generate_table.py\", closeit=1, flags=flags@entry=0x7fffffffc550)\n---Type <return> to continue, or q <return> to quit---\n at Python/pythonrun.c:949\n m = 0x7ffff7f45be8\n d = 0x7ffff7f59168\n v = <optimized out>\n ext = 0x7fffffffcb54 \"e.py\"\n set_file_name = 1\n len = <optimized out>\n ret = -1\n#30 0x00007ffff7b05e53 in PyRun_AnyFileExFlags (fp=fp@entry=0x63fb30, filename=<optimized out>, closeit=<optimized out>, flags=flags@entry=0x7fffffffc550) at Python/pythonrun.c:753\nNo locals.\n#31 0x00007ffff7b1c041 in Py_Main (argc=<optimized out>, argv=<optimized out>) at Modules/main.c:640\n c = <optimized out>\n sts = <optimized out>\n command = 0x0\n filename = 0x7fffffffcb47 \"generate_table.py\"\n module = 0x0\n fp = 0x63fb30\n p = <optimized out>\n unbuffered = 0\n skipfirstline = 0\n stdin_is_interactive = 1\n help = <optimized out>\n version = <optimized out>\n saw_unbuffered_flag = <optimized out>\n cf = {cf_flags = 0}\n#32 0x00007ffff740e830 in __libc_start_main (main=0x4006b0 <main>, argc=8, argv=0x7fffffffc718, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>,\n stack_end=0x7fffffffc708) at ../csu/libc-start.c:291\n result = <optimized out>\n unwind_buf = {cancel_jmp_buf = {{jmp_buf = {0, -52366050721878219, 4196032, 140737488340752, 0, 0, 52365500628250421, 52384425274421045}, mask_was_saved = 0}}, priv = {pad = {\n 0x0, 0x0, 0x8, 0x4006b0 <main>}, data = {prev = 0x0, cleanup = 0x0, canceltype = 8}}}\n not_first_call = <optimized out>\n#33 0x00000000004006e9 in _start ()\nNo symbol table info available.\n}}}",
"reporter": "peller",
"cc": "",
"resolution": "fixed",
"time": "2017-04-14T16:28:03",
"component": "combo simulation",
"summary": "clsim tablemaker deadlock",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
defect
|
clsim tablemaker deadlock trac when running the clsim tablemaker once in a while maybe of my jobs get stuck at a random time and wait forever at cpu running on w intel opencl runtime and intel r xeon r cpu gdb revealed that it gets stuck at text pthread cond wait glibc at sysdeps unix sysv linux pthread cond wait s sysdeps unix sysv linux pthread cond wait s no such file or directory backtrace text gdb backtrace pthread cond wait glibc at sysdeps unix sysv linux pthread cond wait s in daq boost shared ptr from home peller build simulation trunk lib libclsim so in process from home peller build simulation trunk lib libicetray so in process from home peller build simulation trunk lib libicetray so in do void from home peller build simulation trunk lib libicetray so in do void from home peller build simulation trunk lib libicetray so in do void from home peller build simulation trunk lib libicetray so in do void from home peller build simulation trunk lib libicetray so in do void from home peller build simulation trunk lib libicetray so in execute unsigned int from home peller build simulation trunk lib libicetray so in boost python objects caller py function impl operator object object from home peller build simulation trunk lib libicetray so in boost python objects function call object object const from cvmfs icecube opensciencegrid org ubuntu lib libboost python so in boost detail function void function ref invoke boost detail function function buffer from cvmfs icecube opensciencegrid org ubuntu lib libboost python so in boost python detail exception handler operator boost const const from cvmfs icecube opensciencegrid org ubuntu lib libboost python so in boost detail function function obj boost bi boost arg boost bi value bool boost python detail exception handler const boost const invoke boost detail function function buffer boost python detail exception handler const boost const from home peller build simulation trunk lib icecube dataclasses so in boost python 
handle exception impl boost from cvmfs icecube opensciencegrid org ubuntu lib libboost python so in function call from cvmfs icecube opensciencegrid org ubuntu lib libboost python so in pyobject call func func entry arg arg entry kw kw entry at objects abstract c in do call nk na pp stack func at python ceval c call function oparg pp stack at python ceval c pyeval evalframeex f f entry throwflag throwflag entry at python ceval c in pyeval evalcodeex co globals locals locals entry args argcount argcount entry kws kwcount defs defcount closure at python ceval c in fast function nk na n pp stack func at python ceval c call function oparg pp stack at python ceval c pyeval evalframeex f f entry throwflag throwflag entry at python ceval c in pyeval evalcodeex co co entry globals globals entry locals locals entry args args entry argcount argcount entry kws kws entry kwcount defs defcount closure at python ceval c in pyeval evalcode co co entry globals globals entry locals locals entry at python ceval c in run mod arena flags locals globals filename home peller build simulation trunk lib libdataio so mod at python pythonrun c pyrun fileexflags fp fp entry filename filename entry generate table py start start entry globals globals entry locals locals entry closeit closeit entry flags at python pythonrun c in pyrun simplefileexflags fp fp entry filename generate table py closeit flags flags entry at python pythonrun c in pyrun anyfileexflags fp fp entry filename closeit flags flags entry at python pythonrun c in py main argc argv at modules main c in libc start main main argc argv init fini rtld fini stack end at csu libc start c in start full backtrace text gdb backtrace full pthread cond wait glibc at sysdeps unix sysv linux pthread cond wait s no locals in daq boost shared ptr from home peller build simulation trunk lib libclsim so no symbol table info available in process from home peller build simulation trunk lib libicetray so no symbol table info available in process 
from home peller build simulation trunk lib libicetray so no symbol table info available in do void from home peller build simulation trunk lib libicetray so no symbol table info available in do void from home peller build simulation trunk lib libicetray so no symbol table info available in do void from home peller build simulation trunk lib libicetray so no symbol table info available in do void from home peller build simulation trunk lib libicetray so no symbol table info available in do void from home peller build simulation trunk lib libicetray so no symbol table info available in execute unsigned int from home peller build simulation trunk lib libicetray so no symbol table info available in boost python objects caller py function impl operator object object from home peller build simulation trunk lib libicetray so no symbol table info available in boost python objects function call object object const from cvmfs icecube opensciencegrid org ubuntu lib libboost python so no symbol table info available in boost detail function void function ref invoke boost detail function function buffer from cvmfs icecube opensciencegrid org ubuntu lib libboost python so no symbol table info available in boost python detail exception handler operator boost const const from cvmfs icecube opensciencegrid org ubuntu lib libboost python so no symbol table info available in boost detail function function obj boost bi boost arg boost bi value bool boost python detail exception handler const boost const invoke boost detail function function buffer boost python detail exception handler const boost const from home peller build simulation trunk lib icecube dataclasses so no symbol table info available in boost python handle exception impl boost from cvmfs icecube opensciencegrid org ubuntu lib libboost python so no symbol table info available in function call from cvmfs icecube opensciencegrid org ubuntu lib libboost python so no symbol table info available in pyobject call func func 
entry arg arg entry kw kw entry at objects abstract c result call in do call nk na pp stack func at python ceval c callargs kwdict result call function oparg pp stack at python ceval c func w na nk n pfunc x pyeval evalframeex f f entry throwflag throwflag entry at python ceval c sp stack pointer next instr opcode oparg why why not err x v w u t stream fastlocals freevars retval tstate co instr ub instr lb instr prev first instr names consts in pyeval evalcodeex co globals locals locals entry args argcount argcount entry kws kwcount defs defcount closure at python ceval c f retval fastlocals freevars tstate x u in fast function nk na n pp stack func at python ceval c co nd globals argdefs d call function oparg pp stack at python ceval c func w na nk n pfunc x pyeval evalframeex f f entry throwflag throwflag entry at python ceval c sp stack pointer next instr opcode oparg why why not err x v w u t stream fastlocals freevars retval tstate co instr ub instr lb instr prev first instr names consts in pyeval evalcodeex co co entry globals globals entry locals locals entry args args entry argcount argcount entry kws kws entry kwcount defs defcount closure at python ceval c f retval fastlocals freevars tstate x u in pyeval evalcode co co entry globals globals entry locals locals entry at python ceval c no locals in run mod arena flags locals globals filename home peller build simulation trunk lib libdataio so mod at python pythonrun c co v pyrun fileexflags fp fp entry filename filename entry generate table py start start entry globals globals entry locals locals entry closeit closeit entry flags at python pythonrun c mod arena in pyrun simplefileexflags fp fp entry filename generate table py closeit flags flags entry type to continue or q to quit at python pythonrun c m d v ext e py set file name len ret in pyrun anyfileexflags fp fp entry filename closeit flags flags entry at python pythonrun c no locals in py main argc argv at modules main c c sts command filename 
generate table py module fp p unbuffered skipfirstline stdin is interactive help version saw unbuffered flag cf cf flags in libc start main main argc argv init fini rtld fini stack end at csu libc start c result unwind buf cancel jmp buf jmp buf mask was saved priv pad data prev cleanup canceltype not first call in start no symbol table info available migrated from json status closed changetime ts description when running the clsim tablemaker once in a while maybe of my jobs get stuck at a random time and wait forever at cpu running on intel opencl runtime and intel r xeon r cpu gdb revealed that it gets stuck at pthread cond wait glibc at sysdeps unix sysv linux pthread cond wait s no such file or directory reporter peller cc resolution fixed time component combo simulation summary clsim tablemaker deadlock priority normal keywords milestone owner type defect
| 1
|
71,426
| 23,622,188,913
|
IssuesEvent
|
2022-08-24 21:52:14
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
SelectOneRadio: Labels broken in custom layout if item index is not enforced since 12.0.0-RC3
|
:lady_beetle: defect :bangbang: needs-triage
|
### Describe the bug
Since PF 12.0.0-RC3, selectOneRadio with a custom layout has issues with the labels if the item indexes are not in sequential order.
Take for instance the showcase example with custom layout and reorder some of the <p:radioButton> elements: the labels will be broken.
This used to work in 11.0.0.
I'm not really sure which commit broke this; if you can point me in some direction, I can try to put up a PR.
### Reproducer
<h5>Custom Layout (Facet)</h5>
<p:selectOneRadio id="customRadio">
<f:selectItem itemLabel="Red" itemValue="Red"/>
<f:selectItem itemLabel="Green" itemValue="Green"/>
<f:selectItem itemLabel="Blue" itemValue="Blue"/>
<f:facet name="custom">
<span class="field-radiobutton" role="radio">
<p:radioButton id="facet2" for="customRadio" itemIndex="1"/>
<p:outputLabel for="facet2">
<span class="legend" style="background:green"/> Green
</p:outputLabel>
</span>
<span class="field-radiobutton" role="radio">
<p:radioButton id="facet1" for="customRadio" itemIndex="0"/>
<p:outputLabel for="facet1">
<span class="legend" style="background:red"/> Red
</p:outputLabel>
</span>
<span class="field-radiobutton" role="radio">
<p:radioButton id="facet3" for="customRadio" itemIndex="2"/>
<p:outputLabel for="facet3">
<span class="legend" style="background:blue"/> Blue
</p:outputLabel>
</span>
</f:facet>
</p:selectOneRadio>
### Expected behavior
Labels should point to the correct radio button
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0-RC-3
### Theme
_No response_
### JSF implementation
MyFaces
### JSF version
2.3-NEXT-M6
### Java version
18.0.2
### Browser(s)
_No response_
|
1.0
|
SelectOneRadio: Labels broken in custom layout if item index is not enforced since 12.0.0-RC3 - ### Describe the bug
Since PF 12.0.0-RC3, selectOneRadio with a custom layout has issues with the labels if the item indexes are not in sequential order.
Take for instance the showcase example with custom layout and reorder some of the <p:radioButton> elements: the labels will be broken.
This used to work in 11.0.0.
I'm not really sure which commit broke this; if you can point me in some direction, I can try to put up a PR.
### Reproducer
<h5>Custom Layout (Facet)</h5>
<p:selectOneRadio id="customRadio">
<f:selectItem itemLabel="Red" itemValue="Red"/>
<f:selectItem itemLabel="Green" itemValue="Green"/>
<f:selectItem itemLabel="Blue" itemValue="Blue"/>
<f:facet name="custom">
<span class="field-radiobutton" role="radio">
<p:radioButton id="facet2" for="customRadio" itemIndex="1"/>
<p:outputLabel for="facet2">
<span class="legend" style="background:green"/> Green
</p:outputLabel>
</span>
<span class="field-radiobutton" role="radio">
<p:radioButton id="facet1" for="customRadio" itemIndex="0"/>
<p:outputLabel for="facet1">
<span class="legend" style="background:red"/> Red
</p:outputLabel>
</span>
<span class="field-radiobutton" role="radio">
<p:radioButton id="facet3" for="customRadio" itemIndex="2"/>
<p:outputLabel for="facet3">
<span class="legend" style="background:blue"/> Blue
</p:outputLabel>
</span>
</f:facet>
</p:selectOneRadio>
### Expected behavior
Labels should point to the correct radio button
### PrimeFaces edition
Community
### PrimeFaces version
12.0.0-RC-3
### Theme
_No response_
### JSF implementation
MyFaces
### JSF version
2.3-NEXT-M6
### Java version
18.0.2
### Browser(s)
_No response_
|
defect
|
selectoneradio labels broken in custom layout if item index is not enforced since describe the bug since pf selectoneradio with a custom layout has issues with the labels if the item indexes are not in sequential order take for instance the showcase example with custom layout and reorder some of the elements the labels will be broken this used to work in i m not really sure which commit broke this if you can point me in some direction i can try to put up a pr reproducer custom layout facet green red blue expected behavior labels should point to the correct radio button primefaces edition community primefaces version rc theme no response jsf implementation myfaces jsf version next java version browser s no response
| 1
|
62,511
| 17,023,936,818
|
IssuesEvent
|
2021-07-03 04:39:34
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
editing uploaded trace details via /trace/123456/edit fails in Firefox
|
Component: website Priority: minor Resolution: worksforme Type: defect
|
**[Submitted to the original trac issue database at 7.51pm, Thursday, 16th February 2017]**
(see https://help.openstreetmap.org/questions/54631/edit-uploaded-gps-track-description-visibility-or-tags for the initial user report)
When I try to change (in Firefox 51.0.1) the description or the visibility on https://www.openstreetmap.org/trace/123456/edit (it is one of my uploaded traces; placeholder number here) and then "save" the page reloads and the values are the same as before.
In the Firefox console I inspected the issued POST request to /trace/123456/edit when I edited the description: the sent parameters contain the changed "description", the server's answer to the POST is "200 OK" and the webpage with the form containing the old values is returned. No errors were logged.
Thank you!
|
1.0
|
editing uploaded trace details via /trace/123456/edit fails in Firefox - **[Submitted to the original trac issue database at 7.51pm, Thursday, 16th February 2017]**
(see https://help.openstreetmap.org/questions/54631/edit-uploaded-gps-track-description-visibility-or-tags for the initial user report)
When I try to change (in Firefox 51.0.1) the description or the visibility on https://www.openstreetmap.org/trace/123456/edit (it is one of my uploaded traces; placeholder number here) and then "save" the page reloads and the values are the same as before.
In the Firefox console I inspected the issued POST request to /trace/123456/edit when I edited the description: the sent parameters contain the changed "description", the server's answer to the POST is "200 OK" and the webpage with the form containing the old values is returned. No errors were logged.
Thank you!
|
defect
|
editing uploaded trace details via trace edit fails in firefox see for the initial user report when i try to change in firefox the description or the visibility on it is one of my uploaded traces placeholder number here and then save the page reloads and the values are the same as before in the firefox console i inspected the issued post request to trace edit when i edited the description the sent parameters contain the changed description the server s answer to the post is ok and the webpage with the form containing the old values is returned no errors were logged thank you
| 1
|
270,754
| 23,533,807,277
|
IssuesEvent
|
2022-08-19 18:10:01
|
vegaprotocol/vega
|
https://api.github.com/repos/vegaprotocol/vega
|
closed
|
Fix data-node flickering tests
|
tests critical bug DatanodeV2
|
```
--- FAIL: TestOrders (0.08s)
--- FAIL: TestOrders/GetAll (0.00s)
orders_test.go:164:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:164
Error: Not equal:
expected: 44
actual : 41
Test: TestOrders/GetAll
--- FAIL: TestOrders/GetByOrderID (0.01s)
orders_test.go:172:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:172
Error: Not equal:
expected: entities.Order{ID:entities.OrderID{ID:"fe766f1953fe8a6ff4205968277041e0d1d08856240860347b311640229fa944"}, MarketID:entities.MarketID{ID:"bb"}, PartyID:entities.PartyID{ID:"720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a"}, Side:1, Price:10, Size:100, Remaining:10, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_7", Reason:0, Version:2, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160031000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160031000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160031000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x7}
actual : entities.Order{ID:entities.OrderID{ID:"fe766f1953fe8a6ff4205968277041e0d1d08856240860347b311640229fa944"}, MarketID:entities.MarketID{ID:"aa"}, PartyID:entities.PartyID{ID:"3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"}, Side:1, Price:10, Size:100, Remaining:60, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_8", Reason:0, Version:1, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160032000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160032000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160032000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158479000, time.Local), SeqNum:0x8}
Diff:
--- Expected
+++ Actual
@@ -5,6 +5,6 @@
MarketID: (entities.MarketID) {
- ID: (entities.ID) (len=2) "bb"
+ ID: (entities.ID) (len=2) "aa"
},
PartyID: (entities.PartyID) {
- ID: (entities.ID) (len=64) "720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a"
+ ID: (entities.ID) (len=64) "3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"
},
@@ -13,3 +13,3 @@
Size: (int64) 100,
- Remaining: (int64) 10,
+ Remaining: (int64) 60,
TimeInForce: (vega.Order_TimeInForce) 1,
@@ -17,5 +17,5 @@
Status: (vega.Order_Status) 1,
- Reference: (string) (len=14) "my_reference_7",
+ Reference: (string) (len=14) "my_reference_8",
Reason: (vega.OrderError) 0,
- Version: (int32) 2,
+ Version: (int32) 1,
PeggedOffset: (int32) 0,
@@ -25,3 +25,3 @@
CreatedAt: (time.Time) {
- wall: (uint64) 160031000,
+ wall: (uint64) 160032000,
ext: (int64) 63795115238,
@@ -1213,3 +1213,3 @@
UpdatedAt: (time.Time) {
- wall: (uint64) 160031000,
+ wall: (uint64) 160032000,
ext: (int64) 63795115243,
@@ -2401,3 +2401,3 @@
ExpiresAt: (time.Time) {
- wall: (uint64) 160031000,
+ wall: (uint64) 160032000,
ext: (int64) 63795115248,
@@ -3589,3 +3589,3 @@
VegaTime: (time.Time) {
- wall: (uint64) 158587000,
+ wall: (uint64) 158479000,
ext: (int64) 63795115238,
@@ -4776,3 +4776,3 @@
},
- SeqNum: (uint64) 7
+ SeqNum: (uint64) 8
}
Test: TestOrders/GetByOrderID
orders_test.go:172:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:172
Error: Not equal:
expected: entities.Order{ID:entities.OrderID{ID:"db591756964b5a95ed8e00b779f24aa15bd0ea0de185f658534470fbd965321c"}, MarketID:entities.MarketID{ID:"aa"}, PartyID:entities.PartyID{ID:"720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a"}, Side:1, Price:10, Size:100, Remaining:25, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_22", Reason:0, Version:1, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160049000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160049000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160049000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x16}
actual : entities.Order{ID:entities.OrderID{ID:"db591756964b5a95ed8e00b779f24aa15bd0ea0de185f658534470fbd965321c"}, MarketID:entities.MarketID{ID:"bb"}, PartyID:entities.PartyID{ID:"3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"}, Side:1, Price:10, Size:100, Remaining:10, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_23", Reason:0, Version:2, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160050000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160050000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160050000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x17}
Diff:
--- Expected
+++ Actual
@@ -5,6 +5,6 @@
MarketID: (entities.MarketID) {
- ID: (entities.ID) (len=2) "aa"
+ ID: (entities.ID) (len=2) "bb"
},
PartyID: (entities.PartyID) {
- ID: (entities.ID) (len=64) "720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a"
+ ID: (entities.ID) (len=64) "3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"
},
@@ -13,3 +13,3 @@
Size: (int64) 100,
- Remaining: (int64) 25,
+ Remaining: (int64) 10,
TimeInForce: (vega.Order_TimeInForce) 1,
@@ -17,5 +17,5 @@
Status: (vega.Order_Status) 1,
- Reference: (string) (len=15) "my_reference_22",
+ Reference: (string) (len=15) "my_reference_23",
Reason: (vega.OrderError) 0,
- Version: (int32) 1,
+ Version: (int32) 2,
PeggedOffset: (int32) 0,
@@ -25,3 +25,3 @@
CreatedAt: (time.Time) {
- wall: (uint64) 160049000,
+ wall: (uint64) 160050000,
ext: (int64) 63795115238,
@@ -1213,3 +1213,3 @@
UpdatedAt: (time.Time) {
- wall: (uint64) 160049000,
+ wall: (uint64) 160050000,
ext: (int64) 63795115243,
@@ -2401,3 +2401,3 @@
ExpiresAt: (time.Time) {
- wall: (uint64) 160049000,
+ wall: (uint64) 160050000,
ext: (int64) 63795115248,
@@ -4776,3 +4776,3 @@
},
- SeqNum: (uint64) 22
+ SeqNum: (uint64) 23
}
Test: TestOrders/GetByOrderID
orders_test.go:172:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:172
Error: Not equal:
expected: entities.Order{ID:entities.OrderID{ID:"aaa3220495b55a6ccba156958ead9b9d50474f06d8a5b1e6c946e7753f1b9e2d"}, MarketID:entities.MarketID{ID:"bb"}, PartyID:entities.PartyID{ID:"fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502"}, Side:1, Price:10, Size:100, Remaining:10, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_27", Reason:0, Version:2, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160054000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160054000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160054000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x1b}
actual : entities.Order{ID:entities.OrderID{ID:"aaa3220495b55a6ccba156958ead9b9d50474f06d8a5b1e6c946e7753f1b9e2d"}, MarketID:entities.MarketID{ID:"aa"}, PartyID:entities.PartyID{ID:"3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"}, Side:1, Price:10, Size:100, Remaining:25, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_26", Reason:0, Version:1, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160053000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160053000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160053000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x1a}
Diff:
--- Expected
+++ Actual
@@ -5,6 +5,6 @@
MarketID: (entities.MarketID) {
- ID: (entities.ID) (len=2) "bb"
+ ID: (entities.ID) (len=2) "aa"
},
PartyID: (entities.PartyID) {
- ID: (entities.ID) (len=64) "fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502"
+ ID: (entities.ID) (len=64) "3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"
},
@@ -13,3 +13,3 @@
Size: (int64) 100,
- Remaining: (int64) 10,
+ Remaining: (int64) 25,
TimeInForce: (vega.Order_TimeInForce) 1,
@@ -17,5 +17,5 @@
Status: (vega.Order_Status) 1,
- Reference: (string) (len=15) "my_reference_27",
+ Reference: (string) (len=15) "my_reference_26",
Reason: (vega.OrderError) 0,
- Version: (int32) 2,
+ Version: (int32) 1,
PeggedOffset: (int32) 0,
@@ -25,3 +25,3 @@
CreatedAt: (time.Time) {
- wall: (uint64) 160054000,
+ wall: (uint64) 160053000,
ext: (int64) 63795115238,
@@ -1213,3 +1213,3 @@
UpdatedAt: (time.Time) {
- wall: (uint64) 160054000,
+ wall: (uint64) 160053000,
ext: (int64) 63795115243,
@@ -2401,3 +2401,3 @@
ExpiresAt: (time.Time) {
- wall: (uint64) 160054000,
+ wall: (uint64) 160053000,
ext: (int64) 63795115248,
@@ -4776,3 +4776,3 @@
},
- SeqNum: (uint64) 27
+ SeqNum: (uint64) 26
}
Test: TestOrders/GetByOrderID
--- FAIL: TestOrders/GetByMarket (0.00s)
orders_test.go:188:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:188
Error: "[{{0ebac9a3a30bcb2fbfb70fe645611276e119f580b351609e31784a21d636fb8f} {aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_4 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160027 +0200 CEST 2022-08-03 11:20:43.160027 +0200 CEST 2022-08-03 11:20:48.160027 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=4)} {{2676309200d5866a8b1cfab34c3c12c59c4a8b46c8dba0da17b3b815c3afa6c2} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_0 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160019 +0200 CEST 2022-08-03 11:20:43.160019 +0200 CEST 2022-08-03 11:20:48.160019 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=0)} {{9231644a3a36a658358a76319bafed943862a5c16bf3e25d6822c2246f2f0339} {aa} {3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_20 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160047 +0200 CEST 2022-08-03 11:20:43.160047 +0200 CEST 2022-08-03 11:20:48.160047 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=20)} {{a959fdf8f5eeb4e95b6c41229e49b5f0a94afbcb71f9dc7cdc224734d69b7baa} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_24 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160051 +0200 CEST 2022-08-03 11:20:43.160051 +0200 CEST 2022-08-03 11:20:48.160051 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=24)} 
{{aff797feb2b969bb0ed99573cb466d8af72df1be000b70d3bbd176026d0e529d} {aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_16 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160043 +0200 CEST 2022-08-03 11:20:43.160043 +0200 CEST 2022-08-03 11:20:48.160043 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=16)} {{c8fe7b1a853453f4be1d314e915400f3cbf7c662ae94041b4e2da691cccc4bd5} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_12 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160038 +0200 CEST 2022-08-03 11:20:43.160038 +0200 CEST 2022-08-03 11:20:48.160038 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=12)} {{e1f42785b7713989559bea6a6afeb8d993fe2b6dc0115792fe1521a81feb0782} {aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_28 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160055 +0200 CEST 2022-08-03 11:20:43.160055 +0200 CEST 2022-08-03 11:20:48.160055 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=28)} {{212882050ad613b7949fa0ed02c3b53c939452cd8f9d38479ea4405969c66070} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_6 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.16003 +0200 CEST 2022-08-03 11:20:43.16003 +0200 CEST 2022-08-03 11:20:48.16003 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=6)} {{267c68b26059ee3c65f2e351978431e13f7452b99fd5c051f0acfe9b4092d597} 
{aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_10 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160036 +0200 CEST 2022-08-03 11:20:43.160036 +0200 CEST 2022-08-03 11:20:48.160036 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=10)} {{a00cfaec468e9aba34bae52e4f45ae7a4e9f8e061a3e4b7467e76eb17a0f35b8} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_18 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160045 +0200 CEST 2022-08-03 11:20:43.160045 +0200 CEST 2022-08-03 11:20:48.160045 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=18)} {{d7960890bf5c3ab71807234fdcbe207d6a37e2e6cba392be7a534bd9f12fe6df} {aa} {3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_2 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160025 +0200 CEST 2022-08-03 11:20:43.160025 +0200 CEST 2022-08-03 11:20:48.160025 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=2)} {{db591756964b5a95ed8e00b779f24aa15bd0ea0de185f658534470fbd965321c} {aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_22 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160049 +0200 CEST 2022-08-03 11:20:43.160049 +0200 CEST 2022-08-03 11:20:48.160049 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=22)} {{e1f211b8eab691e6aada354c79abef090e07846efbb2f166291bfc6a79d50bd9} {aa} 
{3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_14 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.16004 +0200 CEST 2022-08-03 11:20:43.16004 +0200 CEST 2022-08-03 11:20:48.16004 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=14)}]" should have 15 item(s), but has 13
Test: TestOrders/GetByMarket
yay
--- FAIL: TestProposals (0.09s)
--- FAIL: TestProposals/GetById (0.00s)
proposals_test.go:91:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:61
/Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:91
Error: Should be empty, but was entities.Proposal{
ID: {ID: "25ce5c1927a978bd8d908639da3ab6176fb722dddd10c4d2843650b64f1369af"},
Reference: "8fcbd0a773eabacd2c51d1aa7fb6b8a25eeee9f3ed520f6675296da6863e64ce",
- PartyID: entities.PartyID{ID: "3dacb5bd1c6a7063d63ba65ae78c52d9c68781a1318efa3abe8be47b2dfc7424"},
+ PartyID: entities.PartyID{ID: "b8d188d09da06a28d63b59c2e964dd953c5574c0694ee2e64050c7febbb13e78"},
State: 6,
Rationale: entities.ProposalRationale{
ProposalRationale: &vega.ProposalRationale{
... // 3 ignored fields
Description: "",
Hash: "",
- Url: "myurl2.com",
+ Url: "myurl1.com",
},
},
Terms: {ProposalTerms: &{...}},
Reason: 0,
... // 3 identical fields
}
Test: TestProposals/GetById
--- FAIL: TestProposals/GetInState (0.00s)
proposals_test.go:106:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:55
/Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:106
Error: Should be empty, but was []entities.Proposal{
+ {
+ ID: entities.ProposalID{ID: "25ce5c1927a978bd8d908639da3ab6176fb722dddd10c4d2843650b64f1369af"},
+ Reference: "8fcbd0a773eabacd2c51d1aa7fb6b8a25eeee9f3ed520f6675296da6863e64ce",
+ PartyID: entities.PartyID{ID: "b8d188d09da06a28d63b59c2e964dd953c5574c0694ee2e64050c7febbb13e78"},
+ State: 6,
+ Rationale: s`url:"myurl1.com"`,
+ Terms: s"",
+ ProposalTime: s"2022-08-03 11:20:47.645597 +0200 CEST",
+ VegaTime: s"2022-08-03 11:20:47.645597 +0200 CEST",
+ },
{ID: {ID: "25ce5c1927a978bd8d908639da3ab6176fb722dddd10c4d2843650b64f1369af"}, Reference: "8fcbd0a773eabacd2c51d1aa7fb6b8a25eeee9f3ed520f6675296da6863e64ce", PartyID: {ID: "3dacb5bd1c6a7063d63ba65ae78c52d9c68781a1318efa3abe8be47b2dfc7424"}, State: 6, ...},
}
Test: TestProposals/GetInState
--- FAIL: TestProposals/GetByParty (0.00s)
proposals_test.go:113:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:55
/Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:113
Error: Should be empty, but was []entities.Proposal(
- nil,
+ {
+ {
+ ID: entities.ProposalID{ID: "25ce5c1927a978bd8d908639da3ab617"...},
+ Reference: "8fcbd0a773eabacd2c51d1aa7fb6b8a25eeee9f3ed520f6675296da6863e64ce",
+ PartyID: entities.PartyID{ID: "b8d188d09da06a28d63b59c2e964dd95"...},
+ State: 6,
+ Rationale: s`url:"myurl1.com"`,
+ Terms: s"",
+ ProposalTime: s"2022-08-03 11:20:47.645597 +0200 CEST",
+ VegaTime: s"2022-08-03 11:20:47.645597 +0200 CEST",
+ },
+ },
)
Test: TestProposals/GetByParty
```
|
1.0
|
Fix data-node flickering tests - ```
--- FAIL: TestOrders (0.08s)
--- FAIL: TestOrders/GetAll (0.00s)
orders_test.go:164:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:164
Error: Not equal:
expected: 44
actual : 41
Test: TestOrders/GetAll
--- FAIL: TestOrders/GetByOrderID (0.01s)
orders_test.go:172:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:172
Error: Not equal:
expected: entities.Order{ID:entities.OrderID{ID:"fe766f1953fe8a6ff4205968277041e0d1d08856240860347b311640229fa944"}, MarketID:entities.MarketID{ID:"bb"}, PartyID:entities.PartyID{ID:"720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a"}, Side:1, Price:10, Size:100, Remaining:10, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_7", Reason:0, Version:2, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160031000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160031000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160031000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x7}
actual : entities.Order{ID:entities.OrderID{ID:"fe766f1953fe8a6ff4205968277041e0d1d08856240860347b311640229fa944"}, MarketID:entities.MarketID{ID:"aa"}, PartyID:entities.PartyID{ID:"3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"}, Side:1, Price:10, Size:100, Remaining:60, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_8", Reason:0, Version:1, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160032000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160032000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160032000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158479000, time.Local), SeqNum:0x8}
Diff:
--- Expected
+++ Actual
@@ -5,6 +5,6 @@
MarketID: (entities.MarketID) {
- ID: (entities.ID) (len=2) "bb"
+ ID: (entities.ID) (len=2) "aa"
},
PartyID: (entities.PartyID) {
- ID: (entities.ID) (len=64) "720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a"
+ ID: (entities.ID) (len=64) "3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"
},
@@ -13,3 +13,3 @@
Size: (int64) 100,
- Remaining: (int64) 10,
+ Remaining: (int64) 60,
TimeInForce: (vega.Order_TimeInForce) 1,
@@ -17,5 +17,5 @@
Status: (vega.Order_Status) 1,
- Reference: (string) (len=14) "my_reference_7",
+ Reference: (string) (len=14) "my_reference_8",
Reason: (vega.OrderError) 0,
- Version: (int32) 2,
+ Version: (int32) 1,
PeggedOffset: (int32) 0,
@@ -25,3 +25,3 @@
CreatedAt: (time.Time) {
- wall: (uint64) 160031000,
+ wall: (uint64) 160032000,
ext: (int64) 63795115238,
@@ -1213,3 +1213,3 @@
UpdatedAt: (time.Time) {
- wall: (uint64) 160031000,
+ wall: (uint64) 160032000,
ext: (int64) 63795115243,
@@ -2401,3 +2401,3 @@
ExpiresAt: (time.Time) {
- wall: (uint64) 160031000,
+ wall: (uint64) 160032000,
ext: (int64) 63795115248,
@@ -3589,3 +3589,3 @@
VegaTime: (time.Time) {
- wall: (uint64) 158587000,
+ wall: (uint64) 158479000,
ext: (int64) 63795115238,
@@ -4776,3 +4776,3 @@
},
- SeqNum: (uint64) 7
+ SeqNum: (uint64) 8
}
Test: TestOrders/GetByOrderID
orders_test.go:172:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:172
Error: Not equal:
expected: entities.Order{ID:entities.OrderID{ID:"db591756964b5a95ed8e00b779f24aa15bd0ea0de185f658534470fbd965321c"}, MarketID:entities.MarketID{ID:"aa"}, PartyID:entities.PartyID{ID:"720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a"}, Side:1, Price:10, Size:100, Remaining:25, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_22", Reason:0, Version:1, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160049000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160049000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160049000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x16}
actual : entities.Order{ID:entities.OrderID{ID:"db591756964b5a95ed8e00b779f24aa15bd0ea0de185f658534470fbd965321c"}, MarketID:entities.MarketID{ID:"bb"}, PartyID:entities.PartyID{ID:"3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"}, Side:1, Price:10, Size:100, Remaining:10, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_23", Reason:0, Version:2, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160050000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160050000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160050000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x17}
Diff:
--- Expected
+++ Actual
@@ -5,6 +5,6 @@
MarketID: (entities.MarketID) {
- ID: (entities.ID) (len=2) "aa"
+ ID: (entities.ID) (len=2) "bb"
},
PartyID: (entities.PartyID) {
- ID: (entities.ID) (len=64) "720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a"
+ ID: (entities.ID) (len=64) "3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"
},
@@ -13,3 +13,3 @@
Size: (int64) 100,
- Remaining: (int64) 25,
+ Remaining: (int64) 10,
TimeInForce: (vega.Order_TimeInForce) 1,
@@ -17,5 +17,5 @@
Status: (vega.Order_Status) 1,
- Reference: (string) (len=15) "my_reference_22",
+ Reference: (string) (len=15) "my_reference_23",
Reason: (vega.OrderError) 0,
- Version: (int32) 1,
+ Version: (int32) 2,
PeggedOffset: (int32) 0,
@@ -25,3 +25,3 @@
CreatedAt: (time.Time) {
- wall: (uint64) 160049000,
+ wall: (uint64) 160050000,
ext: (int64) 63795115238,
@@ -1213,3 +1213,3 @@
UpdatedAt: (time.Time) {
- wall: (uint64) 160049000,
+ wall: (uint64) 160050000,
ext: (int64) 63795115243,
@@ -2401,3 +2401,3 @@
ExpiresAt: (time.Time) {
- wall: (uint64) 160049000,
+ wall: (uint64) 160050000,
ext: (int64) 63795115248,
@@ -4776,3 +4776,3 @@
},
- SeqNum: (uint64) 22
+ SeqNum: (uint64) 23
}
Test: TestOrders/GetByOrderID
orders_test.go:172:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:172
Error: Not equal:
expected: entities.Order{ID:entities.OrderID{ID:"aaa3220495b55a6ccba156958ead9b9d50474f06d8a5b1e6c946e7753f1b9e2d"}, MarketID:entities.MarketID{ID:"bb"}, PartyID:entities.PartyID{ID:"fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502"}, Side:1, Price:10, Size:100, Remaining:10, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_27", Reason:0, Version:2, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160054000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160054000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160054000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x1b}
actual : entities.Order{ID:entities.OrderID{ID:"aaa3220495b55a6ccba156958ead9b9d50474f06d8a5b1e6c946e7753f1b9e2d"}, MarketID:entities.MarketID{ID:"aa"}, PartyID:entities.PartyID{ID:"3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"}, Side:1, Price:10, Size:100, Remaining:25, TimeInForce:1, Type:1, Status:1, Reference:"my_reference_26", Reason:0, Version:1, PeggedOffset:0, BatchID:0, PeggedReference:1, LpID:[]uint8(nil), CreatedAt:time.Date(2022, time.August, 3, 11, 20, 38, 160053000, time.Local), UpdatedAt:time.Date(2022, time.August, 3, 11, 20, 43, 160053000, time.Local), ExpiresAt:time.Date(2022, time.August, 3, 11, 20, 48, 160053000, time.Local), VegaTime:time.Date(2022, time.August, 3, 11, 20, 38, 158587000, time.Local), SeqNum:0x1a}
Diff:
--- Expected
+++ Actual
@@ -5,6 +5,6 @@
MarketID: (entities.MarketID) {
- ID: (entities.ID) (len=2) "bb"
+ ID: (entities.ID) (len=2) "aa"
},
PartyID: (entities.PartyID) {
- ID: (entities.ID) (len=64) "fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502"
+ ID: (entities.ID) (len=64) "3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd"
},
@@ -13,3 +13,3 @@
Size: (int64) 100,
- Remaining: (int64) 10,
+ Remaining: (int64) 25,
TimeInForce: (vega.Order_TimeInForce) 1,
@@ -17,5 +17,5 @@
Status: (vega.Order_Status) 1,
- Reference: (string) (len=15) "my_reference_27",
+ Reference: (string) (len=15) "my_reference_26",
Reason: (vega.OrderError) 0,
- Version: (int32) 2,
+ Version: (int32) 1,
PeggedOffset: (int32) 0,
@@ -25,3 +25,3 @@
CreatedAt: (time.Time) {
- wall: (uint64) 160054000,
+ wall: (uint64) 160053000,
ext: (int64) 63795115238,
@@ -1213,3 +1213,3 @@
UpdatedAt: (time.Time) {
- wall: (uint64) 160054000,
+ wall: (uint64) 160053000,
ext: (int64) 63795115243,
@@ -2401,3 +2401,3 @@
ExpiresAt: (time.Time) {
- wall: (uint64) 160054000,
+ wall: (uint64) 160053000,
ext: (int64) 63795115248,
@@ -4776,3 +4776,3 @@
},
- SeqNum: (uint64) 27
+ SeqNum: (uint64) 26
}
Test: TestOrders/GetByOrderID
--- FAIL: TestOrders/GetByMarket (0.00s)
orders_test.go:188:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/orders_test.go:188
Error: "[{{0ebac9a3a30bcb2fbfb70fe645611276e119f580b351609e31784a21d636fb8f} {aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_4 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160027 +0200 CEST 2022-08-03 11:20:43.160027 +0200 CEST 2022-08-03 11:20:48.160027 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=4)} {{2676309200d5866a8b1cfab34c3c12c59c4a8b46c8dba0da17b3b815c3afa6c2} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_0 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160019 +0200 CEST 2022-08-03 11:20:43.160019 +0200 CEST 2022-08-03 11:20:48.160019 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=0)} {{9231644a3a36a658358a76319bafed943862a5c16bf3e25d6822c2246f2f0339} {aa} {3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_20 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160047 +0200 CEST 2022-08-03 11:20:43.160047 +0200 CEST 2022-08-03 11:20:48.160047 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=20)} {{a959fdf8f5eeb4e95b6c41229e49b5f0a94afbcb71f9dc7cdc224734d69b7baa} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_24 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160051 +0200 CEST 2022-08-03 11:20:43.160051 +0200 CEST 2022-08-03 11:20:48.160051 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=24)} 
{{aff797feb2b969bb0ed99573cb466d8af72df1be000b70d3bbd176026d0e529d} {aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_16 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160043 +0200 CEST 2022-08-03 11:20:43.160043 +0200 CEST 2022-08-03 11:20:48.160043 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=16)} {{c8fe7b1a853453f4be1d314e915400f3cbf7c662ae94041b4e2da691cccc4bd5} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_12 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160038 +0200 CEST 2022-08-03 11:20:43.160038 +0200 CEST 2022-08-03 11:20:48.160038 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=12)} {{e1f42785b7713989559bea6a6afeb8d993fe2b6dc0115792fe1521a81feb0782} {aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=60) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_28 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160055 +0200 CEST 2022-08-03 11:20:43.160055 +0200 CEST 2022-08-03 11:20:48.160055 +0200 CEST 2022-08-03 11:20:38.158479 +0200 CEST %!s(uint64=28)} {{212882050ad613b7949fa0ed02c3b53c939452cd8f9d38479ea4405969c66070} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_6 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.16003 +0200 CEST 2022-08-03 11:20:43.16003 +0200 CEST 2022-08-03 11:20:48.16003 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=6)} {{267c68b26059ee3c65f2e351978431e13f7452b99fd5c051f0acfe9b4092d597} 
{aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_10 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160036 +0200 CEST 2022-08-03 11:20:43.160036 +0200 CEST 2022-08-03 11:20:48.160036 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=10)} {{a00cfaec468e9aba34bae52e4f45ae7a4e9f8e061a3e4b7467e76eb17a0f35b8} {aa} {fcf2652ba3db182f3a0c1b056985fc6cc954cb812203da8489e090abf0858502} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_18 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160045 +0200 CEST 2022-08-03 11:20:43.160045 +0200 CEST 2022-08-03 11:20:48.160045 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=18)} {{d7960890bf5c3ab71807234fdcbe207d6a37e2e6cba392be7a534bd9f12fe6df} {aa} {3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_2 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160025 +0200 CEST 2022-08-03 11:20:43.160025 +0200 CEST 2022-08-03 11:20:48.160025 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=2)} {{db591756964b5a95ed8e00b779f24aa15bd0ea0de185f658534470fbd965321c} {aa} {720d10c99c8f32713cace56ddf05796cadf3f4ffdb02da20ba1f405d6f052f0a} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_22 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.160049 +0200 CEST 2022-08-03 11:20:43.160049 +0200 CEST 2022-08-03 11:20:48.160049 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=22)} {{e1f211b8eab691e6aada354c79abef090e07846efbb2f166291bfc6a79d50bd9} {aa} 
{3558eec2c42cb3f965408b5b96e1a65d418d5ed29c5633a48cac6d7ed1b8a8dd} SIDE_BUY %!s(int64=10) %!s(int64=100) %!s(int64=25) TIME_IN_FORCE_GTC TYPE_LIMIT STATUS_ACTIVE my_reference_14 none %!s(int32=1) %!s(int32=0) %!s(int32=0) PEGGED_REFERENCE_MID 2022-08-03 11:20:38.16004 +0200 CEST 2022-08-03 11:20:43.16004 +0200 CEST 2022-08-03 11:20:48.16004 +0200 CEST 2022-08-03 11:20:38.158587 +0200 CEST %!s(uint64=14)}]" should have 15 item(s), but has 13
Test: TestOrders/GetByMarket
yay
--- FAIL: TestProposals (0.09s)
--- FAIL: TestProposals/GetById (0.00s)
proposals_test.go:91:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:61
/Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:91
Error: Should be empty, but was entities.Proposal{
ID: {ID: "25ce5c1927a978bd8d908639da3ab6176fb722dddd10c4d2843650b64f1369af"},
Reference: "8fcbd0a773eabacd2c51d1aa7fb6b8a25eeee9f3ed520f6675296da6863e64ce",
- PartyID: entities.PartyID{ID: "3dacb5bd1c6a7063d63ba65ae78c52d9c68781a1318efa3abe8be47b2dfc7424"},
+ PartyID: entities.PartyID{ID: "b8d188d09da06a28d63b59c2e964dd953c5574c0694ee2e64050c7febbb13e78"},
State: 6,
Rationale: entities.ProposalRationale{
ProposalRationale: &vega.ProposalRationale{
... // 3 ignored fields
Description: "",
Hash: "",
- Url: "myurl2.com",
+ Url: "myurl1.com",
},
},
Terms: {ProposalTerms: &{...}},
Reason: 0,
... // 3 identical fields
}
Test: TestProposals/GetById
--- FAIL: TestProposals/GetInState (0.00s)
proposals_test.go:106:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:55
/Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:106
Error: Should be empty, but was []entities.Proposal{
+ {
+ ID: entities.ProposalID{ID: "25ce5c1927a978bd8d908639da3ab6176fb722dddd10c4d2843650b64f1369af"},
+ Reference: "8fcbd0a773eabacd2c51d1aa7fb6b8a25eeee9f3ed520f6675296da6863e64ce",
+ PartyID: entities.PartyID{ID: "b8d188d09da06a28d63b59c2e964dd953c5574c0694ee2e64050c7febbb13e78"},
+ State: 6,
+ Rationale: s`url:"myurl1.com"`,
+ Terms: s"",
+ ProposalTime: s"2022-08-03 11:20:47.645597 +0200 CEST",
+ VegaTime: s"2022-08-03 11:20:47.645597 +0200 CEST",
+ },
{ID: {ID: "25ce5c1927a978bd8d908639da3ab6176fb722dddd10c4d2843650b64f1369af"}, Reference: "8fcbd0a773eabacd2c51d1aa7fb6b8a25eeee9f3ed520f6675296da6863e64ce", PartyID: {ID: "3dacb5bd1c6a7063d63ba65ae78c52d9c68781a1318efa3abe8be47b2dfc7424"}, State: 6, ...},
}
Test: TestProposals/GetInState
--- FAIL: TestProposals/GetByParty (0.00s)
proposals_test.go:113:
Error Trace: /Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:55
/Users/valentin/Projects/vega/vega/datanode/sqlstore/proposals_test.go:113
Error: Should be empty, but was []entities.Proposal(
- nil,
+ {
+ {
+ ID: entities.ProposalID{ID: "25ce5c1927a978bd8d908639da3ab617"...},
+ Reference: "8fcbd0a773eabacd2c51d1aa7fb6b8a25eeee9f3ed520f6675296da6863e64ce",
+ PartyID: entities.PartyID{ID: "b8d188d09da06a28d63b59c2e964dd95"...},
+ State: 6,
+ Rationale: s`url:"myurl1.com"`,
+ Terms: s"",
+ ProposalTime: s"2022-08-03 11:20:47.645597 +0200 CEST",
+ VegaTime: s"2022-08-03 11:20:47.645597 +0200 CEST",
+ },
+ },
)
Test: TestProposals/GetByParty
```
|
non_defect
|
fix data node flickering tests fail testorders fail testorders getall orders test go error trace users valentin projects vega vega datanode sqlstore orders test go error not equal expected actual test testorders getall fail testorders getbyorderid orders test go error trace users valentin projects vega vega datanode sqlstore orders test go error not equal expected entities order id entities orderid id marketid entities marketid id bb partyid entities partyid id side price size remaining timeinforce type status reference my reference reason version peggedoffset batchid peggedreference lpid nil createdat time date time august time local updatedat time date time august time local expiresat time date time august time local vegatime time date time august time local seqnum actual entities order id entities orderid id marketid entities marketid id aa partyid entities partyid id side price size remaining timeinforce type status reference my reference reason version peggedoffset batchid peggedreference lpid nil createdat time date time august time local updatedat time date time august time local expiresat time date time august time local vegatime time date time august time local seqnum diff expected actual marketid entities marketid id entities id len bb id entities id len aa partyid entities partyid id entities id len id entities id len size remaining remaining timeinforce vega order timeinforce status vega order status reference string len my reference reference string len my reference reason vega ordererror version version peggedoffset createdat time time wall wall ext updatedat time time wall wall ext expiresat time time wall wall ext vegatime time time wall wall ext seqnum seqnum test testorders getbyorderid orders test go error trace users valentin projects vega vega datanode sqlstore orders test go error not equal expected entities order id entities orderid id marketid entities marketid id aa partyid entities partyid id side price size remaining timeinforce type 
status reference my reference reason version peggedoffset batchid peggedreference lpid nil createdat time date time august time local updatedat time date time august time local expiresat time date time august time local vegatime time date time august time local seqnum actual entities order id entities orderid id marketid entities marketid id bb partyid entities partyid id side price size remaining timeinforce type status reference my reference reason version peggedoffset batchid peggedreference lpid nil createdat time date time august time local updatedat time date time august time local expiresat time date time august time local vegatime time date time august time local seqnum diff expected actual marketid entities marketid id entities id len aa id entities id len bb partyid entities partyid id entities id len id entities id len size remaining remaining timeinforce vega order timeinforce status vega order status reference string len my reference reference string len my reference reason vega ordererror version version peggedoffset createdat time time wall wall ext updatedat time time wall wall ext expiresat time time wall wall ext seqnum seqnum test testorders getbyorderid orders test go error trace users valentin projects vega vega datanode sqlstore orders test go error not equal expected entities order id entities orderid id marketid entities marketid id bb partyid entities partyid id side price size remaining timeinforce type status reference my reference reason version peggedoffset batchid peggedreference lpid nil createdat time date time august time local updatedat time date time august time local expiresat time date time august time local vegatime time date time august time local seqnum actual entities order id entities orderid id marketid entities marketid id aa partyid entities partyid id side price size remaining timeinforce type status reference my reference reason version peggedoffset batchid peggedreference lpid nil createdat time date time august time 
local updatedat time date time august time local expiresat time date time august time local vegatime time date time august time local seqnum diff expected actual marketid entities marketid id entities id len bb id entities id len aa partyid entities partyid id entities id len id entities id len size remaining remaining timeinforce vega order timeinforce status vega order status reference string len my reference reference string len my reference reason vega ordererror version version peggedoffset createdat time time wall wall ext updatedat time time wall wall ext expiresat time time wall wall ext seqnum seqnum test testorders getbyorderid fail testorders getbymarket orders test go error trace users valentin projects vega vega datanode sqlstore orders test go error should have item s but has test testorders getbymarket yay fail testproposals fail testproposals getbyid proposals test go error trace users valentin projects vega vega datanode sqlstore proposals test go users valentin projects vega vega datanode sqlstore proposals test go error should be empty but was entities proposal id id reference partyid entities partyid id partyid entities partyid id state rationale entities proposalrationale proposalrationale vega proposalrationale ignored fields description hash url com url com terms proposalterms reason identical fields test testproposals getbyid fail testproposals getinstate proposals test go error trace users valentin projects vega vega datanode sqlstore proposals test go users valentin projects vega vega datanode sqlstore proposals test go error should be empty but was entities proposal id entities proposalid id reference partyid entities partyid id state rationale s url com terms s proposaltime s cest vegatime s cest id id reference partyid id state test testproposals getinstate fail testproposals getbyparty proposals test go error trace users valentin projects vega vega datanode sqlstore proposals test go users valentin projects vega vega datanode sqlstore 
proposals test go error should be empty but was entities proposal nil id entities proposalid id reference partyid entities partyid id state rationale s url com terms s proposaltime s cest vegatime s cest test testproposals getbyparty
| 0
|
45,034
| 12,529,572,651
|
IssuesEvent
|
2020-06-04 11:35:30
|
google/pywebsocket
|
https://api.github.com/repos/google/pywebsocket
|
closed
|
Remove draft08 parameter from PerMessageDeflateExtensionProcessor
|
Priority-Medium Type-Defect auto-migrated
|
```
It's no longer used.
```
Original issue reported on code.google.com by `tyoshino@chromium.org` on 25 Nov 2014 at 3:47
|
1.0
|
Remove draft08 parameter from PerMessageDeflateExtensionProcessor - ```
It's no longer used.
```
Original issue reported on code.google.com by `tyoshino@chromium.org` on 25 Nov 2014 at 3:47
|
defect
|
remove parameter from permessagedeflateextensionprocessor it s no longer used original issue reported on code google com by tyoshino chromium org on nov at
| 1
|
52,221
| 13,731,442,847
|
IssuesEvent
|
2020-10-05 01:02:21
|
jtimberlake/skf-flask
|
https://api.github.com/repos/jtimberlake/skf-flask
|
opened
|
WS-2019-0493 (High) detected in handlebars-4.5.1.tgz
|
security vulnerability
|
## WS-2019-0493 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.5.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.5.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.5.1.tgz</a></p>
<p>Path to dependency file: skf-flask/Angular/package.json</p>
<p>Path to vulnerable library: skf-flask/Angular/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-1.4.3.tgz (Root Library)
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.5.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 3.0.8 and 4.x before 4.5.2 is vulnerable to Arbitrary Code Execution. The package's lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-11-14
<p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0493</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-11-14</p>
<p>Fix Resolution: handlebars - 3.0.8,4.5.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.5.1","isTransitiveDependency":true,"dependencyTree":"karma-coverage-istanbul-reporter:1.4.3;istanbul-api:1.3.7;istanbul-reports:1.5.1;handlebars:4.5.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 3.0.8,4.5.2"}],"vulnerabilityIdentifier":"WS-2019-0493","vulnerabilityDetails":"handlebars before 3.0.8 and 4.x before 4.5.2 is vulnerable to Arbitrary Code Execution. The package\u0027s lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system.","vulnerabilityUrl":"https://github.com/handlebars-lang/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
WS-2019-0493 (High) detected in handlebars-4.5.1.tgz - ## WS-2019-0493 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.5.1.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.5.1.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.5.1.tgz</a></p>
<p>Path to dependency file: skf-flask/Angular/package.json</p>
<p>Path to vulnerable library: skf-flask/Angular/node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-istanbul-reporter-1.4.3.tgz (Root Library)
- istanbul-api-1.3.7.tgz
- istanbul-reports-1.5.1.tgz
- :x: **handlebars-4.5.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
handlebars before 3.0.8 and 4.x before 4.5.2 is vulnerable to Arbitrary Code Execution. The package's lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system.
<p>Publish Date: 2019-11-14
<p>URL: <a href=https://github.com/handlebars-lang/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e>WS-2019-0493</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1316">https://www.npmjs.com/advisories/1316</a></p>
<p>Release Date: 2019-11-14</p>
<p>Fix Resolution: handlebars - 3.0.8,4.5.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.5.1","isTransitiveDependency":true,"dependencyTree":"karma-coverage-istanbul-reporter:1.4.3;istanbul-api:1.3.7;istanbul-reports:1.5.1;handlebars:4.5.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"handlebars - 3.0.8,4.5.2"}],"vulnerabilityIdentifier":"WS-2019-0493","vulnerabilityDetails":"handlebars before 3.0.8 and 4.x before 4.5.2 is vulnerable to Arbitrary Code Execution. The package\u0027s lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript in the system.","vulnerabilityUrl":"https://github.com/handlebars-lang/handlebars.js/commit/d54137810a49939fd2ad01a91a34e182ece4528e","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
ws high detected in handlebars tgz ws high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file skf flask angular package json path to vulnerable library skf flask angular node modules handlebars package json dependency hierarchy karma coverage istanbul reporter tgz root library istanbul api tgz istanbul reports tgz x handlebars tgz vulnerable library vulnerability details handlebars before and x before is vulnerable to arbitrary code execution the package s lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript in the system publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier ws vulnerabilitydetails handlebars before and x before is vulnerable to arbitrary code execution the package lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript in the system vulnerabilityurl
| 0
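The advisory record above gives the fix resolution as "handlebars - 3.0.8,4.5.2", i.e. the vulnerable ranges are versions before 3.0.8 and 4.x before 4.5.2. A minimal sketch of checking an installed version against those ranges — `is_vulnerable` is a hypothetical helper written for illustration, not part of any advisory tooling:

```python
# Hypothetical helper: classify a handlebars version string against the
# vulnerable ranges reported in WS-2019-0493 (< 3.0.8, or 4.x < 4.5.2).
def is_vulnerable(version: str) -> bool:
    major, minor, patch = (int(part) for part in version.split("."))
    if major < 3:
        return True                       # every pre-3.x release is affected
    if major == 3:
        return (minor, patch) < (0, 8)    # fixed in 3.0.8
    if major == 4:
        return (minor, patch) < (5, 2)    # fixed in 4.5.2
    return False                          # 5.x and later: out of scope here

# The dependency tree above pins handlebars 4.5.1, which this flags:
print(is_vulnerable("4.5.1"))  # True
```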
|
48,503
| 13,103,608,670
|
IssuesEvent
|
2020-08-04 08:51:40
|
OpenMS/OpenMS
|
https://api.github.com/repos/OpenMS/OpenMS
|
closed
|
Misclassified tool documentation
|
defect
|
There are a few tools in 2.4 where the documentation is misclassified (TOPP / UTILS):
Resulting in 404 for the following URLs:
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_MSFraggerAdapter.html - http://www.openms.de/documentation/UTILS_MSFraggerAdapter.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_SimpleSearchEngine.html - http://www.openms.de/documentation/UTILS_SimpleSearchEngine.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_ClusterMassTraces.html - http://www.openms.de/documentation/UTILS_ClusterMassTraces.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_ClusterMassTracesByPrecursor.html - http://www.openms.de/documentation/UTILS_ClusterMassTracesByPrecursor.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_RNPxlSearch.html - http://www.openms.de/documentation/UTILS_RNPxlSearch.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_MetaProSIP.html - http://www.openms.de/documentation/UTILS_MetaProSIP.html
|
1.0
|
Misclassified tool documentation - There are a few tools in 2.4 where the documentation is misclassified (TOPP / UTILS):
Resulting in 404 for the following URLs:
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_MSFraggerAdapter.html - http://www.openms.de/documentation/UTILS_MSFraggerAdapter.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_SimpleSearchEngine.html - http://www.openms.de/documentation/UTILS_SimpleSearchEngine.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_ClusterMassTraces.html - http://www.openms.de/documentation/UTILS_ClusterMassTraces.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_ClusterMassTracesByPrecursor.html - http://www.openms.de/documentation/UTILS_ClusterMassTracesByPrecursor.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_RNPxlSearch.html - http://www.openms.de/documentation/UTILS_RNPxlSearch.html
- https://abibuilder.informatik.uni-tuebingen.de/archive/openms/Documentation/release/latest/html/UTILS_MetaProSIP.html - http://www.openms.de/documentation/UTILS_MetaProSIP.html
|
defect
|
misclassified tool documentation there are a few tools in where the documentation is misclassified topp utils resulting in for the following urls
| 1
|
55,401
| 14,439,329,780
|
IssuesEvent
|
2020-12-07 14:15:24
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
closed
|
Content release error message mentions the wrong slack channel
|
Defect
|
**Describe the defect**
A clear and concise description of what the bug is.

#cms-engineering channel does not exist.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a failed content release (not sure exactly how to do this)
**Expected behavior**
Error message should not suggest an impossible action.
|
1.0
|
Content release error message mentions the wrong slack channel - **Describe the defect**
A clear and concise description of what the bug is.

#cms-engineering channel does not exist.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a failed content release (not sure exactly how to do this)
**Expected behavior**
Error message should not suggest an impossible action.
|
defect
|
content release error message mentions the wrong slack channel describe the defect a clear and concise description of what the bug is cms engineering channel does not exist to reproduce steps to reproduce the behavior create a failed content release not sure exactly how to do this expected behavior error message should not suggest an impossible action
| 1
|
15,644
| 19,846,202,660
|
IssuesEvent
|
2022-01-21 06:45:10
|
ooi-data/RS01SBPD-DP01A-03-FLCDRA102-recovered_inst-dpc_flcdrtd_instrument_recovered
|
https://api.github.com/repos/ooi-data/RS01SBPD-DP01A-03-FLCDRA102-recovered_inst-dpc_flcdrtd_instrument_recovered
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T06:45:10.093142.
## Details
Flow name: `RS01SBPD-DP01A-03-FLCDRA102-recovered_inst-dpc_flcdrtd_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T06:45:10.093142.
## Details
Flow name: `RS01SBPD-DP01A-03-FLCDRA102-recovered_inst-dpc_flcdrtd_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
non_defect
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered inst dpc flcdrtd instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages 
dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 0
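The failure recorded above bottoms out in `zip(*indexer)` raising `ValueError: not enough values to unpack (expected 3, got 0)`, which happens whenever the indexer yields no chunk triples. A minimal sketch reproducing just the unpack failure — `unpack_selection` is a hypothetical stand-in for the zarr internals, written for illustration:

```python
# zarr's _get_selection unpacks the indexer into three parallel tuples:
#   lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
# If the indexer produces no (chunk_coords, chunk_selection, out_selection)
# triples, zip(*indexer) is an empty iterator and the 3-way unpack fails
# with exactly the error seen in the traceback.
def unpack_selection(indexer):
    lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
    return lchunk_coords, lchunk_selection, lout_selection

# One overlapping chunk unpacks fine:
coords, sels, outs = unpack_selection([((0,), slice(0, 10), slice(0, 10))])

# No overlapping chunks reproduces the failure:
try:
    unpack_selection([])
except ValueError as exc:
    print(exc)  # not enough values to unpack (expected 3, got 0)
```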
|
13,274
| 2,744,110,283
|
IssuesEvent
|
2015-04-22 03:46:51
|
netty/netty
|
https://api.github.com/repos/netty/netty
|
closed
|
Immediate HTTP/2 stream removal kills connection for HEADERS
|
defect
|
Issue #3557 seeks to improve handling of DATA frames now that streams are immediately removed. DATA frame handling is "okay" now, but could be improved. However, HEADERS frames are handled very poorly.
I'm seeing exceptions like:
```
Caused by: io.netty.handler.codec.http2.Http2Exception: Request stream 11 is not correct for server connection
at io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:58)
at io.netty.handler.codec.http2.DefaultHttp2Connection$DefaultEndpoint.checkNewStreamAllowed(DefaultHttp2Connection.java:1002)
at io.netty.handler.codec.http2.DefaultHttp2Connection$DefaultEndpoint.createStream(DefaultHttp2Connection.java:873)
at io.netty.handler.codec.http2.DefaultHttp2Connection$DefaultEndpoint.createStream(DefaultHttp2Connection.java:829)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onHeadersRead(DefaultHttp2ConnectionDecoder.java:269)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onHeadersRead(DefaultHttp2ConnectionDecoder.java:260)
at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onHeadersRead(Http2InboundFrameLogger.java:54)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader$2.processFragment(DefaultHttp2FrameReader.java:450)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readHeadersFrame(DefaultHttp2FrameReader.java:459)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:226)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:130)
at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:39)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:99)
at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:288)
at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:329)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:316)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1001)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:891)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:316)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
... 3 more
```
If a client sends RST_STREAM (for one of its streams) while a server-sent HEADERS frame is en route, then the client would cause the above error and reset the _connection_. This sort of situation is also likely if trailers are involved.
This makes RST_STREAM very dangerous with the current implementation. I've encountered some users that RST_STREAM very often, and the above error is reliable for those users.
@Scottmitch @nmittler @buchgr
|
1.0
|
Immediate HTTP/2 stream removal kills connection for HEADERS - Issue #3557 seeks to improve handling of DATA frames now that streams are immediately removed. DATA frame handling is "okay" now, but could be improved. However, HEADERS frames are handled very poorly.
I'm seeing exceptions like:
```
Caused by: io.netty.handler.codec.http2.Http2Exception: Request stream 11 is not correct for server connection
at io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:58)
at io.netty.handler.codec.http2.DefaultHttp2Connection$DefaultEndpoint.checkNewStreamAllowed(DefaultHttp2Connection.java:1002)
at io.netty.handler.codec.http2.DefaultHttp2Connection$DefaultEndpoint.createStream(DefaultHttp2Connection.java:873)
at io.netty.handler.codec.http2.DefaultHttp2Connection$DefaultEndpoint.createStream(DefaultHttp2Connection.java:829)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onHeadersRead(DefaultHttp2ConnectionDecoder.java:269)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onHeadersRead(DefaultHttp2ConnectionDecoder.java:260)
at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onHeadersRead(Http2InboundFrameLogger.java:54)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader$2.processFragment(DefaultHttp2FrameReader.java:450)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readHeadersFrame(DefaultHttp2FrameReader.java:459)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:226)
at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:130)
at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:39)
at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:99)
at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:288)
at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:329)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:316)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1001)
at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:891)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:316)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
at io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
at io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
at io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
... 3 more
```
If a client sends RST_STREAM (for one of its streams) while a server-sent HEADERS frame is en route, then the client would cause the above error and reset the _connection_. This sort of situation is also likely if trailers are involved.
This makes RST_STREAM very dangerous with the current implementation. I've encountered some users that RST_STREAM very often, and the above error is reliable for those users.
@Scottmitch @nmittler @buchgr
|
defect
|
immediate http stream removal kills connection for headers issue seeks to improve handling of data frames now that streams are immediately removed data frame handling is okay now but could be improved however headers frames are handled very poorly i m seeing exceptions like caused by io netty handler codec request stream is not correct for server connection at io netty handler codec connectionerror java at io netty handler codec defaultendpoint checknewstreamallowed java at io netty handler codec defaultendpoint createstream java at io netty handler codec defaultendpoint createstream java at io netty handler codec framereadlistener onheadersread java at io netty handler codec framereadlistener onheadersread java at io netty handler codec onheadersread java at io netty handler codec processfragment java at io netty handler codec readheadersframe java at io netty handler codec processpayloadstate java at io netty handler codec readframe java at io netty handler codec readframe java at io netty handler codec decodeframe java at io netty handler codec framedecoder decode java at io netty handler codec decode java at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel channelhandlerinvokerutil invokechannelreadnow channelhandlerinvokerutil java at io netty channel defaultchannelhandlerinvoker invokechannelread defaultchannelhandlerinvoker java at io netty channel pausablechanneleventexecutor invokechannelread pausablechanneleventexecutor java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler ssl sslhandler unwrap sslhandler java at io netty handler ssl sslhandler decode sslhandler java at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel channelhandlerinvokerutil invokechannelreadnow channelhandlerinvokerutil java at io netty channel defaultchannelhandlerinvoker invokechannelread defaultchannelhandlerinvoker java at io netty channel pausablechanneleventexecutor invokechannelread pausablechanneleventexecutor java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java more if a client sends rst stream for one of its streams while a server sent headers frame is en route then the client would cause the above error and reset the connection this sort of situation is also likely if trailers are involved this makes rst stream very dangerous with the current implementation i ve encountered some users that rst stream very often and the above error is reliable for those users scottmitch nmittler buchgr
| 1
|
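The race described in the record above — a client RST_STREAM overtaking a server-sent HEADERS frame — can be sketched independently of netty. The following is a minimal, hypothetical Python model (the class and method names are illustrative, not netty's real API) of a receiver that tolerates frames on recently-reset streams instead of treating them as connection-fatal:

```python
# Hypothetical sketch: tolerate HEADERS that were in flight when the
# peer reset the stream, per HTTP/2 (RFC 7540) stream-lifecycle rules.

class Http2ConnectionError(Exception):
    """Fatal: the whole connection must be torn down."""

class FrameReceiver:
    def __init__(self):
        self.max_stream_id = 0      # highest stream ID ever seen
        self.reset_streams = set()  # streams removed after RST_STREAM

    def on_rst_stream(self, stream_id):
        # Remember the reset instead of forgetting the stream entirely.
        self.reset_streams.add(stream_id)
        self.max_stream_id = max(self.max_stream_id, stream_id)

    def on_headers(self, stream_id):
        if stream_id in self.reset_streams:
            return "ignored"  # frame raced with the peer's RST_STREAM
        if stream_id <= self.max_stream_id:
            # Genuinely unknown old stream: protocol violation.
            raise Http2ConnectionError(
                "Request stream %d is not correct" % stream_id)
        self.max_stream_id = stream_id
        return "opened"
```

With this bookkeeping, a HEADERS frame that raced with the client's reset is dropped silently rather than raising the connection-level error shown in the stack trace above.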
77,820
| 7,604,421,350
|
IssuesEvent
|
2018-04-30 00:58:52
|
TerriaJS/nationalmap
|
https://api.github.com/repos/TerriaJS/nationalmap
|
closed
|
National Map v2018-04-16 pre-release test - Error loading catalogue Item - 7
|
GA-testing
|
Error loading catalogue item - Status Code: 502, retest error Status Code: 504
Layer: National Datasets/Health/Medicare Offices




|
1.0
|
National Map v2018-04-16 pre-release test - Error loading catalogue Item - 7 - Error loading catalogue item - Status Code: 502, retest error Status Code: 504
Layer: National Datasets/Health/Medicare Offices




|
non_defect
|
national map pre release test error loading catalogue item error loading catalogue item status code retest error status code layer national datasets health medicare offices
| 0
|
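The two status codes reported in the record above (502 on the first attempt, 504 on retest) both sit in the 5xx gateway class. A small sketch of how a retest script might bucket them — the codes come from the report, the helper itself is a hypothetical illustration:

```python
def classify_status(code: int) -> str:
    """Map an HTTP status code to a coarse category for a retest report."""
    if code in (502, 504):
        return "gateway error"  # upstream service failed or timed out
    if 500 <= code <= 599:
        return "server error"
    if 400 <= code <= 499:
        return "client error"
    if 200 <= code <= 299:
        return "ok"
    return "other"
```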
412,522
| 27,860,832,810
|
IssuesEvent
|
2023-03-21 06:03:35
|
restqa/restqa
|
https://api.github.com/repos/restqa/restqa
|
closed
|
🚀 [FEATURE]: API Collection - Generate collection for Insomnia
|
documentation enhancement good first issue dev-exp
|
### 👀 Background
RestQA aim to provide an up to date API collection for the backend engineer to stop accumulating meaningless collection.
### ✌️ What is the actual behavior?
Currently the only API collection supported is API collection
### 🕵️♀️ How to reproduce the current behavior?
1. Clone the project
2. run the command `npm run contribute`
3. Check the API collection on the RestQA Report result
### 🤞 What is the expected behavior?
It would be great to have other tool supported such as [Insomnia](https://insomnia.rest/)
### 😎 Proposed solution.
* [ ] Update CLI
* [ ] Update documentation
* [ ] Report UI update
### 🙏 Would you be willing to submit a PR?
Yes
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
1.0
|
🚀 [FEATURE]: API Collection - Generate collection for Insomnia - ### 👀 Background
RestQA aim to provide an up to date API collection for the backend engineer to stop accumulating meaningless collection.
### ✌️ What is the actual behavior?
Currently the only API collection supported is API collection
### 🕵️♀️ How to reproduce the current behavior?
1. Clone the project
2. run the command `npm run contribute`
3. Check the API collection on the RestQA Report result
### 🤞 What is the expected behavior?
It would be great to have other tool supported such as [Insomnia](https://insomnia.rest/)
### 😎 Proposed solution.
* [ ] Update CLI
* [ ] Update documentation
* [ ] Report UI update
### 🙏 Would you be willing to submit a PR?
Yes
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
non_defect
|
🚀 api collection generate collection for insomnia 👀 background restqa aim to provide an up to date api collection for the backend engineer to stop accumulating meaningless collection ✌️ what is the actual behavior currently the only api collection supported is api collection 🕵️♀️ how to reproduce the current behavior clone the project run the command npm run contribute check the api collection on the restqa report result 🤞 what is the expected behavior it would be great to have other tool supported such as 😎 proposed solution update cli update documentation report ui update 🙏 would you be willing to submit a pr yes code of conduct i agree to follow this project s code of conduct
| 0
|
222,231
| 24,692,488,972
|
IssuesEvent
|
2022-10-19 09:33:54
|
rsoreq/zenbot
|
https://api.github.com/repos/rsoreq/zenbot
|
opened
|
CVE-2022-40764 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2022-40764 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>snyk-1.386.0.tgz</b>, <b>snyk-1.984.0.tgz</b>, <b>snyk-1.374.0.tgz</b></p></summary>
<p>
<details><summary><b>snyk-1.386.0.tgz</b></p></summary>
<p>snyk library and cli utility</p>
<p>Library home page: <a href="https://registry.npmjs.org/snyk/-/snyk-1.386.0.tgz">https://registry.npmjs.org/snyk/-/snyk-1.386.0.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **snyk-1.386.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>snyk-1.984.0.tgz</b></p></summary>
<p>snyk library and cli utility</p>
<p>Library home page: <a href="https://registry.npmjs.org/snyk/-/snyk-1.984.0.tgz">https://registry.npmjs.org/snyk/-/snyk-1.984.0.tgz</a></p>
<p>Path to dependency file: /extensions/exchanges/gemini/package.json</p>
<p>Path to vulnerable library: /extensions/exchanges/gemini/node_modules/snyk/package.json</p>
<p>
Dependency Hierarchy:
- :x: **snyk-1.984.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>snyk-1.374.0.tgz</b></p></summary>
<p>snyk library and cli utility</p>
<p>Library home page: <a href="https://registry.npmjs.org/snyk/-/snyk-1.374.0.tgz">https://registry.npmjs.org/snyk/-/snyk-1.374.0.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **snyk-1.374.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/zenbot/commit/7a24c0d7b98ee76e6bac827974cff490a7694378">7a24c0d7b98ee76e6bac827974cff490a7694378</a></p>
<p>Found in base branch: <b>unstable</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Snyk CLI before 1.996.0 allows arbitrary command execution, affecting Snyk IDE plugins and the snyk npm package. Exploitation could follow from the common practice of viewing untrusted files in the Visual Studio Code editor, for example. The original demonstration was with shell metacharacters in the vendor.json ignore field, affecting snyk-go-plugin before 1.19.1. This affects, for example, the Snyk TeamCity plugin (which does not update automatically) before 20220930.142957.
<p>Publish Date: 2022-10-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40764>CVE-2022-40764</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hpqj-7cj6-hfj8">https://github.com/advisories/GHSA-hpqj-7cj6-hfj8</a></p>
<p>Release Date: 2022-10-03</p>
<p>Fix Resolution: 1.996.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
True
|
CVE-2022-40764 (High) detected in multiple libraries - ## CVE-2022-40764 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>snyk-1.386.0.tgz</b>, <b>snyk-1.984.0.tgz</b>, <b>snyk-1.374.0.tgz</b></p></summary>
<p>
<details><summary><b>snyk-1.386.0.tgz</b></p></summary>
<p>snyk library and cli utility</p>
<p>Library home page: <a href="https://registry.npmjs.org/snyk/-/snyk-1.386.0.tgz">https://registry.npmjs.org/snyk/-/snyk-1.386.0.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **snyk-1.386.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>snyk-1.984.0.tgz</b></p></summary>
<p>snyk library and cli utility</p>
<p>Library home page: <a href="https://registry.npmjs.org/snyk/-/snyk-1.984.0.tgz">https://registry.npmjs.org/snyk/-/snyk-1.984.0.tgz</a></p>
<p>Path to dependency file: /extensions/exchanges/gemini/package.json</p>
<p>Path to vulnerable library: /extensions/exchanges/gemini/node_modules/snyk/package.json</p>
<p>
Dependency Hierarchy:
- :x: **snyk-1.984.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>snyk-1.374.0.tgz</b></p></summary>
<p>snyk library and cli utility</p>
<p>Library home page: <a href="https://registry.npmjs.org/snyk/-/snyk-1.374.0.tgz">https://registry.npmjs.org/snyk/-/snyk-1.374.0.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **snyk-1.374.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/rsoreq/zenbot/commit/7a24c0d7b98ee76e6bac827974cff490a7694378">7a24c0d7b98ee76e6bac827974cff490a7694378</a></p>
<p>Found in base branch: <b>unstable</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Snyk CLI before 1.996.0 allows arbitrary command execution, affecting Snyk IDE plugins and the snyk npm package. Exploitation could follow from the common practice of viewing untrusted files in the Visual Studio Code editor, for example. The original demonstration was with shell metacharacters in the vendor.json ignore field, affecting snyk-go-plugin before 1.19.1. This affects, for example, the Snyk TeamCity plugin (which does not update automatically) before 20220930.142957.
<p>Publish Date: 2022-10-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-40764>CVE-2022-40764</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-hpqj-7cj6-hfj8">https://github.com/advisories/GHSA-hpqj-7cj6-hfj8</a></p>
<p>Release Date: 2022-10-03</p>
<p>Fix Resolution: 1.996.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
|
non_defect
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries snyk tgz snyk tgz snyk tgz snyk tgz snyk library and cli utility library home page a href dependency hierarchy x snyk tgz vulnerable library snyk tgz snyk library and cli utility library home page a href path to dependency file extensions exchanges gemini package json path to vulnerable library extensions exchanges gemini node modules snyk package json dependency hierarchy x snyk tgz vulnerable library snyk tgz snyk library and cli utility library home page a href dependency hierarchy x snyk tgz vulnerable library found in head commit a href found in base branch unstable vulnerability details snyk cli before allows arbitrary command execution affecting snyk ide plugins and the snyk npm package exploitation could follow from the common practice of viewing untrusted files in the visual studio code editor for example the original demonstration was with shell metacharacters in the vendor json ignore field affecting snyk go plugin before this affects for example the snyk teamcity plugin which does not update automatically before publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue
| 0
|
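The suggested fix in the record above reduces to a version comparison against the 1.996.0 resolution. A minimal sketch using plain tuple comparison — the versions are copied from the report, while the helper is illustrative, not Mend/WhiteSource's actual check:

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted numeric version like '1.984.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(installed: str, fixed_in: str = "1.996.0") -> bool:
    """True when the installed snyk version predates the fix resolution."""
    return parse_version(installed) < parse_version(fixed_in)
```

All three versions flagged in the record (1.374.0, 1.386.0, 1.984.0) compare as vulnerable under this rule, while 1.996.0 and later do not.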
57,543
| 15,835,854,679
|
IssuesEvent
|
2021-04-06 18:32:22
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
508-defect-3 [COGNITION]: Wizard, links should be styled as links
|
508-defect-3 508-issue-cognition 508/Accessibility
|
# [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
<hr/>
## Point of Contact
**VFS Point of Contact:** Josh
## User Story or Problem Statement
- As a user, I expect links to be styled as links so that I know I can open them in a new tab.
- As a voice AT user, I expect links to be styled as links so that I can provide an accurate command based on what I see.
- As a screen reader user with low vision, I expect the screen reader to announce what I see on the page.
- As a user, I expect links to be styled consistently as links so that I do not confuse them with buttons.
## Details
The "Submit a Financial Status Report" link within the wizard is styled as a `button` which is materially dishonest. This causes several issues:
- It's materially dishonest and damages habituation on VA.gov
- Voice assistive tech users may say "Click go to xyz form _button_" which may result in failure as it is tagged as a `a`
- Screen reader users with low vision may become confused at a button being announced as a link, and even more so when navigating by links/buttons using the rotor
- Sighted users may not know that they can open this in a new tab because it looks like a button, especially on mobile which gives a stronger affordance for a tap vs. a tap/hold
This is a sitewide long-tail effort. [To better track button/link inconsistencies, please view this Mural.](https://app.mural.co/invitation/mural/vsa8243/1610052905994?sender=u2134c22982ad9c7b41798011&key=87284ca2-fd99-431d-b086-5890abf480ac).
## Acceptance Criteria
- [ ] Links are styled as links
## Environment
* Operating System: all
* Browser: any
* Screenreading device: any
* Server destination: staging
## Solution
Use the new action link component in the VA design system
## WCAG or Vendor Guidance (optional)
* [Adam Silver: But sometimes buttons look like links](https://adamsilver.io/articles/but-sometimes-buttons-look-like-links/)
* [WCAG Success Criterion 3.2.4: Consistent Identification](https://www.w3.org/WAI/WCAG21/Understanding/consistent-identification.html)
## Screenshots or Trace Logs

|
1.0
|
508-defect-3 [COGNITION]: Wizard, links should be styled as links - # [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3)
## Feedback framework
- **❗️ Must** for if the feedback must be applied
- **⚠️ Should** if the feedback is best practice
- **✔️ Consider** for suggestions/enhancements
## Definition of done
1. Review and acknowledge feedback.
1. Fix and/or document decisions made.
1. Accessibility specialist will close ticket after reviewing documented decisions / validating fix.
<hr/>
## Point of Contact
**VFS Point of Contact:** Josh
## User Story or Problem Statement
- As a user, I expect links to be styled as links so that I know I can open them in a new tab.
- As a voice AT user, I expect links to be styled as links so that I can provide an accurate command based on what I see.
- As a screen reader user with low vision, I expect the screen reader to announce what I see on the page.
- As a user, I expect links to be styled consistently as links so that I do not confuse them with buttons.
## Details
The "Submit a Financial Status Report" link within the wizard is styled as a `button` which is materially dishonest. This causes several issues:
- It's materially dishonest and damages habituation on VA.gov
- Voice assistive tech users may say "Click go to xyz form _button_" which may result in failure as it is tagged as a `a`
- Screen reader users with low vision may become confused at a button being announced as a link, and even more so when navigating by links/buttons using the rotor
- Sighted users may not know that they can open this in a new tab because it looks like a button, especially on mobile which gives a stronger affordance for a tap vs. a tap/hold
This is a sitewide long-tail effort. [To better track button/link inconsistencies, please view this Mural.](https://app.mural.co/invitation/mural/vsa8243/1610052905994?sender=u2134c22982ad9c7b41798011&key=87284ca2-fd99-431d-b086-5890abf480ac).
## Acceptance Criteria
- [ ] Links are styled as links
## Environment
* Operating System: all
* Browser: any
* Screenreading device: any
* Server destination: staging
## Solution
Use the new action link component in the VA design system
## WCAG or Vendor Guidance (optional)
* [Adam Silver: But sometimes buttons look like links](https://adamsilver.io/articles/but-sometimes-buttons-look-like-links/)
* [WCAG Success Criterion 3.2.4: Consistent Identification](https://www.w3.org/WAI/WCAG21/Understanding/consistent-identification.html)
## Screenshots or Trace Logs

|
defect
|
defect wizard links should be styled as links feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements definition of done review and acknowledge feedback fix and or document decisions made accessibility specialist will close ticket after reviewing documented decisions validating fix point of contact vfs point of contact josh user story or problem statement as a user i expect links to be styled as links so that i know i can open them in a new tab as a voice at user i expect links to be styled as links so that i can provide an accurate command based on what i see as a screen reader user with low vision i expect the screen reader to announce what i see on the page as a user i expect links to be styled consistently as links so that i do not confuse them with buttons details the submit a financial status report link within the wizard is styled as a button which is materially dishonest this causes several issues it s materially dishonest and damages habituation on va gov voice assistive tech users may say click go to xyz form button which may result in failure as it is tagged as a a screen reader users with low vision may become confused at a button being announced as a link and even more so when navigating by links buttons using the rotor sighted users may not know that they can open this in a new tab because it looks like a button especially on mobile which gives a stronger affordance for a tap vs a tap hold this is a sitewide long tail effort acceptance criteria links are styled as links environment operating system all browser any screenreading device any server destination staging solution use the new action link component in the va design system wcag or vendor guidance optional screenshots or trace logs
| 1
|
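A crude automated check for the "link styled as a button" pattern flagged in the record above can scan markup for anchors carrying button-like class names. A hypothetical sketch with the stdlib parser — the `usa-button` class and the hint fragments are assumptions for illustration, not taken from the VA.gov page:

```python
from html.parser import HTMLParser

class ButtonishLinkFinder(HTMLParser):
    """Collect <a> elements whose class list suggests button styling."""

    BUTTON_HINTS = ("button", "btn")  # assumed class-name fragments

    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attr_map = dict(attrs)
        classes = attr_map.get("class") or ""
        if any(hint in classes for hint in self.BUTTON_HINTS):
            self.hits.append(attr_map.get("href"))

def find_buttonish_links(html: str):
    finder = ButtonishLinkFinder()
    finder.feed(html)
    return finder.hits
```

Such a scan only surfaces candidates; whether each hit should become an action-link component is still a design-system decision.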
40,859
| 10,195,303,093
|
IssuesEvent
|
2019-08-12 17:47:51
|
telus/tds-core
|
https://api.github.com/repos/telus/tds-core
|
closed
|
[ButtonGroup] Allowing unselected button groups by default
|
priority: medium status: in progress type: defect :bug:
|
Using the current button group, there seems to be missing a feature where the first selection is unselected. Not all the time, do we know or want to predetermine the selection. We want the user to dictate their own selection and flow within the completion process. Having the first option selected can sometimes cause the page length to be unnecessarily long, causing the user to miss important information below the viewable area, especially in mobile.

## ACs
- Allow the ButtonGroup to be unselected by default
|
1.0
|
[ButtonGroup] Allowing unselected button groups by default - Using the current button group, there seems to be missing a feature where the first selection is unselected. Not all the time, do we know or want to predetermine the selection. We want the user to dictate their own selection and flow within the completion process. Having the first option selected can sometimes cause the page length to be unnecessarily long, causing the user to miss important information below the viewable area, especially in mobile.

## ACs
- Allow the ButtonGroup to be unselected by default
|
defect
|
allowing unselected button groups by default using the current button group there seems to be missing a feature where the first selection is unselected not all the time do we know or want to predetermine the selection we want the user to dictate their own selection and flow within the completion process having the first option selected can sometimes cause the page length to be unnecessarily long causing the user to miss important information below the viewable area especially in mobile acs allow the buttongroup to be unselected by default
| 1
|
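The acceptance criterion in the record above — no predetermined selection — amounts to a selection state that starts empty. TDS's real component is React, so the following Python sketch only models the state rule, under that stated assumption:

```python
class ButtonGroup:
    """Single-select group whose initial selection is empty by default."""

    def __init__(self, options, selected=None):
        if selected is not None and selected not in options:
            raise ValueError("default selection must be one of the options")
        self.options = list(options)
        self.selected = selected  # None means the user has not chosen yet

    def select(self, option):
        if option not in self.options:
            raise ValueError("unknown option: %r" % (option,))
        self.selected = option
```

Rendering code can key off `selected is None` to avoid expanding selection-dependent content (and page length) until the user actually chooses.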
22,655
| 11,771,772,085
|
IssuesEvent
|
2020-03-16 01:26:02
|
terraform-providers/terraform-provider-aws
|
https://api.github.com/repos/terraform-providers/terraform-provider-aws
|
closed
|
aws_backup_selection.selection: error creating Backup Selection: InvalidParameterValueException
|
bug service/backup
|
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
0.11.14
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_backup_selection
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key: https://keybase.io/hashicorp
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
```
2019-10-15T12:24:03.672+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Action=DescribeSubnets&SubnetId.1=subnet-0xxxxxx8df&Version=2016-11-15
2019-10-15T12:24:03.672+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: -----------------------------------------------------
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: 2019/10/15 12:24:03 [DEBUG] [aws-sdk-go] DEBUG: Response Backup/CreateBackupSelection Details:
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: ---[ RESPONSE ]--------------------------------------
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: HTTP/1.1 400 Bad Request
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Connection: close
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Content-Length: 295
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Content-Type: application/json
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Date: Tue, 15 Oct 2019 10:24:02 GMT
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: X-Amzn-Errortype: InvalidParameterValueException:http://internal.amazon.com/coral/com.amazonaws.services.cryo/
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: X-Amzn-Requestid: ae13be5f-a155-45c3-b886-24c72cbd7fd9
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4:
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4:
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: -----------------------------------------------------
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: 2019/10/15 12:24:03 [DEBUG] [aws-sdk-go] {"Code":"ERROR_3018","Context":"arn:aws:iam::126666663812:role/ROLE
resname","Message":"IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
","Type":null}
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: 2019/10/15 12:24:03 [DEBUG] [aws-sdk-go] DEBUG: Validate Response Backup/CreateBackupSelection failed, attempt 0/25, e
rror InvalidParameterValueException: IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: status code: 400, request id: ae13be5f-a155-45c3-b886-24c72cbd7fd9
2019/10/15 12:24:03 [ERROR] root.mymodule: eval: *terraform.EvalApplyPost, err: 1 error occurred:
* aws_backup_selection.selection: error creating Backup Selection: InvalidParameterValueException: IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
status code: 400, request id: ae13be5f-a155-45c3-b886-24c72cbd7fd9
2019/10/15 12:24:03 [ERROR] root.mymodule: eval: *terraform.EvalSequence, err: 1 error occurred:
* aws_backup_selection.selection: error creating Backup Selection: InvalidParameterValueException: IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
status code: 400, request id: ae13be5f-a155-45c3-b886-24c72cbd7fd9
2019/10/15 12:24:03 [TRACE] [walkApply] Exiting eval tree: module.mymodule.aws_backup_selection.selection
Error: Error applying plan:
2019/10/15 12:24:03 [DEBUG] plugin: waiting for all plugin processes to complete...
1 error occurred:
* module.mymodule.aws_backup_selection.selection: 1 error occurred:
2019-10-15T12:24:03.039+0200 [DEBUG] plugin.terraform-provider-random_v2.2.1_x4: 2019/10/15 12:24:03 [ERR] plugin: plugin server: accept unix /tmp/plugin229438604: use of closed network connection
2019-10-15T12:24:03.039+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: 2019/10/15 12:24:03 [ERR] plugin: plugin server: accept unix /tmp/plugin054189175: use of closed network connection
* aws_backup_selection.selection: error creating Backup Selection: InvalidParameterValueException: IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
status code: 400, request id: ae13be5f-a155-45c3-b886-24c72cbd7fd9
```
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
N/A
### Expected Behavior
It should retry.
My code doesn't fail always, but sometimes :-|
<!--- What should have happened? --->
### Actual Behavior
<!--- What actually happened? --->
It Errors out after first try (as per my understanding).
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #9297
|
1.0
|
aws_backup_selection.selection: error creating Backup Selection: InvalidParameterValueException - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
0.11.14
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_backup_selection
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key: https://keybase.io/hashicorp
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
```
2019-10-15T12:24:03.672+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Action=DescribeSubnets&SubnetId.1=subnet-0xxxxxx8df&Version=2016-11-15
2019-10-15T12:24:03.672+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: -----------------------------------------------------
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: 2019/10/15 12:24:03 [DEBUG] [aws-sdk-go] DEBUG: Response Backup/CreateBackupSelection Details:
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: ---[ RESPONSE ]--------------------------------------
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: HTTP/1.1 400 Bad Request
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Connection: close
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Content-Length: 295
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Content-Type: application/json
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: Date: Tue, 15 Oct 2019 10:24:02 GMT
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: X-Amzn-Errortype: InvalidParameterValueException:http://internal.amazon.com/coral/com.amazonaws.services.cryo/
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: X-Amzn-Requestid: ae13be5f-a155-45c3-b886-24c72cbd7fd9
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4:
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4:
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: -----------------------------------------------------
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: 2019/10/15 12:24:03 [DEBUG] [aws-sdk-go] {"Code":"ERROR_3018","Context":"arn:aws:iam::126666663812:role/ROLE
resname","Message":"IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
","Type":null}
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: 2019/10/15 12:24:03 [DEBUG] [aws-sdk-go] DEBUG: Validate Response Backup/CreateBackupSelection failed, attempt 0/25, e
rror InvalidParameterValueException: IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
2019-10-15T12:24:03.028+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: status code: 400, request id: ae13be5f-a155-45c3-b886-24c72cbd7fd9
2019/10/15 12:24:03 [ERROR] root.mymodule: eval: *terraform.EvalApplyPost, err: 1 error occurred:
* aws_backup_selection.selection: error creating Backup Selection: InvalidParameterValueException: IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
status code: 400, request id: ae13be5f-a155-45c3-b886-24c72cbd7fd9
2019/10/15 12:24:03 [ERROR] root.mymodule: eval: *terraform.EvalSequence, err: 1 error occurred:
* aws_backup_selection.selection: error creating Backup Selection: InvalidParameterValueException: IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
status code: 400, request id: ae13be5f-a155-45c3-b886-24c72cbd7fd9
2019/10/15 12:24:03 [TRACE] [walkApply] Exiting eval tree: module.mymodule.aws_backup_selection.selection
Error: Error applying plan:
2019/10/15 12:24:03 [DEBUG] plugin: waiting for all plugin processes to complete...
1 error occurred:
* module.mymodule.aws_backup_selection.selection: 1 error occurred:
2019-10-15T12:24:03.039+0200 [DEBUG] plugin.terraform-provider-random_v2.2.1_x4: 2019/10/15 12:24:03 [ERR] plugin: plugin server: accept unix /tmp/plugin229438604: use of closed network connection
2019-10-15T12:24:03.039+0200 [DEBUG] plugin.terraform-provider-aws_v2.32.0_x4: 2019/10/15 12:24:03 [ERR] plugin: plugin server: accept unix /tmp/plugin054189175: use of closed network connection
* aws_backup_selection.selection: error creating Backup Selection: InvalidParameterValueException: IAM Role arn:aws:iam::126666663812:role/ROLEresname is not authorized to call tag:GetResources
status code: 400, request id: ae13be5f-a155-45c3-b886-24c72cbd7fd9
```
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
N/A
### Expected Behavior
It should retry.
My code doesn't always fail, but it does sometimes :-|
<!--- What should have happened? --->
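The retry the reporter expects can be sketched as a wrapper around the failing command that retries only when the transient IAM error appears. This is a hypothetical shell sketch; the function name, pattern handling, and fixed backoff are illustrative, not the provider's actual (Go) retry logic:

```shell
# Hypothetical retry wrapper for the transient IAM error above; the
# function name, pattern, and fixed 1s backoff are illustrative only.
retry_on_error() {
  local pattern=$1 max=$2; shift 2
  local attempt out
  for attempt in $(seq 1 "$max"); do
    if out=$("$@" 2>&1); then
      printf '%s\n' "$out"
      return 0
    fi
    # Retry only when the failure matches the retryable pattern.
    printf '%s\n' "$out" | grep -q "$pattern" || { printf '%s\n' "$out" >&2; return 1; }
    sleep 1   # the real SDK would use exponential backoff with jitter
  done
  printf '%s\n' "$out" >&2
  return 1
}

# e.g.: retry_on_error "not authorized to call tag:GetResources" 5 terraform apply
```

The pattern string is taken from the error in the debug output; anything else that fails is surfaced immediately rather than retried.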
### Actual Behavior
<!--- What actually happened? --->
It errors out after the first try (as per my understanding).
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* #9297
|
non_defect
|
aws backup selection selection error creating backup selection invalidparametervalueexception please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version affected resource s aws backup selection terraform configuration files hcl copy paste your terraform configurations here for large terraform configs please use a service like dropbox and share a link to the zip file for security you can also encrypt the files using our gpg public key debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the plugin terraform provider aws action describesubnets subnetid subnet version plugin terraform provider aws plugin terraform provider aws debug response backup createbackupselection details plugin terraform provider aws plugin terraform provider aws http bad request plugin terraform provider aws connection close plugin terraform provider aws content length plugin terraform provider aws content type application json plugin terraform provider aws date tue oct gmt plugin terraform provider aws x amzn errortype invalidparametervalueexception plugin terraform provider aws x amzn requestid plugin terraform provider aws plugin terraform provider aws plugin terraform provider aws plugin terraform provider aws code error context arn aws iam role role resname 
message iam role arn aws iam role roleresname is not authorized to call tag getresources type null plugin terraform provider aws debug validate response backup createbackupselection failed attempt e rror invalidparametervalueexception iam role arn aws iam role roleresname is not authorized to call tag getresources plugin terraform provider aws status code request id root mymodule eval terraform evalapplypost err error occurred aws backup selection selection error creating backup selection invalidparametervalueexception iam role arn aws iam role roleresname is not authorized to call tag getresources status code request id root mymodule eval terraform evalsequence err error occurred aws backup selection selection error creating backup selection invalidparametervalueexception iam role arn aws iam role roleresname is not authorized to call tag getresources status code request id exiting eval tree module mymodule aws backup selection selection error error applying plan plugin waiting for all plugin processes to complete error occurred module mymodule aws backup selection selection error occurred plugin terraform provider random plugin plugin server accept unix tmp use of closed network connection plugin terraform provider aws plugin plugin server accept unix tmp use of closed network connection aws backup selection selection error creating backup selection invalidparametervalueexception iam role arn aws iam role roleresname is not authorized to call tag getresources status code request id panic output n a expected behavior it should retry my code doesn t fail always but sometimes actual behavior it errors out after first try as per my understanding steps to reproduce terraform apply important factoids references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor documentation for example
| 0
|
37,778
| 18,766,794,678
|
IssuesEvent
|
2021-11-06 03:41:01
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
Different write patterns impact read performance in ZFS 0.7 and 0.8; 0.6.5 is OK
|
Type: Performance Status: Stale
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | CenOS
Distribution Version | 7.7/8.2
Linux Kernel | 3.10.0-1062.18.1.el7.x86_64/4.18.0-193.6.3.el8_2.x86_64
Architecture | x86_64
ZFS Version | 0.8.3/0.8.4/0.7.9/0.7.13
SPL Version | 0.8.3/0.8.4/0.7.9/0.7.13
<!--
Commands to find ZFS/SPL versions:
modinfo zfs | grep -iw version
modinfo spl | grep -iw version
-->
### Describe the problem you're observing
In ZFS 0.7 and 0.8, the same data directory shows different read performance with the same application.
To get the best read performance, I have to copy the directory with the cp command (single process).
E.g.: dir A and dir B contain the same data. B was copied from A; B has the best read performance, while A was generated by the application. If you run the application against dir A, the read throughput is only 500MB/s; if you switch the application to dir B, the read throughput reaches 1GB/s.
The test did not go over the network; it just reads and writes the local zpool.
I checked `iostat 2` and `zpool iostat -lv 2` many times; the load is balanced across all disks.
### Describe how to reproduce the problem
Here is a bash script that simulates and reproduces the issue.
Script to create the zpool (I switched the hardware platform from a Dell R740 to an MD3060e; the issue is the same).
```bash
for i in {b..k}; do parted -s /dev/sd$i mklabel gpt; parted -s /dev/sd$i mkpart p1 2048s 200G; done
zpool create tank raidz2 /dev/sd{b..k}1
[0:0:0:0] enclosu DELL MD3060e 039F -
[0:0:1:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdb
[0:0:2:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdc
[0:0:3:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdd
[0:0:4:0] disk SEAGATE ST6000NM0034 MS2A /dev/sde
[0:0:5:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdf
[0:0:6:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdg
[0:0:7:0] disk SEAGATE ST6000NM0095 DS23 /dev/sdh
[0:0:8:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdi
[0:0:9:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdj
[0:0:10:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdk
```
Generate data to the zpool
```bash
cat gen.sh
start_time=$(date +%s)
test_path=/tank/test4/gen
[[ ! -f /dev/shm/1G.file ]] && openssl rand -out /dev/shm/1G.file $(( 1024*1024*1024 ))
[[ ! -d $test_path ]] && mkdir -p $test_path
for j in {0..4}
do
for i in {0..19}
do
cat /dev/shm/1G.file >> ${test_path}/rw.${j} &
done
done
wait
end_time=$(date +%s)
echo "gen files total time(secs):"$((end_time-start_time))
```
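For contrast, a single-writer variant of the generator, closer to what `cp` later does and to the layout that reads fast, would append each file from one process only. This is a sketch; the source file and target path are parameters rather than the hard-coded values above:

```shell
# Sketch of gen.sh without the parallel '&' appends: one writer per run,
# so the 20 appends of each rw.N land sequentially instead of being
# interleaved across 100 concurrent cat processes.
gen_sequential() {
  local src=$1 test_path=$2
  mkdir -p "$test_path"
  for j in 0 1 2 3 4; do
    for i in $(seq 0 19); do
      cat "$src" >> "$test_path/rw.$j"   # no '&': strictly sequential
    done
  done
}

# e.g.: gen_sequential /dev/shm/1G.file /tank/test4/gen-seq
```

Files produced this way should read back at the speed of the `gen-copy` directory, since ZFS can allocate each file's blocks contiguously.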
read generate data from this zpool
```bash
cat read-gen.sh
echo 3 > /proc/sys/vm/drop_caches
start_time=$(date +%s)
test_path=/tank/test4/gen
[[ ! -d $test_path ]] && mkdir -p $test_path
for j in {0..4}
do
for i in {0..1}
do
dd if=${test_path}/rw.${j} of=/dev/null bs=1M &
done
done
wait
end_time=$(date +%s)
echo "read gen dir total time(secs):"$((end_time-start_time))
```
copy the generate data
```bash
cp -a /tank/test4/gen /tank/test4/gen-copy
```
Test read performance from the copied dir
```bash
cat read-cp.sh
echo 3 > /proc/sys/vm/drop_caches
start_time=$(date +%s)
test_path=/tank/test4/gen-copy
[[ ! -d $test_path ]] && mkdir -p $test_path
for j in {0..4}
do
for i in {0..1}
do
dd if=${test_path}/rw.${j} of=/dev/null bs=1M &
done
done
wait
end_time=$(date +%s)
echo "read copy dir total time(secs):"$((end_time-start_time))
```
The run process
```bash
$ cd /tank/test4
$ rm -rf gen gen-copy; sh ./gen.sh ; cp -a gen gen-copy ; sh ./read-gen.sh ; sh ./read-cp.sh
# switch the read order
$ rm -rf gen gen-copy; sh ./gen.sh ; cp -a gen gen-copy ; sh ./read-cp.sh; sh ./read-gen.sh
```
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
The simulated test results (lower is better):
| test env (CentOS7) | read from generating dir (secs) | read from cp dir (secs) |
|---|---|---|
| MD3060E + 0.8.4 (new) | 84~99 | 60~61 |
| MD3060E + 0.8.4 (FRAG 13%) | 113~119 | 67~68 |
| MD3060E + 0.7.13 (new) | 94~100 | 64~67 |
| MD3060E + 0.7.13 (FRAG 51%) | 133~141 | 75~77 |
| MD3060E + 0.6.5.11 (new) | 99~100 | 101~103 |
Here are the original results from my production env. I can't re-create that zpool. I tested 20 times (10x dir A, 10x dir B) with fio 3.7, using a script to generate the fio test files in parallel.
| test env (CentOS8) | read from generating dir (MB/s) | read from cp dir (MB/s) |
|---|---|---|
| R740 + 0.8.4 (FRAG 5%) | 200~400 | 600~800 |
Thanks : )
Here is the `zdb -dddddd` output for the same file in the two dirs.
[after cp](https://drive.google.com/file/d/1fiGV2mNJ1GD5lO0kjl5unMxDBmdc4gQK/view?usp=sharing)
[parallel write](https://drive.google.com/file/d/11aaAFgRclcnt1z2LsCgIKsThKbaldBFg/view?usp=sharing)
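One way to quantify the difference between the two dumps is to check how contiguous the L0 block offsets are. Below is a rough helper; the `vdev:offset:asize` DVA field format is assumed from typical `zdb -dddddd` output and has not been verified against these exact dumps:

```shell
# Rough helper: feed it 'zdb -dddddd' output and it counts how often
# consecutive L0 block offsets jump backwards on disk, a crude proxy
# for the fragmentation visible in the 'parallel write' dump.
count_backward_jumps() {
  awk '
    # Portable hex-to-decimal conversion (no gawk strtonum needed).
    function hex2dec(h,   i, n, d) {
      n = 0
      for (i = 1; i <= length(h); i++) {
        d = index("0123456789abcdef", tolower(substr(h, i, 1))) - 1
        n = n * 16 + d
      }
      return n
    }
    / L0 [0-9]+:/ {
      split($3, dva, ":")          # DVA assumed as vdev:offset:asize
      off = hex2dec(dva[2])
      if (seen && off < prev) jumps++
      prev = off; seen = 1
    }
    END { print jumps + 0 }
  '
}

# e.g.: zdb -dddddd tank/test4 <object-id> | count_backward_jumps
```

A sequentially written file should report a count near zero, while the parallel-write dump should report many backward jumps.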
Test parameters
```bash
/sys/module/zfs/parameters/dbuf_cache_hiwater_pct 10
/sys/module/zfs/parameters/dbuf_cache_lowater_pct 10
/sys/module/zfs/parameters/dbuf_cache_max_bytes 1051399104
/sys/module/zfs/parameters/dbuf_cache_shift 5
/sys/module/zfs/parameters/dbuf_metadata_cache_max_bytes 525699552
/sys/module/zfs/parameters/dbuf_metadata_cache_shift 6
/sys/module/zfs/parameters/dmu_object_alloc_chunk_shift 7
/sys/module/zfs/parameters/dmu_prefetch_max 134217728
/sys/module/zfs/parameters/ignore_hole_birth 1
/sys/module/zfs/parameters/l2arc_feed_again 1
/sys/module/zfs/parameters/l2arc_feed_min_ms 200
/sys/module/zfs/parameters/l2arc_feed_secs 1
/sys/module/zfs/parameters/l2arc_headroom 2
/sys/module/zfs/parameters/l2arc_headroom_boost 200
/sys/module/zfs/parameters/l2arc_noprefetch 1
/sys/module/zfs/parameters/l2arc_norw 0
/sys/module/zfs/parameters/l2arc_write_boost 8388608
/sys/module/zfs/parameters/l2arc_write_max 8388608
/sys/module/zfs/parameters/metaslab_aliquot 524288
/sys/module/zfs/parameters/metaslab_bias_enabled 1
/sys/module/zfs/parameters/metaslab_debug_load 0
/sys/module/zfs/parameters/metaslab_debug_unload 1
/sys/module/zfs/parameters/metaslab_df_max_search 16777216
/sys/module/zfs/parameters/metaslab_df_use_largest_segment 0
/sys/module/zfs/parameters/metaslab_force_ganging 16777217
/sys/module/zfs/parameters/metaslab_fragmentation_factor_enabled 1
/sys/module/zfs/parameters/metaslab_lba_weighting_enabled 1
/sys/module/zfs/parameters/metaslab_preload_enabled 1
/sys/module/zfs/parameters/send_holes_without_birth_time 1
/sys/module/zfs/parameters/spa_asize_inflation 24
/sys/module/zfs/parameters/spa_config_path /etc/zfs/zpool.cache
/sys/module/zfs/parameters/spa_load_print_vdev_tree 0
/sys/module/zfs/parameters/spa_load_verify_data 1
/sys/module/zfs/parameters/spa_load_verify_metadata 1
/sys/module/zfs/parameters/spa_load_verify_shift 4
/sys/module/zfs/parameters/spa_slop_shift 5
/sys/module/zfs/parameters/vdev_removal_max_span 32768
/sys/module/zfs/parameters/vdev_validate_skip 0
/sys/module/zfs/parameters/zap_iterate_prefetch 1
/sys/module/zfs/parameters/zfetch_array_rd_sz 1048576
/sys/module/zfs/parameters/zfetch_max_distance 8388608
/sys/module/zfs/parameters/zfetch_max_streams 8
/sys/module/zfs/parameters/zfetch_min_sec_reap 2
/sys/module/zfs/parameters/zfs_abd_scatter_enabled 1
/sys/module/zfs/parameters/zfs_abd_scatter_max_order 10
/sys/module/zfs/parameters/zfs_abd_scatter_min_size 1536
/sys/module/zfs/parameters/zfs_admin_snapshot 0
/sys/module/zfs/parameters/zfs_arc_average_blocksize 8192
/sys/module/zfs/parameters/zfs_arc_dnode_limit 0
/sys/module/zfs/parameters/zfs_arc_dnode_limit_percent 10
/sys/module/zfs/parameters/zfs_arc_dnode_reduce_percent 10
/sys/module/zfs/parameters/zfs_arc_grow_retry 0
/sys/module/zfs/parameters/zfs_arc_lotsfree_percent 10
/sys/module/zfs/parameters/zfs_arc_max 0
/sys/module/zfs/parameters/zfs_arc_meta_adjust_restarts 4096
/sys/module/zfs/parameters/zfs_arc_meta_limit 0
/sys/module/zfs/parameters/zfs_arc_meta_limit_percent 75
/sys/module/zfs/parameters/zfs_arc_meta_min 0
/sys/module/zfs/parameters/zfs_arc_meta_prune 10000
/sys/module/zfs/parameters/zfs_arc_meta_strategy 1
/sys/module/zfs/parameters/zfs_arc_min 0
/sys/module/zfs/parameters/zfs_arc_min_prefetch_ms 0
/sys/module/zfs/parameters/zfs_arc_min_prescient_prefetch_ms 0
/sys/module/zfs/parameters/zfs_arc_pc_percent 0
/sys/module/zfs/parameters/zfs_arc_p_dampener_disable 1
/sys/module/zfs/parameters/zfs_arc_p_min_shift 0
/sys/module/zfs/parameters/zfs_arc_shrink_shift 0
/sys/module/zfs/parameters/zfs_arc_sys_free 0
/sys/module/zfs/parameters/zfs_async_block_max_blocks 100000
/sys/module/zfs/parameters/zfs_autoimport_disable 1
/sys/module/zfs/parameters/zfs_checksum_events_per_second 20
/sys/module/zfs/parameters/zfs_commit_timeout_pct 5
/sys/module/zfs/parameters/zfs_compressed_arc_enabled 1
/sys/module/zfs/parameters/zfs_condense_indirect_commit_entry_delay_ms 0
/sys/module/zfs/parameters/zfs_condense_indirect_vdevs_enable 1
/sys/module/zfs/parameters/zfs_condense_max_obsolete_bytes 1073741824
/sys/module/zfs/parameters/zfs_condense_min_mapping_bytes 131072
/sys/module/zfs/parameters/zfs_dbgmsg_enable 1
/sys/module/zfs/parameters/zfs_dbgmsg_maxsize 4194304
/sys/module/zfs/parameters/zfs_dbuf_state_index 0
/sys/module/zfs/parameters/zfs_ddt_data_is_special 1
/sys/module/zfs/parameters/zfs_deadman_checktime_ms 60000
/sys/module/zfs/parameters/zfs_deadman_enabled 1
/sys/module/zfs/parameters/zfs_deadman_failmode wait
/sys/module/zfs/parameters/zfs_deadman_synctime_ms 600000
/sys/module/zfs/parameters/zfs_deadman_ziotime_ms 300000
/sys/module/zfs/parameters/zfs_dedup_prefetch 0
/sys/module/zfs/parameters/zfs_delay_min_dirty_percent 60
/sys/module/zfs/parameters/zfs_delay_scale 500000
/sys/module/zfs/parameters/zfs_delete_blocks 20480
/sys/module/zfs/parameters/zfs_dirty_data_max 4294967296
/sys/module/zfs/parameters/zfs_dirty_data_max_max 4294967296
/sys/module/zfs/parameters/zfs_dirty_data_max_max_percent 25
/sys/module/zfs/parameters/zfs_dirty_data_max_percent 10
/sys/module/zfs/parameters/zfs_dirty_data_sync_percent 20
/sys/module/zfs/parameters/zfs_disable_ivset_guid_check 0
/sys/module/zfs/parameters/zfs_dmu_offset_next_sync 0
/sys/module/zfs/parameters/zfs_expire_snapshot 300
/sys/module/zfs/parameters/zfs_flags 0
/sys/module/zfs/parameters/zfs_free_bpobj_enabled 1
/sys/module/zfs/parameters/zfs_free_leak_on_eio 0
/sys/module/zfs/parameters/zfs_free_min_time_ms 1000
/sys/module/zfs/parameters/zfs_immediate_write_sz 32768
/sys/module/zfs/parameters/zfs_initialize_value 16045690984833335022
/sys/module/zfs/parameters/zfs_key_max_salt_uses 400000000
/sys/module/zfs/parameters/zfs_lua_max_instrlimit 100000000
/sys/module/zfs/parameters/zfs_lua_max_memlimit 104857600
/sys/module/zfs/parameters/zfs_max_missing_tvds 0
/sys/module/zfs/parameters/zfs_max_recordsize 1048576
/sys/module/zfs/parameters/zfs_metaslab_fragmentation_threshold 70
/sys/module/zfs/parameters/zfs_metaslab_segment_weight_enabled 1
/sys/module/zfs/parameters/zfs_metaslab_switch_threshold 2
/sys/module/zfs/parameters/zfs_mg_fragmentation_threshold 95
/sys/module/zfs/parameters/zfs_mg_noalloc_threshold 0
/sys/module/zfs/parameters/zfs_multihost_fail_intervals 10
/sys/module/zfs/parameters/zfs_multihost_history 0
/sys/module/zfs/parameters/zfs_multihost_import_intervals 20
/sys/module/zfs/parameters/zfs_multihost_interval 1000
/sys/module/zfs/parameters/zfs_multilist_num_sublists 0
/sys/module/zfs/parameters/zfs_nocacheflush 0
/sys/module/zfs/parameters/zfs_nopwrite_enabled 1
/sys/module/zfs/parameters/zfs_no_scrub_io 0
/sys/module/zfs/parameters/zfs_no_scrub_prefetch 0
/sys/module/zfs/parameters/zfs_object_mutex_size 64
/sys/module/zfs/parameters/zfs_obsolete_min_time_ms 500
/sys/module/zfs/parameters/zfs_override_estimate_recordsize 0
/sys/module/zfs/parameters/zfs_pd_bytes_max 52428800
/sys/module/zfs/parameters/zfs_per_txg_dirty_frees_percent 5
/sys/module/zfs/parameters/zfs_prefetch_disable 0
/sys/module/zfs/parameters/zfs_read_chunk_size 1048576
/sys/module/zfs/parameters/zfs_read_history 0
/sys/module/zfs/parameters/zfs_read_history_hits 0
/sys/module/zfs/parameters/zfs_reconstruct_indirect_combinations_max 4096
/sys/module/zfs/parameters/zfs_recover 0
/sys/module/zfs/parameters/zfs_recv_queue_length 16777216
/sys/module/zfs/parameters/zfs_removal_ignore_errors 0
/sys/module/zfs/parameters/zfs_removal_suspend_progress 0
/sys/module/zfs/parameters/zfs_remove_max_segment 16777216
/sys/module/zfs/parameters/zfs_resilver_disable_defer 0
/sys/module/zfs/parameters/zfs_resilver_min_time_ms 3000
/sys/module/zfs/parameters/zfs_scan_checkpoint_intval 7200
/sys/module/zfs/parameters/zfs_scan_fill_weight 3
/sys/module/zfs/parameters/zfs_scan_ignore_errors 0
/sys/module/zfs/parameters/zfs_scan_issue_strategy 0
/sys/module/zfs/parameters/zfs_scan_legacy 0
/sys/module/zfs/parameters/zfs_scan_max_ext_gap 2097152
/sys/module/zfs/parameters/zfs_scan_mem_lim_fact 20
/sys/module/zfs/parameters/zfs_scan_mem_lim_soft_fact 20
/sys/module/zfs/parameters/zfs_scan_strict_mem_lim 0
/sys/module/zfs/parameters/zfs_scan_suspend_progress 0
/sys/module/zfs/parameters/zfs_scan_vdev_limit 4194304
/sys/module/zfs/parameters/zfs_scrub_min_time_ms 1000
/sys/module/zfs/parameters/zfs_send_corrupt_data 0
/sys/module/zfs/parameters/zfs_send_queue_length 16777216
/sys/module/zfs/parameters/zfs_send_unmodified_spill_blocks 1
/sys/module/zfs/parameters/zfs_slow_io_events_per_second 20
/sys/module/zfs/parameters/zfs_spa_discard_memory_limit 16777216
/sys/module/zfs/parameters/zfs_special_class_metadata_reserve_pct 25
/sys/module/zfs/parameters/zfs_sync_pass_deferred_free 2
/sys/module/zfs/parameters/zfs_sync_pass_dont_compress 8
/sys/module/zfs/parameters/zfs_sync_pass_rewrite 2
/sys/module/zfs/parameters/zfs_sync_taskq_batch_pct 75
/sys/module/zfs/parameters/zfs_trim_extent_bytes_max 134217728
/sys/module/zfs/parameters/zfs_trim_extent_bytes_min 32768
/sys/module/zfs/parameters/zfs_trim_metaslab_skip 0
/sys/module/zfs/parameters/zfs_trim_queue_limit 10
/sys/module/zfs/parameters/zfs_trim_txg_batch 32
/sys/module/zfs/parameters/zfs_trim_txg_batch 32
/sys/module/zfs/parameters/zfs_txg_history 100
/sys/module/zfs/parameters/zfs_txg_history 100
/sys/module/zfs/parameters/zfs_txg_timeout 5
/sys/module/zfs/parameters/zfs_txg_timeout 5
/sys/module/zfs/parameters/zfs_unlink_suspend_progress 0
/sys/module/zfs/parameters/zfs_unlink_suspend_progress 0
/sys/module/zfs/parameters/zfs_user_indirect_is_special 1
/sys/module/zfs/parameters/zfs_user_indirect_is_special 1
/sys/module/zfs/parameters/zfs_vdev_aggregate_trim 0
/sys/module/zfs/parameters/zfs_vdev_aggregate_trim 0
/sys/module/zfs/parameters/zfs_vdev_aggregation_limit 1048576
/sys/module/zfs/parameters/zfs_vdev_aggregation_limit 1048576
/sys/module/zfs/parameters/zfs_vdev_aggregation_limit_non_rotating 131072
/sys/module/zfs/parameters/zfs_vdev_aggregation_limit_non_rotating 131072
/sys/module/zfs/parameters/zfs_vdev_async_read_max_active 3
/sys/module/zfs/parameters/zfs_vdev_async_read_max_active 3
/sys/module/zfs/parameters/zfs_vdev_async_read_min_active 1
/sys/module/zfs/parameters/zfs_vdev_async_read_min_active 1
/sys/module/zfs/parameters/zfs_vdev_async_write_active_max_dirty_percent 60
/sys/module/zfs/parameters/zfs_vdev_async_write_active_max_dirty_percent 60
/sys/module/zfs/parameters/zfs_vdev_async_write_active_min_dirty_percent 30
/sys/module/zfs/parameters/zfs_vdev_async_write_active_min_dirty_percent 30
/sys/module/zfs/parameters/zfs_vdev_async_write_max_active 10
/sys/module/zfs/parameters/zfs_vdev_async_write_max_active 10
/sys/module/zfs/parameters/zfs_vdev_async_write_min_active 2
/sys/module/zfs/parameters/zfs_vdev_async_write_min_active 2
/sys/module/zfs/parameters/zfs_vdev_cache_bshift 16
/sys/module/zfs/parameters/zfs_vdev_cache_bshift 16
/sys/module/zfs/parameters/zfs_vdev_cache_max 16384
/sys/module/zfs/parameters/zfs_vdev_cache_max 16384
/sys/module/zfs/parameters/zfs_vdev_cache_size 0
/sys/module/zfs/parameters/zfs_vdev_cache_size 0
/sys/module/zfs/parameters/zfs_vdev_default_ms_count 200
/sys/module/zfs/parameters/zfs_vdev_default_ms_count 200
/sys/module/zfs/parameters/zfs_vdev_initializing_max_active 1
/sys/module/zfs/parameters/zfs_vdev_initializing_max_active 1
/sys/module/zfs/parameters/zfs_vdev_initializing_min_active 1
/sys/module/zfs/parameters/zfs_vdev_initializing_min_active 1
/sys/module/zfs/parameters/zfs_vdev_max_active 1000
/sys/module/zfs/parameters/zfs_vdev_max_active 1000
/sys/module/zfs/parameters/zfs_vdev_min_ms_count 16
/sys/module/zfs/parameters/zfs_vdev_min_ms_count 16
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_inc 0
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_inc 0
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_seek_inc 1
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_seek_inc 1
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_inc 0
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_inc 0
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_inc 5
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_inc 5
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_offset 1048576
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_offset 1048576
/sys/module/zfs/parameters/zfs_vdev_ms_count_limit 131072
/sys/module/zfs/parameters/zfs_vdev_ms_count_limit 131072
/sys/module/zfs/parameters/zfs_vdev_queue_depth_pct 1000
/sys/module/zfs/parameters/zfs_vdev_queue_depth_pct 1000
/sys/module/zfs/parameters/zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3 avx2
/sys/module/zfs/parameters/zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3 avx2
/sys/module/zfs/parameters/zfs_vdev_read_gap_limit 32768
/sys/module/zfs/parameters/zfs_vdev_read_gap_limit 32768
/sys/module/zfs/parameters/zfs_vdev_removal_max_active 2
/sys/module/zfs/parameters/zfs_vdev_removal_max_active 2
/sys/module/zfs/parameters/zfs_vdev_removal_min_active 1
/sys/module/zfs/parameters/zfs_vdev_removal_min_active 1
/sys/module/zfs/parameters/zfs_vdev_scheduler deadline
/sys/module/zfs/parameters/zfs_vdev_scheduler deadline
/sys/module/zfs/parameters/zfs_vdev_scrub_max_active 2
/sys/module/zfs/parameters/zfs_vdev_scrub_max_active 2
/sys/module/zfs/parameters/zfs_vdev_scrub_min_active 1
/sys/module/zfs/parameters/zfs_vdev_scrub_min_active 1
/sys/module/zfs/parameters/zfs_vdev_sync_read_max_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_read_max_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_read_min_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_read_min_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_write_max_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_write_max_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_write_min_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_write_min_active 10
/sys/module/zfs/parameters/zfs_vdev_trim_max_active 2
/sys/module/zfs/parameters/zfs_vdev_trim_max_active 2
/sys/module/zfs/parameters/zfs_vdev_trim_min_active 1
/sys/module/zfs/parameters/zfs_vdev_trim_min_active 1
/sys/module/zfs/parameters/zfs_vdev_write_gap_limit 4096
/sys/module/zfs/parameters/zfs_vdev_write_gap_limit 4096
/sys/module/zfs/parameters/zfs_zevent_cols 80
/sys/module/zfs/parameters/zfs_zevent_cols 80
/sys/module/zfs/parameters/zfs_zevent_console 0
/sys/module/zfs/parameters/zfs_zevent_console 0
/sys/module/zfs/parameters/zfs_zevent_len_max 128
/sys/module/zfs/parameters/zfs_zevent_len_max 128
/sys/module/zfs/parameters/zfs_zil_clean_taskq_maxalloc 1048576
/sys/module/zfs/parameters/zfs_zil_clean_taskq_maxalloc 1048576
/sys/module/zfs/parameters/zfs_zil_clean_taskq_minalloc 1024
/sys/module/zfs/parameters/zfs_zil_clean_taskq_minalloc 1024
/sys/module/zfs/parameters/zfs_zil_clean_taskq_nthr_pct 100
/sys/module/zfs/parameters/zfs_zil_clean_taskq_nthr_pct 100
/sys/module/zfs/parameters/zil_maxblocksize 131072
/sys/module/zfs/parameters/zil_maxblocksize 131072
/sys/module/zfs/parameters/zil_nocacheflush 0
/sys/module/zfs/parameters/zil_nocacheflush 0
/sys/module/zfs/parameters/zil_replay_disable 0
/sys/module/zfs/parameters/zil_replay_disable 0
/sys/module/zfs/parameters/zil_slog_bulk 786432
/sys/module/zfs/parameters/zil_slog_bulk 786432
/sys/module/zfs/parameters/zio_deadman_log_all 0
/sys/module/zfs/parameters/zio_deadman_log_all 0
/sys/module/zfs/parameters/zio_dva_throttle_enabled 0
/sys/module/zfs/parameters/zio_dva_throttle_enabled 0
/sys/module/zfs/parameters/zio_requeue_io_start_cut_in_line 1
/sys/module/zfs/parameters/zio_requeue_io_start_cut_in_line 1
/sys/module/zfs/parameters/zio_slow_io_ms 30000
/sys/module/zfs/parameters/zio_slow_io_ms 30000
/sys/module/zfs/parameters/zio_taskq_batch_pct 75
/sys/module/zfs/parameters/zio_taskq_batch_pct 75
/sys/module/zfs/parameters/zvol_inhibit_dev 0
/sys/module/zfs/parameters/zvol_inhibit_dev 0
/sys/module/zfs/parameters/zvol_major 230
/sys/module/zfs/parameters/zvol_major 230
/sys/module/zfs/parameters/zvol_max_discard_blocks 16384
/sys/module/zfs/parameters/zvol_max_discard_blocks 16384
/sys/module/zfs/parameters/zvol_prefetch_bytes 131072
/sys/module/zfs/parameters/zvol_prefetch_bytes 131072
/sys/module/zfs/parameters/zvol_request_sync 0
/sys/module/zfs/parameters/zvol_request_sync 0
/sys/module/zfs/parameters/zvol_threads 32
/sys/module/zfs/parameters/zvol_threads 32
/sys/module/zfs/parameters/zvol_volmode 1
/sys/module/zfs/parameters/zvol_volmode 1
```
Different write patterns impact read performance on ZFS 0.7 and 0.8; 0.6.5 is OK
### System information
Type | Version/Name
--- | ---
Distribution Name | CentOS
Distribution Version | 7.7/8.2
Linux Kernel | 3.10.0-1062.18.1.el7.x86_64/4.18.0-193.6.3.el8_2.x86_64
Architecture | x86_64
ZFS Version | 0.8.3/0.8.4/0.7.9/0.7.13
SPL Version | 0.8.3/0.8.4/0.7.9/0.7.13
### Describe the problem you're observing
On ZFS 0.7 and 0.8, the same data in two directories shows different read performance with the same application.
To get the best read performance, I have to copy the directory with the `cp` command (a single process).
For example: dir A and dir B hold the same data. A was generated by the application (parallel writers); B was copied from A and has the best read performance. Running the application against dir A yields only ~500 MB/s of read throughput, while switching it to dir B reaches ~1 GB/s.
The test does not go over the network; it only reads and writes the local zpool.
I checked `iostat 2` and `zpool iostat -lv 2` repeatedly; the load is balanced across all disks.
### Describe how to reproduce the problem
Here is a bash script that simulates and reproduces the issue.
Pool creation script; I switched the hardware platform from a Dell R740 to an MD3060e and the issue is the same.
```bash
for i in {b..k}; do parted -s /dev/sd$i mklabel gpt; parted -s /dev/sd$i mkpart p1 2048s 200G; done
zpool create tank raidz2 /dev/sd{b..k}1
# lsscsi output for the enclosure and disks used:
[0:0:0:0] enclosu DELL MD3060e 039F -
[0:0:1:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdb
[0:0:2:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdc
[0:0:3:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdd
[0:0:4:0] disk SEAGATE ST6000NM0034 MS2A /dev/sde
[0:0:5:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdf
[0:0:6:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdg
[0:0:7:0] disk SEAGATE ST6000NM0095 DS23 /dev/sdh
[0:0:8:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdi
[0:0:9:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdj
[0:0:10:0] disk SEAGATE ST6000NM0034 MS2A /dev/sdk
```
Script to generate data on the zpool:
```bash
cat gen.sh
start_time=$(date +%s)
test_path=/tank/test4/gen
[[ ! -f /dev/shm/1G.file ]] && openssl rand -out /dev/shm/1G.file $(( 1024*1024*1024 ))
[[ ! -d $test_path ]] && mkdir -p $test_path
for j in {0..4}
do
for i in {0..19}
do
cat /dev/shm/1G.file >> ${test_path}/rw.${j} &
done
done
wait
end_time=$(date +%s)
echo "gen files total time(secs):"$((end_time-start_time))
```
Script to read the generated data back from the zpool:
```bash
cat read-gen.sh
echo 3 > /proc/sys/vm/drop_caches
start_time=$(date +%s)
test_path=/tank/test4/gen
[[ ! -d $test_path ]] && mkdir -p $test_path
for j in {0..4}
do
for i in {0..1}
do
dd if=${test_path}/rw.${j} of=/dev/null bs=1M &
done
done
wait
end_time=$(date +%s)
echo "read gen dir total time(secs):"$((end_time-start_time))
```
Copy the generated data with a single `cp` process:
```bash
cp -a /tank/test4/gen /tank/test4/gen-copy
```
Script to test read performance from the copied dir:
```bash
cat read-cp.sh
echo 3 > /proc/sys/vm/drop_caches
start_time=$(date +%s)
test_path=/tank/test4/gen-copy
[[ ! -d $test_path ]] && mkdir -p $test_path
for j in {0..4}
do
for i in {0..1}
do
dd if=${test_path}/rw.${j} of=/dev/null bs=1M &
done
done
wait
end_time=$(date +%s)
echo "read copy dir total time(secs):"$((end_time-start_time))
```
The full run sequence:
```bash
$ cd /tank/test4
$ rm -rf gen gen-copy; sh ./gen.sh ; cp -a gen gen-copy ; sh ./read-gen.sh ; sh ./read-cp.sh
# switch the read order
$ rm -rf gen gen-copy; sh ./gen.sh ; cp -a gen gen-copy ; sh ./read-cp.sh; sh ./read-gen.sh
```
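To see the allocation difference directly rather than only through timings, one can dump and diff the block pointers of the same file in both directories with `zdb`. This is a sketch under assumptions: the dataset name `tank/test4` and file paths are taken from the scripts above, `zdb` must be available on the host, and the object number passed to `zdb` is the file's inode number as reported by `ls -i`.

```shell
#!/bin/sh
# Sketch: dump and compare the on-disk block-pointer layout of the
# parallel-written file vs its cp copy. Dataset name and paths are
# assumptions based on the reproduction scripts above.
if [ -f /tank/test4/gen/rw.0 ] && command -v zdb >/dev/null 2>&1; then
    # A ZFS file's object number equals its inode number.
    obj_gen=$(ls -i /tank/test4/gen/rw.0 | awk '{print $1}')
    obj_cp=$(ls -i /tank/test4/gen-copy/rw.0 | awk '{print $1}')
    # zdb -dddddd <dataset> <object> prints the object's block-pointer tree.
    zdb -dddddd tank/test4 "$obj_gen" > /tmp/bp-gen.txt
    zdb -dddddd tank/test4 "$obj_cp"  > /tmp/bp-cp.txt
    diff /tmp/bp-gen.txt /tmp/bp-cp.txt | head -40
else
    echo "skipping: run on the ZFS host from the reproduction above"
fi
```

If the hypothesis holds, the parallel-written object's DVAs should be scattered across many distant offsets while the cp copy's are largely consecutive.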
### Include any warning/errors/backtraces from the system logs
Simulated test results (lower is better):
| test env (CentOS7) | read from generating dir (secs) | read from cp dir (secs) |
|---|---|---|
| MD3060E + 0.8.4 (new) | 84~99 | 60~61 |
| MD3060E + 0.8.4 (FRAG 13%) | 113~119 | 67~68 |
| MD3060E + 0.7.13 (new) | 94~100 | 64~67 |
| MD3060E + 0.7.13 (FRAG 51%) | 133~141 | 75~77 |
| MD3060E + 0.6.5.11 (new) | 99~100 | 101~103 |
Here are the original numbers from my production environment (I can't re-create that zpool). I ran the test 20 times: 10x dir A and 10x dir B, with fio 3.7; the fio test files were generated in parallel by a script.
| test env (CentOS8) | read from generating dir (MB/s) | read from cp dir (MB/s) |
|---|---|---|
| R740 + 0.8.4 (FRAG 5%) | 200~400 | 600~800 |
Thanks : )
Here is `zdb -dddddd` output for the same file in the two directories:
[after cp](https://drive.google.com/file/d/1fiGV2mNJ1GD5lO0kjl5unMxDBmdc4gQK/view?usp=sharing)
[parallel write](https://drive.google.com/file/d/11aaAFgRclcnt1z2LsCgIKsThKbaldBFg/view?usp=sharing)
Test parameters
```bash
/sys/module/zfs/parameters/dbuf_cache_hiwater_pct 10
/sys/module/zfs/parameters/dbuf_cache_lowater_pct 10
/sys/module/zfs/parameters/dbuf_cache_max_bytes 1051399104
/sys/module/zfs/parameters/dbuf_cache_shift 5
/sys/module/zfs/parameters/dbuf_metadata_cache_max_bytes 525699552
/sys/module/zfs/parameters/dbuf_metadata_cache_shift 6
/sys/module/zfs/parameters/dmu_object_alloc_chunk_shift 7
/sys/module/zfs/parameters/dmu_prefetch_max 134217728
/sys/module/zfs/parameters/ignore_hole_birth 1
/sys/module/zfs/parameters/l2arc_feed_again 1
/sys/module/zfs/parameters/l2arc_feed_min_ms 200
/sys/module/zfs/parameters/l2arc_feed_secs 1
/sys/module/zfs/parameters/l2arc_headroom 2
/sys/module/zfs/parameters/l2arc_headroom_boost 200
/sys/module/zfs/parameters/l2arc_noprefetch 1
/sys/module/zfs/parameters/l2arc_norw 0
/sys/module/zfs/parameters/l2arc_write_boost 8388608
/sys/module/zfs/parameters/l2arc_write_max 8388608
/sys/module/zfs/parameters/metaslab_aliquot 524288
/sys/module/zfs/parameters/metaslab_bias_enabled 1
/sys/module/zfs/parameters/metaslab_debug_load 0
/sys/module/zfs/parameters/metaslab_debug_unload 1
/sys/module/zfs/parameters/metaslab_df_max_search 16777216
/sys/module/zfs/parameters/metaslab_df_use_largest_segment 0
/sys/module/zfs/parameters/metaslab_force_ganging 16777217
/sys/module/zfs/parameters/metaslab_fragmentation_factor_enabled 1
/sys/module/zfs/parameters/metaslab_lba_weighting_enabled 1
/sys/module/zfs/parameters/metaslab_preload_enabled 1
/sys/module/zfs/parameters/send_holes_without_birth_time 1
/sys/module/zfs/parameters/spa_asize_inflation 24
/sys/module/zfs/parameters/spa_config_path /etc/zfs/zpool.cache
/sys/module/zfs/parameters/spa_load_print_vdev_tree 0
/sys/module/zfs/parameters/spa_load_verify_data 1
/sys/module/zfs/parameters/spa_load_verify_metadata 1
/sys/module/zfs/parameters/spa_load_verify_shift 4
/sys/module/zfs/parameters/spa_slop_shift 5
/sys/module/zfs/parameters/vdev_removal_max_span 32768
/sys/module/zfs/parameters/vdev_validate_skip 0
/sys/module/zfs/parameters/zap_iterate_prefetch 1
/sys/module/zfs/parameters/zfetch_array_rd_sz 1048576
/sys/module/zfs/parameters/zfetch_max_distance 8388608
/sys/module/zfs/parameters/zfetch_max_streams 8
/sys/module/zfs/parameters/zfetch_min_sec_reap 2
/sys/module/zfs/parameters/zfs_abd_scatter_enabled 1
/sys/module/zfs/parameters/zfs_abd_scatter_max_order 10
/sys/module/zfs/parameters/zfs_abd_scatter_min_size 1536
/sys/module/zfs/parameters/zfs_admin_snapshot 0
/sys/module/zfs/parameters/zfs_arc_average_blocksize 8192
/sys/module/zfs/parameters/zfs_arc_dnode_limit 0
/sys/module/zfs/parameters/zfs_arc_dnode_limit_percent 10
/sys/module/zfs/parameters/zfs_arc_dnode_reduce_percent 10
/sys/module/zfs/parameters/zfs_arc_grow_retry 0
/sys/module/zfs/parameters/zfs_arc_lotsfree_percent 10
/sys/module/zfs/parameters/zfs_arc_max 0
/sys/module/zfs/parameters/zfs_arc_meta_adjust_restarts 4096
/sys/module/zfs/parameters/zfs_arc_meta_limit 0
/sys/module/zfs/parameters/zfs_arc_meta_limit_percent 75
/sys/module/zfs/parameters/zfs_arc_meta_min 0
/sys/module/zfs/parameters/zfs_arc_meta_prune 10000
/sys/module/zfs/parameters/zfs_arc_meta_strategy 1
/sys/module/zfs/parameters/zfs_arc_min 0
/sys/module/zfs/parameters/zfs_arc_min_prefetch_ms 0
/sys/module/zfs/parameters/zfs_arc_min_prescient_prefetch_ms 0
/sys/module/zfs/parameters/zfs_arc_pc_percent 0
/sys/module/zfs/parameters/zfs_arc_p_dampener_disable 1
/sys/module/zfs/parameters/zfs_arc_p_min_shift 0
/sys/module/zfs/parameters/zfs_arc_shrink_shift 0
/sys/module/zfs/parameters/zfs_arc_sys_free 0
/sys/module/zfs/parameters/zfs_async_block_max_blocks 100000
/sys/module/zfs/parameters/zfs_autoimport_disable 1
/sys/module/zfs/parameters/zfs_checksum_events_per_second 20
/sys/module/zfs/parameters/zfs_commit_timeout_pct 5
/sys/module/zfs/parameters/zfs_compressed_arc_enabled 1
/sys/module/zfs/parameters/zfs_condense_indirect_commit_entry_delay_ms 0
/sys/module/zfs/parameters/zfs_condense_indirect_vdevs_enable 1
/sys/module/zfs/parameters/zfs_condense_max_obsolete_bytes 1073741824
/sys/module/zfs/parameters/zfs_condense_min_mapping_bytes 131072
/sys/module/zfs/parameters/zfs_dbgmsg_enable 1
/sys/module/zfs/parameters/zfs_dbgmsg_maxsize 4194304
/sys/module/zfs/parameters/zfs_dbuf_state_index 0
/sys/module/zfs/parameters/zfs_ddt_data_is_special 1
/sys/module/zfs/parameters/zfs_deadman_checktime_ms 60000
/sys/module/zfs/parameters/zfs_deadman_enabled 1
/sys/module/zfs/parameters/zfs_deadman_failmode wait
/sys/module/zfs/parameters/zfs_deadman_synctime_ms 600000
/sys/module/zfs/parameters/zfs_deadman_ziotime_ms 300000
/sys/module/zfs/parameters/zfs_dedup_prefetch 0
/sys/module/zfs/parameters/zfs_delay_min_dirty_percent 60
/sys/module/zfs/parameters/zfs_delay_scale 500000
/sys/module/zfs/parameters/zfs_delete_blocks 20480
/sys/module/zfs/parameters/zfs_dirty_data_max 4294967296
/sys/module/zfs/parameters/zfs_dirty_data_max_max 4294967296
/sys/module/zfs/parameters/zfs_dirty_data_max_max_percent 25
/sys/module/zfs/parameters/zfs_dirty_data_max_percent 10
/sys/module/zfs/parameters/zfs_dirty_data_sync_percent 20
/sys/module/zfs/parameters/zfs_disable_ivset_guid_check 0
/sys/module/zfs/parameters/zfs_dmu_offset_next_sync 0
/sys/module/zfs/parameters/zfs_expire_snapshot 300
/sys/module/zfs/parameters/zfs_flags 0
/sys/module/zfs/parameters/zfs_free_bpobj_enabled 1
/sys/module/zfs/parameters/zfs_free_leak_on_eio 0
/sys/module/zfs/parameters/zfs_free_min_time_ms 1000
/sys/module/zfs/parameters/zfs_immediate_write_sz 32768
/sys/module/zfs/parameters/zfs_initialize_value 16045690984833335022
/sys/module/zfs/parameters/zfs_key_max_salt_uses 400000000
/sys/module/zfs/parameters/zfs_lua_max_instrlimit 100000000
/sys/module/zfs/parameters/zfs_lua_max_memlimit 104857600
/sys/module/zfs/parameters/zfs_max_missing_tvds 0
/sys/module/zfs/parameters/zfs_max_recordsize 1048576
/sys/module/zfs/parameters/zfs_metaslab_fragmentation_threshold 70
/sys/module/zfs/parameters/zfs_metaslab_segment_weight_enabled 1
/sys/module/zfs/parameters/zfs_metaslab_switch_threshold 2
/sys/module/zfs/parameters/zfs_mg_fragmentation_threshold 95
/sys/module/zfs/parameters/zfs_mg_noalloc_threshold 0
/sys/module/zfs/parameters/zfs_multihost_fail_intervals 10
/sys/module/zfs/parameters/zfs_multihost_history 0
/sys/module/zfs/parameters/zfs_multihost_import_intervals 20
/sys/module/zfs/parameters/zfs_multihost_interval 1000
/sys/module/zfs/parameters/zfs_multilist_num_sublists 0
/sys/module/zfs/parameters/zfs_nocacheflush 0
/sys/module/zfs/parameters/zfs_nopwrite_enabled 1
/sys/module/zfs/parameters/zfs_no_scrub_io 0
/sys/module/zfs/parameters/zfs_no_scrub_prefetch 0
/sys/module/zfs/parameters/zfs_object_mutex_size 64
/sys/module/zfs/parameters/zfs_obsolete_min_time_ms 500
/sys/module/zfs/parameters/zfs_override_estimate_recordsize 0
/sys/module/zfs/parameters/zfs_pd_bytes_max 52428800
/sys/module/zfs/parameters/zfs_per_txg_dirty_frees_percent 5
/sys/module/zfs/parameters/zfs_prefetch_disable 0
/sys/module/zfs/parameters/zfs_read_chunk_size 1048576
/sys/module/zfs/parameters/zfs_read_history 0
/sys/module/zfs/parameters/zfs_read_history_hits 0
/sys/module/zfs/parameters/zfs_reconstruct_indirect_combinations_max 4096
/sys/module/zfs/parameters/zfs_recover 0
/sys/module/zfs/parameters/zfs_recv_queue_length 16777216
/sys/module/zfs/parameters/zfs_removal_ignore_errors 0
/sys/module/zfs/parameters/zfs_removal_suspend_progress 0
/sys/module/zfs/parameters/zfs_remove_max_segment 16777216
/sys/module/zfs/parameters/zfs_resilver_disable_defer 0
/sys/module/zfs/parameters/zfs_resilver_min_time_ms 3000
/sys/module/zfs/parameters/zfs_scan_checkpoint_intval 7200
/sys/module/zfs/parameters/zfs_scan_fill_weight 3
/sys/module/zfs/parameters/zfs_scan_ignore_errors 0
/sys/module/zfs/parameters/zfs_scan_issue_strategy 0
/sys/module/zfs/parameters/zfs_scan_legacy 0
/sys/module/zfs/parameters/zfs_scan_max_ext_gap 2097152
/sys/module/zfs/parameters/zfs_scan_mem_lim_fact 20
/sys/module/zfs/parameters/zfs_scan_mem_lim_soft_fact 20
/sys/module/zfs/parameters/zfs_scan_strict_mem_lim 0
/sys/module/zfs/parameters/zfs_scan_suspend_progress 0
/sys/module/zfs/parameters/zfs_scan_vdev_limit 4194304
/sys/module/zfs/parameters/zfs_scrub_min_time_ms 1000
/sys/module/zfs/parameters/zfs_send_corrupt_data 0
/sys/module/zfs/parameters/zfs_send_queue_length 16777216
/sys/module/zfs/parameters/zfs_send_unmodified_spill_blocks 1
/sys/module/zfs/parameters/zfs_slow_io_events_per_second 20
/sys/module/zfs/parameters/zfs_spa_discard_memory_limit 16777216
/sys/module/zfs/parameters/zfs_special_class_metadata_reserve_pct 25
/sys/module/zfs/parameters/zfs_sync_pass_deferred_free 2
/sys/module/zfs/parameters/zfs_sync_pass_dont_compress 8
/sys/module/zfs/parameters/zfs_sync_pass_rewrite 2
/sys/module/zfs/parameters/zfs_sync_taskq_batch_pct 75
/sys/module/zfs/parameters/zfs_trim_extent_bytes_max 134217728
/sys/module/zfs/parameters/zfs_trim_extent_bytes_min 32768
/sys/module/zfs/parameters/zfs_trim_metaslab_skip 0
/sys/module/zfs/parameters/zfs_trim_queue_limit 10
/sys/module/zfs/parameters/zfs_trim_txg_batch 32
/sys/module/zfs/parameters/zfs_txg_history 100
/sys/module/zfs/parameters/zfs_txg_timeout 5
/sys/module/zfs/parameters/zfs_unlink_suspend_progress 0
/sys/module/zfs/parameters/zfs_user_indirect_is_special 1
/sys/module/zfs/parameters/zfs_vdev_aggregate_trim 0
/sys/module/zfs/parameters/zfs_vdev_aggregation_limit 1048576
/sys/module/zfs/parameters/zfs_vdev_aggregation_limit_non_rotating 131072
/sys/module/zfs/parameters/zfs_vdev_async_read_max_active 3
/sys/module/zfs/parameters/zfs_vdev_async_read_min_active 1
/sys/module/zfs/parameters/zfs_vdev_async_write_active_max_dirty_percent 60
/sys/module/zfs/parameters/zfs_vdev_async_write_active_min_dirty_percent 30
/sys/module/zfs/parameters/zfs_vdev_async_write_active_min_dirty_percent 30
/sys/module/zfs/parameters/zfs_vdev_async_write_max_active 10
/sys/module/zfs/parameters/zfs_vdev_async_write_max_active 10
/sys/module/zfs/parameters/zfs_vdev_async_write_min_active 2
/sys/module/zfs/parameters/zfs_vdev_async_write_min_active 2
/sys/module/zfs/parameters/zfs_vdev_cache_bshift 16
/sys/module/zfs/parameters/zfs_vdev_cache_bshift 16
/sys/module/zfs/parameters/zfs_vdev_cache_max 16384
/sys/module/zfs/parameters/zfs_vdev_cache_max 16384
/sys/module/zfs/parameters/zfs_vdev_cache_size 0
/sys/module/zfs/parameters/zfs_vdev_cache_size 0
/sys/module/zfs/parameters/zfs_vdev_default_ms_count 200
/sys/module/zfs/parameters/zfs_vdev_default_ms_count 200
/sys/module/zfs/parameters/zfs_vdev_initializing_max_active 1
/sys/module/zfs/parameters/zfs_vdev_initializing_max_active 1
/sys/module/zfs/parameters/zfs_vdev_initializing_min_active 1
/sys/module/zfs/parameters/zfs_vdev_initializing_min_active 1
/sys/module/zfs/parameters/zfs_vdev_max_active 1000
/sys/module/zfs/parameters/zfs_vdev_max_active 1000
/sys/module/zfs/parameters/zfs_vdev_min_ms_count 16
/sys/module/zfs/parameters/zfs_vdev_min_ms_count 16
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_inc 0
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_inc 0
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_seek_inc 1
/sys/module/zfs/parameters/zfs_vdev_mirror_non_rotating_seek_inc 1
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_inc 0
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_inc 0
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_inc 5
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_inc 5
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_offset 1048576
/sys/module/zfs/parameters/zfs_vdev_mirror_rotating_seek_offset 1048576
/sys/module/zfs/parameters/zfs_vdev_ms_count_limit 131072
/sys/module/zfs/parameters/zfs_vdev_ms_count_limit 131072
/sys/module/zfs/parameters/zfs_vdev_queue_depth_pct 1000
/sys/module/zfs/parameters/zfs_vdev_queue_depth_pct 1000
/sys/module/zfs/parameters/zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3 avx2
/sys/module/zfs/parameters/zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3 avx2
/sys/module/zfs/parameters/zfs_vdev_read_gap_limit 32768
/sys/module/zfs/parameters/zfs_vdev_read_gap_limit 32768
/sys/module/zfs/parameters/zfs_vdev_removal_max_active 2
/sys/module/zfs/parameters/zfs_vdev_removal_max_active 2
/sys/module/zfs/parameters/zfs_vdev_removal_min_active 1
/sys/module/zfs/parameters/zfs_vdev_removal_min_active 1
/sys/module/zfs/parameters/zfs_vdev_scheduler deadline
/sys/module/zfs/parameters/zfs_vdev_scheduler deadline
/sys/module/zfs/parameters/zfs_vdev_scrub_max_active 2
/sys/module/zfs/parameters/zfs_vdev_scrub_max_active 2
/sys/module/zfs/parameters/zfs_vdev_scrub_min_active 1
/sys/module/zfs/parameters/zfs_vdev_scrub_min_active 1
/sys/module/zfs/parameters/zfs_vdev_sync_read_max_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_read_max_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_read_min_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_read_min_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_write_max_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_write_max_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_write_min_active 10
/sys/module/zfs/parameters/zfs_vdev_sync_write_min_active 10
/sys/module/zfs/parameters/zfs_vdev_trim_max_active 2
/sys/module/zfs/parameters/zfs_vdev_trim_max_active 2
/sys/module/zfs/parameters/zfs_vdev_trim_min_active 1
/sys/module/zfs/parameters/zfs_vdev_trim_min_active 1
/sys/module/zfs/parameters/zfs_vdev_write_gap_limit 4096
/sys/module/zfs/parameters/zfs_vdev_write_gap_limit 4096
/sys/module/zfs/parameters/zfs_zevent_cols 80
/sys/module/zfs/parameters/zfs_zevent_cols 80
/sys/module/zfs/parameters/zfs_zevent_console 0
/sys/module/zfs/parameters/zfs_zevent_console 0
/sys/module/zfs/parameters/zfs_zevent_len_max 128
/sys/module/zfs/parameters/zfs_zevent_len_max 128
/sys/module/zfs/parameters/zfs_zil_clean_taskq_maxalloc 1048576
/sys/module/zfs/parameters/zfs_zil_clean_taskq_maxalloc 1048576
/sys/module/zfs/parameters/zfs_zil_clean_taskq_minalloc 1024
/sys/module/zfs/parameters/zfs_zil_clean_taskq_minalloc 1024
/sys/module/zfs/parameters/zfs_zil_clean_taskq_nthr_pct 100
/sys/module/zfs/parameters/zfs_zil_clean_taskq_nthr_pct 100
/sys/module/zfs/parameters/zil_maxblocksize 131072
/sys/module/zfs/parameters/zil_maxblocksize 131072
/sys/module/zfs/parameters/zil_nocacheflush 0
/sys/module/zfs/parameters/zil_nocacheflush 0
/sys/module/zfs/parameters/zil_replay_disable 0
/sys/module/zfs/parameters/zil_replay_disable 0
/sys/module/zfs/parameters/zil_slog_bulk 786432
/sys/module/zfs/parameters/zil_slog_bulk 786432
/sys/module/zfs/parameters/zio_deadman_log_all 0
/sys/module/zfs/parameters/zio_deadman_log_all 0
/sys/module/zfs/parameters/zio_dva_throttle_enabled 0
/sys/module/zfs/parameters/zio_dva_throttle_enabled 0
/sys/module/zfs/parameters/zio_requeue_io_start_cut_in_line 1
/sys/module/zfs/parameters/zio_requeue_io_start_cut_in_line 1
/sys/module/zfs/parameters/zio_slow_io_ms 30000
/sys/module/zfs/parameters/zio_slow_io_ms 30000
/sys/module/zfs/parameters/zio_taskq_batch_pct 75
/sys/module/zfs/parameters/zio_taskq_batch_pct 75
/sys/module/zfs/parameters/zvol_inhibit_dev 0
/sys/module/zfs/parameters/zvol_inhibit_dev 0
/sys/module/zfs/parameters/zvol_major 230
/sys/module/zfs/parameters/zvol_major 230
/sys/module/zfs/parameters/zvol_max_discard_blocks 16384
/sys/module/zfs/parameters/zvol_max_discard_blocks 16384
/sys/module/zfs/parameters/zvol_prefetch_bytes 131072
/sys/module/zfs/parameters/zvol_prefetch_bytes 131072
/sys/module/zfs/parameters/zvol_request_sync 0
/sys/module/zfs/parameters/zvol_request_sync 0
/sys/module/zfs/parameters/zvol_threads 32
/sys/module/zfs/parameters/zvol_threads 32
/sys/module/zfs/parameters/zvol_volmode 1
/sys/module/zfs/parameters/zvol_volmode 1
```
|
non_defect
|
**Different writing impacts the read performance in ZFS, and is OK**

> Thank you for reporting an issue!
> **Important:** please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing-list archives. Please fill in as much of the template as possible.

**System information**

| Type | Version/Name |
| --- | --- |
| Distribution Name | CentOS |
| Distribution Version | … |
| Linux Kernel | … |
| Architecture | … |
| ZFS Version | … |
| SPL Version | … |

Commands to find the ZFS/SPL versions: `modinfo zfs | grep -iw version`, `modinfo spl | grep -iw version`

**Describe the problem you're observing**

In ZFS, the same data directory shows different read performance with the same application. To get the best read performance, I have to copy the directory with the `cp` command (single process). E.g. dir A and dir B hold the same data, and B was copied from A: B has the best read performance, while A was generated by the application. If you run the application against dir A, the read throughput is only … MB/s; if you switch the application to dir B, the read throughput reaches … MB/s. The test does not pass over the network; it just reads and writes the local zpool. I checked iostat and `zpool iostat -lv` a lot, and the load is balanced across all disks.

**Describe how to reproduce the problem**

Here is a simulation script that shows the issue. (I switched the hardware platform from Dell to …; the issue is the same.) The enclosure is a Dell with Seagate disks /dev/sdb through /dev/sdk.

Create the zpool:

```bash
for i in {b..k}; do
    parted -s /dev/sd$i mklabel gpt
    parted -s /dev/sd$i mkpart …
done
zpool create tank /dev/sd{b..k}
```

Generate data on the zpool:

```bash
$ cat gen.sh
start_time=$(date +%s)
test_path=/tank/gen

openssl rand … -out /dev/shm/file
mkdir -p $test_path
for j in …; do
    for i in …; do
        cat /dev/shm/file >> $test_path/rw$j
    done
done
wait
end_time=$(date +%s)
echo "gen files total time: $((end_time - start_time)) secs"
```

Read the generated data from the zpool:

```bash
$ cat read_gen.sh
echo … > /proc/sys/vm/drop_caches
start_time=$(date +%s)
test_path=/tank/gen
mkdir -p $test_path
for j in …; do
    for i in …; do
        dd if=$test_path/rw$j of=/dev/null bs=…
    done
done
wait
end_time=$(date +%s)
echo "read gen dir total time: $((end_time - start_time)) secs"
```

Copy the generated data:

```bash
cp -a /tank/gen /tank/gen_copy
```

Test the read performance from the copied dir:

```bash
$ cat read_cp.sh
echo … > /proc/sys/vm/drop_caches
start_time=$(date +%s)
test_path=/tank/gen_copy
mkdir -p $test_path
for j in …; do
    for i in …; do
        dd if=$test_path/rw$j of=/dev/null bs=…
    done
done
wait
end_time=$(date +%s)
echo "read copy dir total time: $((end_time - start_time)) secs"
```

The run process:

```bash
cd /tank
rm -rf gen gen_copy
sh gen.sh
cp -a gen gen_copy
sh read_gen.sh
sh read_cp.sh

# switch the read order
rm -rf gen gen_copy
sh gen.sh
cp -a gen gen_copy
sh read_cp.sh
sh read_gen.sh
```

**Results**

Simulated test results (the lower the better): on a new pool (… fragmentation), reading from the generated dir took … secs versus … secs from the copied dir.

Here are the original numbers from my production env (I can't re-create that zpool). I tested … times with dirA/dirB and fio; I use a script to generate the fio test files in parallel. Test env: read from the generated dir … MB/s, read from the copied dir … MB/s, … fragmentation. Thanks.

Here is the `zdb -dddddd` output showing the same file in the two dirs: …

Test parameters: see the `/sys/module/zfs/parameters` listing above.
| 0
|
65,036
| 19,042,235,419
|
IssuesEvent
|
2021-11-25 00:10:12
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
opened
|
Cannot dismiss "New login. as this you?" without verifying the login
|
T-Defect
|
### Steps to reproduce
I have historical unverified logins that I can't verify because I'm not logged into them anymore.
### Outcome
#### What did you expect?
To be able to acknowledge the warning and get on with life.
#### What happened instead?
* There is no option to say "I know, but I'd like to ignore it". There is only the option to say "this wasn't me" which isn't the case.
* When I say "this wasn't me" and then "skip", it doesn't skip - the warning is still there with no change.

### Your phone model
Galaxy S21
### Operating system version
Android 11
### Application version and app store
1.3.7 [40103072] (G-b4267)
### Homeserver
matrix.org
### Will you send logs?
Yes
|
1.0
|
Cannot dismiss "New login. as this you?" without verifying the login - ### Steps to reproduce
I have historical unverified logins that I can't verify because I'm not logged into them anymore.
### Outcome
#### What did you expect?
To be able to acknowledge the warning and get on with life.
#### What happened instead?
* There is no option to say "I know, but I'd like to ignore it". There is only the option to say "this wasn't me" which isn't the case.
* When I say "this wasn't me" and then "skip", it doesn't skip - the warning is still there with no change.

### Your phone model
Galaxy S21
### Operating system version
Android 11
### Application version and app store
1.3.7 [40103072] (G-b4267)
### Homeserver
matrix.org
### Will you send logs?
Yes
|
defect
|
cannot dismiss new login as this you without verifying the login steps to reproduce i have historical unverified logins that i can t verify because i m not logged into them anymore outcome what did you expect to be able to acknowledge the warning and get on with life what happened instead there is no option to say i know but i d like to ignore it there is only the option to say this wasn t me which isn t the case when i say this wasn t me and then skip it doesn t skip the warning is still there with no change your phone model galaxy operating system version android application version and app store g homeserver matrix org will you send logs yes
| 1
|
65,606
| 19,592,457,374
|
IssuesEvent
|
2022-01-05 14:24:31
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
opened
|
Notifications not clearing after sync
|
T-Defect
|
### Steps to reproduce
1. I get messages
2. Android shows me a notification
3. I read the messages on element-web
4. I get more messages
5. Android updates the notification to show me, **but those messages I already read are still shown in the notification**
### Outcome
#### What did you expect?
Read messages are cleared from notifications
#### What happened instead?
They remain.
### Your phone model
Samsung S20FE
### Operating system version
Android 11
### Application version and app store
1.3.12
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Notifications not clearing after sync - ### Steps to reproduce
1. I get messages
2. Android shows me a notification
3. I read the messages on element-web
4. I get more messages
5. Android updates the notification to show me, **but those messages I already read are still shown in the notification**
### Outcome
#### What did you expect?
Read messages are cleared from notifications
#### What happened instead?
They remain.
### Your phone model
Samsung S20FE
### Operating system version
Android 11
### Application version and app store
1.3.12
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
notifications not clearing after sync steps to reproduce i get messages android shows me a notification i read the messages on element web i get more messages android updates the notification to show me but those messages i already read are still shown in the notification outcome what did you expect read messages are cleared from notifications what happened instead they remain your phone model samsung operating system version android application version and app store homeserver no response will you send logs no
| 1
|
50,627
| 21,218,273,958
|
IssuesEvent
|
2022-04-11 09:26:43
|
Azure/azure-cli
|
https://api.github.com/repos/Azure/azure-cli
|
closed
|
Create CorrelationFilter Rule on ServiceBus Topic Subscription
|
Service Attention Service Bus customer-reported needs-author-feedback
|
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
Trying to create a correlation filter rule (Label = xxx) on a topic subscription. The same was successfully created in C# using the SubscriptionClient.AddRuleAsync:
`
_subscriptionClient.AddRuleAsync(new RuleDescription()
{
Name = "Rule1",
Filter = new CorrelationFilter { Label = "test" }
}).GetAwaiter().GetResult();
`
**Command Name**
`az servicebus topic subscription rule create`
**Errors:**
```
BadRequest - Value cannot be null.
Parameter name: sqlExpression CorrelationId: a9cf8052-c375-4235-aba4-3a153f9aaf33
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az servicebus topic subscription rule create --name {} --namespace-name {} --subscription-name {} --topic-name {} --resource-group {} --subscription {} --enable-action-preprocessing false --enable-correlation-preprocessing true --enable-sql-preprocessing false --label {}`
## Expected Behavior
Create a correlation filter rule, with a label filter.
## Environment Summary
```
Windows-10-10.0.18362-SP0
Python 3.6.6
azure-cli 2.1.0
Extensions:
application-insights 0.1.3
interactive 0.4.3
storage-preview 0.2.10
```
## Additional Context
Is this a bug or am I missing something in the az servicebus topic subscription rule create command?
Also tries specifying --filter-sql-expression but this creates a SqlFilter type rule.
<!--Please don't remove this:-->
<!--auto-generated-->
|
2.0
|
Create CorrelationFilter Rule on ServiceBus Topic Subscription -
### **This is autogenerated. Please review and update as needed.**
## Describe the bug
Trying to create a correlation filter rule (Label = xxx) on a topic subscription. The same was successfully created in C# using the SubscriptionClient.AddRuleAsync:
`
_subscriptionClient.AddRuleAsync(new RuleDescription()
{
Name = "Rule1",
Filter = new CorrelationFilter { Label = "test" }
}).GetAwaiter().GetResult();
`
**Command Name**
`az servicebus topic subscription rule create`
**Errors:**
```
BadRequest - Value cannot be null.
Parameter name: sqlExpression CorrelationId: a9cf8052-c375-4235-aba4-3a153f9aaf33
```
## To Reproduce:
Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information.
- _Put any pre-requisite steps here..._
- `az servicebus topic subscription rule create --name {} --namespace-name {} --subscription-name {} --topic-name {} --resource-group {} --subscription {} --enable-action-preprocessing false --enable-correlation-preprocessing true --enable-sql-preprocessing false --label {}`
## Expected Behavior
Create a correlation filter rule, with a label filter.
## Environment Summary
```
Windows-10-10.0.18362-SP0
Python 3.6.6
azure-cli 2.1.0
Extensions:
application-insights 0.1.3
interactive 0.4.3
storage-preview 0.2.10
```
## Additional Context
Is this a bug or am I missing something in the az servicebus topic subscription rule create command?
Also tries specifying --filter-sql-expression but this creates a SqlFilter type rule.
<!--Please don't remove this:-->
<!--auto-generated-->
|
non_defect
|
create correlationfilter rule on servicebus topic subscription this is autogenerated please review and update as needed describe the bug trying to create a correlation filter rule label xxx on a topic subscription the same was successfully created in c using the subscriptionclient addruleasync subscriptionclient addruleasync new ruledescription name filter new correlationfilter label test getawaiter getresult command name az servicebus topic subscription rule create errors badrequest value cannot be null parameter name sqlexpression correlationid to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information put any pre requisite steps here az servicebus topic subscription rule create name namespace name subscription name topic name resource group subscription enable action preprocessing false enable correlation preprocessing true enable sql preprocessing false label expected behavior create a correlation filter rule with a label filter environment summary windows python azure cli extensions application insights interactive storage preview additional context is this a bug or am i missing something in the az servicebus topic subscription rule create command also tries specifying filter sql expression but this creates a sqlfilter type rule
| 0
|
15,327
| 2,850,627,730
|
IssuesEvent
|
2015-05-31 18:49:35
|
damonkohler/android-scripting
|
https://api.github.com/repos/damonkohler/android-scripting
|
closed
|
Problem with exchanging data via events between python and webview
|
auto-migrated Priority-Medium Type-Defect
|
```
What device(s) are you experiencing the problem on?
Motorola Milestone 2 (MotoA953 - like Droid 2 global)
What firmware version are you running on the device?
Version.62.4.24.A953.O2.en.DE
What steps will reproduce the problem?
1. copy the attached files to /mnt/sdcard/sl4a/scripts on the phone
2. start canvasWebview.py
3. enter some text into the "What text should be drawn?" textbox
4. tap the "Draw" button
What is the expected output? What do you see instead?
I would expect a Javascript alert that shows the entered text and a Toast that
shows the entered text and the text in the canvas should change to the entered
text.
Instead the webview only turns white.
What version of the product are you using? On what operating system?
latest sl4a_r5x on Froyo (Android 2.2.2)
Please provide any additional information below.
The text-to-speech stuff ("Say" button), which is using very similar code,
works fine.
Others also seem to have problems with webview and event handling:
http://groups.google.com/group/android-scripting/browse_thread/thread/31a3df5ba0
ddb4cb?pli=1
http://stackoverflow.com/questions/7154669/send-events-from-python-to-javascript
-using-sl4a
```
Original issue reported on code.google.com by `Neumae...@gmail.com` on 26 Aug 2011 at 9:56
Attachments:
* [canvasWebview.py](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-559/comment-0/canvasWebview.py)
* [canvasWebview.html](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-559/comment-0/canvasWebview.html)
|
1.0
|
Problem with exchanging data via events between python and webview - ```
What device(s) are you experiencing the problem on?
Motorola Milestone 2 (MotoA953 - like Droid 2 global)
What firmware version are you running on the device?
Version.62.4.24.A953.O2.en.DE
What steps will reproduce the problem?
1. copy the attached files to /mnt/sdcard/sl4a/scripts on the phone
2. start canvasWebview.py
3. enter some text into the "What text should be drawn?" textbox
4. tap the "Draw" button
What is the expected output? What do you see instead?
I would expect a Javascript alert that shows the entered text and a Toast that
shows the entered text and the text in the canvas should change to the entered
text.
Instead the webview only turns white.
What version of the product are you using? On what operating system?
latest sl4a_r5x on Froyo (Android 2.2.2)
Please provide any additional information below.
The text-to-speech stuff ("Say" button), which is using very similar code,
works fine.
Others also seem to have problems with webview and event handling:
http://groups.google.com/group/android-scripting/browse_thread/thread/31a3df5ba0
ddb4cb?pli=1
http://stackoverflow.com/questions/7154669/send-events-from-python-to-javascript
-using-sl4a
```
Original issue reported on code.google.com by `Neumae...@gmail.com` on 26 Aug 2011 at 9:56
Attachments:
* [canvasWebview.py](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-559/comment-0/canvasWebview.py)
* [canvasWebview.html](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-559/comment-0/canvasWebview.html)
|
defect
|
problem with exchanging data via events between python and webview what device s are you experiencing the problem on motorola milestone like droid global what firmware version are you running on the device version en de what steps will reproduce the problem copy the attached files to mnt sdcard scripts on the phone start canvaswebview py enter some text into the what text should be drawn textbox tap the draw button what is the expected output what do you see instead i would expect a javascript alert that shows the entered text and a toast that shows the entered text and the text in the canvas should change to the entered text instead the webview only turns white what version of the product are you using on what operating system latest on froyo android please provide any additional information below the text to speech stuff say button which is using very similar code works fine others also seem to have problems with webview and event handling pli using original issue reported on code google com by neumae gmail com on aug at attachments
| 1
|
16,374
| 2,889,834,938
|
IssuesEvent
|
2015-06-13 20:11:34
|
damonkohler/sl4a
|
https://api.github.com/repos/damonkohler/sl4a
|
opened
|
Python interpreter doesn't work on Sony Ericsson Xperia X10
|
auto-migrated Priority-Medium Type-Defect
|
_From @GoogleCodeExporter on May 31, 2015 11:26_
```
What steps will reproduce the problem?
1. Install ASE.
2. Add Python 2.6.2 interpreter.
3. Start the interpreter. (manually from shell or by selecting from the
interpreters' list in ASE)
What is the expected output? What do you see instead?
Expected would be a working Python shell. Instead I get the following error
message:
reloc_library[1173]: 4241 cannot locate '__aeabi_dcmpun'...CANNOT LINK
EXECUTABLE
What version of the product are you using? On what operating system?
ASE r21 on Android 1.6.
Please provide any additional information below.
The handset is a Sony Ericsson Xperia X10i.
Baseband version: 1.0.14.
Kernel version: 2.6.29-rel semc-android@SEMC #2.
Build number: R1FA014.
I attached the logcat output.
Also note that I am getting the same error message when trying to start a
Perl shell or script.
```
Original issue reported on code.google.com by `NeveMa...@gmail.com` on 23 Apr 2010 at 7:17
Attachments:
* [logcat_py.txt](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-307/comment-0/logcat_py.txt)
_Copied from original issue: damonkohler/android-scripting#307_
|
1.0
|
Python interpreter doesn't work on Sony Ericsson Xperia X10 - _From @GoogleCodeExporter on May 31, 2015 11:26_
```
What steps will reproduce the problem?
1. Install ASE.
2. Add Python 2.6.2 interpreter.
3. Start the interpreter. (manually from shell or by selecting from the
interpreters' list in ASE)
What is the expected output? What do you see instead?
Expected would be a working Python shell. Instead I get the following error
message:
reloc_library[1173]: 4241 cannot locate '__aeabi_dcmpun'...CANNOT LINK
EXECUTABLE
What version of the product are you using? On what operating system?
ASE r21 on Android 1.6.
Please provide any additional information below.
The handset is a Sony Ericsson Xperia X10i.
Baseband version: 1.0.14.
Kernel version: 2.6.29-rel semc-android@SEMC #2.
Build number: R1FA014.
I attached the logcat output.
Also note that I am getting the same error message when trying to start a
Perl shell or script.
```
Original issue reported on code.google.com by `NeveMa...@gmail.com` on 23 Apr 2010 at 7:17
Attachments:
* [logcat_py.txt](https://storage.googleapis.com/google-code-attachments/android-scripting/issue-307/comment-0/logcat_py.txt)
_Copied from original issue: damonkohler/android-scripting#307_
|
defect
|
python interpreter doesn t work on sony ericsson xperia from googlecodeexporter on may what steps will reproduce the problem install ase add python interpreter start the interpreter manually from shell or by selecting from the interpreters list in ase what is the expected output what do you see instead expected would be a working python shell instead i get the following error message reloc library cannot locate aeabi dcmpun cannot link executable what version of the product are you using on what operating system ase on android please provide any additional information below the handset is a sony ericsson xperia baseband version kernel version rel semc android semc build number i attached the logcat output also note that i am getting the same error message when trying to start a perl shell or script original issue reported on code google com by nevema gmail com on apr at attachments copied from original issue damonkohler android scripting
| 1
|
475,079
| 13,686,586,202
|
IssuesEvent
|
2020-09-30 08:54:47
|
trezor/trezor-suite
|
https://api.github.com/repos/trezor/trezor-suite
|
closed
|
Modals refactor
|
High priority
|
- [x] Update generic Modal component (choose if to use header with border or not)
- [x] Update progress bar design as designed in [Zeplin](https://app.zeplin.io/project/5ee87c00f6af719a645d14c3/screen/5f58c3508fc02548a5ff28ce)
- [ ] Universal modal "Follow instruction on device" [Zeplin](https://app.zeplin.io/project/5ee87c00f6af719a645d14c3/screen/5f58c4626cb1d145b011da53) 🔜 💯
- [x] Receive: [Zeplin](https://zpl.io/anx6Y0v)
- [x] Cloud sync (Labelling): [Zeplin](https://app.zeplin.io/project/5ee87c00f6af719a645d14c3/screen/5f58c1ad970d6a400e747a05)
- [x] Passphrase: #2368
- [x] Blocked by another window
- [x] Transaction details: [Zeplin](https://app.zeplin.io/project/5ee87c00f6af719a645d14c3/screen/5f58c5c02d9ce80796cda6ae)
- [ ] Custom BE (cc @slowbackspace) 🔜 💯
- [x] Add account modal #2237
- [x] Switch to existing passphrase (Remove the illustration, since there is nothing to confirm on Device. I suggest using [this one](https://zpl.io/VQ7jkeR) instead)
<img width="414" alt="Screen Shot 2020-09-16 at 20 28 32" src="https://user-images.githubusercontent.com/29627086/93377390-3ac62d00-f85b-11ea-9955-7b22f44064d1.png">
|
1.0
|
Modals refactor - - [x] Update generic Modal component (choose if to use header with border or not)
- [x] Update progress bar design as designed in [Zeplin](https://app.zeplin.io/project/5ee87c00f6af719a645d14c3/screen/5f58c3508fc02548a5ff28ce)
- [ ] Universal modal "Follow instruction on device" [Zeplin](https://app.zeplin.io/project/5ee87c00f6af719a645d14c3/screen/5f58c4626cb1d145b011da53) 🔜 💯
- [x] Receive: [Zeplin](https://zpl.io/anx6Y0v)
- [x] Cloud sync (Labelling): [Zeplin](https://app.zeplin.io/project/5ee87c00f6af719a645d14c3/screen/5f58c1ad970d6a400e747a05)
- [x] Passphrase: #2368
- [x] Blocked by another window
- [x] Transaction details: [Zeplin](https://app.zeplin.io/project/5ee87c00f6af719a645d14c3/screen/5f58c5c02d9ce80796cda6ae)
- [ ] Custom BE (cc @slowbackspace) 🔜 💯
- [x] Add account modal #2237
- [x] Switch to existing passphrase (Remove the illustration, since there is nothing to confirm on Device. I suggest using [this one](https://zpl.io/VQ7jkeR) instead)
<img width="414" alt="Screen Shot 2020-09-16 at 20 28 32" src="https://user-images.githubusercontent.com/29627086/93377390-3ac62d00-f85b-11ea-9955-7b22f44064d1.png">
|
non_defect
|
modals refactor update generic modal component choose if to use header with border or not update progress bar design as designed in universal modal follow instruction on device 🔜 💯 receive cloud sync labelling passphrase blocked by another window transaction details custom be cc slowbackspace 🔜 💯 add account modal switch to existing passphrase remove the illustration since there is nothing to confirm on device i suggest using instead img width alt screen shot at src
| 0
|
65,723
| 19,671,427,472
|
IssuesEvent
|
2022-01-11 07:47:58
|
line/armeria
|
https://api.github.com/repos/line/armeria
|
closed
|
Implement a stream-oriented `HttpServerUpgradeHandler`
|
defect
|
Netty's `HttpServerUpgradeHandler` fully aggregates a request to handle an upgrade request.
Armeria limits the maximum allowed request length with a hardcoded value to prevent high memory pressure.
https://github.com/line/armeria/blob/fd3b9970d7b0a308aeedca1d12deff97efee1071/core/src/main/java/com/linecorp/armeria/server/HttpServerPipelineConfigurator.java#L574
JDK's `HttpClient` sends the first HTTP request with `Upgrade: h2c` header and embedding its body.
```http
POST /foo HTTP/1
Connection: Upgrade, HTTP2-Settings
Content-Length: 100000
Host: 127.0.0.1:61166
HTTP2-Settings: AAEAAE...
Upgrade: h2c
User-Agent: Java-http-client/16.0.1
a big data...
```
If the content size is greater than 16384, the request always fails.
We can fork upstream's `HttpServerUpgradeHandler` and make it support streaming in order to handle the upgrade request with a limited memory footprint.
|
1.0
|
Implement a stream-oriented `HttpServerUpgradeHandler` - Netty's `HttpServerUpgradeHandler` fully aggregates a request to handle an upgrade request.
Armeria limits the maximum allowed request length with a hardcoded value to prevent high memory pressure.
https://github.com/line/armeria/blob/fd3b9970d7b0a308aeedca1d12deff97efee1071/core/src/main/java/com/linecorp/armeria/server/HttpServerPipelineConfigurator.java#L574
JDK's `HttpClient` sends the first HTTP request with `Upgrade: h2c` header and embedding its body.
```http
POST /foo HTTP/1
Connection: Upgrade, HTTP2-Settings
Content-Length: 100000
Host: 127.0.0.1:61166
HTTP2-Settings: AAEAAE...
Upgrade: h2c
User-Agent: Java-http-client/16.0.1
a big data...
```
If the content size is greater than 16384, the request always fails.
We can fork upstream's `HttpServerUpgradeHandler` and make it support streaming in order to handle the upgrade request with a limited memory footprint.
|
defect
|
implement a stream oriented httpserverupgradehandler netty s httpserverupgradehandler fully aggregates a request to handle an upgrade request armeria limits the maximum allowed request length with a hardcoded value to prevent high memory pressure jdk s httpclient sends the first http request with upgrade header and embedding its body http post foo http connection upgrade settings content length host settings aaeaae upgrade user agent java http client a big data if the content size is greater than the request always fails we can fork upstream s httpserverupgradehandler and make it support streaming in order to handle the upgrade request with a limited memory footprint
| 1
|
267,982
| 28,565,297,430
|
IssuesEvent
|
2023-04-21 01:11:09
|
olix0r/ort
|
https://api.github.com/repos/olix0r/ort
|
opened
|
ort: [RUSTSEC-2023-0034] Resource exhaustion vulnerability in h2 may lead to Denial of Service (DoS)
|
rust security
|
If an attacker is able to flood the network with pairs of `HEADERS`/`RST_STREAM` frames, such that the `h2` application is not able to accept them faster than the bytes are received, the pending accept queue can grow in memory usage. Being able to do this consistently can result in excessive memory use, and eventually trigger Out Of Memory.
This flaw is corrected in [hyperium/h2#668](https://github.com/hyperium/h2/pull/668), which restricts remote reset stream count by default.
|
True
|
ort: [RUSTSEC-2023-0034] Resource exhaustion vulnerability in h2 may lead to Denial of Service (DoS) - If an attacker is able to flood the network with pairs of `HEADERS`/`RST_STREAM` frames, such that the `h2` application is not able to accept them faster than the bytes are received, the pending accept queue can grow in memory usage. Being able to do this consistently can result in excessive memory use, and eventually trigger Out Of Memory.
This flaw is corrected in [hyperium/h2#668](https://github.com/hyperium/h2/pull/668), which restricts remote reset stream count by default.
|
non_defect
|
ort resource exhaustion vulnerability in may lead to denial of service dos if an attacker is able to flood the network with pairs of headers rst stream frames such that the application is not able to accept them faster than the bytes are received the pending accept queue can grow in memory usage being able to do this consistently can result in excessive memory use and eventually trigger out of memory this flaw is corrected in which restricts remote reset stream count by default
| 0
|
43,777
| 11,842,117,494
|
IssuesEvent
|
2020-03-23 22:16:00
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
YAML handling of Discovery Strategy
|
Type: Defect
|
In 4.0, the following is required to set a discovery strategy
```
discovery-strategies:
discovery-strategies:
- class: neil.demo.MyDiscoveryStrategy
enabled: true
```
based on https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/resources/hazelcast-full-example.yaml#L658 and this works.
The inner section https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/resources/hazelcast-full-example.yaml#L661 should be `discovery-strategy` not `discovery-strategies`
Also, it does not seem possible to configure a DiscoveryStrategyFactory, a DiscoveryServiceFactory or a DiscoveryService from YAML
|
1.0
|
YAML handling of Discovery Strategy - In 4.0, the following is required to set a discovery strategy
```
discovery-strategies:
discovery-strategies:
- class: neil.demo.MyDiscoveryStrategy
enabled: true
```
based on https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/resources/hazelcast-full-example.yaml#L658 and this works.
The inner section https://github.com/hazelcast/hazelcast/blob/master/hazelcast/src/main/resources/hazelcast-full-example.yaml#L661 should be `discovery-strategy` not `discovery-strategies`
Also, it does not seem possible to configure a DiscoveryStrategyFactory, a DiscoveryServiceFactory or a DiscoveryService from YAML
|
defect
|
yaml handling of discovery strategy in the following is required to set a discovery strategy discovery strategies discovery strategies class neil demo mydiscoverystrategy enabled true based on and this works the inner section should be discovery strategy not discovery strategies also it does not seem possible to configure a discoverystrategyfactory a discoveryservicefactory or a discoveryservice from yaml
| 1
|
285,781
| 8,774,470,878
|
IssuesEvent
|
2018-12-18 19:55:02
|
projectcalico/libcalico-go
|
https://api.github.com/repos/projectcalico/libcalico-go
|
closed
|
Add some more evil IPAM tests
|
priority/backlog
|
The tests for IPAM are OK, but I don't think they go far enough.
Some ideas for tests to add:
- IP allocation: if I assign 4 IPs, release the 3rd, then request another, do I get the 3rd or the 5th returned? I would suggest that we should get the 5th - to avoid races with other components (e.g. the orchestrator) in the situation where you allocate/release repeatedly and quickly.
- Allocate a bunch of IPs. Then repeatedly release the oldest and allocate a new one. Check that the block of allocated IPs remains contiguous, moves through the range of the pool, then is wrapped back to the beginning.
- As above, but with 2 pools.
- Block allocation - similar to IP allocation, but for host blocks instead.
- a host has a block of addresses. Allocate them all. Release 1. Allocate 1 - check it works.
- a host has a block of addresses. Allocate them all. Release N of them, spread through the range. Request M of them (M<=N). Check it succeeds without requesting a new block. And that none of the returned IPs are already allocated.
|
1.0
|
Add some more evil IPAM tests - The tests for IPAM are OK, but I don't think they go far enough.
Some ideas for tests to add:
- IP allocation: if I assign 4 IPs, release the 3rd, then request another, do I get the 3rd or the 5th returned? I would suggest that we should get the 5th - to avoid races with other components (e.g. the orchestrator) in the situation where you allocate/release repeatedly and quickly.
- Allocate a bunch of IPs. Then repeatedly release the oldest and allocate a new one. Check that the block of allocated IPs remains contiguous, moves through the range of the pool, then is wrapped back to the beginning.
- As above, but with 2 pools.
- Block allocation - similar to IP allocation, but for host blocks instead.
- a host has a block of addresses. Allocate them all. Release 1. Allocate 1 - check it works.
- a host has a block of addresses. Allocate them all. Release N of them, spread through the range. Request M of them (M<=N). Check it succeeds without requesting a new block. And that none of the returned IPs are already allocated.
|
non_defect
|
add some more evil ipam tests the tests for ipam are ok but i don t think they go far enough some ideas for tests to add ip allocation if i assign ips release the then request another do i get the or the returned i would suggest that we should get the to avoid races with other components e g the orchestrator in the situation where you allocate release repeatedly and quickly allocate a bunch of ips then repeatedly release the oldest and allocate a new one check that the block of allocated ips remains contiguous moves through the range of the pool then is wrapped back to the beginning as above but with pools block allocation similar to ip allocation but for host blocks instead a host has a block of addresses allocate them all release allocate check it works a host has a block of addresses allocate them all release n of them spread through the range request m of them m n check it succeeds without requesting a new block and that none of the returned ips are already allocated
| 0
|
41,029
| 10,269,593,948
|
IssuesEvent
|
2019-08-23 09:25:20
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
opened
|
Oracle 12c issue with INSERT .. SELECT into tables with IDENTITY column
|
T: Defect
|
### Expected behavior and actual behavior:
Given the following JOOQ Query, it'll produce an ORA-01400 Exception
```java
Field<Integer> foreignKey ...;
InsertValuesStep2<AnyTableRecord, Integer, Double> insertInto = dslContext
.insertInto(
ANY_TABLE,
ANY_TABLE.SOME_FOREIGN_KEY,
ANY_TABLE.AMOUNT
);
insertionValues
.forEach(value -> insertInto.values(foreignKey, val(value.getAmount())));
insertInto.execute();
```
After a bit of analysis it turned out that this query results simplified in something like that:
```sql
insert into ANY_TABLE ("SOME_FOREIGN_KEY", "AMOUNT")
select * from (
select 123, 50 from dual
)
union all
select * from (
select 123, 200 from dual
);
```
Given this code it'll throw an ORA-01400 on Oracle 12c
The correct query would be
```sql
insert into ANY_TABLE ("SOME_FOREIGN_KEY", "AMOUNT")
select * from (
select 123, 50 from dual
union all
select 123, 200 from dual
);
```
Additionally it is very interesting that the first query successfully executes on Oracle 18c XE. Maybe you've got an idea why this is the case.
Thanks in advance
### Versions:
- jOOQ: 3.11.9 PRO
- Java: 8
- Database: Oracle 12c & Oracle 18c XE
|
1.0
|
Oracle 12c issue with INSERT .. SELECT into tables with IDENTITY column - ### Expected behavior and actual behavior:
Given the following JOOQ Query, it'll produce an ORA-01400 Exception
```java
Field<Integer> foreignKey ...;
InsertValuesStep2<AnyTableRecord, Integer, Double> insertInto = dslContext
.insertInto(
ANY_TABLE,
ANY_TABLE.SOME_FOREIGN_KEY,
ANY_TABLE.AMOUNT
);
insertionValues
.forEach(value -> insertInto.values(foreignKey, val(value.getAmount())));
insertInto.execute();
```
After a bit of analysis it turned out that this query results simplified in something like that:
```sql
insert into ANY_TABLE ("SOME_FOREIGN_KEY", "AMOUNT")
select * from (
select 123, 50 from dual
)
union all
select * from (
select 123, 200 from dual
);
```
Given this code it'll throw an ORA-01400 on Oracle 12c
The correct query would be
```sql
insert into ANY_TABLE ("SOME_FOREIGN_KEY", "AMOUNT")
select * from (
select 123, 50 from dual
union all
select 123, 200 from dual
);
```
Additionally it is very interesting that the first query successfully executes on Oracle 18c XE. Maybe you've got an idea why this is the case.
Thanks in advance
### Versions:
- jOOQ: 3.11.9 PRO
- Java: 8
- Database: Oracle 12c & Oracle 18c XE
|
defect
|
oracle issue with insert select into tables with identity column expected behavior and actual behavior given the following jooq query it ll produce an ora exception java field foreignkey insertinto dslcontext insertinto any table any table some foreign key any table amount insertionvalues foreach value insertinto values foreignkey val value getamount insertinto execute after a bit of analysis it turned out that this query results simplified in something like that sql insert into any table some foreign key amount select from select from dual union all select from select from dual given this code it ll throw an ora on oracle the correct query would be sql insert into any table some foreign key amount select from select from dual union all select from dual additionally it is very interesting that the first query successfully executes on oracle xe maybe you ve got an idea why this is the case thanks in advance versions jooq pro java database oracle oracle xe
| 1
|
22,715
| 3,689,855,080
|
IssuesEvent
|
2016-02-25 17:51:34
|
zerogods/phpwebsocket
|
https://api.github.com/repos/zerogods/phpwebsocket
|
closed
|
socket_select(): 5 is not valid Socket resource
|
auto-migrated Priority-Medium Type-Defect
|
```
server.php
$master = WebSocket("localhost",12345);
client.html
socket = new WebSocket("ws://localhost:12345/websocket/server.php");
php -q server.php
master socket : resource id #4
listening on : localhost port 12345
cmd have notice:
Warning: socket_select(): 5 is not valid Socket resource in
D:\www\websocket\server.php on line 13
when i visiting http://localhost/websocket/client.html
I use PHP5.2.6
my gmail is online
Please help
thx
```
Original issue reported on code.google.com by `laper...@gmail.com` on 27 Jun 2012 at 9:33
|
1.0
|
socket_select(): 5 is not valid Socket resource - ```
server.php
$master = WebSocket("localhost",12345);
client.html
socket = new WebSocket("ws://localhost:12345/websocket/server.php");
php -q server.php
master socket : resource id #4
listening on : localhost port 12345
cmd have notice:
Warning: socket_select(): 5 is not valid Socket resource in
D:\www\websocket\server.php on line 13
when i visiting http://localhost/websocket/client.html
I use PHP5.2.6
my gmail is online
Please help
thx
```
Original issue reported on code.google.com by `laper...@gmail.com` on 27 Jun 2012 at 9:33
|
defect
|
socket select is not valid socket resource server php master websocket localhost client html socket new websocket ws localhost websocket server php php q server php master socket resource id listening on localhost port cmd have notice warning socket select is not valid socket resource in d www websocket server php on line when i visiting i use my gmail is online please help thx original issue reported on code google com by laper gmail com on jun at
| 1
|
328,549
| 28,123,590,954
|
IssuesEvent
|
2023-03-31 15:49:59
|
SSathu/Magma-soen341project2023
|
https://api.github.com/repos/SSathu/Magma-soen341project2023
|
opened
|
AT search bar functionality
|
Acceptance Test
|
**User acceptance flow**
1. Sign up and log in as any user
2. Click on the arrow to check if the side bar minimizes and maximizes
3. Click on each and single of the buttons
4. Verify if it leads to another page
5.
|
1.0
|
AT search bar functionality - **User acceptance flow**
1. Sign up and log in as any user
2. Click on the arrow to check if the side bar minimizes and maximizes
3. Click on each and single of the buttons
4. Verify if it leads to another page
5.
|
non_defect
|
at search bar functionality user acceptance flow sign up and log in as any user click on the arrow to check if the side bar minimizes and maximizes click on each and single of the buttons verify if it leads to another page
| 0
|
33,624
| 7,187,951,528
|
IssuesEvent
|
2018-02-02 08:14:33
|
hazelcast/hazelcast-jet
|
https://api.github.com/repos/hazelcast/hazelcast-jet
|
closed
|
Element supports custom serialization but array is not serializable
|
core defect
|
`Tuple2` doesn't implement `java.io.Serializable`, but has serialization hook. The same is true for `java.util.Map.Entry`, but `Entry[]` is serializable.
IList<Tuple2[]> tmp = inst[0].getList("test");
tmp.add(new Tuple2[]{tuple2(1, 2)});
Eception:
Exception in thread "main" com.hazelcast.nio.serialization.HazelcastSerializationException: Failed to serialize 'com.hazelcast.jet.datamodel.Tuple2'
at com.hazelcast.internal.serialization.impl.SerializationUtil.handleSerializeException(SerializationUtil.java:75)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toBytes(AbstractSerializationService.java:155)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toData(AbstractSerializationService.java:122)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toData(AbstractSerializationService.java:110)
at com.hazelcast.spi.impl.NodeEngineImpl.toData(NodeEngineImpl.java:334)
at com.hazelcast.collection.impl.collection.AbstractCollectionProxyImpl.add(AbstractCollectionProxyImpl.java:101)
at com.hazelcast.jet.stream.impl.ListDecorator.add(ListDecorator.java:106)
at BatchMapping.main(BatchMapping.java:72)
Caused by: com.hazelcast.nio.serialization.HazelcastSerializationException: Failed to serialize '[Lcom.hazelcast.jet.datamodel.Tuple2;'
at com.hazelcast.internal.serialization.impl.SerializationUtil.handleSerializeException(SerializationUtil.java:75)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.writeObject(AbstractSerializationService.java:252)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataOutput.writeObject(ByteArrayObjectDataOutput.java:370)
at com.hazelcast.jet.datamodel.DataModelSerializerHooks$Tuple2Hook$1.write(DataModelSerializerHooks.java:139)
at com.hazelcast.jet.datamodel.DataModelSerializerHooks$Tuple2Hook$1.write(DataModelSerializerHooks.java:135)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.write(StreamSerializerAdapter.java:43)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toBytes(AbstractSerializationService.java:152)
... 6 more
Caused by: java.io.NotSerializableException: com.hazelcast.jet.datamodel.Tuple2
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at com.hazelcast.internal.serialization.impl.JavaDefaultSerializers$JavaSerializer.write(JavaDefaultSerializers.java:242)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.write(StreamSerializerAdapter.java:43)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.writeObject(AbstractSerializationService.java:250)
... 11 more
|
1.0
|
Element supports custom serialization but array is not serializable - `Tuple2` doesn't implement `java.io.Serializable`, but has serialization hook. The same is true for `java.util.Map.Entry`, but `Entry[]` is serializable.
IList<Tuple2[]> tmp = inst[0].getList("test");
tmp.add(new Tuple2[]{tuple2(1, 2)});
Eception:
Exception in thread "main" com.hazelcast.nio.serialization.HazelcastSerializationException: Failed to serialize 'com.hazelcast.jet.datamodel.Tuple2'
at com.hazelcast.internal.serialization.impl.SerializationUtil.handleSerializeException(SerializationUtil.java:75)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toBytes(AbstractSerializationService.java:155)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toData(AbstractSerializationService.java:122)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toData(AbstractSerializationService.java:110)
at com.hazelcast.spi.impl.NodeEngineImpl.toData(NodeEngineImpl.java:334)
at com.hazelcast.collection.impl.collection.AbstractCollectionProxyImpl.add(AbstractCollectionProxyImpl.java:101)
at com.hazelcast.jet.stream.impl.ListDecorator.add(ListDecorator.java:106)
at BatchMapping.main(BatchMapping.java:72)
Caused by: com.hazelcast.nio.serialization.HazelcastSerializationException: Failed to serialize '[Lcom.hazelcast.jet.datamodel.Tuple2;'
at com.hazelcast.internal.serialization.impl.SerializationUtil.handleSerializeException(SerializationUtil.java:75)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.writeObject(AbstractSerializationService.java:252)
at com.hazelcast.internal.serialization.impl.ByteArrayObjectDataOutput.writeObject(ByteArrayObjectDataOutput.java:370)
at com.hazelcast.jet.datamodel.DataModelSerializerHooks$Tuple2Hook$1.write(DataModelSerializerHooks.java:139)
at com.hazelcast.jet.datamodel.DataModelSerializerHooks$Tuple2Hook$1.write(DataModelSerializerHooks.java:135)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.write(StreamSerializerAdapter.java:43)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.toBytes(AbstractSerializationService.java:152)
... 6 more
Caused by: java.io.NotSerializableException: com.hazelcast.jet.datamodel.Tuple2
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
at java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1378)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
at com.hazelcast.internal.serialization.impl.JavaDefaultSerializers$JavaSerializer.write(JavaDefaultSerializers.java:242)
at com.hazelcast.internal.serialization.impl.StreamSerializerAdapter.write(StreamSerializerAdapter.java:43)
at com.hazelcast.internal.serialization.impl.AbstractSerializationService.writeObject(AbstractSerializationService.java:250)
... 11 more
|
defect
|
element supports custom serialization but array is not serializable doesn t implement java io serializable but has serialization hook the same is true for java util map entry but entry is serializable ilist tmp inst getlist test tmp add new eception exception in thread main com hazelcast nio serialization hazelcastserializationexception failed to serialize com hazelcast jet datamodel at com hazelcast internal serialization impl serializationutil handleserializeexception serializationutil java at com hazelcast internal serialization impl abstractserializationservice tobytes abstractserializationservice java at com hazelcast internal serialization impl abstractserializationservice todata abstractserializationservice java at com hazelcast internal serialization impl abstractserializationservice todata abstractserializationservice java at com hazelcast spi impl nodeengineimpl todata nodeengineimpl java at com hazelcast collection impl collection abstractcollectionproxyimpl add abstractcollectionproxyimpl java at com hazelcast jet stream impl listdecorator add listdecorator java at batchmapping main batchmapping java caused by com hazelcast nio serialization hazelcastserializationexception failed to serialize lcom hazelcast jet datamodel at com hazelcast internal serialization impl serializationutil handleserializeexception serializationutil java at com hazelcast internal serialization impl abstractserializationservice writeobject abstractserializationservice java at com hazelcast internal serialization impl bytearrayobjectdataoutput writeobject bytearrayobjectdataoutput java at com hazelcast jet datamodel datamodelserializerhooks write datamodelserializerhooks java at com hazelcast jet datamodel datamodelserializerhooks write datamodelserializerhooks java at com hazelcast internal serialization impl streamserializeradapter write streamserializeradapter java at com hazelcast internal serialization impl abstractserializationservice tobytes abstractserializationservice java more caused by java io notserializableexception com hazelcast jet datamodel at java io objectoutputstream objectoutputstream java at java io objectoutputstream writearray objectoutputstream java at java io objectoutputstream objectoutputstream java at java io objectoutputstream writeobject objectoutputstream java at com hazelcast internal serialization impl javadefaultserializers javaserializer write javadefaultserializers java at com hazelcast internal serialization impl streamserializeradapter write streamserializeradapter java at com hazelcast internal serialization impl abstractserializationservice writeobject abstractserializationservice java more
| 1
|
82,101
| 32,000,443,521
|
IssuesEvent
|
2023-09-21 11:57:28
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Space pills inconsistent/broken layout
|
T-Defect X-Regression S-Tolerable A-Avatar O-Occasional
|
### Steps to reproduce
1. send a space pill
### Outcome
#### What did you expect?
- square space icon looks proper in rounded pill
- icon is vertically centered
- composer behaves the same as timeline
#### What happened instead?
- square space icon looks strange as the pill around it is completely round, it looks like it doesn't properly fit in there
- slight offset towards the bottom of the pill
- composer uses round avatar for space in pill when it should be square

### Operating system
arch
### Application version
Element Nightly version: 2023082401 Olm version: 3.2.14
### How did you install the app?
aur
### Homeserver
_No response_
### Will you send logs?
No
|
1.0
|
Space pills inconsistent/broken layout - ### Steps to reproduce
1. send a space pill
### Outcome
#### What did you expect?
- square space icon looks proper in rounded pill
- icon is vertically centered
- composer behaves the same as timeline
#### What happened instead?
- square space icon looks strange as the pill around it is completely round, it looks like it doesn't properly fit in there
- slight offset towards the bottom of the pill
- composer uses round avatar for space in pill when it should be square

### Operating system
arch
### Application version
Element Nightly version: 2023082401 Olm version: 3.2.14
### How did you install the app?
aur
### Homeserver
_No response_
### Will you send logs?
No
|
defect
|
space pills inconsistent broken layout steps to reproduce send a space pill outcome what did you expect square space icon looks proper in rounded pill icon is vertically centered composer behaves the same as timeline what happened instead square space icon looks strange as the pill around it is completely round it looks like it doesn t properly fit in there slight offset towards the bottom of the pill composer uses round avatar for space in pill when it should be square operating system arch application version element nightly version olm version how did you install the app aur homeserver no response will you send logs no
| 1
|
64,169
| 18,266,520,358
|
IssuesEvent
|
2021-10-04 09:05:36
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
opened
|
com.hazelcast.sql.HazelcastSqlException: property 'com.hazelcast.org.apache.calcite.sql.validate.SqlValidator$Config#TypeCoercionRules' is required and has no default value
|
Type: Defect Team: SQL
|
http://jenkins.hazelcast.com/view/stable/job/trial-1/67/console
fail HzClient3HZ rangeHDIdx hzcmd.map.sql.service.PersonIdRange threadId=1 com.hazelcast.sql.HazelcastSqlException: property 'com.hazelcast.org.apache.calcite.sql.validate.SqlValidator$Config#TypeCoercionRules' is required and has no default value
/disk1/workspace/trial-1/2021_10_04-07_11_48
```
com.hazelcast.sql.HazelcastSqlException: property 'com.hazelcast.org.apache.calcite.sql.validate.SqlValidator$Config#TypeCoercionRules' is required and has no default value
at com.hazelcast.sql.impl.QueryUtils.toPublicException(QueryUtils.java:74)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:211)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:162)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:158)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:154)
at hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:16)
at remote.bench.marker.RollMarker.flatOut(RollMarker.java:72)
at remote.bench.marker.RollMarker.bench(RollMarker.java:59)
at remote.bench.BenchThread.call(BenchThread.java:41)
at remote.bench.BenchThread.call(BenchThread.java:14)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException: property 'com.hazelcast.org.apache.calcite.sql.validate.SqlValidator$Config#TypeCoercionRules' is required and has no default value
at com.hazelcast.org.apache.calcite.util.ImmutableBeans.lambda$makeDef$0(ImmutableBeans.java:168)
at com.hazelcast.org.apache.calcite.util.ImmutableBeans$BeanImpl.invoke(ImmutableBeans.java:480)
at com.sun.proxy.$Proxy17.typeCoercionRules(Unknown Source)
at com.hazelcast.org.apache.calcite.sql.validate.SqlValidatorImpl.<init>(SqlValidatorImpl.java:311)
at com.hazelcast.org.apache.calcite.sql.validate.SqlValidatorImplBridge.<init>(SqlValidatorImplBridge.java:35)
at com.hazelcast.jet.sql.impl.validate.HazelcastSqlValidator.<init>(HazelcastSqlValidator.java:114)
at com.hazelcast.jet.sql.impl.OptimizerContext.create(OptimizerContext.java:115)
at com.hazelcast.jet.sql.impl.OptimizerContext.create(OptimizerContext.java:102)
at com.hazelcast.jet.sql.impl.CalciteSqlOptimizer.prepare(CalciteSqlOptimizer.java:210)
at com.hazelcast.sql.impl.SqlServiceImpl.prepare(SqlServiceImpl.java:261)
at com.hazelcast.sql.impl.SqlServiceImpl.query0(SqlServiceImpl.java:241)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:198)
... 14 more
```
|
1.0
|
com.hazelcast.sql.HazelcastSqlException: property 'com.hazelcast.org.apache.calcite.sql.validate.SqlValidator$Config#TypeCoercionRules' is required and has no default value -
http://jenkins.hazelcast.com/view/stable/job/trial-1/67/console
fail HzClient3HZ rangeHDIdx hzcmd.map.sql.service.PersonIdRange threadId=1 com.hazelcast.sql.HazelcastSqlException: property 'com.hazelcast.org.apache.calcite.sql.validate.SqlValidator$Config#TypeCoercionRules' is required and has no default value
/disk1/workspace/trial-1/2021_10_04-07_11_48
```
com.hazelcast.sql.HazelcastSqlException: property 'com.hazelcast.org.apache.calcite.sql.validate.SqlValidator$Config#TypeCoercionRules' is required and has no default value
at com.hazelcast.sql.impl.QueryUtils.toPublicException(QueryUtils.java:74)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:211)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:162)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:158)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:154)
at hzcmd.map.sql.service.PersonIdRange.timeStep(PersonIdRange.java:16)
at remote.bench.marker.RollMarker.flatOut(RollMarker.java:72)
at remote.bench.marker.RollMarker.bench(RollMarker.java:59)
at remote.bench.BenchThread.call(BenchThread.java:41)
at remote.bench.BenchThread.call(BenchThread.java:14)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.IllegalArgumentException: property 'com.hazelcast.org.apache.calcite.sql.validate.SqlValidator$Config#TypeCoercionRules' is required and has no default value
at com.hazelcast.org.apache.calcite.util.ImmutableBeans.lambda$makeDef$0(ImmutableBeans.java:168)
at com.hazelcast.org.apache.calcite.util.ImmutableBeans$BeanImpl.invoke(ImmutableBeans.java:480)
at com.sun.proxy.$Proxy17.typeCoercionRules(Unknown Source)
at com.hazelcast.org.apache.calcite.sql.validate.SqlValidatorImpl.<init>(SqlValidatorImpl.java:311)
at com.hazelcast.org.apache.calcite.sql.validate.SqlValidatorImplBridge.<init>(SqlValidatorImplBridge.java:35)
at com.hazelcast.jet.sql.impl.validate.HazelcastSqlValidator.<init>(HazelcastSqlValidator.java:114)
at com.hazelcast.jet.sql.impl.OptimizerContext.create(OptimizerContext.java:115)
at com.hazelcast.jet.sql.impl.OptimizerContext.create(OptimizerContext.java:102)
at com.hazelcast.jet.sql.impl.CalciteSqlOptimizer.prepare(CalciteSqlOptimizer.java:210)
at com.hazelcast.sql.impl.SqlServiceImpl.prepare(SqlServiceImpl.java:261)
at com.hazelcast.sql.impl.SqlServiceImpl.query0(SqlServiceImpl.java:241)
at com.hazelcast.sql.impl.SqlServiceImpl.execute(SqlServiceImpl.java:198)
... 14 more
```
|
defect
|
com hazelcast sql hazelcastsqlexception property com hazelcast org apache calcite sql validate sqlvalidator config typecoercionrules is required and has no default value fail rangehdidx hzcmd map sql service personidrange threadid com hazelcast sql hazelcastsqlexception property com hazelcast org apache calcite sql validate sqlvalidator config typecoercionrules is required and has no default value workspace trial com hazelcast sql hazelcastsqlexception property com hazelcast org apache calcite sql validate sqlvalidator config typecoercionrules is required and has no default value at com hazelcast sql impl queryutils topublicexception queryutils java at com hazelcast sql impl sqlserviceimpl execute sqlserviceimpl java at com hazelcast sql impl sqlserviceimpl execute sqlserviceimpl java at com hazelcast sql impl sqlserviceimpl execute sqlserviceimpl java at com hazelcast sql impl sqlserviceimpl execute sqlserviceimpl java at hzcmd map sql service personidrange timestep personidrange java at remote bench marker rollmarker flatout rollmarker java at remote bench marker rollmarker bench rollmarker java at remote bench benchthread call benchthread java at remote bench benchthread call benchthread java at java base java util concurrent futuretask run futuretask java at java base java util concurrent executors runnableadapter call executors java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by java lang illegalargumentexception property com hazelcast org apache calcite sql validate sqlvalidator config typecoercionrules is required and has no default value at com hazelcast org apache calcite util immutablebeans lambda makedef immutablebeans java at com hazelcast org apache calcite util immutablebeans beanimpl invoke immutablebeans java at com sun proxy typecoercionrules unknown source at com hazelcast org apache calcite sql validate sqlvalidatorimpl sqlvalidatorimpl java at com hazelcast org apache calcite sql validate sqlvalidatorimplbridge sqlvalidatorimplbridge java at com hazelcast jet sql impl validate hazelcastsqlvalidator hazelcastsqlvalidator java at com hazelcast jet sql impl optimizercontext create optimizercontext java at com hazelcast jet sql impl optimizercontext create optimizercontext java at com hazelcast jet sql impl calcitesqloptimizer prepare calcitesqloptimizer java at com hazelcast sql impl sqlserviceimpl prepare sqlserviceimpl java at com hazelcast sql impl sqlserviceimpl sqlserviceimpl java at com hazelcast sql impl sqlserviceimpl execute sqlserviceimpl java more
| 1
|
284,413
| 24,596,927,829
|
IssuesEvent
|
2022-10-14 09:06:44
|
status-im/status-mobile
|
https://api.github.com/repos/status-im/status-mobile
|
closed
|
Timer in delete for me test fails the integration test unintentionally
|
bug E: integration tests
|
Related https://github.com/status-im/status-mobile/pull/13940#issuecomment-1277269904
There's a time limit, after which user won't be able to undo deleted-for-me messages. The timer in test fails the integration test unintentionally.
Should find better ways to test the time limit.
|
1.0
|
Timer in delete for me test fails the integration test unintentionally - Related https://github.com/status-im/status-mobile/pull/13940#issuecomment-1277269904
There's a time limit, after which user won't be able to undo deleted-for-me messages. The timer in test fails the integration test unintentionally.
Should find better ways to test the time limit.
|
non_defect
|
timer in delete for me test fails the integration test unintentionally related there s a time limit after which user won t be able to undo deleted for me messages the timer in test fails the integration test unintentionally should find better ways to test the time limit
| 0
|
128,320
| 18,046,745,492
|
IssuesEvent
|
2021-09-19 02:33:41
|
girlscript/winter-of-contributing
|
https://api.github.com/repos/girlscript/winter-of-contributing
|
closed
|
Cybersecurity: 1.6 Importance of Physical Security?
|
GWOC21 Cybersecurity
|
## Description
Physical security is very important to any organization/company, including fences, laser alarms, etc.
Discuss the importance of physical security in the Cyberworld.
## Note:
- Please avoid copy/paste, `BE YOURSELF`
- Changes should be made inside the `Cyber_Security/` directory & Cyber_Security branch.
- Task will be assigned on *first come first serve*
- Check it out [Contribution Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md)
|
True
|
Cybersecurity: 1.6 Importance of Physical Security? - ## Description
Physical security is very important to any organization/company, including fences, laser alarms, etc.
Discuss the importance of physical security in the Cyberworld.
## Note:
- Please avoid copy/paste, `BE YOURSELF`
- Changes should be made inside the `Cyber_Security/` directory & Cyber_Security branch.
- Task will be assigned on *first come first serve*
- Check it out [Contribution Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md)
|
non_defect
|
cybersecurity importance of physical security description physical security is very important to any organization company including fences laser alarms etc discuss the importance of physical security in the cyberworld note please avoid copy paste be yourself changes should be made inside the cyber security directory cyber security branch task will be assigned on first come first serve check it out
| 0
|
26,655
| 4,776,673,698
|
IssuesEvent
|
2016-10-27 14:24:47
|
wheeler-microfluidics/microdrop
|
https://api.github.com/repos/wheeler-microfluidics/microdrop
|
opened
|
VideoRecorderPlugin crashes (Trac #75)
|
defect Incomplete Migration microdrop Migrated from Trac
|
Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/75
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:01",
"description": "The VideoRecorderPlugin is crashing with the following message:\n\nERROR: wheelerlab.video_recorder plugin crashed processing on_new_frame signal.\nReason: 'cv2.cv.iplimage' object has no attribute 'step'\n\nTested on:\nVideoRecorderPlugin v0.1-4-g6691f1c\nMicrodrop v0.1-406-g7ff4d59\nWindows 7",
"reporter": "ryan",
"cc": "",
"resolution": "fixed",
"_ts": "1397763541728826",
"component": "microdrop",
"summary": "VideoRecorderPlugin crashes",
"priority": "major",
"keywords": "",
"version": "0.1",
"time": "2012-03-15T05:05:38",
"milestone": "Microdrop 1.0",
"owner": "cfobel",
"type": "defect"
}
```
|
1.0
|
VideoRecorderPlugin crashes (Trac #75) - Migrated from http://microfluidics.utoronto.ca/microdrop/ticket/75
```json
{
"status": "closed",
"changetime": "2014-04-17T19:39:01",
"description": "The VideoRecorderPlugin is crashing with the following message:\n\nERROR: wheelerlab.video_recorder plugin crashed processing on_new_frame signal.\nReason: 'cv2.cv.iplimage' object has no attribute 'step'\n\nTested on:\nVideoRecorderPlugin v0.1-4-g6691f1c\nMicrodrop v0.1-406-g7ff4d59\nWindows 7",
"reporter": "ryan",
"cc": "",
"resolution": "fixed",
"_ts": "1397763541728826",
"component": "microdrop",
"summary": "VideoRecorderPlugin crashes",
"priority": "major",
"keywords": "",
"version": "0.1",
"time": "2012-03-15T05:05:38",
"milestone": "Microdrop 1.0",
"owner": "cfobel",
"type": "defect"
}
```
|
defect
|
videorecorderplugin crashes trac migrated from json status closed changetime description the videorecorderplugin is crashing with the following message n nerror wheelerlab video recorder plugin crashed processing on new frame signal nreason cv iplimage object has no attribute step n ntested on nvideorecorderplugin nmicrodrop nwindows reporter ryan cc resolution fixed ts component microdrop summary videorecorderplugin crashes priority major keywords version time milestone microdrop owner cfobel type defect
| 1
|
95,213
| 3,940,589,483
|
IssuesEvent
|
2016-04-27 01:53:11
|
InWithForward/sharetribe-old
|
https://api.github.com/repos/InWithForward/sharetribe-old
|
closed
|
data missing in sharetribe
|
bug high priority
|
Our host curators are reporting lots of data loss issues that are difficult to nail down. I've asked for screenshots and more details but don't have anything else
- Profile data sometimes disappears when emails or other account is updated
- Seems that 1st learning goal data went missing for all
- Various times that users have added profile or experience data because we have noticed that it does not always save. This could be because the user gets temporarily disconnected on wifi or a local computer issue.
*suggestions*
- can sharetribe save data each time a field is updated instead of based on the save button? If not, can there be several save buttons at various points on a longer form?
- can I see backups or older versions of data so we can see if in fact there was completed fields on the past dates that the users are reporting there was data?
|
1.0
|
data missing in sharetribe - Our host curators are reporting lots of data loss issues that are difficult to nail down. I've asked for screenshots and more details but don't have anything else
- Profile data sometimes disappears when emails or other account is updated
- Seems that 1st learning goal data went missing for all
- Various times that users have added profile or experience data because we have noticed that it does not always save. This could be because the user gets temporarily disconnected on wifi or a local computer issue.
*suggestions*
- can sharetribe save data each time a field is updated instead of based on the save button? If not, can there be several save buttons at various points on a longer form?
- can I see backups or older versions of data so we can see if in fact there was completed fields on the past dates that the users are reporting there was data?
|
non_defect
|
data missing in sharetribe our host curators are reporting lots of data loss issues that are difficult to nail down i ve asked for screenshots and more details but don t have anything else profile data sometimes disappears when emails or other account is updated seems that learning goal data went missing for all various times that users have added profile or experience data because we have noticed that it does not always save this could be because the user gets temporarily disconnected on wifi or a local computer issue suggestions can sharetribe save data each time a field is updated instead of based on the save button if not can there be several save buttons at various points on a longer form can i see backups or older versions of data so we can see if in fact there was completed fields on the past dates that the users are reporting there was data
| 0
|
603,274
| 18,536,560,096
|
IssuesEvent
|
2021-10-21 12:11:02
|
webkom/lego
|
https://api.github.com/repos/webkom/lego
|
closed
|
No notifications on abakus.no
|
priority:high
|
Hei,
Jeg får ikke varsel om kunngjøringer på arrangementer jeg har meldt meg på og heller ikke varsel om spørreundersøkelser på abakus.no, selvom jeg har krysset av på at jeg vil ha notifikasjoner:(
|
1.0
|
No notifications on abakus.no - Hei,
Jeg får ikke varsel om kunngjøringer på arrangementer jeg har meldt meg på og heller ikke varsel om spørreundersøkelser på abakus.no, selvom jeg har krysset av på at jeg vil ha notifikasjoner:(
|
non_defect
|
no notifications on abakus no hei jeg får ikke varsel om kunngjøringer på arrangementer jeg har meldt meg på og heller ikke varsel om spørreundersøkelser på abakus no selvom jeg har krysset av på at jeg vil ha notifikasjoner
| 0
|
727,391
| 25,033,829,842
|
IssuesEvent
|
2022-11-04 14:30:00
|
o108minmin/halberd
|
https://api.github.com/repos/o108minmin/halberd
|
opened
|
GUI側: モジュール分割
|
priority:low refactoring
|
**課題 or やりたいこと**
- やりたいこと
- `halberd_gui/src/App.tsx` にすべてが書かれているので分割する
**やること**
- [ ] やりたいこと
|
1.0
|
GUI側: モジュール分割 - **課題 or やりたいこと**
- やりたいこと
- `halberd_gui/src/App.tsx` にすべてが書かれているので分割する
**やること**
- [ ] やりたいこと
|
non_defect
|
gui側 モジュール分割 課題 or やりたいこと やりたいこと halberd gui src app tsx にすべてが書かれているので分割する やること やりたいこと
| 0
|
7,228
| 10,361,587,727
|
IssuesEvent
|
2019-09-06 10:23:38
|
prisma/lift
|
https://api.github.com/repos/prisma/lift
|
closed
|
`prisma lift save` should not create a database - `prisma lift up` should
|
bug/2-confirmed kind/bug process/next-milestone
|
Should `lift save` create the database or be stateless and let `lift up` do the mutative work?
Currently, for SQLite, running `lift up` without a DB throws.
|
1.0
|
`prisma lift save` should not create a database - `prisma lift up` should - Should `lift save` create the database or be stateless and let `lift up` do the mutative work?
Currently, for SQLite, running `lift up` without a DB throws.
|
non_defect
|
prisma lift save should not create a database prisma lift up should should lift save create the database or be stateless and let lift up do the mutative work currently for sqlite running lift up without a db throws
| 0
|
92,981
| 10,764,425,207
|
IssuesEvent
|
2019-11-01 08:14:22
|
dvrylc/ped
|
https://api.github.com/repos/dvrylc/ped
|
opened
|
Quick Start command does not work
|
severity.High type.DocumentationBug
|
In the Quick Start guide point 5, the command `go /members` fails to execute.
|
1.0
|
Quick Start command does not work - In the Quick Start guide point 5, the command `go /members` fails to execute.
|
non_defect
|
quick start command does not work in the quick start guide point the command go members fails to execute
| 0
|
87,296
| 3,744,682,444
|
IssuesEvent
|
2016-03-10 03:22:12
|
cs2103jan2016-t15-2j/main
|
https://api.github.com/repos/cs2103jan2016-t15-2j/main
|
closed
|
Storage implements a method to replace the contents of the textfile to logic (redo)
|
priority.high type.task
|
edit(List <Task>) #33
|
1.0
|
Storage implements a method to replace the contents of the textfile to logic (redo) - edit(List <Task>) #33
|
non_defect
|
storage implements a method to replace the contents of the textfile to logic redo edit list
| 0
|
46,058
| 9,875,389,586
|
IssuesEvent
|
2019-06-23 11:21:28
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
VSCode Improve content suggestions based on context.
|
Area/Tooling Component/LanguageServer Component/VScodePlugin Type/Improvement
|
**Description:**
<!-- Give a brief description of the issue -->
**Steps to reproduce:**
- See the attached image.

- Idealy it should suggest "new"
- Same problem can be applicable for other types such as record, array, error etc.
**Affected Versions:**
0.990.3
|
1.0
|
VSCode Improve content suggestions based on context. - **Description:**
<!-- Give a brief description of the issue -->
**Steps to reproduce:**
- See the attached image.

- Idealy it should suggest "new"
- Same problem can be applicable for other types such as record, array, error etc.
**Affected Versions:**
0.990.3
|
non_defect
|
vscode improve content suggestions based on context description steps to reproduce see the attached image idealy it should suggest new same problem can be applicable for other types such as record array error etc affected versions
| 0
|
554,272
| 16,416,221,833
|
IssuesEvent
|
2021-05-19 07:07:11
|
prathameshmm02/RetroMusicPlayer
|
https://api.github.com/repos/prathameshmm02/RetroMusicPlayer
|
opened
|
Gapless playback
|
Priority: High bug
|
even if I switched on the gapless playback feature a gap remains.
This feature worked in the previous version with the same phone and song
|
1.0
|
Gapless playback - even if I switched on the gapless playback feature a gap remains.
This feature worked in the previous version with the same phone and song
|
non_defect
|
gapless playback even if i switched on the gapless playback feature a gap remains this feature worked in the previous version with the same phone and song
| 0
|
62,464
| 7,603,458,159
|
IssuesEvent
|
2018-04-29 14:46:22
|
dev-u/TopDownENA
|
https://api.github.com/repos/dev-u/TopDownENA
|
reopened
|
Criar Personagem da Cena 2: X-Men - Ciclope
|
DESIGN PROGRAMAÇÃO animação player
|
**O Ciclope terá 100 de vida**
**- Arte**
- [x] Criar o personagem Ciclope dos X-Men
- Animação de andar.
- Ataque básico:
- [ ] Animação de acionar o laser dos olhos.
- [ ] Arte do laser(será contínuo, não em forma de projétil)
- O personagem pode andar enquanto está soltando o laser, mas ficará um pouco mais lento.
- O dano do laser será de 15 por segundo
- Powerup (Spawnado em uma coordenada aleatória da segunda fase):
- O ciclope tirara o óculos e o ataque com powerup será um laser em forma de cone
- [ ] Animação do laser em forma de cone
- [ ] Ícone do powerup para ser colocado no HUD
**- Som**
- [ ] Som do laser saindo dos olhos do Ciclope
|
1.0
|
Criar Personagem da Cena 2: X-Men - Ciclope - **O Ciclope terá 100 de vida**
**- Arte**
- [x] Criar o personagem Ciclope dos X-Men
- Animação de andar.
- Ataque básico:
- [ ] Animação de acionar o laser dos olhos.
- [ ] Arte do laser(será contínuo, não em forma de projétil)
- O personagem pode andar enquanto está soltando o laser, mas ficará um pouco mais lento.
- O dano do laser será de 15 por segundo
- Powerup (Spawnado em uma coordenada aleatória da segunda fase):
- O ciclope tirara o óculos e o ataque com powerup será um laser em forma de cone
- [ ] Animação do laser em forma de cone
- [ ] Ícone do powerup para ser colocado no HUD
**- Som**
- [ ] Som do laser saindo dos olhos do Ciclope
|
non_defect
|
criar personagem da cena x men ciclope o ciclope terá de vida arte criar o personagem ciclope dos x men animação de andar ataque básico animação de acionar o laser dos olhos arte do laser será contínuo não em forma de projétil o personagem pode andar enquanto está soltando o laser mas ficará um pouco mais lento o dano do laser será de por segundo powerup spawnado em uma coordenada aleatória da segunda fase o ciclope tirara o óculos e o ataque com powerup será um laser em forma de cone animação do laser em forma de cone ícone do powerup para ser colocado no hud som som do laser saindo dos olhos do ciclope
| 0
|
30,791
| 2,725,499,170
|
IssuesEvent
|
2015-04-15 00:50:08
|
GoogleCloudPlatform/kubernetes
|
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
|
closed
|
External load balancer creation should be a post-creation "initializer", not inline
|
api/cloudprovider priority/P1 team/cluster
|
As discussed https://github.com/GoogleCloudPlatform/kubernetes/issues/5156 and in other places, the load balancer creation should not be synchronous with resource creation (goes against the "dumb resources" model) but instead be applied by a controller both before and after creation.
|
1.0
|
External load balancer creation should be a post-creation "initializer", not inline - As discussed https://github.com/GoogleCloudPlatform/kubernetes/issues/5156 and in other places, the load balancer creation should not be synchronous with resource creation (goes against the "dumb resources" model) but instead be applied by a controller both before and after creation.
|
non_defect
|
external load balancer creation should be a post creation initializer not inline as discussed and in other places the load balancer creation should not be synchronous with resource creation goes against the dumb resources model but instead be applied by a controller both before and after creation
| 0
|
61,243
| 17,023,646,087
|
IssuesEvent
|
2021-07-03 03:05:13
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
osm2pgsql compile error against geos trunk
|
Component: osm2pgsql Priority: major Resolution: fixed Type: defect
|
**[Submitted to the original trac issue database at 6.10pm, Wednesday, 20th October 2010]**
Building osm2pgsql trunk against geos trunk leads to:
```
g++ -c -o build_geometry.o build_geometry.cpp
build_geometry.cpp: In function char* get_wkt_simple(osmNode*, int, int):
build_geometry.cpp:67: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp: In function size_t get_wkt_split(osmNode*, int, int, double):
build_geometry.cpp:109: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp:139: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp:151: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp: In function size_t build_geometry(int, osmNode**, int*, int, int, double):
build_geometry.cpp:302: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp:351: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp:363: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
make: *** [build_geometry.o] Error 1
```
I appears that size_t must now be explicitly passed, and this patch seems to fix the problem:
```diff
Index: build_geometry.cpp
===================================================================
--- build_geometry.cpp (revision 23726)
+++ build_geometry.cpp (working copy)
@@ -64,7 +64,9 @@
char *get_wkt_simple(osmNode *nodes, int count, int polygon) {
GeometryFactory gf;
- std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(size, dimension));
try
{
@@ -106,7 +108,9 @@
size_t get_wkt_split(osmNode *nodes, int count, int polygon, double split_at) {
GeometryFactory gf;
- std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(size, dimension));
double area;
WKTWriter wktw;
size_t wkt_size = 0;
@@ -136,7 +140,9 @@
double distance = 0;
std::auto_ptr<CoordinateSequence> segment;
- segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(size, dimension));
segment->add(coords->getAt(0));
for(unsigned i=1; i<coords->getSize(); i++) {
segment->add(coords->getAt(i));
@@ -148,7 +154,8 @@
areas.push_back(0);
wkt_size++;
distance=0;
- segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(0, 2));
+
+ segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(size, dimension));
segment->add(coords->getAt(i));
}
}
@@ -299,7 +306,9 @@
try
{
for (int c=0; xnodes[c]; c++) {
- std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(size, dimension));
for (int i = 0; i < xcount[c]; i++) {
struct osmNode *nodes = xnodes[c];
Coordinate c;
@@ -348,7 +357,9 @@
//std::cerr << "polygon(" << osm_id << ") is no good: points(" << pline->getNumPoints() << "), closed(" << pline->isClosed() << "). " << writer.write(pline.get()) << std::endl;
double distance = 0;
std::auto_ptr<CoordinateSequence> segment;
- segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(size, dimension));
segment->add(pline->getCoordinateN(0));
for(unsigned i=1; i<pline->getNumPoints(); i++) {
segment->add(pline->getCoordinateN(i));
@@ -360,7 +371,7 @@
areas.push_back(0);
wkt_size++;
distance=0;
- segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(0, 2));
+ segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(size, dimension));
segment->add(pline->getCoordinateN(i));
}
}
```
|
1.0
|
osm2pgsql compile error against geos trunk - **[Submitted to the original trac issue database at 6.10pm, Wednesday, 20th October 2010]**
Building osm2pgsql trunk against geos trunk leads to:
```
g++ -c -o build_geometry.o build_geometry.cpp
build_geometry.cpp: In function char* get_wkt_simple(osmNode*, int, int):
build_geometry.cpp:67: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp: In function size_t get_wkt_split(osmNode*, int, int, double):
build_geometry.cpp:109: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp:139: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp:151: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp: In function size_t build_geometry(int, osmNode**, int*, int, int, double):
build_geometry.cpp:302: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp:351: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
build_geometry.cpp:363: error: call of overloaded create(int, int) is ambiguous
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:68: note: candidates are: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(std::vector<geos::geom::Coordinate, std::allocator<geos::geom::Coordinate> >*, size_t) const
/usr/local/include/geos/geom/CoordinateSequenceFactory.h:81: note: virtual geos::geom::CoordinateSequence* geos::geom::CoordinateSequenceFactory::create(size_t, size_t) const
make: *** [build_geometry.o] Error 1
```
I appears that size_t must now be explicitly passed, and this patch seems to fix the problem:
```diff
Index: build_geometry.cpp
===================================================================
--- build_geometry.cpp (revision 23726)
+++ build_geometry.cpp (working copy)
@@ -64,7 +64,9 @@
char *get_wkt_simple(osmNode *nodes, int count, int polygon) {
GeometryFactory gf;
- std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(size, dimension));
try
{
@@ -106,7 +108,9 @@
size_t get_wkt_split(osmNode *nodes, int count, int polygon, double split_at) {
GeometryFactory gf;
- std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(size, dimension));
double area;
WKTWriter wktw;
size_t wkt_size = 0;
@@ -136,7 +140,9 @@
double distance = 0;
std::auto_ptr<CoordinateSequence> segment;
- segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(size, dimension));
segment->add(coords->getAt(0));
for(unsigned i=1; i<coords->getSize(); i++) {
segment->add(coords->getAt(i));
@@ -148,7 +154,8 @@
areas.push_back(0);
wkt_size++;
distance=0;
- segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(0, 2));
+
+ segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(size, dimension));
segment->add(coords->getAt(i));
}
}
@@ -299,7 +306,9 @@
try
{
for (int c=0; xnodes[c]; c++) {
- std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ std::auto_ptr<CoordinateSequence> coords(gf.getCoordinateSequenceFactory()->create(size, dimension));
for (int i = 0; i < xcount[c]; i++) {
struct osmNode *nodes = xnodes[c];
Coordinate c;
@@ -348,7 +357,9 @@
//std::cerr << "polygon(" << osm_id << ") is no good: points(" << pline->getNumPoints() << "), closed(" << pline->isClosed() << "). " << writer.write(pline.get()) << std::endl;
double distance = 0;
std::auto_ptr<CoordinateSequence> segment;
- segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(0, 2));
+ std::size_t size(0);
+ std::size_t dimension(2);
+ segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(size, dimension));
segment->add(pline->getCoordinateN(0));
for(unsigned i=1; i<pline->getNumPoints(); i++) {
segment->add(pline->getCoordinateN(i));
@@ -360,7 +371,7 @@
areas.push_back(0);
wkt_size++;
distance=0;
- segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(0, 2));
+ segment = std::auto_ptr<CoordinateSequence>(gf.getCoordinateSequenceFactory()->create(size, dimension));
segment->add(pline->getCoordinateN(i));
}
}
```
|
defect
|
compile error against geos trunk building trunk against geos trunk leads to g c o build geometry o build geometry cpp build geometry cpp in function char get wkt simple osmnode int int build geometry cpp error call of overloaded create int int is ambiguous usr local include geos geom coordinatesequencefactory h note candidates are virtual geos geom coordinatesequence geos geom coordinatesequencefactory create std vector size t const usr local include geos geom coordinatesequencefactory h note virtual geos geom coordinatesequence geos geom coordinatesequencefactory create size t size t const build geometry cpp in function size t get wkt split osmnode int int double build geometry cpp error call of overloaded create int int is ambiguous usr local include geos geom coordinatesequencefactory h note candidates are virtual geos geom coordinatesequence geos geom coordinatesequencefactory create std vector size t const usr local include geos geom coordinatesequencefactory h note virtual geos geom coordinatesequence geos geom coordinatesequencefactory create size t size t const build geometry cpp error call of overloaded create int int is ambiguous usr local include geos geom coordinatesequencefactory h note candidates are virtual geos geom coordinatesequence geos geom coordinatesequencefactory create std vector size t const usr local include geos geom coordinatesequencefactory h note virtual geos geom coordinatesequence geos geom coordinatesequencefactory create size t size t const build geometry cpp error call of overloaded create int int is ambiguous usr local include geos geom coordinatesequencefactory h note candidates are virtual geos geom coordinatesequence geos geom coordinatesequencefactory create std vector size t const usr local include geos geom coordinatesequencefactory h note virtual geos geom coordinatesequence geos geom coordinatesequencefactory create size t size t const build geometry cpp in function size t build geometry int osmnode int int int double build geometry cpp error call of overloaded create int int is ambiguous usr local include geos geom coordinatesequencefactory h note candidates are virtual geos geom coordinatesequence geos geom coordinatesequencefactory create std vector size t const usr local include geos geom coordinatesequencefactory h note virtual geos geom coordinatesequence geos geom coordinatesequencefactory create size t size t const build geometry cpp error call of overloaded create int int is ambiguous usr local include geos geom coordinatesequencefactory h note candidates are virtual geos geom coordinatesequence geos geom coordinatesequencefactory create std vector size t const usr local include geos geom coordinatesequencefactory h note virtual geos geom coordinatesequence geos geom coordinatesequencefactory create size t size t const build geometry cpp error call of overloaded create int int is ambiguous usr local include geos geom coordinatesequencefactory h note candidates are virtual geos geom coordinatesequence geos geom coordinatesequencefactory create std vector size t const usr local include geos geom coordinatesequencefactory h note virtual geos geom coordinatesequence geos geom coordinatesequencefactory create size t size t const make error i appears that size t must now be explicitly passed and this patch seems to fix the problem diff index build geometry cpp build geometry cpp revision build geometry cpp working copy char get wkt simple osmnode nodes int count int polygon geometryfactory gf std auto ptr coords gf getcoordinatesequencefactory create std size t size std size t dimension std auto ptr coords gf getcoordinatesequencefactory create size dimension try size t get wkt split osmnode nodes int count int polygon double split at geometryfactory gf std auto ptr coords gf getcoordinatesequencefactory create std size t size std size t dimension std auto ptr coords gf getcoordinatesequencefactory create size dimension double area wktwriter wktw size t wkt size double distance std auto ptr segment segment std auto ptr gf getcoordinatesequencefactory create std size t size std size t dimension segment std auto ptr gf getcoordinatesequencefactory create size dimension segment add coords getat for unsigned i i getsize i segment add coords getat i areas push back wkt size distance segment std auto ptr gf getcoordinatesequencefactory create segment std auto ptr gf getcoordinatesequencefactory create size dimension segment add coords getat i try for int c xnodes c std auto ptr coords gf getcoordinatesequencefactory create std size t size std size t dimension std auto ptr coords gf getcoordinatesequencefactory create size dimension for int i i xcount i struct osmnode nodes xnodes coordinate c std cerr getnumpoints isclosed writer write pline get std endl double distance std auto ptr segment segment std auto ptr gf getcoordinatesequencefactory create std size t size std size t dimension segment std auto ptr gf getcoordinatesequencefactory create size dimension segment add pline getcoordinaten for unsigned i i getnumpoints i segment add pline getcoordinaten i areas push back wkt size distance segment std auto ptr gf getcoordinatesequencefactory create segment std auto ptr gf getcoordinatesequencefactory create size dimension segment add pline getcoordinaten i
| 1
|
39,628
| 9,592,932,465
|
IssuesEvent
|
2019-05-09 10:08:06
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
closed
|
https://github.com/PowerDNS/pdns/pull/7772 breaks recursor regression test
|
auth defect rec
|
Above commit breaks a recursor regression test: `test_basicNSEC.py:basicNSEC.testSecureDNAMEToInsecureCNAMEAnswer`
Resursor test log is attached. It is not clear yet if this is an auth or recursor bug, but the first suspect would be auth.
The recursor tests uses the auth process. Bisecting the auth code lead to the above commit.
[recursor.log](https://github.com/PowerDNS/pdns/files/3148528/recursor.log)
@mind04 any clue?
|
1.0
|
https://github.com/PowerDNS/pdns/pull/7772 breaks recursor regression test - Above commit breaks a recursor regression test: `test_basicNSEC.py:basicNSEC.testSecureDNAMEToInsecureCNAMEAnswer`
Resursor test log is attached. It is not clear yet if this is an auth or recursor bug, but the first suspect would be auth.
The recursor tests uses the auth process. Bisecting the auth code lead to the above commit.
[recursor.log](https://github.com/PowerDNS/pdns/files/3148528/recursor.log)
@mind04 any clue?
|
defect
|
breaks recursor regression test above commit breaks a recursor regression test test basicnsec py basicnsec testsecurednametoinsecurecnameanswer resursor test log is attached it is not clear yet if this is an auth or recursor bug but the first suspect would be auth the recursor tests uses the auth process bisecting the auth code lead to the above commit any clue
| 1
|
149,802
| 23,533,329,495
|
IssuesEvent
|
2022-08-19 17:38:46
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
closed
|
[Design] MyHealth - Design "delete folder" page for desktop and mobile
|
design my-health my-health-SM-CORE mhv-to-va.gov-SM
|
## Issue Description
As a secure messaging user, I need a way to delete secure messages folders I have created.
---
## Tasks
- [x] Design "delete folder" page for desktop
- [x] Design "delete folder" page for mobile
- [x] Drop link to mockups into a comment on this ticket
- [x] Drop a note into #mhv_on_vagov in Slack when you are ready for Tracey to review
## Acceptance Criteria
- [ ] Mobile and desktop initial designs will be complete and ready for review by PO.
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `Console-Services`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
|
1.0
|
[Design] MyHealth - Design "delete folder" page for desktop and mobile - ## Issue Description
As a secure messaging user, I need a way to delete secure messages folders I have created.
---
## Tasks
- [x] Design "delete folder" page for desktop
- [x] Design "delete folder" page for mobile
- [x] Drop link to mockups into a comment on this ticket
- [x] Drop a note into #mhv_on_vagov in Slack when you are ready for Tracey to review
## Acceptance Criteria
- [ ] Mobile and desktop initial designs will be complete and ready for review by PO.
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `Console-Services`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
|
non_defect
|
myhealth design delete folder page for desktop and mobile issue description as a secure messaging user i need a way to delete secure messages folders i have created tasks design delete folder page for desktop design delete folder page for mobile drop link to mockups into a comment on this ticket drop a note into mhv on vagov in slack when you are ready for tracey to review acceptance criteria mobile and desktop initial designs will be complete and ready for review by po how to configure this issue attached to a milestone when will this be completed attached to an epic what body of work is this a part of labeled with team product support analytics insights operations service design console services tools fe labeled with practice area backend frontend devops design research product ia qa analytics contact center research accessibility content labeled with type bug request discovery documentation etc
| 0
|
282,237
| 21,315,474,106
|
IssuesEvent
|
2022-04-16 07:35:35
|
kxshxsh/pe
|
https://api.github.com/repos/kxshxsh/pe
|
opened
|
UG: poorly defined target users
|
severity.Medium type.DocumentationBug
|

Are you restricted to NUS as a university? If yes, then this has not been mentioned
If no, then your class group ids, module codes etc etc are VERY specific to NUS.
As such your introduction is misleading to potential users because they may think that the product is for them when it is actually not.
<!--session: 1650087226793-f266947a-1da7-4629-8f86-32a293c5b0a1-->
<!--Version: Web v3.4.2-->
|
1.0
|
UG: poorly defined target users - 
Are you restricted to NUS as a university? If yes, then this has not been mentioned
If no, then your class group ids, module codes etc etc are VERY specific to NUS.
As such your introduction is misleading to potential users because they may think that the product is for them when it is actually not.
<!--session: 1650087226793-f266947a-1da7-4629-8f86-32a293c5b0a1-->
<!--Version: Web v3.4.2-->
|
non_defect
|
ug poorly defined target users are you restricted to nus as a university if yes then this has not been mentioned if no then your class group ids module codes etc etc are very specific to nus as such your introduction is misleading to potential users because they may think that the product is for them when it is actually not
| 0
|
62,807
| 6,813,759,806
|
IssuesEvent
|
2017-11-06 10:28:19
|
FreeRDP/FreeRDP
|
https://api.github.com/repos/FreeRDP/FreeRDP
|
closed
|
xfreerdp start consume 100% CPU if trying connect to disabled RDP
|
fixed-waiting-test
|
```
$ xfreerdp --version
This is FreeRDP version 2.0.0-dev (git n/a)
```
How reproduce:
1) Connect to worked Windows host.
2) Enter in windows terminal command for reboot:
```shutdown /r /t 0```
3) now xfreerdp will be closed
```[02:21:30:930] [17050:17051] [INFO][com.freerdp.core] - ERRINFO_LOGOFF_BY_USER (0x0000000C):The disconnection was initiated by the user logging off his or her session on the server.```
4) Without waiting when Windows started again try connect again.
Here got 100% CPU load and messages in terminal:
```
$ xfreerdp /v:host /u:login@domain
Password:
[02:24:43:409] [18401:18402] [ERROR][com.winpr.timezone] - Unable to get current timezone rule
[02:24:43:463] [18401:18402] [INFO][com.freerdp.client.x11] - Logon Error Info SESSION_ID [UNKNOWN]
[02:24:43:463] [18401:18402] [INFO][com.freerdp.client.x11] - Logon Error Info SESSION_ID [UNKNOWN]
[02:24:43:463] [18401:18402] [INFO][com.freerdp.core] - ERRINFO_LOGOFF_BY_USER (0x0000000C):The disconnection was initiated by the user logging off his or her session on the server.
```
This is backtrace:
```
(gdb) thread apply all bt
Thread 2 (Thread 0x7fec587ec700 (LWP 18402)):
#0 0x00007fec62fd9bcb in __GI___poll (fds=fds@entry=0x7fec587eb8c0, nfds=nfds@entry=1, timeout=timeout@entry=0) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x00007fec632e19c8 in poll (__timeout=0, __nfds=1, __fds=0x7fec587eb8c0) at /usr/include/bits/poll2.h:46
#2 waitOnFd (dwMilliseconds=0, mode=<optimized out>, fd=<optimized out>) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/synch/wait.c:197
#3 WaitForSingleObject (hHandle=0x55e6dd3d1460, dwMilliseconds=dwMilliseconds@entry=0) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/synch/wait.c:296
#4 0x00007fec63b6ec81 in freerdp_shall_disconnect (instance=<optimized out>) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/freerdp.c:518
#5 0x00007fec63b8bbe2 in transport_check_fds (transport=transport@entry=0x55e6dd3af3f0) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/transport.c:1000
#6 0x00007fec63b83908 in rdp_check_fds (rdp=rdp@entry=0x55e6dd3a1800) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/rdp.c:1505
#7 0x00007fec63b7a900 in rdp_client_connect (rdp=rdp@entry=0x55e6dd3a1800) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/connection.c:290
#8 0x00007fec63b6f9e0 in freerdp_connect (instance=instance@entry=0x55e6dd37b7b0) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/freerdp.c:189
#9 0x000055e6dc5bf86f in xf_client_thread (param=0x55e6dd37b7b0) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/client/X11/xf_client.c:1473
#10 0x00007fec6331aee1 in thread_launcher (arg=0x55e6dd3e3500) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/thread/thread.c:320
#11 0x00007fec62888609 in start_thread (arg=0x7fec587ec700) at pthread_create.c:465
#12 0x00007fec62fe617f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 1 (Thread 0x7fec654108c0 (LWP 18401)):
#0 0x00007fec62fd9bcb in __GI___poll (fds=fds@entry=0x7ffe2f8e2940, nfds=nfds@entry=1, timeout=timeout@entry=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x00007fec632e19c8 in poll (__timeout=-1, __nfds=1, __fds=0x7ffe2f8e2940) at /usr/include/bits/poll2.h:46
#2 waitOnFd (dwMilliseconds=4294967295, mode=<optimized out>, fd=<optimized out>) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/synch/wait.c:197
#3 WaitForSingleObject (hHandle=0x55e6dd3e3500, dwMilliseconds=4294967295) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/synch/wait.c:296
#4 0x000055e6dc5ad946 in main (argc=<optimized out>, argv=0x7ffe2f8e2af8) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/client/X11/cli/xfreerdp.c:75
(gdb)
```
|
1.0
|
xfreerdp start consume 100% CPU if trying connect to disabled RDP - ```
$ xfreerdp --version
This is FreeRDP version 2.0.0-dev (git n/a)
```
How reproduce:
1) Connect to worked Windows host.
2) Enter in windows terminal command for reboot:
```shutdown /r /t 0```
3) now xfreerdp will be closed
```[02:21:30:930] [17050:17051] [INFO][com.freerdp.core] - ERRINFO_LOGOFF_BY_USER (0x0000000C):The disconnection was initiated by the user logging off his or her session on the server.```
4) Without waiting when Windows started again try connect again.
Here got 100% CPU load and messages in terminal:
```
$ xfreerdp /v:host /u:login@domain
Password:
[02:24:43:409] [18401:18402] [ERROR][com.winpr.timezone] - Unable to get current timezone rule
[02:24:43:463] [18401:18402] [INFO][com.freerdp.client.x11] - Logon Error Info SESSION_ID [UNKNOWN]
[02:24:43:463] [18401:18402] [INFO][com.freerdp.client.x11] - Logon Error Info SESSION_ID [UNKNOWN]
[02:24:43:463] [18401:18402] [INFO][com.freerdp.core] - ERRINFO_LOGOFF_BY_USER (0x0000000C):The disconnection was initiated by the user logging off his or her session on the server.
```
This is backtrace:
```
(gdb) thread apply all bt
Thread 2 (Thread 0x7fec587ec700 (LWP 18402)):
#0 0x00007fec62fd9bcb in __GI___poll (fds=fds@entry=0x7fec587eb8c0, nfds=nfds@entry=1, timeout=timeout@entry=0) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x00007fec632e19c8 in poll (__timeout=0, __nfds=1, __fds=0x7fec587eb8c0) at /usr/include/bits/poll2.h:46
#2 waitOnFd (dwMilliseconds=0, mode=<optimized out>, fd=<optimized out>) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/synch/wait.c:197
#3 WaitForSingleObject (hHandle=0x55e6dd3d1460, dwMilliseconds=dwMilliseconds@entry=0) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/synch/wait.c:296
#4 0x00007fec63b6ec81 in freerdp_shall_disconnect (instance=<optimized out>) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/freerdp.c:518
#5 0x00007fec63b8bbe2 in transport_check_fds (transport=transport@entry=0x55e6dd3af3f0) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/transport.c:1000
#6 0x00007fec63b83908 in rdp_check_fds (rdp=rdp@entry=0x55e6dd3a1800) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/rdp.c:1505
#7 0x00007fec63b7a900 in rdp_client_connect (rdp=rdp@entry=0x55e6dd3a1800) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/connection.c:290
#8 0x00007fec63b6f9e0 in freerdp_connect (instance=instance@entry=0x55e6dd37b7b0) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/libfreerdp/core/freerdp.c:189
#9 0x000055e6dc5bf86f in xf_client_thread (param=0x55e6dd37b7b0) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/client/X11/xf_client.c:1473
#10 0x00007fec6331aee1 in thread_launcher (arg=0x55e6dd3e3500) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/thread/thread.c:320
#11 0x00007fec62888609 in start_thread (arg=0x7fec587ec700) at pthread_create.c:465
#12 0x00007fec62fe617f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Thread 1 (Thread 0x7fec654108c0 (LWP 18401)):
#0 0x00007fec62fd9bcb in __GI___poll (fds=fds@entry=0x7ffe2f8e2940, nfds=nfds@entry=1, timeout=timeout@entry=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1 0x00007fec632e19c8 in poll (__timeout=-1, __nfds=1, __fds=0x7ffe2f8e2940) at /usr/include/bits/poll2.h:46
#2 waitOnFd (dwMilliseconds=4294967295, mode=<optimized out>, fd=<optimized out>) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/synch/wait.c:197
#3 WaitForSingleObject (hHandle=0x55e6dd3e3500, dwMilliseconds=4294967295) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/winpr/libwinpr/synch/wait.c:296
#4 0x000055e6dc5ad946 in main (argc=<optimized out>, argv=0x7ffe2f8e2af8) at /usr/src/debug/freerdp-2.0.0-34.20170831git3b83526.fc27.x86_64/client/X11/cli/xfreerdp.c:75
(gdb)
```
|
non_defect
|
xfreerdp start consume cpu if trying connect to disabled rdp xfreerdp version this is freerdp version dev git n a how reproduce connect to worked windows host enter in windows terminal command for reboot shutdown r t now xfreerdp will be closed errinfo logoff by user the disconnection was initiated by the user logging off his or her session on the server without waiting when windows started again try connect again here got cpu load and messages in terminal xfreerdp v host u login domain password unable to get current timezone rule logon error info session id logon error info session id errinfo logoff by user the disconnection was initiated by the user logging off his or her session on the server this is backtrace gdb thread apply all bt thread thread lwp in gi poll fds fds entry nfds nfds entry timeout timeout entry at sysdeps unix sysv linux poll c in poll timeout nfds fds at usr include bits h waitonfd dwmilliseconds mode fd at usr src debug freerdp winpr libwinpr synch wait c waitforsingleobject hhandle dwmilliseconds dwmilliseconds entry at usr src debug freerdp winpr libwinpr synch wait c in freerdp shall disconnect instance at usr src debug freerdp libfreerdp core freerdp c in transport check fds transport transport entry at usr src debug freerdp libfreerdp core transport c in rdp check fds rdp rdp entry at usr src debug freerdp libfreerdp core rdp c in rdp client connect rdp rdp entry at usr src debug freerdp libfreerdp core connection c in freerdp connect instance instance entry at usr src debug freerdp libfreerdp core freerdp c in xf client thread param at usr src debug freerdp client xf client c in thread launcher arg at usr src debug freerdp winpr libwinpr thread thread c in start thread arg at pthread create c in clone at sysdeps unix sysv linux clone s thread thread lwp in gi poll fds fds entry nfds nfds entry timeout timeout entry at sysdeps unix sysv linux poll c in poll timeout nfds fds at usr include bits h waitonfd dwmilliseconds mode fd at usr 
src debug freerdp winpr libwinpr synch wait c waitforsingleobject hhandle dwmilliseconds at usr src debug freerdp winpr libwinpr synch wait c in main argc argv at usr src debug freerdp client cli xfreerdp c gdb
| 0
|
61,552
| 17,023,723,579
|
IssuesEvent
|
2021-07-03 03:30:03
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Reverse geocoding resolves coordinate to Graubünden, Switzerland instead of Upper Austria, Austria
|
Component: nominatim Priority: minor Resolution: invalid Type: defect
|
**[Submitted to the original trac issue database at 2.35pm, Sunday, 19th June 2011]**
I have two coordinates, which are only a few hundred meters away from each other (in the same city), but the reverse geocoding request places them in two different cities (a few hundred kilometers away from each other) and even countries.
Point one is correct calculated to be in Upper Austria, Linz (Austria) - http://open.mapquestapi.com/nominatim/v1/reverse?format=xml&lat=48.260269&lon=14.325599&zoom=18&addressdetails=1&accept-language=en
Point two (approximately 200 meters away) is calculated wrong to be in Graubnden, Switzerland - http://open.mapquestapi.com/nominatim/v1/reverse?format=xml&lat=48.260269&lon=14.324499&zoom=18&addressdetails=1&accept-language=en
A reverse geocoding request for point two should also return the city 'Linz', state 'Upper Austria' and Country 'Austria'.
|
1.0
|
Reverse geocoding resolves coordinate to Graubünden, Switzerland instead of Upper Austria, Austria - **[Submitted to the original trac issue database at 2.35pm, Sunday, 19th June 2011]**
I have two coordinates, which are only a few hundred meters away from each other (in the same city), but the reverse geocoding request places them in two different cities (a few hundred kilometers away from each other) and even countries.
Point one is correct calculated to be in Upper Austria, Linz (Austria) - http://open.mapquestapi.com/nominatim/v1/reverse?format=xml&lat=48.260269&lon=14.325599&zoom=18&addressdetails=1&accept-language=en
Point two (approximately 200 meters away) is calculated wrong to be in Graubnden, Switzerland - http://open.mapquestapi.com/nominatim/v1/reverse?format=xml&lat=48.260269&lon=14.324499&zoom=18&addressdetails=1&accept-language=en
A reverse geocoding request for point two should also return the city 'Linz', state 'Upper Austria' and Country 'Austria'.
|
defect
|
reverse geocoding resolves coordinate to graubünden switzerland instead of upper austria austria i have two coordinates which are only a few hundred meters away from each other in the same city but the reverse geocoding request places them in two different cities a few hundred kilometers away from each other and even countries point one is correct calculated to be in upper austria linz austria point two approximately meters away is calculated wrong to be in graubnden switzerland a reverse geocoding request for point two should also return the city linz state upper austria and country austria
| 1
|
240,809
| 20,074,347,066
|
IssuesEvent
|
2022-02-04 10:57:10
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
pkg/sql/sqlstats/persistedsqlstats/persistedsqlstats_test: TestSQLStatsCompactorNilTestingKnobCheck failed
|
C-test-failure O-robot branch-master
|
pkg/sql/sqlstats/persistedsqlstats/persistedsqlstats_test.TestSQLStatsCompactorNilTestingKnobCheck [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4300830&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4300830&tab=artifacts#/) on master @ [4074222bd03bc5960044a82be6e35eb4351388fb](https://github.com/cockroachdb/cockroach/commits/4074222bd03bc5960044a82be6e35eb4351388fb):
```
=== RUN TestSQLStatsCompactorNilTestingKnobCheck
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/f6543d871e87ca5f79c9e9854efd0ecf/logTestSQLStatsCompactorNilTestingKnobCheck3294345160
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestSQLStatsCompactorNilTestingKnobCheck.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
1.0
|
pkg/sql/sqlstats/persistedsqlstats/persistedsqlstats_test: TestSQLStatsCompactorNilTestingKnobCheck failed - pkg/sql/sqlstats/persistedsqlstats/persistedsqlstats_test.TestSQLStatsCompactorNilTestingKnobCheck [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4300830&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4300830&tab=artifacts#/) on master @ [4074222bd03bc5960044a82be6e35eb4351388fb](https://github.com/cockroachdb/cockroach/commits/4074222bd03bc5960044a82be6e35eb4351388fb):
```
=== RUN TestSQLStatsCompactorNilTestingKnobCheck
test_log_scope.go:79: test logs captured to: /artifacts/tmp/_tmp/f6543d871e87ca5f79c9e9854efd0ecf/logTestSQLStatsCompactorNilTestingKnobCheck3294345160
test_log_scope.go:80: use -show-logs to present logs inline
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestSQLStatsCompactorNilTestingKnobCheck.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_defect
|
pkg sql sqlstats persistedsqlstats persistedsqlstats test testsqlstatscompactorniltestingknobcheck failed pkg sql sqlstats persistedsqlstats persistedsqlstats test testsqlstatscompactorniltestingknobcheck with on master run testsqlstatscompactorniltestingknobcheck test log scope go test logs captured to artifacts tmp tmp test log scope go use show logs to present logs inline help see also parameters in this failure tags bazel gss
| 0
|
7,784
| 2,610,634,977
|
IssuesEvent
|
2015-02-26 21:33:03
|
alistairreilly/open-ig
|
https://api.github.com/repos/alistairreilly/open-ig
|
closed
|
Műholdak-49-es issue-ból kimaradt a sima műhold
|
auto-migrated Information-Display Milestone-0.93.400 Priority-Medium Radar Type-Defect
|
```
Műhold:
-Kiírja, ha a bolygó lakott: "Idegen Birodalom".
-Láthatóak az űrbázisok is.
-Épületek láthatóak, de nevek nélkül.
Kémműhold 1&2:
-Telepíthetőek üres(de nem ismert), kolonizálható bolygókra is!
```
Original issue reported on code.google.com by `Jozsef.T...@gmail.com` on 23 Aug 2011 at 6:48
|
1.0
|
Műholdak-49-es issue-ból kimaradt a sima műhold - ```
Műhold:
-Kiírja, ha a bolygó lakott: "Idegen Birodalom".
-Láthatóak az űrbázisok is.
-Épületek láthatóak, de nevek nélkül.
Kémműhold 1&2:
-Telepíthetőek üres(de nem ismert), kolonizálható bolygókra is!
```
Original issue reported on code.google.com by `Jozsef.T...@gmail.com` on 23 Aug 2011 at 6:48
|
defect
|
műholdak es issue ból kimaradt a sima műhold műhold kiírja ha a bolygó lakott idegen birodalom láthatóak az űrbázisok is épületek láthatóak de nevek nélkül kémműhold telepíthetőek üres de nem ismert kolonizálható bolygókra is original issue reported on code google com by jozsef t gmail com on aug at
| 1
|
48,632
| 13,175,350,701
|
IssuesEvent
|
2020-08-12 01:20:27
|
PowerDNS/pdns
|
https://api.github.com/repos/PowerDNS/pdns
|
closed
|
dnsdist: addAction/rmRule/newServer/rmServer leak memory
|
defect dnsdist
|
<!-- Hi! Thanks for filing an issue. It will be read with care by human beings. Can we ask you to please fill out this template and not simply demand new features or send in complaints? Thanks! -->
<!-- Also please search the existing issues (both open and closed) to see if your report might be duplicate -->
<!-- Please don't file an issue when you have a support question, send support questions to the mailinglist or ask them on IRC (https://www.powerdns.com/opensource.html) -->
<!-- Tell us what is issue is about -->
- Program: dnsdist <!-- delete the ones that do not apply -->
- Issue type: Bug report
### Short description
<!-- Explain in a few sentences what the issue/request is -->
We have tested dnsdist as DNS load balance.
We frequently add/delete rules or servers.
and we found that when frequently add/delete rules/servers will lead to memory leak.
### Environment
<!-- Tell us about the environment -->
- Operating system: Ubuntu 16.04.5 LTS
- Software version: 1.4.0-1pdns.xenial, 1.5.0
- Software source: <!-- e.g. Operating system repository, PowerDNS repository, compiled yourself --> 1.4.0(PowerDNS repository), 1.5.0(compiled myself)
```
dnsdist -V
dnsdist 1.4.0 (Lua 5.1.4 [LuaJIT 2.0.4])
Enabled features: cdb dns-over-tls(openssl) dnscrypt ebpf ipcipher libsodium protobuf re2 recvmmsg/sendmmsg systemd
```
```
dnsdist 1.5.0 (Lua 5.1.4 [LuaJIT 2.0.4])
Enabled features: ebpf fstrm ipcipher libsodium protobuf recvmmsg/sendmmsg systemd
```
### Steps to reproduce
<!-- Tell us step-by-step how the issue can be triggered. Please include your configuration files and any (Lua) scripts that are loaded. -->
1. <!-- step 1 --> dnsdist server config
```
-- listen for console connection with the given secret key
controlSocket("127.0.0.1")
addConsoleACL("127.0.0.1")
setKey("xxxx")
addLocal("127.0.0.1:53", {reusePort=true})
newServer("8.8.8.8")
newServer("1.1.1.1")
pc = newPacketCache(5000000)
getPool(""):setCache(pc)
-- test rule
addAction(makeRule({"10.254.0.60/32", "10.254.0.71/32", "10.254.0.72/32", "10.254.0.73"}), PoolAction(""))
```
2. <!-- step 2 --> server.sh and makerule_test.sh
makerule_test.sh: add and delete the same rule 4000 times
server.sh: add and delete the same server 200 times.
server.lua: add new server and delete it
```
newServer("2400:3200::1")
rmServer(2)
collectgarbage()
```
v1.4 server.sh:
```
mem_start=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
for x in $(seq 1 $1); do dnsdist -c -C ./dnsdist.conf < server.lua >/dev/null; done
mem_end=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
echo "start: $mem_start end: $mem_end"
echo "($mem_end-$mem_start)/1024"|bc
```
v1.5 server.sh:
```
mem_start=$(ps -eo rss,args | grep 1.5.0/dnsdis[t]|awk '{print $1}')
for x in $(seq 1 $1); do dnsdist -c -C ./dnsdist.conf < server.lua >/dev/null; done
mem_end=$(ps -eo rss,args | grep 1.5.0/dnsdis[t]|awk '{print $1}')
echo "start: $mem_start end: $mem_end"
echo "($mem_end-$mem_start)/1024"|bc
```
makerule_test.sh:
generate_ip.sh: create a IP list configuration
```
#!/bin/ash
count="$1"
for i in `seq 1 255`; do
for j in `seq 1 255`; do
echo 10.10.$i.$j
let count-=1
if [ $count -eq 0 ]; then
exit 0
fi
done
done
```
bash generate_ip.sh 15000 >ip.conf
dist_rule.py: the arg is a ip list file, and will generate a single rule with 15000 IPs
```
def only_make_rule():
# rule example: addAction(makeRule({"10.254.0.60/32", "10.254.0.71/32", "10.254.0.72/32", "10.254.0.73"}), PoolAction(""))
rule_template = "addAction(makeRule({0}), PoolAction(''))"
f = open(sys.argv[1])
l = f.readlines()
acl_rule = ['"{0}"'.format(i[0:-1]) for i in l]
acl_rule_final = ','.join(acl_rule)
#acl_rule_final = acl_rule_template.format(acl_rule_final)
print(rule_template.format("{{{0}}}".format(acl_rule_final)))
print('rmRule(1)')
print('collectgarbage()')
only_make_rule()
```
python3 dist_rule.py ip.conf > makerule_15000.lua
```
mem_start=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
for x in $(seq 1 $1); do dnsdist -c -C ./dnsdist.conf < makerule_15000.lua >/dev/null; done
mem_end=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
echo "start: $mem_start end: $mem_end"
echo "($mem_end-$mem_start)/1024"|bc
```
makerule_test.sh: add and delete the same rule x times
```
mem_start=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
for x in $(seq 1 $1); do dnsdist -c -C ./dnsdist.conf < makerule_15000.lua >/dev/null; done
mem_end=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
echo "start: $mem_start end: $mem_end"
echo "($mem_end-$mem_start)/1024"|bc
```
3. <!-- step 3 --> add and delete the same rule
```
bash makerule_test.sh 4000
```
4. stop dnsdist and restart dnsdist.
5. run newServer/rmServer two hundred times
```
bash server.sh 200
```
6. and new rules and delete old rule.
step 3: add and delete the same rule lead to memory leak.
and this step will add more IPs to a single rule, and delete old rule.
example: addAction(A, PoolAction(""))
a) first rule: with old IP set A : 15000 IPs
b) new rule: new IP set B: add two IPs to A
c)delete the first rule
### Expected behaviour
<!-- What would you expect to happen when the reproduction steps are run -->
add and delete the same rule will not increase memory usage.
add new rule and delete old rule will increase a little memory usage.
add and delete the same server will not increase memory usage.
### Actual behaviour
<!-- What did happen? Please (if possible) provide logs, output from `dig` and/or tcpdump/wireshark data -->
step 3: add and delete the same rule
$ bash makerule_test.sh 4000
output:
start: 97356 end: 949800
832
result: increased 832M byte memory!
step 5: add and delete the same server
$ bash server.sh 200
output:
start: 97156 end: 3889156
3703
result: increased 3703M byte memory!
step 6:
this also lead to memory leak
### Other information
<!-- if you already did more digging into the issue, please provide all the information you gathered -->
[issues/8530: dnsdist: addAction/rmRule consumes incrementally more memory](https://github.com/PowerDNS/pdns/issues/8530)
dnsdist -c -C ./dnsdist.conf < makerule_15000.lua will add log to ~/.dnsdist_history, in the test, we clear the history file per second.
|
1.0
|
dnsdist: addAction/rmRule/newServer/rmServer leak memory - <!-- Hi! Thanks for filing an issue. It will be read with care by human beings. Can we ask you to please fill out this template and not simply demand new features or send in complaints? Thanks! -->
<!-- Also please search the existing issues (both open and closed) to see if your report might be duplicate -->
<!-- Please don't file an issue when you have a support question, send support questions to the mailinglist or ask them on IRC (https://www.powerdns.com/opensource.html) -->
<!-- Tell us what is issue is about -->
- Program: dnsdist <!-- delete the ones that do not apply -->
- Issue type: Bug report
### Short description
<!-- Explain in a few sentences what the issue/request is -->
We have tested dnsdist as DNS load balance.
We frequently add/delete rules or servers.
and we found that when frequently add/delete rules/servers will lead to memory leak.
### Environment
<!-- Tell us about the environment -->
- Operating system: Ubuntu 16.04.5 LTS
- Software version: 1.4.0-1pdns.xenial, 1.5.0
- Software source: <!-- e.g. Operating system repository, PowerDNS repository, compiled yourself --> 1.4.0(PowerDNS repository), 1.5.0(compiled myself)
```
dnsdist -V
dnsdist 1.4.0 (Lua 5.1.4 [LuaJIT 2.0.4])
Enabled features: cdb dns-over-tls(openssl) dnscrypt ebpf ipcipher libsodium protobuf re2 recvmmsg/sendmmsg systemd
```
```
dnsdist 1.5.0 (Lua 5.1.4 [LuaJIT 2.0.4])
Enabled features: ebpf fstrm ipcipher libsodium protobuf recvmmsg/sendmmsg systemd
```
### Steps to reproduce
<!-- Tell us step-by-step how the issue can be triggered. Please include your configuration files and any (Lua) scripts that are loaded. -->
1. <!-- step 1 --> dnsdist server config
```
-- listen for console connection with the given secret key
controlSocket("127.0.0.1")
addConsoleACL("127.0.0.1")
setKey("xxxx")
addLocal("127.0.0.1:53", {reusePort=true})
newServer("8.8.8.8")
newServer("1.1.1.1")
pc = newPacketCache(5000000)
getPool(""):setCache(pc)
-- test rule
addAction(makeRule({"10.254.0.60/32", "10.254.0.71/32", "10.254.0.72/32", "10.254.0.73"}), PoolAction(""))
```
2. <!-- step 2 --> server.sh and makerule_test.sh
makerule_test.sh: add and delete the same rule 4000 times
server.sh: add and delete the same server 200 times.
server.lua: add new server and delete it
```
newServer("2400:3200::1")
rmServer(2)
collectgarbage()
```
v1.4 server.sh:
```
mem_start=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
for x in $(seq 1 $1); do dnsdist -c -C ./dnsdist.conf < server.lua >/dev/null; done
mem_end=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
echo "start: $mem_start end: $mem_end"
echo "($mem_end-$mem_start)/1024"|bc
```
v1.5 server.sh:
```
mem_start=$(ps -eo rss,args | grep 1.5.0/dnsdis[t]|awk '{print $1}')
for x in $(seq 1 $1); do dnsdist -c -C ./dnsdist.conf < server.lua >/dev/null; done
mem_end=$(ps -eo rss,args | grep 1.5.0/dnsdis[t]|awk '{print $1}')
echo "start: $mem_start end: $mem_end"
echo "($mem_end-$mem_start)/1024"|bc
```
makerule_test.sh:
generate_ip.sh: create an IP list configuration
```
#!/bin/ash
count="$1"
for i in `seq 1 255`; do
  for j in `seq 1 255`; do
    echo 10.10.$i.$j
    let count-=1
    if [ $count -eq 0 ]; then
      exit 0
    fi
  done
done
```
bash generate_ip.sh 15000 >ip.conf
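The shell generator above can be sketched equivalently in Python (a hypothetical helper for illustration, not one of the reproduction scripts):

```python
# Equivalent of generate_ip.sh: emit up to `count` addresses from 10.10.1.1 upward.
def generate_ips(count):
    ips = []
    for i in range(1, 256):
        for j in range(1, 256):
            ips.append("10.10.{0}.{1}".format(i, j))
            if len(ips) == count:
                return ips
    return ips

if __name__ == "__main__":
    print("\n".join(generate_ips(15000)))
```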
dist_rule.py: the argument is an IP list file; it generates a single rule containing 15000 IPs
```
import sys

def only_make_rule():
    # rule example: addAction(makeRule({"10.254.0.60/32", "10.254.0.71/32", "10.254.0.72/32", "10.254.0.73"}), PoolAction(""))
    rule_template = "addAction(makeRule({0}), PoolAction(''))"
    f = open(sys.argv[1])
    l = f.readlines()
    acl_rule = ['"{0}"'.format(i[0:-1]) for i in l]
    acl_rule_final = ','.join(acl_rule)
    #acl_rule_final = acl_rule_template.format(acl_rule_final)
    print(rule_template.format("{{{0}}}".format(acl_rule_final)))
    print('rmRule(1)')
    print('collectgarbage()')

only_make_rule()
```
python3 dist_rule.py ip.conf > makerule_15000.lua
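As a quick sanity check of the formatting logic in dist_rule.py, here is the same template applied to a tiny hard-coded IP list (illustrative only):

```python
# Mirrors dist_rule.py's string formatting for two sample IPs.
ips = ["10.10.1.1", "10.10.1.2"]
acl = ",".join('"{0}"'.format(ip) for ip in ips)
# "{{{0}}}" wraps the comma-joined list in literal braces for the Lua table.
rule = "addAction(makeRule({0}), PoolAction(''))".format("{{{0}}}".format(acl))
print(rule)  # addAction(makeRule({"10.10.1.1","10.10.1.2"}), PoolAction(''))
```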
makerule_test.sh: add and delete the same rule x times
```
mem_start=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
for x in $(seq 1 $1); do dnsdist -c -C ./dnsdist.conf < makerule_15000.lua >/dev/null; done
mem_end=$(ps -eo rss,args | grep bin/dnsdis[t]|awk '{print $1}')
echo "start: $mem_start end: $mem_end"
echo "($mem_end-$mem_start)/1024"|bc
```
3. <!-- step 3 --> add and delete the same rule
```
bash makerule_test.sh 4000
```
4. stop dnsdist and restart dnsdist.
5. run newServer/rmServer two hundred times
```
bash server.sh 200
```
6. add new rules and delete the old rule.
Step 3 (adding and deleting the same rule) already leads to a memory leak;
this step instead adds more IPs to a single rule, then deletes the old rule.
example: addAction(A, PoolAction(""))
a) first rule: old IP set A: 15000 IPs
b) new rule: new IP set B: A plus two more IPs
c) delete the first rule
### Expected behaviour
<!-- What would you expect to happen when the reproduction steps are run -->
Adding and deleting the same rule should not increase memory usage.
Adding a new rule and deleting the old one should increase memory usage only slightly.
Adding and deleting the same server should not increase memory usage.
### Actual behaviour
<!-- What did happen? Please (if possible) provide logs, output from `dig` and/or tcpdump/wireshark data -->
step 3: add and delete the same rule
$ bash makerule_test.sh 4000
output:
start: 97356 end: 949800
832
result: memory usage increased by 832 MiB!
step 5: add and delete the same server
$ bash server.sh 200
output:
start: 97156 end: 3889156
3703
result: memory usage increased by 3703 MiB!
step 6:
this also leads to a memory leak
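The reported growth figures follow directly from the RSS numbers above (ps reports RSS in KiB, so integer division by 1024 gives MiB, matching the bc expression in the scripts):

```python
# RSS values (KiB) taken from the two runs above; // mirrors bc's integer division.
rule_start, rule_end = 97356, 949800
server_start, server_end = 97156, 3889156
print((rule_end - rule_start) // 1024)      # MiB growth from rule churn
print((server_end - server_start) // 1024)  # MiB growth from server churn
```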
### Other information
<!-- if you already did more digging into the issue, please provide all the information you gathered -->
[issues/8530: dnsdist: addAction/rmRule consumes incrementally more memory](https://github.com/PowerDNS/pdns/issues/8530)
dnsdist -c -C ./dnsdist.conf < makerule_15000.lua appends log entries to ~/.dnsdist_history; in the test, we cleared the history file every second.
|
defect
|
dnsdist addaction rmrule newserver rmserver leak memory program dnsdist issue type bug report short description we have tested dnsdist as dns load balance we frequently add delete rules or servers and we found that when frequently add delete rules servers will lead to memory leak environment operating system ubuntu lts software version xenial software source powerdns repository compiled myself dnsdist v dnsdist lua enabled features cdb dns over tls openssl dnscrypt ebpf ipcipher libsodium protobuf recvmmsg sendmmsg systemd dnsdist lua enabled features ebpf fstrm ipcipher libsodium protobuf recvmmsg sendmmsg systemd steps to reproduce dnsdist server config listen for console connection with the given secret key controlsocket addconsoleacl setkey xxxx addlocal reuseport true newserver newserver pc newpacketcache getpool setcache pc test rule addaction makerule poolaction server sh and makerule test sh makerule test sh add and delete the same rule times server sh add and delete the same server times server lua add new server and delete it newserver rmserver collectgarbage server sh mem start ps eo rss args grep bin dnsdis awk print for x in seq do dnsdist c c dnsdist conf dev null done mem end ps eo rss args grep bin dnsdis awk print echo start mem start end mem end echo mem end mem start bc server sh mem start ps eo rss args grep dnsdis awk print for x in seq do dnsdist c c dnsdist conf dev null done mem end ps eo rss args grep dnsdis awk print echo start mem start end mem end echo mem end mem start bc makerule test sh generate ip sh create a ip list configuration bin ash count for i in seq do for j in seq do echo i j let count if then exit fi done done bash generate ip sh ip conf dist rule py the arg is a ip list file and will generate a single rule with ips def only make rule rule example addaction makerule poolaction rule template addaction makerule poolaction f open sys argv l f readlines acl rule for i in l acl rule final join acl rule acl rule final acl rule 
template format acl rule final print rule template format format acl rule final print rmrule print collectgarbage only make rule dist rule py ip conf makerule lua mem start ps eo rss args grep bin dnsdis awk print for x in seq do dnsdist c c dnsdist conf dev null done mem end ps eo rss args grep bin dnsdis awk print echo start mem start end mem end echo mem end mem start bc makerule test sh add and delete the same rule x times mem start ps eo rss args grep bin dnsdis awk print for x in seq do dnsdist c c dnsdist conf dev null done mem end ps eo rss args grep bin dnsdis awk print echo start mem start end mem end echo mem end mem start bc add and delete the same rule bash makerule test sh stop dnsdist and restart dnsdist run newserver rmserver two hundred times bash server sh and new rules and delete old rule step add and delete the same rule lead to memory leak and this step will add more ips to a single rule and delete old rule example addaction a poolaction a first rule with old ip set a ips b new rule new ip set b add two ips to a c)delete the first rule expected behaviour add and delete the same rule will not increase memory usage add new rule and delete old rule will increase a little memory usage add and delete the same server will not increase memory usage actual behaviour step add and delete the same rule bash makerule test sh output start end result increased byte memory step add and delete the same server bash server sh output start end result increased byte memory step this also lead to memory leak other information dnsdist c c dnsdist conf makerule lua will add log to dnsdist history in the test we clear the history file per second
| 1
|
67,192
| 20,961,583,350
|
IssuesEvent
|
2022-03-27 21:43:51
|
abedmaatalla/foursquared
|
https://api.github.com/repos/abedmaatalla/foursquared
|
closed
|
Order places list by distance
|
Priority-Medium Type-Defect auto-migrated
|
```
The list of nearby places doesn't seem to follow any particular order.
It would be very useful if the list was ordered by distance (closest to
furthest).
Also, while you're at it, it would also be great if in the same places list there was
some kind of tip indicating the places where we've checked-in before. Like a
small icon on the venue.
```
Original issue reported on code.google.com by `hugo.m.palma` on 21 Sep 2010 at 11:15
|
1.0
|
Order places list by distance - ```
The list of nearby places doesn't seem to follow any particular order.
It would be very useful if the list was ordered by distance (closest to
furthest).
Also, while you're at it, it would also be great if in the same places list there was
some kind of tip indicating the places where we've checked-in before. Like a
small icon on the venue.
```
Original issue reported on code.google.com by `hugo.m.palma` on 21 Sep 2010 at 11:15
|
defect
|
order places list by distance the list of nearby places doesn t seem to follow any particular order it would be very useful if the list was ordered by distance closest to furthest also while your at it would also be great if in the same places list there was some kind of tip indicating the places where we ve checked in before like a small icon on the venue original issue reported on code google com by hugo m palma on sep at
| 1
|
30,241
| 6,051,143,189
|
IssuesEvent
|
2017-06-12 22:56:21
|
googlei18n/noto-fonts
|
https://api.github.com/repos/googlei18n/noto-fonts
|
closed
|
Subscripts (inferiors) missing from Noto Sans and Noto Serif
|
Android FoundIn-1.x Priority-High Type-Defect
|
```
Currently there's a full set of superscript numerals (0–9) but the subscripts
are all missing. Shouldn't be too much of a problem to create them; they just
need to be moved down and kerned.
What version of the product are you using? On what operating system?
Noto Sans 1.04
```
Original issue reported on code.google.com by `eemail...@gmail.com` on 17 Apr 2015 at 6:55
|
1.0
|
Subscripts (inferiors) missing from Noto Sans and Noto Serif - ```
Currently there's a full set of superscript numerals (0–9) but the subscripts
are all missing. Shouldn't be too much of a problem to create them; they just
need to be moved down and kerned.
What version of the product are you using? On what operating system?
Noto Sans 1.04
```
Original issue reported on code.google.com by `eemail...@gmail.com` on 17 Apr 2015 at 6:55
|
defect
|
subscripts inferiors missing from noto sans and noto serif currently there s a full set of superscript numerals – but the subscripts are all missing shouldn t be too much of a problem to create them they just need to be moved down and kerned what version of the product are you using on what operating system noto sans original issue reported on code google com by eemail gmail com on apr at
| 1
|
57,223
| 15,726,946,669
|
IssuesEvent
|
2021-03-29 12:02:09
|
danmar/testissues
|
https://api.github.com/repos/danmar/testissues
|
opened
|
False positive: memory leak when using strcpy (Trac #116)
|
False positive Incomplete Migration Migrated from Trac defect noone
|
Migrated from https://trac.cppcheck.net/ticket/116
```json
{
"status": "closed",
"changetime": "2009-02-24T07:17:36",
"description": "There is a false positive for this code..\n\n{{{\nvoid foo()\n{\n char *p = malloc(100);\n return strcpy(p, \"foo\");\n}\n}}}",
"reporter": "hyd_danmar",
"cc": "",
"resolution": "fixed",
"_ts": "1235459856000000",
"component": "False positive",
"summary": "False positive: memory leak when using strcpy",
"priority": "",
"keywords": "",
"time": "2009-02-23T19:45:56",
"milestone": "1.29",
"owner": "noone",
"type": "defect"
}
```
|
1.0
|
False positive: memory leak when using strcpy (Trac #116) - Migrated from https://trac.cppcheck.net/ticket/116
```json
{
"status": "closed",
"changetime": "2009-02-24T07:17:36",
"description": "There is a false positive for this code..\n\n{{{\nvoid foo()\n{\n char *p = malloc(100);\n return strcpy(p, \"foo\");\n}\n}}}",
"reporter": "hyd_danmar",
"cc": "",
"resolution": "fixed",
"_ts": "1235459856000000",
"component": "False positive",
"summary": "False positive: memory leak when using strcpy",
"priority": "",
"keywords": "",
"time": "2009-02-23T19:45:56",
"milestone": "1.29",
"owner": "noone",
"type": "defect"
}
```
|
defect
|
false positive memory leak when using strcpy trac migrated from json status closed changetime description there is a false positive for this code n n nvoid foo n n char p malloc n return strcpy p foo n n reporter hyd danmar cc resolution fixed ts component false positive summary false positive memory leak when using strcpy priority keywords time milestone owner noone type defect
| 1
|
46,561
| 13,055,934,666
|
IssuesEvent
|
2020-07-30 03:09:56
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
[mue] no sphinx documentation (Trac #1446)
|
Incomplete Migration Migrated from Trac combo reconstruction defect
|
Migrated from https://code.icecube.wisc.edu/ticket/1446
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "Good documentation is now deemed essential.",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[mue] no sphinx documentation",
"priority": "major",
"keywords": "",
"time": "2015-11-24T23:42:27",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
|
1.0
|
[mue] no sphinx documentation (Trac #1446) - Migrated from https://code.icecube.wisc.edu/ticket/1446
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "Good documentation is now deemed essential.",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[mue] no sphinx documentation",
"priority": "major",
"keywords": "",
"time": "2015-11-24T23:42:27",
"milestone": "",
"owner": "dima",
"type": "defect"
}
```
|
defect
|
no sphinx documentation trac migrated from json status closed changetime description good documentation is now deemed essential reporter david schultz cc olivas resolution fixed ts component combo reconstruction summary no sphinx documentation priority major keywords time milestone owner dima type defect
| 1
|
273,519
| 20,795,871,736
|
IssuesEvent
|
2022-03-17 09:15:04
|
nspcc-dev/neofs-spec
|
https://api.github.com/repos/nspcc-dev/neofs-spec
|
closed
|
Describe object split and reconstruction
|
documentation enhancement
|
Need to describe how big objects are split into parts.
|
1.0
|
Describe object split and reconstruction - Need to describe how big objects are split into parts.
|
non_defect
|
describe object split and reconstruction need to describe how big objects are split into parts
| 0
|
26,983
| 4,839,680,540
|
IssuesEvent
|
2016-11-09 10:19:53
|
google/google-authenticator-libpam
|
https://api.github.com/repos/google/google-authenticator-libpam
|
opened
|
Can't Authenticate anymore
|
libpam Priority-Medium Type-Defect
|
_From @ThomasHabets on October 10, 2014 8:6_
Original [issue 134](https://code.google.com/p/google-authenticator/issues/detail?id=134) created by shaiament on 2012-01-16T15:21:59.000Z:
<b>What steps will reproduce the problem?</b>
1.SSH
<b>What is the expected output? What do you see instead?</b>
-bash-3.2$ ssh root@10.0.0.65
Verification code:
Password:
Verification code:
Password:
Verification code:
Password:
root@10.0.0.65's password:
Permission denied, please try again.
root@10.0.0.65's password:
Permission denied, please try again.
root@10.0.0.65's password:
Received disconnect from 10.0.0.65: 2: Too many authentication failures for root
<b>What version of the product are you using? On what operating system?</b>
Latest, Centos 6
<b>Please provide any additional information below.</b>
I Get stuck in this loop. the machine won't authenticate me. i don't even receive error messages when i type. and then it only asks for password and even when the password is correct i get Premission denied.
_Copied from original issue: google/google-authenticator#133_
|
1.0
|
Can't Authenticate anymore - _From @ThomasHabets on October 10, 2014 8:6_
Original [issue 134](https://code.google.com/p/google-authenticator/issues/detail?id=134) created by shaiament on 2012-01-16T15:21:59.000Z:
<b>What steps will reproduce the problem?</b>
1.SSH
<b>What is the expected output? What do you see instead?</b>
-bash-3.2$ ssh root@10.0.0.65
Verification code:
Password:
Verification code:
Password:
Verification code:
Password:
root@10.0.0.65's password:
Permission denied, please try again.
root@10.0.0.65's password:
Permission denied, please try again.
root@10.0.0.65's password:
Received disconnect from 10.0.0.65: 2: Too many authentication failures for root
<b>What version of the product are you using? On what operating system?</b>
Latest, Centos 6
<b>Please provide any additional information below.</b>
I Get stuck in this loop. the machine won't authenticate me. i don't even receive error messages when i type. and then it only asks for password and even when the password is correct i get Premission denied.
_Copied from original issue: google/google-authenticator#133_
|
defect
|
can t authenticate anymore from thomashabets on october original created by shaiament on what steps will reproduce the problem ssh what is the expected output what do you see instead bash ssh root verification code password verification code password verification code password root s password permission denied please try again root s password permission denied please try again root s password received disconnect from too many authentication failures for root what version of the product are you using on what operating system latest centos please provide any additional information below i get stuck in this loop the machine won t authenticate me i don t even receive error messages when i type and then it only asks for password and even when the password is correct i get premission denied copied from original issue google google authenticator
| 1
|
12,977
| 2,732,346,275
|
IssuesEvent
|
2015-04-17 04:48:04
|
rasmus/fast-member
|
https://api.github.com/repos/rasmus/fast-member
|
closed
|
Caching of property and field names
|
auto-migrated Priority-Medium Type-Defect
|
```
I have a clone in which I've added caching of property and field names. Not
sure if this is of any interest, but it's here if you want:
http://code.google.com/r/kearonsean-fast-member/source/detail?r=6488edf35577a7271a39472e72654650b36e627e
```
Original issue reported on code.google.com by `kearon.s...@googlemail.com` on 13 May 2012 at 10:48
|
1.0
|
Caching of property and field names - ```
I have a clone in which I've added caching of property and field names. Not
sure if this is of any interest, but it's here if you want:
http://code.google.com/r/kearonsean-fast-member/source/detail?r=6488edf35577a7271a39472e72654650b36e627e
```
Original issue reported on code.google.com by `kearon.s...@googlemail.com` on 13 May 2012 at 10:48
|
defect
|
caching of property and field names i have a clone in which i ve added caching of property and field names not sure if this is of any interest but it s here if you want original issue reported on code google com by kearon s googlemail com on may at
| 1
|
81,566
| 15,630,079,939
|
IssuesEvent
|
2021-03-22 01:17:42
|
benald/liferay-user-links
|
https://api.github.com/repos/benald/liferay-user-links
|
opened
|
CVE-2021-23337 (High) detected in lodash-4.17.11.tgz
|
security vulnerability
|
## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: /liferay-user-links/package.json</p>
<p>Path to vulnerable library: liferay-user-links/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- karma-4.1.0.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-23337 (High) detected in lodash-4.17.11.tgz - ## CVE-2021-23337 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.11.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.11.tgz</a></p>
<p>Path to dependency file: /liferay-user-links/package.json</p>
<p>Path to vulnerable library: liferay-user-links/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- karma-4.1.0.tgz (Root Library)
- :x: **lodash-4.17.11.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function.
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23337>CVE-2021-23337</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c">https://github.com/lodash/lodash/commit/3469357cff396a26c363f8c1b5a91dde28ba4b1c</a></p>
<p>Release Date: 2021-02-15</p>
<p>Fix Resolution: lodash - 4.17.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file liferay user links package json path to vulnerable library liferay user links node modules lodash package json dependency hierarchy karma tgz root library x lodash tgz vulnerable library vulnerability details lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
328,123
| 28,101,926,164
|
IssuesEvent
|
2023-03-30 20:14:16
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: kv95/enc=false/nodes=4/ssds=8/raid0 failed
|
C-test-failure O-robot O-roachtest branch-master release-blocker
|
roachtest.kv95/enc=false/nodes=4/ssds=8/raid0 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9329887?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9329887?buildTab=artifacts#/kv95/enc=false/nodes=4/ssds=8/raid0) on master @ [1f8024bf14433ca169e5a8c3768c5d223dc5018c](https://github.com/cockroachdb/cockroach/commits/1f8024bf14433ca169e5a8c3768c5d223dc5018c):
```
test artifacts and logs in: /artifacts/kv95/enc=false/nodes=4/ssds=8/raid0/run_1
(cluster.go:1977).Run: output in run_184301.414480869_n5_workload-run-kv-tole: ./workload run kv --tolerate-errors --init --histograms=perf/stats.json --concurrency=256 --splits=1000 --duration=30m0s --read-percent=95 {pgurl:1-4} returned: COMMAND_PROBLEM: ssh verbose log retained in ssh_184302.158616782_n5_workload-run-kv-tole.log: exit status 1
(monitor.go:127).Wait: monitor failure: monitor task failed: t.Fatal() was called
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=8</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=8</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*kv95/enc=false/nodes=4/ssds=8/raid0.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: kv95/enc=false/nodes=4/ssds=8/raid0 failed - roachtest.kv95/enc=false/nodes=4/ssds=8/raid0 [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9329887?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/9329887?buildTab=artifacts#/kv95/enc=false/nodes=4/ssds=8/raid0) on master @ [1f8024bf14433ca169e5a8c3768c5d223dc5018c](https://github.com/cockroachdb/cockroach/commits/1f8024bf14433ca169e5a8c3768c5d223dc5018c):
```
test artifacts and logs in: /artifacts/kv95/enc=false/nodes=4/ssds=8/raid0/run_1
(cluster.go:1977).Run: output in run_184301.414480869_n5_workload-run-kv-tole: ./workload run kv --tolerate-errors --init --histograms=perf/stats.json --concurrency=256 --splits=1000 --duration=30m0s --read-percent=95 {pgurl:1-4} returned: COMMAND_PROBLEM: ssh verbose log retained in ssh_184302.158616782_n5_workload-run-kv-tole.log: exit status 1
(monitor.go:127).Wait: monitor failure: monitor task failed: t.Fatal() was called
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=8</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=true</code>
, <code>ROACHTEST_ssd=8</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/test-eng
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*kv95/enc=false/nodes=4/ssds=8/raid0.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_defect
|
roachtest enc false nodes ssds failed roachtest enc false nodes ssds with on master test artifacts and logs in artifacts enc false nodes ssds run cluster go run output in run workload run kv tole workload run kv tolerate errors init histograms perf stats json concurrency splits duration read percent pgurl returned command problem ssh verbose log retained in ssh workload run kv tole log exit status monitor go wait monitor failure monitor task failed t fatal was called parameters roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd true roachtest ssd help see see cc cockroachdb test eng
| 0
|
51,682
| 13,211,281,323
|
IssuesEvent
|
2020-08-15 22:01:23
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
Steamshovel doesn't compile with python 3.4.2 (Trac #827)
|
Incomplete Migration Migrated from Trac combo reconstruction defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/827">https://code.icecube.wisc.edu/projects/icecube/ticket/827</a>, reported by jpa14and owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-22T20:05:18",
"_ts": "1437595518711454",
"description": "Compiling http://code.icecube.wisc.edu/svn/projects/steamshovel/releases/V14-11-00 I got the following error message:\n{{{\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:\n In static member function \u2018static void * scripting::shovelart\n ::QStringConversion::convertible(PyObject *)\u2019:\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:210:32:\n error: \u2018PyString_Check\u2019 was not declared in this scope\nreturn PyString_Check(obj_ptr) ? obj_ptr : 0;\n^\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:\n In static member function \u2018static void scripting::shovelart\n ::QStringConversion::construct(\n PyObject *, boost::python::converter::rvalue_from_python_stage1_data\n *)\u2019:\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:215:50:\n error: \u2018PyString_AsString\u2019 was not declared in this scope\nconst char * value = PyString_AsString( obj_ptr );\n^\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:\n In static member function \u2018static void * scripting::shovelart\n ::QStringConversion::convertible(PyObject *)\u2019:\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:211:2:\n warning: control reaches end of non-void function [-Wreturn-type]\n}\n^\nsteamshovel/CMakeFiles/shovelart-pybindings.dir/build.make:169: recipe for target 'steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o' failed\n}}}\n\nGoogling the error I found http://permalink.gmane.org/gmane.comp.debugging.sigrok.devel/57 which proposes a patch that should be usable by us:\n{{{#!cpp\n/* re-define some String functions for recent python (>= 3.0) */\n#if PY_VERSION_HEX >= 0x03000000\n#define PyString_AsString PyBytes_AsString\n#define PyString_FromString PyBytes_FromString\n#define PyString_Check PyBytes_Check\n#endif\n}}}\n\nFor my local copy I 
put it in the beginning of steamshovel/private/shovelart/pybindings/Types.cpp and it compiled, but I am not sure that is the best place to put it... or if version checked is the \"good\" value.",
"reporter": "jpa14",
"cc": "david.schultz@icecube.wisc.edu",
"resolution": "fixed",
"time": "2014-12-09T22:34:52",
"component": "combo reconstruction",
"summary": "Steamshovel doesn't compile with python 3.4.2",
"priority": "minor",
"keywords": "python3 steamshovel",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
Steamshovel doesn't compile with python 3.4.2 (Trac #827) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/827">https://code.icecube.wisc.edu/projects/icecube/ticket/827</a>, reported by jpa14and owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-07-22T20:05:18",
"_ts": "1437595518711454",
"description": "Compiling http://code.icecube.wisc.edu/svn/projects/steamshovel/releases/V14-11-00 I got the following error message:\n{{{\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:\n In static member function \u2018static void * scripting::shovelart\n ::QStringConversion::convertible(PyObject *)\u2019:\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:210:32:\n error: \u2018PyString_Check\u2019 was not declared in this scope\nreturn PyString_Check(obj_ptr) ? obj_ptr : 0;\n^\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:\n In static member function \u2018static void scripting::shovelart\n ::QStringConversion::construct(\n PyObject *, boost::python::converter::rvalue_from_python_stage1_data\n *)\u2019:\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:215:50:\n error: \u2018PyString_AsString\u2019 was not declared in this scope\nconst char * value = PyString_AsString( obj_ptr );\n^\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:\n In static member function \u2018static void * scripting::shovelart\n ::QStringConversion::convertible(PyObject *)\u2019:\n/home/jp/IceCube/svn/icerec/steamshovel/private/shovelart/pybindings/Types.cpp:211:2:\n warning: control reaches end of non-void function [-Wreturn-type]\n}\n^\nsteamshovel/CMakeFiles/shovelart-pybindings.dir/build.make:169: recipe for target 'steamshovel/CMakeFiles/shovelart-pybindings.dir/private/shovelart/pybindings/Types.cpp.o' failed\n}}}\n\nGoogling the error I found http://permalink.gmane.org/gmane.comp.debugging.sigrok.devel/57 which proposes a patch that should be usable by us:\n{{{#!cpp\n/* re-define some String functions for recent python (>= 3.0) */\n#if PY_VERSION_HEX >= 0x03000000\n#define PyString_AsString PyBytes_AsString\n#define PyString_FromString PyBytes_FromString\n#define PyString_Check PyBytes_Check\n#endif\n}}}\n\nFor my local copy I 
put it in the beginning of steamshovel/private/shovelart/pybindings/Types.cpp and it compiled, but I am not sure that is the best place to put it... or if version checked is the \"good\" value.",
"reporter": "jpa14",
"cc": "david.schultz@icecube.wisc.edu",
"resolution": "fixed",
"time": "2014-12-09T22:34:52",
"component": "combo reconstruction",
"summary": "Steamshovel doesn't compile with python 3.4.2",
"priority": "minor",
"keywords": "python3 steamshovel",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
|
defect
|
steamshovel doesn t compile with python trac migrated from json status closed changetime ts description compiling i got the following error message n n home jp icecube svn icerec steamshovel private shovelart pybindings types cpp n in static member function void scripting shovelart n qstringconversion convertible pyobject n home jp icecube svn icerec steamshovel private shovelart pybindings types cpp n error check was not declared in this scope nreturn pystring check obj ptr obj ptr n n home jp icecube svn icerec steamshovel private shovelart pybindings types cpp n in static member function void scripting shovelart n qstringconversion construct n pyobject boost python converter rvalue from python data n n home jp icecube svn icerec steamshovel private shovelart pybindings types cpp n error asstring was not declared in this scope nconst char value pystring asstring obj ptr n n home jp icecube svn icerec steamshovel private shovelart pybindings types cpp n in static member function void scripting shovelart n qstringconversion convertible pyobject n home jp icecube svn icerec steamshovel private shovelart pybindings types cpp n warning control reaches end of non void function n n nsteamshovel cmakefiles shovelart pybindings dir build make recipe for target steamshovel cmakefiles shovelart pybindings dir private shovelart pybindings types cpp o failed n n ngoogling the error i found which proposes a patch that should be usable by us n cpp n re define some string functions for recent python n if py version hex n define pystring asstring pybytes asstring n define pystring fromstring pybytes fromstring n define pystring check pybytes check n endif n n nfor my local copy i put it in the beginning of steamshovel private shovelart pybindings types cpp and it compiled but i am not sure that is the best place to put it or if version checked is the good value reporter cc david schultz icecube wisc edu resolution fixed time component combo reconstruction summary steamshovel 
doesn t compile with python priority minor keywords steamshovel milestone owner hdembinski type defect
| 1
|
7,537
| 2,610,404,535
|
IssuesEvent
|
2015-02-26 20:11:29
|
chrsmith/republic-at-war
|
https://api.github.com/repos/chrsmith/republic-at-war
|
opened
|
Republic Field Hospital Icon
|
auto-migrated Priority-Medium Type-Defect
|
```
The Republic Field Hospital has no icon when displayed as a "unit lost" in the
post-game summary.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 5 Jul 2011 at 5:48
|
1.0
|
Republic Field Hospital Icon - ```
The Republic Field Hospital has no icon when displayed as a "unit lost" in the
post-game summary.
```
-----
Original issue reported on code.google.com by `KillerHurdz@netscape.net` on 5 Jul 2011 at 5:48
|
defect
|
republic field hospital icon the republic field hospital has no icon when displayed as a unit lost in the post game summary original issue reported on code google com by killerhurdz netscape net on jul at
| 1
|
41,883
| 10,688,931,991
|
IssuesEvent
|
2019-10-22 19:17:53
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
integrate.solve_bvp does not check for convergence of boundary conditions
|
defect scipy.integrate
|
I’d like to report an issue with `scipy.integrate.solve_bvp`. This is an excellent piece of code but it unfortunately does not check for convergence of boundary conditions. I can demonstrate with the following example code with nonlinear boundary conditions. The code solves Laplace's equation, which is linear and converges in only one iteration.
### Reproducing code example:
```
import numpy as np
from scipy.integrate import solve_bvp
# Parameters
kappa= 1.64
ioA, ioC= 0.01, 1.0e-4
V, UA, UC= 0.5, 0.0, 1.0
f= 38.9
L=0.1
## Governing Equation- Laplace Eq. for potential
def deq(x,y):
return np.stack([y[1], np.zeros_like(x)])
## Boundary Conditions
def bc(uA,uC):
phiA, phipA= uA
phiC, phipC= uC
# Butler-Volmer Kinetics at Anode
hA=0-phiA-0
iA= ioA * ( np.exp(f*hA) - np.exp(-f*hA))
res0= iA + kappa*phipA
# Butler-Volmer Kinetics at Cathode
hC= V - phiC - UC
iC= ioC * (np.exp(f*hC) - np.exp(-f*hC))
res1= iC - kappa*phipC
return np.array([ res0, res1 ] )
# Initial Guess
xinit=np.linspace(0,L)
uinit=np.array([0,0]) [:,None] # a column vector
yinit=uinit * np.ones([len(uinit),len(xinit)])
# Find Solution
sol = solve_bvp(deq, bc, xinit, yinit, verbose=2)
# Calculate Boundary residuals
bc_res = bc(sol.y[:, 0], sol.y[:, -1])
print('Boundary Condition Residuals: ',bc_res)
```
Using scipy v1.2.0, the above code generates the following output:
```
Iteration Max residual Total nodes Nodes added
1 3.39e-14 50 0
Solved in 1 iterations, number of nodes 50, maximum relative residual 3.39e-14.
Boundary Condition Residuals: [ 5.25230866e-02 -2.30274472e+02]
```
After only one iteration, the BC residuals are much greater than the tolerance (the default tolerance, 1e-3).
### Proposed solution
I have added code to check for convergence, modifying lines 1081-1111 of `_bvp.py` as follows:
```
bc_res = bc_wrapped(y[:, 0], y[:, -1], p)
# This relation is not trivial, but can be verified.
r_middle = 1.5 * col_res / h
sol = create_spline(y, f, x, h)
rms_res = estimate_rms_residuals(fun_wrapped, sol, x, h, p,
r_middle, f_middle)
max_rms_res = np.max(rms_res)
max_bc_res = np.max(np.abs(bc_res))
max_res = np.max([max_rms_res, max_bc_res])
if singular:
status = 2
break
insert_1, = np.nonzero((rms_res > tol) & (rms_res < 100 * tol))
insert_2, = np.nonzero(rms_res >= 100 * tol)
nodes_added = insert_1.shape[0] + 2 * insert_2.shape[0]
if m + nodes_added > max_nodes:
status = 1
if verbose == 2:
nodes_added = "({})".format(nodes_added)
print_iteration_progress(iteration, max_res, m,
nodes_added)
break
if verbose == 2:
print_iteration_progress(iteration, max_res, m, nodes_added)
if nodes_added > 0:
x = modify_mesh(x, insert_1, insert_2)
h = np.diff(x)
y = sol(x)
elif max_res <= tol:
status = 0
break
```
This code calculates `max_res` for both the domain and the boundary, and only breaks the parent `while` loop when that `max_res` falls below `tol`.
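As a standalone sketch of the combined stopping rule described above (the helper name and toy numbers are illustrative, not scipy's actual internals):

```python
import numpy as np

def combined_max_residual(rms_res, bc_res):
    # Combine the collocation RMS residuals with the boundary-condition
    # residuals so convergence is declared only when BOTH are small.
    max_rms_res = np.max(rms_res)
    max_bc_res = np.max(np.abs(bc_res))
    return max(max_rms_res, max_bc_res)

# Toy values mirroring the Laplace example: the interior residuals are
# tiny, but the boundary residuals are huge, so the combined criterion
# refuses to stop after the first iteration.
rms = np.array([3.39e-14, 1.2e-14])
bc = np.array([5.25e-2, -2.30e+2])
tol = 1e-3
max_res = combined_max_residual(rms, bc)
print(max_res <= tol)  # False: keep iterating
```

With a domain-only check, `np.max(rms)` alone would fall below the tolerance and the loop would exit prematurely.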
Using this version of `solve_bvp`, the same Laplace Equation code results in multiple iterations to reduce the boundary residuals to less than the tolerance.
```
Iteration Max residual Total nodes Nodes added
1 2.30e+02 50 0
2 2.86e+00 50 0
3 6.18e-03 50 0
4 7.25e-06 50 0
Solved in 4 iterations, number of nodes 50, maximum relative residual 7.25e-06.
Boundary Condition Residuals: [ 2.04326239e-07 -7.25180010e-06]
```
I've attached the modified version of `_bvp.py`, and I'll be glad to make a pull request if needed.
### Scipy/Numpy/Python version information:
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
1.2.0 1.16.1 sys.version_info(major=3, minor=7, micro=2, releaselevel='final', serial=0)
```
[_bvp-scb.py.txt](https://github.com/scipy/scipy/files/2873413/_bvp-scb.py.txt)
|
1.0
|
integrate.solve_bvp does not check for convergence of boundary conditions - I’d like to report an issue with `scipy.integrate.solve_bvp`. This is an excellent piece of code but it unfortunately does not check for convergence of boundary conditions. I can demonstrate with the following example code with nonlinear boundary conditions. The code solves Laplace's equation, which is linear and converges in only one iteration.
### Reproducing code example:
```
import numpy as np
from scipy.integrate import solve_bvp
# Parameters
kappa= 1.64
ioA, ioC= 0.01, 1.0e-4
V, UA, UC= 0.5, 0.0, 1.0
f= 38.9
L=0.1
## Governing Equation- Laplace Eq. for potential
def deq(x,y):
return np.stack([y[1], np.zeros_like(x)])
## Boundary Conditions
def bc(uA,uC):
phiA, phipA= uA
phiC, phipC= uC
# Butler-Volmer Kinetics at Anode
hA=0-phiA-0
iA= ioA * ( np.exp(f*hA) - np.exp(-f*hA))
res0= iA + kappa*phipA
# Butler-Volmer Kinetics at Cathode
hC= V - phiC - UC
iC= ioC * (np.exp(f*hC) - np.exp(-f*hC))
res1= iC - kappa*phipC
return np.array([ res0, res1 ] )
# Initial Guess
xinit=np.linspace(0,L)
uinit=np.array([0,0]) [:,None] # a column vector
yinit=uinit * np.ones([len(uinit),len(xinit)])
# Find Solution
sol = solve_bvp(deq, bc, xinit, yinit, verbose=2)
# Calculate Boundary residuals
bc_res = bc(sol.y[:, 0], sol.y[:, -1])
print('Boundary Condition Residuals: ',bc_res)
```
Using scipy v1.2.0, the above code generates the following output:
```
Iteration Max residual Total nodes Nodes added
1 3.39e-14 50 0
Solved in 1 iterations, number of nodes 50, maximum relative residual 3.39e-14.
Boundary Condition Residuals: [ 5.25230866e-02 -2.30274472e+02]
```
After only one iteration, the BC residuals are much greater than the tolerance (the default tolerance, 1e-3).
### Proposed solution
I have added code to check for convergence, modifying lines 1081-1111 of `_bvp.py` as follows:
```
bc_res = bc_wrapped(y[:, 0], y[:, -1], p)
# This relation is not trivial, but can be verified.
r_middle = 1.5 * col_res / h
sol = create_spline(y, f, x, h)
rms_res = estimate_rms_residuals(fun_wrapped, sol, x, h, p,
r_middle, f_middle)
max_rms_res = np.max(rms_res)
max_bc_res = np.max(np.abs(bc_res))
max_res = np.max([max_rms_res, max_bc_res])
if singular:
status = 2
break
insert_1, = np.nonzero((rms_res > tol) & (rms_res < 100 * tol))
insert_2, = np.nonzero(rms_res >= 100 * tol)
nodes_added = insert_1.shape[0] + 2 * insert_2.shape[0]
if m + nodes_added > max_nodes:
status = 1
if verbose == 2:
nodes_added = "({})".format(nodes_added)
print_iteration_progress(iteration, max_res, m,
nodes_added)
break
if verbose == 2:
print_iteration_progress(iteration, max_res, m, nodes_added)
if nodes_added > 0:
x = modify_mesh(x, insert_1, insert_2)
h = np.diff(x)
y = sol(x)
elif max_res <= tol:
status = 0
break
```
This code calculates `max_res` for both the domain and the boundary, and only breaks the parent `while` loop when that `max_res` falls below `tol`.
Using this version of `solve_bvp`, the same Laplace Equation code results in multiple iterations to reduce the boundary residuals to less than the tolerance.
```
Iteration Max residual Total nodes Nodes added
1 2.30e+02 50 0
2 2.86e+00 50 0
3 6.18e-03 50 0
4 7.25e-06 50 0
Solved in 4 iterations, number of nodes 50, maximum relative residual 7.25e-06.
Boundary Condition Residuals: [ 2.04326239e-07 -7.25180010e-06]
```
I've attached the modified version of `_bvp.py`, and I'll be glad to make a pull request if needed.
### Scipy/Numpy/Python version information:
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
1.2.0 1.16.1 sys.version_info(major=3, minor=7, micro=2, releaselevel='final', serial=0)
```
[_bvp-scb.py.txt](https://github.com/scipy/scipy/files/2873413/_bvp-scb.py.txt)
|
defect
|
integrate solve bvp does not check for convergence of boundary conditions i’d like to report an issue with scipy integrate solve bvp this is an excellent piece of code but it unfortunately does not check for convergence of boundary conditions i can demonstrate with the following example code with nonlinear boundary conditions the code solves laplace s equation which is linear and converges in only one iteration reproducing code example import numpy as np from scipy integrate import solve bvp parameters kappa ioa ioc v ua uc f l governing equation laplace eq for potential def deq x y return np stack np zeros like x boundary conditions def bc ua uc phia phipa ua phic phipc uc butler volmer kinetics at anode ha phia ia ioa np exp f ha np exp f ha ia kappa phipa butler volmer kinetics at cathode hc v phic uc ic ioc np exp f hc np exp f hc ic kappa phipc return np array initial guess xinit np linspace l uinit np array a column vector yinit uinit np ones find solution sol solve bvp deq bc xinit yinit verbose calculate boundary residuals bc res bc sol y sol y print boundary condition residuals bc res using scipy the above code generates the following output iteration max residual total nodes nodes added solved in iterations number of nodes maximum relative residual boundary condition residuals after only one iteration the bc residuals are much greater than the tolerance the default tolerance proposed solution i have added code to check for convergence modifying lines of bvp py as follows bc res bc wrapped y y p this relation is not trivial but can be verified r middle col res h sol create spline y f x h rms res estimate rms residuals fun wrapped sol x h p r middle f middle max rms res np max rms res max bc res np max np abs bc res max res np max if singular status break insert np nonzero rms res tol rms res tol insert np nonzero rms res tol nodes added insert shape insert shape if m nodes added max nodes status if verbose nodes added format nodes added print iteration 
progress iteration max res m nodes added break if verbose print iteration progress iteration max res m nodes added if nodes added x modify mesh x insert insert h np diff x y sol x elif max res tol status break this code calculates max res for both the domain and the boundary and only breaks the parent while loop when that max res falls below tol using this version of solve bvp the same laplace equation code results in multiple iterations to reduce the boundary residuals to less than the tolerance iteration max residual total nodes nodes added solved in iterations number of nodes maximum relative residual boundary condition residuals i ve attached the modified verion of bvp py and i ll be glad to make a pull request if needed scipy numpy python version information import sys scipy numpy print scipy version numpy version sys version info sys version info major minor micro releaselevel final serial
| 1
|
388,212
| 26,754,905,488
|
IssuesEvent
|
2023-01-30 23:03:20
|
o3de/o3de.org
|
https://api.github.com/repos/o3de/o3de.org
|
closed
|
Update docs to replace legacy version CMake vars with new ones
|
needs-triage documentation
|
## Describe the issue briefly
I've updated the O3DE version vars and need to update them in the docs.
For example `LY_VERSION_ENGINE_NAME` is now `O3DE_ENGINE_NAME`
Reference:
https://github.com/o3de/o3de.org/blob/46746a8d745c3819e81f1acc573b1fe54597dd8a/content/docs/user-guide/build/distributable-engine.md?plain=1#L31
## Which page(s) / section(s) are affected?
https://www.o3de.org/docs/user-guide/build/distributable-engine/#example-values
## Does this work have an engineering dependency? What is it?
Related RFC (work in progress) https://github.com/o3de/sig-core/issues/44
|
1.0
|
Update docs to replace legacy version CMake vars with new ones - ## Describe the issue briefly
I've updated the O3DE version vars and need to update them in the docs.
For example `LY_VERSION_ENGINE_NAME` is now `O3DE_ENGINE_NAME`
Reference:
https://github.com/o3de/o3de.org/blob/46746a8d745c3819e81f1acc573b1fe54597dd8a/content/docs/user-guide/build/distributable-engine.md?plain=1#L31
## Which page(s) / section(s) are affected?
https://www.o3de.org/docs/user-guide/build/distributable-engine/#example-values
## Does this work have an engineering dependency? What is it?
Related RFC (work in progress) https://github.com/o3de/sig-core/issues/44
|
non_defect
|
update docs to replace legacy version cmake vars with new ones describe the issue briefly i ve updated the version vars and need to update them in the docs for example ly version engine name is now engine name reference which page s section s are affected does this work have an engineering dependency what is it related rfc work in progress
| 0
|
28,747
| 5,348,389,286
|
IssuesEvent
|
2017-02-18 04:23:27
|
amitdholiya/vqmod
|
https://api.github.com/repos/amitdholiya/vqmod
|
reopened
|
VQMOD won't install.
|
auto-migrated Priority-Medium Type-Defect
|
```
Would you happen to know why VQMOD would say " Parse error: syntax error,
unexpected '/' in ...\vqmod\install\index.php on line 93. "
We're hosted on a windows server and upon installation we get a message UPGRADE
COMPLETE instead of INSTALLED SUCCESSFUL.
```
Original issue reported on code.google.com by `demb....@gmail.com` on 27 Jul 2014 at 5:32
|
1.0
|
VQMOD won't install. - ```
Would you happen to know why VQMOD would say " Parse error: syntax error,
unexpected '/' in ...\vqmod\install\index.php on line 93. "
We're hosted on a windows server and upon installation we get a message UPGRADE
COMPLETE instead of INSTALLED SUCCESSFUL.
```
Original issue reported on code.google.com by `demb....@gmail.com` on 27 Jul 2014 at 5:32
|
defect
|
vqmod won t install would you happen to know why vqmod would say parse error syntax error unexpected in vqmod install index php on line we re hosted on a windows server and upon installation we get a message upgrade complete instead of installed successful original issue reported on code google com by demb gmail com on jul at
| 1
|
83,757
| 15,716,308,826
|
IssuesEvent
|
2021-03-28 06:32:36
|
elastic/kibana
|
https://api.github.com/repos/elastic/kibana
|
opened
|
[Security Solution] Sending Entire Detection Alert JSON to JIRA Service Desk Issue Output Doesn't Work
|
Team: SecuritySolution bug
|
**Describe the bug:**
> Sending Entire Detection Alert JSON to JIRA Service Desk Output Doesn't work
It probably has to do with [JIRA Wiki Rendering](https://jira.atlassian.com/secure/WikiRendererHelpAction.jspa?section=all&error=login_required&error_description=Login+required&state=39dbd052-7639-42b8-a857-82127f2c4be6). I really wish I could send the JSON into a code block in JIRA.
**Kibana/Elasticsearch Stack version:**
7.12.0
**Server OS version:**
Elastic Cloud
**Browser and Browser OS versions:**
Brave,Chrome
```
"subActionParams": {
"comments": [],
"incident": {
"issueType": "10001",
"summary": "{{alertName}}",
"description": "{code:json}\n{{#context.alerts}}{{{.}}}{{/context.alerts}}\n{code}",
"priority": "Highest"
}
```
## It looks like this when it creates a ticket in JIRA.

|
True
|
[Security Solution] Sending Entire Detection Alert JSON to JIRA Service Desk Issue Output Doesn't Work - **Describe the bug:**
> Sending Entire Detection Alert JSON to JIRA Service Desk Output Doesn't work
It probably has to deal with [JIRA Wiki Rendering](https://jira.atlassian.com/secure/WikiRendererHelpAction.jspa?section=all&error=login_required&error_description=Login+required&state=39dbd052-7639-42b8-a857-82127f2c4be6) I really wish I could send the JSON into a code block in JIRA.
**Kibana/Elasticsearch Stack version:**
7.12.0
**Server OS version:**
Elastic Cloud
**Browser and Browser OS versions:**
Brave,Chrome
```
"subActionParams": {
"comments": [],
"incident": {
"issueType": "10001",
"summary": "{{alertName}}",
"description": "{code:json}\n{{#context.alerts}}{{{.}}}{{/context.alerts}}\n{code}",
"priority": "Highest"
}
```
## It looks like this when it creates a ticket in JIRA.

|
non_defect
|
sending entire detection alert json to jira service desk issue output doesn t work describe the bug sending entire detection alert json to jira service desk output doesn t work it probably has to deal with i really wish i could send the json into a code block in jira kibana elasticsearch stack version server os version elastic cloud browser and browser os versions brave chrome subactionparams comments incident issuetype summary alertname description code json n context alerts context alerts n code priority highest it looks like this when it creates a ticket in jira
| 0
|
9,902
| 2,616,008,927
|
IssuesEvent
|
2015-03-02 00:52:42
|
jasonhall/bwapi
|
https://api.github.com/repos/jasonhall/bwapi
|
closed
|
Solution relies on global directories for includes
|
auto-migrated Priority-High Type-Defect
|
```
What steps will reproduce the problem?
Open a fully synced svn copy's .sln
Update solution as needed (I had to upgrade to 2010 format).
Try to build
What is the expected output? What do you see instead?
Expected: dll to build, no compile errors
Actual: Cannot open <file> for tons of different files, various problems
related to use of numbers for an expected enum, etc
e.g. svnrev.h and util/bitmask.h are missing
The svnrev should be created by SubWCRev, but that program fails to start
What version of the product are you using? On what operating system?
svn HEAD on VS2010 (MS internal build) on Windows 7 x64
Please provide any additional information below.
This problem can be alleviated by using a second computer to test changes
before check in. Currently, I believe whoever is making the solution files
has their visual studio settings configured with global directories.
Instead, include paths, libraries, et cetera should all be project-based
and set relatively instead of absolutely.
```
Original issue reported on code.google.com by `taw...@gmail.com` on 7 Dec 2009 at 2:33
|
1.0
|
Solution relies on global directories for includes - ```
What steps will reproduce the problem?
Open a fully synced svn copy's .sln
Update solution as needed (I had to upgrade to 2010 format).
Try to build
What is the expected output? What do you see instead?
Expected: dll to build, no compile errors
Actual: Cannot open <file> for tons of different files, various problems
related to use of numbers for an expected enum, etc
e.g. svnrev.h and util/bitmask.h are missing
The svnrev should be created by SubWCRev, but that program fails to start
What version of the product are you using? On what operating system?
svn HEAD on VS2010 (MS internal build) on Windows 7 x64
Please provide any additional information below.
This problem can be alleviated by using a second computer to test changes
before check in. Currently, I believe whoever is making the solution files
has their visual studio settings configured with global directories.
Instead, include paths, libraries, et cetera should all be project-based
and set relatively instead of absolutely.
```
Original issue reported on code.google.com by `taw...@gmail.com` on 7 Dec 2009 at 2:33
|
defect
|
solution relies on global directories for includes what steps will reproduce the problem open a fully synced svn copy s sln update solution as needed i had to upgrade to format try to build what is the expected output what do you see instead expected dll to build no compile errors actual cannot open for tons of different files various problems related to use of numbers for an expected enum etc e g svnrev h and util bitmask h are missing the svnrev should be created by subwcrev but that program fails to start what version of the product are you using on what operating system svn head on ms internal build on windows please provide any additional information below this problem can be alleviated by using a second computer to test changes before check in currently i believe whoever is making the solution files has their visual studio settings configured with global directories instead include paths libraries et cetera should all be project based and set relatively instead of absolutely original issue reported on code google com by taw gmail com on dec at
| 1
|
46,950
| 13,056,006,134
|
IssuesEvent
|
2020-07-30 03:22:12
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
opened
|
MuonGun produces inconsistent weights between different versions (Trac #2179)
|
Incomplete Migration Migrated from Trac analysis defect
|
Migrated from https://code.icecube.wisc.edu/ticket/2179
```json
{
"status": "closed",
"changetime": "2018-08-03T13:43:03",
"description": "The MuonGun weighting module produces wrong weights for events above 10^6^ GeV. This behaviour starts at rev 159352 and is probably at least related to tickets #2139 and #2157.\nThis can be tested with running the script\n{{{\n/home/mmeier/test_muongun_weighting.py --out_file=/path/to/file.hd5\n}}}\nonce with a combo metaproject with MuonGun rev 159351 and with rev 159352 or later.",
"reporter": "mmeier",
"cc": "",
"resolution": "fixed",
"_ts": "1533303783524855",
"component": "analysis",
"summary": "MuonGun produces inconsistent weights between different versions",
"priority": "normal",
"keywords": "MuonGun",
"time": "2018-08-02T15:15:04",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
|
1.0
|
MuonGun produces inconsistent weights between different versions (Trac #2179) - Migrated from https://code.icecube.wisc.edu/ticket/2179
```json
{
"status": "closed",
"changetime": "2018-08-03T13:43:03",
"description": "The MuonGun weighting module produces wrong weights for events above 10^6^ GeV. This behaviour starts at rev 159352 and is probably at least related to tickets #2139 and #2157.\nThis can be tested with running the script\n{{{\n/home/mmeier/test_muongun_weighting.py --out_file=/path/to/file.hd5\n}}}\nonce with a combo metaproject with MuonGun rev 159351 and with rev 159352 or later.",
"reporter": "mmeier",
"cc": "",
"resolution": "fixed",
"_ts": "1533303783524855",
"component": "analysis",
"summary": "MuonGun produces inconsistent weights between different versions",
"priority": "normal",
"keywords": "MuonGun",
"time": "2018-08-02T15:15:04",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
|
defect
|
muongun produces inconsistent weights between different versions trac migrated from json status closed changetime description the muongun weighting module produces wrong weights for events above gev this behaviour starts at rev and is probably at least related to tickets and nthis can be tested with running the script n n home mmeier test muongun weighting py out file path to file n nonce with a combo metaproject with muongun rev and with rev or later reporter mmeier cc resolution fixed ts component analysis summary muongun produces inconsistent weights between different versions priority normal keywords muongun time milestone owner jvansanten type defect
| 1
|
211,704
| 16,343,056,043
|
IssuesEvent
|
2021-05-13 01:48:16
|
vmasc-odu/Virginia-Philosophy-Reality-Lab
|
https://api.github.com/repos/vmasc-odu/Virginia-Philosophy-Reality-Lab
|
closed
|
Data Full Exhaustive List
|
beta_testing data trolley_problem
|
# Study CheckList
:o: Go for it!
:hourglass: Working on it
:bug: A Bug
✔️ Complete
| Status <br> :o: :hourglass: :bug: :heavy_check_mark: | Assigned To | Qualtrics Code | Study Type | Handle Training | Barrel Training | Moral Control Result | Study Result |
| --- | --- | --- | --- | --- | --- | --- | --- |
| :bug: 🔧 | @KisselPhil | 100000 | Switch | Bridge | Barrel Push | Pass | Kill 1 |
| ✔️ | @KisselPhil | 100002 | Switch | Curve | Barrel Push | Pass | Kill 1 |
| ✔️ | @KisselPhil | 100004 | Switch | Bridge | No Barrel | Pass | Kill 1 |
| ✔️| @KisselPhil | 100006 | Switch | Curve | No Barrel | Pass | Kill 1 |
| ✔️ | @KisselPhil | 100008 | Switch | Bridge | Barrel Push | Fail | Kill 1 |
| :bug:| @KisselPhil | 100010 | Switch | Curve | Barrel Push | Fail | Kill 1 |
| ✔️ | @KisselPhil | 100012 | Switch | Bridge | No Barrel | Fail | Kill 1 |
| :o: | @KisselPhil | 100014 | Switch | Curve | No Barrel | Fail | Kill 1 |
| :o: | @KisselPhil | 100016 | Switch | Bridge | Barrel Push | Pass | Kill 5 |
| :o: | @KisselPhil | 100018 | Switch | Curve | Barrel Push | Pass | Kill 5 |
| :o: | @KisselPhil | 100020 | Switch | Bridge | No Barrel | Pass | Kill 5 |
| ✔️ | @Jshull | 100022 | Switch | Curve | No Barrel | Pass | Kill 5 |
| ✔️ | @Jshull | 100024 | Switch | Bridge | Barrel Push | Fail | Kill 5 |
| ✔️ | @Jshull | 100026 | Switch | Curve | Barrel Push | Fail | Kill 5 |
| :o: | @Jshull | 100028 | Switch | Bridge | No Barrel | Fail | Kill 5 |
| :o: | @Jshull | 100030 | Switch | Curve | No Barrel | Fail | Kill 5 |
| ⭕ | @Jshull | 300001 | Push | Bridge | Barrel Push | Pass | Kill 1 |
| :o: | @Jshull | 300003 | Push | Curve | Barrel Push | Pass | Kill 1 |
| :o: | @Jshull | 300005 | Push | Bridge | No Barrel | Pass | Kill 1 |
| :o: | @Jshull | 300007 | Push | Curve | No Barrel | Pass | Kill 1 |
| :o: | @Jshull | 300009 | Push | Bridge | Barrel Push | Fail | Kill 1 |
| :o: | @Jshull | 300011 | Push | Curve | Barrel Push | Fail | Kill 1 |
|:bug: | @Krechowicz | 300013 | Push | Bridge | No Barrel | Fail | Kill 1 |
|:bug: | @Krechowicz | 300015 | Push | Curve | No Barrel | Fail | Kill 1 |
| 🐛 | @Krechowicz | 300017 | Push | Bridge | Barrel Push | Pass | Kill 5 |
| :o: | @Krechowicz | 300019 | Push | Curve | Barrel Push | Pass | Kill 5 |
| :o: | @Krechowicz | 300021 | Push | Bridge | No Barrel | Pass | Kill 5 |
| :o: | @Krechowicz | 300023 | Push | Curve | No Barrel | Pass | Kill 5 |
| :o: | @Krechowicz | 300025 | Push | Bridge | Barrel Push | Fail | Kill 5 |
| :o: | @Krechowicz | 300027 | Push | Curve | Barrel Push | Fail | Kill 5 |
| :o: | @Krechowicz | 300029 | Push | Bridge | No Barrel | Fail | Kill 5 |
| :o: | @Krechowicz | 300031 | Push | Curve | No Barrel | Fail | Kill 5 |
|
1.0
|
Data Full Exhaustive List - # Study CheckList
:o: Go for it!
:hourglass: Working on it
:bug: A Bug
✔️ Complete
| Status <br> :o: :hourglass: :bug: :heavy_check_mark: | Assigned To | Qualtrics Code | Study Type | Handle Training | Barrel Training | Moral Control Result | Study Result |
| --- | --- | --- | --- | --- | --- | --- | --- |
| :bug: 🔧 | @KisselPhil | 100000 | Switch | Bridge | Barrel Push | Pass | Kill 1 |
| ✔️ | @KisselPhil | 100002 | Switch | Curve | Barrel Push | Pass | Kill 1 |
| ✔️ | @KisselPhil | 100004 | Switch | Bridge | No Barrel | Pass | Kill 1 |
| ✔️| @KisselPhil | 100006 | Switch | Curve | No Barrel | Pass | Kill 1 |
| ✔️ | @KisselPhil | 100008 | Switch | Bridge | Barrel Push | Fail | Kill 1 |
| :bug:| @KisselPhil | 100010 | Switch | Curve | Barrel Push | Fail | Kill 1 |
| ✔️ | @KisselPhil | 100012 | Switch | Bridge | No Barrel | Fail | Kill 1 |
| :o: | @KisselPhil | 100014 | Switch | Curve | No Barrel | Fail | Kill 1 |
| :o: | @KisselPhil | 100016 | Switch | Bridge | Barrel Push | Pass | Kill 5 |
| :o: | @KisselPhil | 100018 | Switch | Curve | Barrel Push | Pass | Kill 5 |
| :o: | @KisselPhil | 100020 | Switch | Bridge | No Barrel | Pass | Kill 5 |
| ✔️ | @Jshull | 100022 | Switch | Curve | No Barrel | Pass | Kill 5 |
| ✔️ | @Jshull | 100024 | Switch | Bridge | Barrel Push | Fail | Kill 5 |
| ✔️ | @Jshull | 100026 | Switch | Curve | Barrel Push | Fail | Kill 5 |
| :o: | @Jshull | 100028 | Switch | Bridge | No Barrel | Fail | Kill 5 |
| :o: | @Jshull | 100030 | Switch | Curve | No Barrel | Fail | Kill 5 |
| ⭕ | @Jshull | 300001 | Push | Bridge | Barrel Push | Pass | Kill 1 |
| :o: | @Jshull | 300003 | Push | Curve | Barrel Push | Pass | Kill 1 |
| :o: | @Jshull | 300005 | Push | Bridge | No Barrel | Pass | Kill 1 |
| :o: | @Jshull | 300007 | Push | Curve | No Barrel | Pass | Kill 1 |
| :o: | @Jshull | 300009 | Push | Bridge | Barrel Push | Fail | Kill 1 |
| :o: | @Jshull | 300011 | Push | Curve | Barrel Push | Fail | Kill 1 |
|:bug: | @Krechowicz | 300013 | Push | Bridge | No Barrel | Fail | Kill 1 |
|:bug: | @Krechowicz | 300015 | Push | Curve | No Barrel | Fail | Kill 1 |
| 🐛 | @Krechowicz | 300017 | Push | Bridge | Barrel Push | Pass | Kill 5 |
| :o: | @Krechowicz | 300019 | Push | Curve | Barrel Push | Pass | Kill 5 |
| :o: | @Krechowicz | 300021 | Push | Bridge | No Barrel | Pass | Kill 5 |
| :o: | @Krechowicz | 300023 | Push | Curve | No Barrel | Pass | Kill 5 |
| :o: | @Krechowicz | 300025 | Push | Bridge | Barrel Push | Fail | Kill 5 |
| :o: | @Krechowicz | 300027 | Push | Curve | Barrel Push | Fail | Kill 5 |
| :o: | @Krechowicz | 300029 | Push | Bridge | No Barrel | Fail | Kill 5 |
| :o: | @Krechowicz | 300031 | Push | Curve | No Barrel | Fail | Kill 5 |
|
non_defect
|
data full exhaustive list study checklist o go for it hourglass working on it bug a bug ✔️ complete status o hourglass bug heavy check mark assigned to qualtrics code study type handle training barrel training moral control result study result bug 🔧 kisselphil switch bridge barrel push pass kill ✔️ kisselphil switch curve barrel push pass kill ✔️ kisselphil switch bridge no barrel pass kill ✔️ kisselphil switch curve no barrel pass kill ✔️ kisselphil switch bridge barrel push fail kill bug kisselphil switch curve barrel push fail kill ✔️ kisselphil switch bridge no barrel fail kill o kisselphil switch curve no barrel fail kill o kisselphil switch bridge barrel push pass kill o kisselphil switch curve barrel push pass kill o kisselphil switch bridge no barrel pass kill ✔️ jshull switch curve no barrel pass kill ✔️ jshull switch bridge barrel push fail kill ✔️ jshull switch curve barrel push fail kill o jshull switch bridge no barrel fail kill o jshull switch curve no barrel fail kill ⭕ jshull push bridge barrel push pass kill o jshull push curve barrel push pass kill o jshull push bridge no barrel pass kill o jshull push curve no barrel pass kill o jshull push bridge barrel push fail kill o jshull push curve barrel push fail kill bug krechowicz push bridge no barrel fail kill bug krechowicz push curve no barrel fail kill 🐛 krechowicz push bridge barrel push pass kill o krechowicz push curve barrel push pass kill o krechowicz push bridge no barrel pass kill o krechowicz push curve no barrel pass kill o krechowicz push bridge barrel push fail kill o krechowicz push curve barrel push fail kill o krechowicz push bridge no barrel fail kill o krechowicz push curve no barrel fail kill
| 0
|
647,554
| 21,110,941,383
|
IssuesEvent
|
2022-04-05 01:26:05
|
nerbGG/Matched
|
https://api.github.com/repos/nerbGG/Matched
|
opened
|
navbar doesnt go all the way to the bottom of the page
|
bug priority
|
investigate why that happens and fix it
|
1.0
|
navbar doesnt go all the way to the bottom of the page - investigate why that happens and fix it
|
non_defect
|
navbar doesnt go all the way to the bottom of the page investigate why that happens and fix it
| 0
|
72,291
| 24,039,106,134
|
IssuesEvent
|
2022-09-15 22:33:39
|
idaholab/moose
|
https://api.github.com/repos/idaholab/moose
|
closed
|
FVConvectionCorrelationInterface does not consider face orientation
|
T: defect P: normal C: Modules/Navier Stokes
|
## Bug Description
This can lead to problems where the object heats the fluid instead of cooling it on certain parts (and vica-versa). This has been reported in:
https://github.com/idaholab/moose/discussions/22070
## Steps to Reproduce
Use the input files provided in:
https://github.com/idaholab/moose/discussions/22070
## Impact
Will allow reliable surface-heat-exchange process modeling.
|
1.0
|
FVConvectionCorrelationInterface does not consider face orientation - ## Bug Description
This can lead to problems where the object heats the fluid instead of cooling it on certain parts (and vica-versa). This has been reported in:
https://github.com/idaholab/moose/discussions/22070
## Steps to Reproduce
Use the input files provided in:
https://github.com/idaholab/moose/discussions/22070
## Impact
Will allow reliable surface-heat-exchange process modeling.
|
defect
|
fvconvectioncorrelationinterface does not consider face orientation bug description this can lead to problems where the object heats the fluid instead of cooling it on certain parts and vica versa this has been reported in steps to reproduce use the input files provided in impact will allow reliable surface heat exchange process modeling
| 1
|
19,635
| 3,228,468,642
|
IssuesEvent
|
2015-10-12 02:36:01
|
vickyg3/social-photos
|
https://api.github.com/repos/vickyg3/social-photos
|
closed
|
Files not transferring on Chrome OS
|
Priority-Medium Status-New Type-Defect
|
Originally reported on Google Code with ID 20
```
What steps will reproduce the problem?
1. Sign into any two services while using a Chrome OS device
2. Attempt to initiate a transfer by drag and drop
3.
What is the expected output? What do you see instead?
Expect a photo transfer to be initiated. See no indication that any operation was attempted.
What operating system and browser are you using? On what operating system?
Chrome OS version 36.0.1985.138
Please provide any additional information below.
```
Reported by `8bbell` on 2014-08-02 21:03:26
|
1.0
|
Files not transferring on Chrome OS - Originally reported on Google Code with ID 20
```
What steps will reproduce the problem?
1. Sign into any two services while using a Chrome OS device
2. Attempt to initiate a transfer by drag and drop
3.
What is the expected output? What do you see instead?
Expect a photo transfer to be initiated. See no indication that any operation was attempted.
What operating system and browser are you using? On what operating system?
Chrome OS version 36.0.1985.138
Please provide any additional information below.
```
Reported by `8bbell` on 2014-08-02 21:03:26
|
defect
|
files not transferring on chrome os originally reported on google code with id what steps will reproduce the problem sign into any two services while using a chrome os device attempt to initiate a transfer by drag and drop what is the expected output what do you see instead expect a photo transfer to be initiated see no indication that any operation was attempted what operating system and browser are you using on what operating system chrome os version please provide any additional information below reported by on
| 1
|
100,277
| 30,664,751,166
|
IssuesEvent
|
2023-07-25 17:24:03
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
opened
|
[WASM][Debugger]Test timed out: DebuggerTests.BreakpointTests.DebuggerTests.BreakpointTests.CreateGoodBreakpoint
|
blocking-clean-ci Known Build Error
|
## Build Information
Build: https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_build/results?buildId=351450
Build error leg or test failing: DebuggerTests.BreakpointTests.DebuggerTests.BreakpointTests.CreateGoodBreakpoint
Pull request: https://github.com/dotnet/runtime/pull/89217
<!-- Error message template -->
## Error Message
Fill the error message using [step by step known issues guidance](https://github.com/dotnet/arcade/blob/main/Documentation/Projects/Build%20Analysis/KnownIssues.md#how-to-fill-out-a-known-issue-error-section).
<!-- Use ErrorMessage for String.Contains matches. Use ErrorPattern for regex matches (single line/no backtracking). Set BuildRetry to `true` to retry builds with this error. Set ExcludeConsoleLog to `true` to skip helix logs analysis. -->
```json
{
"ErrorMessage": "System.Threading.Tasks.TaskCanceledException : Test timed out",
"ErrorPattern": "",
"BuildRetry": false,
"ExcludeConsoleLog": false
}
```
|
1.0
|
[WASM][Debugger]Test timed out: DebuggerTests.BreakpointTests.DebuggerTests.BreakpointTests.CreateGoodBreakpoint - ## Build Information
Build: https://dev.azure.com/dnceng-public/cbb18261-c48f-4abb-8651-8cdcb5474649/_build/results?buildId=351450
Build error leg or test failing: DebuggerTests.BreakpointTests.DebuggerTests.BreakpointTests.CreateGoodBreakpoint
Pull request: https://github.com/dotnet/runtime/pull/89217
<!-- Error message template -->
## Error Message
Fill the error message using [step by step known issues guidance](https://github.com/dotnet/arcade/blob/main/Documentation/Projects/Build%20Analysis/KnownIssues.md#how-to-fill-out-a-known-issue-error-section).
<!-- Use ErrorMessage for String.Contains matches. Use ErrorPattern for regex matches (single line/no backtracking). Set BuildRetry to `true` to retry builds with this error. Set ExcludeConsoleLog to `true` to skip helix logs analysis. -->
```json
{
"ErrorMessage": "System.Threading.Tasks.TaskCanceledException : Test timed out",
"ErrorPattern": "",
"BuildRetry": false,
"ExcludeConsoleLog": false
}
```
|
non_defect
|
test timed out debuggertests breakpointtests debuggertests breakpointtests creategoodbreakpoint build information build build error leg or test failing debuggertests breakpointtests debuggertests breakpointtests creategoodbreakpoint pull request error message fill the error message using json errormessage system threading tasks taskcanceledexception test timed out errorpattern buildretry false excludeconsolelog false
| 0
|
74,845
| 25,355,546,195
|
IssuesEvent
|
2022-11-20 09:43:03
|
cython/cython
|
https://api.github.com/repos/cython/cython
|
closed
|
cgi.escape was removed in Python 3.8
|
defect
|
I just stumbled across this grepping for `cgi.escape` . I am not sure Tempita is public and documented for usage but found a usage causing `AttributeError`. `cgi.escape` was removed in Python 3.8 with https://github.com/python/cpython/pull/7662 . Usage of `html.escape` is recommended.
https://github.com/cython/cython/blob/6e72d84f43fc8f8da4ddc7695ac9f5b434788ef5/Cython/Tempita/_tempita.py#L444
```
Python 3.8.0 (default, Nov 6 2019, 21:49:08)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import Cython.Tempita
>>> Cython.Tempita.HTMLTemplate("a")._repr(1, 2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "Cython/Tempita/_tempita.py", line 489, in Cython.Tempita._tempita.HTMLTemplate._repr
File "Cython/Tempita/_tempita.py", line 446, in Cython.Tempita._tempita.html_quote
AttributeError: module 'cgi' has no attribute 'escape'
```
Thanks for Cython!
|
1.0
|
cgi.escape was removed in Python 3.8 - I just stumbled across this grepping for `cgi.escape` . I am not sure Tempita is public and documented for usage but found a usage causing `AttributeError`. `cgi.escape` was removed in Python 3.8 with https://github.com/python/cpython/pull/7662 . Usage of `html.escape` is recommended.
https://github.com/cython/cython/blob/6e72d84f43fc8f8da4ddc7695ac9f5b434788ef5/Cython/Tempita/_tempita.py#L444
```
Python 3.8.0 (default, Nov 6 2019, 21:49:08)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import Cython.Tempita
>>> Cython.Tempita.HTMLTemplate("a")._repr(1, 2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "Cython/Tempita/_tempita.py", line 489, in Cython.Tempita._tempita.HTMLTemplate._repr
File "Cython/Tempita/_tempita.py", line 446, in Cython.Tempita._tempita.html_quote
AttributeError: module 'cgi' has no attribute 'escape'
```
Thanks for Cython!
|
defect
|
cgi escape was removed in python i just stumbled across this grepping for cgi escape i am not sure tempita is public and documented for usage but found a usage causing attributeerror cgi escape was removed in python with usage of html escape is recommended python default nov anaconda inc on linux type help copyright credits or license for more information import cython tempita cython tempita htmltemplate a repr traceback most recent call last file line in file cython tempita tempita py line in cython tempita tempita htmltemplate repr file cython tempita tempita py line in cython tempita tempita html quote attributeerror module cgi has no attribute escape thanks for cython
| 1
|
28,239
| 5,222,192,091
|
IssuesEvent
|
2017-01-27 06:46:09
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
ClientHeartbeatTest testClientEndpointsDelaySeconds_whenHeartbeatResumed
|
Team: Client Type: Defect
|
```
Error Message
expected:<2> but was:<4>
Stacktrace
java.lang.AssertionError: expected:<2> but was:<4>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at com.hazelcast.client.heartbeat.ClientHeartbeatTest$6.run(ClientHeartbeatTest.java:264)
at com.hazelcast.test.HazelcastTestSupport.assertTrueAllTheTime(HazelcastTestSupport.java:895)
at com.hazelcast.client.heartbeat.ClientHeartbeatTest.testClientEndpointsDelaySeconds_whenHeartbeatResumed(ClientHeartbeatTest.java:261)
Standard Output
Finished Running Test: testInvocation_whenHeartbeatStopped in 7.244 seconds.
```
https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.maintenance-IbmJDK1.8/com.hazelcast$hazelcast-client/167/testReport/junit/com.hazelcast.client.heartbeat/ClientHeartbeatTest/testClientEndpointsDelaySeconds_whenHeartbeatResumed/
|
1.0
|
ClientHeartbeatTest testClientEndpointsDelaySeconds_whenHeartbeatResumed - ```
Error Message
expected:<2> but was:<4>
Stacktrace
java.lang.AssertionError: expected:<2> but was:<4>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at com.hazelcast.client.heartbeat.ClientHeartbeatTest$6.run(ClientHeartbeatTest.java:264)
at com.hazelcast.test.HazelcastTestSupport.assertTrueAllTheTime(HazelcastTestSupport.java:895)
at com.hazelcast.client.heartbeat.ClientHeartbeatTest.testClientEndpointsDelaySeconds_whenHeartbeatResumed(ClientHeartbeatTest.java:261)
Standard Output
Finished Running Test: testInvocation_whenHeartbeatStopped in 7.244 seconds.
```
https://hazelcast-l337.ci.cloudbees.com/job/Hazelcast-3.maintenance-IbmJDK1.8/com.hazelcast$hazelcast-client/167/testReport/junit/com.hazelcast.client.heartbeat/ClientHeartbeatTest/testClientEndpointsDelaySeconds_whenHeartbeatResumed/
|
defect
|
clientheartbeattest testclientendpointsdelayseconds whenheartbeatresumed error message expected but was stacktrace java lang assertionerror expected but was at org junit assert fail assert java at org junit assert failnotequals assert java at org junit assert assertequals assert java at org junit assert assertequals assert java at com hazelcast client heartbeat clientheartbeattest run clientheartbeattest java at com hazelcast test hazelcasttestsupport asserttrueallthetime hazelcasttestsupport java at com hazelcast client heartbeat clientheartbeattest testclientendpointsdelayseconds whenheartbeatresumed clientheartbeattest java standard output finished running test testinvocation whenheartbeatstopped in seconds
| 1
|
66,086
| 19,977,133,767
|
IssuesEvent
|
2022-01-29 09:06:13
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
Inputtextarea: overflows container
|
defect
|
A wide `InputTextArea` will not resize to the proper width when needed : when displayed on a mobile screen for example.
An example is visible from the showcase itself: the last example on the page overflows its container when displayed on a mobile phone in portrait mode:
https://www.primefaces.org/showcase/ui/input/inputTextarea.xhtml?jfwid=ece10
|
1.0
|
Inputtextarea: overflows container - A wide `InputTextArea` will not resize to the proper width when needed : when displayed on a mobile screen for example.
An example is visible from the showcase itself: the last example on the page overflows its container when displayed on a mobile phone in portrait mode:
https://www.primefaces.org/showcase/ui/input/inputTextarea.xhtml?jfwid=ece10
|
defect
|
inputtextarea overflows container a wide inputtextarea will not resize to the proper width when needed when displayed on a mobile screen for example an example is visible from the showcase itself the last example on the page overflows its container when displayed on a mobile phone in portrait mode
| 1
|
41,713
| 10,574,694,954
|
IssuesEvent
|
2019-10-07 14:27:08
|
spacchetti/spago
|
https://api.github.com/repos/spacchetti/spago
|
closed
|
`No matches found when trying to watch the following directories` printed even for empty list of directories
|
UX defect
|
The warning added as part of https://github.com/spacchetti/spago/pull/420 seems to be always printed, even when there's empty list of not-matched directories.
```bash
$ spago run -w
Installation complete.
WARNING: No matches found when trying to watch the following directories:
Build succeeded.
```
This should only be printed when the list of not matched directories is not empty.
|
1.0
|
`No matches found when trying to watch the following directories` printed even for empty list of directories - The warning added as part of https://github.com/spacchetti/spago/pull/420 seems to be always printed, even when there's empty list of not-matched directories.
```bash
$ spago run -w
Installation complete.
WARNING: No matches found when trying to watch the following directories:
Build succeeded.
```
This should only be printed when the list of not matched directories is not empty.
|
defect
|
no matches found when trying to watch the following directories printed even for empty list of directories the warning added as part of seems to be always printed even when there s empty list of not matched directories bash spago run w installation complete warning no matches found when trying to watch the following directories build succeeded this should only be printed when the list of not matched directories is not empty
| 1
|
17,802
| 3,013,047,307
|
IssuesEvent
|
2015-07-29 05:45:54
|
yawlfoundation/yawl
|
https://api.github.com/repos/yawlfoundation/yawl
|
closed
|
Editor - Imported icons not visible, if the editor is started through the installer
|
auto-migrated Priority-Low Type-Defect
|
```
What steps will reproduce the problem?
1. Add an icon (i.e. a png file 24x24 pixels) in the
...\editor\YAWLEditorPlugins\TaskIcons folder
2. Start the editor through the start menu i.e. Start->YAWL4Study -
2.0RC2->YAWL-Editor
What is the expected output?
The icon appears in the icon-list of the editor.
What do you see instead?
The icon does not appear in the icon-list.
Please use labels and text to provide additional information.
The icon import works nicely if the editor is started directly (i.e. by
clicking on YAWLEditor2.0_RC2.jar file in the ..\editor folder)
```
Original issue reported on code.google.com by `petia.wo...@gmail.com` on 3 Aug 2009 at 1:14
|
1.0
|
Editor - Imported icons not visible, if the editor is started through the installer - ```
What steps will reproduce the problem?
1. Add an icon (i.e. a png file 24x24 pixels) in the
...\editor\YAWLEditorPlugins\TaskIcons folder
2. Start the editor through the start menu i.e. Start->YAWL4Study -
2.0RC2->YAWL-Editor
What is the expected output?
The icon appears in the icon-list of the editor.
What do you see instead?
The icon does not appear in the icon-list.
Please use labels and text to provide additional information.
The icon import works nicely if the editor is started directly (i.e. by
clicking on YAWLEditor2.0_RC2.jar file in the ..\editor folder)
```
Original issue reported on code.google.com by `petia.wo...@gmail.com` on 3 Aug 2009 at 1:14
|
defect
|
editor imported icons not visible if the editor is started through the installer what steps will reproduce the problem add an icon i e a png file pixels in the editor yawleditorplugins taskicons folder start the editor through the start menu i e start yawl editor what is the expected output the icon appears in the icon list of the editor what do you see instead the icon does not appear in the icon list please use labels and text to provide additional information the icon import works nicely if the editor is started directly i e by clicking on jar file in the editor folder original issue reported on code google com by petia wo gmail com on aug at
| 1
|
75,697
| 26,002,692,507
|
IssuesEvent
|
2022-12-20 16:31:17
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
reopened
|
FacesWrapper implementations should push the wrapped instance to the constructor
|
:lady_beetle: defect
|
### Describe the bug
Since JSF 2.3 the default constructor of `FacesWrapper` subclasses has been deprecated in order to force implementors to instead use the constructor taking the wrapped instance (and to raise their awareness), so that logically the inherited `getWrapped()` method will be used throughout the implementation instead of the local `wrapped` variable. This will ensure that the correct implementation is returned and correct behavior is performed might the `FacesWrapper` implementation itself being wrapped by yet another `FacesWrapper` implementation further down the chain. Because, when the `FacesWrapper` implementation incorrectly/accidentally uses the local `wrapped` variable instead of the `getWrapped()` method, then that other `FacesWrapper` implementation will basically be completely ignored, hereby breaking the decorator pattern.
For example, the `PrimeExceptionHandler` of 12.0.0 is implemented as follows:
```
private final ExceptionHandler wrapped;
private final Lazy<PrimeConfiguration> config;
@SuppressWarnings("deprecation") // the default constructor is deprecated in JSF 2.3
public PrimeExceptionHandler(ExceptionHandler wrapped) {
this.wrapped = wrapped;
this.config = new Lazy(() -> PrimeApplicationContext.getCurrentInstance(FacesContext.getCurrentInstance()).getConfig());
}
@Override
public ExceptionHandler getWrapped() {
return wrapped;
}
```
This is not entirely correct. It should have been implemented as follows:
```
private final Lazy<PrimeConfiguration> config;
public PrimeExceptionHandler(ExceptionHandler wrapped) {
super(wrapped);
this.config = new Lazy(() -> PrimeApplicationContext.getCurrentInstance(FacesContext.getCurrentInstance()).getConfig());
}
```
And wherever the local `wrapped` variable was referenced within the same class, it has to be replaced by `getWrapped()`.
And all other `FacesWrapper` implementations throughout the PrimeFaces library, if any, should follow the same pattern.
### Reproducer
_No response_
### Expected behavior
_No response_
### PrimeFaces edition
None
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
4.0
### Java version
17
### Browser(s)
_No response_
|
1.0
|
FacesWrapper implementations should push the wrapped instance to the constructor - ### Describe the bug
Since JSF 2.3 the default constructor of `FacesWrapper` subclasses has been deprecated in order to force implementors to instead use the constructor taking the wrapped instance (and to raise their awareness), so that logically the inherited `getWrapped()` method will be used throughout the implementation instead of the local `wrapped` variable. This will ensure that the correct implementation is returned and correct behavior is performed might the `FacesWrapper` implementation itself being wrapped by yet another `FacesWrapper` implementation further down the chain. Because, when the `FacesWrapper` implementation incorrectly/accidentally uses the local `wrapped` variable instead of the `getWrapped()` method, then that other `FacesWrapper` implementation will basically be completely ignored, hereby breaking the decorator pattern.
For example, the `PrimeExceptionHandler` of 12.0.0 is implemented as follows:
```
private final ExceptionHandler wrapped;
private final Lazy<PrimeConfiguration> config;
@SuppressWarnings("deprecation") // the default constructor is deprecated in JSF 2.3
public PrimeExceptionHandler(ExceptionHandler wrapped) {
this.wrapped = wrapped;
this.config = new Lazy(() -> PrimeApplicationContext.getCurrentInstance(FacesContext.getCurrentInstance()).getConfig());
}
@Override
public ExceptionHandler getWrapped() {
return wrapped;
}
```
This is not entirely correct. It should have been implemented as follows:
```
private final Lazy<PrimeConfiguration> config;
public PrimeExceptionHandler(ExceptionHandler wrapped) {
super(wrapped);
this.config = new Lazy(() -> PrimeApplicationContext.getCurrentInstance(FacesContext.getCurrentInstance()).getConfig());
}
```
And wherever the local `wrapped` variable was referenced within the same class, it has to be replaced by `getWrapped()`.
And all other `FacesWrapper` implementations throughout the PrimeFaces library, if any, should follow the same pattern.
### Reproducer
_No response_
### Expected behavior
_No response_
### PrimeFaces edition
None
### PrimeFaces version
12.0.0
### Theme
_No response_
### JSF implementation
Mojarra
### JSF version
4.0
### Java version
17
### Browser(s)
_No response_
|
defect
|
faceswrapper implementations should push the wrapped instance to the constructor describe the bug since jsf the default constructor of faceswrapper subclasses has been deprecated in order to force implementors to instead use the constructor taking the wrapped instance and to raise their awareness so that logically the inherited getwrapped method will be used throughout the implementation instead of the local wrapped variable this will ensure that the correct implementation is returned and correct behavior is performed might the faceswrapper implementation itself being wrapped by yet another faceswrapper implementation further down the chain because when the faceswrapper implementation incorrectly accidentally uses the local wrapped variable instead of the getwrapped method then that other faceswrapper implementation will basically be completely ignored hereby breaking the decorator pattern for example the primeexceptionhandler of is implemented as follows private final exceptionhandler wrapped private final lazy config suppresswarnings deprecation the default constructor is deprecated in jsf public primeexceptionhandler exceptionhandler wrapped this wrapped wrapped this config new lazy primeapplicationcontext getcurrentinstance facescontext getcurrentinstance getconfig override public exceptionhandler getwrapped return wrapped this is not entirely correct it should have been implemented as follows private final lazy config public primeexceptionhandler exceptionhandler wrapped super wrapped this config new lazy primeapplicationcontext getcurrentinstance facescontext getcurrentinstance getconfig and wherever the local wrapped variable was referenced within the same class it has to be replaced by getwrapped and all other faceswrapper implementations throughout the primefaces library if any should follow the same pattern reproducer no response expected behavior no response primefaces edition none primefaces version theme no response jsf implementation mojarra jsf version java version browser s no response
| 1
|
794,903
| 28,054,206,474
|
IssuesEvent
|
2023-03-29 08:18:11
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.linkedin.com - site is not usable
|
priority-critical browser-fenix engine-gecko
|
<!-- @browser: Firefox Mobile 112.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:109.0) Gecko/112.0 Firefox/112.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/120181 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.linkedin.com/feed?trk=p_mwlite_me_notifications-primary_nav
**Browser / Version**: Firefox Mobile 112.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
LinkedIn feed just loads the first item and nothing else. Have no special configuration.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/3/54a69627-c6f3-4fee-bcd1-66692f3a56bd.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230326180212</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2023/3/c2bd58ec-2c67-4354-b342-39df93919967)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.linkedin.com - site is not usable - <!-- @browser: Firefox Mobile 112.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 11; Mobile; rv:109.0) Gecko/112.0 Firefox/112.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/120181 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://www.linkedin.com/feed?trk=p_mwlite_me_notifications-primary_nav
**Browser / Version**: Firefox Mobile 112.0
**Operating System**: Android 11
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
LinkedIn feed just loads the first item and nothing else. Have no special configuration.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2023/3/54a69627-c6f3-4fee-bcd1-66692f3a56bd.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20230326180212</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2023/3/c2bd58ec-2c67-4354-b342-39df93919967)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_defect
|
site is not usable url browser version firefox mobile operating system android tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce linkedin feed just loads the first item and nothing else have no special configuration view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
37,964
| 18,857,307,292
|
IssuesEvent
|
2021-11-12 08:26:25
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
opened
|
Algorithm HmacPBESHA256 not available
|
created via performance template
|
<!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a performance problem, then fill out the template below.
Please read our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Hi
I'm getting Error while Make Apk file
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:signReleaseBundle'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
> Failed to read key key from store "C:\Users\ankit\Downloads\App2\vs\Source File V_2.0.1\FlyWeb_Flutter\android\app\key.jks": Integrity check failed: java.security.NoSuchAlgorithmException: Algorithm HmacPBESHA256 not available
<!--
1. Please tell us exactly how to reproduce the problem you are running into.
2. Please attach a small application (ideally just one main.dart file) that
reproduces the problem. You could use https://gist.github.com/ for this.
3. Switch flutter to master channel and run this app on a physical device
using profile mode with Skia tracing enabled, as follows:
flutter channel master
flutter run --profile --trace-skia
The bleeding edge master channel is encouraged here because Flutter is
constantly fixing bugs and improving its performance. Your problem in an
older Flutter version may have already been solved in the master channel.
4. Record a video of the performance issue using another phone so we
can have an intuitive understanding of what happened. Don’t use
"adb screenrecord", as that affects the performance of the profile run.
5. Open Observatory and save a timeline trace of the performance issue
so we know which functions might be causing it. See "How to Collect
and Read Timeline Traces" on this blog post:
https://medium.com/flutter/profiling-flutter-applications-using-the-timeline-a1a434964af3#a499
Make sure the performance overlay is turned OFF when recording the
trace as that may affect the performance of the profile run.
(Pressing ‘P’ on the command line toggles the overlay.)
-->
<!--
Please tell us which target platform(s) the problem occurs (Android / iOS / Web / macOS / Linux / Windows)
Which target OS version, for Web, browser, is the test system running?
Does the problem occur on emulator/simulator as well as on physical devices?
-->
**Target Platform:**
**Target OS version/browser:**
**Devices:**
## Logs
<details>
<summary>Logs</summary>
<!--
Run `flutter analyze` and attach any output of that command below.
If there are any analysis errors, try resolving them before filing this issue.
-->
```
```
<!-- Finally, paste the output of running `flutter doctor -v` here, with your device plugged in. -->
```
```
</details>
|
True
|
Algorithm HmacPBESHA256 not available - <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a performance problem, then fill out the template below.
Please read our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Hi
I'm getting Error while Make Apk file
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:signReleaseBundle'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
> Failed to read key key from store "C:\Users\ankit\Downloads\App2\vs\Source File V_2.0.1\FlyWeb_Flutter\android\app\key.jks": Integrity check failed: java.security.NoSuchAlgorithmException: Algorithm HmacPBESHA256 not available
<!--
1. Please tell us exactly how to reproduce the problem you are running into.
2. Please attach a small application (ideally just one main.dart file) that
reproduces the problem. You could use https://gist.github.com/ for this.
3. Switch flutter to master channel and run this app on a physical device
using profile mode with Skia tracing enabled, as follows:
flutter channel master
flutter run --profile --trace-skia
The bleeding edge master channel is encouraged here because Flutter is
constantly fixing bugs and improving its performance. Your problem in an
older Flutter version may have already been solved in the master channel.
4. Record a video of the performance issue using another phone so we
can have an intuitive understanding of what happened. Don’t use
"adb screenrecord", as that affects the performance of the profile run.
5. Open Observatory and save a timeline trace of the performance issue
so we know which functions might be causing it. See "How to Collect
and Read Timeline Traces" on this blog post:
https://medium.com/flutter/profiling-flutter-applications-using-the-timeline-a1a434964af3#a499
Make sure the performance overlay is turned OFF when recording the
trace as that may affect the performance of the profile run.
(Pressing ‘P’ on the command line toggles the overlay.)
-->
<!--
Please tell us which target platform(s) the problem occurs (Android / iOS / Web / macOS / Linux / Windows)
Which target OS version, for Web, browser, is the test system running?
Does the problem occur on emulator/simulator as well as on physical devices?
-->
**Target Platform:**
**Target OS version/browser:**
**Devices:**
## Logs
<details>
<summary>Logs</summary>
<!--
Run `flutter analyze` and attach any output of that command below.
If there are any analysis errors, try resolving them before filing this issue.
-->
```
```
<!-- Finally, paste the output of running `flutter doctor -v` here, with your device plugged in. -->
```
```
</details>
|
non_defect
|
algorithm not available thank you for using flutter if you are looking for support please check out our documentation or consider asking a question on stack overflow if you have found a performance problem then fill out the template below please read our guide to filing a bug first hi i m getting error while make apk file failure build failed with an exception what went wrong execution failed for task app signreleasebundle a failure occurred while executing com android build gradle internal tasks workers actionfacade failed to read key key from store c users ankit downloads vs source file v flyweb flutter android app key jks integrity check failed java security nosuchalgorithmexception algorithm not available please tell us exactly how to reproduce the problem you are running into please attach a small application ideally just one main dart file that reproduces the problem you could use for this switch flutter to master channel and run this app on a physical device using profile mode with skia tracing enabled as follows flutter channel master flutter run profile trace skia the bleeding edge master channel is encouraged here because flutter is constantly fixing bugs and improving its performance your problem in an older flutter version may have already been solved in the master channel record a video of the performance issue using another phone so we can have an intuitive understanding of what happened don’t use adb screenrecord as that affects the performance of the profile run open observatory and save a timeline trace of the performance issue so we know which functions might be causing it see how to collect and read timeline traces on this blog post make sure the performance overlay is turned off when recording the trace as that may affect the performance of the profile run pressing ‘p’ on the command line toggles the overlay please tell us which target platform s the problem occurs android ios web macos linux windows which target os version for web browser is 
the test system running does the problem occur on emulator simulator as well as on physical devices target platform target os version browser devices logs logs run flutter analyze and attach any output of that command below if there are any analysis errors try resolving them before filing this issue
| 0
|
36,290
| 7,878,647,218
|
IssuesEvent
|
2018-06-26 10:55:37
|
primefaces/primereact
|
https://api.github.com/repos/primefaces/primereact
|
closed
|
Calendar dateFormat day name or month name gives error
|
defect
|
**I'm submitting a ...** (check one with "x")
```
[x] bug report
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57
```
**Plunkr Case (Bug Reports)**
https://plnkr.co/edit/ahFrADebHoh1Etj5I48j?p=preview
created by [ihudson](https://forum.primefaces.org/memberlist.php?mode=viewprofile&u=130731)
**Current behavior**
There is an error:
Chrome: TypeError: Cannot read property 'monthNamesShort' of undefined
Firefox: TypeError: this.locale is undefined
**Expected behavior**
Show the date containing the day name and/or month name
**Minimal reproduction of the problem with instructions**
```
<Calendar dateFormat={'dd M yy'}
value={new Date()} />
```
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
* **React version:**
16.2.0
* **PrimeReact version:**
1.6.2
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
all
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
|
1.0
|
Calendar dateFormat day name or month name gives error - **I'm submitting a ...** (check one with "x")
```
[x] bug report
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57
```
**Plunkr Case (Bug Reports)**
https://plnkr.co/edit/ahFrADebHoh1Etj5I48j?p=preview
created by [ihudson](https://forum.primefaces.org/memberlist.php?mode=viewprofile&u=130731)
**Current behavior**
There is an error:
Chrome: TypeError: Cannot read property 'monthNamesShort' of undefined
Firefox: TypeError: this.locale is undefined
**Expected behavior**
Show the date containing the day name and/or month name
**Minimal reproduction of the problem with instructions**
```
<Calendar dateFormat={'dd M yy'}
value={new Date()} />
```
**Please tell us about your environment:**
<!-- Operating system, IDE, package manager, HTTP server, ... -->
* **React version:**
16.2.0
* **PrimeReact version:**
1.6.2
* **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
all
* **Language:** [all | TypeScript X.X | ES6/7 | ES5]
|
defect
|
calendar dateformat day name or month name gives error i m submitting a check one with x bug report feature request support request please do not submit support request here instead see plunkr case bug reports created by current behavior there is an error chrome typeerror cannot read property monthnamesshort of undefined firefox typeerror this locale is undefined expected behavior show the date containing the day name and or month name minimal reproduction of the problem with instructions calendar dateformat dd m yy value new date please tell us about your environment react version primereact version browser all language
| 1
|
147,873
| 23,287,074,339
|
IssuesEvent
|
2022-08-05 17:44:28
|
rhmdnd/compserv
|
https://api.github.com/repos/rhmdnd/compserv
|
closed
|
What should the schema look like for results?
|
help wanted question design database
|
This project doesn't have a formal process for enhancements, so let's use an issue.
The goal of this issue is to understand what we need to write to persistent storage, and come up with a schema. Once we have agreement, we can port those schemas to the applicable open issues.
- [x] Define result schema
- [x] Define profile schema
- [x] Define a control schema
- [x] Define a catalog schema
- [x] Define an assessment schema
- [x] Define subject schema
- [x] Update subject schema to allow for hierarchical relationships
- [x] Define metadata schema
- [x] Determine how we want to store dates
- [x] Determine a consistent format for identifiers
## Terminology
`result` is the outcome of a compliance scan. It includes information like the outcome of the scan, the control the rule may be associated to, along with general information about the rule.
`subject` is the target of a scan, or the piece of infrastructure that was assessed.
`profile` is the baseline that results or rules refer to (e.g., NIST 800-53).
|
1.0
|
What should the schema look like for results? - This project doesn't have a formal process for enhancements, so let's use an issue.
The goal of this issue is to understand what we need to write to persistent storage, and come up with a schema. Once we have agreement, we can port those schemas to the applicable open issues.
- [x] Define result schema
- [x] Define profile schema
- [x] Define a control schema
- [x] Define a catalog schema
- [x] Define an assessment schema
- [x] Define subject schema
- [x] Update subject schema to allow for hierarchical relationships
- [x] Define metadata schema
- [x] Determine how we want to store dates
- [x] Determine a consistent format for identifiers
## Terminology
`result` is the outcome of a compliance scan. It includes information like the outcome of the scan, the control the rule may be associated to, along with general information about the rule.
`subject` is the target of a scan, or the piece of infrastructure that was assessed.
`profile` is the baseline that results or rules refer to (e.g., NIST 800-53).
|
non_defect
|
what should the schema look like for results this project doesn t have a formal process for enhancements so let s use an issue the goal of this issue is to understand what we need to write to persistent storage and come up with a schema once we have agreement we can port those schemas to the applicable open issues define result schema define profile schema define a control schema define a catalog schema define an assessment schema define subject schema update subject schema to allow for hierarchical relationships define metadata schema determine how we want to store dates determine a consistent format for identifiers terminology result is the outcome of a compliance scan it includes information like the outcome of the scan the control the rule may be associated to along with general information about the rule subject is the target of a scan or the piece of infrastructure that was assessed profile is the baseline that results or rules refer to e g nist
| 0
|
64,536
| 18,727,583,962
|
IssuesEvent
|
2021-11-03 17:51:17
|
SAP/fundamental-ngx
|
https://api.github.com/repos/SAP/fundamental-ngx
|
closed
|
Platform Table filtering bug
|
bug platform Defect Hunting ariba High TBD table planned
|
#### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
Filtering: Only 'and' condition is applied when searching in multiple columns: - open an issue

#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
latest
reported in #6701
|
1.0
|
Platform Table filtering bug - #### Is this a bug, enhancement, or feature request?
bug
#### Briefly describe your proposal.
Filtering: Only 'and' condition is applied when searching in multiple columns: - open an issue

#### Which versions of Angular and Fundamental Library for Angular are affected? (If this is a feature request, use current version.)
latest
reported in #6701
|
defect
|
platform table filtering bug is this a bug enhancement or feature request bug briefly describe your proposal filtering only and condition is applied when searching in multiple columns open an issue which versions of angular and fundamental library for angular are affected if this is a feature request use current version latest reported in
| 1
|
33,883
| 7,293,450,626
|
IssuesEvent
|
2018-02-25 14:24:54
|
STEllAR-GROUP/phylanx
|
https://api.github.com/repos/STEllAR-GROUP/phylanx
|
opened
|
Python frontend TODOs
|
submodule: frontend type: defect type: feature request
|
This ticket is meant for collecting missing features, feature requests etc. for the Python frontend (`@Phylanx`). Please add more, if needed.
- [ ] extracting a single double result from a Phylanx expression invocation shouldn't require `[0]` indexing, generally, return types should be handled dynamically
- [ ] all symbols generated from `@Phylanx` should have symbol information
- [ ] allow calling a `@Phylanx` function from within another one
- [ ] add a way to extract the undecorated PhySL expression from the `@Phylanx` decorator (symbol information removed), useful for debugging
|
1.0
|
Python frontend TODOs - This ticket is meant for collecting missing features, feature requests etc. for the Python frontend (`@Phylanx`). Please add more, if needed.
- [ ] extracting a single double result from a Phylanx expression invocation shouldn't require `[0]` indexing, generally, return types should be handled dynamically
- [ ] all symbols generated from `@Phylanx` should have symbol information
- [ ] allow calling a `@Phylanx` function from within another one
- [ ] add a way to extract the undecorated PhySL expression from the `@Phylanx` decorator (symbol information removed), useful for debugging
|
defect
|
python frontend todos this ticket is meant for collecting missing features feature requests etc for the python frontend phylanx please add more if needed extracting a single double result from a phylanx expression invocation shouldn t require indexing generally return types should be handled dynamically all symbols generated from phylanx should have symbol information allow calling a phylanx function from within another one add a way to extract the undecorated physl expression from the phylanx decorator symbol information removed useful for debugging
| 1
|
237,434
| 7,760,181,157
|
IssuesEvent
|
2018-06-01 04:16:50
|
TMats/survey
|
https://api.github.com/repos/TMats/survey
|
opened
|
DARLA: Improving Zero-Shot Transfer in Reinforcement Learning
|
Priority: High RL
|
https://arxiv.org/abs/1707.08475
- Irina Higgins, Arka Pal, Andrei A. Rusu, Loic Matthey, Christopher P Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, Alexander Lerchner
- Submitted on 26 Jul 2017
|
1.0
|
DARLA: Improving Zero-Shot Transfer in Reinforcement Learning - https://arxiv.org/abs/1707.08475
- Irina Higgins, Arka Pal, Andrei A. Rusu, Loic Matthey, Christopher P Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, Alexander Lerchner
- Submitted on 26 Jul 2017
|
non_defect
|
darla improving zero shot transfer in reinforcement learning irina higgins arka pal andrei a rusu loic matthey christopher p burgess alexander pritzel matthew botvinick charles blundell alexander lerchner submitted on jul
| 0
|
467,979
| 13,459,353,789
|
IssuesEvent
|
2020-09-09 12:08:18
|
unoplatform/uno
|
https://api.github.com/repos/unoplatform/uno
|
closed
|
Wasm: Clipping does not support wider than control itself
|
area/android area/ios area/wasm kind/bug kind/regression priority/backlog
|
## I'm submitting a...
- Bug report
## Current behavior
If you set a `Clip` rectangle on a `Grid` which is larger than the actual size, children are still clipped.
## Expected behavior
Controls can "overflow"
## Minimal reproduction of the problem with instructions
```xaml
<Grid Width="150" Height="150" BorderBrush="Red" BorderThickness="1">
<Grid.Clip>
<RectangleGeometry Rect="0,0,200,200" />
</Grid.Clip>
<ToggleButton
Width="100"
Height="100"
Content="Hello world">
<ToggleButton.RenderTransform>
<TranslateTransform X="100" Y="100"/>
</ToggleButton.RenderTransform>
</ToggleButton>
</Grid>
```
Expected result:

## Environment
```
Nuget Package: Uno.UI
Package Version(s): current master
Affected platform(s):
- [?] iOS
- [x] Android
- [?] WebAssembly
- [ ] Windows
- [ ] Build tasks
Visual Studio: _irrelevant_
Relevant plugins: _none_
```
|
1.0
|
Wasm: Clipping does not support wider than control itself - ## I'm submitting a...
- Bug report
## Current behavior
If you set a `Clip` rectangle on a `Grid` which is larger than the actual size, children are still clipped.
## Expected behavior
Controls can "overflow"
## Minimal reproduction of the problem with instructions
```xaml
<Grid Width="150" Height="150" BorderBrush="Red" BorderThickness="1">
<Grid.Clip>
<RectangleGeometry Rect="0,0,200,200" />
</Grid.Clip>
<ToggleButton
Width="100"
Height="100"
Content="Hello world">
<ToggleButton.RenderTransform>
<TranslateTransform X="100" Y="100"/>
</ToggleButton.RenderTransform>
</ToggleButton>
</Grid>
```
Expected result:

## Environment
```
Nuget Package: Uno.UI
Package Version(s): current master
Affected platform(s):
- [?] iOS
- [x] Android
- [?] WebAssembly
- [ ] Windows
- [ ] Build tasks
Visual Studio: _irrelevant_
Relevant plugins: _none_
```
|
non_defect
|
wasm clipping does not support wider than control itself i m submitting a bug report current behavior if you set a clip rectangle on a grid which is larger than the actual size children are still clipped expected behavior controls can overflow minimal reproduction of the problem with instructions xaml togglebutton width height content hello world expected result environment nuget package uno ui package version s current master affected platform s ios android webassembly windows build tasks visual studio irrelevant relevant plugins none
| 0
|