Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
58,395 | 16,523,302,549 | IssuesEvent | 2021-05-26 16:47:13 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | opened | InputMask: Disabled or ReadOnly breaks the page | defect | **Environment:**
- PF Version: _10.0_
- JSF + version: ALL
- Affected browsers: ALL
**To Reproduce**
[pf-inputmask.zip](https://github.com/primefaces/primefaces/files/6548359/pf-inputmask.zip)
Steps to reproduce the behavior:
1. Run the reproducer the menu and save buttons do not work and no JavaScript errors are throw.
2. Set `readonly="false"` and `disabled="false"` the page begins to work again.
**Expected behavior**
The page should work properly with `readonly` or `disabled` InputMasks.
**Example XHTML**
```html
<h:form id="frmTest">
<p:inputMask value="#{testView.string1}" mask="*****-*****-*****-*****" maxlength="23" size="55" readonly="true"/>
</h:form>
```
| 1.0 | InputMask: Disabled or ReadOnly breaks the page - **Environment:**
- PF Version: _10.0_
- JSF + version: ALL
- Affected browsers: ALL
**To Reproduce**
[pf-inputmask.zip](https://github.com/primefaces/primefaces/files/6548359/pf-inputmask.zip)
Steps to reproduce the behavior:
1. Run the reproducer the menu and save buttons do not work and no JavaScript errors are throw.
2. Set `readonly="false"` and `disabled="false"` the page begins to work again.
**Expected behavior**
The page should work properly with `readonly` or `disabled` InputMasks.
**Example XHTML**
```html
<h:form id="frmTest">
<p:inputMask value="#{testView.string1}" mask="*****-*****-*****-*****" maxlength="23" size="55" readonly="true"/>
</h:form>
```
| defect | inputmask disabled or readonly breaks the page environment pf version jsf version all affected browsers all to reproduce steps to reproduce the behavior run the reproducer the menu and save buttons do not work and no javascript errors are throw set readonly false and disabled false the page begins to work again expected behavior the page should work properly with readonly or disabled inputmasks example xhtml html | 1 |
79,386 | 28,142,123,600 | IssuesEvent | 2023-04-02 03:20:47 | FreeRADIUS/freeradius-server | https://api.github.com/repos/FreeRADIUS/freeradius-server | closed | [defect]: Python3 module still crashes freeradius | defect | ### What type of defect/bug is this?
Crash or memory corruption (segv, abort, etc...)
### How can the issue be reproduced?
see #4951 , debian bullseye (stable)
### Log output from the FreeRADIUS daemon
```shell
see #4951
```
### Relevant log output from client utilities
Does not apply, server unable to start
### Backtrace from LLDB or GDB
_No response_ | 1.0 | [defect]: Python3 module still crashes freeradius - ### What type of defect/bug is this?
Crash or memory corruption (segv, abort, etc...)
### How can the issue be reproduced?
see #4951 , debian bullseye (stable)
### Log output from the FreeRADIUS daemon
```shell
see #4951
```
### Relevant log output from client utilities
Does not apply, server unable to start
### Backtrace from LLDB or GDB
_No response_ | defect | module still crashes freeradius what type of defect bug is this crash or memory corruption segv abort etc how can the issue be reproduced see debian bullseye stable log output from the freeradius daemon shell see relevant log output from client utilities does not apply server unable to start backtrace from lldb or gdb no response | 1 |
52,969 | 13,249,541,341 | IssuesEvent | 2020-08-19 21:00:00 | ophrescue/RescueRails | https://api.github.com/repos/ophrescue/RescueRails | opened | AdopterSearcher not finding some applications | Defect | I think this bug was introduced in #1649
Users reported issues when searching for Adoption Applications, from initial testing it seems that Running and AdopterSearch will only return Unassigned applications.
Need to confirm bug, write tests to replicate and apply fix. | 1.0 | AdopterSearcher not finding some applications - I think this bug was introduced in #1649
Users reported issues when searching for Adoption Applications, from initial testing it seems that Running and AdopterSearch will only return Unassigned applications.
Need to confirm bug, write tests to replicate and apply fix. | defect | adoptersearcher not finding some applications i think this bug was introduced in users reported issues when searching for adoption applications from initial testing it seems that running and adoptersearch will only return unassigned applications need to confirm bug write tests to replicate and apply fix | 1 |
43,837 | 17,687,790,561 | IssuesEvent | 2021-08-24 05:42:52 | ambanum/test-repo | https://api.github.com/repos/ambanum/test-repo | opened | Add OnlyFans - Privacy Policy | add-document add-service |
New service addition requested through the contribution tool
You can see the work done by the awesome contributor here:
http://localhost:3000/en/contribute/service?documentType=Privacy%20Policy&expertMode=true&name=OnlyFans&selectedCss[]=.b-static-content&step=2&url=https%3A%2F%2Fonlyfans.com%2Fprivacy&expertMode=true
Or you can see the JSON generated here:
```json
{
"name": "OnlyFans",
"documents": {
"Privacy Policy": {
"fetch": "https://onlyfans.com/privacy",
"select": [
".b-static-content"
]
}
}
}
```
You will need to create the following file in the root of the project: `services/OnlyFans.json`
| 1.0 | Add OnlyFans - Privacy Policy -
New service addition requested through the contribution tool
You can see the work done by the awesome contributor here:
http://localhost:3000/en/contribute/service?documentType=Privacy%20Policy&expertMode=true&name=OnlyFans&selectedCss[]=.b-static-content&step=2&url=https%3A%2F%2Fonlyfans.com%2Fprivacy&expertMode=true
Or you can see the JSON generated here:
```json
{
"name": "OnlyFans",
"documents": {
"Privacy Policy": {
"fetch": "https://onlyfans.com/privacy",
"select": [
".b-static-content"
]
}
}
}
```
You will need to create the following file in the root of the project: `services/OnlyFans.json`
| non_defect | add onlyfans privacy policy new service addition requested through the contribution tool you can see the work done by the awesome contributor here b static content step url https com expertmode true or you can see the json generated here json name onlyfans documents privacy policy fetch select b static content you will need to create the following file in the root of the project services onlyfans json | 0 |
297,094 | 9,160,450,558 | IssuesEvent | 2019-03-01 07:27:44 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID :190974]Integer handling issues in /subsys/net/ip/trickle.c | Coverity area: Networking bug priority: medium | Static code scan issues seen in File: /subsys/net/ip/trickle.c
Category: Integer handling issues
Function: net_trickle_create
Component: Networking
CID: 190974
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | 1.0 | [Coverity CID :190974]Integer handling issues in /subsys/net/ip/trickle.c - Static code scan issues seen in File: /subsys/net/ip/trickle.c
Category: Integer handling issues
Function: net_trickle_create
Component: Networking
CID: 190974
Please fix or provide comments to square it off in coverity in the link: https://scan9.coverity.com/reports.htm#v32951/p12996 | non_defect | integer handling issues in subsys net ip trickle c static code scan issues seen in file subsys net ip trickle c category integer handling issues function net trickle create component networking cid please fix or provide comments to square it off in coverity in the link | 0 |
57,991 | 16,244,885,584 | IssuesEvent | 2021-05-07 13:43:49 | Questie/Questie | https://api.github.com/repos/Questie/Questie | opened | Aurel Goldleaf (8331) | Type - Defect | Misplaced in the Journey feature.
Aurel Goldleaf (8331)
Appears as available in the Journey, even tho it's completed. (breadcrumb)
Placed in Available section (Silithus)
Should be placed in Completed section (Silithus)
https://classic.wowhead.com/quest=8331/aurel-goldleaf | 1.0 | Aurel Goldleaf (8331) - Misplaced in the Journey feature.
Aurel Goldleaf (8331)
Appears as available in the Journey, even tho it's completed. (breadcrumb)
Placed in Available section (Silithus)
Should be placed in Completed section (Silithus)
https://classic.wowhead.com/quest=8331/aurel-goldleaf | defect | aurel goldleaf misplaced in the journey feature aurel goldleaf appears as available in the journey even tho it s completed breadcrumb placed in available section silithus should be placed in completed section silithus | 1 |
33,676 | 7,196,344,093 | IssuesEvent | 2018-02-05 02:13:53 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | closed | Convert.ToChar fails when converting from object | defect in-progress | Assigning a char to object and then invoking Convert.ToChar on it throws an exception.
### Steps To Reproduce
https://deck.net/712252a7e05755c5a602d767c3ee9328
```csharp
public class Program
{
public static void Main()
{
object a = 'a';
Console.WriteLine(Convert.ToChar(a));
}
}
```
### Expected Result
```js
a
```
### Actual Result
```js
System.Exception: Uncaught System.InvalidCastException: Invalid cast from 'Object' to 'Char'.
Error: Invalid cast from 'Object' to 'Char'.
at ctor (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:88149)
at new ctor (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:93335)
at Object.throwInvalidCastEx (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:325127)
at Object.toChar (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:305220)
at Function.Main (https://deck.net/RunHandler.ashx?h=-2003126879:10:73)
at https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:39626
at HTMLDocument.i (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:5182)
at i (https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js:2:27151)
at Object.add [as done] (https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js:2:27450)
at n.fn.init.n.fn.ready (https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js:2:29515)
```
| 1.0 | Convert.ToChar fails when converting from object - Assigning a char to object and then invoking Convert.ToChar on it throws an exception.
### Steps To Reproduce
https://deck.net/712252a7e05755c5a602d767c3ee9328
```csharp
public class Program
{
public static void Main()
{
object a = 'a';
Console.WriteLine(Convert.ToChar(a));
}
}
```
### Expected Result
```js
a
```
### Actual Result
```js
System.Exception: Uncaught System.InvalidCastException: Invalid cast from 'Object' to 'Char'.
Error: Invalid cast from 'Object' to 'Char'.
at ctor (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:88149)
at new ctor (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:93335)
at Object.throwInvalidCastEx (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:325127)
at Object.toChar (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:305220)
at Function.Main (https://deck.net/RunHandler.ashx?h=-2003126879:10:73)
at https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:39626
at HTMLDocument.i (https://deck.net/resources/js/bridge/bridge.min.js?16.7.0:7:5182)
at i (https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js:2:27151)
at Object.add [as done] (https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js:2:27450)
at n.fn.init.n.fn.ready (https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js:2:29515)
```
| defect | convert tochar fails when converting from object assigning a char to object and then invoking convert tochar on it throws an exception steps to reproduce csharp public class program public static void main object a a console writeline convert tochar a expected result js a actual result js system exception uncaught system invalidcastexception invalid cast from object to char error invalid cast from object to char at ctor at new ctor at object throwinvalidcastex at object tochar at function main at at htmldocument i at i at object add at n fn init n fn ready | 1 |
27,250 | 4,952,141,212 | IssuesEvent | 2016-12-01 10:57:36 | icatproject/ijp.torque | https://api.github.com/repos/icatproject/ijp.torque | closed | Walltime limit needs increasing | Priority-Medium Type-Defect | ```
I had a number of quincy jobs running simultaneously and they were running very slowly.
Returning to check on them later I found that Torque had killed them off, providing
the error message:
PBS: job killed: walltime 3610 exceeded limit 3600
```
Original issue reported on code.google.com by `kevin.phipps13` on 2013-05-15 15:55:31
| 1.0 | Walltime limit needs increasing - ```
I had a number of quincy jobs running simultaneously and they were running very slowly.
Returning to check on them later I found that Torque had killed them off, providing
the error message:
PBS: job killed: walltime 3610 exceeded limit 3600
```
Original issue reported on code.google.com by `kevin.phipps13` on 2013-05-15 15:55:31
| defect | walltime limit needs increasing i had a number of quincy jobs running simultaneously and they were running very slowly returning to check on them later i found that torque had killed them off providing the error message pbs job killed walltime exceeded limit original issue reported on code google com by kevin on | 1 |
35,187 | 12,321,114,799 | IssuesEvent | 2020-05-13 08:11:36 | tamirverthim/fitbit-api-example-java | https://api.github.com/repos/tamirverthim/fitbit-api-example-java | opened | CVE-2019-12814 (Medium) detected in jackson-databind-2.8.1.jar | security vulnerability | ## CVE-2019-12814 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/fitbit-api-example-java/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/tamirverthim/fitbit-api-example-java/commits/1d4a86820b5ccc9e51b82198be488c68e9299e40">1d4a86820b5ccc9e51b82198be488c68e9299e40</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.
<p>Publish Date: 2019-06-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12814>CVE-2019-12814</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2341">https://github.com/FasterXML/jackson-databind/issues/2341</a></p>
<p>Release Date: 2019-06-19</p>
<p>Fix Resolution: 2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.1","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0"}],"vulnerabilityIdentifier":"CVE-2019-12814","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12814","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-12814 (Medium) detected in jackson-databind-2.8.1.jar - ## CVE-2019-12814 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/fitbit-api-example-java/pom.xml</p>
<p>Path to vulnerable library: /root/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.1/jackson-databind-2.8.1.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library)
- :x: **jackson-databind-2.8.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://api.github.com/repos/tamirverthim/fitbit-api-example-java/commits/1d4a86820b5ccc9e51b82198be488c68e9299e40">1d4a86820b5ccc9e51b82198be488c68e9299e40</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.
<p>Publish Date: 2019-06-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12814>CVE-2019-12814</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2341">https://github.com/FasterXML/jackson-databind/issues/2341</a></p>
<p>Release Date: 2019-06-19</p>
<p>Fix Resolution: 2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.8.1","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.8.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0"}],"vulnerabilityIdentifier":"CVE-2019-12814","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12814","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_defect | cve medium detected in jackson databind jar cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm fitbit api example java pom xml path to vulnerable library root repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring boot starter web release jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has jdom x or x jar in the classpath an attacker can send a specifically crafted json message that allows them to read arbitrary local files on the server publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind x through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has jdom x or x jar in the classpath an attacker can send a specifically crafted json message that allows them to read arbitrary local files on the server vulnerabilityurl | 0 |
95,831 | 10,889,679,208 | IssuesEvent | 2019-11-18 18:46:34 | openequella/openEQUELLA | https://api.github.com/repos/openequella/openEQUELLA | closed | Unable to upload File | Documentation bug question | Unable to upload a file
Steps to reproduce the behavior:
1. Go to 'Contribute'
2. Click on 'Learning Resources'
3. Select 'Files, URLs, YoutTube'
4. Follow the process and Try to upload a file from your system.
**Expected behavior**
The file should be upload under the learning resource.
**Screenshots**


**Stacktrace**
`Blocked loading mixed active content “http://equella.dev.eummena.org/FG/api/content/ajax/access/ru…4311e8ad6e&pages.pg=0&event__=p0c11_dialog_fuh.uploadCommand`
**Platform:**
- OpenEquella Version: [Installer built from the dev branch manually]
- OS: [Ubuntu 16.04]
- Browser [Firefox]
The problem according to me is the **http** URL where the file upload request is directing to. I am running **SSL** but the file upload call is directing to **http** instead of **https**. | 1.0 | Unable to upload File - Unable to upload a file
Steps to reproduce the behavior:
1. Go to 'Contribute'
2. Click on 'Learning Resources'
3. Select 'Files, URLs, YoutTube'
4. Follow the process and Try to upload a file from your system.
**Expected behavior**
The file should be upload under the learning resource.
**Screenshots**


**Stacktrace**
`Blocked loading mixed active content “http://equella.dev.eummena.org/FG/api/content/ajax/access/ru…4311e8ad6e&pages.pg=0&event__=p0c11_dialog_fuh.uploadCommand`
**Platform:**
- OpenEquella Version: [Installer built from the dev branch manually]
- OS: [Ubuntu 16.04]
- Browser [Firefox]
The problem according to me is the **http** URL where the file upload request is directing to. I am running **SSL** but the file upload call is directing to **http** instead of **https**. | non_defect | unable to upload file unable to upload a file steps to reproduce the behavior go to contribute click on learning resources select files urls youttube follow the process and try to upload a file from your system expected behavior the file should be upload under the learning resource screenshots stacktrace blocked loading mixed active content “ platform openequella version os browser the problem according to me is the http url where the file upload request is directing to i am running ssl but the file upload call is directing to http instead of https | 0 |
38,934 | 6,714,950,390 | IssuesEvent | 2017-10-13 19:00:12 | blackbaud/skyux2 | https://api.github.com/repos/blackbaud/skyux2 | closed | Add `Sky-` class prefix requirement to contributing guidelines | documentation | All classes, directives, services, components, etc. should be prefixed with `Sky-`. | 1.0 | Add `Sky-` class prefix requirement to contributing guidelines - All classes, directives, services, components, etc. should be prefixed with `Sky-`. | non_defect | add sky class prefix requirement to contributing guidelines all classes directives services components etc should be prefixed with sky | 0 |
140,110 | 5,397,183,078 | IssuesEvent | 2017-02-27 14:02:04 | blipinsk/cortado | https://api.github.com/repos/blipinsk/cortado | opened | Add simplified names for methods without args | enhancement low-priority | Introduce simpler names, so the method chain sounds better. E.g.:
`isClickable()` -> `clickable()`
`isFocusable()` -> `focusable()`
`hasFocus()` -> `focused()`
... | 1.0 | Add simplified names for methods without args - Introduce simpler names, so the method chain sounds better. E.g.:
`isClickable()` -> `clickable()`
`isFocusable()` -> `focusable()`
`hasFocus()` -> `focused()`
... | non_defect | add simplified names for methods without args introduce simpler names so the method chain sounds better e g isclickable clickable isfocusable focusable hasfocus focused | 0 |
766,868 | 26,902,392,367 | IssuesEvent | 2023-02-06 16:29:36 | noctuelles/42-ft_transcendance | https://api.github.com/repos/noctuelles/42-ft_transcendance | closed | Le match ne quitte pas si un user se delog en plein match | bug medium priority front back | Reproduction
Jouer un match contre qq, se delog en plein match avec le bouton delog (pas fermer la fenetre !) | 1.0 | Le match ne quitte pas si un user se delog en plein match - Reproduction
Jouer un match contre qq, se delog en plein match avec le bouton delog (pas fermer la fenetre !) | non_defect | le match ne quitte pas si un user se delog en plein match reproduction jouer un match contre qq se delog en plein match avec le bouton delog pas fermer la fenetre | 0 |
564 | 2,886,770,357 | IssuesEvent | 2015-06-12 10:42:19 | xproc/specification | https://api.github.com/repos/xproc/specification | closed | 2.7.1 Syntax: allow <p:pipe step="name"/> | must requirement | XProc 1.0 offers relatively few default behaviors, requiring instead that pipelines specify every construct fully. User experience has demonstrated that this leads to very verbose pipelines and has been a constant source of complaint. XProc 2.0 will introduce a variety of syntactic simplifications as an aid to readability and usability, including but not limited to:
`<p:pipe step="name"/>` binds to the primary output port of the step named “name”.
| 1.0 | 2.7.1 Syntax: allow <p:pipe step="name"/> - XProc 1.0 offers relatively few default behaviors, requiring instead that pipelines specify every construct fully. User experience has demonstrated that this leads to very verbose pipelines and has been a constant source of complaint. XProc 2.0 will introduce a variety of syntactic simplifications as an aid to readability and usability, including but not limited to:
`<p:pipe step="name"/>` binds to the primary output port of the step named “name”.
| non_defect | syntax allow xproc offers relatively few default behaviors requiring instead that pipelines specify every construct fully user experience has demonstrated that this leads to very verbose pipelines and has been a constant source of complaint xproc will introduce a variety of syntactic simplifications as an aid to readability and usability including but not limited to binds to the primary output port of the step named “name” | 0 |
225,550 | 7,488,328,953 | IssuesEvent | 2018-04-06 00:32:32 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: Game requires login | High Priority | **Version:** 0.7.1.2 beta
**Steps to Reproduce:**
1. Launch the game from steam
**Expected behavior:**
Brought me to the main menu
**Actual behavior:**
requires me to login with my Strange Loop account. After repeatedly closing and reopening the game eventually it will show the main menu. Probably 7 out of 8 times it asks for login rather than display the menu. I also registered an account just in case and it would not recognize my credentials, but that's separate. | 1.0 | USER ISSUE: Game requires login - **Version:** 0.7.1.2 beta
**Steps to Reproduce:**
1. Launch the game from steam
**Expected behavior:**
Brought me to the main menu
**Actual behavior:**
requires me to login with my Strange Loop account. After repeatedly closing and reopening the game eventually it will show the main menu. Probably 7 out of 8 times it asks for login rather than display the menu. I also registered an account just in case and it would not recognize my credentials, but that's separate. | non_defect | user issue game requires login version beta steps to reproduce launch the game from steam expected behavior brought me to the main menu actual behavior requires me to login with my strange loop account after repeatedly closing and reopening the game eventually it will show the main menu probably out of times it asks for login rather than display the menu i also registered an account just in case and it would not recognize my credentials but that s separate | 0 |
15,967 | 2,870,247,961 | IssuesEvent | 2015-06-07 00:30:26 | pdelia/away3d | https://api.github.com/repos/pdelia/away3d | closed | events onMouseOver and onMouseOut don't works without looping view.render() | auto-migrated Priority-Medium Type-Defect | #15 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:06Z
```
What steps will reproduce the problem?
1. create Plane
_model = new Plane({material:faceMaterial, back:backMaterial, width:440,
height:820, segmentsW:1, segmentsH:1, yUp:true});
_model.ownCanvas = true;
_model.bothsides = true;
container.addChild( _model );
2. add event listeners
_model.addOnMouseOver(onMouseOver);
_model.addOnMouseOut(onMouseOut);
_model.addOnMouseDown(onMouseDown);
3. And now, if I have looped rendering then all ok, but if I don't have
looped rendering, then onMouseDown working fine, but onMouseOver and
onMouseOut are don't working. For repair this I need to execute
view.render() all the time during I moving the mouse...
What is the expected output? What do you see instead?
I expect events onMouseOver and onMouseOut without looping view.render()
What version of the product are you using? On what operating system?
rev. 593
Please provide any additional information below.
```
Original issue reported on code.google.com by `nauro...@gmail.com` on 23 Jul 2008 at 4:10 | 1.0 | events onMouseOver and onMouseOut don't works without looping view.render() - #15 Issue by __GoogleCodeExporter__, created on: 2015-04-24T07:51:06Z
```
What steps will reproduce the problem?
1. create Plane
_model = new Plane({material:faceMaterial, back:backMaterial, width:440,
height:820, segmentsW:1, segmentsH:1, yUp:true});
_model.ownCanvas = true;
_model.bothsides = true;
container.addChild( _model );
2. add event listeners
_model.addOnMouseOver(onMouseOver);
_model.addOnMouseOut(onMouseOut);
_model.addOnMouseDown(onMouseDown);
3. And now, if I have looped rendering then all ok, but if I don't have
looped rendering, then onMouseDown working fine, but onMouseOver and
onMouseOut are don't working. For repair this I need to execute
view.render() all the time during I moving the mouse...
What is the expected output? What do you see instead?
I expect events onMouseOver and onMouseOut without looping view.render()
What version of the product are you using? On what operating system?
rev. 593
Please provide any additional information below.
```
Original issue reported on code.google.com by `nauro...@gmail.com` on 23 Jul 2008 at 4:10 | defect | events onmouseover and onmouseout don t works without looping view render issue by googlecodeexporter created on what steps will reproduce the problem create plane model new plane material facematerial back backmaterial width height segmentsw segmentsh yup true model owncanvas true model bothsides true container addchild model add event listeners model addonmouseover onmouseover model addonmouseout onmouseout model addonmousedown onmousedown and now if i have looped rendering then all ok but if i don t have looped rendering then onmousedown working fine but onmouseover and onmouseout are don t working for repair this i need to execute view render all the time during i moving the mouse what is the expected output what do you see instead i expect events onmouseover and onmouseout without looping view render what version of the product are you using on what operating system rev please provide any additional information below original issue reported on code google com by nauro gmail com on jul at | 1 |
41,008 | 10,264,413,915 | IssuesEvent | 2019-08-22 16:18:25 | STEllAR-GROUP/phylanx | https://api.github.com/repos/STEllAR-GROUP/phylanx | closed | Phylanx failing build on Fedora | type: compatibility issue type: defect | My docker build is failing since yesterday when I try to run "make tests." I'm using the latest hpx, blaze, blaze-tensor, and pybind11, gcc 9.1.1, boost 1.69.0.
```
Configuration is:
RUN git clone https://github.com/STEllAR-GROUP/hpx.git && \
mkdir -p /hpx/build && \
cd /hpx/build && \
cmake -DCMAKE_BUILD_TYPE=Debug \
-DHPX_WITH_MALLOC=system \
-DHPX_WITH_MORE_THAN_64_THREADS=ON \
-DHPX_WITH_MAX_CPU_COUNT=80 \
-DHPX_WITH_EXAMPLES=Off \
.. && \
make -j ${CPUS} install
```
--Steve
```
[ 15%] Building CXX object src/CMakeFiles/phylanx_component.dir/execution_tree/primitives/phytype.cpp.o
[ 15%] Building CXX object src/CMakeFiles/phylanx_component.dir/execution_tree/primitives/primitive_component.cpp.o
[ 15%] Building CXX object src/CMakeFiles/phylanx_component.dir/execution_tree/primitives/primitive_component_base.cpp.o
In file included from /blaze_tensor/blaze_tensor/math/DenseArray.h:120,
from /blaze_tensor/blaze_tensor/math/CustomArray.h:48,
from /blaze_tensor/blaze_tensor/Math.h:48,
from /home/jovyan/phylanx/phylanx/ir/node_data.hpp:28,
from /home/jovyan/phylanx/phylanx/ast/node.hpp:12,
from /home/jovyan/phylanx/phylanx/execution_tree/primitives/base_primitive.hpp:12,
from /home/jovyan/phylanx/src/execution_tree/primitives/node_data_helpers2d.cpp:8:
/blaze_tensor/blaze_tensor/math/dense/DenseArray.h:55:10: fatal error: blaze/util/DecltypeAuto.h: No such file or directory
55 | #include <blaze/util/DecltypeAuto.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
In file included from /blaze_tensor/blaze_tensor/math/DenseArray.h:120,
from /blaze_tensor/blaze_tensor/math/CustomArray.h:48,
from /blaze_tensor/blaze_tensor/Math.h:48,
from /home/jovyan/phylanx/phylanx/ir/node_data.hpp:28,
from /home/jovyan/phylanx/phylanx/ast/node.hpp:12,
from /home/jovyan/phylanx/phylanx/ast/detail/is_placeholder.hpp:10,
from /home/jovyan/phylanx/src/ast/detail/is_placeholder.cpp:7:
/blaze_tensor/blaze_tensor/math/dense/DenseArray.h:55:10: fatal error: blaze/util/DecltypeAuto.h: No such file or directory
55 | #include <blaze/util/DecltypeAuto.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
``` | 1.0 | Phylanx failing build on Fedora - My docker build is failing since yesterday when I try to run "make tests." I'm using the latest hpx, blaze, blaze-tensor, and pybind11, gcc 9.1.1, boost 1.69.0.
```
Configuration is:
RUN git clone https://github.com/STEllAR-GROUP/hpx.git && \
mkdir -p /hpx/build && \
cd /hpx/build && \
cmake -DCMAKE_BUILD_TYPE=Debug \
-DHPX_WITH_MALLOC=system \
-DHPX_WITH_MORE_THAN_64_THREADS=ON \
-DHPX_WITH_MAX_CPU_COUNT=80 \
-DHPX_WITH_EXAMPLES=Off \
.. && \
make -j ${CPUS} install
```
--Steve
```
[ 15%] Building CXX object src/CMakeFiles/phylanx_component.dir/execution_tree/primitives/phytype.cpp.o
[ 15%] Building CXX object src/CMakeFiles/phylanx_component.dir/execution_tree/primitives/primitive_component.cpp.o
[ 15%] Building CXX object src/CMakeFiles/phylanx_component.dir/execution_tree/primitives/primitive_component_base.cpp.o
In file included from /blaze_tensor/blaze_tensor/math/DenseArray.h:120,
from /blaze_tensor/blaze_tensor/math/CustomArray.h:48,
from /blaze_tensor/blaze_tensor/Math.h:48,
from /home/jovyan/phylanx/phylanx/ir/node_data.hpp:28,
from /home/jovyan/phylanx/phylanx/ast/node.hpp:12,
from /home/jovyan/phylanx/phylanx/execution_tree/primitives/base_primitive.hpp:12,
from /home/jovyan/phylanx/src/execution_tree/primitives/node_data_helpers2d.cpp:8:
/blaze_tensor/blaze_tensor/math/dense/DenseArray.h:55:10: fatal error: blaze/util/DecltypeAuto.h: No such file or directory
55 | #include <blaze/util/DecltypeAuto.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
In file included from /blaze_tensor/blaze_tensor/math/DenseArray.h:120,
from /blaze_tensor/blaze_tensor/math/CustomArray.h:48,
from /blaze_tensor/blaze_tensor/Math.h:48,
from /home/jovyan/phylanx/phylanx/ir/node_data.hpp:28,
from /home/jovyan/phylanx/phylanx/ast/node.hpp:12,
from /home/jovyan/phylanx/phylanx/ast/detail/is_placeholder.hpp:10,
from /home/jovyan/phylanx/src/ast/detail/is_placeholder.cpp:7:
/blaze_tensor/blaze_tensor/math/dense/DenseArray.h:55:10: fatal error: blaze/util/DecltypeAuto.h: No such file or directory
55 | #include <blaze/util/DecltypeAuto.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
``` | defect | phylanx failing build on fedora my docker build is failing since yesterday when i try to run make tests i m using the latest hpx blaze blaze tensor and gcc boost configuration is run git clone mkdir p hpx build cd hpx build cmake dcmake build type debug dhpx with malloc system dhpx with more than threads on dhpx with max cpu count dhpx with examples off make j cpus install steve building cxx object src cmakefiles phylanx component dir execution tree primitives phytype cpp o building cxx object src cmakefiles phylanx component dir execution tree primitives primitive component cpp o building cxx object src cmakefiles phylanx component dir execution tree primitives primitive component base cpp o in file included from blaze tensor blaze tensor math densearray h from blaze tensor blaze tensor math customarray h from blaze tensor blaze tensor math h from home jovyan phylanx phylanx ir node data hpp from home jovyan phylanx phylanx ast node hpp from home jovyan phylanx phylanx execution tree primitives base primitive hpp from home jovyan phylanx src execution tree primitives node data cpp blaze tensor blaze tensor math dense densearray h fatal error blaze util decltypeauto h no such file or directory include compilation terminated in file included from blaze tensor blaze tensor math densearray h from blaze tensor blaze tensor math customarray h from blaze tensor blaze tensor math h from home jovyan phylanx phylanx ir node data hpp from home jovyan phylanx phylanx ast node hpp from home jovyan phylanx phylanx ast detail is placeholder hpp from home jovyan phylanx src ast detail is placeholder cpp blaze tensor blaze tensor math dense densearray h fatal error blaze util decltypeauto h no such file or directory include | 1 |
47,120 | 13,056,035,568 | IssuesEvent | 2020-07-30 03:27:30 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | physics-timer (Trac #24) | IceTray Migrated from Trac defect | shouldn't display unless there are enough events
Migrated from https://code.icecube.wisc.edu/ticket/24
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "shouldn't display unless there are enough events",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "IceTray",
"summary": "physics-timer",
"priority": "normal",
"keywords": "",
"time": "2007-06-03T16:40:15",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
| 1.0 | physics-timer (Trac #24) - shouldn't display unless there are enough events
Migrated from https://code.icecube.wisc.edu/ticket/24
```json
{
"status": "closed",
"changetime": "2007-11-11T03:51:18",
"description": "shouldn't display unless there are enough events",
"reporter": "troy",
"cc": "",
"resolution": "fixed",
"_ts": "1194753078000000",
"component": "IceTray",
"summary": "physics-timer",
"priority": "normal",
"keywords": "",
"time": "2007-06-03T16:40:15",
"milestone": "",
"owner": "troy",
"type": "defect"
}
```
| defect | physics timer trac shouldn t display unless there are enough events migrated from json status closed changetime description shouldn t display unless there are enough events reporter troy cc resolution fixed ts component icetray summary physics timer priority normal keywords time milestone owner troy type defect | 1 |
61,839 | 17,023,790,210 | IssuesEvent | 2021-07-03 03:52:12 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Layering of railway=tram | Component: mapnik Priority: minor Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 1.17pm, Thursday, 5th April 2012]**
This is related to [http://trac.openstreetmap.org/ticket/4342].
Lines tagged as railway=tram now get drawn over roads, which principally is fine, as it means tram lines running parallel to roads are visible in lower zoom levels as well.
However, proper layering seems to be broken now, as a railway=tram passing under a highway=secondary with bridge=yes and layer=1 is now incorrectly drawn on top of the road.
See example here: [http://www.openstreetmap.org/?lat=49.114926&lon=8.479103&zoom=18&layers=M] | 1.0 | Layering of railway=tram - **[Submitted to the original trac issue database at 1.17pm, Thursday, 5th April 2012]**
This is related to [http://trac.openstreetmap.org/ticket/4342].
Lines tagged as railway=tram now get drawn over roads, which principally is fine, as it means tram lines running parallel to roads are visible in lower zoom levels as well.
However, proper layering seems to be broken now, as a railway=tram passing under a highway=secondary with bridge=yes and layer=1 is now incorrectly drawn on top of the road.
See example here: [http://www.openstreetmap.org/?lat=49.114926&lon=8.479103&zoom=18&layers=M] | defect | layering of railway tram this is related to lines tagged as railway tram now get drawn over roads which principally is fine as it means tram lines running parallel to roads are visible in lower zoom levels as well however proper layering seems to be broken now as a railway tram passing under a highway secondary with bridge yes and layer is now incorrectly drawn on top of the road see example here | 1 |
2,166 | 5,013,340,502 | IssuesEvent | 2016-12-13 14:17:59 | opentrials/opentrials | https://api.github.com/repos/opentrials/opentrials | closed | Add unique constraint to "records.source_url" | 3. In Development API data cleaning Processors | On #532 we found a few records with repeated `source_url` values, and found out that was a bug. We then added a functionality to the `record_remover` processor to remove trials with the same `source_url`. Now we need to make sure that doesn't happen again by adding a unique constraint on that table. After that's done and tested, we can open a new issue to remove the then-useless functionality from `record_remover`.
# Tasks
- [x] Investigate if we still have records with repeated `source_url` in our database.
- [x] If we still have bad data in our DB, find why they were created, fix the bug and run the `record_remover` processor to clean our DB
- [x] Add a unique constraint on `records.source_url`
- [x] Remove the logic on removing records with duplicated `source_url` from the `record_remover` processor, as with the constraint we'll guarantee that this will never happen | 1.0 | Add unique constraint to "records.source_url" - On #532 we found a few records with repeated `source_url` values, and found out that was a bug. We then added a functionality to the `record_remover` processor to remove trials with the same `source_url`. Now we need to make sure that doesn't happen again by adding a unique constraint on that table. After that's done and tested, we can open a new issue to remove the then-useless functionality from `record_remover`.
# Tasks
- [x] Investigate if we still have records with repeated `source_url` in our database.
- [x] If we still have bad data in our DB, find why they were created, fix the bug and run the `record_remover` processor to clean our DB
- [x] Add a unique constraint on `records.source_url`
- [x] Remove the logic on removing records with duplicated `source_url` from the `record_remover` processor, as with the constraint we'll guarantee that this will never happen | non_defect | add unique constraint to records source url on we found a few records with repeated source url values and found out that was a bug we then added a functionality to the record remover processor to remove trials with the same source url now we need to make sure that doesn t happen again by adding a unique constraint on that table after that s done and tested we can open a new issue to remove the then useless functionality from record remover tasks investigate if we still have records with repeated source url in our database if we still have bad data in our db find why they were created fix the bug and run the record remover processor to clean our db add a unique constraint on records source url remove the logic on removing records with duplicated source url from the record remover processor as with the constraint we ll guarantee that this will never happen | 0 |
29,097 | 8,287,153,418 | IssuesEvent | 2018-09-19 08:00:03 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | error: class "at::TensorImpl" has no member "set_data" | build | Observed the following error while building pytorch [76070fe](https://github.com/pytorch/pytorch/commit/76070fe73c5cce61cb9554990079594f83384629) using python setup.py install on ppc64le
```
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_TensorTransformations.cu.o
/<path>/pytorch/torch/lib/tmp_install/include/ATen/Tensor.h(248): error: class "at::TensorImpl" has no member "set_data"
/<path>/pytorch/torch/lib/tmp_install/include/ATen/Tensor.h(248): error: class "at::TensorImpl" has no member "set_data"
/<path>/pytorch/torch/lib/tmp_install/include/ATen/Tensor.h(248): error: class "at::TensorImpl" has no member "set_data"
/<path>/pytorch/torch/lib/tmp_install/include/ATen/Tensor.h(248): error: class "at::TensorImpl" has no member "set_data"
```
| 1.0 | error: class "at::TensorImpl" has no member "set_data" - Observed the following error while building pytorch [76070fe](https://github.com/pytorch/pytorch/commit/76070fe73c5cce61cb9554990079594f83384629) using python setup.py install on ppc64le
```
[ 64%] Building NVCC (Device) object caffe2/CMakeFiles/caffe2_gpu.dir/__/aten/src/ATen/native/cuda/caffe2_gpu_generated_TensorTransformations.cu.o
/<path>/pytorch/torch/lib/tmp_install/include/ATen/Tensor.h(248): error: class "at::TensorImpl" has no member "set_data"
/<path>/pytorch/torch/lib/tmp_install/include/ATen/Tensor.h(248): error: class "at::TensorImpl" has no member "set_data"
/<path>/pytorch/torch/lib/tmp_install/include/ATen/Tensor.h(248): error: class "at::TensorImpl" has no member "set_data"
/<path>/pytorch/torch/lib/tmp_install/include/ATen/Tensor.h(248): error: class "at::TensorImpl" has no member "set_data"
```
| non_defect | error class at tensorimpl has no member set data observed the following error while building pytorch using python setup py install on building nvcc device object cmakefiles gpu dir aten src aten native cuda gpu generated tensortransformations cu o pytorch torch lib tmp install include aten tensor h error class at tensorimpl has no member set data pytorch torch lib tmp install include aten tensor h error class at tensorimpl has no member set data pytorch torch lib tmp install include aten tensor h error class at tensorimpl has no member set data pytorch torch lib tmp install include aten tensor h error class at tensorimpl has no member set data | 0 |
15,815 | 20,718,350,030 | IssuesEvent | 2022-03-13 01:18:57 | BobsMods/bobsmods | https://api.github.com/repos/BobsMods/bobsmods | closed | Hidden void recipes | enhancement mod compatibility Bob's Metals Chemicals and Intermediates Bob's Revamp | Void recipes should not be marked as hidden. Instead should be marked as `hide_from_player_crafting` .
https://wiki.factorio.com/Prototype/Recipe#hide_from_player_crafting
This will assist helper mods such as FNEI and Helmod. | True | Hidden void recipes - Void recipes should not be marked as hidden. Instead should be marked as `hide_from_player_crafting` .
https://wiki.factorio.com/Prototype/Recipe#hide_from_player_crafting
This will assist helper mods such as FNEI and Helmod. | non_defect | hidden void recipes void recipes should not be marked as hidden instead should be marked as hide from player crafting this will assist helper mods such as fnei and helmod | 0 |
49,446 | 6,026,661,523 | IssuesEvent | 2017-06-08 11:56:02 | LDMW/app | https://api.github.com/repos/LDMW/app | closed | Ensure that feedback form works if only a few questions are answered | bug please-test | As a user, I want to be able to fill out as many questions on the feedback form as I like and not get an internal server error when I only fill in one question so that I trust the credibility of the service and don't feel under pressure to provide more information than I feel comfortable providing
- [x] Users can hit submit after having filled out one or more questions on the feedback page
- [x] Internal error message should not appear
| 1.0 | Ensure that feedback form works if only a few questions are answered - As a user, I want to be able to fill out as many questions on the feedback form as I like and not get an internal server error when I only fill in one question so that I trust the credibility of the service and don't feel under pressure to provide more information than I feel comfortable providing
- [x] Users can hit submit after having filled out one or more questions on the feedback page
- [x] Internal error message should not appear
| non_defect | ensure that feedback form works if only a few questions are answered as a user i want to be able to fill out as many questions on the feedback form as i like and not get an internal server error when i only fill in one question so that i trust the credibility of the service and don t feel under pressure to provide more information than i feel comfortable providing users can hit submit after having filled out one or more questions on the feedback page internal error message should not appear | 0 |
172,848 | 21,055,576,317 | IssuesEvent | 2022-04-01 02:38:31 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | [Security Solution]comma separated process.arg not wraps properly | bug triage_needed Team: SecuritySolution | **Describe the bug**
comma separated process.arg not wraps properly
**Build Details**
```
Version:8.2.0-SNAPSHOT
BUILD 51431
COMMIT a743498436a863e142592cb535b43f44c448851a
```
**Steps**
- Login to Kibana
- Generate some alert data , in our case we create a custom rule for process.name: "cmd.exe" and executed mutiple instance of cmd on windows host
- Click on Alert Flyout
- Observed that comma separated process.arg not wraps properly
**Screen-Shoot**



**Additional Details:**
- **_actual content copied in clipboard: process.args_**: "cmd,/c,rmdir,C:\Users\zeus\AppData\Local\Temp\peazip-tmp\.pztmp\neutral22033117,/s,/q"
- **_filter in of above process.args_**

| True | [Security Solution]comma separated process.arg not wraps properly - **Describe the bug**
comma separated process.arg not wraps properly
**Build Details**
```
Version:8.2.0-SNAPSHOT
BUILD 51431
COMMIT a743498436a863e142592cb535b43f44c448851a
```
**Steps**
- Login to Kibana
- Generate some alert data , in our case we create a custom rule for process.name: "cmd.exe" and executed mutiple instance of cmd on windows host
- Click on Alert Flyout
- Observed that comma separated process.arg not wraps properly
**Screen-Shoot**



**Additional Details:**
- **_actual content copied in clipboard: process.args_**: "cmd,/c,rmdir,C:\Users\zeus\AppData\Local\Temp\peazip-tmp\.pztmp\neutral22033117,/s,/q"
- **_filter in of above process.args_**

| non_defect | comma separated process arg not wraps properly describe the bug comma separated process arg not wraps properly build details version snapshot build commit steps login to kibana generate some alert data in our case we create a custom rule for process name cmd exe and executed mutiple instance of cmd on windows host click on alert flyout observed that comma separated process arg not wraps properly screen shoot additional details actual content copied in clipboard process args cmd c rmdir c users zeus appdata local temp peazip tmp pztmp s q filter in of above process args | 0 |
39,905 | 10,419,680,781 | IssuesEvent | 2019-09-15 18:20:49 | zerotier/ZeroTierOne | https://api.github.com/repos/zerotier/ZeroTierOne | closed | zerotier-containerized build doesn't work with Zerotier version 1.4.2 | bug build problem | If update the version of Zerotier in the current containerized build we will get a container where Zerotier can't be correctly started.
**To Reproduce**
1. Go to https://github.com/zerotier/ZeroTierOne/blob/1.4.2/ext/installfiles/linux/zerotier-containerized/Dockerfile and change "RUN apt-get update && apt-get install -y zerotier-one=1.2.12" -> "RUN apt-get update && apt-get install -y zerotier-one=1.4.2-2"
2. Build docker: docker build . -t zerotier/zerotier-containerized:1.4.2
3. Run docker: docker run -it --cap-add=NET_ADMIN --device=/dev/net/tun zerotier/zerotier-containerized:1.4.2 /bin/sh
4. Run inside the docker:
```
zerotier-cli
/bin/sh: zerotier-cli: not found
```
^ expected to work
```
apk add --update binutils
```
```
readelf -l /usr/sbin/zerotier-cli |grep "program interpreter"
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
```
^ which is unavailable
**Expected behavior**
Expect to get a docker that works the same way as it currently works with 1.2.12
**Proposed solution**
My quick fix can be useful: https://github.com/Spikhalskiy/ZeroTierOne/commit/b61b3da59fe24bb96c1c0b09e28deac5814306a7 | 1.0 | zerotier-containerized build doesn't work with Zerotier version 1.4.2 - If update the version of Zerotier in the current containerized build we will get a container where Zerotier can't be correctly started.
**To Reproduce**
1. Go to https://github.com/zerotier/ZeroTierOne/blob/1.4.2/ext/installfiles/linux/zerotier-containerized/Dockerfile and change "RUN apt-get update && apt-get install -y zerotier-one=1.2.12" -> "RUN apt-get update && apt-get install -y zerotier-one=1.4.2-2"
2. Build docker: docker build . -t zerotier/zerotier-containerized:1.4.2
3. Run docker: docker run -it --cap-add=NET_ADMIN --device=/dev/net/tun zerotier/zerotier-containerized:1.4.2 /bin/sh
4. Run inside the docker:
```
zerotier-cli
/bin/sh: zerotier-cli: not found
```
^ expected to work
```
apk add --update binutils
```
```
readelf -l /usr/sbin/zerotier-cli |grep "program interpreter"
[Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]
```
^ which is unavailable
**Expected behavior**
Expect to get a docker that works the same way as it currently works with 1.2.12
**Proposed solution**
My quick fix can be useful: https://github.com/Spikhalskiy/ZeroTierOne/commit/b61b3da59fe24bb96c1c0b09e28deac5814306a7 | non_defect | zerotier containerized build doesn t work with zerotier version if update the version of zerotier in the current containerized build we will get a container where zerotier can t be correctly started to reproduce go to and change run apt get update apt get install y zerotier one run apt get update apt get install y zerotier one build docker docker build t zerotier zerotier containerized run docker docker run it cap add net admin device dev net tun zerotier zerotier containerized bin sh run inside the docker zerotier cli bin sh zerotier cli not found expected to work apk add update binutils readelf l usr sbin zerotier cli grep program interpreter which is unavailable expected behavior expect to get a docker that works the same way as it currently works with proposed solution my quick fix can be useful | 0 |
76,478 | 26,450,476,162 | IssuesEvent | 2023-01-16 10:48:26 | matrix-org/matrix-hookshot | https://api.github.com/repos/matrix-org/matrix-hookshot | closed | `transformationFunction` updates not applied | T-Defect | **Steps to reproduce the problem**
- Room with a configured webhook and `transformationFunction` set
- Update `transformationFunction`
- Trigger the webhook
**Expected:**
New transformation function applied
**What actually happens:**
Old transformation function applied

 | 1.0 | `transformationFunction` updates not applied - **Steps to reproduce the problem**
- Room with a configured webhook and `transformationFunction` set
- Update `transformationFunction`
- Trigger the webhook
**Expected:**
New transformation function applied
**What actually happens:**
Old transformation function applied

 | defect | transformationfunction updates not applied steps to reproduce the problem room with a configured webhook and transformationfunction set update transformationfunction trigger the webhook expected new transformation function applied what actually happens old transformation function applied | 1 |
9,343 | 2,615,145,274 | IssuesEvent | 2015-03-01 06:21:05 | chrsmith/html5rocks | https://api.github.com/repos/chrsmith/html5rocks | closed | Newspaper columns | auto-migrated Milestone-2 Priority-Medium Studio Type-Defect | ```
mostly done, need to apply same style
```
Original issue reported on code.google.com by `v...@google.com` on 29 Jul 2010 at 4:30 | 1.0 | Newspaper columns - ```
mostly done, need to apply same style
```
Original issue reported on code.google.com by `v...@google.com` on 29 Jul 2010 at 4:30 | defect | newspaper columns mostly done need to apply same style original issue reported on code google com by v google com on jul at | 1 |
127,804 | 27,129,807,517 | IssuesEvent | 2023-02-16 08:58:24 | Regalis11/Barotrauma | https://api.github.com/repos/Regalis11/Barotrauma | closed | [Factions] Clown Event is global | Bug Need more info Code | ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
### Key Information
Game mode: Campaign
Game type: Multiplayer
Campaign save version: Started in 100.13.0.0, issue occurred in version 100.15.0.0
Server type: Steam P2P
### Issue Report
Whilst inside the outpost, I was given text from one of the clown events. I hadn't interacted with any clown or NPC and the text was advancing on its own. I also asked others in the session if they received the text and had mixed results. Many players who did not get the text were not inside the outpost, so perhaps this is related? It could be a networking error though.

### Reproduction steps
_No response_
### Bug prevalence
Just once
### Version
Faction/endgame test branch
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_ | 1.0 | [Factions] Clown Event is global - ### Disclaimers
- [X] I have searched the issue tracker to check if the issue has already been reported.
- [ ] My issue happened while using mods.
### What happened?
### Key Information
Game mode: Campaign
Game type: Multiplayer
Campaign save version: Started in 100.13.0.0, issue occurred in version 100.15.0.0
Server type: Steam P2P
### Issue Report
Whilst inside the outpost, I was given text from one of the clown events. I hadn't interacted with any clown or NPC and the text was advancing on its own. I also asked others in the session if they received the text and had mixed results. Many players who did not get the text were not inside the outpost, so perhaps this is related? It could be a networking error though.

### Reproduction steps
_No response_
### Bug prevalence
Just once
### Version
Faction/endgame test branch
### -
_No response_
### Which operating system did you encounter this bug on?
Windows
### Relevant error messages and crash reports
_No response_ | non_defect | clown event is global disclaimers i have searched the issue tracker to check if the issue has already been reported my issue happened while using mods what happened key information game mode campaign game type multiplayer campaign save version started in issue occurred in version server type steam issue report whilst inside the outpost i was given text from one of the clown events i hadn t interacted with any clown or npc and the text was advancing on its own i also asked others in the session if they received the text and had mixed results many players who did not get the text were not inside the outpost so perhaps this is related it could be a networking error though reproduction steps no response bug prevalence just once version faction endgame test branch no response which operating system did you encounter this bug on windows relevant error messages and crash reports no response | 0 |
23,244 | 4,004,306,806 | IssuesEvent | 2016-05-12 06:34:42 | magento/mtf | https://api.github.com/repos/magento/mtf | closed | Magento admin logging out automatically | magento test question | Hi Lanken,
command: vendor/bin/phpunit --debug tests/app/Magento/Catalog/Test/TestCase/Product/CreateSimpleProductEntityTest.php
If i run any test, it is logging in to backend and again it is logging out automatically so test getting fail.
any suggestion?
Thanks,
| 1.0 | Magento admin logging out automatically - Hi Lanken,
command: vendor/bin/phpunit --debug tests/app/Magento/Catalog/Test/TestCase/Product/CreateSimpleProductEntityTest.php
If i run any test, it is logging in to backend and again it is logging out automatically so test getting fail.
any suggestion?
Thanks,
| non_defect | magento admin logging out automatically hi lanken command vendor bin phpunit debug tests app magento catalog test testcase product createsimpleproductentitytest php if i run any test it is logging in to backend and again it is logging out automatically so test getting fail any suggestion thanks | 0 |
166,791 | 6,311,491,798 | IssuesEvent | 2017-07-23 20:00:15 | ericwbailey/empathy-prompts | https://api.github.com/repos/ericwbailey/empathy-prompts | closed | Mobile breakpoint design tweaks | [Priority] Low [Status] Accepted [Type] Enhancement | - [x] Fix about link being full-width
- [x] Add padding to focus | 1.0 | Mobile breakpoint design tweaks - - [x] Fix about link being full-width
- [x] Add padding to focus | non_defect | mobile breakpoint design tweaks fix about link being full width add padding to focus | 0 |
412,947 | 12,058,608,612 | IssuesEvent | 2020-04-15 17:45:57 | phetsims/aqua | https://api.github.com/repos/phetsims/aqua | closed | Future enhancements to automated testing | priority:5-deferred | After chipper 2.0 and some other work settles it would be good to discuss and prioritize some enhancements to automated testing.
I will bring this issue up at dev meeting for further discussion, but currently marking deferred.
A few comments already brought up:
> SR: It would be great to easily be able to run all tests locally (including fuzz tests and unit tests)
> AP: screenshot snapshot comparison
> AP: keyboard-navigation fuzz testing
> CM: list of shas for some column in CT report
> SR: instructions for how to reproduce a failed test locally
seems also somewhat related to https://github.com/phetsims/aqua/issues/15 | 1.0 | Future enhancements to automated testing - After chipper 2.0 and some other work settles it would be good to discuss and prioritize some enhancements to automated testing.
I will bring this issue up at dev meeting for further discussion, but currently marking deferred.
A few comments already brought up:
> SR: It would be great to easily be able to run all tests locally (including fuzz tests and unit tests)
> AP: screenshot snapshot comparison
> AP: keyboard-navigation fuzz testing
> CM: list of shas for some column in CT report
> SR: instructions for how to reproduce a failed test locally
seems also somewhat related to https://github.com/phetsims/aqua/issues/15 | non_defect | future enhancements to automated testing after chipper and some other work settles it would be good to discuss and prioritize some enhancements to automated testing i will bring this issue up at dev meeting for further discussion but currently marking deferred a few comments already brought up sr it would be great to easily be able to run all tests locally including fuzz tests and unit tests ap screenshot snapshot comparison ap keyboard navigation fuzz testing cm list of shas for some column in ct report sr instructions for how to reproduce a failed test locally seems also somewhat related to | 0 |
135,236 | 18,677,945,625 | IssuesEvent | 2021-10-31 21:49:02 | samq-ghdemo/easybuggy-private | https://api.github.com/repos/samq-ghdemo/easybuggy-private | opened | CVE-2016-4433 (High) detected in struts2-core-2.3.20.jar, xwork-core-2.3.20.jar | security vulnerability | ## CVE-2016-4433 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>struts2-core-2.3.20.jar</b>, <b>xwork-core-2.3.20.jar</b></p></summary>
<p>
<details><summary><b>struts2-core-2.3.20.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Path to dependency file: easybuggy-private/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar</p>
<p>
Dependency Hierarchy:
- vulnpackage-1.0.jar (Root Library)
- struts2-rest-plugin-2.3.20.jar
- :x: **struts2-core-2.3.20.jar** (Vulnerable Library)
</details>
<details><summary><b>xwork-core-2.3.20.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Library home page: <a href="http://struts.apache.org/">http://struts.apache.org/</a></p>
<p>Path to dependency file: easybuggy-private/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar</p>
<p>
Dependency Hierarchy:
- vulnpackage-1.0.jar (Root Library)
- struts2-rest-plugin-2.3.20.jar
- struts2-core-2.3.20.jar
- :x: **xwork-core-2.3.20.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/easybuggy-private/commit/6ef2566cb8b39d29f6b8b76a1bd3860df7fac401">6ef2566cb8b39d29f6b8b76a1bd3860df7fac401</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Struts 2 2.3.20 through 2.3.28.1 allows remote attackers to bypass intended access restrictions and conduct redirection attacks via a crafted request.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4433>CVE-2016-4433</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/apache/struts/tree/STRUTS_2_3_29">https://github.com/apache/struts/tree/STRUTS_2_3_29</a></p>
<p>Release Date: 2016-07-04</p>
<p>Fix Resolution: org.apache.struts:struts2-core:2.3.29, org.apache.struts.xwork:xwork-core:2.3.29</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.struts","packageName":"struts2-core","packageVersion":"2.3.20","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"wss:vulnpackage:1.0;org.apache.struts:struts2-rest-plugin:2.3.20;org.apache.struts:struts2-core:2.3.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.struts:struts2-core:2.3.29,\torg.apache.struts.xwork:xwork-core:2.3.29"},{"packageType":"Java","groupId":"org.apache.struts.xwork","packageName":"xwork-core","packageVersion":"2.3.20","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"wss:vulnpackage:1.0;org.apache.struts:struts2-rest-plugin:2.3.20;org.apache.struts:struts2-core:2.3.20;org.apache.struts.xwork:xwork-core:2.3.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.struts:struts2-core:2.3.29,\torg.apache.struts.xwork:xwork-core:2.3.29"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2016-4433","vulnerabilityDetails":"Apache Struts 2 2.3.20 through 2.3.28.1 allows remote attackers to bypass intended access restrictions and conduct redirection attacks via a crafted request.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4433","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2016-4433 (High) detected in struts2-core-2.3.20.jar, xwork-core-2.3.20.jar - ## CVE-2016-4433 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>struts2-core-2.3.20.jar</b>, <b>xwork-core-2.3.20.jar</b></p></summary>
<p>
<details><summary><b>struts2-core-2.3.20.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Path to dependency file: easybuggy-private/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/struts/struts2-core/2.3.20/struts2-core-2.3.20.jar</p>
<p>
Dependency Hierarchy:
- vulnpackage-1.0.jar (Root Library)
- struts2-rest-plugin-2.3.20.jar
- :x: **struts2-core-2.3.20.jar** (Vulnerable Library)
</details>
<details><summary><b>xwork-core-2.3.20.jar</b></p></summary>
<p>Apache Struts 2</p>
<p>Library home page: <a href="http://struts.apache.org/">http://struts.apache.org/</a></p>
<p>Path to dependency file: easybuggy-private/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/struts/xwork/xwork-core/2.3.20/xwork-core-2.3.20.jar</p>
<p>
Dependency Hierarchy:
- vulnpackage-1.0.jar (Root Library)
- struts2-rest-plugin-2.3.20.jar
- struts2-core-2.3.20.jar
- :x: **xwork-core-2.3.20.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samq-ghdemo/easybuggy-private/commit/6ef2566cb8b39d29f6b8b76a1bd3860df7fac401">6ef2566cb8b39d29f6b8b76a1bd3860df7fac401</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Struts 2 2.3.20 through 2.3.28.1 allows remote attackers to bypass intended access restrictions and conduct redirection attacks via a crafted request.
<p>Publish Date: 2016-07-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4433>CVE-2016-4433</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/apache/struts/tree/STRUTS_2_3_29">https://github.com/apache/struts/tree/STRUTS_2_3_29</a></p>
<p>Release Date: 2016-07-04</p>
<p>Fix Resolution: org.apache.struts:struts2-core:2.3.29, org.apache.struts.xwork:xwork-core:2.3.29</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.struts","packageName":"struts2-core","packageVersion":"2.3.20","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"wss:vulnpackage:1.0;org.apache.struts:struts2-rest-plugin:2.3.20;org.apache.struts:struts2-core:2.3.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.struts:struts2-core:2.3.29,\torg.apache.struts.xwork:xwork-core:2.3.29"},{"packageType":"Java","groupId":"org.apache.struts.xwork","packageName":"xwork-core","packageVersion":"2.3.20","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"wss:vulnpackage:1.0;org.apache.struts:struts2-rest-plugin:2.3.20;org.apache.struts:struts2-core:2.3.20;org.apache.struts.xwork:xwork-core:2.3.20","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.struts:struts2-core:2.3.29,\torg.apache.struts.xwork:xwork-core:2.3.29"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2016-4433","vulnerabilityDetails":"Apache Struts 2 2.3.20 through 2.3.28.1 allows remote attackers to bypass intended access restrictions and conduct redirection attacks via a crafted request.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-4433","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in core jar xwork core jar cve high severity vulnerability vulnerable libraries core jar xwork core jar core jar apache struts path to dependency file easybuggy private pom xml path to vulnerable library home wss scanner repository org apache struts core core jar dependency hierarchy vulnpackage jar root library rest plugin jar x core jar vulnerable library xwork core jar apache struts library home page a href path to dependency file easybuggy private pom xml path to vulnerable library home wss scanner repository org apache struts xwork xwork core xwork core jar dependency hierarchy vulnpackage jar root library rest plugin jar core jar x xwork core jar vulnerable library found in head commit a href found in base branch master vulnerability details apache struts through allows remote attackers to bypass intended access restrictions and conduct redirection attacks via a crafted request publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache struts core org apache struts xwork xwork core isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree wss vulnpackage org apache struts rest plugin org apache struts core isminimumfixversionavailable true minimumfixversion org apache struts core torg apache struts xwork xwork core packagetype java groupid org apache struts xwork packagename xwork core packageversion packagefilepaths istransitivedependency true dependencytree wss vulnpackage org apache struts rest plugin org apache struts core org apache struts xwork xwork core isminimumfixversionavailable true minimumfixversion org apache struts core torg apache struts xwork xwork core basebranches vulnerabilityidentifier cve vulnerabilitydetails apache struts through allows remote attackers to bypass intended access restrictions and conduct redirection attacks via a crafted request vulnerabilityurl | 0 |
388,700 | 11,491,243,789 | IssuesEvent | 2020-02-11 18:36:46 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | [Coverity CID :208193] Unchecked return value in tests/bluetooth/mesh/src/microbit.c | Coverity bug priority: low |
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/e089906b339aad4cd1b6589a3b6ce94782d93f54/tests/bluetooth/mesh/src/microbit.c#L35
Category: Error handling issues
Function: `configure_button`
Component: Tests
CID: [208193](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=208193)
Details:
```
29 static void configure_button(void)
30 {
31 static struct gpio_callback button_cb;
32
33 gpio = device_get_binding(DT_ALIAS_SW0_GPIOS_CONTROLLER);
34
>>> CID 208193: Error handling issues (CHECKED_RETURN)
>>> Calling "gpio_pin_configure" without checking return value (as is done elsewhere 28 out of 29 times).
35 gpio_pin_configure(gpio, DT_ALIAS_SW0_GPIOS_PIN,
36 DT_ALIAS_SW0_GPIOS_FLAGS | GPIO_INPUT);
37
38 gpio_init_callback(&button_cb, button_pressed,
39 BIT(DT_ALIAS_SW0_GPIOS_PIN));
40
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v32951/p12996.
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| 1.0 | [Coverity CID :208193] Unchecked return value in tests/bluetooth/mesh/src/microbit.c -
Static code scan issues found in file:
https://github.com/zephyrproject-rtos/zephyr/tree/e089906b339aad4cd1b6589a3b6ce94782d93f54/tests/bluetooth/mesh/src/microbit.c#L35
Category: Error handling issues
Function: `configure_button`
Component: Tests
CID: [208193](https://scan9.coverity.com/reports.htm#v29726/p12996/mergedDefectId=208193)
Details:
```
29 static void configure_button(void)
30 {
31 static struct gpio_callback button_cb;
32
33 gpio = device_get_binding(DT_ALIAS_SW0_GPIOS_CONTROLLER);
34
>>> CID 208193: Error handling issues (CHECKED_RETURN)
>>> Calling "gpio_pin_configure" without checking return value (as is done elsewhere 28 out of 29 times).
35 gpio_pin_configure(gpio, DT_ALIAS_SW0_GPIOS_PIN,
36 DT_ALIAS_SW0_GPIOS_FLAGS | GPIO_INPUT);
37
38 gpio_init_callback(&button_cb, button_pressed,
39 BIT(DT_ALIAS_SW0_GPIOS_PIN));
40
```
Please fix or provide comments in coverity using the link:
https://scan9.coverity.com/reports.htm#v32951/p12996.
Note: This issue was created automatically. Priority was set based on classification
of the file affected and the impact field in coverity. Assignees were set using the CODEOWNERS file.
| non_defect | unchecked return value in tests bluetooth mesh src microbit c static code scan issues found in file category error handling issues function configure button component tests cid details static void configure button void static struct gpio callback button cb gpio device get binding dt alias gpios controller cid error handling issues checked return calling gpio pin configure without checking return value as is done elsewhere out of times gpio pin configure gpio dt alias gpios pin dt alias gpios flags gpio input gpio init callback button cb button pressed bit dt alias gpios pin please fix or provide comments in coverity using the link note this issue was created automatically priority was set based on classification of the file affected and the impact field in coverity assignees were set using the codeowners file | 0 |
406,838 | 11,903,242,714 | IssuesEvent | 2020-03-30 15:04:10 | googleapis/google-cloud-dotnet | https://api.github.com/repos/googleapis/google-cloud-dotnet | closed | Synthesis failed for Google.Cloud.RecaptchaEnterprise.V1Beta1 | autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate Google.Cloud.RecaptchaEnterprise.V1Beta1. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1Beta1'
Cloning into '/tmpfs/tmp/tmpdhfyst66/googleapis'...
Note: checking out '88316b63a486002727e14032d104690541179fc9'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 88316b63a Generated synth.py files with multiple commits enabled
Note: checking out '7be2811ad17013a5ea24cd75dfd9e399dd6e18fe'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 7be2811a fix: Update gapic-generator version to pickup discogapic fixes
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1Beta1-2'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
2020-03-30 07:26:39,770 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1Beta1/synth.py.
Contents of preconfig file:
{"preclonedRepos": {"https://github.com/googleapis/google-cloud-dotnet.git": "/tmpfs/src/git/autosynth/working_repo", "https://github.com/googleapis/googleapis.git": "/tmpfs/tmp/tmpdhfyst66/googleapis"}}Extracted repo location: /tmpfs/tmp/tmpdhfyst66/googleapis
generateapis.sh: line 40: declare: GOOGLEAPIS: readonly variable
2020-03-30 07:26:39,817 synthtool > Failed executing /bin/bash generateapis.sh --check_compatibility Google.Cloud.RecaptchaEnterprise.V1Beta1:
None
2020-03-30 07:26:39,818 synthtool > Wrote metadata to synth.metadata.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1Beta1/synth.py", line 22, in <module>
hide_output = False)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '('/bin/bash', 'generateapis.sh', '--check_compatibility', 'Google.Cloud.RecaptchaEnterprise.V1Beta1')' returned non-zero exit status 1.
Synthesis failed
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 482, in <module>
main()
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 334, in main
return _inner_main(temp_dir)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 429, in _inner_main
branch,
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 135, in synthesize_loop
metadata_path, extra_args, deprecated_execution, environ, synthesize_py_path,
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 278, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](https://sponge/e62b0d94-aaf2-4a14-b8f8-8803561a9577).
| 1.0 | Synthesis failed for Google.Cloud.RecaptchaEnterprise.V1Beta1 - Hello! Autosynth couldn't regenerate Google.Cloud.RecaptchaEnterprise.V1Beta1. :broken_heart:
Here's the output from running `synth.py`:
```
Cloning into 'working_repo'...
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1Beta1'
Cloning into '/tmpfs/tmp/tmpdhfyst66/googleapis'...
Note: checking out '88316b63a486002727e14032d104690541179fc9'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 88316b63a Generated synth.py files with multiple commits enabled
Note: checking out '7be2811ad17013a5ea24cd75dfd9e399dd6e18fe'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:
git checkout -b <new-branch-name>
HEAD is now at 7be2811a fix: Update gapic-generator version to pickup discogapic fixes
Switched to a new branch 'autosynth-Google.Cloud.RecaptchaEnterprise.V1Beta1-2'
Running synthtool
['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']
2020-03-30 07:26:39,770 synthtool > Executing /tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1Beta1/synth.py.
Contents of preconfig file:
{"preclonedRepos": {"https://github.com/googleapis/google-cloud-dotnet.git": "/tmpfs/src/git/autosynth/working_repo", "https://github.com/googleapis/googleapis.git": "/tmpfs/tmp/tmpdhfyst66/googleapis"}}Extracted repo location: /tmpfs/tmp/tmpdhfyst66/googleapis
generateapis.sh: line 40: declare: GOOGLEAPIS: readonly variable
2020-03-30 07:26:39,817 synthtool > Failed executing /bin/bash generateapis.sh --check_compatibility Google.Cloud.RecaptchaEnterprise.V1Beta1:
None
2020-03-30 07:26:39,818 synthtool > Wrote metadata to synth.metadata.
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmpfs/src/git/autosynth/working_repo/apis/Google.Cloud.RecaptchaEnterprise.V1Beta1/synth.py", line 22, in <module>
hide_output = False)
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/git/autosynth/env/lib/python3.6/site-packages/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '('/bin/bash', 'generateapis.sh', '--check_compatibility', 'Google.Cloud.RecaptchaEnterprise.V1Beta1')' returned non-zero exit status 1.
Synthesis failed
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 482, in <module>
main()
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 334, in main
return _inner_main(temp_dir)
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 429, in _inner_main
branch,
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 135, in synthesize_loop
metadata_path, extra_args, deprecated_execution, environ, synthesize_py_path,
File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 278, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/git/autosynth/env/bin/python3', '-m', 'synthtool', '--metadata', 'synth.metadata', 'synth.py', '--']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](https://sponge/e62b0d94-aaf2-4a14-b8f8-8803561a9577).
| non_defect | synthesis failed for google cloud recaptchaenterprise hello autosynth couldn t regenerate google cloud recaptchaenterprise broken heart here s the output from running synth py cloning into working repo switched to a new branch autosynth google cloud recaptchaenterprise cloning into tmpfs tmp googleapis note checking out you are in detached head state you can look around make experimental changes and commit them and you can discard any commits you make in this state without impacting any branches by performing another checkout if you want to create a new branch to retain commits you create you may do so now or later by using b with the checkout command again example git checkout b head is now at generated synth py files with multiple commits enabled note checking out you are in detached head state you can look around make experimental changes and commit them and you can discard any commits you make in this state without impacting any branches by performing another checkout if you want to create a new branch to retain commits you create you may do so now or later by using b with the checkout command again example git checkout b head is now at fix update gapic generator version to pickup discogapic fixes switched to a new branch autosynth google cloud recaptchaenterprise running synthtool synthtool executing tmpfs src git autosynth working repo apis google cloud recaptchaenterprise synth py contents of preconfig file preclonedrepos tmpfs src git autosynth working repo tmpfs tmp googleapis extracted repo location tmpfs tmp googleapis generateapis sh line declare googleapis readonly variable synthtool failed executing bin bash generateapis sh check compatibility google cloud recaptchaenterprise none synthtool wrote metadata to synth metadata traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth env lib site packages synthtool main py line in main file tmpfs src git autosynth env lib site packages click core py line in call return self main args kwargs file tmpfs src git autosynth env lib site packages click core py line in main rv self invoke ctx file tmpfs src git autosynth env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src git autosynth env lib site packages click core py line in invoke return callback args kwargs file tmpfs src git autosynth env lib site packages synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file tmpfs src git autosynth working repo apis google cloud recaptchaenterprise synth py line in hide output false file tmpfs src git autosynth env lib site packages synthtool shell py line in run raise exc file tmpfs src git autosynth env lib site packages synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command bin bash generateapis sh check compatibility google cloud recaptchaenterprise returned non zero exit status synthesis failed traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth autosynth synth py line in main file tmpfs src git autosynth autosynth synth py line in main return inner main temp dir file tmpfs src git autosynth autosynth synth py line in inner main branch file tmpfs src git autosynth autosynth synth py line in synthesize loop metadata path extra args deprecated execution environ synthesize py path file tmpfs src git autosynth autosynth synth py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log | 0 |
509,691 | 14,741,818,252 | IssuesEvent | 2021-01-07 11:13:20 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.facebook.com - desktop site instead of mobile site | browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Firefox 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:85.0) Gecko/20100101 Firefox/85.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/65107 -->
**URL**: https://www.facebook.com/ad_center/create/ad?entry_point=www_left_nav_promote_button&page_id=110458017065622&so=eyJjcmVhdGl2ZV90ZW1wbGF0ZV9wb3N0X2lkIjoxMTA0NjY1OTAzOTgwOTh9&use_template_post=true
**Browser / Version**: Firefox 85.0
**Operating System**: Windows 7
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/ad076bf9-9462-426c-b68c-5839a98a00ab.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210105185604</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/54656e7d-4279-47d6-a27e-fbe0e2d3788a)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.facebook.com - desktop site instead of mobile site - <!-- @browser: Firefox 85.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:85.0) Gecko/20100101 Firefox/85.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/65107 -->
**URL**: https://www.facebook.com/ad_center/create/ad?entry_point=www_left_nav_promote_button&page_id=110458017065622&so=eyJjcmVhdGl2ZV90ZW1wbGF0ZV9wb3N0X2lkIjoxMTA0NjY1OTAzOTgwOTh9&use_template_post=true
**Browser / Version**: Firefox 85.0
**Operating System**: Windows 7
**Tested Another Browser**: No
**Problem type**: Desktop site instead of mobile site
**Description**: Desktop site instead of mobile site
**Steps to Reproduce**:
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/ad076bf9-9462-426c-b68c-5839a98a00ab.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210105185604</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/54656e7d-4279-47d6-a27e-fbe0e2d3788a)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_defect | desktop site instead of mobile site url browser version firefox operating system windows tested another browser no problem type desktop site instead of mobile site description desktop site instead of mobile site steps to reproduce view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
60,244 | 3,121,793,218 | IssuesEvent | 2015-09-06 00:12:43 | MDAnalysis/mdanalysis | https://api.github.com/repos/MDAnalysis/mdanalysis | closed | feasibility of porting to Python 3k | auto-migrated enhancement Priority-Medium Python3 | At the moment, MDAnalysis will not run in Python 3.x.
With the move to only support Python 2.6+ (see Issue #130) it should become doable to find out how difficult it will be to make the library also run under Python 3.x.
For the foreseeable future, we will have to maintain a working library under 2.6+ but it would be useful to find out if any of the [porting methods](http://wiki.python.org/moin/PortingPythonToPy3k) would work for us, e.g. maintaining a "conversion-ready" 2.x code base and then use the [2to3](http://docs.python.org/2/library/2to3.html) tool to generate the 3.x code when needed.
If anyone wants to try this then let me know and I'll make you owner of this issue. We can maintain an experimental py3k branch in the repository.
-- Oliver
Original issue reported on code.google.com by `orbeckst` on 17 Apr 2013 at 5:35
* Blocked on: #130, #211 | 1.0 | feasibility of porting to Python 3k - At the moment, MDAnalysis will not run in Python 3.x.
With the move to only support Python 2.6+ (see Issue #130) it should become doable to find out how difficult it will be to make the library also run under Python 3.x.
For the foreseeable future, we will have to maintain a working library under 2.6+ but it would be useful to find out if any of the [porting methods](http://wiki.python.org/moin/PortingPythonToPy3k) would work for us, e.g. maintaining a "conversion-ready" 2.x code base and then use the [2to3](http://docs.python.org/2/library/2to3.html) tool to generate the 3.x code when needed.
If anyone wants to try this then let me know and I'll make you owner of this issue. We can maintain an experimental py3k branch in the repository.
-- Oliver
Original issue reported on code.google.com by `orbeckst` on 17 Apr 2013 at 5:35
* Blocked on: #130, #211 | non_defect | feasibility of porting to python at the moment mdanalysis will not run in python x with the move to only support python see issue it should become doable to find out how difficult it will be to make the library also run under python x for the foreseeable future we will have to maintain a working library under but it would be useful to find out if any of the would work for us e g maintaining a conversion ready x code base and then use the tool to generate the x code when needed if anyone wants to try this then let me know and i ll make you owner of this issue we can maintain an experimental branch in the repository oliver original issue reported on code google com by orbeckst on apr at blocked on | 0 |
23,569 | 3,851,863,792 | IssuesEvent | 2016-04-06 05:27:04 | GPF/imame4all | https://api.github.com/repos/GPF/imame4all | closed | mame 32 vol.1 | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `shahidiq...@gmail.com` on 5 Apr 2015 at 4:44 | 1.0 | mame 32 vol.1 - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `shahidiq...@gmail.com` on 5 Apr 2015 at 4:44 | defect | mame vol what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by shahidiq gmail com on apr at | 1 |
154,580 | 19,730,315,687 | IssuesEvent | 2022-01-14 01:11:14 | harrinry/DataflowTemplates | https://api.github.com/repos/harrinry/DataflowTemplates | opened | CVE-2021-35517 (High) detected in commons-compress-1.4.1.jar | security vulnerability | ## CVE-2021-35517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.4.1.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with compression and archive formats.
These include: bzip2, gzip, pack200, xz and ar, cpio, jar, tar, zip, dump.</p>
<p>Library home page: <a href="http://commons.apache.org/compress/">http://commons.apache.org/compress/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar</p>
<p>
Dependency Hierarchy:
- hadoop-common-2.8.5.jar (Root Library)
- :x: **commons-compress-1.4.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/harrinry/DataflowTemplates/commit/dd7cd6660b3c3d0de5f379d8294b49e38a94ca65">dd7cd6660b3c3d0de5f379d8294b49e38a94ca65</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted TAR archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' tar package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35517>CVE-2021-35517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.commons","packageName":"commons-compress","packageVersion":"1.4.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.hadoop:hadoop-common:2.8.5;org.apache.commons:commons-compress:1.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.commons:commons-compress:1.21","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-35517","vulnerabilityDetails":"When reading a specially crafted TAR archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress\u0027 tar package.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35517","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-35517 (High) detected in commons-compress-1.4.1.jar - ## CVE-2021-35517 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.4.1.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with compression and archive formats.
These include: bzip2, gzip, pack200, xz and ar, cpio, jar, tar, zip, dump.</p>
<p>Library home page: <a href="http://commons.apache.org/compress/">http://commons.apache.org/compress/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.4.1/commons-compress-1.4.1.jar</p>
<p>
Dependency Hierarchy:
- hadoop-common-2.8.5.jar (Root Library)
- :x: **commons-compress-1.4.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/harrinry/DataflowTemplates/commit/dd7cd6660b3c3d0de5f379d8294b49e38a94ca65">dd7cd6660b3c3d0de5f379d8294b49e38a94ca65</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted TAR archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' tar package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35517>CVE-2021-35517</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.commons","packageName":"commons-compress","packageVersion":"1.4.1","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"org.apache.hadoop:hadoop-common:2.8.5;org.apache.commons:commons-compress:1.4.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.commons:commons-compress:1.21","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-35517","vulnerabilityDetails":"When reading a specially crafted TAR archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress\u0027 tar package.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35517","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in commons compress jar cve high severity vulnerability vulnerable library commons compress jar apache commons compress software defines an api for working with compression and archive formats these include gzip xz and ar cpio jar tar zip dump library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org apache commons commons compress commons compress jar dependency hierarchy hadoop common jar root library x commons compress jar vulnerable library found in head commit a href found in base branch master vulnerability details when reading a specially crafted tar archive compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs this could be used to mount a denial of service attack against services that use compress tar package publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache commons commons compress isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree org apache hadoop hadoop common org apache commons commons compress isminimumfixversionavailable true minimumfixversion org apache commons commons compress isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails when reading a specially crafted tar archive compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs this could be used to mount a denial of service attack against services that use compress tar package vulnerabilityurl | 0 |
418,718 | 28,124,740,308 | IssuesEvent | 2023-03-31 16:43:03 | amzn/selling-partner-api-docs | https://api.github.com/repos/amzn/selling-partner-api-docs | closed | Where is the Sellers API (and AWS credentials provider) in the C# SDK?! | Documentation enhancement request | Hi everyone,
I've been a software developer for 12 years and I've never seen anything like this API! Especially considering it's from a multi-billion-pound organisation! I realise that it's only a few months old, but still!
Anyway, I've been following this developer guide here: [https://github.com/amzn/selling-partner-api-docs/blob/main/guides/developer-guide/SellingPartnerApiDeveloperGuide.md](url)
I've registered as a developer.
I've registered my application.
I've created an AWS account.
I've created an IAM user.
I've created an IAM policy.
I've created an IAM role.
The user has 'assume role'.
I've self-authorised the app.
I've got the refresh token.
I've have the C# SDK which after some messing about, I got it to work.
I've configured my AWS credentials.
I've NOT configured my AWS credentials provider because this class doesn't seem to exist in the C# SDK.
I've configured my LWA credentials.
I've not created an instance of the Sellers API as it's not the C# SDK either!
How do us C# people move forward with this? :-D
Thanks, Antony...
| 1.0 | Where is the Sellers API (and AWS credentials provider) in the C# SDK?! - Hi everyone,
I've been a software developer for 12 years and I've never seen anything like this API! Especially considering it's from a multi-billion-pound organisation! I realise that it's only a few months old, but still!
Anyway, I've been following this developer guide here: [https://github.com/amzn/selling-partner-api-docs/blob/main/guides/developer-guide/SellingPartnerApiDeveloperGuide.md](url)
I've registered as a developer.
I've registered my application.
I've created an AWS account.
I've created an IAM user.
I've created an IAM policy.
I've created an IAM role.
The user has 'assume role'.
I've self-authorised the app.
I've got the refresh token.
I've have the C# SDK which after some messing about, I got it to work.
I've configured my AWS credentials.
I've NOT configured my AWS credentials provider because this class doesn't seem to exist in the C# SDK.
I've configured my LWA credentials.
I've not created an instance of the Sellers API as it's not the C# SDK either!
How do us C# people move forward with this? :-D
Thanks, Antony...
| non_defect | where is the sellers api and aws credentials provider in the c sdk hi everyone i ve been a software developer for years and i ve never seen anything like this api especially considering it s from a multi billion pound organisation i realise that it s only a few months old but still anyway i ve been following this developer guide here url i ve registered as a developer i ve registered my application i ve created an aws account i ve created an iam user i ve created an iam policy i ve created an iam role the user has assume role i ve self authorised the app i ve got the refresh token i ve have the c sdk which after some messing about i got it to work i ve configured my aws credentials i ve not configured my aws credentials provider because this class doesn t seem to exist in the c sdk i ve configured my lwa credentials i ve not created an instance of the sellers api as it s not the c sdk either how do us c people move forward with this d thanks antony | 0 |
63,242 | 17,483,057,677 | IssuesEvent | 2021-08-09 07:12:06 | secureCodeBox/secureCodeBox | https://api.github.com/repos/secureCodeBox/secureCodeBox | opened | DefectDojo date/time parsing bug | bug defectdojo | ## 🐞 Bug report
<!--
Thank you for reporting an issue in our project 🙌
Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.
-->
### Describe the bug
<!-- A clear and concise description of what the bug is. -->
The DefectDojo persistence Hook seems to have an bug parsing some findings:
```bash
Exception in thread "main" com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Java 8 date/time type `java.time.LocalDateTime` not supported by default: add Module "com.fasterxml.jackson.datatype:jackson-datatype-jsr310" to enable handling
at [Source: (String)"{"count":10,"next":null,"previous":null,"results":[{"id":13680,"tags":[],"request_response":{"req_resp":[]},"accepted_risks":[],"push_to_jira":false,"age":0,"sla_days_remaining":90,"finding_meta":[],"related_fields":null,"jira_creation":null,"jira_change":null,"display_status":"Inactive, Duplicate","finding_groups":[],"title":"Displays Information About Page Retrievals, Including Other Users.","date":"2021-08-09","sla_start_date":null,"cwe":0,"cve":null,"cvssv3":null,"cvssv3_score":null,"url":nu"[truncated 19279 chars]; line: 1, column: 1499] (through reference chain: io.securecodebox.persistence.defectdojo.models.DefectDojoResponse["results"]->java.util.ArrayList[0]->io.securecodebox.persistence.defectdojo.models.Finding["created"])
at com.fasterxml.jackson.databind.exc.InvalidDefinitionException.from(InvalidDefinitionException.java:67)
at com.fasterxml.jackson.databind.DeserializationContext.reportBadDefinition(DeserializationContext.java:1764)
at com.fasterxml.jackson.databind.deser.impl.UnsupportedTypeDeserializer.deserialize(UnsupportedTypeDeserializer.java:36)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:324)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:324)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4593)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3548)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3531)
at io.securecodebox.persistence.defectdojo.service.FindingService.deserializeList(FindingService.java:48)
at io.securecodebox.persistence.defectdojo.service.GenericDefectDojoService.internalSearch(GenericDefectDojoService.java:115)
at io.securecodebox.persistence.defectdojo.service.GenericDefectDojoService.search(GenericDefectDojoService.java:124)
at io.securecodebox.persistence.strategies.VersionedEngagementsStrategy.run(VersionedEngagementsStrategy.java:101)
at io.securecodebox.persistence.DefectDojoPersistenceProvider.main(DefectDojoPersistenceProvider.java:42)
```
### Steps To Reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
-->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### System (please complete the following information):
<!--
- secureCodeBox Version/Release
- OS: [e.g. iOS]
- Kubernetes Version [command: `kubectl version`]
- Docker Version [command: `docker -v`]
- Browser [e.g. chrome, safari, firefox,...]
-->
### Screenshots / Logs
<!-- If applicable, add screenshots to help explain your problem. -->
### Additional context
<!-- Add any other context about the problem here. -->
| 1.0 | DefectDojo date/time parsing bug - ## 🐞 Bug report
<!--
Thank you for reporting an issue in our project 🙌
Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.
-->
### Describe the bug
<!-- A clear and concise description of what the bug is. -->
The DefectDojo persistence Hook seems to have an bug parsing some findings:
```bash
Exception in thread "main" com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Java 8 date/time type `java.time.LocalDateTime` not supported by default: add Module "com.fasterxml.jackson.datatype:jackson-datatype-jsr310" to enable handling
at [Source: (String)"{"count":10,"next":null,"previous":null,"results":[{"id":13680,"tags":[],"request_response":{"req_resp":[]},"accepted_risks":[],"push_to_jira":false,"age":0,"sla_days_remaining":90,"finding_meta":[],"related_fields":null,"jira_creation":null,"jira_change":null,"display_status":"Inactive, Duplicate","finding_groups":[],"title":"Displays Information About Page Retrievals, Including Other Users.","date":"2021-08-09","sla_start_date":null,"cwe":0,"cve":null,"cvssv3":null,"cvssv3_score":null,"url":nu"[truncated 19279 chars]; line: 1, column: 1499] (through reference chain: io.securecodebox.persistence.defectdojo.models.DefectDojoResponse["results"]->java.util.ArrayList[0]->io.securecodebox.persistence.defectdojo.models.Finding["created"])
at com.fasterxml.jackson.databind.exc.InvalidDefinitionException.from(InvalidDefinitionException.java:67)
at com.fasterxml.jackson.databind.DeserializationContext.reportBadDefinition(DeserializationContext.java:1764)
at com.fasterxml.jackson.databind.deser.impl.UnsupportedTypeDeserializer.deserialize(UnsupportedTypeDeserializer.java:36)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:324)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer._deserializeFromArray(CollectionDeserializer.java:355)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:244)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:28)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:324)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:187)
at com.fasterxml.jackson.databind.deser.DefaultDeserializationContext.readRootValue(DefaultDeserializationContext.java:322)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4593)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3548)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3531)
at io.securecodebox.persistence.defectdojo.service.FindingService.deserializeList(FindingService.java:48)
at io.securecodebox.persistence.defectdojo.service.GenericDefectDojoService.internalSearch(GenericDefectDojoService.java:115)
at io.securecodebox.persistence.defectdojo.service.GenericDefectDojoService.search(GenericDefectDojoService.java:124)
at io.securecodebox.persistence.strategies.VersionedEngagementsStrategy.run(VersionedEngagementsStrategy.java:101)
at io.securecodebox.persistence.DefectDojoPersistenceProvider.main(DefectDojoPersistenceProvider.java:42)
```
### Steps To Reproduce
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
-->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### System (please complete the following information):
<!--
- secureCodeBox Version/Release
- OS: [e.g. iOS]
- Kubernetes Version [command: `kubectl version`]
- Docker Version [command: `docker -v`]
- Browser [e.g. chrome, safari, firefox,...]
-->
### Screenshots / Logs
<!-- If applicable, add screenshots to help explain your problem. -->
### Additional context
<!-- Add any other context about the problem here. -->
| defect | defectdojo date time parsing bug 🐞 bug report thank you for reporting an issue in our project 🙌 before opening a new issue please make sure that we do not have any duplicates already open you can ensure this by searching the issue list for this repository if there is a duplicate please close your issue and add a comment to the existing issue instead describe the bug the defectdojo persistence hook seems to have an bug parsing some findings bash exception in thread main com fasterxml jackson databind exc invaliddefinitionexception java date time type java time localdatetime not supported by default add module com fasterxml jackson datatype jackson datatype to enable handling at request response req resp accepted risks push to jira false age sla days remaining finding meta related fields null jira creation null jira change null display status inactive duplicate finding groups title displays information about page retrievals including other users date sla start date null cwe cve null null score null url nu line column through reference chain io securecodebox persistence defectdojo models defectdojoresponse java util arraylist io securecodebox persistence defectdojo models finding at com fasterxml jackson databind exc invaliddefinitionexception from invaliddefinitionexception java at com fasterxml jackson databind deserializationcontext reportbaddefinition deserializationcontext java at com fasterxml jackson databind deser impl unsupportedtypedeserializer deserialize unsupportedtypedeserializer java at com fasterxml jackson databind deser impl methodproperty deserializeandset methodproperty java at com fasterxml jackson databind deser beandeserializer vanilladeserialize beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson databind deser std collectiondeserializer deserializefromarray collectiondeserializer java at com fasterxml jackson databind deser std collectiondeserializer deserialize collectiondeserializer java at com fasterxml jackson databind deser std collectiondeserializer deserialize collectiondeserializer java at com fasterxml jackson databind deser impl methodproperty deserializeandset methodproperty java at com fasterxml jackson databind deser beandeserializer vanilladeserialize beandeserializer java at com fasterxml jackson databind deser beandeserializer deserialize beandeserializer java at com fasterxml jackson databind deser defaultdeserializationcontext readrootvalue defaultdeserializationcontext java at com fasterxml jackson databind objectmapper readmapandclose objectmapper java at com fasterxml jackson databind objectmapper readvalue objectmapper java at com fasterxml jackson databind objectmapper readvalue objectmapper java at io securecodebox persistence defectdojo service findingservice deserializelist findingservice java at io securecodebox persistence defectdojo service genericdefectdojoservice internalsearch genericdefectdojoservice java at io securecodebox persistence defectdojo service genericdefectdojoservice search genericdefectdojoservice java at io securecodebox persistence strategies versionedengagementsstrategy run versionedengagementsstrategy java at io securecodebox persistence defectdojopersistenceprovider main defectdojopersistenceprovider java steps to reproduce steps to reproduce the behavior go to click on scroll down to see error expected behavior system please complete the following information securecodebox version release os kubernetes version docker version browser screenshots logs additional context | 1 |
20,205 | 3,315,166,875 | IssuesEvent | 2015-11-06 10:29:52 | junichi11/cakephp3-netbeans | https://api.github.com/repos/junichi11/cakephp3-netbeans | closed | Can't install the plugin | defect | Unfortunately, this plugin depends on the build version of NetBeans #4 .
This problem will be resolved in the next stable version (probably NB8.1).
If there is not a nbm for your build version, please let me know. I'll create it for your version.
You can check it in Help > About . e.g. NetBeans 8.0.2 Patch 1 (Build 201411181905)
nbms are named like the following:
org-netbeans-modules-php-cake3-[plugin version number]-[dev]-[build version number of NetBeans].nbm
org-netbeans-modules-php-cake3-0.0.1-dev-201408251540.nbm
### another workaround
- Clone this repository
- Run create nbm
| 1.0 | Can't install the plugin - Unfortunately, this plugin depends on the build version of NetBeans #4 .
This problem will be resolved in the next stable version (probably NB8.1).
If there is not a nbm for your build version, please let me know. I'll create it for your version.
You can check it in Help > About . e.g. NetBeans 8.0.2 Patch 1 (Build 201411181905)
nbms are named like the following:
org-netbeans-modules-php-cake3-[plugin version number]-[dev]-[build version number of NetBeans].nbm
org-netbeans-modules-php-cake3-0.0.1-dev-201408251540.nbm
### another workaround
- Clone this repository
- Run create nbm
| defect | can t install the plugin unfortunately this plugin depends on the build version of netbeans this problem will be resolved in the next stable version probably if there is not a nbm for your build version please let me know i ll create it for your version you can check it in help about e g netbeans patch build nbms are named like the following org netbeans modules php nbm org netbeans modules php dev nbm another workaround clone this repository run create nbm | 1 |
231,073 | 17,661,549,239 | IssuesEvent | 2021-08-21 16:02:30 | amzn/selling-partner-api-docs | https://api.github.com/repos/amzn/selling-partner-api-docs | opened | report's ReportDocument compressionAlgorithm is confusing and unnecessary | documentation enhancement request | As mentioned previously in Bug #664 (and some other bugs that involved the old API's extra crypto layer), the report's ReportDocument compressionAlgorithm is confusing. It's also unnecessary as HTTP supports this feature natively.
I recently requested (accidentally) a `GET_FBA_FULFILLMENT_CURRENT_INVENTORY_DATA` report with 11 megabytes of data uncompressed. It was sent as documented per https://github.com/amzn/selling-partner-api-docs/blob/main/references/reports-api/reports_2021-06-30.md#reportdocument and that all works.
However it is needlessly confusing the way the SP-API does this though.
The HTTP protocol natively supports compression encoding. Instead of all of this `"compressionAlgorithm": "GZIP"` that has to be documented separately, the SP-API should just send the standard HTTP header: `Content-Encoding: gzip`. And then the HTTP client will seamlessly perform decompression. This also would solve a current oddity where the content type doesn't seem to be specified properly. (It's UTF-8 seemingly, but I didn't see that in the SP-API protocol.) | 1.0 | report's ReportDocument compressionAlgorithm is confusing and unnecessary - As mentioned previously in Bug #664 (and some other bugs that involved the old API's extra crypto layer), the report's ReportDocument compressionAlgorithm is confusing. It's also unnecessary as HTTP supports this feature natively.
I recently requested (accidentally) a `GET_FBA_FULFILLMENT_CURRENT_INVENTORY_DATA` report with 11 megabytes of data uncompressed. It was sent as documented per https://github.com/amzn/selling-partner-api-docs/blob/main/references/reports-api/reports_2021-06-30.md#reportdocument and that all works.
However it is needlessly confusing the way the SP-API does this though.
The HTTP protocol natively supports compression encoding. Instead of all of this `"compressionAlgorithm": "GZIP"` that has to be documented separately, the SP-API should just send the standard HTTP header: `Content-Encoding: gzip`. And then the HTTP client will seamlessly perform decompression. This also would solve a current oddity where the content type doesn't seem to be specified properly. (It's UTF-8 seemingly, but I didn't see that in the SP-API protocol.) | non_defect | report s reportdocument compressionalgorithm is confusing and unnecessary as mentioned previously in bug and some other bugs that involved the old api s extra crypto layer the report s reportdocument compressionalgorithm is confusing it s also unnecessary as http supports this feature natively i recently requested accidentally a get fba fulfillment current inventory data report with megabytes of data uncompressed it was sent as documented per and that all works however it is needlessly confusing the way the sp api does this though the http protocol natively supports compression encoding instead of all of this compressionalgorithm gzip that has to be documented separately the sp api should just send the standard http header content encoding gzip and then the http client will seamlessly perform decompression this also would solve a current oddity where the content type doesn t seem to be specified properly it s utf seemingly but i didn t see that in the sp api protocol | 0 |
73,701 | 24,760,111,891 | IssuesEvent | 2022-10-21 22:30:08 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | Design | Discovery | Profile | Screenreader Accessibility | Nametag | design 508/Accessibility vsa authenticated-experience profile 508-defect-3 | ## Background
The Accessibility team informed us that a screenreader may not pick up the Nametag section of Profile because it is not under an H2. Currently, the Nametag is hidden under the first heading so by putting it under its own H2, under the first H1, screenreaders will be able to identify it better.
Accessibility had some recommendations for consideration as we evaluate options including:
- Angela considers this to be "basic information," which could be displayed right below the H1 under a new H2 heading that says “basic information”. This will make it visible to the screen reader.
- If we put it _above_ H1, we might not need to display it on each page, as it could get redundant. Instead, the Nametag could be included and read out on Personal information page only.
- Or for less priority, it could be at the bottom of the page
## Tasks
- [ ] Consider alternatives for Nametag info
- [ ] Review viability and ease of implementation with FE
- [ ] Create 2 - 3 mock-ups
- [ ] Review with team (and update mocks if needed)
- [ ] Review with MY VA (and update mocks if needed)
## Acceptance Criteria
- [ ] Align on design
| 1.0 | Design | Discovery | Profile | Screenreader Accessibility | Nametag - ## Background
The Accessibility team informed us that a screenreader may not pick up the Nametag section of Profile because it is not under an H2. Currently, the Nametag is hidden under the first heading so by putting it under its own H2, under the first H1, screenreaders will be able to identify it better.
Accessibility had some recommendations for consideration as we evaluate options including:
- Angela considers this to be "basic information," which could be displayed right below the H1 under a new H2 heading that says “basic information”. This will make it visible to the screen reader.
- If we put it _above_ H1, we might not need to display it on each page, as it could get redundant. Instead, the Nametag could be included and read out on Personal information page only.
- Or for less priority, it could be at the bottom of the page
## Tasks
- [ ] Consider alternatives for Nametag info
- [ ] Review viability and ease of implementation with FE
- [ ] Create 2 - 3 mock-ups
- [ ] Review with team (and update mocks if needed)
- [ ] Review with MY VA (and update mocks if needed)
## Acceptance Criteria
- [ ] Align on design
| defect | design discovery profile screenreader accessibility nametag background the accessibility team informed us that a screenreader may not pick up the nametag section of profile because it is not under an currently the nametag is hidden under the first heading so by putting it under its own under the first screenreaders will be able to identify it better accessibility had some recommendations for consideration as we evaluate options including angela considers this to be basic information which could be displayed right below the under a new heading that says “basic information” this will make it visible to the screen reader if we put it above we might not need to display it on each page as it could get redundant instead the nametag could be included and read out on personal information page only or for less priority it could be at the bottom of the page tasks consider alternatives for nametag info review viability and ease of implementation with fe create mock ups review with team and update mocks if needed review with my va and update mocks if needed acceptance criteria align on design | 1 |
431,626 | 12,484,210,869 | IssuesEvent | 2020-05-30 13:38:04 | sonia-auv/provider_actuators | https://api.github.com/repos/sonia-auv/provider_actuators | closed | Move custom messages and services to sonia_msgs | Priority: High Type: Feature | Messages:
- [x] DoAction
Services:
- [x] DoActionSrv | 1.0 | Move custom messages and services to sonia_msgs - Messages:
- [x] DoAction
Services:
- [x] DoActionSrv | non_defect | move custom messages and services to sonia msgs messages doaction services doactionsrv | 0 |
163,758 | 6,205,179,739 | IssuesEvent | 2017-07-06 15:37:21 | openshift/origin-web-console | https://api.github.com/repos/openshift/origin-web-console | opened | Don't spam connection errors | kind/bug priority/P2 | We shouldn't spam errors for each type when the API server isn't responding.

cc @jwforres | 1.0 | Don't spam connection errors - We shouldn't spam errors for each type when the API server isn't responding.

cc @jwforres | non_defect | don t spam connection errors we shouldn t spam errors for each type when the api server isn t responding cc jwforres | 0 |
23,933 | 3,874,058,195 | IssuesEvent | 2016-04-11 19:08:53 | niaquinto/pdf2json | https://api.github.com/repos/niaquinto/pdf2json | closed | Windows style line endings for Makefile and Makefile.in in the linux archive | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. extract pdf2json-0.65.tar.gz
2. cat -v Makefile
What is the expected output? What do you see instead?
The expected output is that the files must use UNIX style line endings.
What version of the product are you using? On what operating system?
pdf2json 0.65
```
Original issue reported on code.google.com by `erazo...@gmail.com` on 15 Nov 2013 at 3:42 | 1.0 | Windows style line endings for Makefile and Makefile.in in the linux archive - ```
What steps will reproduce the problem?
1. extract pdf2json-0.65.tar.gz
2. cat -v Makefile
What is the expected output? What do you see instead?
The expected output is that the files must use UNIX style line endings.
What version of the product are you using? On what operating system?
pdf2json 0.65
```
Original issue reported on code.google.com by `erazo...@gmail.com` on 15 Nov 2013 at 3:42 | defect | windows style line endings for makefile and makefile in in the linux archive what steps will reproduce the problem extract tar gz cat v makefile what is the expected output what do you see instead the expected output is that the files must use unix style line endings what version of the product are you using on what operating system original issue reported on code google com by erazo gmail com on nov at | 1 |
24,563 | 4,018,241,281 | IssuesEvent | 2016-05-16 09:35:30 | akvo/akvo-flow | https://api.github.com/repos/akvo/akvo-flow | closed | Spreadsheet Import converts data to numbers regardless of the type of data expected by the associated question | 1 - Defect 2 - Fixed | When importing data in a spreadsheet into the dashboard, cells containing only numbers are imported as numeric data regardless of the expected type of the of responses as defined by the question.
How to reproduce:
1. create survey containing a free text question
2. Export an empty raw data report of the survey
3. enter a mix of numbers some 9+ digits long in the column for the free text question
4. import the survey to the dashboard
5. check the values of the imported data
Expected results: strings containing numbers formatted in scientific notation for those with longer digits.
The correct behaviour should be that the type of the data imported is based on the type of data expected by the question associated with that column in the spreadsheet.
| 1.0 | Spreadsheet Import converts data to numbers regardless of the type of data expected by the associated question - When importing data in a spreadsheet into the dashboard, cells containing only numbers are imported as numeric data regardless of the expected type of the of responses as defined by the question.
How to reproduce:
1. create survey containing a free text question
2. Export an empty raw data report of the survey
3. enter a mix of numbers some 9+ digits long in the column for the free text question
4. import the survey to the dashboard
5. check the values of the imported data
Expected results: strings containing numbers formatted in scientific notation for those with longer digits.
The correct behaviour should be that the type of the data imported is based on the type of data expected by the question associated with that column in the spreadsheet.
| defect | spreadsheet import converts data to numbers regardless of the type of data expected by the associated question when importing data in a spreadsheet into the dashboard cells containing only numbers are imported as numeric data regardless of the expected type of the of responses as defined by the question how to reproduce create survey containing a free text question export an empty raw data report of the survey enter a mix of numbers some digits long in the column for the free text question import the survey to the dashboard check the values of the imported data expected results strings containing numbers formatted in scientific notation for those with longer digits the correct behaviour should be that the type of the data imported is based on the type of data expected by the question associated with that column in the spreadsheet | 1 |
61,314 | 17,023,664,847 | IssuesEvent | 2021-07-03 03:10:55 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Multipolygon relations with node members fail | Component: potlatch2 Priority: major Resolution: fixed Type: defect | **[Submitted to the original trac issue database at 8.40am, Friday, 24th December 2010]**
...because solidLines in WayUI assumes the member entity is a Way, and tries to draw it, and bad things therefore happen. | 1.0 | Multipolygon relations with node members fail - **[Submitted to the original trac issue database at 8.40am, Friday, 24th December 2010]**
...because solidLines in WayUI assumes the member entity is a Way, and tries to draw it, and bad things therefore happen. | defect | multipolygon relations with node members fail because solidlines in wayui assumes the member entity is a way and tries to draw it and bad things therefore happen | 1 |
11,598 | 2,659,900,555 | IssuesEvent | 2015-03-19 00:22:31 | perfsonar/project | https://api.github.com/repos/perfsonar/project | closed | old files building up in /home/var/lib/perfsonar/regular_testing | Milestone-Release3.5 Priority-Medium Type-Defect | Original [issue 1068](https://code.google.com/p/perfsonar-ps/issues/detail?id=1068) created by arlake228 on 2015-01-30T19:59:48.000Z:
I'm seeing files that are > 3 months old in /home/var/lib/perfsonar/regular_testing, and users have reported this on the list as well. There must be cases where these do not get cleaned up properly. Maybe due to a reboot or service restart? Maybe a bug?
Maybe we should add a cron job to remove anything > 1 month old, just to make sure we dont run out of disk space / inodes?
| 1.0 | old files building up in /home/var/lib/perfsonar/regular_testing - Original [issue 1068](https://code.google.com/p/perfsonar-ps/issues/detail?id=1068) created by arlake228 on 2015-01-30T19:59:48.000Z:
I'm seeing files that are > 3 months old in /home/var/lib/perfsonar/regular_testing, and users have reported this on the list as well. There must be cases where these do not get cleaned up properly. Maybe due to a reboot or service restart? Maybe a bug?
Maybe we should add a cron job to remove anything > 1 month old, just to make sure we dont run out of disk space / inodes?
| defect | old files building up in home var lib perfsonar regular testing original created by on i m seeing files that are gt months old in home var lib perfsonar regular testing and users have reported this on the list as well there must be cases where these do not get cleaned up properly maybe due to a reboot or service restart maybe a bug maybe we should add a cron job to remove anything gt month old just to make sure we dont run out of disk space inodes | 1 |
29,544 | 5,716,958,489 | IssuesEvent | 2017-04-19 16:09:42 | bancika/diy-layout-creator | https://api.github.com/repos/bancika/diy-layout-creator | closed | Mouse focusses wrongly after window resizing | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Open the application
2. Resize the window
3. Drag any object
I've attached a screencap video that demonstrates the issue. I assume this is a
bug in the way the application co-operates with Linux's window managers.
I'm running DIY LC 3.23.0 on LMDE with java from official Oracle packages:
java version "1.7.0_09"
Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)
Seems like the cursor focus gets set only on application launch. After that the
focus stays the same even if the wondow gets resized. This is quite an annoying
issue when drawing anything more complicated that needs more room on the screen.
```
Original issue reported on code.google.com by `miro...@gmail.com` on 5 Dec 2012 at 9:35
Attachments:
- [out-1.ogv](https://storage.googleapis.com/google-code-attachments/diy-layout-creator/issue-190/comment-0/out-1.ogv)
| 1.0 | Mouse focusses wrongly after window resizing - ```
What steps will reproduce the problem?
1. Open the application
2. Resize the window
3. Drag any object
I've attached a screencap video that demonstrates the issue. I assume this is a
bug in the way the application co-operates with Linux's window managers.
I'm running DIY LC 3.23.0 on LMDE with java from official Oracle packages:
java version "1.7.0_09"
Java(TM) SE Runtime Environment (build 1.7.0_09-b05)
Java HotSpot(TM) 64-Bit Server VM (build 23.5-b02, mixed mode)
Seems like the cursor focus gets set only on application launch. After that the
focus stays the same even if the wondow gets resized. This is quite an annoying
issue when drawing anything more complicated that needs more room on the screen.
```
Original issue reported on code.google.com by `miro...@gmail.com` on 5 Dec 2012 at 9:35
Attachments:
- [out-1.ogv](https://storage.googleapis.com/google-code-attachments/diy-layout-creator/issue-190/comment-0/out-1.ogv)
| defect | mouse focusses wrongly after window resizing what steps will reproduce the problem open the application resize the window drag any object i ve attached a screencap video that demonstrates the issue i assume this is a bug in the way the application co operates with linux s window managers i m running diy lc on lmde with java from official oracle packages java version java tm se runtime environment build java hotspot tm bit server vm build mixed mode seems like the cursor focus gets set only on application launch after that the focus stays the same even if the wondow gets resized this is quite an annoying issue when drawing anything more complicated that needs more room on the screen original issue reported on code google com by miro gmail com on dec at attachments | 1 |
46,348 | 13,055,897,079 | IssuesEvent | 2020-07-30 03:03:21 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | Problems with scripts using pycuda on the gpu cluster (Trac #1039) | Incomplete Migration Migrated from Trac cvmfs defect | Migrated from https://code.icecube.wisc.edu/ticket/1039
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:02",
"description": "This message was received at help@icecube. It relates to a CVMFS software issue, so I am opening a ticket here to handle it. Adding the user in CC, so I hope Sebastian gets updates.\n\nGonzalo\n\n----- Original email from Sebastian:\n\nDear cvmfs team, \n\nat the moment I'm using pycuda locally. But I run into problems with scripts using pycuda on the gpu cluster. \nThus, at the moment I'm only able to use pycuda on specific CUDA devices. \nSo setting a specific CUDADeviceName is the only \"solution\" I found so far. \n\nNow my question, would it be possible to install pycuda in cvmfs? \nVlad told me that he couldn't find a pycuda RPM that could be installed system-wide. \n\nThanks in advance! \n\nBest, \nSebastian ",
"reporter": "gmerino",
"cc": "schoenen@physik.rwth-aachen.de, nega",
"resolution": "wontfix",
"_ts": "1458335642885003",
"component": "cvmfs",
"summary": "Problems with scripts using pycuda on the gpu cluster",
"priority": "normal",
"keywords": "",
"time": "2015-06-30T14:34:46",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
| 1.0 | Problems with scripts using pycuda on the gpu cluster (Trac #1039) - Migrated from https://code.icecube.wisc.edu/ticket/1039
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:02",
"description": "This message was received at help@icecube. It relates to a CVMFS software issue, so I am opening a ticket here to handle it. Adding the user in CC, so I hope Sebastian gets updates.\n\nGonzalo\n\n----- Original email from Sebastian:\n\nDear cvmfs team, \n\nat the moment I'm using pycuda locally. But I run into problems with scripts using pycuda on the gpu cluster. \nThus, at the moment I'm only able to use pycuda on specific CUDA devices. \nSo setting a specific CUDADeviceName is the only \"solution\" I found so far. \n\nNow my question, would it be possible to install pycuda in cvmfs? \nVlad told me that he couldn't find a pycuda RPM that could be installed system-wide. \n\nThanks in advance! \n\nBest, \nSebastian ",
"reporter": "gmerino",
"cc": "schoenen@physik.rwth-aachen.de, nega",
"resolution": "wontfix",
"_ts": "1458335642885003",
"component": "cvmfs",
"summary": "Problems with scripts using pycuda on the gpu cluster",
"priority": "normal",
"keywords": "",
"time": "2015-06-30T14:34:46",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
| defect | problems with scripts using pycuda on the gpu cluster trac migrated from json status closed changetime description this message was received at help icecube it relates to a cvmfs software issue so i am opening a ticket here to handle it adding the user in cc so i hope sebastian gets updates n ngonzalo n n original email from sebastian n ndear cvmfs team n nat the moment i m using pycuda locally but i run into problems with scripts using pycuda on the gpu cluster nthus at the moment i m only able to use pycuda on specific cuda devices nso setting a specific cudadevicename is the only solution i found so far n nnow my question would it be possible to install pycuda in cvmfs nvlad told me that he couldn t find a pycuda rpm that could be installed system wide n nthanks in advance n nbest nsebastian reporter gmerino cc schoenen physik rwth aachen de nega resolution wontfix ts component cvmfs summary problems with scripts using pycuda on the gpu cluster priority normal keywords time milestone owner david schultz type defect | 1 |
27,720 | 5,081,701,019 | IssuesEvent | 2016-12-29 11:49:46 | phingofficial/phing | https://api.github.com/repos/phingofficial/phing | closed | Variable problem (Trac #1136) | defect Migrated from Trac phing-core | Hi,
I am trying create automatical build with PHPCI and Phing. But I have a problem with evaluation of variable expresions.
There is a part of my phing build with a output:
'''build.xml'''
``` text
<?xml version="1.0" encoding="UTF-8"?>
<project name="MyApplication" default="main" >
<property file="./build/properties/default.properties" />
<property name="appDir" value="${project.basedir}" />
<if>
<isset property="build.env" />
<then>
<echo message="Overwriting default.properties with ${build.env}.properties" />
<property file="build/properties/${build.env}.properties" override="true" />
</then>
</if>
<target name="main">
<echo message="+------------------------------------------+"/>
<echo message="| |"/>
<echo message="| Building The Project |"/>
<echo message="| Dir: ${project.basedir} |"/>
<echo message="| appDir: ${appDir} |"/>
<echo message="| |"/>
<echo message="+------------------------------------------+"/>
<phing phingfile="build/build-logs.xml" target="logs" inheritRefs="true" inheritAll="true">
<property name="baseDir" value="${project.basedir}" />
</phing>
<phing phingfile="build/sync-to-server.xml" target="sync-web" inheritRefs="true" inheritAll="true">
<property name="baseDir" value="${project.basedir}" />
</phing>
<phing phingfile="build/sync-database.xml" target="sync-db" inheritRefs="true" inheritAll="true">
<property name="baseDir" value="${project.basedir}" />
</phing>
</target>
</project>
```
'''sync-to-server.xml'''
``` text
<?xml version="1.0" encoding="UTF-8"?>
<project default="sync-web">
<target name="sync-web"
description="Synchronize server files"
>
<echo msg="Synchronizing files..." />
<filesync
sourcedir = "."
destinationdir = "${remote.user}@${remote.server}:${remote.dir}"
excludeFile="./build/properties/exclude-files.properties"
itemizechanges = "true"
delete = "true"
checksum = "true" />
<echo msg="End synchronizing files..." />
</target>
</project>
```
'''sync-database.xml'''
``` text
<?xml version="1.0" encoding="UTF-8"?>
<project default="sync-db">
<target name="sync-db">
<echo msg="Synchronizing database..." />
<echo msg="AppDir: ${appDir}..." />
<echo msg="BaseDir: ${project.basedir}..." />
<tstamp/>
<property name="deployfile" value="/db/scripts/deploy-${DSTAMP}${TSTAMP}.sql" />
<property name="undofile" value="/db/scripts/undo-${DSTAMP}${TSTAMP}.sql" />
<property name="test" value="${appDir}" />
<property name="dbDeploy" value="${appDir}${deployfile}" />
<property name="dbUndo" value="${test}${undofile}" />
<echo msg="${test} -- ${test} x ${dbDeploy} vs ${dbUndo}" />
<dbdeploy
url="mysql:host=${db.host};dbname=${db.name}"
userid="${db.user}"
password="${db.pass}"
dir="${test}/build/db/deltas"
outputfile="${dbDeploy}"
undooutputfile="${dbUndo}" />
<exec
command="${tools.mysql} -h${db.host} -u${db.user} -p${db.pass} ${db.name} < ${dbDeploy}"
dir="."
output="true"
checkreturn="true" />
</target>
</project>
```
'''output:'''
``` text
[echo] +------------------------------------------+
[echo] | |
[echo] | Building The Project |
[echo] | Dir: /data/www/PHPCI/PHPCI/build/project2-build479 |
[echo] | appDir: /data/www/PHPCI/PHPCI/build/project2-build479 |
[echo] | |
[echo] +------------------------------------------+
[phing] Calling Buildfile '/build/build-logs.xml' with target 'logs'
...
...
...
...
sync-db:
[echo] Synchronizing database...
[echo] AppDir: /data/www/PHPCI/PHPCI/build/project2-build479...
[echo] BaseDir: /data/www/PHPCI/PHPCI/build/project2-build479...
[echo] /data/www/PHPCI/PHPCI/build/project2-build479 -- /data/www/PHPCI/PHPCI/build/project2-build479 x /db/scripts/deploy-201408282328.sql vs /db/scripts/undo-201408282328.sql
```
After "'''vs'''" in output, missing full path. In xml is ${dbUndo} and ${dbUndo} = ${test}${undofile}.
I noticed, if i remove '''/''' from the begin of the ${undofile} (so values is db/scripts/undo-${DSTAMP}${TSTAMP}.sql), output is ok with full url(but path is bad :( )
``` text
/data/www/PHPCI/PHPCI/build/project2-build478db/scripts/undo-201408281551.sql
```
Maybe problem is in using Phing with PHPCI.
Thanks for help
Migrated from https://www.phing.info/trac/ticket/1136
``` json
{
"status": "new",
"changetime": "2014-08-29T15:22:29",
"description": "Hi, \n\nI am trying create automatical build with PHPCI and Phing. But I have a problem with evaluation of variable expresions.\n\nThere is a part of my phing build with a output:\n\n'''build.xml'''\n\n{{{\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project name=\"MyApplication\" default=\"main\" >\n <property file=\"./build/properties/default.properties\" />\n <property name=\"appDir\" value=\"${project.basedir}\" />\n <if>\n <isset property=\"build.env\" />\n <then>\n <echo message=\"Overwriting default.properties with ${build.env}.properties\" />\n <property file=\"build/properties/${build.env}.properties\" override=\"true\" />\n </then>\n </if>\n <target name=\"main\">\n <echo message=\"+------------------------------------------+\"/>\n <echo message=\"| |\"/>\n <echo message=\"| Building The Project |\"/>\n <echo message=\"| Dir: ${project.basedir} |\"/>\n <echo message=\"| appDir: ${appDir} |\"/>\n <echo message=\"| |\"/>\n <echo message=\"+------------------------------------------+\"/>\n\n <phing phingfile=\"build/build-logs.xml\" target=\"logs\" inheritRefs=\"true\" inheritAll=\"true\">\n <property name=\"baseDir\" value=\"${project.basedir}\" />\n </phing>\n <phing phingfile=\"build/sync-to-server.xml\" target=\"sync-web\" inheritRefs=\"true\" inheritAll=\"true\">\n <property name=\"baseDir\" value=\"${project.basedir}\" />\n </phing>\n <phing phingfile=\"build/sync-database.xml\" target=\"sync-db\" inheritRefs=\"true\" inheritAll=\"true\">\n <property name=\"baseDir\" value=\"${project.basedir}\" />\n </phing>\n </target>\n</project>\n}}}\n\n'''sync-to-server.xml'''\n\n{{{\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project default=\"sync-web\">\n <target name=\"sync-web\"\n description=\"Synchronize server files\"\n >\n <echo msg=\"Synchronizing files...\" />\n <filesync\n sourcedir = \".\"\n destinationdir = \"${remote.user}@${remote.server}:${remote.dir}\"\n excludeFile=\"./build/properties/exclude-files.properties\"\n itemizechanges = \"true\"\n delete = \"true\"\n checksum = \"true\" />\n <echo msg=\"End synchronizing files...\" />\n </target>\n</project>\n}}}\n\n\n'''sync-database.xml'''\n\n{{{\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project default=\"sync-db\">\n <target name=\"sync-db\">\n <echo msg=\"Synchronizing database...\" />\n <echo msg=\"AppDir: ${appDir}...\" />\n <echo msg=\"BaseDir: ${project.basedir}...\" />\n <tstamp/>\n <property name=\"deployfile\" value=\"/db/scripts/deploy-${DSTAMP}${TSTAMP}.sql\" />\n <property name=\"undofile\" value=\"/db/scripts/undo-${DSTAMP}${TSTAMP}.sql\" />\n <property name=\"test\" value=\"${appDir}\" />\n <property name=\"dbDeploy\" value=\"${appDir}${deployfile}\" />\n <property name=\"dbUndo\" value=\"${test}${undofile}\" />\n <echo msg=\"${test} -- ${test} x ${dbDeploy} vs ${dbUndo}\" />\n\n <dbdeploy\n url=\"mysql:host=${db.host};dbname=${db.name}\"\n userid=\"${db.user}\"\n password=\"${db.pass}\"\n dir=\"${test}/build/db/deltas\"\n outputfile=\"${dbDeploy}\"\n undooutputfile=\"${dbUndo}\" />\n <exec\n command=\"${tools.mysql} -h${db.host} -u${db.user} -p${db.pass} ${db.name} < ${dbDeploy}\"\n dir=\".\"\n output=\"true\"\n checkreturn=\"true\" />\n\n </target>\n</project>\n}}}\n\n'''output:'''\n\n{{{\n [echo] +------------------------------------------+\n [echo] | |\n [echo] | Building The Project |\n [echo] | Dir: /data/www/PHPCI/PHPCI/build/project2-build479 |\n [echo] | appDir: /data/www/PHPCI/PHPCI/build/project2-build479 |\n [echo] | |\n [echo] +------------------------------------------+\n [phing] Calling Buildfile '/build/build-logs.xml' with target 'logs'\n...\n...\n...\n...\n sync-db:\n\n [echo] Synchronizing database...\n [echo] AppDir: /data/www/PHPCI/PHPCI/build/project2-build479...\n [echo] BaseDir: /data/www/PHPCI/PHPCI/build/project2-build479...\n [echo] /data/www/PHPCI/PHPCI/build/project2-build479 -- /data/www/PHPCI/PHPCI/build/project2-build479 x /db/scripts/deploy-201408282328.sql vs /db/scripts/undo-201408282328.sql\n}}}\n\n\nAfter \"'''vs'''\" in output, missing full path. In xml is ${dbUndo} and ${dbUndo} = ${test}${undofile}.\n\nI noticed, if i remove '''/''' from the begin of the ${undofile} (so values is db/scripts/undo-${DSTAMP}${TSTAMP}.sql), output is ok with full url(but path is bad :( )\n\n\n{{{\n/data/www/PHPCI/PHPCI/build/project2-build478db/scripts/undo-201408281551.sql\n\n}}}\n\nMaybe problem is in using Phing with PHPCI. \n\nThanks for help",
"reporter": "josberger@seznam.cz",
"cc": "",
"resolution": "",
"_ts": "1409325749143622",
"component": "phing-core",
"summary": "Variable problem",
"priority": "major",
"keywords": "",
"version": "2.7.0",
"time": "2014-08-28T22:06:35",
"milestone": "Backlog",
"owner": "mrook",
"type": "defect"
}
```
| 1.0 | Variable problem (Trac #1136) - Hi,
I am trying create automatical build with PHPCI and Phing. But I have a problem with evaluation of variable expresions.
There is a part of my phing build with a output:
'''build.xml'''
``` text
<?xml version="1.0" encoding="UTF-8"?>
<project name="MyApplication" default="main" >
<property file="./build/properties/default.properties" />
<property name="appDir" value="${project.basedir}" />
<if>
<isset property="build.env" />
<then>
<echo message="Overwriting default.properties with ${build.env}.properties" />
<property file="build/properties/${build.env}.properties" override="true" />
</then>
</if>
<target name="main">
<echo message="+------------------------------------------+"/>
<echo message="| |"/>
<echo message="| Building The Project |"/>
<echo message="| Dir: ${project.basedir} |"/>
<echo message="| appDir: ${appDir} |"/>
<echo message="| |"/>
<echo message="+------------------------------------------+"/>
<phing phingfile="build/build-logs.xml" target="logs" inheritRefs="true" inheritAll="true">
<property name="baseDir" value="${project.basedir}" />
</phing>
<phing phingfile="build/sync-to-server.xml" target="sync-web" inheritRefs="true" inheritAll="true">
<property name="baseDir" value="${project.basedir}" />
</phing>
<phing phingfile="build/sync-database.xml" target="sync-db" inheritRefs="true" inheritAll="true">
<property name="baseDir" value="${project.basedir}" />
</phing>
</target>
</project>
```
'''sync-to-server.xml'''
``` text
<?xml version="1.0" encoding="UTF-8"?>
<project default="sync-web">
<target name="sync-web"
description="Synchronize server files"
>
<echo msg="Synchronizing files..." />
<filesync
sourcedir = "."
destinationdir = "${remote.user}@${remote.server}:${remote.dir}"
excludeFile="./build/properties/exclude-files.properties"
itemizechanges = "true"
delete = "true"
checksum = "true" />
<echo msg="End synchronizing files..." />
</target>
</project>
```
'''sync-database.xml'''
``` text
<?xml version="1.0" encoding="UTF-8"?>
<project default="sync-db">
<target name="sync-db">
<echo msg="Synchronizing database..." />
<echo msg="AppDir: ${appDir}..." />
<echo msg="BaseDir: ${project.basedir}..." />
<tstamp/>
<property name="deployfile" value="/db/scripts/deploy-${DSTAMP}${TSTAMP}.sql" />
<property name="undofile" value="/db/scripts/undo-${DSTAMP}${TSTAMP}.sql" />
<property name="test" value="${appDir}" />
<property name="dbDeploy" value="${appDir}${deployfile}" />
<property name="dbUndo" value="${test}${undofile}" />
<echo msg="${test} -- ${test} x ${dbDeploy} vs ${dbUndo}" />
<dbdeploy
url="mysql:host=${db.host};dbname=${db.name}"
userid="${db.user}"
password="${db.pass}"
dir="${test}/build/db/deltas"
outputfile="${dbDeploy}"
undooutputfile="${dbUndo}" />
<exec
command="${tools.mysql} -h${db.host} -u${db.user} -p${db.pass} ${db.name} < ${dbDeploy}"
dir="."
output="true"
checkreturn="true" />
</target>
</project>
```
'''output:'''
``` text
[echo] +------------------------------------------+
[echo] | |
[echo] | Building The Project |
[echo] | Dir: /data/www/PHPCI/PHPCI/build/project2-build479 |
[echo] | appDir: /data/www/PHPCI/PHPCI/build/project2-build479 |
[echo] | |
[echo] +------------------------------------------+
[phing] Calling Buildfile '/build/build-logs.xml' with target 'logs'
...
...
...
...
sync-db:
[echo] Synchronizing database...
[echo] AppDir: /data/www/PHPCI/PHPCI/build/project2-build479...
[echo] BaseDir: /data/www/PHPCI/PHPCI/build/project2-build479...
[echo] /data/www/PHPCI/PHPCI/build/project2-build479 -- /data/www/PHPCI/PHPCI/build/project2-build479 x /db/scripts/deploy-201408282328.sql vs /db/scripts/undo-201408282328.sql
```
After "'''vs'''" in output, missing full path. In xml is ${dbUndo} and ${dbUndo} = ${test}${undofile}.
I noticed, if i remove '''/''' from the begin of the ${undofile} (so values is db/scripts/undo-${DSTAMP}${TSTAMP}.sql), output is ok with full url(but path is bad :( )
``` text
/data/www/PHPCI/PHPCI/build/project2-build478db/scripts/undo-201408281551.sql
```
Maybe problem is in using Phing with PHPCI.
Thanks for help
Migrated from https://www.phing.info/trac/ticket/1136
``` json
{
"status": "new",
"changetime": "2014-08-29T15:22:29",
"description": "Hi, \n\nI am trying create automatical build with PHPCI and Phing. But I have a problem with evaluation of variable expresions.\n\nThere is a part of my phing build with a output:\n\n'''build.xml'''\n\n{{{\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project name=\"MyApplication\" default=\"main\" >\n <property file=\"./build/properties/default.properties\" />\n <property name=\"appDir\" value=\"${project.basedir}\" />\n <if>\n <isset property=\"build.env\" />\n <then>\n <echo message=\"Overwriting default.properties with ${build.env}.properties\" />\n <property file=\"build/properties/${build.env}.properties\" override=\"true\" />\n </then>\n </if>\n <target name=\"main\">\n <echo message=\"+------------------------------------------+\"/>\n <echo message=\"| |\"/>\n <echo message=\"| Building The Project |\"/>\n <echo message=\"| Dir: ${project.basedir} |\"/>\n <echo message=\"| appDir: ${appDir} |\"/>\n <echo message=\"| |\"/>\n <echo message=\"+------------------------------------------+\"/>\n\n <phing phingfile=\"build/build-logs.xml\" target=\"logs\" inheritRefs=\"true\" inheritAll=\"true\">\n <property name=\"baseDir\" value=\"${project.basedir}\" />\n </phing>\n <phing phingfile=\"build/sync-to-server.xml\" target=\"sync-web\" inheritRefs=\"true\" inheritAll=\"true\">\n <property name=\"baseDir\" value=\"${project.basedir}\" />\n </phing>\n <phing phingfile=\"build/sync-database.xml\" target=\"sync-db\" inheritRefs=\"true\" inheritAll=\"true\">\n <property name=\"baseDir\" value=\"${project.basedir}\" />\n </phing>\n </target>\n</project>\n}}}\n\n'''sync-to-server.xml'''\n\n{{{\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project default=\"sync-web\">\n <target name=\"sync-web\"\n description=\"Synchronize server files\"\n >\n <echo msg=\"Synchronizing files...\" />\n <filesync\n sourcedir = \".\"\n destinationdir = \"${remote.user}@${remote.server}:${remote.dir}\"\n excludeFile=\"./build/properties/exclude-files.properties\"\n itemizechanges = \"true\"\n delete = \"true\"\n checksum = \"true\" />\n <echo msg=\"End synchronizing files...\" />\n </target>\n</project>\n}}}\n\n\n'''sync-database.xml'''\n\n{{{\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project default=\"sync-db\">\n <target name=\"sync-db\">\n <echo msg=\"Synchronizing database...\" />\n <echo msg=\"AppDir: ${appDir}...\" />\n <echo msg=\"BaseDir: ${project.basedir}...\" />\n <tstamp/>\n <property name=\"deployfile\" value=\"/db/scripts/deploy-${DSTAMP}${TSTAMP}.sql\" />\n <property name=\"undofile\" value=\"/db/scripts/undo-${DSTAMP}${TSTAMP}.sql\" />\n <property name=\"test\" value=\"${appDir}\" />\n <property name=\"dbDeploy\" value=\"${appDir}${deployfile}\" />\n <property name=\"dbUndo\" value=\"${test}${undofile}\" />\n <echo msg=\"${test} -- ${test} x ${dbDeploy} vs ${dbUndo}\" />\n\n <dbdeploy\n url=\"mysql:host=${db.host};dbname=${db.name}\"\n userid=\"${db.user}\"\n password=\"${db.pass}\"\n dir=\"${test}/build/db/deltas\"\n outputfile=\"${dbDeploy}\"\n undooutputfile=\"${dbUndo}\" />\n <exec\n command=\"${tools.mysql} -h${db.host} -u${db.user} -p${db.pass} ${db.name} < ${dbDeploy}\"\n dir=\".\"\n output=\"true\"\n checkreturn=\"true\" />\n\n </target>\n</project>\n}}}\n\n'''output:'''\n\n{{{\n [echo] +------------------------------------------+\n [echo] | |\n [echo] | Building The Project |\n [echo] | Dir: /data/www/PHPCI/PHPCI/build/project2-build479 |\n [echo] | appDir: /data/www/PHPCI/PHPCI/build/project2-build479 |\n [echo] | |\n [echo] +------------------------------------------+\n [phing] Calling Buildfile '/build/build-logs.xml' with target 'logs'\n...\n...\n...\n...\n sync-db:\n\n [echo] Synchronizing database...\n [echo] AppDir: /data/www/PHPCI/PHPCI/build/project2-build479...\n [echo] BaseDir: /data/www/PHPCI/PHPCI/build/project2-build479...\n [echo] /data/www/PHPCI/PHPCI/build/project2-build479 -- /data/www/PHPCI/PHPCI/build/project2-build479 x /db/scripts/deploy-201408282328.sql vs /db/scripts/undo-201408282328.sql\n}}}\n\n\nAfter \"'''vs'''\" in output, missing full path. In xml is ${dbUndo} and ${dbUndo} = ${test}${undofile}.\n\nI noticed, if i remove '''/''' from the begin of the ${undofile} (so values is db/scripts/undo-${DSTAMP}${TSTAMP}.sql), output is ok with full url(but path is bad :( )\n\n\n{{{\n/data/www/PHPCI/PHPCI/build/project2-build478db/scripts/undo-201408281551.sql\n\n}}}\n\nMaybe problem is in using Phing with PHPCI. \n\nThanks for help",
"reporter": "josberger@seznam.cz",
"cc": "",
"resolution": "",
"_ts": "1409325749143622",
"component": "phing-core",
"summary": "Variable problem",
"priority": "major",
"keywords": "",
"version": "2.7.0",
"time": "2014-08-28T22:06:35",
"milestone": "Backlog",
"owner": "mrook",
"type": "defect"
}
```
| defect | variable problem trac hi i am trying create automatical build with phpci and phing but i have a problem with evaluation of variable expresions there is a part of my phing build with a output build xml text sync to server xml text target name sync web description synchronize server files filesync sourcedir destinationdir remote user remote server remote dir excludefile build properties exclude files properties itemizechanges true delete true checksum true sync database xml text dbdeploy url mysql host db host dbname db name userid db user password db pass dir test build db deltas outputfile dbdeploy undooutputfile dbundo exec command tools mysql h db host u db user p db pass db name lt dbdeploy dir output true checkreturn true output text building the project dir data www phpci phpci build appdir data www phpci phpci build calling buildfile build build logs xml with target logs sync db synchronizing database appdir data www phpci phpci build basedir data www phpci phpci build data www phpci phpci build data www phpci phpci build x db scripts deploy sql vs db scripts undo sql after vs in output missing full path in xml is dbundo and dbundo test undofile i noticed if i remove from the begin of the undofile so values is db scripts undo dstamp tstamp sql output is ok with full url but path is bad text data www phpci phpci build scripts undo sql maybe problem is in using phing with phpci thanks for help migrated from json status new changetime description hi n ni am trying create automatical build with phpci and phing but i have a problem with evaluation of variable expresions n nthere is a part of my phing build with a output n n build xml n n n n n n n n n n n n n n n n n n n n n n n n n n n n n n n n n n n n sync to server xml n n n n n n n n n n n n n n sync database xml n n n n n n n n n n n n n n n n n n n n n n n n output n n n n n building the project n dir data www phpci phpci build n appdir data www phpci phpci build n n n calling buildfile build build logs xml with target logs n n n n n sync db n n synchronizing database n appdir data www phpci phpci build n basedir data www phpci phpci build n data www phpci phpci build data www phpci phpci build x db scripts deploy sql vs db scripts undo sql n n n nafter vs in output missing full path in xml is dbundo and dbundo test undofile n ni noticed if i remove from the begin of the undofile so values is db scripts undo dstamp tstamp sql output is ok with full url but path is bad n n n n data www phpci phpci build scripts undo sql n n n nmaybe problem is in using phing with phpci n nthanks for help reporter josberger seznam cz cc resolution ts component phing core summary variable problem priority major keywords version time milestone backlog owner mrook type defect | 1 |
140,112 | 12,887,297,938 | IssuesEvent | 2020-07-13 10:58:50 | PyTorchLightning/pytorch-lightning | https://api.github.com/repos/PyTorchLightning/pytorch-lightning | opened | The documents about override backward() | documentation | I'm trying to set retain_graph=True in loss.backward()
The document says backward function could be overridden like:
[https://pytorch-lightning.readthedocs.io/en/0.7.6/introduction_guide.html#extensibility](https://pytorch-lightning.readthedocs.io/en/0.7.6/introduction_guide.html#extensibility)
`
class LitMNIST(LightningModule):
def backward(self, use_amp, loss, optimizer):
# do a custom way of backward
loss.backward(retain_graph=True)
`
But the error occurs:
`
An exception has occurred: TypeError
backward() takes 4 positional arguments but 5 were given
`
I check the code, it seems the arguments for backward are
`model_ref.backward(self, closure_loss, optimizer, opt_idx)`
[https://github.com/PyTorchLightning/pytorch-lightning/blob/1d565e175d98103c2ebd6164e681f76143501da9/pytorch_lightning/trainer/training_loop.py#L820](https://github.com/PyTorchLightning/pytorch-lightning/blob/1d565e175d98103c2ebd6164e681f76143501da9/pytorch_lightning/trainer/training_loop.py#L820)
I think this part of the document needs to be updated. | 1.0 | The documents about override backward() - I'm trying to set retain_graph=True in loss.backward()
The document says backward function could be overridden like:
[https://pytorch-lightning.readthedocs.io/en/0.7.6/introduction_guide.html#extensibility](https://pytorch-lightning.readthedocs.io/en/0.7.6/introduction_guide.html#extensibility)
`
class LitMNIST(LightningModule):
def backward(self, use_amp, loss, optimizer):
# do a custom way of backward
loss.backward(retain_graph=True)
`
But the error occurs:
`
An exception has occurred: TypeError
backward() takes 4 positional arguments but 5 were given
`
I check the code, it seems the arguments for backward are
`model_ref.backward(self, closure_loss, optimizer, opt_idx)`
[https://github.com/PyTorchLightning/pytorch-lightning/blob/1d565e175d98103c2ebd6164e681f76143501da9/pytorch_lightning/trainer/training_loop.py#L820](https://github.com/PyTorchLightning/pytorch-lightning/blob/1d565e175d98103c2ebd6164e681f76143501da9/pytorch_lightning/trainer/training_loop.py#L820)
I think this part of the document needs to be updated. | non_defect | the documents about override backward i m trying to set retain graph true in loss backward the document says backward function could be overridden like class litmnist lightningmodule def backward self use amp loss optimizer do a custom way of backward loss backward retain graph true but the error occurs an exception has occurred typeerror backward takes positional arguments but were given i check the code it seems the arguments for backward are model ref backward self closure loss optimizer opt idx i think this part of the document needs to be updated | 0 |
314,036 | 26,971,788,390 | IssuesEvent | 2023-02-09 05:53:12 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | closed | Only one broken attachment displayed if multiple blob container attachments with the same name | 🧪 testing :gear: blobs :beetle: regression :gear: adls gen2 | **Storage Explorer Version**: 1.28.0
**Build Number**: 20230208.3
**Branch**: rel/1.28.0
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.2 (Apple M1 Pro)
**Architecture**: ia32/x64/arm64
**How Found**: Exploratory testing
**Regression From**: Previous release (1.27.2)
## Steps to Reproduce ##
1. Azure AD attach two blob containers (make sure already given the correct RBAC role).
3. Rename the two attachments using a same name.
4. Switch to the 'ACCOUNT MANAGEMENT' panel -> Remove the azure account that associated with the Azure AD attachments.
5. Switch to the 'EXPLORER' panel.
6. Check whether displays two broken blob container attachments.
## Expected Experience ##
Display two broken blob container attachments.
## Actual Experience ##
Only display one broken blob container attachment.
## Additional Context ##
This issue doesn't reproduce for queues/tables.
| 1.0 | Only one broken attachment displayed if multiple blob container attachments with the same name - **Storage Explorer Version**: 1.28.0
**Build Number**: 20230208.3
**Branch**: rel/1.28.0
**Platform/OS**: Windows 10/Linux Ubuntu 20.04/MacOS Ventura 13.2 (Apple M1 Pro)
**Architecture**: ia32/x64/arm64
**How Found**: Exploratory testing
**Regression From**: Previous release (1.27.2)
## Steps to Reproduce ##
1. Azure AD attach two blob containers (make sure already given the correct RBAC role).
3. Rename the two attachments using a same name.
4. Switch to the 'ACCOUNT MANAGEMENT' panel -> Remove the azure account that associated with the Azure AD attachments.
5. Switch to the 'EXPLORER' panel.
6. Check whether displays two broken blob container attachments.
## Expected Experience ##
Display two broken blob container attachments.
## Actual Experience ##
Only display one broken blob container attachment.
## Additional Context ##
This issue doesn't reproduce for queues/tables.
| non_defect | only one broken attachment displayed if multiple blob container attachments with the same name storage explorer version build number branch rel platform os windows linux ubuntu macos ventura apple pro architecture how found exploratory testing regression from previous release steps to reproduce azure ad attach two blob containers make sure already given the correct rbac role rename the two attachments using a same name switch to the account management panel remove the azure account that associated with the azure ad attachments switch to the explorer panel check whether displays two broken blob container attachments expected experience display two broken blob container attachments actual experience only display one broken blob container attachment additional context this issue doesn t reproduce for queues tables | 0 |
45,317 | 9,739,892,110 | IssuesEvent | 2019-06-01 15:31:28 | EdenServer/community | https://api.github.com/repos/EdenServer/community | closed | AOE Weaponskill possibly bypassing claimshield on NMs | in-code-review | ### Checklist
<!--
Don't edit or delete this section, but tick the boxes after you have submitted your issue.
If there are unticked boxes a developer may not address the issue.
Make sure you comply with the checklist and then start writing in the details section below.
-->
- [x] I have searched for existing issues for issues like this one. The issue has not been posted. (Duplicate reports slow down development.)
- [x] I have provided reproducable steps. (No "as the title says" posts please. Provide reproducable steps even if it seems like a waste of time.)
- [x] I have provided my client version in the details. (type /ver into your game window)
### Details
An AOE weaponskill used on a mob near a NM may bypass the claimshield on said NM.
| 1.0 | AOE Weaponskill possibly bypassing claimshield on NMs - ### Checklist
<!--
Don't edit or delete this section, but tick the boxes after you have submitted your issue.
If there are unticked boxes a developer may not address the issue.
Make sure you comply with the checklist and then start writing in the details section below.
-->
- [x] I have searched for existing issues for issues like this one. The issue has not been posted. (Duplicate reports slow down development.)
- [x] I have provided reproducable steps. (No "as the title says" posts please. Provide reproducable steps even if it seems like a waste of time.)
- [x] I have provided my client version in the details. (type /ver into your game window)
### Details
An AOE weaponskill used on a mob near a NM may bypass the claimshield on said NM.
| non_defect | aoe weaponskill possibly bypassing claimshield on nms checklist don t edit or delete this section but tick the boxes after you have submitted your issue if there are unticked boxes a developer may not address the issue make sure you comply with the checklist and then start writing in the details section below i have searched for existing issues for issues like this one the issue has not been posted duplicate reports slow down development i have provided reproducable steps no as the title says posts please provide reproducable steps even if it seems like a waste of time i have provided my client version in the details type ver into your game window details an aoe weaponskill used on a mob near a nm may bypass the claimshield on said nm | 0 |
27,222 | 4,937,527,751 | IssuesEvent | 2016-11-29 08:07:49 | TNGSB/eWallet | https://api.github.com/repos/TNGSB/eWallet | closed | eWallet_MobileApp_Android (Registration) #13 | Defect - High (Sev-2) | [Uploading Defect_Mobile App #13.xlsx…]()
Test Description : To validate error message displayed when user input wrong email format
Expected Result : System should display a correct error message "Please fill in the blank with *"
Actual Result : System displayed wrong error message
Refer attached document for POT
*Apply to both Android & IOS | 1.0 | eWallet_MobileApp_Android (Registration) #13 - [Uploading Defect_Mobile App #13.xlsx…]()
Test Description : To validate error message displayed when user input wrong email format
Expected Result : System should display a correct error message "Please fill in the blank with *"
Actual Result : System displayed wrong error message
Refer attached document for POT
*Apply to both Android & IOS | defect | ewallet mobileapp android registration test description to validate error message displayed when user input wrong email format expected result system should display a correct error message please fill in the blank with actual result system displayed wrong error message refer attached document for pot apply to both android ios | 1 |
52,457 | 13,752,060,340 | IssuesEvent | 2020-10-06 14:05:10 | ckauhaus/nixpkgs | https://api.github.com/repos/ckauhaus/nixpkgs | opened | Vulnerability roundup 7: qt-4.8.7: 3 advisories [7.5] | 1.severity: security | [search](https://search.nix.gsc.io/?q=qt&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=qt+in%3Apath&type=Code)
* [ ] [CVE-2018-21035](https://nvd.nist.gov/vuln/detail/CVE-2018-21035) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2020-0570](https://nvd.nist.gov/vuln/detail/CVE-2020-0570) CVSSv3=7.3 (nixos-unstable)
* [ ] [CVE-2020-17507](https://nvd.nist.gov/vuln/detail/CVE-2020-17507) CVSSv3=5.3 (nixos-unstable)
Scanned versions: nixos-unstable: 84d74ae9c9c.
| True | Vulnerability roundup 7: qt-4.8.7: 3 advisories [7.5] - [search](https://search.nix.gsc.io/?q=qt&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=qt+in%3Apath&type=Code)
* [ ] [CVE-2018-21035](https://nvd.nist.gov/vuln/detail/CVE-2018-21035) CVSSv3=7.5 (nixos-unstable)
* [ ] [CVE-2020-0570](https://nvd.nist.gov/vuln/detail/CVE-2020-0570) CVSSv3=7.3 (nixos-unstable)
* [ ] [CVE-2020-17507](https://nvd.nist.gov/vuln/detail/CVE-2020-17507) CVSSv3=5.3 (nixos-unstable)
Scanned versions: nixos-unstable: 84d74ae9c9c.
| non_defect | vulnerability roundup qt advisories nixos unstable nixos unstable nixos unstable scanned versions nixos unstable | 0 |
97,268 | 11,007,547,130 | IssuesEvent | 2019-12-04 08:45:13 | cseeger-epages/mail2most | https://api.github.com/repos/cseeger-epages/mail2most | closed | Support posting direct messages | documentation enhancement | We‘re using a Mattermost admin account allowed to post all channels and direct messages.
When configuring a profile posting to an @username, a mail never gets forwarded to the user. Is this not supported or is our configuration buggy?
| 1.0 | Support posting direct messages - We‘re using a Mattermost admin account allowed to post all channels and direct messages.
When configuring a profile posting to an @username, a mail never gets forwarded to the user. Is this not supported or is our configuration buggy?
| non_defect | support posting direct messages we‘re using a mattermost admin account allowed to post all channels and direct messages when configuring a profile posting to an username a mail never gets forwarded to the user is this not supported or is our configuration buggy | 0 |
12,093 | 2,684,928,633 | IssuesEvent | 2015-03-29 14:34:07 | sandrogauci/tftptheft | https://api.github.com/repos/sandrogauci/tftptheft | closed | it doesn't work ! | auto-migrated Priority-Medium Type-Defect | ```
Traceback (most recent call last):
File "finder.py", line 10, in ?
from lib.tftplib import tftp, tftpstruct
File "/user/tftptheft/lib/tftplib.py", line 4, in ?
from contrib.construct import *
File "/user/tftptheft/lib/contrib/construct/__init__.py", line 27, in ?
from core import *
File "/user/tftptheft/lib/contrib/construct/core.py", line 861
finally:
^
SyntaxError: invalid syntax
why ????????
```
Original issue reported on code.google.com by `rahimr...@gmail.com` on 19 Oct 2011 at 2:36 | 1.0 | it doesn't work ! - ```
Traceback (most recent call last):
File "finder.py", line 10, in ?
from lib.tftplib import tftp, tftpstruct
File "/user/tftptheft/lib/tftplib.py", line 4, in ?
from contrib.construct import *
File "/user/tftptheft/lib/contrib/construct/__init__.py", line 27, in ?
from core import *
File "/user/tftptheft/lib/contrib/construct/core.py", line 861
finally:
^
SyntaxError: invalid syntax
why ????????
```
Original issue reported on code.google.com by `rahimr...@gmail.com` on 19 Oct 2011 at 2:36 | defect | it doesn t work traceback most recent call last file finder py line in from lib tftplib import tftp tftpstruct file user tftptheft lib tftplib py line in from contrib construct import file user tftptheft lib contrib construct init py line in from core import file user tftptheft lib contrib construct core py line finally syntaxerror invalid syntax why original issue reported on code google com by rahimr gmail com on oct at | 1 |
454,414 | 13,100,465,416 | IssuesEvent | 2020-08-04 00:37:28 | rathena/rathena | https://api.github.com/repos/rathena/rathena | closed | Bonus ATK stack and overwrite with Bonus atk from refine weapon | component:core mode:prerenewal mode:renewal priority:low status:confirmed type:bug | <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**:
<!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue.
How to get your GitHub Hash:
1. cd your/rAthena/directory/
2. git rev-parse --short HEAD
3. Copy the resulting hash.
-->757a45932
* **Client Date**:
20180621
<!-- Please specify the client date you used. -->
* **Server Mode**:
RE
<!-- Which mode does your server use: Pre-Renewal or Renewal? -->
* **Description of Issue**:
* Result: <!-- Describe the issue that you experienced in detail. -->
Bonus ATK not affect
* Expected Result: <!-- Describe what you would expect to happen in detail. -->
Should Increase
* How to Reproduce: <!-- If you have not stated in the description of the result already, please give us a short guide how we can reproduce your issue. -->
Try to create Some Headgear which is have Bonus ATK Ex: 18980
And Create 2 rod
1 have refine weapon
1 without refine weapon
try to refine this item 18980 to +10
If use Old Wind Whisper(18980)+10 without weapon or with weapon no refine is work perfeclty get bonus atk +40
see this SS below :
http://prntscr.com/mwkczq
http://prntscr.com/mwkdfn
If i try to refine rod to +4 Above bonus atk from HG didn't working
See this SS Below:
http://prntscr.com/mwkdzq
See this if use HG and weapon refine+4
http://prntscr.com/mwke44
Only increase affect from alltstat+1 and increase 1 atk ist, should have bonus atk 25+41(From HG)
Or You can see my video
* Official Information:<!-- If possible, provide information from official servers (kRO or other sources) which prove that the result is wrong. Please take into account that iRO (especially iRO Wiki) is not always the same as kRO. -->
<!-- * _NOTE: Make sure you quote ``` `@atcommands` ``` just like this so that you do not tag uninvolved GitHub users!_ -->
* **Modifications that may affect results**:
<!-- * Please provide any information that could influence the expected result. -->
<!-- * This can be either configurations you changed, database values you changed, or even external source modifications. -->
No custom
See this Video :
https://youtu.be/i3eToJiheY4 | 1.0 | Bonus ATK stack and overwrite with Bonus atk from refine weapon - <!-- NOTE: Anything within these brackets will be hidden on the preview of the Issue. -->
* **rAthena Hash**:
<!-- Please specify the rAthena [GitHub hash](https://help.github.com/articles/autolinked-references-and-urls/#commit-shas) on which you encountered this issue.
How to get your GitHub Hash:
1. cd your/rAthena/directory/
2. git rev-parse --short HEAD
3. Copy the resulting hash.
-->757a45932
* **Client Date**:
20180621
<!-- Please specify the client date you used. -->
* **Server Mode**:
RE
<!-- Which mode does your server use: Pre-Renewal or Renewal? -->
* **Description of Issue**:
* Result: <!-- Describe the issue that you experienced in detail. -->
Bonus ATK not affect
* Expected Result: <!-- Describe what you would expect to happen in detail. -->
Should Increase
* How to Reproduce: <!-- If you have not stated in the description of the result already, please give us a short guide how we can reproduce your issue. -->
Try to create Some Headgear which is have Bonus ATK Ex: 18980
And Create 2 rod
1 have refine weapon
1 without refine weapon
try to refine this item 18980 to +10
If use Old Wind Whisper(18980)+10 without weapon or with weapon no refine is work perfeclty get bonus atk +40
see this SS below :
http://prntscr.com/mwkczq
http://prntscr.com/mwkdfn
If i try to refine rod to +4 Above bonus atk from HG didn't working
See this SS Below:
http://prntscr.com/mwkdzq
See this if use HG and weapon refine+4
http://prntscr.com/mwke44
Only increase affect from alltstat+1 and increase 1 atk ist, should have bonus atk 25+41(From HG)
Or You can see my video
* Official Information:<!-- If possible, provide information from official servers (kRO or other sources) which prove that the result is wrong. Please take into account that iRO (especially iRO Wiki) is not always the same as kRO. -->
<!-- * _NOTE: Make sure you quote ``` `@atcommands` ``` just like this so that you do not tag uninvolved GitHub users!_ -->
* **Modifications that may affect results**:
<!-- * Please provide any information that could influence the expected result. -->
<!-- * This can be either configurations you changed, database values you changed, or even external source modifications. -->
No custom
See this Video :
https://youtu.be/i3eToJiheY4 | non_defect | bonus atk stack and overwrite with bonus atk from refine weapon rathena hash please specify the rathena on which you encountered this issue how to get your github hash cd your rathena directory git rev parse short head copy the resulting hash client date server mode re description of issue result bonus atk not affect expected result should increase how to reproduce try to create some headgear which is have bonus atk ex and create rod have refine weapon without refine weapon try to refine this item to if use old wind whisper without weapon or with weapon no refine is work perfeclty get bonus atk see this ss below if i try to refine rod to above bonus atk from hg didn t working see this ss below see this if use hg and weapon refine only increase affect from alltstat and increase atk ist should have bonus atk from hg or you can see my video official information modifications that may affect results no custom see this video | 0 |
16,888 | 2,957,616,249 | IssuesEvent | 2015-07-08 17:13:43 | master801/PS-64 | https://api.github.com/repos/master801/PS-64 | closed | Support for WiiKKey Fusion/WiiWasp modded Gamecubes | auto-migrated Priority-Medium Type-Defect | ```
I have my Gamecube modded with a WiiKey Fusion. Swiss is flashed to the WiiKey,
and I can run DOLs and games. I can run Cube64 from an SD, but it can't find
any ROMS or save settings.cfg to the SD card. The SD card is plugged into the
SD reader of my WiiKey modchip. Would this be causing a problem? Could it be
that Cube64 is looking for an SD card in the wrong place? Yes, everything is on
the root of the SD card. I'm asking if Cube64 can't tell that I have an SD card
plugged into my modchip.
Ideally, I'd like the option to save settings.cfg and roms on whatever device
you're running Cube64 from. That way, it wouldn't matter what you have it on,
and it wouldn't matter that the SD card was plugged into a modchip.
This wasn't exactly the most eloquent way to word my issue. If anyone can
explain it better, please do. I'd love to have Cube64 running properly on my
Gamecube, and I'm sure there are others out who'd agree.
```
Original issue reported on code.google.com by `wdd...@gmail.com` on 26 Jul 2014 at 1:24 | 1.0 | Support for WiiKKey Fusion/WiiWasp modded Gamecubes - ```
I have my Gamecube modded with a WiiKey Fusion. Swiss is flashed to the WiiKey,
and I can run DOLs and games. I can run Cube64 from an SD, but it can't find
any ROMS or save settings.cfg to the SD card. The SD card is plugged into the
SD reader of my WiiKey modchip. Would this be causing a problem? Could it be
that Cube64 is looking for an SD card in the wrong place? Yes, everything is on
the root of the SD card. I'm asking if Cube64 can't tell that I have an SD card
plugged into my modchip.
Ideally, I'd like the option to save settings.cfg and roms on whatever device
you're running Cube64 from. That way, it wouldn't matter what you have it on,
and it wouldn't matter that the SD card was plugged into a modchip.
This wasn't exactly the most eloquent way to word my issue. If anyone can
explain it better, please do. I'd love to have Cube64 running properly on my
Gamecube, and I'm sure there are others out who'd agree.
```
Original issue reported on code.google.com by `wdd...@gmail.com` on 26 Jul 2014 at 1:24 | defect | support for wiikkey fusion wiiwasp modded gamecubes i have my gamecube modded with a wiikey fusion swiss is flashed to the wiikey and i can run dols and games i can run from an sd but it can t find any roms or save settings cfg to the sd card the sd card is plugged into the sd reader of my wiikey modchip would this be causing a problem could it be that is looking for an sd card in the wrong place yes everything is on the root of the sd card i m asking if can t tell that i have an sd card plugged into my modchip ideally i d like the option to save settings cfg and roms on whatever device you re running from that way it wouldn t matter what you have it on and it wouldn t matter that the sd card was plugged into a modchip this wasn t exactly the most eloquent way to word my issue if anyone can explain it better please do i d love to have running properly on my gamecube and i m sure there are others out who d agree original issue reported on code google com by wdd gmail com on jul at | 1 |
78,818 | 27,771,402,106 | IssuesEvent | 2023-03-16 14:40:33 | openziti/zrok | https://api.github.com/repos/openziti/zrok | closed | verify apiEndpoint before zrok config set | defect | **Observed Behavior:**
I ran:
* zrok config set apiEndpoint api.zrok.io
* zrok enable xxxxxx
zrok errored with:
```
[ERROR]: error creating service client (error getting version from api endpoint 'api.zrok.io': Get "http:///api/v1/version": http: no Host in request URL: Get "http:///api/v1/version": http: no Host in request URL)
```
It took me a moment to realize I had to supply `https://` to `zrok config set`
**Expectecd Behavior:**
zrok either should have just prepended https:// for me and I never would have known any better, or it should have error'ed with `zrok config set` and told me I need to provide a valid api URL
| 1.0 | verify apiEndpoint before zrok config set - **Observed Behavior:**
I ran:
* zrok config set apiEndpoint api.zrok.io
* zrok enable xxxxxx
zrok errored with:
```
[ERROR]: error creating service client (error getting version from api endpoint 'api.zrok.io': Get "http:///api/v1/version": http: no Host in request URL: Get "http:///api/v1/version": http: no Host in request URL)
```
It took me a moment to realize I had to supply `https://` to `zrok config set`
**Expectecd Behavior:**
zrok either should have just prepended https:// for me and I never would have known any better, or it should have error'ed with `zrok config set` and told me I need to provide a valid api URL
| defect | verify apiendpoint before zrok config set observed behavior i ran zrok config set apiendpoint api zrok io zrok enable xxxxxx zrok errored with error creating service client error getting version from api endpoint api zrok io get http no host in request url get http no host in request url it took me a moment to realize i had to supply to zrok config set expectecd behavior zrok either should have just prepended https for me and i never would have known any better or it should have error ed with zrok config set and told me i need to provide a valid api url | 1 |
161,025 | 12,529,899,770 | IssuesEvent | 2020-06-04 12:11:05 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | sql: TestActiveCancelSession failed under stress | C-test-failure O-robot branch-master | SHA: https://github.com/cockroachdb/cockroach/commits/7c583d146d498057b154fb67e54446fa705cec6c
Parameters:
```
TAGS=
GOFLAGS=
```
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stress TESTS=TestActiveCancelSession PKG=github.com/cockroachdb/cockroach/pkg/sql TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1551386&tab=buildLog
```
I191022 05:49:40.790860 99476 server/node.go:546 [n2] node=2: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I191022 05:49:40.791055 99476 server/status/recorder.go:609 [n2] available memory from cgroups (8.0 EiB) exceeds system memory 29 GiB, using system memory
I191022 05:49:40.791080 99476 server/server.go:1820 [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I191022 05:49:40.791141 99476 server/server.go:1534 [n2] starting https server at 127.0.0.1:38565 (use: 127.0.0.1:38565)
I191022 05:49:40.791156 99476 server/server.go:1536 [n2] starting grpc/postgres server at 127.0.0.1:43059
I191022 05:49:40.791169 99476 server/server.go:1537 [n2] advertising CockroachDB node at 127.0.0.1:43059
I191022 05:49:40.793255 99476 server/server.go:1590 [n2] done ensuring all necessary migrations have run
I191022 05:49:40.793281 99476 server/server.go:1593 [n2] serving sql connections
I191022 05:49:40.798937 99408 rpc/nodedialer/nodedialer.go:95 [n1] unable to connect to n2: failed to resolve n2: unable to look up descriptor for n2
I191022 05:49:40.804508 99939 server/server_update.go:67 [n2] no need to upgrade, cluster already at the newest version
I191022 05:49:40.808606 99479 storage/stores.go:261 [n1] wrote 1 node addresses to persistent storage
I191022 05:49:40.814418 99941 sql/event_log.go:126 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:43059} Attrs: Locality: ServerVersion:2.1 BuildTag:v2.1.9-7-g7c583d1 StartedAt:1571723380789509688 LocalityAddress:[]} ClusterID:0262a10f-3a90-41ac-82e6-b80400949884 StartedAt:1571723380789509688 LastUp:1571723380789509688}
I191022 05:49:41.663524 99983 storage/replica_consistency.go:127 [n1,consistencyChecker,s1,r1/1:/M{in-ax}] triggering stats recomputation to resolve delta of {ContainsEstimates:true LastUpdateNanos:1571723380801782718 IntentAge:0 GCBytesAge:0 LiveBytes:-44602 LiveCount:-932 KeyBytes:-43412 KeyCount:-932 ValBytes:-1190 ValCount:-932 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I191022 05:49:42.300591 99516 gossip/gossip.go:1489 [n1] node has connected to cluster via gossip
I191022 05:49:42.300745 99516 storage/stores.go:261 [n1] wrote 1 node addresses to persistent storage
I191022 05:49:43.084551 99790 gossip/gossip.go:1489 [n2] node has connected to cluster via gossip
I191022 05:49:43.084708 99790 storage/stores.go:261 [n2] wrote 1 node addresses to persistent storage
I191022 05:49:50.649304 99716 server/status/runtime.go:465 [n1] runtime stats: 185 MiB RSS, 656 goroutines, 24 MiB/59 MiB/103 MiB GO alloc/idle/total, 18 MiB/55 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (700x), 88 MiB/88 MiB (r/w)net
W191022 05:49:50.675356 99718 server/node.go:886 [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:ranges.underreplicated Value:1}]}
I191022 05:49:50.792899 99913 server/status/runtime.go:465 [n2] runtime stats: 185 MiB RSS, 656 goroutines, 29 MiB/54 MiB/103 MiB GO alloc/idle/total, 18 MiB/55 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (700x), 91 MiB/91 MiB (r/w)net
E191022 05:49:50.799616 100340 sql/distsqlrun/flow_registry.go:230 [intExec=read orphaned table leases] flow id:382a35fa-6ba4-4a7d-a0ab-b016c5c4eec3 : 1 inbound streams timed out after 10s; propagated error throughout flow
I191022 05:49:51.047161 100343 util/stop/stopper.go:548 quiescing; tasks left:
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
I191022 05:49:51.047187 100342 util/stop/stopper.go:548 quiescing; tasks left:
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
W191022 05:49:51.047267 99790 gossip/gossip.go:1475 [n2] no incoming or outgoing connections
I191022 05:49:51.047832 100343 util/stop/stopper.go:548 quiescing; tasks left:
1 [async] closedts-rangefeed-subscriber
I191022 05:49:51.047989 100342 util/stop/stopper.go:548 quiescing; tasks left:
1 [async] closedts-rangefeed-subscriber
--- FAIL: TestActiveCancelSession (10.44s)
run_control_test.go:321: expected 2 sessions but found 3
``` | 1.0 | sql: TestActiveCancelSession failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/7c583d146d498057b154fb67e54446fa705cec6c
Parameters:
```
TAGS=
GOFLAGS=
```
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stressrace instead of stress and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stress TESTS=TestActiveCancelSession PKG=github.com/cockroachdb/cockroach/pkg/sql TESTTIMEOUT=5m STRESSFLAGS='-stderr=false -maxtime 20m -timeout 10m'
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1551386&tab=buildLog
```
I191022 05:49:40.790860 99476 server/node.go:546 [n2] node=2: started with [<no-attributes>=<in-mem>] engine(s) and attributes []
I191022 05:49:40.791055 99476 server/status/recorder.go:609 [n2] available memory from cgroups (8.0 EiB) exceeds system memory 29 GiB, using system memory
I191022 05:49:40.791080 99476 server/server.go:1820 [n2] Could not start heap profiler worker due to: directory to store profiles could not be determined
I191022 05:49:40.791141 99476 server/server.go:1534 [n2] starting https server at 127.0.0.1:38565 (use: 127.0.0.1:38565)
I191022 05:49:40.791156 99476 server/server.go:1536 [n2] starting grpc/postgres server at 127.0.0.1:43059
I191022 05:49:40.791169 99476 server/server.go:1537 [n2] advertising CockroachDB node at 127.0.0.1:43059
I191022 05:49:40.793255 99476 server/server.go:1590 [n2] done ensuring all necessary migrations have run
I191022 05:49:40.793281 99476 server/server.go:1593 [n2] serving sql connections
I191022 05:49:40.798937 99408 rpc/nodedialer/nodedialer.go:95 [n1] unable to connect to n2: failed to resolve n2: unable to look up descriptor for n2
I191022 05:49:40.804508 99939 server/server_update.go:67 [n2] no need to upgrade, cluster already at the newest version
I191022 05:49:40.808606 99479 storage/stores.go:261 [n1] wrote 1 node addresses to persistent storage
I191022 05:49:40.814418 99941 sql/event_log.go:126 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:43059} Attrs: Locality: ServerVersion:2.1 BuildTag:v2.1.9-7-g7c583d1 StartedAt:1571723380789509688 LocalityAddress:[]} ClusterID:0262a10f-3a90-41ac-82e6-b80400949884 StartedAt:1571723380789509688 LastUp:1571723380789509688}
I191022 05:49:41.663524 99983 storage/replica_consistency.go:127 [n1,consistencyChecker,s1,r1/1:/M{in-ax}] triggering stats recomputation to resolve delta of {ContainsEstimates:true LastUpdateNanos:1571723380801782718 IntentAge:0 GCBytesAge:0 LiveBytes:-44602 LiveCount:-932 KeyBytes:-43412 KeyCount:-932 ValBytes:-1190 ValCount:-932 IntentBytes:0 IntentCount:0 SysBytes:0 SysCount:0}
I191022 05:49:42.300591 99516 gossip/gossip.go:1489 [n1] node has connected to cluster via gossip
I191022 05:49:42.300745 99516 storage/stores.go:261 [n1] wrote 1 node addresses to persistent storage
I191022 05:49:43.084551 99790 gossip/gossip.go:1489 [n2] node has connected to cluster via gossip
I191022 05:49:43.084708 99790 storage/stores.go:261 [n2] wrote 1 node addresses to persistent storage
I191022 05:49:50.649304 99716 server/status/runtime.go:465 [n1] runtime stats: 185 MiB RSS, 656 goroutines, 24 MiB/59 MiB/103 MiB GO alloc/idle/total, 18 MiB/55 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (700x), 88 MiB/88 MiB (r/w)net
W191022 05:49:50.675356 99718 server/node.go:886 [n1,summaries] health alerts detected: {Alerts:[{StoreID:1 Category:METRICS Description:ranges.underreplicated Value:1}]}
I191022 05:49:50.792899 99913 server/status/runtime.go:465 [n2] runtime stats: 185 MiB RSS, 656 goroutines, 29 MiB/54 MiB/103 MiB GO alloc/idle/total, 18 MiB/55 MiB CGO alloc/total, 0.0 CGO/sec, 0.0/0.0 %(u/s)time, 0.0 %gc (700x), 91 MiB/91 MiB (r/w)net
E191022 05:49:50.799616 100340 sql/distsqlrun/flow_registry.go:230 [intExec=read orphaned table leases] flow id:382a35fa-6ba4-4a7d-a0ab-b016c5c4eec3 : 1 inbound streams timed out after 10s; propagated error throughout flow
I191022 05:49:51.047161 100343 util/stop/stopper.go:548 quiescing; tasks left:
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
I191022 05:49:51.047187 100342 util/stop/stopper.go:548 quiescing; tasks left:
1 [async] closedts-subscription
1 [async] closedts-rangefeed-subscriber
W191022 05:49:51.047267 99790 gossip/gossip.go:1475 [n2] no incoming or outgoing connections
I191022 05:49:51.047832 100343 util/stop/stopper.go:548 quiescing; tasks left:
1 [async] closedts-rangefeed-subscriber
I191022 05:49:51.047989 100342 util/stop/stopper.go:548 quiescing; tasks left:
1 [async] closedts-rangefeed-subscriber
--- FAIL: TestActiveCancelSession (10.44s)
run_control_test.go:321: expected 2 sessions but found 3
``` | non_defect | sql testactivecancelsession failed under stress sha parameters tags goflags to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stressrace instead of stress and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stress tests testactivecancelsession pkg github com cockroachdb cockroach pkg sql testtimeout stressflags stderr false maxtime timeout failed test server node go node started with engine s and attributes server status recorder go available memory from cgroups eib exceeds system memory gib using system memory server server go could not start heap profiler worker due to directory to store profiles could not be determined server server go starting https server at use server server go starting grpc postgres server at server server go advertising cockroachdb node at server server go done ensuring all necessary migrations have run server server go serving sql connections rpc nodedialer nodedialer go unable to connect to failed to resolve unable to look up descriptor for server server update go no need to upgrade cluster already at the newest version storage stores go wrote node addresses to persistent storage sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality serverversion buildtag startedat localityaddress clusterid startedat lastup storage replica consistency go triggering stats recomputation to resolve delta of containsestimates true lastupdatenanos intentage gcbytesage livebytes livecount keybytes keycount valbytes valcount intentbytes intentcount sysbytes syscount gossip gossip go node has connected to cluster via gossip storage stores go wrote node addresses to persistent storage gossip gossip go node has connected to cluster via gossip storage stores go wrote node addresses to persistent storage server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net server node go health alerts detected alerts server status runtime go runtime stats mib rss goroutines mib mib mib go alloc idle total mib mib cgo alloc total cgo sec u s time gc mib mib r w net sql distsqlrun flow registry go flow id inbound streams timed out after propagated error throughout flow util stop stopper go quiescing tasks left closedts subscription closedts rangefeed subscriber util stop stopper go quiescing tasks left closedts subscription closedts rangefeed subscriber gossip gossip go no incoming or outgoing connections util stop stopper go quiescing tasks left closedts rangefeed subscriber util stop stopper go quiescing tasks left closedts rangefeed subscriber fail testactivecancelsession run control test go expected sessions but found | 0 |
41,326 | 8,958,966,939 | IssuesEvent | 2019-01-27 18:35:26 | MadeMadeDane/castle_o_puzzles | https://api.github.com/repos/MadeMadeDane/castle_o_puzzles | closed | [PhysicsProp] Wind | code mapping physicsprop | Wind will cause a trigger region to act as a vector field of force (for now with constant direction/magnitude) with configurable effects on FluidDynamic objects, including the player. | 1.0 | [PhysicsProp] Wind - Wind will cause a trigger region to act as a vector field of force (for now with constant direction/magnitude) with configurable effects on FluidDynamic objects, including the player. | non_defect | wind wind will cause a trigger region to act as a vector field of force for now with constant direction magnitude with configurable effects on fluiddynamic objects including the player | 0 |
346,896 | 31,032,343,591 | IssuesEvent | 2023-08-10 13:15:23 | kiwix/kiwix-js | https://api.github.com/repos/kiwix/kiwix-js | closed | Re-enable Unit Tests on IE11 | tests | This arises from #736. Having polyfilled Promises, it should now be possible to re-enable (online) Unit Tests on IE11. #736 fixes local (browser-run) tests for IE11, so there is no reason, on the face of it, why they should not be able to run automatically as well, assuming Sauce Labs provides an IE11 instance. | 1.0 | Re-enable Unit Tests on IE11 - This arises from #736. Having polyfilled Promises, it should now be possible to re-enable (online) Unit Tests on IE11. #736 fixes local (browser-run) tests for IE11, so there is no reason, on the face of it, why they should not be able to run automatically as well, assuming Sauce Labs provides an IE11 instance. | non_defect | re enable unit tests on this arises from having polyfilled promises it should now be possible to re enable online unit tests on fixes local browser run tests for so there is no reason on the face of it why they should not be able to run automatically as well assuming sauce labs provides an instance | 0 |
63,700 | 17,864,338,587 | IssuesEvent | 2021-09-06 07:34:03 | vector-im/element-ios | https://api.github.com/repos/vector-im/element-ios | opened | Unable to connect calls when app is not running. | T-Defect | ### Steps to reproduce
Steps to reproduce the behavior:
1. Login to your account, then kill the app.
2. Use another account to initiate a one-to-one voice call with this account.
3. When call kit incoming screen show up, click blue check button to accept the call invitation.
### What happened?
### What did you expect?
When I accept the call invitation, app should navigate to call screen and I can attend the call.
### What happened?
* When I accept the call invitation, it only shows Home screen:

* The room screen is displayed like this:

* Call is not connected but the native screen still running ( dismissed after 1 minutes):

### Your phone model
iPhone 11
### Operating system version
iOS 14.6
### Application version
Element version 1.5.1
### Homeserver
matrix.org
### Have you submitted a rageshake?
No | 1.0 | Unable to connect calls when app is not running. - ### Steps to reproduce
Steps to reproduce the behavior:
1. Login to your account, then kill the app.
2. Use another account to initiate a one-to-one voice call with this account.
3. When call kit incoming screen show up, click blue check button to accept the call invitation.
### What happened?
### What did you expect?
When I accept the call invitation, app should navigate to call screen and I can attend the call.
### What happened?
* When I accept the call invitation, it only shows Home screen:

* The room screen is displayed like this:

* Call is not connected but the native screen still running ( dismissed after 1 minutes):

### Your phone model
iPhone 11
### Operating system version
iOS 14.6
### Application version
Element version 1.5.1
### Homeserver
matrix.org
### Have you submitted a rageshake?
No | defect | unable to connect calls when app is not running steps to reproduce steps to reproduce the behavior login to your account then kill the app use another account to initiate a one to one voice call with this account when call kit incoming screen show up click blue check button to accept the call invitation what happened what did you expect when i accept the call invitation app should navigate to call screen and i can attend the call what happened when i accept the call invitation it only shows home screen the room screen is displayed like this call is not connected but the native screen still running dismissed after minutes your phone model iphone operating system version ios application version element version homeserver matrix org have you submitted a rageshake no | 1 |
70,848 | 23,342,247,901 | IssuesEvent | 2022-08-09 14:53:10 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | Orphaned form label within Editorial Workflow component of the CMS | Needs refining ⭐️ Sitewide CMS 508/Accessibility 508-defect-2 | ## Description
In the Editorial workflow component on the edit screens in the CMS, the "Current State" is displayed using the `<label>` element. However, there is no associated field causing this to be an orphaned form label.
## Screenshot

## Accessibility Standard
WCAG version 2.0 AA, [Criterion 1.3.1](https://www.w3.org/WAI/WCAG21/Understanding/info-and-relationships.html)
## Acceptance Criteria
- [ ] Determine if this text could be better displayed using a different element tag, perhaps `<p>`
- [ ] Technical feasibility review
- [ ] Change management consulted
- [ ] Implementation ticket created
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
| 1.0 | Orphaned form label within Editorial Workflow component of the CMS - ## Description
In the Editorial workflow component on the edit screens in the CMS, the "Current State" is displayed using the `<label>` element. However, there is no associated field causing this to be an orphaned form label.
## Screenshot

## Accessibility Standard
WCAG version 2.0 AA, [Criterion 1.3.1](https://www.w3.org/WAI/WCAG21/Understanding/info-and-relationships.html)
## Acceptance Criteria
- [ ] Determine if this text could be better displayed using a different element tag, perhaps `<p>`
- [ ] Technical feasibility review
- [ ] Change management consulted
- [ ] Implementation ticket created
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
| defect | orphaned form label within editorial workflow component of the cms description in the editorial workflow component on the edit screens in the cms the current state is displayed using the element however there is no associated field causing this to be an orphaned form label screenshot accessibility standard wcag version aa acceptance criteria determine if this text could be better displayed using a different element tag perhaps technical feasibility review change management consulted implementation ticket created cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support | 1 |
135,876 | 30,442,800,804 | IssuesEvent | 2023-07-15 09:20:51 | linwu-hi/coding-time | https://api.github.com/repos/linwu-hi/coding-time | opened | poker | javascript typescript dart leetcode 数据结构和算法 data-structures algorithms | # TS实战之扑克牌排序
[在线运行](https://code.juejin.cn/pen/7254739493366333499)
我们用`ts实现扑克牌排序问题`,首先,我们将定义所需的数据类型,然后专注于模式查找算法,该算法有几个有趣的要点。
## 类型和转换
定义一些我们需要的类型。`Rank`和`Suit`是明显的[联合类型](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#union-types)。
```ts
type Rank =
| 'A' | '2' | '3' | '4' | '5' | '6' | '7'
| '8' | '9' | '10' | 'J' | 'Q' | 'K'
type Suit = '♥' | '♦' | '♠' | '♣';
```
我们将使用`Card`对象进行处理,将rank和suit转换为数字。卡片将用从1(Ace)到13(King)的值表示,花色从1(红心)到4(梅花)。`rankToNumber()`和`suitToNumber()`函数处理从`Rank`和`Suit`值到数字的转换。
```ts
type Card = { rank: number; suit: number };
const rankToNumber = (rank: Rank): number =>
rank === 'A' ? 1
: rank === 'J' ? 11
: rank === 'Q' ? 12
: rank === 'K' ? 13
: Number(rank);
const suitToNumber = (suit: Suit): number =>
suit === '♥' ? 1
: suit === '♦' ? 2
: suit === '♠' ? 3
: /* suit === "♣" */ 4;
```

这些类型用于内部工作;我们还必须定义手牌检测算法的结果类型。我们需要一个[枚举](https://www.typescriptlang.org/docs/handbook/enums.html)类型来表示手牌的可能值。这些值按照从最低("高牌")到最高("皇家同花顺")的顺序排列。
```ts
enum Hand {
HighCard, // 高牌
OnePair, // 一对
TwoPairs, // 两对
ThreeOfAKind, // 三条
Straight, // 顺子
Flush, // 同花
FullHouse, // 葫芦
FourOfAKind, // 四条
StraightFlush, // 同花顺
RoyalFlush //皇家同花顺
}
```
## 我们有什么手牌?
让我们首先定义我们将要构建的`handRank()`函数。我们的函数将接收一个包含`五张牌的元组`,并返回一个`Hand`结果。
```ts
export function handRank(
cardStrings: [string, string, string, string, string]
): Hand {
.
.
.
}
```
由于处理字符串比我们需要的要困难,我们将把牌字符串转换为具有数字`rank`和`suit`值的`Card`对象,以便更容易编写。
```ts
const cards: Card[] = cardStrings.map((str: string) => ({
rank: rankToNumber(
str.substring(0, str.length - 1) as Rank
),
suit: suitToNumber(str.at(-1) as Suit)
}));
.
.
.
// 继续...
```

确定玩家手牌的价值的关键在于知道每个等级的牌有多少张,以及我们有多少计数。例如,如果我们有三张J和两张K,J的计数为3,K的计数为2。然后,知道我们有一个计数为三和一个计数为两的计数,我们可以确定我们有一个葫芦。另一个例子:如果我们有两个Q,两个A和一个5,我们会得到两个计数为两和一个计数为一;我们有两对。
生成计数很简单。我们希望A的计数在`countByRank[1]`处,因此我们不会使用`countByRank`数组的初始位置。类似地,花色的计数将位于`countBySuit[1]`到`countBySuit[4]`之间,因此我们也不会使用该数组的初始位置。
```ts
// ...继续
.
.
.
const countBySuit = new Array(5).fill(0);
const countByRank = new Array(15).fill(0);
const countBySet = new Array(5).fill(0);
cards.forEach((card: Card) => {
countByRank[card.rank]++;
countBySuit[card.suit]++;
});
countByRank.forEach(
(count: number) => count && countBySet[count]++
);
.
.
.
// 继续...
```
我们不要忘记A可能位于顺子的开头(A-2-3-4-5)或结尾(10-J-Q-K-A)。我们可以通过在K之后复制Aces计数来处理这个问题。
```ts
// ...继续
.
.
.
countByRank[14] = countByRank[1];
.
.
.
// 继续...
```
现在我们可以开始识别手牌了。我们只需要查看按等级计数即可识别几种手牌:
```ts
// ...继续
.
.
.
if (count
BySet[4] === 1 && countBySet[1] === 1)
return Hand.FourOfAKind;
else if (countBySet[3] && countBySet[2] === 1)
return Hand.FullHouse;
else if (countBySet[3] && countBySet[1] === 2)
return Hand.ThreeOfAKind;
else if (countBySet[2] === 2 && countBySet[1] === 1)
return Hand.TwoPairs;
else if (countBySet[2] === 1 && countBySet[1] === 3)
return Hand.OnePair;
.
.
.
// 继续...
```
例如,如果有四张相同等级的牌,我们知道玩家将获得“四条”。可能会问:如果`countBySet[4] === 1`,为什么还要测试`countBySet[1] === 1`?如果四张牌的等级相同,应该只有一张其他牌,对吗?答案是[“防御性编程”](https://en.wikipedia.org/wiki/Defensive_programming)——在开发代码时,有时会出现错误,通过在测试中更加具体,有助于排查错误。
上面的情况包括了所有某个等级出现多次的可能性。我们必须处理其他情况,包括顺子、同花和“高牌”。
```ts
// ...继续
.
.
.
else if (countBySet[1] === 5) {
if (countByRank.join('').includes('11111'))
return !countBySuit.includes(5)
? Hand.Straight
: countByRank.slice(10).join('') === '11111'
? Hand.RoyalFlush
: Hand.StraightFlush;
else {
return countBySuit.includes(5)
? Hand.Flush
: Hand.HighCard;
}
} else {
throw new Error(
'Unknown hand! This cannot happen! Bad logic!'
);
}
```
这里我们再次进行防御性编程;即使我们知道我们有五个不同的等级,我们也确保逻辑工作良好,甚至在出现问题时抛出一个`throw`。
我们如何测试顺子?我们应该有五个连续的等级。如果我们查看`countByRank`数组,它应该有五个连续的1,所以通过执行`countByRank.join()`并检查生成的字符串是否包含`11111`,我们可以确定是顺子。

我们必须区分几种情况:
* 如果没有五张相同花色的牌,那么它是一个普通的顺子
* 如果所有牌都是相同花色,如果顺子以一张A结束,则为皇家同花顺
* 如果所有牌都是相同花色,但我们不以A结束,那么我们有一个同花顺
如果我们没有顺子,只有两种可能性:
* 如果所有牌都是相同花色,我们有一个同花
* 如果不是所有牌都是相同花色,我们有一个“高牌”
完整的函数如下所示:
```ts
export function handRank(
cardStrings: [string, string, string, string, string]
): Hand {
const cards: Card[] = cardStrings.map((str: string) => ({
rank: rankToNumber(
str.substring(0, str.length - 1) as Rank
),
suit: suitToNumber(str.at(-1) as Suit)
}));
// We won't use the [0] place in the following arrays
const countBySuit = new Array(5).fill(0);
const countByRank = new Array(15).fill(0);
const countBySet = new Array(5).fill(0);
cards.forEach((card: Card) => {
countByRank[card.rank]++;
countBySuit[card.suit]++;
});
countByRank.forEach(
(count: number) => count && countBySet[count]++
);
// count the A also as a 14, for straights
countByRank[14] = countByRank[1];
if (countBySet[4] === 1 && countBySet[1] === 1)
return Hand.FourOfAKind;
else if (countBySet[3] && countBySet[2] === 1)
return Hand.FullHouse;
else if (countBySet[3] && countBySet[1] === 2)
return Hand.ThreeOfAKind;
else if (countBySet[2] === 2 && countBySet[1] === 1)
return Hand.TwoPairs;
else if (countBySet[2] === 1 && countBySet[1] === 3)
return Hand.OnePair;
else if (countBySet[1] === 5) {
if (countByRank.join('').includes('11111'))
return !countBySuit.includes(5)
? Hand.Straight
: countByRank.slice(10).join('') === '11111'
? Hand.RoyalFlush
: Hand.StraightFlush;
else {
/* !countByRank.join("").includes("11111") */
return countBySuit.includes(5)
? Hand.Flush
: Hand.HighCard;
}
} else {
throw new Error(
'Unknown hand! This cannot happen! Bad logic!'
);
}
}
```
## 测试代码
```ts
console.log(handRank(['3♥', '5♦', '8♣', 'A♥', '6♠'])); // 0
console.log(handRank(['3♥', '5♦', '8♣', 'A♥', '5♠'])); // 1
console.log(handRank(['3♥', '5♦', '3♣', 'A♥', '5♠'])); // 2
console.log(handRank(['3♥', '5♦', '8♣', '5♥', '5♠'])); // 3
console.log(handRank(['3♥', '2♦', 'A♣', '5♥', '4♠'])); // 4
console.log(handRank(['J♥', '10♦', 'A♣', 'Q♥', 'K♠'])); // 4
console.log(handRank(['3♥', '4♦', '7♣', '5♥', '6♠'])); // 4
console.log(handRank(['3♥', '4♥', '9♥', '5♥', '6♥'])); // 5
console.log(handRank(['3♥', '5♦', '3♣', '5♥', '3♠'])); // 6
console.log(handRank(['3♥', '3♦', '3♣', '5♥', '3♠'])); // 7
console.log(handRank(['3♥', '4♥', '7♥', '5♥', '6♥'])); // 8
console.log(handRank(['K♥', 'Q♥', 'A♥', '10♥', 'J♥'])); // 9
```
[在线运行](https://code.juejin.cn/pen/7254739493366333499)
| 1.0 | poker - # TS实战之扑克牌排序
[在线运行](https://code.juejin.cn/pen/7254739493366333499)
我们用`ts实现扑克牌排序问题`,首先,我们将定义所需的数据类型,然后专注于模式查找算法,该算法有几个有趣的要点。
## 类型和转换
定义一些我们需要的类型。`Rank`和`Suit`是明显的[联合类型](https://www.typescriptlang.org/docs/handbook/2/everyday-types.html#union-types)。
```ts
type Rank =
| 'A' | '2' | '3' | '4' | '5' | '6' | '7'
| '8' | '9' | '10' | 'J' | 'Q' | 'K'
type Suit = '♥' | '♦' | '♠' | '♣';
```
我们将使用`Card`对象进行处理,将rank和suit转换为数字。卡片将用从1(Ace)到13(King)的值表示,花色从1(红心)到4(梅花)。`rankToNumber()`和`suitToNumber()`函数处理从`Rank`和`Suit`值到数字的转换。
```ts
type Card = { rank: number; suit: number };
const rankToNumber = (rank: Rank): number =>
rank === 'A' ? 1
: rank === 'J' ? 11
: rank === 'Q' ? 12
: rank === 'K' ? 13
: Number(rank);
const suitToNumber = (suit: Suit): number =>
suit === '♥' ? 1
: suit === '♦' ? 2
: suit === '♠' ? 3
: /* suit === "♣" */ 4;
```

这些类型用于内部工作;我们还必须定义手牌检测算法的结果类型。我们需要一个[枚举](https://www.typescriptlang.org/docs/handbook/enums.html)类型来表示手牌的可能值。这些值按照从最低("高牌")到最高("皇家同花顺")的顺序排列。
```ts
enum Hand {
HighCard, // 高牌
OnePair, // 一对
TwoPairs, // 两对
ThreeOfAKind, // 三条
Straight, // 顺子
Flush, // 同花
FullHouse, // 葫芦
FourOfAKind, // 四条
StraightFlush, // 同花顺
RoyalFlush //皇家同花顺
}
```
## 我们有什么手牌?
让我们首先定义我们将要构建的`handRank()`函数。我们的函数将接收一个包含`五张牌的元组`,并返回一个`Hand`结果。
```ts
export function handRank(
cardStrings: [string, string, string, string, string]
): Hand {
.
.
.
}
```
由于处理字符串比我们需要的要困难,我们将把牌字符串转换为具有数字`rank`和`suit`值的`Card`对象,以便更容易编写。
```ts
const cards: Card[] = cardStrings.map((str: string) => ({
rank: rankToNumber(
str.substring(0, str.length - 1) as Rank
),
suit: suitToNumber(str.at(-1) as Suit)
}));
.
.
.
// 继续...
```

确定玩家手牌的价值的关键在于知道每个等级的牌有多少张,以及我们有多少计数。例如,如果我们有三张J和两张K,J的计数为3,K的计数为2。然后,知道我们有一个计数为三和一个计数为两的计数,我们可以确定我们有一个葫芦。另一个例子:如果我们有两个Q,两个A和一个5,我们会得到两个计数为两和一个计数为一;我们有两对。
生成计数很简单。我们希望A的计数在`countByRank[1]`处,因此我们不会使用`countByRank`数组的初始位置。类似地,花色的计数将位于`countBySuit[1]`到`countBySuit[4]`之间,因此我们也不会使用该数组的初始位置。
```ts
// ...继续
.
.
.
const countBySuit = new Array(5).fill(0);
const countByRank = new Array(15).fill(0);
const countBySet = new Array(5).fill(0);
cards.forEach((card: Card) => {
countByRank[card.rank]++;
countBySuit[card.suit]++;
});
countByRank.forEach(
(count: number) => count && countBySet[count]++
);
.
.
.
// 继续...
```
我们不要忘记A可能位于顺子的开头(A-2-3-4-5)或结尾(10-J-Q-K-A)。我们可以通过在K之后复制Aces计数来处理这个问题。
```ts
// ...继续
.
.
.
countByRank[14] = countByRank[1];
.
.
.
// 继续...
```
现在我们可以开始识别手牌了。我们只需要查看按等级计数即可识别几种手牌:
```ts
// ...继续
.
.
.
if (count
BySet[4] === 1 && countBySet[1] === 1)
return Hand.FourOfAKind;
else if (countBySet[3] && countBySet[2] === 1)
return Hand.FullHouse;
else if (countBySet[3] && countBySet[1] === 2)
return Hand.ThreeOfAKind;
else if (countBySet[2] === 2 && countBySet[1] === 1)
return Hand.TwoPairs;
else if (countBySet[2] === 1 && countBySet[1] === 3)
return Hand.OnePair;
.
.
.
// 继续...
```
例如,如果有四张相同等级的牌,我们知道玩家将获得“四条”。可能会问:如果`countBySet[4] === 1`,为什么还要测试`countBySet[1] === 1`?如果四张牌的等级相同,应该只有一张其他牌,对吗?答案是[“防御性编程”](https://en.wikipedia.org/wiki/Defensive_programming)——在开发代码时,有时会出现错误,通过在测试中更加具体,有助于排查错误。
上面的情况包括了所有某个等级出现多次的可能性。我们必须处理其他情况,包括顺子、同花和“高牌”。
```ts
// ...继续
.
.
.
else if (countBySet[1] === 5) {
if (countByRank.join('').includes('11111'))
return !countBySuit.includes(5)
? Hand.Straight
: countByRank.slice(10).join('') === '11111'
? Hand.RoyalFlush
: Hand.StraightFlush;
else {
return countBySuit.includes(5)
? Hand.Flush
: Hand.HighCard;
}
} else {
throw new Error(
'Unknown hand! This cannot happen! Bad logic!'
);
}
```
这里我们再次进行防御性编程;即使我们知道我们有五个不同的等级,我们也确保逻辑工作良好,甚至在出现问题时抛出一个`throw`。
我们如何测试顺子?我们应该有五个连续的等级。如果我们查看`countByRank`数组,它应该有五个连续的1,所以通过执行`countByRank.join()`并检查生成的字符串是否包含`11111`,我们可以确定是顺子。

我们必须区分几种情况:
* 如果没有五张相同花色的牌,那么它是一个普通的顺子
* 如果所有牌都是相同花色,如果顺子以一张A结束,则为皇家同花顺
* 如果所有牌都是相同花色,但我们不以A结束,那么我们有一个同花顺
如果我们没有顺子,只有两种可能性:
* 如果所有牌都是相同花色,我们有一个同花
* 如果不是所有牌都是相同花色,我们有一个“高牌”
完整的函数如下所示:
```ts
export function handRank(
cardStrings: [string, string, string, string, string]
): Hand {
const cards: Card[] = cardStrings.map((str: string) => ({
rank: rankToNumber(
str.substring(0, str.length - 1) as Rank
),
suit: suitToNumber(str.at(-1) as Suit)
}));
// We won't use the [0] place in the following arrays
const countBySuit = new Array(5).fill(0);
const countByRank = new Array(15).fill(0);
const countBySet = new Array(5).fill(0);
cards.forEach((card: Card) => {
countByRank[card.rank]++;
countBySuit[card.suit]++;
});
countByRank.forEach(
(count: number) => count && countBySet[count]++
);
// count the A also as a 14, for straights
countByRank[14] = countByRank[1];
if (countBySet[4] === 1 && countBySet[1] === 1)
return Hand.FourOfAKind;
else if (countBySet[3] && countBySet[2] === 1)
return Hand.FullHouse;
else if (countBySet[3] && countBySet[1] === 2)
return Hand.ThreeOfAKind;
else if (countBySet[2] === 2 && countBySet[1] === 1)
return Hand.TwoPairs;
else if (countBySet[2] === 1 && countBySet[1] === 3)
return Hand.OnePair;
else if (countBySet[1] === 5) {
if (countByRank.join('').includes('11111'))
return !countBySuit.includes(5)
? Hand.Straight
: countByRank.slice(10).join('') === '11111'
? Hand.RoyalFlush
: Hand.StraightFlush;
else {
/* !countByRank.join("").includes("11111") */
return countBySuit.includes(5)
? Hand.Flush
: Hand.HighCard;
}
} else {
throw new Error(
'Unknown hand! This cannot happen! Bad logic!'
);
}
}
```
## 测试代码
```ts
console.log(handRank(['3♥', '5♦', '8♣', 'A♥', '6♠'])); // 0
console.log(handRank(['3♥', '5♦', '8♣', 'A♥', '5♠'])); // 1
console.log(handRank(['3♥', '5♦', '3♣', 'A♥', '5♠'])); // 2
console.log(handRank(['3♥', '5♦', '8♣', '5♥', '5♠'])); // 3
console.log(handRank(['3♥', '2♦', 'A♣', '5♥', '4♠'])); // 4
console.log(handRank(['J♥', '10♦', 'A♣', 'Q♥', 'K♠'])); // 4
console.log(handRank(['3♥', '4♦', '7♣', '5♥', '6♠'])); // 4
console.log(handRank(['3♥', '4♥', '9♥', '5♥', '6♥'])); // 5
console.log(handRank(['3♥', '5♦', '3♣', '5♥', '3♠'])); // 6
console.log(handRank(['3♥', '3♦', '3♣', '5♥', '3♠'])); // 7
console.log(handRank(['3♥', '4♥', '7♥', '5♥', '6♥'])); // 8
console.log(handRank(['K♥', 'Q♥', 'A♥', '10♥', 'J♥'])); // 9
```
[在线运行](https://code.juejin.cn/pen/7254739493366333499)
| non_defect | poker ts实战之扑克牌排序 我们用 ts实现扑克牌排序问题 ,首先,我们将定义所需的数据类型,然后专注于模式查找算法,该算法有几个有趣的要点。 类型和转换 定义一些我们需要的类型。 rank 和 suit 是明显的 ts type rank a j q k type suit ♥ ♦ ♠ ♣ 我们将使用 card 对象进行处理,将rank和suit转换为数字。 (ace) (king)的值表示, (红心) (梅花)。 ranktonumber 和 suittonumber 函数处理从 rank 和 suit 值到数字的转换。 ts type card rank number suit number const ranktonumber rank rank number rank a rank j rank q rank k number rank const suittonumber suit suit number suit ♥ suit ♦ suit ♠ suit ♣ images png 这些类型用于内部工作;我们还必须定义手牌检测算法的结果类型。我们需要一个 ts enum hand highcard 高牌 onepair 一对 twopairs 两对 threeofakind 三条 straight 顺子 flush 同花 fullhouse 葫芦 fourofakind 四条 straightflush 同花顺 royalflush 皇家同花顺 我们有什么手牌? 让我们首先定义我们将要构建的 handrank 函数。我们的函数将接收一个包含 五张牌的元组 ,并返回一个 hand 结果。 ts export function handrank cardstrings hand 由于处理字符串比我们需要的要困难,我们将把牌字符串转换为具有数字 rank 和 suit 值的 card 对象,以便更容易编写。 ts const cards card cardstrings map str string rank ranktonumber str substring str length as rank suit suittonumber str at as suit 继续 images png 确定玩家手牌的价值的关键在于知道每个等级的牌有多少张,以及我们有多少计数。例如,如果我们有三张j和两张k, , 。然后,知道我们有一个计数为三和一个计数为两的计数,我们可以确定我们有一个葫芦。另一个例子:如果我们有两个q, ,我们会得到两个计数为两和一个计数为一;我们有两对。 生成计数很简单。我们希望a的计数在 countbyrank 处,因此我们不会使用 countbyrank 数组的初始位置。类似地,花色的计数将位于 countbysuit 到 countbysuit 之间,因此我们也不会使用该数组的初始位置。 ts 继续 const countbysuit new array fill const countbyrank new array fill const countbyset new array fill cards foreach card card countbyrank countbysuit countbyrank foreach count number count countbyset 继续 我们不要忘记a可能位于顺子的开头(a )或结尾( j q k a)。我们可以通过在k之后复制aces计数来处理这个问题。 ts 继续 countbyrank countbyrank 继续 现在我们可以开始识别手牌了。我们只需要查看按等级计数即可识别几种手牌: ts 继续 if count byset countbyset return hand fourofakind else if countbyset countbyset return hand fullhouse else if countbyset countbyset return hand threeofakind else if countbyset countbyset return hand twopairs else if countbyset countbyset return hand onepair 继续 例如,如果有四张相同等级的牌,我们知道玩家将获得“四条”。可能会问:如果 countbyset ,为什么还要测试 countbyset ?如果四张牌的等级相同,应该只有一张其他牌,对吗?答案是 上面的情况包括了所有某个等级出现多次的可能性。我们必须处理其他情况,包括顺子、同花和“高牌”。 ts 继续 else if countbyset if countbyrank join includes return countbysuit includes hand straight countbyrank slice join hand royalflush hand straightflush else return countbysuit includes hand flush hand highcard else throw new error unknown hand this cannot happen bad logic 这里我们再次进行防御性编程;即使我们知道我们有五个不同的等级,我们也确保逻辑工作良好,甚至在出现问题时抛出一个 throw 。 我们如何测试顺子?我们应该有五个连续的等级。如果我们查看 countbyrank 数组, ,所以通过执行 countbyrank join 并检查生成的字符串是否包含 ,我们可以确定是顺子。 images png 我们必须区分几种情况: 如果没有五张相同花色的牌,那么它是一个普通的顺子 如果所有牌都是相同花色,如果顺子以一张a结束,则为皇家同花顺 如果所有牌都是相同花色,但我们不以a结束,那么我们有一个同花顺 如果我们没有顺子,只有两种可能性: 如果所有牌都是相同花色,我们有一个同花 如果不是所有牌都是相同花色,我们有一个“高牌” 完整的函数如下所示: ts export function handrank cardstrings hand const cards card cardstrings map str string rank ranktonumber str substring str length as rank suit suittonumber str at as suit we won t use the place in the following arrays const countbysuit new array fill const countbyrank new array fill const countbyset new array fill cards foreach card card countbyrank countbysuit countbyrank foreach count number count countbyset count the a also as a for straights countbyrank countbyrank if countbyset countbyset return hand fourofakind else if countbyset countbyset return hand fullhouse else if countbyset countbyset return hand threeofakind else if countbyset countbyset return hand twopairs else if countbyset countbyset return hand onepair else if countbyset if countbyrank join includes return countbysuit includes hand straight countbyrank slice join hand royalflush hand straightflush else countbyrank join includes return countbysuit includes hand flush hand highcard else throw new error unknown hand this cannot happen bad logic 测试代码 ts console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank console log handrank | 0 |
125,243 | 4,954,681,102 | IssuesEvent | 2016-12-01 18:17:12 | sandialabs/slycat | https://api.github.com/repos/sandialabs/slycat | opened | Test that the CSV format option works for XYCE outputs (.prn) | Low Priority Timeseries Model | Create a sample CSV to read in XYCE formatted timeseries and test in the timeseries wizard. | 1.0 | Test that the CSV format option works for XYCE outputs (.prn) - Create a sample CSV to read in XYCE formatted timeseries and test in the timeseries wizard. | non_defect | test that the csv format option works for xyce outputs prn create a sample csv to read in xyce formatted timeseries and test in the timeseries wizard | 0 |
22,450 | 6,246,171,977 | IssuesEvent | 2017-07-13 02:50:31 | xceedsoftware/wpftoolkit | https://api.github.com/repos/xceedsoftware/wpftoolkit | closed | Incorrect assembly revision number | CodePlex | <b>doomer[CodePlex]</b> <br />Errors while loading WPFToolkit.Extended.dll from 19 Sep 2012:
=== Pre-bind state information ===
...
LOG: DisplayName = WPFToolkit.Extended, Version=1.7.4644.13121, Culture=neutral, PublicKeyToken=3e4669d2f30244f4
...
...
LOG: Post-policy reference: WPFToolkit.Extended, Version=1.7.4644.13121, Culture=neutral, PublicKeyToken=3e4669d2f30244f4
...
LOG: Assembly Name is: WPFToolkit.Extended, Version=1.7.4644.13122, Culture=neutral, PublicKeyToken=3e4669d2f30244f4
WRN: Comparing the assembly name resulted in the mismatch: Revision Number
ERR: The assembly reference did not match the assembly definition found.
ERR: Failed to complete setup of assembly (hr = 0x80131040). Probing terminated.
| 1.0 | Incorrect assembly revision number - <b>doomer[CodePlex]</b> <br />Errors while loading WPFToolkit.Extended.dll from 19 Sep 2012:
=== Pre-bind state information ===
...
LOG: DisplayName = WPFToolkit.Extended, Version=1.7.4644.13121, Culture=neutral, PublicKeyToken=3e4669d2f30244f4
...
...
LOG: Post-policy reference: WPFToolkit.Extended, Version=1.7.4644.13121, Culture=neutral, PublicKeyToken=3e4669d2f30244f4
...
LOG: Assembly Name is: WPFToolkit.Extended, Version=1.7.4644.13122, Culture=neutral, PublicKeyToken=3e4669d2f30244f4
WRN: Comparing the assembly name resulted in the mismatch: Revision Number
ERR: The assembly reference did not match the assembly definition found.
ERR: Failed to complete setup of assembly (hr = 0x80131040). Probing terminated.
| non_defect | incorrect assembly revision number doomer errors while loading wpftoolkit extended dll from sep pre bind state information log displayname wpftoolkit extended version culture neutral publickeytoken log post policy reference wpftoolkit extended version culture neutral publickeytoken log assembly name is wpftoolkit extended version culture neutral publickeytoken wrn comparing the assembly name resulted in the mismatch revision number err the assembly reference did not match the assembly definition found err failed to complete setup of assembly hr probing terminated | 0 |
441,360 | 30,779,774,976 | IssuesEvent | 2023-07-31 09:13:01 | Avaiga/taipy-doc | https://api.github.com/repos/Avaiga/taipy-doc | closed | Documentation for Data migration API | Core 📄 Documentation 🟨 Priority: Medium | - [x] Manual doc
- [x] Reference manual doc
- [x] Release note
- [ ] Demo to the team | 1.0 | Documentation for Data migration API - - [x] Manual doc
- [x] Reference manual doc
- [x] Release note
- [ ] Demo to the team | non_defect | documentation for data migration api manual doc reference manual doc release note demo to the team | 0 |
45,134 | 5,695,558,035 | IssuesEvent | 2017-04-16 00:04:09 | rlf/uSkyBlock | https://api.github.com/repos/rlf/uSkyBlock | closed | Random Players are Added to Random Islands | A bug T ready for test | _Please paste the output from `/usb version` below_
```
Name: uSkyBlock
Version: 2.7.3-alpha16
Description: Ultimate SkyBlock v2.7.3-alpha16-08f268-451
Language: en (en)
State: d=500, r=152, i=1,971, p=12,975, n=false, awe=true
Server: Paper git-Paper-1032 (MC: 1.11.2)
State: online=true, bungee=true
------------------------------
Vault 1.5.6-b49 (ENABLED)
WorldEdit 6.1.7-SNAPSHOT;3674-9f24f84 (ENABLED)
WorldGuard 6.1.2;e38d98d (ENABLED)
FastAsyncWorldEdit 17.01.15-812c12f-505-10.3.0 (ENABLED)
Multiverse-Core 2.5-b719 (ENABLED)
Multiverse-NetherPortals 2.5-b710 (ENABLED)
------------------------------
```
_Description of the problem:_
I've had a few reports lately of players showing up on other players' island members lists. I was chalking it up to players simply forgetting they added someone or not removing them properly until it happened to a couple of my staff members simultaneously. It seems that players will randomly appear in random players' members lists, even if the player appearing in the list already owns an island. Removing them with "/is remove <player>" or "/usb island remove <player>" doesn't seem to work in some cases, but other times "/is remove <player>" does work. If I add them with "/usb island addmember <player>", I can then remove them with "/usb island remove <player>" but have to re-register them to their original island afterwards.
It gets really strange after that. When a player on an affected island tries to change permissions for ANY other player, it instead opens the permissions menu for the player that was randomly added to the island. If they then change any permissions, the player gets re-added to the island.
I wish I had more info or errors to post, but that's all I've got. =(
_If you have any log-files, please paste them to [pastebin.com](http://pastebin.com)_
* Affected island file 1: http://pastebin.com/raw/S5qnWNmb
* Affected island file 2: http://pastebin.com/raw/rfLBDisZ
* Player file for player appearing in the members list: http://pastebin.com/raw/0W2E82uU
| 1.0 | Random Players are Added to Random Islands - _Please paste the output from `/usb version` below_
```
Name: uSkyBlock
Version: 2.7.3-alpha16
Description: Ultimate SkyBlock v2.7.3-alpha16-08f268-451
Language: en (en)
State: d=500, r=152, i=1,971, p=12,975, n=false, awe=true
Server: Paper git-Paper-1032 (MC: 1.11.2)
State: online=true, bungee=true
------------------------------
Vault 1.5.6-b49 (ENABLED)
WorldEdit 6.1.7-SNAPSHOT;3674-9f24f84 (ENABLED)
WorldGuard 6.1.2;e38d98d (ENABLED)
FastAsyncWorldEdit 17.01.15-812c12f-505-10.3.0 (ENABLED)
Multiverse-Core 2.5-b719 (ENABLED)
Multiverse-NetherPortals 2.5-b710 (ENABLED)
------------------------------
```
_Description of the problem:_
I've had a few reports lately of players showing up on other players' island members lists. I was chalking it up to players simply forgetting they added someone or not removing them properly until it happened to a couple of my staff members simultaneously. It seems that players will randomly appear in random players' members lists, even if the player appearing in the list already owns an island. Removing them with "/is remove <player>" or "/usb island remove <player>" doesn't seem to work in some cases, but other times "/is remove <player>" does work. If I add them with "/usb island addmember <player>", I can then remove them with "/usb island remove <player>" but have to re-register them to their original island afterwards.
It gets really strange after that. When a player on an affected island tries to change permissions for ANY other player, it instead opens the permissions menu for the player that was randomly added to the island. If they then change any permissions, the player gets re-added to the island.
I wish I had more info or errors to post, but that's all I've got. =(
_If you have any log-files, please paste them to [pastebin.com](http://pastebin.com)_
* Affected island file 1: http://pastebin.com/raw/S5qnWNmb
* Affected island file 2: http://pastebin.com/raw/rfLBDisZ
* Player file for player appearing in the members list: http://pastebin.com/raw/0W2E82uU
| non_defect | random players are added to random islands please paste the output from usb version below name uskyblock version description ultimate skyblock language en en state d r i p n false awe true server paper git paper mc state online true bungee true vault enabled worldedit snapshot enabled worldguard enabled fastasyncworldedit enabled multiverse core enabled multiverse netherportals enabled description of the problem i ve had a few reports lately of players showing up on other players island members lists i was chalking it up to players simply forgetting they added someone or not removing them properly until it happened to a couple of my staff members simultaneously it seems that players will randomly appear in random players members lists even if the player appearing in the list already owns an island removing them with is remove or usb island remove doesn t seem to work in some cases but other times is remove does work if i add them with usb island addmember i can then remove them with usb island remove but have to re register them to their original island afterwards it gets really strange after that when a player on an affected island tries to change permissions for any other player it instead opens the permissions menu for the player that was randomly added to the island if they then change any permissions the player gets re added to the island i wish i had more info or errors to post but that s all i ve got if you have any log files please paste them to affected island file affected island file player file for player appearing in the members list | 0 |
51,808 | 27,249,501,639 | IssuesEvent | 2023-02-22 06:39:16 | questdb/questdb | https://api.github.com/repos/questdb/questdb | closed | Slow inserts | Question Performance | ### Describe the bug
Hi, I absolutely love this product, QuestDB team has done an amazing job! I do face an issue
I have a table which has 3 Symbol columns(Ticker, Exchange, etc), a Date/Timestamp column and OHLCV columns. the Symbol columns have indices on them as well. When I try to insert rows in this table, the performance is quite bad, I am afraid. it takes more than an hour to insert 1 million rows. (I commit about 100 tickers at a time, could be between 100k - 200k rows).
1. the bad performance could be because of the Indices - can you confirm? if so, is there a way to disable indices?
2. I tried without indices on Symbol columns, the query performance with indices is far better - is this expected?
3. I note that the new version of QuestDB is better at handling commits. I have some code to close connection using close(true). should I batch the results myself or let QuestDB handle it?
4. I havent tried disabling indices and reinserting. looks like the only I can do that is by dropping indices and then Reindex - is there another way
I use the python lib to insert data in the database. even with the above, I cant match the million rows per second performance. I must admit my knowledge of the lib is quiet nascent, a piece of documentation on optimizing inserts would help users like me a lot.
thanks again
Roh
### To reproduce
_No response_
### Expected Behavior
_No response_
### Environment
```markdown
- **QuestDB version**:
- **OS**:
- **Browser**:
```
### Additional context
_No response_ | True | Slow inserts - ### Describe the bug
Hi, I absolutely love this product, QuestDB team has done an amazing job! I do face an issue
I have a table which has 3 Symbol columns(Ticker, Exchange, etc), a Date/Timestamp column and OHLCV columns. the Symbol columns have indices on them as well. When I try to insert rows in this table, the performance is quite bad, I am afraid. it takes more than an hour to insert 1 million rows. (I commit about 100 tickers at a time, could be between 100k - 200k rows).
1. the bad performance could be because of the Indices - can you confirm? if so, is there a way to disable indices?
2. I tried without indices on Symbol columns, the query performance with indices is far better - is this expected?
3. I note that the new version of QuestDB is better at handling commits. I have some code to close connection using close(true). should I batch the results myself or let QuestDB handle it?
4. I havent tried disabling indices and reinserting. looks like the only I can do that is by dropping indices and then Reindex - is there another way
I use the python lib to insert data in the database. even with the above, I cant match the million rows per second performance. I must admit my knowledge of the lib is quiet nascent, a piece of documentation on optimizing inserts would help users like me a lot.
thanks again
Roh
### To reproduce
_No response_
### Expected Behavior
_No response_
### Environment
```markdown
- **QuestDB version**:
- **OS**:
- **Browser**:
```
### Additional context
_No response_ | non_defect | slow inserts describe the bug hi i absolutely love this product questdb team has done an amazing job i do face an issue i have a table which has symbol columns ticker exchange etc a date timestamp column and ohlcv columns the symbol columns have indices on them as well when i try to insert rows in this table the performance is quite bad i am afraid it takes more than an hour to insert million rows i commit about tickers at a time could be between rows the bad performance could be because of the indices can you confirm if so is there a way to disable indices i tried without indices on symbol columns the query performance with indices is far better is this expected i note that the new version of questdb is better at handling commits i have some code to close connection using close true should i batch the results myself or let questdb handle it i havent tried disabling indices and reinserting looks like the only i can do that is by dropping indices and then reindex is there another way i use the python lib to insert data in the database even with the above i cant match the million rows per second performance i must admit my knowledge of the lib is quiet nascent a piece of documentation on optimizing inserts would help users like me a lot thanks again roh to reproduce no response expected behavior no response environment markdown questdb version os browser additional context no response | 0 |
4,388 | 2,610,093,085 | IssuesEvent | 2015-02-26 18:28:06 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳光子痤疮祛除 | auto-migrated Priority-Medium Type-Defect | ```
深圳光子痤疮祛除【深圳韩方科颜全国热线400-869-1818,24小时
QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘��
�——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方�
��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健
康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业��
�疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘�
��。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:05 | 1.0 | 深圳光子痤疮祛除 - ```
深圳光子痤疮祛除【深圳韩方科颜全国热线400-869-1818,24小时
QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘��
�——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方�
��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健
康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业��
�疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘�
��。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:05 | defect | 深圳光子痤疮祛除 深圳光子痤疮祛除【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘�� �——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方� ��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健 康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业�� �疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘� ��。 original issue reported on code google com by szft com on may at | 1 |
213,997 | 16,552,488,656 | IssuesEvent | 2021-05-28 10:09:23 | cbeust/testng | https://api.github.com/repos/cbeust/testng | reopened | Skip test if its data provider provides no data | Feature: data-provider Feature: skip test | ### TestNG Version
>TestNG version 7.4.0
### Expected behavior
Some message should be display so end user can usderstand what is the problem
### Actual behavior
I get the blank output
### Is the issue reproductible on runner?
-[]TestNG
### Test case sample
>
I try to execute the simple test case which call dataProvider and this dataProvider has return type as Iterator<Object[]>,i know this return type is not accept by @DataProvider method but after excute the below programm i get the blank output nothing is display in colsole,i accept at least message like the dataProvider is not exsist,please refer below program.
```java
public class Practice {
@Test(dataProvider="NotWorking")
public void testCase(Object[] obj)
{
System.out.println(obj[0]);
System.out.println(obj[1]);
}
@DataProvider(name="NotWorking")
public Iterator<Object[]> dataProvider2()
{
List<Object[]> obj =new ArrayList<Object[]>();
Object[] obj1=new Object[2];
obj1[0]=new String("First_Object_First_value");
obj1[1]=new String("First_Object_Second_value");
Object[] obj2=new Object[2];
obj2[0]=new String("Second_Object_First_value");
obj1[1]=new String("Second_Object_Second_value");
Iterator<Object[]> itr = obj.iterator();
return itr;
}
}
```
i get the below output for above code,
```
[RemoteTestNG] detected TestNG version 7.4.0
===============================================
Default test
Tests run: 1, Failures: 0, Skips: 0
``` | 1.0 | Skip test if its data provider provides no data - ### TestNG Version
>TestNG version 7.4.0
### Expected behavior
Some message should be display so end user can usderstand what is the problem
### Actual behavior
I get the blank output
### Is the issue reproductible on runner?
-[]TestNG
### Test case sample
>
I try to execute the simple test case which call dataProvider and this dataProvider has return type as Iterator<Object[]>,i know this return type is not accept by @DataProvider method but after excute the below programm i get the blank output nothing is display in colsole,i accept at least message like the dataProvider is not exsist,please refer below program.
```java
public class Practice {
@Test(dataProvider="NotWorking")
public void testCase(Object[] obj)
{
System.out.println(obj[0]);
System.out.println(obj[1]);
}
@DataProvider(name="NotWorking")
public Iterator<Object[]> dataProvider2()
{
List<Object[]> obj =new ArrayList<Object[]>();
Object[] obj1=new Object[2];
obj1[0]=new String("First_Object_First_value");
obj1[1]=new String("First_Object_Second_value");
Object[] obj2=new Object[2];
obj2[0]=new String("Second_Object_First_value");
obj1[1]=new String("Second_Object_Second_value");
Iterator<Object[]> itr = obj.iterator();
return itr;
}
}
```
i get the below output for above code,
```
[RemoteTestNG] detected TestNG version 7.4.0
===============================================
Default test
Tests run: 1, Failures: 0, Skips: 0
``` | non_defect | skip test if its data provider provides no data testng version testng version expected behavior some message should be display so end user can usderstand what is the problem actual behavior i get the blank output is the issue reproductible on runner testng test case sample i try to execute the simple test case which call dataprovider and this dataprovider has return type as iterator i know this return type is not accept by dataprovider method but after excute the below programm i get the blank output nothing is display in colsole i accept at least message like the dataprovider is not exsist please refer below program java public class practice test dataprovider notworking public void testcase object obj system out println obj system out println obj dataprovider name notworking public iterator list obj new arraylist object new object new string first object first value new string first object second value object new object new string second object first value new string second object second value iterator itr obj iterator return itr i get the below output for above code detected testng version default test tests run failures skips | 0 |
290,362 | 8,893,779,698 | IssuesEvent | 2019-01-16 00:54:50 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Ad notifications sometimes presented to user when Ads toggle is off | QA/Yes bug feature/ads priority/P1 | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Sometimes when Ads is off, users are still being presented with Ads.
## Steps to Reproduce
NOTE: these steps are in progress and not 100% confirmed
1. Have a Dev profile with Rewards enabled (note, Ads is not enabled) on 0.60.9.
2. Update to 0.60.13.
3. Navigate to brave://rewards
4. Verify Ads are not shown as enabled.
5. Browse normally.
## Actual result:
You may be presented with an Ad.
## Expected result:
No Ads since you did not manually enable with 0.60.13.
## Reproduces how often:
unsure
## Brave version (brave://version info)
Brave | 0.60.13 Chromium: 72.0.3626.53 (Official Build) dev(64-bit)
-- | --
Revision | 98434e6cd182d68ce396daa92e9c6310422e6763-refs/branch-heads/3626@{#620}
OS | Mac OS X
### Reproducible on current release:
- Does it reproduce on brave-browser dev/beta builds? saw on Dev
### Website problems only:
- Does the issue resolve itself when disabling Brave Shields? n/a
- Is the issue reproducible on the latest version of Chrome? n/a
### Additional Information
cc @mandar-brave @jsecretan @mrose17
| 1.0 | Ad notifications sometimes presented to user when Ads toggle is off - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Sometimes when Ads is off, users are still being presented with Ads.
## Steps to Reproduce
NOTE: these steps are in progress and not 100% confirmed
1. Have a Dev profile with Rewards enabled (note, Ads is not enabled) on 0.60.9.
2. Update to 0.60.13.
3. Navigate to brave://rewards
4. Verify Ads are not shown as enabled.
5. Browse normally.
## Actual result:
You may be presented with an Ad.
## Expected result:
No Ads since you did not manually enable with 0.60.13.
## Reproduces how often:
unsure
## Brave version (brave://version info)
Brave | 0.60.13 Chromium: 72.0.3626.53 (Official Build) dev(64-bit)
-- | --
Revision | 98434e6cd182d68ce396daa92e9c6310422e6763-refs/branch-heads/3626@{#620}
OS | Mac OS X
### Reproducible on current release:
- Does it reproduce on brave-browser dev/beta builds? saw on Dev
### Website problems only:
- Does the issue resolve itself when disabling Brave Shields? n/a
- Is the issue reproducible on the latest version of Chrome? n/a
### Additional Information
cc @mandar-brave @jsecretan @mrose17
| non_defect | ad notifications sometimes presented to user when ads toggle is off have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description sometimes when ads is off users are still being presented with ads steps to reproduce note these steps are in progress and not confirmed have a dev profile with rewards enabled note ads is not enabled on update to navigate to brave rewards verify ads are not shown as enabled browse normally actual result you may be presented with an ad expected result no ads since you did not manually enable with reproduces how often unsure brave version brave version info brave chromium official build dev bit revision refs branch heads os mac os x reproducible on current release does it reproduce on brave browser dev beta builds saw on dev website problems only does the issue resolve itself when disabling brave shields n a is the issue reproducible on the latest version of chrome n a additional information cc mandar brave jsecretan | 0 |
76,969 | 26,702,003,480 | IssuesEvent | 2023-01-27 15:05:44 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | closed | Fix accordion header levels on Burial Benefits detail pages | Needs refining Public Websites 508/Accessibility 508-defect-2 | ## Description
As a short term solution, because of a specific Veteran having an issue on certain pages, I would like to resolve the accordion header violations on the Burials benefits detail pages using the entity ID option.
### Additional Context
There are 3 Benefits detail pages within the Burials and Memorials section that need adjustments to their accordion headers:
- [Eligibility](https://www.va.gov/burials-memorials/eligibility/)
- There are 2 accordion groups on this page. Both sets are currently using the default H4 heading
- First set needs to change from the default H4 to H3
- Second set should remain using the default H4

- [Burial in a Private Cemetery](https://www.va.gov/burials-memorials/eligibility/burial-in-private-cemetery/)
- There is one accordion group on this page. Currently these are using the default H4 level but they should be using H5. **Can this be done with the entity ID list solution?**

- [Veterans headstones, markers, and medallions](https://www.va.gov/burials-memorials/memorial-items/headstones-markers-medallions/)
- There is one accordion group on this page. Currently these are using the default H4 level but they should be using H3.

## Acceptance Criteria
- [ ] Update first accordion group on Eligibility page to H3
- [ ] Confirm second accordion group on Eligibility page remains H4
- [ ] Update accordion group on Burial in a Private Cemetery page to H5 (if possible)
- [ ] Update accordion group on Veterans headstones, markers, and medallions page to H3
- [ ] Accessibility Lead Review
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [x] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
| 1.0 | Fix accordion header levels on Burial Benefits detail pages - ## Description
As a short term solution, because of a specific Veteran having an issue on certain pages, I would like to resolve the accordion header violations on the Burials benefits detail pages using the entity ID option.
### Additional Context
There are 3 Benefits detail pages within the Burials and Memorials section that need adjustments to their accordion headers:
- [Eligibility](https://www.va.gov/burials-memorials/eligibility/)
- There are 2 accordion groups on this page. Both sets are currently using the default H4 heading
- First set needs to change from the default H4 to H3
- Second set should remain using the default H4

- [Burial in a Private Cemetery](https://www.va.gov/burials-memorials/eligibility/burial-in-private-cemetery/)
- There is one accordion group on this page. Currently these are using the default H4 level but they should be using H5. **Can this be done with the entity ID list solution?**

- [Veterans headstones, markers, and medallions](https://www.va.gov/burials-memorials/memorial-items/headstones-markers-medallions/)
- There is one accordion group on this page. Currently these are using the default H4 level but they should be using H3.

## Acceptance Criteria
- [ ] Update first accordion group on Eligibility page to H3
- [ ] Confirm second accordion group on Eligibility page remains H4
- [ ] Update accordion group on Burial in a Private Cemetery page to H5 (if possible)
- [ ] Update accordion group on Veterans headstones, markers, and medallions page to H3
- [ ] Accessibility Lead Review
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [x] `⭐️ Public Websites`
- [ ] `⭐️ Facilities`
- [ ] `⭐️ User support`
| defect | fix accordion header levels on burial benefits detail pages description as a short term solution because of a specific veteran having an issue on certain pages i would like to resolve the accordion header violations on the burials benefits detail pages using the entity id option additional context there are benefits detail pages within the burials and memorials section that need adjustments to their accordion headers there are accordion groups on this page both sets are currently using the default heading first set needs to change from the default to second set should remain using the default there is one accordion group on this page currently these are using the default level but they should be using can this be done with the entity id list solution there is one accordion group on this page currently these are using the default level but they should be using acceptance criteria update first accordion group on eligibility page to confirm second accordion group on eligibility page remains update accordion group on burial in a private cemetery page to if possible update accordion group on veterans headstones markers and medallions page to accessibility lead review cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support | 1 |
48,381 | 13,068,481,484 | IssuesEvent | 2020-07-31 03:42:58 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | [DOMlauncher] Correctly handle SPE discriminator threshold. (Trac #2272) | Migrated from Trac combo simulation defect | it needs to be different in data and simulation. right now it is hard coded in dataclasses/private/dataclasses/I3DOMFunctions.cxx
Migrated from https://code.icecube.wisc.edu/ticket/2272
```json
{
"status": "closed",
"changetime": "2019-06-11T15:30:38",
"description": "it needs to be different in data and simulation. right now it is hard coded in dataclasses/private/dataclasses/I3DOMFunctions.cxx ",
"reporter": "kjmeagher",
"cc": "jkelley",
"resolution": "fixed",
"_ts": "1560267038292349",
"component": "combo simulation",
"summary": "[DOMlauncher] Correctly handle SPE discriminator threshold.",
"priority": "blocker",
"keywords": "",
"time": "2019-04-16T15:45:05",
"milestone": "Summer Solstice 2019",
"owner": "mjansson",
"type": "defect"
}
```
| 1.0 | [DOMlauncher] Correctly handle SPE discriminator threshold. (Trac #2272) - it needs to be different in data and simulation. right now it is hard coded in dataclasses/private/dataclasses/I3DOMFunctions.cxx
Migrated from https://code.icecube.wisc.edu/ticket/2272
```json
{
"status": "closed",
"changetime": "2019-06-11T15:30:38",
"description": "it needs to be different in data and simulation. right now it is hard coded in dataclasses/private/dataclasses/I3DOMFunctions.cxx ",
"reporter": "kjmeagher",
"cc": "jkelley",
"resolution": "fixed",
"_ts": "1560267038292349",
"component": "combo simulation",
"summary": "[DOMlauncher] Correctly handle SPE discriminator threshold.",
"priority": "blocker",
"keywords": "",
"time": "2019-04-16T15:45:05",
"milestone": "Summer Solstice 2019",
"owner": "mjansson",
"type": "defect"
}
```
| defect | correctly handle spe discriminator threshold trac it needs to be different in data and simulation right now it is hard coded in dataclasses private dataclasses cxx migrated from json status closed changetime description it needs to be different in data and simulation right now it is hard coded in dataclasses private dataclasses cxx reporter kjmeagher cc jkelley resolution fixed ts component combo simulation summary correctly handle spe discriminator threshold priority blocker keywords time milestone summer solstice owner mjansson type defect | 1 |
673,589 | 23,021,063,888 | IssuesEvent | 2022-07-22 04:49:26 | dwyl/app-mvp | https://api.github.com/repos/dwyl/app-mvp | closed | Chore: Rename `timer.end` to `timer.stop` | enhancement help wanted in-progress technical priority-3 discuss chore T1h | At present the `timer` schema has an **`end`** field: 🔚
https://github.com/dwyl/app-mvp-phoenix/blob/bf65127b434c5861e56a501a6350978c142df788/lib/app/timer.ex#L10
This is a remnant from the _previous_ MVP (`JavaScript`) where we selected `end` thinking it was a good field name.
But in `Elixir`, `end` is a "reserved" keyword for _ending_ functions, conditionals and loops ... 🙄
The MVP is _working_ ... https://github.com/dwyl/app-mvp-phoenix/issues/89#issuecomment-1190240856
But as I'm updating the `README.md` I'm spotting things I want to improve.
This is definitely one of them that we can "fix" _reasonably_ easily now. 🤞
# Todo
+ [x] update all instances of `timer.end` to `timer.stop` to be consistent.
+ [x] _manually_ update the MVP DB from `timer.end` to `timer.stop` via CLI: https://github.com/dwyl/app-mvp/issues/114#issuecomment-1192150876
This might take a while because there are _many_ instances ... ⏳
But I really want to avoid any unnecessary head-scratchers for other people reading the `README.md` in the future. 💭 | 1.0 | Chore: Rename `timer.end` to `timer.stop` - At present the `timer` schema has an **`end`** field: 🔚
https://github.com/dwyl/app-mvp-phoenix/blob/bf65127b434c5861e56a501a6350978c142df788/lib/app/timer.ex#L10
This is a remnant from the _previous_ MVP (`JavaScript`) where we selected `end` thinking it was a good field name.
But in `Elixir`, `end` is a "reserved" keyword for _ending_ functions, conditionals and loops ... 🙄
The MVP is _working_ ... https://github.com/dwyl/app-mvp-phoenix/issues/89#issuecomment-1190240856
But as I'm updating the `README.md` I'm spotting things I want to improve.
This is definitely one of them that we can "fix" _reasonably_ easily now. 🤞
# Todo
+ [x] update all instances of `timer.end` to `timer.stop` to be consistent.
+ [x] _manually_ update the MVP DB from `timer.end` to `timer.stop` via CLI: https://github.com/dwyl/app-mvp/issues/114#issuecomment-1192150876
This might take a while because there are _many_ instances ... ⏳
But I really want to avoid any unnecessary head-scratchers for other people reading the `README.md` in the future. 💭 | non_defect | chore rename timer end to timer stop at present the timer schema has an end field 🔚 this is a remnant from the previous mvp javascript where we selected end thinking it was a good field name but in elixir end is a reserved keyword for ending functions conditionals and loops 🙄 the mvp is working but as i m updating the readme md i m spotting things i want to improve this is definitely one of them that we can fix reasonably easily now 🤞 todo update all instances of timer end to timer stop to be consistent manually update the mvp db from timer end to timer stop via cli this might take a while because there are many instances ⏳ but i really want to avoid any unnecessary head scratchers for other people reading the readme md in the future 💭 | 0 |
166,724 | 12,969,961,961 | IssuesEvent | 2020-07-21 08:36:23 | Azure/azure-cli | https://api.github.com/repos/Azure/azure-cli | opened | Test framework migration | Test Framework feature-request | **Is your feature request related to a problem? Please describe.**
<!--- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --->
As bussiness logic going more and more complex, we might overwrite/extend the our test framework Azure/azure-python-devtools more and more.
For now, we temporary decide to keep sharing the that repo between Azure SDK team and us. Because we found it's not necessary to split for now as human resource, degree of code dependency, convenience, etc.
Opening this issue to track the history we need to midify the test framework or overwrite/extend from our side. Maybe we might need to fork / reimpelement Azure CLI test framework someday as we need more dedicated features for Azure CLI.
**Describe the solution you'd like**
<!--- A clear and concise description of what you want to happen. --->
**Describe alternatives you've considered**
<!--- A clear and concise description of any alternative solutions or features you've considered. --->
**Additional context**
<!--- Add any other context or screenshots about the feature request here. --->
| 1.0 | Test framework migration - **Is your feature request related to a problem? Please describe.**
<!--- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --->
As bussiness logic going more and more complex, we might overwrite/extend the our test framework Azure/azure-python-devtools more and more.
For now, we temporary decide to keep sharing the that repo between Azure SDK team and us. Because we found it's not necessary to split for now as human resource, degree of code dependency, convenience, etc.
Opening this issue to track the history we need to midify the test framework or overwrite/extend from our side. Maybe we might need to fork / reimpelement Azure CLI test framework someday as we need more dedicated features for Azure CLI.
**Describe the solution you'd like**
<!--- A clear and concise description of what you want to happen. --->
**Describe alternatives you've considered**
<!--- A clear and concise description of any alternative solutions or features you've considered. --->
**Additional context**
<!--- Add any other context or screenshots about the feature request here. --->
| non_defect | test framework migration is your feature request related to a problem please describe as bussiness logic going more and more complex we might overwrite extend the our test framework azure azure python devtools more and more for now we temporary decide to keep sharing the that repo between azure sdk team and us because we found it s not necessary to split for now as human resource degree of code dependency convenience etc opening this issue to track the history we need to midify the test framework or overwrite extend from our side maybe we might need to fork reimpelement azure cli test framework someday as we need more dedicated features for azure cli describe the solution you d like describe alternatives you ve considered additional context | 0 |
282,425 | 24,474,479,928 | IssuesEvent | 2022-10-08 02:13:39 | lowRISC/opentitan | https://api.github.com/repos/lowRISC/opentitan | opened | [test-triage] rom_e2e_smoke | Component:TestTriage | ### Hierarchy of regression failure
Chip Level
### Failure Description
UVM_FATAL @ * us: (sw_logger_if.sv:157) [tb.u_sim_sram.u_sim_sram_if.u_sw_logger_if.parse_sw_log_file.unnamed$$_*.unnamed$$_*] Failed to open sw log db file rom_e2e_smoke_prog_sim_dv.logs.txt. has 3 failures:
Test rom_e2e_smoke has 3 failures.
0.rom_e2e_smoke.3118367884
Line 487, in log /container/opentitan-public/scratch/os_regression/chip_earlgrey_asic-sim-vcs/0.rom_e2e_smoke/latest/run.log
UVM_FATAL @ 11.393316 us: (sw_logger_if.sv:157) [tb.u_sim_sram.u_sim_sram_if.u_sw_logger_if.parse_sw_log_file.unnamed$$_0.unnamed$$_1] Failed to open sw log db file rom_e2e_smoke_prog_sim_dv.logs.txt.
UVM_INFO @ 11.393316 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
1.rom_e2e_smoke.1132519800
Line 488, in log /container/opentitan-public/scratch/os_regression/chip_earlgrey_asic-sim-vcs/1.rom_e2e_smoke/latest/run.log
UVM_FATAL @ 10.995501 us: (sw_logger_if.sv:157) [tb.u_sim_sram.u_sim_sram_if.u_sw_logger_if.parse_sw_log_file.unnamed$$_0.unnamed$$_1] Failed to open sw log db file rom_e2e_smoke_prog_sim_dv.logs.txt.
UVM_INFO @ 10.995501 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
### Steps to Reproduce
- Commit hash where failure was observed
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i rom_e2e_smoke --fixed-seed 3118367884 --build-seed 1505772856 --waves -v h
- Kokoro build number if applicable
### Tests with similar or related failures
_No response_ | 1.0 | [test-triage] rom_e2e_smoke - ### Hierarchy of regression failure
Chip Level
### Failure Description
UVM_FATAL @ * us: (sw_logger_if.sv:157) [tb.u_sim_sram.u_sim_sram_if.u_sw_logger_if.parse_sw_log_file.unnamed$$_*.unnamed$$_*] Failed to open sw log db file rom_e2e_smoke_prog_sim_dv.logs.txt. has 3 failures:
Test rom_e2e_smoke has 3 failures.
0.rom_e2e_smoke.3118367884
Line 487, in log /container/opentitan-public/scratch/os_regression/chip_earlgrey_asic-sim-vcs/0.rom_e2e_smoke/latest/run.log
UVM_FATAL @ 11.393316 us: (sw_logger_if.sv:157) [tb.u_sim_sram.u_sim_sram_if.u_sw_logger_if.parse_sw_log_file.unnamed$$_0.unnamed$$_1] Failed to open sw log db file rom_e2e_smoke_prog_sim_dv.logs.txt.
UVM_INFO @ 11.393316 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
1.rom_e2e_smoke.1132519800
Line 488, in log /container/opentitan-public/scratch/os_regression/chip_earlgrey_asic-sim-vcs/1.rom_e2e_smoke/latest/run.log
UVM_FATAL @ 10.995501 us: (sw_logger_if.sv:157) [tb.u_sim_sram.u_sim_sram_if.u_sw_logger_if.parse_sw_log_file.unnamed$$_0.unnamed$$_1] Failed to open sw log db file rom_e2e_smoke_prog_sim_dv.logs.txt.
UVM_INFO @ 10.995501 us: (uvm_report_catcher.svh:705) [UVM/REPORT/CATCHER]
--- UVM Report catcher Summary ---
### Steps to Reproduce
- Commit hash where failure was observed
- dvsim invocation command to reproduce the failure, inclusive of build and run seeds:
./util/dvsim/dvsim.py hw/top_earlgrey/dv/chip_sim_cfg.hjson -i rom_e2e_smoke --fixed-seed 3118367884 --build-seed 1505772856 --waves -v h
- Kokoro build number if applicable
### Tests with similar or related failures
_No response_ | non_defect | rom smoke hierarchy of regression failure chip level failure description uvm fatal us sw logger if sv failed to open sw log db file rom smoke prog sim dv logs txt has failures test rom smoke has failures rom smoke line in log container opentitan public scratch os regression chip earlgrey asic sim vcs rom smoke latest run log uvm fatal us sw logger if sv failed to open sw log db file rom smoke prog sim dv logs txt uvm info us uvm report catcher svh uvm report catcher summary rom smoke line in log container opentitan public scratch os regression chip earlgrey asic sim vcs rom smoke latest run log uvm fatal us sw logger if sv failed to open sw log db file rom smoke prog sim dv logs txt uvm info us uvm report catcher svh uvm report catcher summary steps to reproduce commit hash where failure was observed dvsim invocation command to reproduce the failure inclusive of build and run seeds util dvsim dvsim py hw top earlgrey dv chip sim cfg hjson i rom smoke fixed seed build seed waves v h kokoro build number if applicable tests with similar or related failures no response | 0 |
37,607 | 8,468,914,139 | IssuesEvent | 2018-10-23 21:07:11 | jccastillo0007/eFacturaT | https://api.github.com/repos/jccastillo0007/eFacturaT | opened | Cuando una factura se genera de manera automática (programada) no se le asigna fecha de vencimiento | bug defect | Y por lo tanto no tiene CxCStatus asociado, y por lo tanto no se le puede generar el REP.... | 1.0 | Cuando una factura se genera de manera automática (programada) no se le asigna fecha de vencimiento - Y por lo tanto no tiene CxCStatus asociado, y por lo tanto no se le puede generar el REP.... | defect | cuando una factura se genera de manera automática programada no se le asigna fecha de vencimiento y por lo tanto no tiene cxcstatus asociado y por lo tanto no se le puede generar el rep | 1 |
70,283 | 30,603,471,109 | IssuesEvent | 2023-07-22 17:42:08 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | Using aws_vpc_dhcp_options datasource with FSX | bug service/ec2 stale service/fsx | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
Terraform - 1.0.0
AWS - 3.46.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
Data -> aws_vpc_dhcp_options
Resource -> aws_fsx_windows_file_system
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
```hcl
data "aws_vpc" "vpc" {
id = var.vpc_id
}
data "aws_vpc_dhcp_options" "domainsetup" {
dhcp_options_id = data.aws_vpc.vpc.dhcp_options_id
}
resource "aws_fsx_windows_file_system" "fsx" {
storage_capacity = var.storage_capacity
subnet_ids = [var.subnet_id]
throughput_capacity = var.fsx_throughput_capacity
automatic_backup_retention_days = 14
daily_automatic_backup_start_time = "02:00"
weekly_maintenance_start_time = "7:23:00"
security_group_ids = [aws_security_group.fsx.id]
self_managed_active_directory {
dns_ips = data.aws_vpc_dhcp_options.domainsetup.domain_name_servers
domain_name = data.aws_vpc_dhcp_options.domainsetup.domain_name
username = var.ad_username
password = var.ad_password
}
tags = {
Name = var.name
TeamEmail = "redacted"
Description = "File Store"
ShortCode = "ALLCUST"
Environment = var.env
CostCentre = "redacted"
Product = var.product
Datadog = "true"
ManagedBy = "Terraform"
TerraformRepo = "redacted"
}
}
```
### Expected Behavior
data.aws_vpc_dhcp_options.domainsetup.domain_name_servers should output the DNS server IP addresses in a usable format for the dns_ips stage which should then create a self managed AD joined FSX.
### Actual Behavior
When trying to apply:
> Error: expected self_managed_active_directory.0.dns_ips.0 to contain a valid IP, got: 000.00.00.00
(Actual IP Address redacted but a valid IP is returned)
I have also tried to use tolist() and other formats, including accessing each DNS IP individually using the index but to no avail.
### Steps to Reproduce
1. `terraform apply`
| 2.0 | Using aws_vpc_dhcp_options datasource with FSX - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform CLI and Terraform AWS Provider Version
Terraform - 1.0.0
AWS - 3.46.0
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
Data -> aws_vpc_dhcp_options
Resource -> aws_fsx_windows_file_system
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
```hcl
data "aws_vpc" "vpc" {
id = var.vpc_id
}
data "aws_vpc_dhcp_options" "domainsetup" {
dhcp_options_id = data.aws_vpc.vpc.dhcp_options_id
}
resource "aws_fsx_windows_file_system" "fsx" {
storage_capacity = var.storage_capacity
subnet_ids = [var.subnet_id]
throughput_capacity = var.fsx_throughput_capacity
automatic_backup_retention_days = 14
daily_automatic_backup_start_time = "02:00"
weekly_maintenance_start_time = "7:23:00"
security_group_ids = [aws_security_group.fsx.id]
self_managed_active_directory {
dns_ips = data.aws_vpc_dhcp_options.domainsetup.domain_name_servers
domain_name = data.aws_vpc_dhcp_options.domainsetup.domain_name
username = var.ad_username
password = var.ad_password
}
tags = {
Name = var.name
TeamEmail = "redacted"
Description = "File Store"
ShortCode = "ALLCUST"
Environment = var.env
CostCentre = "redacted"
Product = var.product
Datadog = "true"
ManagedBy = "Terraform"
TerraformRepo = "redacted"
}
}
```
### Expected Behavior
data.aws_vpc_dhcp_options.domainsetup.domain_name_servers should output the DNS server IP addresses in a usable format for the dns_ips stage which should then create a self managed AD joined FSX.
### Actual Behavior
When trying to apply:
> Error: expected self_managed_active_directory.0.dns_ips.0 to contain a valid IP, got: 000.00.00.00
(Actual IP Address redacted but a valid IP is returned)
I have also tried to use tolist() and other formats, including accessing each DNS IP individually using the index but to no avail.
### Steps to Reproduce
1. `terraform apply`
| non_defect | using aws vpc dhcp options datasource with fsx community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform cli and terraform aws provider version terraform aws affected resource s data aws vpc dhcp options resource aws fsx windows file system terraform configuration files please include all terraform configurations required to reproduce the bug bug reports without a functional reproduction may be closed without investigation hcl data aws vpc vpc id var vpc id data aws vpc dhcp options domainsetup dhcp options id data aws vpc vpc dhcp options id resource aws fsx windows file system fsx storage capacity var storage capacity subnet ids throughput capacity var fsx throughput capacity automatic backup retention days daily automatic backup start time weekly maintenance start time security group ids self managed active directory dns ips data aws vpc dhcp options domainsetup domain name servers domain name data aws vpc dhcp options domainsetup domain name username var ad username password var ad password tags name var name teamemail redacted description file store shortcode allcust environment var env costcentre redacted product var product datadog true managedby terraform terraformrepo redacted expected behavior data aws vpc dhcp options domainsetup domain name servers should output the dns server ip addresses in a usable format for the dns ips stage which should then create a self managed ad joined fsx actual behavior when trying to apply error expected self managed active directory dns ips to contain a valid ip got actual ip address redacted but a valid ip is returned i have also tried to use tolist and other formats including accessing each dns ip individually using the index but to no avail steps to reproduce terraform apply | 0 |
255,046 | 8,102,778,973 | IssuesEvent | 2018-08-13 04:18:29 | CAU-Kiel-Tech-Inf/socha | https://api.github.com/repos/CAU-Kiel-Tech-Inf/socha | opened | Server aus gezipptem Build laesst sich nicht starten | bug priority | Starten des mit Gradle gebauten Servers
```
./gradelw build
cd build/deploy
unzip software-challenge-server.zip -d software-challenge-server
cd software-challenge-server
./start.sh
```
ergibt die Fehlermeldung
```
no main manifest attribute, in software-challenge-server.jar
``` | 1.0 | Server aus gezipptem Build laesst sich nicht starten - Starten des mit Gradle gebauten Servers
```
./gradelw build
cd build/deploy
unzip software-challenge-server.zip -d software-challenge-server
cd software-challenge-server
./start.sh
```
ergibt die Fehlermeldung
```
no main manifest attribute, in software-challenge-server.jar
``` | non_defect | server aus gezipptem build laesst sich nicht starten starten des mit gradle gebauten servers gradelw build cd build deploy unzip software challenge server zip d software challenge server cd software challenge server start sh ergibt die fehlermeldung no main manifest attribute in software challenge server jar | 0 |
2,476 | 2,607,904,383 | IssuesEvent | 2015-02-26 00:14:59 | chrsmithdemos/zen-coding | https://api.github.com/repos/chrsmithdemos/zen-coding | closed | Self commenting for easy identification of tags in large codes | auto-migrated Priority-Medium Type-Defect | ```
I am proposing this as an enhancement to zen-coding.
zen-coding produces a formatted output which is neat. Now with HUGE codes
that sometimes developers have to handle, they fall in to a complex nest of
various HTML tags. For dealing with this we put a comment at the end of
each tag so that they could be identified properly. This is a big big help
when debugging a faulty markup
An example is:
<div id="treatment">
<div id="nav">
... ... 1mn lines .. ..
</div><!--/div#nav-->
... ... 1mn lines .. ..
</div><!--div#treatment-->
I would love to see such a feature in zen-coding. If zen-coding could auto
insert these closing comments, it will really help.
Thanks,
Kishu
```
-----
Original issue reported on code.google.com by `kish...@gmail.com` on 7 May 2010 at 8:36 | 1.0 | Self commenting for easy identification of tags in large codes - ```
I am proposing this as an enhancement to zen-coding.
zen-coding produces a formatted output which is neat. Now with HUGE codes
that sometimes developers have to handle, they fall in to a complex nest of
various HTML tags. For dealing with this we put a comment at the end of
each tag so that they could be identified properly. This is a big big help
when debugging a faulty markup
An example is:
<div id="treatment">
<div id="nav">
... ... 1mn lines .. ..
</div><!--/div#nav-->
... ... 1mn lines .. ..
</div><!--div#treatment-->
I would love to see such a feature in zen-coding. If zen-coding could auto
insert these closing comments, it will really help.
Thanks,
Kishu
```
-----
Original issue reported on code.google.com by `kish...@gmail.com` on 7 May 2010 at 8:36 | defect | self commenting for easy identification of tags in large codes i am proposing this as an enhancement to zen coding zen coding produces a formatted output which is neat now with huge codes that sometimes developers have to handle they fall in to a complex nest of various html tags for dealing with this we put a comment at the end of each tag so that they could be identified properly this is a big big help when debugging a faulty markup an example is lines lines i would love to see such a feature in zen coding if zen coding could auto insert these closing comments it will really help thanks kishu original issue reported on code google com by kish gmail com on may at | 1 |
132,874 | 18,769,983,450 | IssuesEvent | 2021-11-06 17:03:00 | celery/celery | https://api.github.com/repos/celery/celery | closed | Provide a generic way to `delay` 3rd-party authored methods/function | Issue Type: Enhancement Status: Design Decision Needed ✘ | <!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
enhancement requests which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Enhancement%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical enhancement to an existing feature.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Issue+Type%3A+Enhancement%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed enhancements.
- [ ] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the if the same enhancement was already implemented in the
master branch.
- [x] I have included all related issues and possible duplicate issues in this issue
(If there are none, check this box anyway).
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
# Brief Summary
<!--
Please include a brief summary of what the enhancement is
and why it is needed.
-->
This proposes Celery package to provide a generic `delay(func, *args, **kwargs)` that wraps any func or method thrown at it.
# Design
Historically, to make a function "able" to be delayed via Celery you need to wrap it in a decorator at least. This can be ok for own code, but not for 3rd-party code. Also makes your code coupled with Celery or at least with Celery API for jobs.
"Modern" Python versions can natively pickle functions, so this is no more needed for simpler uses. If Celery provides a shared task that receives a function and its arguments, no custom code is needed for standard user code.
Example:
```py
# mymodule.py
import requests
def ping_website(url):
requests.get(url)
# main.py
from celery.contrib import delay
from .mymodule import ping_website
from .myothermodule import greet
delay(ping_website, 'http://www.google.com.br')
delay(greet)
```
Note how the Celery dependent part occur only on the main module, not on each of function definitions. Even better, as we do not care about the response, such call could be wrote as:
```py
delay(requests.get, 'http://www.google.com.br')
```
This is not possible with today's prescription of "create a Task to be able to .delay it", you need to import the 3rd-party funcs and code in a task that wraps it to be able to send to Celery.
## Architectural Considerations
<!--
If more components other than Celery are involved,
describe them here and the effect it would have on Celery.
-->
Code `delay()`ed can be considered more convenient. Yet could potentially carry more data than needed or more data than the former way of calling. People can optimize by writing in the former way if needed.
Methods carry the whole `self` when serialized and this should be informed to the user, as they can consider if is acceptable to pay the cost to do so. Also is needed to inform that such feature needs pickle and reproduce the notice about the danger of accepting pickled jobs.
## Proposed Behavior
<!--
Please describe in detail how this enhancement is going to change the behavior
of an existing feature.
Describe what happens in case of failures as well if applicable.
-->
I propose some code along the following to be included on Celery release and the usage to be featured on Quickstart as the simpler way to sideload your code via Celery
Code to be included:
```py
# celery/contrib.py
@shared_task(serializer='pickle')
def _call(func, *args, **kwargs):
return func(*args, **kwargs)
def delay(func, *args, **kwargs) -> celery.result.AsyncResult:
"""
Defer a call of func(*args, **kwargs) to a Celery worker
"""
return _call.delay(func, *args, **kwargs)
```
Example to be included in docs:
```py
from celery.contrib import delay
# synchronous version:
my_slow_procedure(3, 4, spam='eggs')
# sideloaded version:
delay(my_slow_procedure, 3, 4, spam='eggs')
```
## Proposed UI/UX
<!--
Please provide your ideas for the API, CLI options,
configuration key names etc. that will be adjusted for this enhancement.
-->
The simpler usage is to `delay(fn, *args, **kwargs)` arbitrary functions/methods.
Anything more advanced than this could be considered advanced enough to honor the today's way of do it.
## Diagrams
<!--
Please include any diagrams that might be relevant
to the implementation of this enhancement such as:
* Class Diagrams
* Sequence Diagrams
* Activity Diagrams
You can drag and drop images into the text box to attach them to this issue.
-->
N/A
## Alternatives
<!--
If you have considered any alternative implementations
describe them in detail below.
-->
None
| 1.0 | Provide a generic way to `delay` 3rd-party authored methods/function - <!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
enhancement requests which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?q=is%3Aissue+label%3A%22Issue+Type%3A+Enhancement%22+-label%3A%22Category%3A+Documentation%22)
for similar or identical enhancement to an existing feature.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Issue+Type%3A+Enhancement%22+-label%3A%22Category%3A+Documentation%22)
for existing proposed enhancements.
- [ ] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the if the same enhancement was already implemented in the
master branch.
- [x] I have included all related issues and possible duplicate issues in this issue
(If there are none, check this box anyway).
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
# Brief Summary
<!--
Please include a brief summary of what the enhancement is
and why it is needed.
-->
This proposes Celery package to provide a generic `delay(func, *args, **kwargs)` that wraps any func or method thrown at it.
# Design
Historically, to make a function "able" to be delayed via Celery you need to wrap it in a decorator at least. This can be ok for own code, but not for 3rd-party code. Also makes your code coupled with Celery or at least with Celery API for jobs.
"Modern" Python versions can natively pickle functions, so this is no more needed for simpler uses. If Celery provides a shared task that receives a function and its arguments, no custom code is needed for standard user code.
Example:
```py
# mymodule.py
import requests
def ping_website(url):
requests.get(url)
# main.py
from celery.contrib import delay
from .mymodule import ping_website
from .myothermodule import greet
delay(ping_website, 'http://www.google.com.br')
delay(greet)
```
Note how the Celery dependent part occur only on the main module, not on each of function definitions. Even better, as we do not care about the response, such call could be wrote as:
```py
delay(requests.get, 'http://www.google.com.br')
```
This is not possible with today's prescription of "create a Task to be able to .delay it", you need to import the 3rd-party funcs and code in a task that wraps it to be able to send to Celery.
## Architectural Considerations
<!--
If more components other than Celery are involved,
describe them here and the effect it would have on Celery.
-->
Code `delay()`ed can be considered more convenient. Yet could potentially carry more data than needed or more data than the former way of calling. People can optimize by writing in the former way if needed.
Methods carry the whole `self` when serialized and this should be informed to the user, as they can consider if is acceptable to pay the cost to do so. Also is needed to inform that such feature needs pickle and reproduce the notice about the danger of accepting pickled jobs.
## Proposed Behavior
<!--
Please describe in detail how this enhancement is going to change the behavior
of an existing feature.
Describe what happens in case of failures as well if applicable.
-->
I propose some code along the following to be included on Celery release and the usage to be featured on Quickstart as the simpler way to sideload your code via Celery
Code to be included:
```py
# celery/contrib.py
@shared_task(serializer='pickle')
def _call(func, *args, **kwargs):
return func(*args, **kwargs)
def delay(func, *args, **kwargs) -> celery.result.AsyncResult:
"""
Defer a call of func(*args, **kwargs) to a Celery worker
"""
return _call.delay(func, *args, **kwargs)
```
Example to be included in docs:
```py
from celery.contrib import delay
# synchronous version:
my_slow_procedure(3, 4, spam='eggs')
# sideloaded version:
delay(my_slow_procedure, 3, 4, spam='eggs')
```
## Proposed UI/UX
<!--
Please provide your ideas for the API, CLI options,
configuration key names etc. that will be adjusted for this enhancement.
-->
The simpler usage is to `delay(fn, *args, **kwargs)` arbitrary functions/methods.
Anything more advanced than this could be considered advanced enough to honor the today's way of do it.
## Diagrams
<!--
Please include any diagrams that might be relevant
to the implementation of this enhancement such as:
* Class Diagrams
* Sequence Diagrams
* Activity Diagrams
You can drag and drop images into the text box to attach them to this issue.
-->
N/A
## Alternatives
<!--
If you have considered any alternative implementations
describe them in detail below.
-->
None
| non_defect | provide a generic way to delay party authored methods function please fill this template entirely and do not erase parts of it we reserve the right to close without a response enhancement requests which are incomplete checklist to check an item on the list replace with i have checked the for similar or identical enhancement to an existing feature i have checked the for existing proposed enhancements i have checked the to find out if the if the same enhancement was already implemented in the master branch i have included all related issues and possible duplicate issues in this issue if there are none check this box anyway related issues and possible duplicates please make sure to search and mention any related issues or possible duplicates to this issue as requested by the checklist above this may or may not include issues in other repositories that the celery project maintains or other repositories that are dependencies of celery if you don t know how to mention issues please refer to github s documentation on the subject related issues none possible duplicates none brief summary please include a brief summary of what the enhancement is and why it is needed this proposes celery package to provide a generic delay func args kwargs that wraps any func or method thrown at it design historically to make a function able to be delayed via celery you need to wrap it in a decorator at least this can be ok for own code but not for party code also makes your code coupled with celery or at least with celery api for jobs modern python versions can natively pickle functions so this is no more needed for simpler uses if celery provides a shared task that receives a function and its arguments no custom code is needed for standard user code example py mymodule py import requests def ping website url requests get url main py from celery contrib import delay from mymodule import ping website from myothermodule import greet delay ping website delay greet note how the celery dependent part occur only on the main module not on each of function definitions even better as we do not care about the response such call could be wrote as py delay requests get this is not possible with today s prescription of create a task to be able to delay it you need to import the party funcs and code in a task that wraps it to be able to send to celery architectural considerations if more components other than celery are involved describe them here and the effect it would have on celery code delay ed can be considered more convenient yet could potentially carry more data than needed or more data than the former way of calling people can optimize by writing in the former way if needed methods carry the whole self when serialized and this should be informed to the user as they can consider if is acceptable to pay the cost to do so also is needed to inform that such feature needs pickle and reproduce the notice about the danger of accepting pickled jobs proposed behavior please describe in detail how this enhancement is going to change the behavior of an existing feature describe what happens in case of failures as well if applicable i propose some code along the following to be included on celery release and the usage to be featured on quickstart as the simpler way to sideload your code via celery code to be included py celery contrib py shared task serializer pickle def call func args kwargs return func args kwargs def delay func args kwargs celery result asyncresult defer a call of func args kwargs to a celery worker return call delay func args kwargs example to be included in docs py from celery contrib import delay synchronous version my slow procedure spam eggs sideloaded version delay my slow procedure spam eggs proposed ui ux please provide your ideas for the api cli options configuration key names etc that will be adjusted for this enhancement the simpler usage is to delay fn args kwargs arbitrary functions methods anything more advanced than this could be considered advanced enough to honor the today s way of do it diagrams please include any diagrams that might be relevant to the implementation of this enhancement such as class diagrams sequence diagrams activity diagrams you can drag and drop images into the text box to attach them to this issue n a alternatives if you have considered any alternative implementations describe them in detail below none | 0 |
57,997 | 16,255,158,794 | IssuesEvent | 2021-05-08 03:07:54 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | Paginator TypeError on indirect controller pagination | compatibility defect pagination | This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* CakePHP Version: latest master
### What you did
Upgrade cake3 app to cake4
URL `/clubs/users` without any args.
```php
public $paginate = ['order' => ['Clubs.name' => 'ASC']];
// ClubController users action
$query = $this->Clubs->ClubUsers->find();
$clubUsers = $this->paginate($query)->toArray();
```
### What happened
```
[TypeError] strtolower() expects parameter 1 to be string, bool given in
/var/www/site/vendor/cakephp/cakephp/src/View/Helper/PaginatorHelper.php on line 566
```
I solved it on project level for now using
```php
// HACK FOR NOW
if ($this->request->getAttribute('paging') && $this->request->getAttribute('paging')['ClubUsers']['direction'] === false) {
$paging = $this->request->getAttribute('paging');
$paging['ClubUsers']['direction'] = 'asc';
$this->request = $this->request->withAttribute('paging', $paging);
}
```
ServerRequest content
```
protected attributes => [
'isAjax' => false,
'paging' => [
'ClubUsers' => [
'count' => (int) 614,
'current' => (int) 20,
'perPage' => (int) 20,
'page' => (int) 1,
'requestedPage' => (int) 1,
'pageCount' => (int) 31,
'start' => (int) 1,
'end' => (int) 20,
'prevPage' => false,
'nextPage' => true,
'sort' => 'Clubs.name',
'direction' => false, // !
'sortDefault' => 'Clubs.name',
'directionDefault' => 'ASC',
'completeSort' => [ ],
'limit' => null,
'scope' => null,
'finder' => 'all',
],
],
]
```
### What you expected to happen
No TypeError, either meaningful exception to dev, or defaulting to a sane default for direction (asc/desc etc).
I assume there is some shenanigans going on inside the default whitelist validation for aliases models.
Which then, combined with a non-direct component paginate seems to not work with the model being used for sorting.
| 1.0 | Paginator TypeError on indirect controller pagination - This is a (multiple allowed):
* [x] bug
* [ ] enhancement
* CakePHP Version: latest master
### What you did
Upgrade cake3 app to cake4
URL `/clubs/users` without any args.
```php
public $paginate = ['order' => ['Clubs.name' => 'ASC']];
// ClubController users action
$query = $this->Clubs->ClubUsers->find();
$clubUsers = $this->paginate($query)->toArray();
```
### What happened
```
[TypeError] strtolower() expects parameter 1 to be string, bool given in
/var/www/site/vendor/cakephp/cakephp/src/View/Helper/PaginatorHelper.php on line 566
```
I solved it on project level for now using
```php
// HACK FOR NOW
if ($this->request->getAttribute('paging') && $this->request->getAttribute('paging')['ClubUsers']['direction'] === false) {
$paging = $this->request->getAttribute('paging');
$paging['ClubUsers']['direction'] = 'asc';
$this->request = $this->request->withAttribute('paging', $paging);
}
```
ServerRequest content
```
protected attributes => [
'isAjax' => false,
'paging' => [
'ClubUsers' => [
'count' => (int) 614,
'current' => (int) 20,
'perPage' => (int) 20,
'page' => (int) 1,
'requestedPage' => (int) 1,
'pageCount' => (int) 31,
'start' => (int) 1,
'end' => (int) 20,
'prevPage' => false,
'nextPage' => true,
'sort' => 'Clubs.name',
'direction' => false, // !
'sortDefault' => 'Clubs.name',
'directionDefault' => 'ASC',
'completeSort' => [ ],
'limit' => null,
'scope' => null,
'finder' => 'all',
],
],
]
```
### What you expected to happen
No TypeError, either meaningful exception to dev, or defaulting to a sane default for direction (asc/desc etc).
I assume there is some shenanigans going on inside the default whitelist validation for aliases models.
Which then, combined with a non-direct component paginate seems to not work with the model being used for sorting.
| defect | paginator typeerror on indirect controller pagination this is a multiple allowed bug enhancement cakephp version latest master what you did upgrade app to url clubs users without any args php public paginate clubcontroller users action query this clubs clubusers find clubusers this paginate query toarray what happened strtolower expects parameter to be string bool given in var www site vendor cakephp cakephp src view helper paginatorhelper php on line i solved it on project level for now using php hack for now if this request getattribute paging this request getattribute paging false paging this request getattribute paging paging asc this request this request withattribute paging paging serverrequest content protected attributes isajax false paging clubusers count int current int perpage int page int requestedpage int pagecount int start int end int prevpage false nextpage true sort clubs name direction false sortdefault clubs name directiondefault asc completesort limit null scope null finder all what you expected to happen no typeerror either meaningful exception to dev or defaulting to a sane default for direction asc desc etc i assume there is some shenanigans going on inside the default whitelist validation for aliases models which then combined with a non direct component paginate seems to not work with the model being used for sorting | 1 |
2,652 | 5,003,149,932 | IssuesEvent | 2016-12-11 19:26:57 | ejuor/ejuor.github.io | https://api.github.com/repos/ejuor/ejuor.github.io | closed | Implement function to collect data from Twitch.tv API | requirement | Use `https://wind-bow.hyperdev.space/` instead of `https://api.twitch.tv/kraken` or aquire own API key from Twitch | 1.0 | Implement function to collect data from Twitch.tv API - Use `https://wind-bow.hyperdev.space/` instead of `https://api.twitch.tv/kraken` or aquire own API key from Twitch | non_defect | implement function to collect data from twitch tv api use instead of or aquire own api key from twitch | 0 |
436,914 | 12,555,292,112 | IssuesEvent | 2020-06-07 05:33:49 | Aizher800/CRE311-RoyalChairBrawl | https://api.github.com/repos/Aizher800/CRE311-RoyalChairBrawl | opened | Player characters stack and collide with eachother | Priority: Med bug | **Describe the bug**
While in the _Arena_ scene, we noticed that the player characters could stack on top of each other. They are also unable to pass through each other.
**Version**
Game build: RoyalChairBrawl_alpha_v002
Unity: 2019.3.5
**To Reproduce**
Steps to reproduce the behaviour:
1. Open the Royal Chair Brawl Unity project.
2. Navigate to and open the _Arena_ scene.
3. Press play and arrange the player characters within the scene.
a. Move the characters so that at least one is on a platform above the others.

b. Move the higher character to the edge of the platform and arrange the others so that they are directly below.

c. Move the higher character off the edge.

4. See error
**Expected behaviour**
The player is able to move their character around the game levels without colliding with other player characters. They are not able to stack on top of each other.
**Screenshots**
Additional to that included in the reproduction steps, this is a screenshot of the bug. This screenshot is from the _Ew_Animation_ scene.

**Desktop System:**
Windows10
**Additional context**
This bug was identified after playtesting, by the Royal Chair Brawl developers.
| 1.0 | Player characters stack and collide with eachother - **Describe the bug**
While in the _Arena_ scene, we noticed that the player characters could stack on top of each other. They are also unable to pass through each other.
**Version**
Game build: RoyalChairBrawl_alpha_v002
Unity: 2019.3.5
**To Reproduce**
Steps to reproduce the behaviour:
1. Open the Royal Chair Brawl Unity project.
2. Navigate to and open the _Arena_ scene.
3. Press play and arrange the player characters within the scene.
a. Move the characters so that at least one is on a platform above the others.

b. Move the higher character to the edge of the platform and arrange the others so that they are directly below.

c. Move the higher character off the edge.

4. See error
**Expected behaviour**
The player is able to move their character around the game levels without colliding with other player characters. They are not able to stack on top of each other.
**Screenshots**
Additional to that included in the reproduction steps, this is a screenshot of the bug. This screenshot is from the _Ew_Animation_ scene.

**Desktop System:**
Windows10
**Additional context**
This bug was identified after playtesting, by the Royal Chair Brawl developers.
| non_defect | player characters stack and collide with eachother describe the bug while in the arena scene we noticed that the player characters could stack on top of each other they are also unable to pass through each other version game build royalchairbrawl alpha unity to reproduce steps to reproduce the behaviour open the royal chair brawl unity project navigate to and open the arena scene press play and arrange the player characters within the scene a move the characters so that at least one is on a platform above the others b move the higher character to the edge of the platform and arrange the others so that they are directly below c move the higher character off the edge see error expected behaviour the player is able to move their character around the game levels without colliding with other player characters they are not able to stack on top of each other screenshots additional to that included in the reproduction steps this is a screenshot of the bug this screenshot is from the ew animation scene desktop system additional context this bug was identified after playtesting by the royal chair brawl developers | 0 |
310,022 | 26,694,645,965 | IssuesEvent | 2023-01-27 09:17:31 | MetaMask/metamask-mobile | https://api.github.com/repos/MetaMask/metamask-mobile | opened | Improve test coverage of send flow | tests team-confirmations | Improve test coverage of send flow on mobile. This includes:
1. Writing more unit test coverage for different components used in send flow
2. Writing more e2e test for send screens:
 | 1.0 | Improve test coverage of send flow - Improve test coverage of send flow on mobile. This includes:
1. Writing more unit test coverage for different components used in send flow
2. Writing more e2e test for send screens:
 | non_defect | improve test coverage of send flow improve test coverage of send flow on mobile this includes writing more unit test coverage for different components used in send flow writing more test for send screens | 0 |
80,788 | 10,059,316,308 | IssuesEvent | 2019-07-22 15:57:26 | ipfs/docs | https://api.github.com/repos/ipfs/docs | closed | Features List: Set up collaborative workspace | OKR: Features List design-ux | Set up a collaborative workspace where the docs task force as a whole can contribute ideas for the features list:
- Make it tough to duplicate entries (because async) but easy to annotate others' entries with comments of your own (because async)
- Able to include links and screenshots
- Do we need a commenting feature so we can start inline discussions?
- Other considerations? | 1.0 | Features List: Set up collaborative workspace - Set up a collaborative workspace where the docs task force as a whole can contribute ideas for the features list:
- Make it tough to duplicate entries (because async) but easy to annotate others' entries with comments of your own (because async)
- Able to include links and screenshots
- Do we need a commenting feature so we can start inline discussions?
- Other considerations? | non_defect | features list set up collaborative workspace set up a collaborative workspace where the docs task force as a whole can contribute ideas for the features list make it tough to duplicate entries because async but easy to annotate others entries with comments of your own because async able to include links and screenshots do we need a commenting feature so we can start inline discussions other considerations | 0 |
290,588 | 21,890,490,027 | IssuesEvent | 2022-05-20 00:39:58 | devcodeabode/MoDiBo-Plugin-Template | https://api.github.com/repos/devcodeabode/MoDiBo-Plugin-Template | opened | Implement Template Plugin main.js | blocked documentation enhancement good first issue | - [ ] Implement `main.js` for the template plugin following [the code on the MoDiBo Wiki](https://github.com/devcodeabode/MoDiBo/wiki/Blueprint-Plugin)
- [ ] Update the README to further explain the methods of processing information. Be sure to clearly state that at least one of these must be exported, but more than one can be as well.
1. `processCommand`: Gets the command and args strings. It's only ever called when a command is issued that matches one in its array. The `COMMANDS` array is also required to be exported.
2. `processMessage`, which gets called on every message no matter if it has a command or not. This will receive the message object itself, not a string representation.
3. `startCron` gets run after all plugins are loaded and should be used to start tasks on an interval.
- [ ] Add a section to the README clarifying the difference between `startCron` and `onLoad`
- `onLoad` is not required. It's run when the plugin is first loaded. There is no guarantee that any other plugins are loaded. This should be used to perform any initial setup required for the plugin.
- `startCron` is one of the three options for processing information. It is run after all plugins are loaded, and only called once. It is intended to kick off tasks run on an interval, hence the name "Cron."
**Dependencies**
- [ ] Final approval of the Blueprint Plugin schema from project leadership
| 1.0 | Implement Template Plugin main.js - - [ ] Implement `main.js` for the template plugin following [the code on the MoDiBo Wiki](https://github.com/devcodeabode/MoDiBo/wiki/Blueprint-Plugin)
- [ ] Update the README to further explain the methods of processing information. Be sure to clearly state that at least one of these must be exported, but more than one can be as well.
1. `processCommand`: Gets the command and args strings. It's only ever called when a command is issued that matches one in its array. The `COMMANDS` array is also required to be exported.
2. `processMessage`, which gets called on every message no matter if it has a command or not. This will receive the message object itself, not a string representation.
3. `startCron` gets run after all plugins are loaded and should be used to start tasks on an interval.
- [ ] Add a section to the README clarifying the difference between `startCron` and `onLoad`
- `onLoad` is not required. It's run when the plugin is first loaded. There is no guarantee that any other plugins are loaded. This should be used to perform any initial setup required for the plugin.
- `startCron` is one of the three options for processing information. It is run after all plugins are loaded, and only called once. It is intended to kick off tasks run on an interval, hence the name "Cron."
**Dependencies**
- [ ] Final approval of the Blueprint Plugin schema from project leadership
| non_defect | implement template plugin main js implement main js for the template plugin following update the readme to further explain the methods of processing information be sure to clearly state that at least one of these must be exported but more than one can be as well processcommand gets the command and args strings it s only ever called when a command is issued that matches one in its array the commands array is also required to be exported processmessage which gets called on every message no matter if it has a command or not this will receive the message object itself not a string representation startcron gets run after all plugins are loaded and should be used to start tasks on an interval add a section to the readme clarifying the difference between startcron and onload onload is not required it s run when the plugin is first loaded there is no guarantee that any other plugins are loaded this should be used to perform any initial setup required for the plugin startcron is one of the three options for processing information it is run after all plugins are loaded and only called once it is intended to kick off tasks run on an interval hence the name cron dependencies final approval of the blueprint plugin schema from project leadership | 0 |
630,537 | 20,112,295,014 | IssuesEvent | 2022-02-07 16:06:25 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | Gif retry not successful once socket is closed | bug Chat priority 2: medium | 1. Run desktop app
2. Open gif popup widget
2. Disconnect internet
5. Reconnect internet
6. Open gif popup widget again (should see "Error while contacting Tenor API, please retry")
6. Click "retry" button
7. Expected: tenor gifs should be fetched
8. Actual: three failed retry attempts with "socket closed" error message:
```
ERR 2021-09-23 14:58:18.758+10:00 could not query tenor API topics="gif" tid=43363289 file=gif.nim:88 msg="/Users/emizzle/repos/status-im/status-desktop/vendor/nimbus-build-system/vendor/Nim/lib/pure/net.nim(1463, 9) `not socket.isClosed` Cannot `send` on a closed socket"
ERR 2021-09-23 14:58:18.858+10:00 could not query tenor API topics="gif" tid=43363289 file=gif.nim:88 msg="/Users/emizzle/repos/status-im/status-desktop/vendor/nimbus-build-system/vendor/Nim/lib/pure/net.nim(1463, 9) `not socket.isClosed` Cannot `send` on a closed socket"
ERR 2021-09-23 14:58:19.058+10:00 could not query tenor API topics="gif" tid=43363289 file=gif.nim:88 msg="/Users/emizzle/repos/status-im/status-desktop/vendor/nimbus-build-system/vendor/Nim/lib/pure/net.nim(1463, 9) `not socket.isClosed` Cannot `send` on a closed socket"
```
 | 1.0 | Gif retry not successful once socket is closed - 1. Run desktop app
2. Open gif popup widget
2. Disconnect internet
5. Reconnect internet
6. Open gif popup widget again (should see "Error while contacting Tenor API, please retry")
6. Click "retry" button
7. Expected: tenor gifs should be fetched
8. Actual: three failed retry attempts with "socket closed" error message:
```
ERR 2021-09-23 14:58:18.758+10:00 could not query tenor API topics="gif" tid=43363289 file=gif.nim:88 msg="/Users/emizzle/repos/status-im/status-desktop/vendor/nimbus-build-system/vendor/Nim/lib/pure/net.nim(1463, 9) `not socket.isClosed` Cannot `send` on a closed socket"
ERR 2021-09-23 14:58:18.858+10:00 could not query tenor API topics="gif" tid=43363289 file=gif.nim:88 msg="/Users/emizzle/repos/status-im/status-desktop/vendor/nimbus-build-system/vendor/Nim/lib/pure/net.nim(1463, 9) `not socket.isClosed` Cannot `send` on a closed socket"
ERR 2021-09-23 14:58:19.058+10:00 could not query tenor API topics="gif" tid=43363289 file=gif.nim:88 msg="/Users/emizzle/repos/status-im/status-desktop/vendor/nimbus-build-system/vendor/Nim/lib/pure/net.nim(1463, 9) `not socket.isClosed` Cannot `send` on a closed socket"
```
 | non_defect | gif retry not successful once socket is closed run desktop app open gif popup widget disconnect internet reconnect internet open gif popup widget again should see error while contacting tenor api please retry click retry button expected tenor gifs should be fetched actual three failed retry attempts with socket closed error message err could not query tenor api topics gif tid file gif nim msg users emizzle repos status im status desktop vendor nimbus build system vendor nim lib pure net nim not socket isclosed cannot send on a closed socket err could not query tenor api topics gif tid file gif nim msg users emizzle repos status im status desktop vendor nimbus build system vendor nim lib pure net nim not socket isclosed cannot send on a closed socket err could not query tenor api topics gif tid file gif nim msg users emizzle repos status im status desktop vendor nimbus build system vendor nim lib pure net nim not socket isclosed cannot send on a closed socket | 0 |
3,603 | 2,610,065,545 | IssuesEvent | 2015-02-26 18:19:18 | chrsmith/jsjsj122 | https://api.github.com/repos/chrsmith/jsjsj122 | opened | 临海治疗前列腺炎价格 | auto-migrated Priority-Medium Type-Defect | ```
临海治疗前列腺炎价格【台州五洲生殖医院】24小时健康咨询
热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市
椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、1
18、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、
112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:22 | 1.0 | 临海治疗前列腺炎价格 - ```
临海治疗前列腺炎价格【台州五洲生殖医院】24小时健康咨询
热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市
椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、1
18、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、
112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:22 | defect | 临海治疗前列腺炎价格 临海治疗前列腺炎价格【台州五洲生殖医院】 热线 微信号tzwzszyy 医院地址 台州市 (枫南大转盘旁)乘车线路 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at | 1 |
24,021 | 3,899,774,443 | IssuesEvent | 2016-04-17 23:03:38 | STEllAR-GROUP/hpx | https://api.github.com/repos/STEllAR-GROUP/hpx | closed | /work/dmarce1/debug/include/hpx/external/boost/cache/local_cache.hpp(89): error #303: explicit type is missing ("int" assumed) HPX_MOVABLE_ONLY(local_cache); | tag: info needed type: defect | I am getting this error:
/work/dmarce1/debug/include/hpx/external/boost/cache/local_cache.hpp(89): error #303: explicit type is missing ("int" assumed)
HPX_MOVABLE_ONLY(local_cache);
when I attempt to build octotiger on queenbee. Using the exact same builds script, .modules, and .bashrc files I do not get this error on SuperMIC. Its not a result of a recent HPX commit, I checked out an older commit from a few weeks ago and got the same problem.
Does anyone have any idea what this means?
| 1.0 | /work/dmarce1/debug/include/hpx/external/boost/cache/local_cache.hpp(89): error #303: explicit type is missing ("int" assumed) HPX_MOVABLE_ONLY(local_cache); - I am getting this error:
/work/dmarce1/debug/include/hpx/external/boost/cache/local_cache.hpp(89): error #303: explicit type is missing ("int" assumed)
HPX_MOVABLE_ONLY(local_cache);
when I attempt to build octotiger on queenbee. Using the exact same builds script, .modules, and .bashrc files I do not get this error on SuperMIC. Its not a result of a recent HPX commit, I checked out an older commit from a few weeks ago and got the same problem.
Does anyone have any idea what this means?
| defect | work debug include hpx external boost cache local cache hpp error explicit type is missing int assumed hpx movable only local cache i am getting this error work debug include hpx external boost cache local cache hpp error explicit type is missing int assumed hpx movable only local cache when i attempt to build octotiger on queenbee using the exact same builds script modules and bashrc files i do not get this error on supermic its not a result of a recent hpx commit i checked out an older commit from a few weeks ago and got the same problem does anyone have any idea what this means | 1 |
81,214 | 30,753,772,599 | IssuesEvent | 2023-07-28 22:23:35 | Sigil-Ebook/flightcrew | https://api.github.com/repos/Sigil-Ebook/flightcrew | closed | Troble when using from Sigil | Type-Defect Priority-Medium auto-migrated | ```
What steps will reproduce the problem?
1. Install latest Sigil issue AS ADMINISTRATOR on Windows XP SP3
2. Run Sigil AS USER and try to validate ePUB
3. Sigil crashes after validator complaint about 'unique:path' and 'temporary
profile'
4. Suspect trouble with local variables, as everything fine when validating AS
ADMINISTRATOR
Posting here following Valloric's reply to my cry for help on MobilRead Sigil
Forum...
```
Original issue reported on code.google.com by `CFier...@gmail.com` on 16 Jan 2011 at 5:18
| 1.0 | Troble when using from Sigil - ```
What steps will reproduce the problem?
1. Install latest Sigil issue AS ADMINISTRATOR on Windows XP SP3
2. Run Sigil AS USER and try to validate ePUB
3. Sigil crashes after validator complaint about 'unique:path' and 'temporary
profile'
4. Suspect trouble with local variables, as everything fine when validating AS
ADMINISTRATOR
Posting here following Valloric's reply to my cry for help on MobilRead Sigil
Forum...
```
Original issue reported on code.google.com by `CFier...@gmail.com` on 16 Jan 2011 at 5:18
| defect | troble when using from sigil what steps will reproduce the problem install latest sigil issue as administrator on windows xp run sigil as user and try to validate epub sigil crashes after validator complaint about unique path and temporary profile suspect trouble with local variables as everything fine when validating as administrator posting here following valloric s reply to my cry for help on mobilread sigil forum original issue reported on code google com by cfier gmail com on jan at | 1 |
68,216 | 17,193,436,515 | IssuesEvent | 2021-07-16 14:09:09 | opencv/opencv | https://api.github.com/repos/opencv/opencv | closed | Support for QNX Neutrino | category: build/install | ##### System information (version)
- OpenCV => 4.5.3
- Operating System / Platform => Host: Windows 64 Bit / Target: QNX 7.1
- Compiler => QCC 7.1
##### Detailed description
Cross-compiling for QNX is not possible due to missing include headers in parallel due to lack of generalization for Unix systems.
##### Steps to reproduce
Build for any OS non-Linux system such as QNX, which is neither:
```
defined __linux__ || defined __APPLE__ || defined __GLIBC__ \
|| defined __HAIKU__ || defined __EMSCRIPTEN__ || defined __FreeBSD__ \
|| defined __OpenBSD__
```
See `parallel.cpp`.
##### Issue submission checklist
- [x] I report the issue, it's not a question
<!--
OpenCV team works with forum.opencv.org, Stack Overflow and other communities
to discuss problems. Tickets with question without real issue statement will be
closed.
-->
- [x] I checked the problem with documentation, FAQ, open issues,
forum.opencv.org, Stack Overflow, etc and have not found solution
<!--
Places to check:
* OpenCV documentation: https://docs.opencv.org
* FAQ page: https://github.com/opencv/opencv/wiki/FAQ
* OpenCV forum: https://forum.opencv.org
* OpenCV issue tracker: https://github.com/opencv/opencv/issues?q=is%3Aissue
* Stack Overflow branch: https://stackoverflow.com/questions/tagged/opencv
-->
- [x] I updated to latest OpenCV version and the issue is still there
<!--
master branch for OpenCV 4.x and 3.4 branch for OpenCV 3.x releases.
OpenCV team supports only latest release for each branch.
The ticket is closed, if the problem is not reproduced with modern version.
-->
- [x] There is reproducer code and related data files: videos, images, onnx, etc
<!--
The best reproducer -- test case for OpenCV that we can add to the library.
Recommendations for media files and binary files:
* Try to reproduce the issue with images and videos in opencv_extra repository
to reduce attachment size
* Use PNG for images, if you report some CV related bug, but not image reader
issue
* Attach the image as archive to the ticket, if you report some reader issue.
Image hosting services compress images and it breaks the repro code.
* Provide ONNX file for some public model or ONNX file with with random weights,
if you report ONNX parsing or handling issue. Architecture details diagram
from netron tool can be very useful too. See https://lutzroeder.github.io/netron/
-->
| 1.0 | Support for QNX Neutrino - ##### System information (version)
- OpenCV => 4.5.3
- Operating System / Platform => Host: Windows 64 Bit / Target: QNX 7.1
- Compiler => QCC 7.1
##### Detailed description
Cross-compiling for QNX is not possible due to missing include headers in parallel due to lack of generalization for Unix systems.
##### Steps to reproduce
Build for any OS non-Linux system such as QNX, which is neither:
```
defined __linux__ || defined __APPLE__ || defined __GLIBC__ \
|| defined __HAIKU__ || defined __EMSCRIPTEN__ || defined __FreeBSD__ \
|| defined __OpenBSD__
```
See `parallel.cpp`.
##### Issue submission checklist
- [x] I report the issue, it's not a question
<!--
OpenCV team works with forum.opencv.org, Stack Overflow and other communities
to discuss problems. Tickets with question without real issue statement will be
closed.
-->
- [x] I checked the problem with documentation, FAQ, open issues,
forum.opencv.org, Stack Overflow, etc and have not found solution
<!--
Places to check:
* OpenCV documentation: https://docs.opencv.org
* FAQ page: https://github.com/opencv/opencv/wiki/FAQ
* OpenCV forum: https://forum.opencv.org
* OpenCV issue tracker: https://github.com/opencv/opencv/issues?q=is%3Aissue
* Stack Overflow branch: https://stackoverflow.com/questions/tagged/opencv
-->
- [x] I updated to latest OpenCV version and the issue is still there
<!--
master branch for OpenCV 4.x and 3.4 branch for OpenCV 3.x releases.
OpenCV team supports only latest release for each branch.
The ticket is closed, if the problem is not reproduced with modern version.
-->
- [x] There is reproducer code and related data files: videos, images, onnx, etc
<!--
The best reproducer -- test case for OpenCV that we can add to the library.
Recommendations for media files and binary files:
* Try to reproduce the issue with images and videos in opencv_extra repository
to reduce attachment size
* Use PNG for images, if you report some CV related bug, but not image reader
issue
* Attach the image as archive to the ticket, if you report some reader issue.
Image hosting services compress images and it breaks the repro code.
* Provide ONNX file for some public model or ONNX file with with random weights,
if you report ONNX parsing or handling issue. Architecture details diagram
from netron tool can be very useful too. See https://lutzroeder.github.io/netron/
-->
| non_defect | support for qnx neutrino system information version opencv operating system platform host windows bit target qnx compiler qcc detailed description cross compiling for qnx is not possible due to missing include headers in parallel due to lack of generalization for unix systems steps to reproduce build for any os non linux system such as qnx which is neither defined linux defined apple defined glibc defined haiku defined emscripten defined freebsd defined openbsd see parallel cpp issue submission checklist i report the issue it s not a question opencv team works with forum opencv org stack overflow and other communities to discuss problems tickets with question without real issue statement will be closed i checked the problem with documentation faq open issues forum opencv org stack overflow etc and have not found solution places to check opencv documentation faq page opencv forum opencv issue tracker stack overflow branch i updated to latest opencv version and the issue is still there master branch for opencv x and branch for opencv x releases opencv team supports only latest release for each branch the ticket is closed if the problem is not reproduced with modern version there is reproducer code and related data files videos images onnx etc the best reproducer test case for opencv that we can add to the library recommendations for media files and binary files try to reproduce the issue with images and videos in opencv extra repository to reduce attachment size use png for images if you report some cv related bug but not image reader issue attach the image as archive to the ticket if you report some reader issue image hosting services compress images and it breaks the repro code provide onnx file for some public model or onnx file with with random weights if you report onnx parsing or handling issue architecture details diagram from netron tool can be very useful too see | 0 |
43,070 | 11,460,537,951 | IssuesEvent | 2020-02-07 09:57:12 | snowplow/snowplow-javascript-tracker | https://api.github.com/repos/snowplow/snowplow-javascript-tracker | closed | Beacon implementation fails in Chrome and Safari 12+ due to browser bugs | category:browser priority:high status:completed type:defect | Hey guys,
The implementation of Beacon API support in this library appears to be impacted by two separate browser bugs.
https://github.com/snowplow/snowplow-javascript-tracker/blob/ed751aaf8486f72d0abc0f0f880754c3dd61aa65/src/js/out_queue.js#L306
Here you are sending a Beacon request with JSON as the type, which is not in Chrome's CORS pre-approved headers list. This causes Chrome to throw an exception when you try to send a beacon this way. Of course this means it will fallback to POST and correctly log the event.
ref: https://bugs.chromium.org/p/chromium/issues/detail?id=490015
In Safari 12 & 13, it appears the same bug is present however it will silently fail and send a blank request body. This is a much bigger problem because it doesn't fallback and no tracking data is actually sent. | 1.0 | Beacon implementation fails in Chrome and Safari 12+ due to browser bugs - Hey guys,
The implementation of Beacon API support in this library appears to be impacted by two separate browser bugs.
https://github.com/snowplow/snowplow-javascript-tracker/blob/ed751aaf8486f72d0abc0f0f880754c3dd61aa65/src/js/out_queue.js#L306
Here you are sending a Beacon request with JSON as the type, which is not in Chrome's CORS pre-approved headers list. This causes Chrome to throw an exception when you try to send a beacon this way. Of course this means it will fallback to POST and correctly log the event.
ref: https://bugs.chromium.org/p/chromium/issues/detail?id=490015
In Safari 12 & 13, it appears the same bug is present however it will silently fail and send a blank request body. This is a much bigger problem because it doesn't fallback and no tracking data is actually sent. | defect | beacon implementation fails in chrome and safari due to browser bugs hey guys the implementation of beacon api support in this library appears to be impacted by two separate browser bugs here you are sending a beacon request with json as the type which is not in chrome s cors pre approved headers list this causes chrome to throw an exception when you try to send a beacon this way of course this means it will fallback to post and correctly log the event ref in safari it appears the same bug is present however it will silently fail and send a blank request body this is a much bigger problem because it doesn t fallback and no tracking data is actually sent | 1 |
77,255 | 26,882,286,465 | IssuesEvent | 2023-02-05 19:34:52 | Vylpes/vylbot-app | https://api.github.com/repos/Vylpes/vylbot-app | reopened | Event embeds don't have the user's avatar in the thumbnail | events defect | It is blank, it should have the user's avatar as the thumbnail like it had before. | 1.0 | Event embeds don't have the user's avatar in the thumbnail - It is blank, it should have the user's avatar as the thumbnail like it had before. | defect | event embeds don t have the user s avatar in the thumbnail it is blank it should have the user s avatar as the thumbnail like it had before | 1 |
99,807 | 16,456,414,863 | IssuesEvent | 2021-05-21 13:10:26 | nextcloud/server | https://api.github.com/repos/nextcloud/server | closed | Verifiable releases based on strong cryptography | 1. to develop enhancement security | Could you please consider doing the following steps to improve authenticity and security for the users and the development process of NextCloud:
- [x] Sign git tags, GitHub also supports this and shows nice "Verified" badges then. Example: https://github.com/opening-hours/opening_hours.js/releases
This allows devs and users to ensure that no one tampered with the release as the platform where you host your source code is not under your control (and even if it where …)
According to your [last release](https://github.com/nextcloud/server/releases/tag/v9.0.53) you are already doing that :+1:
- [x] Upload the release public key(s) on https://sks-keyservers.net/ Currently only found on: https://nextcloud.com/nextcloud.asc
This ensures that the public keys are available for everyone
- [ ] Mention `gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 28806A878AE423A28372792ED75899B9A724937A` on the website instead of downloading both files from the same origin.
- Additional for the developers consider to signed and verify git commits (a few devs are already doing this :+1:)
Related to: https://github.com/nextcloud/vm/issues/11
Related to: https://github.com/owncloud/core/issues/25710
Refs:
- https://github.com/debops/ansible-owncloud/pull/44#discussion_r73707876
| True | Verifiable releases based on strong cryptography - Could you please consider doing the following steps to improve authenticity and security for the users and the development process of NextCloud:
- [x] Sign git tags, GitHub also supports this and shows nice "Verified" badges then. Example: https://github.com/opening-hours/opening_hours.js/releases
This allows devs and users to ensure that no one tampered with the release as the platform where you host your source code is not under your control (and even if it where …)
According to your [last release](https://github.com/nextcloud/server/releases/tag/v9.0.53) you are already doing that :+1:
- [x] Upload the release public key(s) on https://sks-keyservers.net/ Currently only found on: https://nextcloud.com/nextcloud.asc
This ensures that the public keys are available for everyone
- [ ] Mention `gpg --keyserver ha.pool.sks-keyservers.net --recv-keys 28806A878AE423A28372792ED75899B9A724937A` on the website instead of downloading both files from the same origin.
- Additional for the developers consider to signed and verify git commits (a few devs are already doing this :+1:)
Related to: https://github.com/nextcloud/vm/issues/11
Related to: https://github.com/owncloud/core/issues/25710
Refs:
- https://github.com/debops/ansible-owncloud/pull/44#discussion_r73707876
| non_defect | verifiable releases based on strong cryptography could you please consider doing the following steps to improve authenticity and security for the users and the development process of nextcloud sign git tags github also supports this and shows nice verified badges then example this allows devs and users to ensure that no one tampered with the release as the platform where you host your source code is not under your control and even if it where … according to your you are already doing that upload the release public key s on currently only found on this ensures that the public keys are available for everyone mention gpg keyserver ha pool sks keyservers net recv keys on the website instead of downloading both files from the same origin additional for the developers consider to signed and verify git commits a few devs are already doing this related to related to refs | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.