Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 844 | labels stringlengths 4 721 | body stringlengths 1 261k | index stringclasses 12 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 248k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
217,284 | 16,687,271,134 | IssuesEvent | 2021-06-08 09:18:48 | dankamongmen/notcurses | https://api.github.com/repos/dankamongmen/notcurses | closed | support NCVISUAL_OPTION_BLEND in kitty graphics via translucency | bitmaps documentation enhancement | We don't honor `NCVISUAL_OPTION_BLEND` with `NCBLIT_PIXEL` currently. This will never be supported with Sixel, but with Kitty, we could just cut the alpha value in half at each pixel. Go ahead and implement this, and document both in `notcurses_visual.3`. | 1.0 | support NCVISUAL_OPTION_BLEND in kitty graphics via translucency - We don't honor `NCVISUAL_OPTION_BLEND` with `NCBLIT_PIXEL` currently. This will never be supported with Sixel, but with Kitty, we could just cut the alpha value in half at each pixel. Go ahead and implement this, and document both in `notcurses_visual.3`. | non_priority | support ncvisual option blend in kitty graphics via translucency we don t honor ncvisual option blend with ncblit pixel currently this will never be supported with sixel but with kitty we could just cut the alpha value in half at each pixel go ahead and implement this and document both in notcurses visual | 0 |
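The alpha-halving idea from the notcurses issue above ("cut the alpha value in half at each pixel" to approximate `NCVISUAL_OPTION_BLEND` for Kitty graphics) can be sketched in Python. This is a standalone illustration of the per-pixel transform, not notcurses code; the function name is hypothetical:

```python
def halve_alpha(rgba_pixels):
    """Return a copy of RGBA pixel tuples with each alpha value cut in half.

    Mirrors the approach in the issue: approximate blending support for
    Kitty graphics by halving per-pixel opacity before transmission.
    """
    return [(r, g, b, a // 2) for (r, g, b, a) in rgba_pixels]


# A fully opaque red pixel becomes half-transparent; fully transparent stays 0.
pixels = [(255, 0, 0, 255), (0, 0, 0, 0)]
print(halve_alpha(pixels))  # [(255, 0, 0, 127), (0, 0, 0, 0)]
```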
15,615 | 5,148,740,738 | IssuesEvent | 2017-01-13 12:26:13 | AquariaOSE/Aquaria | https://api.github.com/repos/AquariaOSE/Aquaria | closed | Recipe list: Vedha sea crisp: Excess blank space | non-code unconfirmed | There appears to be an extra blank space in the "Any B_one" ingredient to Vedha sea crisp on the recipes list:

Indeed, that's a super-mini bug, but it still seems wrong and - I assume - will be easy to fix. I was looking at the text files in files/data but could not immediately spot what would be wrong there.
| 1.0 | Recipe list: Vedha sea crisp: Excess blank space - There appears to be an extra blank space in the "Any B_one" ingredient to Vedha sea crisp on the recipes list:

Indeed, that's a super-mini bug, but it still seems wrong and - I assume - will be easy to fix. I was looking at the text files in files/data but could not immediately spot what would be wrong there.
| non_priority | recipe list vedha sea crisp excess blank space there appears to be an extra blank space in the any b one ingredient to vedha sea crisp on the recipes list indeed thats a super mini bug but it still seems wrong and i assume will be easy to fix i was looking at the text files in files data but could not immediately spot what would be wrong there | 0 |
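The Aquaria report above is about a stray extra blank space inside an ingredient string. A small scanning helper like the following could locate such runs of doubled spaces in the game's data files; this is a hypothetical illustration, not part of the Aquaria codebase:

```python
import re

def find_double_spaces(lines):
    """Yield (line_number, line) pairs for lines containing runs of 2+ spaces."""
    for i, line in enumerate(lines, start=1):
        if re.search(r"  +", line):  # two or more consecutive spaces
            yield i, line


data = ["Any  Bone", "Fish Meat"]
print(list(find_double_spaces(data)))  # [(1, 'Any  Bone')]
```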
76,026 | 15,495,768,011 | IssuesEvent | 2021-03-11 01:27:27 | FlipFloop/reactchat | https://api.github.com/repos/FlipFloop/reactchat | opened | CVE-2021-24033 (Medium) detected in react-dev-utils-10.2.1.tgz | security vulnerability | ## CVE-2021-24033 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>react-dev-utils-10.2.1.tgz</b></p></summary>
<p>webpack utilities used by Create React App</p>
<p>Library home page: <a href="https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-10.2.1.tgz">https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-10.2.1.tgz</a></p>
<p>Path to dependency file: reactchat/package.json</p>
<p>Path to vulnerable library: reactchat/node_modules/react-dev-utils/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.3.tgz (Root Library)
- :x: **react-dev-utils-10.2.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
react-dev-utils prior to v11.0.4 exposes a function, getProcessForPort, where an input argument is concatenated into a command string to be executed. This function is typically used from react-scripts (in Create React App projects), where the usage is safe. Only when this function is manually invoked with user-provided values (i.e., by custom code) is there the potential for command injection. If you're consuming it from react-scripts, then this issue does not affect you.
<p>Publish Date: 2021-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-24033>CVE-2021-24033</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.facebook.com/security/advisories/cve-2021-24033">https://www.facebook.com/security/advisories/cve-2021-24033</a></p>
<p>Release Date: 2021-03-09</p>
<p>Fix Resolution: react-dev-utils-11.0.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-24033 (Medium) detected in react-dev-utils-10.2.1.tgz - ## CVE-2021-24033 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>react-dev-utils-10.2.1.tgz</b></p></summary>
<p>webpack utilities used by Create React App</p>
<p>Library home page: <a href="https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-10.2.1.tgz">https://registry.npmjs.org/react-dev-utils/-/react-dev-utils-10.2.1.tgz</a></p>
<p>Path to dependency file: reactchat/package.json</p>
<p>Path to vulnerable library: reactchat/node_modules/react-dev-utils/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.3.tgz (Root Library)
- :x: **react-dev-utils-10.2.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
react-dev-utils prior to v11.0.4 exposes a function, getProcessForPort, where an input argument is concatenated into a command string to be executed. This function is typically used from react-scripts (in Create React App projects), where the usage is safe. Only when this function is manually invoked with user-provided values (i.e., by custom code) is there the potential for command injection. If you're consuming it from react-scripts, then this issue does not affect you.
<p>Publish Date: 2021-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-24033>CVE-2021-24033</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.facebook.com/security/advisories/cve-2021-24033">https://www.facebook.com/security/advisories/cve-2021-24033</a></p>
<p>Release Date: 2021-03-09</p>
<p>Fix Resolution: react-dev-utils-11.0.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in react dev utils tgz cve medium severity vulnerability vulnerable library react dev utils tgz webpack utilities used by create react app library home page a href path to dependency file reactchat package json path to vulnerable library reactchat node modules react dev utils package json dependency hierarchy react scripts tgz root library x react dev utils tgz vulnerable library found in base branch main vulnerability details react dev utils prior to exposes a function getprocessforport where an input argument is concatenated into a command string to be executed this function is typically used from react scripts in create react app projects where the usage is safe only when this function is manually invoked with user provided values ie by custom code is there the potential for command injection if you re consuming it from react scripts then this issue does not affect you publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution react dev utils step up your open source security game with whitesource | 0 |
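The injection pattern behind CVE-2021-24033 (a user-supplied value concatenated into a command string) can be illustrated in Python. The function names below are hypothetical stand-ins for the vulnerable `getProcessForPort` pattern, not the actual react-dev-utils API:

```python
def lookup_port_unsafe(port):
    # Vulnerable pattern: the argument is concatenated into a shell string,
    # so a value like "80; rm -rf /" would smuggle in an extra command.
    return "lsof -i:" + str(port)

def lookup_port_safe(port):
    # Safer pattern: validate the input, then build an argv list so no
    # shell ever interprets the value.
    port = int(port)  # raises ValueError on non-numeric input
    return ["lsof", f"-i:{port}"]


print(lookup_port_unsafe("80; echo pwned"))  # lsof -i:80; echo pwned
print(lookup_port_safe("80"))                # ['lsof', '-i:80']
```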
186,312 | 21,923,362,168 | IssuesEvent | 2022-05-22 22:28:47 | doc-ai/tensorflow | https://api.github.com/repos/doc-ai/tensorflow | opened | CVE-2021-41201 (High) detected in tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl | security vulnerability | ## CVE-2021-41201 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/29/6c/2c9a5c4d095c63c2fb37d20def0e4f92685f7aee9243d6aae25862694fd1/tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/29/6c/2c9a5c4d095c63c2fb37d20def0e4f92685f7aee9243d6aae25862694fd1/tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /lite/micro/examples/magic_wand/train/requirements.txt</p>
<p>Path to vulnerable library: /lite/micro/examples/magic_wand/train/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. In affected versions, during execution, `EinsumHelper::ParseEquation()` is supposed to set the flags in the `input_has_ellipsis` vector and the `*output_has_ellipsis` boolean to indicate whether there is an ellipsis in the corresponding inputs and output. However, the code only changes these flags to `true` and never assigns `false`. This results in uninitialized variable access if callers assume that `EinsumHelper::ParseEquation()` always sets these flags. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-11-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41201>CVE-2021-41201</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-j86v-p27c-73fm">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-j86v-p27c-73fm</a></p>
<p>Release Date: 2021-11-05</p>
<p>Fix Resolution: tensorflow - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-cpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-gpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"tensorflow","packageVersion":"2.0.0b1","packageFilePaths":["/lite/micro/examples/magic_wand/train/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"tensorflow:2.0.0b1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tensorflow - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-cpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-gpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0","isBinary":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-41201","vulnerabilityDetails":"TensorFlow is an open source platform for machine learning. In affeced versions during execution, `EinsumHelper::ParseEquation()` is supposed to set the flags in `input_has_ellipsis` vector and `*output_has_ellipsis` boolean to indicate whether there is ellipsis in the corresponding inputs and output. However, the code only changes these flags to `true` and never assigns `false`. This results in unitialized variable access if callers assume that `EinsumHelper::ParseEquation()` always sets these flags. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41201","cvss3Severity":"high","cvss3Score":"7.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-41201 (High) detected in tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl - ## CVE-2021-41201 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/29/6c/2c9a5c4d095c63c2fb37d20def0e4f92685f7aee9243d6aae25862694fd1/tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/29/6c/2c9a5c4d095c63c2fb37d20def0e4f92685f7aee9243d6aae25862694fd1/tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: /lite/micro/examples/magic_wand/train/requirements.txt</p>
<p>Path to vulnerable library: /lite/micro/examples/magic_wand/train/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-2.0.0b1-cp36-cp36m-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an open source platform for machine learning. In affected versions, during execution, `EinsumHelper::ParseEquation()` is supposed to set the flags in the `input_has_ellipsis` vector and the `*output_has_ellipsis` boolean to indicate whether there is an ellipsis in the corresponding inputs and output. However, the code only changes these flags to `true` and never assigns `false`. This results in uninitialized variable access if callers assume that `EinsumHelper::ParseEquation()` always sets these flags. The fix will be included in TensorFlow 2.7.0. We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-11-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41201>CVE-2021-41201</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-j86v-p27c-73fm">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-j86v-p27c-73fm</a></p>
<p>Release Date: 2021-11-05</p>
<p>Fix Resolution: tensorflow - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-cpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-gpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"tensorflow","packageVersion":"2.0.0b1","packageFilePaths":["/lite/micro/examples/magic_wand/train/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"tensorflow:2.0.0b1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tensorflow - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-cpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0;tensorflow-gpu - 2.4.4, 2.5.2, 2.6.1, 2.7.0","isBinary":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-41201","vulnerabilityDetails":"TensorFlow is an open source platform for machine learning. In affeced versions during execution, `EinsumHelper::ParseEquation()` is supposed to set the flags in `input_has_ellipsis` vector and `*output_has_ellipsis` boolean to indicate whether there is ellipsis in the corresponding inputs and output. However, the code only changes these flags to `true` and never assigns `false`. This results in unitialized variable access if callers assume that `EinsumHelper::ParseEquation()` always sets these flags. The fix will be included in TensorFlow 2.7.0. 
We will also cherrypick this commit on TensorFlow 2.6.1, TensorFlow 2.5.2, and TensorFlow 2.4.4, as these are also affected and still in supported range.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41201","cvss3Severity":"high","cvss3Score":"7.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"Low","S":"Unchanged","C":"High","UI":"None","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> --> | non_priority | cve high detected in tensorflow whl cve high severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file lite micro examples magic wand train requirements txt path to vulnerable library lite micro examples magic wand train requirements txt dependency hierarchy x tensorflow whl vulnerable library vulnerability details tensorflow is an open source platform for machine learning in affeced versions during execution einsumhelper parseequation is supposed to set the flags in input has ellipsis vector and output has ellipsis boolean to indicate whether there is ellipsis in the corresponding inputs and output however the code only changes these flags to true and never assigns false this results in unitialized variable access if callers assume that einsumhelper parseequation always sets these flags the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu isopenpronvulnerability 
false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree tensorflow isminimumfixversionavailable true minimumfixversion tensorflow tensorflow cpu tensorflow gpu isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails tensorflow is an open source platform for machine learning in affeced versions during execution einsumhelper parseequation is supposed to set the flags in input has ellipsis vector and output has ellipsis boolean to indicate whether there is ellipsis in the corresponding inputs and output however the code only changes these flags to true and never assigns false this results in unitialized variable access if callers assume that einsumhelper parseequation always sets these flags the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range vulnerabilityurl | 0 |
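The bug class behind CVE-2021-41201 (flags only ever set to `true`, never assigned `false`, leaving some entries uninitialized) can be mimicked in Python. This is a simplified sketch of the einsum-equation flag logic with the fix applied, not the actual TensorFlow implementation:

```python
def parse_ellipsis_flags(input_terms):
    """Return one boolean per einsum input term: does it contain '...'?

    The essence of the fix is the explicit initialization below - every
    flag starts as False, so entries are defined even when no ellipsis
    is found, instead of being left unset.
    """
    flags = [False] * len(input_terms)  # explicit init: the fix
    for i, term in enumerate(input_terms):
        if "..." in term:
            flags[i] = True  # the buggy version only ever did this
    return flags


print(parse_ellipsis_flags(["ij", "...k"]))  # [False, True]
```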
430,585 | 30,191,336,975 | IssuesEvent | 2023-07-04 15:37:58 | KhiopsML/khiops-python | https://api.github.com/repos/KhiopsML/khiops-python | opened | Separate conventions from technical details in CONTRIBUTING.md | Status/ReadyForDev Type/Documentation | #### Description
Currently the development practices and mini Git *howto*s are intertwined in `CONTRIBUTING.md`. We should separate them so as not to show unnecessary information to users already familiar with Git.
#### Questions/Ideas
- Move the Git *howto*s to the `GIT_HOWTO.md` files
- Point from `CONTRIBUTING.md` to this file when necessary | 1.0 | Separate conventions from technical details in CONTRIBUTING.md - #### Description
Currently the development practices and mini Git *howto*s are intertwined in `CONTRIBUTING.md`. We should separate them so as not to show unnecessary information to users already familiar with Git.
#### Questions/Ideas
- Move the Git *howto*s to the `GIT_HOWTO.md` files
- Point from `CONTRIBUTING.md` to this file when necessary | non_priority | separate conventions from technical details in contributing md description currently the the development practices and mini git howto s are interwinded in contributing md we should separate them so as to not show unnecesary information to users already familiarized with git questions ideas move the git howto s to the git howto md files point from contributing md to this file when necessary | 0 |
43,586 | 11,764,940,733 | IssuesEvent | 2020-03-14 15:07:12 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | closed | ZoneHVAC:HybridUnitaryHVAC Second and Third fuel types - consumption is not metered | Defect NotIDDChange | Issue overview
--------------
Based on code review while removing fuel type synonyms, it appears that:
1. These energy consumption variables for ZoneHVAC:HybridUnitaryHVAC are not connected to a meter:
Zone Hybrid Unitary HVAC Secondary Fuel Consumption
Zone Hybrid Unitary HVAC Third Fuel Consumption Rate
2. Zone Hybrid Unitary HVAC Supply Fan Electric Energy is metered; that seems OK. But Zone Hybrid Unitary HVAC Electric Energy is also metered. If the fan energy is included in this, then it's getting double-counted on the electric meter.
3. Field: First Fuel Type has a full range of choices, but the default is electricity (in code, noted in the IDD). This appears to be connected to the Electric meter - not variable based on which fuel type is specified in the input. Actually, I'm not finding where this is metered.
4. There are several (maybe more) variables which have SetupOutputVariable statements, but searching HybridUnitaryAirConditioners.cc for the variable names shows they do not appear to be used. I stopped looking after these three; there could be more.
```
SetupOutputVariable("Zone Hybrid Unitary HVAC Electric Energy",
OutputProcessor::Unit::J,
ZoneHybridUnitaryAirConditioner(UnitLoop).FinalElectricalEnergy,
SetupOutputVariable("Zone Hybrid Unitary HVAC Requested Outdoor Air Ventilation Mass Flow Rate",
OutputProcessor::Unit::kg_s,
ZoneHybridUnitaryAirConditioner(UnitLoop).MinOA_Msa,
SetupOutputVariable("Zone Hybrid Unitary HVAC Ventilation Air Mass Flow Rate",
OutputProcessor::Unit::kg_s,
ZoneHybridUnitaryAirConditioner(UnitLoop).SupplyVentilationAir,
```
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version)
- Version of EnergyPlus v9.2.0-https://github.com/NREL/EnergyPlus/commit/60ed7207c2f3058fe8dc400a1aa741149094bebe
- Noticed while working on #6601
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here) - don't have any defect files - will need some for this
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| 1.0 | ZoneHVAC:HybridUnitaryHVAC Second and Third fuel types - consumption is not metered - Issue overview
--------------
Based on code review while removing fuel type synonyms, it appears that:
1. These energy consumption variables for ZoneHVAC:HybridUnitaryHVAC are not connected to a meter:
Zone Hybrid Unitary HVAC Secondary Fuel Consumption
Zone Hybrid Unitary HVAC Third Fuel Consumption Rate
2. Zone Hybrid Unitary HVAC Supply Fan Electric Energy is metered; that seems OK. But Zone Hybrid Unitary HVAC Electric Energy is also metered. If the fan energy is included in this, then it's getting double-counted on the electric meter.
3. Field: First Fuel Type has a full range of choices, but the default is electricity (in code, noted in the IDD). This appears to be connected to the Electric meter - not variable based on which fuel type is specified in the input. Actually, I'm not finding where this is metered.
4. There are several (maybe more) variables which have SetupOutputVariable statements, but searching HybridUnitaryAirConditioners.cc for the variable names shows they do not appear to be used. I stopped looking after these three; there could be more.
```
SetupOutputVariable("Zone Hybrid Unitary HVAC Electric Energy",
OutputProcessor::Unit::J,
ZoneHybridUnitaryAirConditioner(UnitLoop).FinalElectricalEnergy,
SetupOutputVariable("Zone Hybrid Unitary HVAC Requested Outdoor Air Ventilation Mass Flow Rate",
OutputProcessor::Unit::kg_s,
ZoneHybridUnitaryAirConditioner(UnitLoop).MinOA_Msa,
SetupOutputVariable("Zone Hybrid Unitary HVAC Ventilation Air Mass Flow Rate",
OutputProcessor::Unit::kg_s,
ZoneHybridUnitaryAirConditioner(UnitLoop).SupplyVentilationAir,
```
### Details
Some additional details for this issue (if relevant):
- Platform (Operating system, version)
- Version of EnergyPlus v9.2.0-https://github.com/NREL/EnergyPlus/commit/60ed7207c2f3058fe8dc400a1aa741149094bebe
- Noticed while working on #6601
### Checklist
Add to this list or remove from it as applicable. This is a simple templated set of guidelines.
- [ ] Defect file added (list location of defect file here) - don't have any defect files - will need some for this
- [ ] Ticket added to Pivotal for defect (development team task)
- [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
| non_priority | zonehvac hybridunitaryhvac second and third fuel types consumption is not metered issue overview based on code review while removing fuel type synonyms it appears that these energy consumption variables for zonehvac hybridunitaryhvac are not connected to a meter zone hybrid unitary hvac secondary fuel consumption zone hybrid unitary hvac third fuel consumption rate zone hybrid unitary hvac supply fan electric energy is metered that seems ok but zone hybrid unitary hvac electric energy is also metered if the fan energy is included in this then it s gettting double counted on the electric meter field first fuel type has a full range of choices but the default it electricity in code noted in the idd but this appears to be connected to the electric meter not variable based on which fuel type is specified in the input actually i m not finding where this is metered there are several maybe more variables which have setupoutputvariable statements but searching hybridunitaryairconditioners cc for the variable name shows it does not appear to be used stopped looking after these three there could be more setupoutputvariable zone hybrid unitary hvac electric energy outputprocessor unit j zonehybridunitaryairconditioner unitloop finalelectricalenergy setupoutputvariable zone hybrid unitary hvac requested outdoor air ventilation mass flow rate outputprocessor unit kg s zonehybridunitaryairconditioner unitloop minoa msa setupoutputvariable zone hybrid unitary hvac ventilation air mass flow rate outputprocessor unit kg s zonehybridunitaryairconditioner unitloop supplyventilationair details some additional details for this issue if relevant platform operating system version version of energyplus noticed while working on checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here don t have any defect files will need some for this ticket added to pivotal for defect 
development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect | 0 |
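The double-counting concern in point 2 of the EnergyPlus issue above can be illustrated with a small accounting sketch (hypothetical, not EnergyPlus code): if the unit's total electric energy already includes the fan, only the total should land on the meter; otherwise the fan and the remainder are metered separately and summed:

```python
def metered_electric_energy_j(fan_j, other_j, total_includes_fan):
    """Return the energy (J) to report on the electric meter without
    counting the fan twice."""
    if total_includes_fan:
        # The unit total already contains the fan; meter only the total.
        return other_j + fan_j
    # Fan is metered separately; add it to the remaining unit energy.
    return other_j + fan_j


# Either convention should yield the same meter reading:
print(metered_electric_energy_j(100.0, 900.0, True))   # 1000.0
print(metered_electric_energy_j(100.0, 900.0, False))  # 1000.0
```

The point of the sketch is that both conventions must converge on the same reading; metering both the fan variable and a fan-inclusive total adds the fan twice.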
4,710 | 2,870,691,361 | IssuesEvent | 2015-06-07 12:22:58 | opencb/opencga | https://api.github.com/repos/opencb/opencga | opened | Swagger documentation improvements for REST web services | documentation web services | Endpoint and method documentation need to be clearly improved, currently it is poor. | 1.0 | Swagger documentation improvements for REST web services - Endpoint and method documentation need to be clearly improved, currently it is poor. | non_priority | swagger documentation improvements for rest web services endpoint and method documentation need to be clearly improved currently it is poor | 0 |
83,922 | 16,396,603,795 | IssuesEvent | 2021-05-18 01:10:35 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Null Reference Exception In System.SpanHelpers.PerTypeValues`1.MeasureArrayAdjustment | area-CodeGen-coreclr tracking-external-issue untriaged | ### Description
In production systems (but not our dev environment, we cannot reproduce at will) we are seeing a null reference exception thrown out of System.SpanHelpers.PerTypeValues`1.MeasureArrayAdjustment. This is being called as the result of an MS App Insights (v2.17) SDK call.
The stack trace/exception details look like:
System.TypeInitializationException: The type initializer for 'PerTypeValues`1' threw an exception. ---> System.NullReferenceException: Object reference not set to an instance of an object.
at System.SpanHelpers.PerTypeValues`1.MeasureArrayAdjustment()
at System.SpanHelpers.PerTypeValues`1..cctor()
--- End of inner exception stack trace ---
at System.SpanHelpers.IsReferenceOrContainsReferences[T]()
at System.Diagnostics.ActivityTraceId.CreateRandom()
at System.Diagnostics.Activity.GenerateW3CId()
at System.Diagnostics.Activity.Start()
at Microsoft.ApplicationInsights.TelemetryClientExtensions.<>c__DisplayClass3_0`1.<StartOperation>b__0()
at Microsoft.ApplicationInsights.ActivityExtensions.TryRun(Action action)
at Microsoft.ApplicationInsights.TelemetryClientExtensions.StartOperation[T](TelemetryClient telemetryClient, T operationTelemetry)
<rest of stacktrace contains user application methods and has been snipped for brevity/privacy>
This has only started happening on recent builds of our software, going back to earlier builds of our product gets rid of the error. The two significant changes between those builds are our application assemblies changed from targeting .Net 4.5 to .Net 4.8, and the app insights SDK and dependencies were all upgraded (to 2.17 of App Insights, and the dependency that came with that except for System.Diagnostics.DiagnosticsSource which has been updated to 5.0.0.1).
### Configuration
.Net 4.8
Windows 10
x86 .Net WinForms executable
Do not believe this is specific to this configuration, but uncertain.
| 1.0 | Null Reference Exception In System.SpanHelpers.PerTypeValues`1.MeasureArrayAdjustment - ### Description
In production systems (but not our dev environment, we cannot reproduce at will) we are seeing a null reference exception thrown out of System.SpanHelpers.PerTypeValues`1.MeasureArrayAdjustment. This is being called as the result of an MS App Insights (v2.17) SDK call.
The stack trace/exception details look like:
System.TypeInitializationException: The type initializer for 'PerTypeValues`1' threw an exception. ---> System.NullReferenceException: Object reference not set to an instance of an object.
at System.SpanHelpers.PerTypeValues`1.MeasureArrayAdjustment()
at System.SpanHelpers.PerTypeValues`1..cctor()
--- End of inner exception stack trace ---
at System.SpanHelpers.IsReferenceOrContainsReferences[T]()
at System.Diagnostics.ActivityTraceId.CreateRandom()
at System.Diagnostics.Activity.GenerateW3CId()
at System.Diagnostics.Activity.Start()
at Microsoft.ApplicationInsights.TelemetryClientExtensions.<>c__DisplayClass3_0`1.<StartOperation>b__0()
at Microsoft.ApplicationInsights.ActivityExtensions.TryRun(Action action)
at Microsoft.ApplicationInsights.TelemetryClientExtensions.StartOperation[T](TelemetryClient telemetryClient, T operationTelemetry)
<rest of stacktrace contains user application methods and has been snipped for brevity/privacy>
This has only started happening on recent builds of our software, going back to earlier builds of our product gets rid of the error. The two significant changes between those builds are our application assemblies changed from targeting .Net 4.5 to .Net 4.8, and the app insights SDK and dependencies were all upgraded (to 2.17 of App Insights, and the dependency that came with that except for System.Diagnostics.DiagnosticsSource which has been updated to 5.0.0.1).
### Configuration
.Net 4.8
Windows 10
x86 .Net WinForms executable
Do not believe this is specific to this configuration, but uncertain.
| non_priority | null reference exception in system spanhelpers pertypevalues measurearrayadjustment description in production systems but not our dev environment we cannot reproduce at will we are seeing a null reference exception thrown out of system spanhelpers pertypevalues measurearrayadjustment this is being called as the result of an ms app insights sdk call the stack trace exception details look like system typeinitializationexception the type initializer for pertypevalues threw an exception system nullreferenceexception object reference not set to an instance of an object at system spanhelpers pertypevalues measurearrayadjustment at system spanhelpers pertypevalues cctor end of inner exception stack trace at system spanhelpers isreferenceorcontainsreferences at system diagnostics activitytraceid createrandom at system diagnostics activity at system diagnostics activity start at microsoft applicationinsights telemetryclientextensions c b at microsoft applicationinsights activityextensions tryrun action action at microsoft applicationinsights telemetryclientextensions startoperation telemetryclient telemetryclient t operationtelemetry this has only started happening on recent builds of our software going back to earlier builds of our product gets rid of the error the two significant changes between those builds are our application assemblies changed from targeting net to net and the app insights sdk and dependencies were all upgraded to of app insights and the dependency that came with that except for system diagnostics diagnosticssource which has been updated to configuration net windows net winforms executable do not believe this is specific to this configuration but uncertain | 0 |
155,458 | 13,625,638,625 | IssuesEvent | 2020-09-24 09:47:34 | drafthub/drafthub | https://api.github.com/repos/drafthub/drafthub | closed | missing docstring for `draft.apps` | documentation good first issue help wanted | Here are some missing docstrings reported by pylint.
```
$ docker-compose exec web python check.py lint | grep docstring | grep draft | grep apps
drafthub/draft/apps.py:1:0: C0114: Missing module docstring (missing-module-docstring)
drafthub/draft/apps.py:4:0: C0115: Missing class docstring (missing-class-docstring)
```
See how you can contribute: [CONTRIBUTING.md](https://github.com/drafthub/drafthub/blob/master/CONTRIBUTING.md) | 1.0 | missing docstring for `draft.apps` - Here are some missing docstrings reported by pylint.
```
$ docker-compose exec web python check.py lint | grep docstring | grep draft | grep apps
drafthub/draft/apps.py:1:0: C0114: Missing module docstring (missing-module-docstring)
drafthub/draft/apps.py:4:0: C0115: Missing class docstring (missing-class-docstring)
```
See how you can contribute: [CONTRIBUTING.md](https://github.com/drafthub/drafthub/blob/master/CONTRIBUTING.md) | non_priority | missing docstring for draft apps here are some missings docstring reported by pylint docker compose exec web python check py lint grep docstring grep draft grep apps drafthub draft apps py missing module docstring missing module docstring drafthub draft apps py missing class docstring missing class docstring see how you can contribute | 0 |
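The pylint complaints above point at exactly two missing docstrings in `drafthub/draft/apps.py`. A minimal sketch of the fix — the class name `DraftConfig` and the `name` value are assumptions based on Django conventions, and the real class would subclass `django.apps.AppConfig` (omitted here so the sketch stays self-contained):

```python
"""Application configuration for the drafthub draft app (clears C0114)."""


class DraftConfig:
    """Registers the draft application with Django (clears C0115).

    In the real project this class would subclass django.apps.AppConfig.
    """

    # assumed dotted path, based on the drafthub/draft/ layout in the lint output
    name = "drafthub.draft"
```

Re-running the lint command from the issue should then no longer report `missing-module-docstring` or `missing-class-docstring` for this module.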
92,902 | 11,724,571,883 | IssuesEvent | 2020-03-10 11:13:29 | luisfabib/deerlab | https://api.github.com/repos/luisfabib/deerlab | closed | multispin effect's future | design enhancement | In the ``supressghost`` function there is no re-scaling as described in the original van Hagens paper | 1.0 | multispin effect's future - In the ``supressghost`` function there is no re-scaling as described in the original van Hagens paper | non_priority | multispin effect s future in the supressghost function there is no re scaling as described in the original van hagens paper | 0 |
13,345 | 5,345,234,917 | IssuesEvent | 2017-02-17 16:28:12 | elegantthemes/Divi-Beta | https://api.github.com/repos/elegantthemes/Divi-Beta | opened | Builder Sync :: Fix error message thrown when syncing from BB to VB | BUILDER SYNC | ### Problem:
Here's what the error message looks like:

The issue occurs if VB settings modal is opened when syncing from BB to VB
### Steps To Reproduce:
1. Open VB and BB side by side
2. Open any settings modal on VB
3. Edit the same module on BB
4. Switch to VB to sync. You'll see the issue
### Additional Information:
*Please include any of the following details that are relevant to the issue you are reporting.*
**Device**:
**Operating System**:
**Browser**:
**Screen Resolution**:
**Screenshot or Screencast**:
**Link To Download Layout JSON File**:
-
#### Before submitting your issue, please ensure the following:
* You searched this Github repo for the issue and found no open reports of the issue.
* You properly formatted the issue title - eg. `Scope :: One sentence to describe the issue`.
* You have provided a detailed description of the issue.
* You have provided any/all relevant information: screenshot, screencast, page URL.
* You have provided the steps that one must take to reproduce the issue.
[View Example](https://github.com/elegantthemes/Divi-Beta/tree/master/.github/ISSUE_EXAMPLE.md)
| 1.0 | Builder Sync :: Fix error message thrown when syncing from BB to VB - ### Problem:
Here's what the error message looks like:

The issue occurs if VB settings modal is opened when syncing from BB to VB
### Steps To Reproduce:
1. Open VB and BB side by side
2. Open any settings modal on VB
3. Edit the same module on BB
4. Switch to VB to sync. You'll see the issue
### Additional Information:
*Please include any of the following details that are relevant to the issue you are reporting.*
**Device**:
**Operating System**:
**Browser**:
**Screen Resolution**:
**Screenshot or Screencast**:
**Link To Download Layout JSON File**:
-
#### Before submitting your issue, please ensure the following:
* You searched this Github repo for the issue and found no open reports of the issue.
* You properly formatted the issue title - eg. `Scope :: One sentence to describe the issue`.
* You have provided a detailed description of the issue.
* You have provided any/all relevant information: screenshot, screencast, page URL.
* You have provided the steps that one must take to reproduce the issue.
[View Example](https://github.com/elegantthemes/Divi-Beta/tree/master/.github/ISSUE_EXAMPLE.md)
| non_priority | builder sync fix error message thrown when syncing from bb to vb problem here s the error message looks like the issue occurs if vb settings modal is opened when syncing from bb to vb steps to reproduce open vb and bb side by side open any settings modal on vb edit the same module on bb switch to vb to sync you ll see the issue additional information please include any of the following details that are relevant to the issue you are reporting device operating system browser screen resolution screenshot or screencast link to download layout json file before submitting your issue please ensure the following you searched this github repo for the issue and found no open reports of the issue you properly formated the issue title eg scope one sentence to describe the issue you have provided a detailed description of the issue you have provided any all relevant information screenshot screencast page url you have provided the steps that one must take to reproduce the issue | 0 |
99,624 | 8,706,253,677 | IssuesEvent | 2018-12-06 01:54:14 | red/red | https://api.github.com/repos/red/red | closed | Phantom map! keys | status.resolved status.tested type.bug | Any `integer!` keys added to a `map!` after the 64th hang around after being removed.
```
red>> m: #()
== #()
red>> repeat k 70 [
[ m/:k: {x}
[ m/:k: none
[ ]
== none
red>> m
== #()
red>> keys-of m
== [65 66 67 68 69 70]
red>> m/65
== none
```
| 1.0 | Phantom map! keys - Any `integer!` keys added to a `map!` after the 64th hang around after being removed.
```
red>> m: #()
== #()
red>> repeat k 70 [
[ m/:k: {x}
[ m/:k: none
[ ]
== none
red>> m
== #()
red>> keys-of m
== [65 66 67 68 69 70]
red>> m/65
== none
```
| non_priority | phantom map keys any integer keys after the to be added to a map hang around after being removed red m red repeat k m k x m k none none red m red keys of m red m none | 0 |
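For context, assigning `none` to a map key in Red removes that entry, so `keys-of` should come back empty; the transcript shows only keys past the 64th surviving, which suggests the bug appears once the map grows into its hashtable representation. A Python analogy of the expected semantics (this models the behavior the reporter expects, not Red's internals):

```python
class NoneRemovesMap:
    """Dict wrapper mimicking Red's map! rule: setting a key to None removes it."""

    def __init__(self):
        self._entries = {}

    def put(self, key, value):
        if value is None:
            # removal must behave the same before and after any growth threshold
            self._entries.pop(key, None)
        else:
            self._entries[key] = value

    def get(self, key):
        return self._entries.get(key)

    def keys_of(self):
        return list(self._entries)


m = NoneRemovesMap()
for k in range(1, 71):
    m.put(k, "x")
    m.put(k, None)
# expected: no phantom keys, even beyond the 64-entry mark
```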
231,957 | 18,836,511,652 | IssuesEvent | 2021-11-11 02:01:36 | simonw/s3-credentials | https://api.github.com/repos/simonw/s3-credentials | closed | Mechanism for running tests against a real AWS account | tests | The tests for this project currently run against mocks - which is good, because I don't like the idea of GitHub Action tests hitting real APIs.
But... this project is about building securely against AWS. As such, automated tests that genuinely exercise a live AWS account (and check that the resulting permissions behave as expected) would be incredibly valuable for growing my confidence that this tool works as advertised.
These tests would need quite a high level of administrative access, because they need to be able to create users, roles etc.
I don't like the idea of storing my own AWS administrator account credentials in a GitHub Actions secret though. I think I'll write these tests such that they can be run outside of GitHub Actions, maybe configured via environment variables that allow other project contributors to run tests against their own accounts. | 1.0 | Mechanism for running tests against a real AWS account - The tests for this project currently run against mocks - which is good, because I don't like the idea of GitHub Action tests hitting real APIs.
But... this project is about building securely against AWS. As such, automated tests that genuinely exercise a live AWS account (and check that the resulting permissions behave as expected) would be incredibly valuable for growing my confidence that this tool works as advertised.
These tests would need quite a high level of administrative access, because they need to be able to create users, roles etc.
I don't like the idea of storing my own AWS administrator account credentials in a GitHub Actions secret though. I think I'll write these tests such that they can be run outside of GitHub Actions, maybe configured via environment variables that allow other project contributors to run tests against their own accounts. | non_priority | mechanism for running tests against a real aws account the tests for this project currently run against mocks which is good because i don t like the idea of github action tests hitting real apis but this project is about building securely against aws as such automated tests that genuinely exercise a live aws account and check that the resulting permissions behave as expected would be incredibly valuable for growing my confidence that this tool works as advertised these tests would need quite a high level of administrative access because they need to be able to create users roles etc i don t like the idea of storing my own aws administrator account credentials in a github actions secret though i think i ll write these tests such that they can be run outside of github actions maybe configured via environment variables that allow other project contributors to run tests against their own accounts | 0 |
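One way to realize the "run outside GitHub Actions, configured via environment variables" idea is to gate the live suite on a credential variable so it is skipped by default. A sketch using only the standard library — the variable name and test body are illustrative, not s3-credentials' actual configuration:

```python
import os
import unittest

# live tests run only when a contributor opts in with real credentials
LIVE_TESTS_ENABLED = bool(os.environ.get("S3_CREDENTIALS_TEST_ACCESS_KEY"))


@unittest.skipUnless(
    LIVE_TESTS_ENABLED,
    "set S3_CREDENTIALS_TEST_ACCESS_KEY (and a matching secret) to run live AWS tests",
)
class LiveAWSTests(unittest.TestCase):
    """Tests that would create real users/roles and verify the resulting permissions."""

    def test_created_user_can_only_reach_its_bucket(self):
        # placeholder: would invoke s3-credentials, then assert the issued
        # credentials can read the target bucket and nothing else
        self.skipTest("sketch only")
```

With this shape, CI never touches a real account, while any contributor can export their own administrative credentials locally to exercise the live path.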
95,677 | 10,885,390,731 | IssuesEvent | 2019-11-18 10:18:10 | lennartdeknikker/frontend-data | https://api.github.com/repos/lennartdeknikker/frontend-data | opened | Write out a concept | documentation | I will continue to build on our previous project. Requirements for the new concept as stated in [the rubric](https://github.com/cmda-tt/course-19-20/tree/master/frontend-data) are:
1. add interaction
2. use data joins
3. apply motion
4. apply learning outcomes attained in the previous courses | 1.0 | Write out a concept - I will continue to build on our previous project. Requirements for the new concept as stated in [the rubric](https://github.com/cmda-tt/course-19-20/tree/master/frontend-data) are:
1. add interaction
2. use data joins
3. apply motion
4. apply learning outcomes attained in the previous courses | non_priority | write out a concept i will continue to build on our previous project requirements for the new concept as stated in are add interaction use data joins apply motion apply learning outcomes attained in the previous courses | 0 |
73,140 | 19,580,335,439 | IssuesEvent | 2022-01-04 20:23:38 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | the compile of rust-std-aarch64-unknown-linux-gnu pulls in hosts cflags and fails | O-ARM A-rustbuild | this is the error from the commandline:
```
process didn't exit successfully: `/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/x86_64-unknown-linux-gnu/stage2-std/release/build/std-73d4c8b2d9bf1c5d/build-script-build` (exit code: 101)
```
```
TARGET = Some("aarch64-unknown-linux-gnu")
OPT_LEVEL = Some("2")
HOST = Some("x86_64-unknown-linux-gnu")
CC_aarch64-unknown-linux-gnu = Some("aarch64-unknown-linux-gnu-gcc")
CFLAGS_aarch64-unknown-linux-gnu = Some("-ffunction-sections -fdata-sections -fPIC -march=znver1 -pipe")
running: "aarch64-unknown-linux-gnu-gcc" "-O2" "-ffunction-sections" "-fdata-sections" "-fPIC" "-ffunction-sections" "-fdata-sections" "-fPIC" "-march=znver1" "-pipe" "-g" "-fno-omit-frame-pointer" "-I" "../libbacktrace" "-I" "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/aarch64-unknown-linux-gnu/native/libbacktrace" "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=64" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-o" "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/aarch64-unknown-linux-gnu/native/libbacktrace/../libbacktrace/alloc.o" "-c" "../libbacktrace/alloc.c"
cargo:warning=Assembler messages:
cargo:warning=Error: unknown architecture `znver1'
cargo:warning=
cargo:warning=Error: unrecognized option -march=znver1
cargo:warning=cc1: error: unknown value ‘znver1’ for -march
cargo:warning=cc1: note: valid arguments are: armv8-a armv8.1-a armv8.2-a armv8.3-a armv8.4-a
exit code: 1
--- stderr
thread 'main' panicked at '
Internal error occurred: Command "aarch64-unknown-linux-gnu-gcc" "-O2" "-ffunction-sections" "-fdata-sections" "-fPIC" "-ffunction-sections" "-fdata-sections" "-fPIC" "-march=znver1" "-pipe" "-g" "-fno-omit-frame-pointer" "-I" "../libbacktrace" "-I" "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/aarch64-unknown-linux-gnu/native/libbacktrace" "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=64" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-o" "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/aarch64-unknown-linux-gnu/native/libbacktrace/../libbacktrace/alloc.o" "-c" "../libbacktrace/alloc.c" with args "aarch64-unknown-linux-gnu-gcc" did not execute successfully (status code exit code: 1).
``` | 1.0 | the compile of rust-std-aarch64-unknown-linux-gnu pulls in hosts cflags and fails - this is the error from the commandline:
```
process didn't exit successfully: `/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/x86_64-unknown-linux-gnu/stage2-std/release/build/std-73d4c8b2d9bf1c5d/build-script-build` (exit code: 101)
```
```
TARGET = Some("aarch64-unknown-linux-gnu")
OPT_LEVEL = Some("2")
HOST = Some("x86_64-unknown-linux-gnu")
CC_aarch64-unknown-linux-gnu = Some("aarch64-unknown-linux-gnu-gcc")
CFLAGS_aarch64-unknown-linux-gnu = Some("-ffunction-sections -fdata-sections -fPIC -march=znver1 -pipe")
running: "aarch64-unknown-linux-gnu-gcc" "-O2" "-ffunction-sections" "-fdata-sections" "-fPIC" "-ffunction-sections" "-fdata-sections" "-fPIC" "-march=znver1" "-pipe" "-g" "-fno-omit-frame-pointer" "-I" "../libbacktrace" "-I" "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/aarch64-unknown-linux-gnu/native/libbacktrace" "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=64" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-o" "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/aarch64-unknown-linux-gnu/native/libbacktrace/../libbacktrace/alloc.o" "-c" "../libbacktrace/alloc.c"
cargo:warning=Assembler messages:
cargo:warning=Error: unknown architecture `znver1'
cargo:warning=
cargo:warning=Error: unrecognized option -march=znver1
cargo:warning=cc1: error: unknown value ‘znver1’ for -march
cargo:warning=cc1: note: valid arguments are: armv8-a armv8.1-a armv8.2-a armv8.3-a armv8.4-a
exit code: 1
--- stderr
thread 'main' panicked at '
Internal error occurred: Command "aarch64-unknown-linux-gnu-gcc" "-O2" "-ffunction-sections" "-fdata-sections" "-fPIC" "-ffunction-sections" "-fdata-sections" "-fPIC" "-march=znver1" "-pipe" "-g" "-fno-omit-frame-pointer" "-I" "../libbacktrace" "-I" "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/aarch64-unknown-linux-gnu/native/libbacktrace" "-fvisibility=hidden" "-DBACKTRACE_ELF_SIZE=64" "-DBACKTRACE_SUPPORTED=1" "-DBACKTRACE_USES_MALLOC=1" "-DBACKTRACE_SUPPORTS_THREADS=0" "-DBACKTRACE_SUPPORTS_DATA=0" "-DHAVE_DL_ITERATE_PHDR=1" "-D_GNU_SOURCE=1" "-D_LARGE_FILES=1" "-o" "/var/tmp/portage/dev-lang/rust-1.32.0/work/rustc-1.32.0-src/build/aarch64-unknown-linux-gnu/native/libbacktrace/../libbacktrace/alloc.o" "-c" "../libbacktrace/alloc.c" with args "aarch64-unknown-linux-gnu-gcc" did not execute successfully (status code exit code: 1).
``` | non_priority | the compile of rust std unknown linux gnu pulls in hosts cflags and fails this is the error from the commandline process didn t exit successfully var tmp portage dev lang rust work rustc src build unknown linux gnu std release build std build script build exit code target some unknown linux gnu opt level some host some unknown linux gnu cc unknown linux gnu some unknown linux gnu gcc cflags unknown linux gnu some ffunction sections fdata sections fpic march pipe running unknown linux gnu gcc ffunction sections fdata sections fpic ffunction sections fdata sections fpic march pipe g fno omit frame pointer i libbacktrace i var tmp portage dev lang rust work rustc src build unknown linux gnu native libbacktrace fvisibility hidden dbacktrace elf size dbacktrace supported dbacktrace uses malloc dbacktrace supports threads dbacktrace supports data dhave dl iterate phdr d gnu source d large files o var tmp portage dev lang rust work rustc src build unknown linux gnu native libbacktrace libbacktrace alloc o c libbacktrace alloc c cargo warning assembler messages cargo warning error unknown architecture cargo warning cargo warning error unrecognized option march cargo warning error unknown value ‘ ’ for march cargo warning note valid arguments are a a a a a exit code stderr thread main panicked at internal error occurred command unknown linux gnu gcc ffunction sections fdata sections fpic ffunction sections fdata sections fpic march pipe g fno omit frame pointer i libbacktrace i var tmp portage dev lang rust work rustc src build unknown linux gnu native libbacktrace fvisibility hidden dbacktrace elf size dbacktrace supported dbacktrace uses malloc dbacktrace supports threads dbacktrace supports data dhave dl iterate phdr d gnu source d large files o var tmp portage dev lang rust work rustc src build unknown linux gnu native libbacktrace libbacktrace alloc o c libbacktrace alloc c with args unknown linux gnu gcc did not execute successfully status code 
exit code | 0 |
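The log shows `CFLAGS_aarch64-unknown-linux-gnu` itself already carrying the host's `-march=znver1`, so the build environment injected host flags into the target-specific variable. The `cc` crate consults per-target variables before the generic `CFLAGS`; a Python sketch of that precedence (the exact lookup order is my recollection of the cc crate docs — treat it as an assumption) shows why giving the target-specific variable architecture-appropriate flags is enough:

```python
def cc_env_lookup(var, target, env):
    """Return the first matching flag variable, most target-specific first."""
    candidates = [
        f"{var}_{target}",                    # e.g. CFLAGS_aarch64-unknown-linux-gnu
        f"{var}_{target.replace('-', '_')}",  # e.g. CFLAGS_aarch64_unknown_linux_gnu
        f"TARGET_{var}",                      # applies to any cross target
        var,                                  # generic host-wide fallback
    ]
    for name in candidates:
        if name in env:
            return env[name]
    return None


env = {
    "CFLAGS": "-march=znver1 -pipe",  # host-tuned flags, invalid for aarch64
    "CFLAGS_aarch64-unknown-linux-gnu": "-march=armv8-a -pipe",  # corrected
}
cross_flags = cc_env_lookup("CFLAGS", "aarch64-unknown-linux-gnu", env)
host_flags = cc_env_lookup("CFLAGS", "x86_64-unknown-linux-gnu", env)
```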
237,288 | 19,606,333,468 | IssuesEvent | 2022-01-06 09:57:24 | infor-design/enterprise | https://api.github.com/repos/infor-design/enterprise | closed | Datagrid: Create a Puppeteer Script for Allowing a selected filter condition | [3] type: unit testing | Describe the Issue
Create a unit test for allowing a selected filter condition.
See attached BDD for more information.
[BDDStory_github5750.docx](https://github.com/infor-design/enterprise/files/7520797/BDDStory_github5750.docx)
Related github issue
#5750 | 1.0 | Datagrid: Create a Puppeteer Script for Allowing a selected filter condition - Describe the Issue
Create a unit test for allowing a selected filter condition.
See attached BDD for more information.
[BDDStory_github5750.docx](https://github.com/infor-design/enterprise/files/7520797/BDDStory_github5750.docx)
Related github issue
#5750 | non_priority | datagrid create a puppeteer script for allowing a selected filter condition describe the issue create a unit testing for for allowing a selected filter condition see attached bdd for more information related github issue | 0 |
17,161 | 4,147,540,611 | IssuesEvent | 2016-06-15 07:32:01 | Zer0-One/praetor | https://api.github.com/repos/Zer0-One/praetor | closed | Hash table needs improved documentation | documentation | 02:38 <+iphy> Does [add] replace the old entry when one exists?
02:39 <+iphy> Does it do so by allocating, or does it modify the existing bucket?
02:39 <+iphy> I.e. in oom situations, can I still replace existing values?
02:39 <+iphy> What comparator is used?
02:43 <+iphy> What happens if load threshold is negative or 0 or above 1?
02:49 <+iphy> The hash function is not specific to add, so it should be specified in the hash table docs, not in the add docs | 1.0 | Hash table needs improved documentation - 02:38 <+iphy> Does [add] replace the old entry when one exists?
02:39 <+iphy> Does it do so by allocating, or does it modify the existing bucket?
02:39 <+iphy> I.e. in oom situations, can I still replace existing values?
02:39 <+iphy> What comparator is used?
02:43 <+iphy> What happens if load threshold is negative or 0 or above 1?
02:49 <+iphy> The hash function is not specific to add, so it should be specified in the hash table docs, not in the add docs | non_priority | hash table needs improved documentation does replace the old entry when one exists does it do so by allocating or does it modify the existing bucket i e in oom situations can i still replace existing values what comparator is used what happens if load threshold is negative or or above the hash function is not specific to add so it should be specified in the hash table docs not in the add docs | 0 |
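Each question in the transcript is a behavior the docstring should pin down. A Python sketch of an `add` whose documentation answers them — the specific choices here (replace in place, `==` comparison, `(0, 1]` threshold validation) are illustrative defaults, not necessarily what praetor implements:

```python
class HashTable:
    """Toy hash table; keys are compared with == (the documented comparator)."""

    def __init__(self, load_threshold=0.75):
        # documented contract: threshold must lie in (0, 1]
        if not 0 < load_threshold <= 1:
            raise ValueError("load_threshold must be in (0, 1]")
        self.load_threshold = load_threshold
        self._entries = {}

    def add(self, key, value):
        """Insert key -> value, replacing any existing entry for key.

        Replacement modifies the existing bucket rather than allocating,
        so values can still be replaced under out-of-memory pressure.
        Returns True when a new entry was created, False when replaced.
        """
        created = key not in self._entries
        self._entries[key] = value
        return created
```

Whatever the real answers are, stating them at this level of detail in the table-wide docs (comparator, hash function) and the per-function docs (replace vs. error, allocation behavior, parameter ranges) would resolve every question in the transcript.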
35,669 | 6,488,151,093 | IssuesEvent | 2017-08-20 14:44:08 | octree-gva/react-d3-timeline | https://api.github.com/repos/octree-gva/react-d3-timeline | closed | Add README.md some infos | documentation | *Add basic information to the README.md*
```
[ ] bug report (please search on github for a similar issue before submitting)
[X] feature request
[ ] support request
```
**Current behavior**
Readme is actually empty
**Expected behavior**
Add authors, link to licence, and some description of the project.
**Minimal reproduction of the problem with instructions**
* **React d3 timeline version:** 0.0.0
* **Browser and environment:**
| 1.0 | Add README.md some infos - *Add basic information to the README.md*
```
[ ] bug report (please search on github for a similar issue before submitting)
[X] feature request
[ ] support request
```
**Current behavior**
Readme is actually empty
**Expected behavior**
Add authors, link to licence, and some description of the project.
**Minimal reproduction of the problem with instructions**
* **React d3 timeline version:** 0.0.0
* **Browser and environment:**
| non_priority | add readme md some infos add basics informations on the readme md bug report please search on github for a similar issue before submiting feature request support request current behavior readme is actually empty expected behavior add authors link to licence and some description of the project minimal reproduction of the problem with instructions react timeline version browser and environment | 0 |
132,377 | 10,742,929,023 | IssuesEvent | 2019-10-30 00:09:48 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | teamcity: failed test: TestHashJoinerAgainstProcessor | C-test-failure O-robot | The following tests appear to have failed on master (testrace): TestHashJoinerAgainstProcessor
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestHashJoinerAgainstProcessor).
[#1563413](https://teamcity.cockroachdb.com/viewLog.html?buildId=1563413):
```
TestHashJoinerAgainstProcessor
--- FAIL: testrace/TestHashJoinerAgainstProcessor (4.990s)
------- Stdout: -------
--- join type = FULL_OUTER onExpr = "" filter = "" seed = 6464211974309822396 run = 2 ---
--- lEqCols = [1 0 2] rEqCols = [1 0 2] ---
--- inputTypes = [{{StringFamily 0 0 [] 0x8fefc80 0 <nil> [] [] 25 <nil>}} {{OidFamily 0 0 [] 0x8fefc80 0 <nil> [] [] 2205 <nil>}} {{BytesFamily 0 0 [] 0x8fefc80 0 <nil> [] [] 17 <nil>}}] ---
columnar_operators_test.go:267: unexpected meta &{Ranges:[] Err:assertion failure
- error with attached stack trace:
github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror.CatchVectorizedRuntimeError.func1
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror/error.go:77
runtime.gopanic
/usr/local/go/src/runtime/panic.go:522
github.com/cockroachdb/cockroach/pkg/col/coldata.(*Bytes).UpdateOffsetsToBeNonDecreasing
/go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/bytes.go:75
github.com/cockroachdb/cockroach/pkg/col/coldata.(*MemBatch).SetLength
/go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/batch.go:140
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*hashJoinEqOp).emitUnmatched
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/hashjoiner.go:330
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*hashJoinEqOp).Next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/hashjoiner.go:267
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:149
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).nextAdapter
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:140
github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror.CatchVectorizedRuntimeError
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror/error.go:91
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).Next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:177
github.com/cockroachdb/cockroach/pkg/sql/distsql.verifyColOperator
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql/columnar_utils_test.go:119
github.com/cockroachdb/cockroach/pkg/sql/distsql.TestHashJoinerAgainstProcessor
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql/columnar_operators_test.go:256
testing.tRunner
/usr/local/go/src/testing/testing.go:865
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1337
- error with embedded safe details: unexpected error from the vectorized runtime: %+v
-- arg 1: <string>
- unexpected error from the vectorized runtime: unexpectedly found a decreasing non-zero offset: previous max=32, found=30 TraceData:[] TxnCoordMeta:<nil> RowNum:<nil> SamplerProgress:<nil> BulkProcessorProgress:<nil> Metrics:<nil>} from columnar operator
```
Please assign, take a look and update the issue accordingly.
| 1.0 | teamcity: failed test: TestHashJoinerAgainstProcessor - The following tests appear to have failed on master (testrace): TestHashJoinerAgainstProcessor
You may want to check [for open issues](https://github.com/cockroachdb/cockroach/issues?q=is%3Aissue+is%3Aopen+TestHashJoinerAgainstProcessor).
[#1563413](https://teamcity.cockroachdb.com/viewLog.html?buildId=1563413):
```
TestHashJoinerAgainstProcessor
--- FAIL: testrace/TestHashJoinerAgainstProcessor (4.990s)
------- Stdout: -------
--- join type = FULL_OUTER onExpr = "" filter = "" seed = 6464211974309822396 run = 2 ---
--- lEqCols = [1 0 2] rEqCols = [1 0 2] ---
--- inputTypes = [{{StringFamily 0 0 [] 0x8fefc80 0 <nil> [] [] 25 <nil>}} {{OidFamily 0 0 [] 0x8fefc80 0 <nil> [] [] 2205 <nil>}} {{BytesFamily 0 0 [] 0x8fefc80 0 <nil> [] [] 17 <nil>}}] ---
columnar_operators_test.go:267: unexpected meta &{Ranges:[] Err:assertion failure
- error with attached stack trace:
github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror.CatchVectorizedRuntimeError.func1
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror/error.go:77
runtime.gopanic
/usr/local/go/src/runtime/panic.go:522
github.com/cockroachdb/cockroach/pkg/col/coldata.(*Bytes).UpdateOffsetsToBeNonDecreasing
/go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/bytes.go:75
github.com/cockroachdb/cockroach/pkg/col/coldata.(*MemBatch).SetLength
/go/src/github.com/cockroachdb/cockroach/pkg/col/coldata/batch.go:140
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*hashJoinEqOp).emitUnmatched
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/hashjoiner.go:330
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*hashJoinEqOp).Next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/hashjoiner.go:267
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:149
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).nextAdapter
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:140
github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror.CatchVectorizedRuntimeError
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/execerror/error.go:91
github.com/cockroachdb/cockroach/pkg/sql/colexec.(*Materializer).Next
/go/src/github.com/cockroachdb/cockroach/pkg/sql/colexec/materializer.go:177
github.com/cockroachdb/cockroach/pkg/sql/distsql.verifyColOperator
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql/columnar_utils_test.go:119
github.com/cockroachdb/cockroach/pkg/sql/distsql.TestHashJoinerAgainstProcessor
/go/src/github.com/cockroachdb/cockroach/pkg/sql/distsql/columnar_operators_test.go:256
testing.tRunner
/usr/local/go/src/testing/testing.go:865
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1337
- error with embedded safe details: unexpected error from the vectorized runtime: %+v
-- arg 1: <string>
- unexpected error from the vectorized runtime: unexpectedly found a decreasing non-zero offset: previous max=32, found=30 TraceData:[] TxnCoordMeta:<nil> RowNum:<nil> SamplerProgress:<nil> BulkProcessorProgress:<nil> Metrics:<nil>} from columnar operator
```
Please assign, take a look and update the issue accordingly.
| non_priority | teamcity failed test testhashjoineragainstprocessor the following tests appear to have failed on master testrace testhashjoineragainstprocessor you may want to check testhashjoineragainstprocessor fail testrace testhashjoineragainstprocessor stdout join type full outer onexpr filter seed run leqcols reqcols inputtypes oidfamily bytesfamily columnar operators test go unexpected meta ranges err assertion failure error with attached stack trace github com cockroachdb cockroach pkg sql colexec execerror catchvectorizedruntimeerror go src github com cockroachdb cockroach pkg sql colexec execerror error go runtime gopanic usr local go src runtime panic go github com cockroachdb cockroach pkg col coldata bytes updateoffsetstobenondecreasing go src github com cockroachdb cockroach pkg col coldata bytes go github com cockroachdb cockroach pkg col coldata membatch setlength go src github com cockroachdb cockroach pkg col coldata batch go github com cockroachdb cockroach pkg sql colexec hashjoineqop emitunmatched go src github com cockroachdb cockroach pkg sql colexec hashjoiner go github com cockroachdb cockroach pkg sql colexec hashjoineqop next go src github com cockroachdb cockroach pkg sql colexec hashjoiner go github com cockroachdb cockroach pkg sql colexec materializer next go src github com cockroachdb cockroach pkg sql colexec materializer go github com cockroachdb cockroach pkg sql colexec materializer nextadapter go src github com cockroachdb cockroach pkg sql colexec materializer go github com cockroachdb cockroach pkg sql colexec execerror catchvectorizedruntimeerror go src github com cockroachdb cockroach pkg sql colexec execerror error go github com cockroachdb cockroach pkg sql colexec materializer next go src github com cockroachdb cockroach pkg sql colexec materializer go github com cockroachdb cockroach pkg sql distsql verifycoloperator go src github com cockroachdb cockroach pkg sql distsql columnar utils test go github com cockroachdb 
cockroach pkg sql distsql testhashjoineragainstprocessor go src github com cockroachdb cockroach pkg sql distsql columnar operators test go testing trunner usr local go src testing testing go runtime goexit usr local go src runtime asm s error with embedded safe details unexpected error from the vectorized runtime v arg unexpected error from the vectorized runtime unexpectedly found a decreasing non zero offset previous max found tracedata txncoordmeta rownum samplerprogress bulkprocessorprogress metrics from columnar operator please assign take a look and update the issue accordingly | 0 |
57,850 | 24,246,703,345 | IssuesEvent | 2022-09-27 11:10:25 | ganga-devs/ganga | https://api.github.com/repos/ganga-devs/ganga | closed | Create a test backend that simulates remote backend behavior | aio monitoring service | A testing backend should be created that allows for testing common remote backend patterns. This will be done as part of testing the new async monitoring service. | 1.0 | Create a test backend that simulates remote backend behavior - A testing backend should be created that allows for testing common remote backend patterns. This will be done as part of testing the new async monitoring service. | non_priority | create a test backend that simulates remote backend behavior a testing backend should be created that allows for testing common remote backend patterns this will be done as part of testing the new async monitoring service | 0 |
232,310 | 17,778,684,800 | IssuesEvent | 2021-08-30 23:24:38 | pgPilfs/pil-money-g4c-pil-money-g4c | https://api.github.com/repos/pgPilfs/pil-money-g4c-pil-money-g4c | closed | US_005_Listado de servicios a pagar | documentation | Como Usuario quiero visualizar un listado de los posibles servicios a pagar para saber cuales estan disponibles | 1.0 | US_005_Listado de servicios a pagar - Como Usuario quiero visualizar un listado de los posibles servicios a pagar para saber cuales estan disponibles | non_priority | us listado de servicios a pagar como usuario quiero visualizar un listado de los posibles servicios a pagar para saber cuales estan disponibles | 0 |
179,360 | 13,877,633,916 | IssuesEvent | 2020-10-17 05:13:06 | crowdleague/crowdleague | https://api.github.com/repos/crowdleague/crowdleague | closed | finish auth service tests | Auth test | - [ ] test that sign streams close when sign in has finished
- [ ] test that sign streams close when sign in errors
- [ ] test sign out - dispatches a StoreProblem action on error
- [ ] test sign in with email
- [ ] test sign up account with email | 1.0 | finish auth service tests - - [ ] test that sign streams close when sign in has finished
- [ ] test that sign streams close when sign in errors
- [ ] test sign out - dispatches a StoreProblem action on error
- [ ] test sign in with email
- [ ] test sign up account with email | non_priority | finish auth service tests test that sign streams close when sign in has finished test that sign streams close when sign in errors test sign out dispatches a storeproblem action on error test sign in with email test sign up account with email | 0 |
319,292 | 27,363,342,464 | IssuesEvent | 2023-02-27 17:17:18 | medic/cht-core | https://api.github.com/repos/medic/cht-core | closed | create_user_for_contacts unit test flakes | Type: Technical issue Testing Flaky | **Describe the issue**
A unit test fails occasionally:
```
1 failing
1) create_user_for_contacts
onMatch
replaces user when 10 username collisions occur:
AssertionError: expected 10 to equal 11
+ expected - actual
-10
+11
at Context.<anonymous> (test/unit/transitions/create_user_for_contacts.js:307:53)
```
https://github.com/medic/cht-core/actions/runs/3683370354/jobs/6231870648
**Describe the improvement you'd like**
Investigate whether the unit test failing reveals a true problem in the code. Fix the problem or stabilize the unit test.
| 1.0 | create_user_for_contacts unit test flakes - **Describe the issue**
A unit test fails occasionally:
```
1 failing
1) create_user_for_contacts
onMatch
replaces user when 10 username collisions occur:
AssertionError: expected 10 to equal 11
+ expected - actual
-10
+11
at Context.<anonymous> (test/unit/transitions/create_user_for_contacts.js:307:53)
```
https://github.com/medic/cht-core/actions/runs/3683370354/jobs/6231870648
**Describe the improvement you'd like**
Investigate whether the unit test failing reveals a true problem in the code. Fix the problem or stabilize the unit test.
| non_priority | create user for contacts unit test flakes describe the issue a unit test fails occasionally failing create user for contacts onmatch replaces user when username collisions occur assertionerror expected to equal expected actual at context test unit transitions create user for contacts js describe the improvement you d like investigate whether the unit test failing reveals a true problem in the code fix the problem or stabilize the unit test | 0 |
317,552 | 27,244,437,334 | IssuesEvent | 2023-02-22 00:03:23 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | closed | [Flaky Test] creates a group from multiple blocks of the same type via block transforms | [Status] Stale [Type] Flaky Test | <!-- __META_DATA__:{"failedTimes":0,"totalCommits":0} -->
**Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.**
## Test title
creates a group from multiple blocks of the same type via block transforms
## Test path
`specs/editor/various/block-grouping.test.js`
## Errors
<!-- __TEST_RESULTS_LIST__ -->
<!-- __TEST_RESULT__ --><time datetime="2022-09-29T05:14:49.550Z"><code>[2022-09-29T05:14:49.550Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3148717750"><code>trunk</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2023-01-19T10:27:17.511Z"><code>[2023-01-19T10:27:17.511Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3957164978"><code>update/use-select-resubscribe</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><details>
<summary>
<time datetime="2023-01-20T11:43:44.715Z"><code>[2023-01-20T11:43:44.715Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3957164978"><code>update/use-select-resubscribe</code></a>.
</summary>
```
● Block Grouping › Group creation › creates a group from multiple blocks of the same type via block transforms
TimeoutError: waiting for selector `.block-editor-block-switcher__container` failed: timeout 30000ms exceeded
at new WaitTask (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:813:28)
at DOMWorld.waitForSelectorInPage (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:656:22)
at Object.internalHandler.waitFor (../../node_modules/puppeteer-core/src/common/QueryHandler.ts:78:19)
at DOMWorld.waitForSelector (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:511:25)
at Frame.waitForSelector (../../node_modules/puppeteer-core/src/common/FrameManager.ts:1273:47)
at Page.waitForSelector (../../node_modules/puppeteer-core/src/common/Page.ts:3210:29)
at transformBlockTo (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/transform-block-to.js:22:8)
at runMicrotasks (<anonymous>)
at Object.<anonymous> (specs/editor/various/block-grouping.test.js:55:4)
```
</details><!-- /__TEST_RESULT__ -->
<!-- /__TEST_RESULTS_LIST__ -->
| 1.0 | [Flaky Test] creates a group from multiple blocks of the same type via block transforms - <!-- __META_DATA__:{"failedTimes":0,"totalCommits":0} -->
**Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.**
## Test title
creates a group from multiple blocks of the same type via block transforms
## Test path
`specs/editor/various/block-grouping.test.js`
## Errors
<!-- __TEST_RESULTS_LIST__ -->
<!-- __TEST_RESULT__ --><time datetime="2022-09-29T05:14:49.550Z"><code>[2022-09-29T05:14:49.550Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3148717750"><code>trunk</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><time datetime="2023-01-19T10:27:17.511Z"><code>[2023-01-19T10:27:17.511Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3957164978"><code>update/use-select-resubscribe</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><details>
<summary>
<time datetime="2023-01-20T11:43:44.715Z"><code>[2023-01-20T11:43:44.715Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3957164978"><code>update/use-select-resubscribe</code></a>.
</summary>
```
● Block Grouping › Group creation › creates a group from multiple blocks of the same type via block transforms
TimeoutError: waiting for selector `.block-editor-block-switcher__container` failed: timeout 30000ms exceeded
at new WaitTask (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:813:28)
at DOMWorld.waitForSelectorInPage (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:656:22)
at Object.internalHandler.waitFor (../../node_modules/puppeteer-core/src/common/QueryHandler.ts:78:19)
at DOMWorld.waitForSelector (../../node_modules/puppeteer-core/src/common/DOMWorld.ts:511:25)
at Frame.waitForSelector (../../node_modules/puppeteer-core/src/common/FrameManager.ts:1273:47)
at Page.waitForSelector (../../node_modules/puppeteer-core/src/common/Page.ts:3210:29)
at transformBlockTo (../e2e-test-utils/build/@wordpress/e2e-test-utils/src/transform-block-to.js:22:8)
at runMicrotasks (<anonymous>)
at Object.<anonymous> (specs/editor/various/block-grouping.test.js:55:4)
```
</details><!-- /__TEST_RESULT__ -->
<!-- /__TEST_RESULTS_LIST__ -->
| non_priority | creates a group from multiple blocks of the same type via block transforms flaky test detected this is an auto generated issue by github actions please do not edit this manually test title creates a group from multiple blocks of the same type via block transforms test path specs editor various block grouping test js errors test passed after failed attempt on test passed after failed attempt on test passed after failed attempt on a href ● block grouping › group creation › creates a group from multiple blocks of the same type via block transforms timeouterror waiting for selector block editor block switcher container failed timeout exceeded at new waittask node modules puppeteer core src common domworld ts at domworld waitforselectorinpage node modules puppeteer core src common domworld ts at object internalhandler waitfor node modules puppeteer core src common queryhandler ts at domworld waitforselector node modules puppeteer core src common domworld ts at frame waitforselector node modules puppeteer core src common framemanager ts at page waitforselector node modules puppeteer core src common page ts at transformblockto test utils build wordpress test utils src transform block to js at runmicrotasks at object specs editor various block grouping test js | 0 |
257,062 | 22,143,975,012 | IssuesEvent | 2022-06-03 09:54:20 | pharo-project/pharo | https://api.github.com/repos/pharo-project/pharo | closed | FFITypesTest>>testSignedLongLong sometimes failing on the CI | Bug Fleaky-test | **Bug description**
On the CI (only seen on the mac for now) we have very seldomly:
```
Error
Got 2147483647 instead of -2147483649.
Stacktrace
TestFailure
Got 2147483647 instead of -2147483649.
FFITypesTest(TestAsserter)>>assert:description:resumable:
FFITypesTest(TestAsserter)>>assert:description:
FFITypesTest(TestAsserter)>>assert:equals:
[ :int |
|ref|
ref := ByteArray new: FFIInt64 externalTypeSize.
ref signedLongLongAt: 1 put: int.
self assert: (ref signedLongLongAt: 1) equals: int ] in FFITypesTest>>testSignedLongLong
Array(SequenceableCollection)>>do:
FFITypesTest>>testSignedLongLong
```
| 1.0 | FFITypesTest>>testSignedLongLong sometimes failing on the CI - **Bug description**
On the CI (only seen on the mac for now) we have very seldomly:
```
Error
Got 2147483647 instead of -2147483649.
Stacktrace
TestFailure
Got 2147483647 instead of -2147483649.
FFITypesTest(TestAsserter)>>assert:description:resumable:
FFITypesTest(TestAsserter)>>assert:description:
FFITypesTest(TestAsserter)>>assert:equals:
[ :int |
|ref|
ref := ByteArray new: FFIInt64 externalTypeSize.
ref signedLongLongAt: 1 put: int.
self assert: (ref signedLongLongAt: 1) equals: int ] in FFITypesTest>>testSignedLongLong
Array(SequenceableCollection)>>do:
FFITypesTest>>testSignedLongLong
```
| non_priority | ffitypestest testsignedlonglong sometimes failing on the ci bug description on the ci only seen on the mac for now we have very seldomly error got instead of stacktrace testfailure got instead of ffitypestest testasserter assert description resumable ffitypestest testasserter assert description ffitypestest testasserter assert equals int ref ref bytearray new externaltypesize ref signedlonglongat put int self assert ref signedlonglongat equals int in ffitypestest testsignedlonglong array sequenceablecollection do ffitypestest testsignedlonglong | 0 |
257,229 | 22,152,889,700 | IssuesEvent | 2022-06-03 18:54:24 | Sage-Bionetworks/challenge-registry | https://api.github.com/repos/Sage-Bionetworks/challenge-registry | closed | Developers greated with the message `not-test onProcessExit: process exit with code=127, signal=undefined` after fresh clone of the repo | typescript/tests | 
This error occurs because of the VS Code plugin `orta.vscode-jest` that we have added in #162. The solution is currently to install Jest, which occurs when installing the project dependencies.
An ideally solution would be for this VS Code plugin to not try to run as soon as the workspace is open in VS Code. | 1.0 | Developers greated with the message `not-test onProcessExit: process exit with code=127, signal=undefined` after fresh clone of the repo - 
This error occurs because of the VS Code plugin `orta.vscode-jest` that we have added in #162. The solution is currently to install Jest, which occurs when installing the project dependencies.
An ideally solution would be for this VS Code plugin to not try to run as soon as the workspace is open in VS Code. | non_priority | developers greated with the message not test onprocessexit process exit with code signal undefined after fresh clone of the repo this error occurs because of the vs code plugin orta vscode jest that we have added in the solution is currently to install jest which occurs when installing the project dependencies an ideally solution would be for this vs code plugin to not try to run as soon as the workspace is open in vs code | 0 |
24,701 | 4,075,025,589 | IssuesEvent | 2016-05-28 22:03:09 | Nuand/bladeRF | https://api.github.com/repos/Nuand/bladeRF | closed | BLADERF_FORMAT_SC16_Q11_META docs confusing | documentation defect | > /**
* This format is the same as the ::BLADERF_FORMAT_SC16_Q11 format, except the
* first 4 samples (16 bytes) in every block of 1024 samples are replaced
* with metadata, organized as follows, with all fields being little endian
* byte order:
Is a sample 2 bytes or 4 bytes? This sentence is confusing. Saying either "first 8 samples (16 bytes) in every block of 1024 samples (2048 bytes)" or "first 4 samples (16 bytes) in every block of 512 samples (2048 bytes)" seems less confusing to me. Or maybe just stick with bytes and forget about samples?
| 1.0 | BLADERF_FORMAT_SC16_Q11_META docs confusing - > /**
* This format is the same as the ::BLADERF_FORMAT_SC16_Q11 format, except the
* first 4 samples (16 bytes) in every block of 1024 samples are replaced
* with metadata, organized as follows, with all fields being little endian
* byte order:
Is a sample 2 bytes or 4 bytes? This sentence is confusing. Saying either "first 8 samples (16 bytes) in every block of 1024 samples (2048 bytes)" or "first 4 samples (16 bytes) in every block of 512 samples (2048 bytes)" seems less confusing to me. Or maybe just stick with bytes and forget about samples?
| non_priority | bladerf format meta docs confusing this format is the same as the bladerf format format except the first samples bytes in every block of samples are replaced with metadata organized as follows with all fields being little endian byte order is a sample bytes or bytes this sentence is confusing saying either first samples bytes in every block of samples bytes or first samples bytes in every block of samples bytes seems less confusing to me or maybe just stick with bytes and forget about samples | 0 |
38,642 | 8,518,121,725 | IssuesEvent | 2018-11-01 10:28:54 | Yoast/wordpress-seo | https://api.github.com/repos/Yoast/wordpress-seo | closed | LinkedIn doesn't respect Open Graph title tags properly | bug code-review opengraph | ### Preamble
**LinkedIn doesn't handle open graph title tags properly**
I don't think we can (easily, sensibly) fix this from our side; it looks like the issue is with how LinkedIn handles metadata and scraping.
Credit to @rmarcano for discovering this. I've shamelessly stolen his screenshots and examples; just needed to move the ticket from a private repo. I've reproduced this precisely on a test site, running the latest version of WP + Yoast (and without Yoast), and with no other plugins.
### Please describe what you expected to happen and why.
LinkedIn's [documentation](https://www.linkedin.com/help/linkedin/answer/46687) states that they require/support open graph meta tags to determine if/how their post sharing experience works.
This works broadly as you'd expect in most cases, however, in some cases the `og:title` is ignored, and the post title is used instead.
Specifically, when oembed functionality is enabled on a post (WP default), LinkedIn appears to ignore the `og:title` tag, and instead, use the post title.
When omebed functionality is disabled, the correct title is used.
### How can we reproduce this behavior?
- Create a new post, and manually specify an `og:title` (using Yoast or similar).
- Test via https://www.linkedin.com/post-inspector/, and observe that the post title (not the `og:title`) is used.
- Disable oembed functionality (e.g., via [a plugin](https://wordpress.org/plugins/disable-embeds/)).
- Test again via https://www.linkedin.com/post-inspector/.
### Walkthrough
- Creating a post with a title

- Specifying a (different) title for use in `og:title` scenarios

- Confirmation that the tags output correctly in the `<head>`

- Markup from the omebed version of the same post

- LinkedIn returning the omebed/wrong version of the title

- With oembed disabled, LinkedIn returning the correct title

### Thoughts
I'm speculating that some aspect of how LinkedIn retrieves data from the page is accidentally (or intentionally?) catching the oembed version, and extracting that information. This is either intentional (but baffling) design, or a bot getting confused with mixed signals. | 1.0 | LinkedIn doesn't respect Open Graph title tags properly - ### Preamble
**LinkedIn doesn't handle open graph title tags properly**
I don't think we can (easily, sensibly) fix this from our side; it looks like the issue is with how LinkedIn handles metadata and scraping.
Credit to @rmarcano for discovering this. I've shamelessly stolen his screenshots and examples; just needed to move the ticket from a private repo. I've reproduced this precisely on a test site, running the latest version of WP + Yoast (and without Yoast), and with no other plugins.
### Please describe what you expected to happen and why.
LinkedIn's [documentation](https://www.linkedin.com/help/linkedin/answer/46687) states that they require/support open graph meta tags to determine if/how their post sharing experience works.
This works broadly as you'd expect in most cases, however, in some cases the `og:title` is ignored, and the post title is used instead.
Specifically, when oembed functionality is enabled on a post (WP default), LinkedIn appears to ignore the `og:title` tag, and instead, use the post title.
When omebed functionality is disabled, the correct title is used.
### How can we reproduce this behavior?
- Create a new post, and manually specify an `og:title` (using Yoast or similar).
- Test via https://www.linkedin.com/post-inspector/, and observe that the post title (not the `og:title`) is used.
- Disable oembed functionality (e.g., via [a plugin](https://wordpress.org/plugins/disable-embeds/)).
- Test again via https://www.linkedin.com/post-inspector/.
### Walkthrough
- Creating a post with a title

- Specifying a (different) title for use in `og:title` scenarios

- Confirmation that the tags output correctly in the `<head>`

- Markup from the omebed version of the same post

- LinkedIn returning the omebed/wrong version of the title

- With oembed disabled, LinkedIn returning the correct title

### Thoughts
I'm speculating that some aspect of how LinkedIn retrieves data from the page is accidentally (or intentionally?) catching the oembed version, and extracting that information. This is either intentional (but baffling) design, or a bot getting confused with mixed signals. | non_priority | linkedin doesn t respect open graph title tags properly preamble linkedin doesn t handle open graph title tags properly i don t think we can easily sensibly fix this from our side it looks like the issue is with how linkedin handles metadata and scraping credit to rmarcano for discovering this i ve shamelessly stolen his screenshots and examples just needed to move the ticket from a private repo i ve reproduced this precisely on a test site running the latest version of wp yoast and without yoast and with no other plugins please describe what you expected to happen and why linkedin s states that they require support open graph meta tags to determine if how their post sharing experience works this works broadly as you d expect in most cases however in some cases the og title is ignored and the post title is used instead specifically when oembed functionality is enabled on a post wp default linkedin appears to ignore the og title tag and instead use the post title when omebed functionality is disabled the correct title is used how can we reproduce this behavior create a new post and manually specify an og title using yoast or similar test via and observe that the post title not the og title is used disable oembed functionality e g via test again via walkthrough creating a post with a title specifying a different title for use in og title scenarios confirmation that the tags output correctly in the markup from the omebed version of the same post linkedin returning the omebed wrong version of the title with oembed disabled linkedin returning the correct title thoughts i m speculating that some aspect of how linkedin retrieves data from the page is accidentally or intentionally catching 
the oembed version and extracting that information this is either intentional but baffling design or a bot getting confused with mixed signals | 0 |
5,656 | 5,109,135,123 | IssuesEvent | 2017-01-05 19:51:19 | grpc/grpc | https://api.github.com/repos/grpc/grpc | opened | Add a low watermark on request queue event to servers | core performance | Something like:
```c
/* Produces \a tag on \a cq when the number of requested calls for \a registered_method is <= \a num_requests. */
grpc_call_error grpc_server_request_low_watermark_notification_on_registered_call(grpc_server *server, void *registered_method, size_t num_requests, grpc_completion_queue *cq, void *tag);
```
Such a notification would allow wrapped languages to dynamically scale the number of outstanding requests they enqueue to the offered load against a server. | True | Add a low watermark on request queue event to servers - Something like:
```c
/* Produces \a tag on \a cq when the number of requested calls for \a registered_method is <= \a num_requests. */
grpc_call_error grpc_server_request_low_watermark_notification_on_registered_call(grpc_server *server, void *registered_method, size_t num_requests, grpc_completion_queue *cq, void *tag);
```
Such a notification would allow wrapped languages to dynamically scale the number of outstanding requests they enqueue to the offered load against a server. | non_priority | add a low watermark on request queue event to servers something like c produces a tag on a cq when the number of requested calls for a registered method is a num requests grpc call error grpc server request low watermark notification on registered call grpc server server void registered method size t num requests grpc completion queue cq void tag such a notification would allow wrapped languages to dynamically scale the number of outstanding requests they enqueue to the offered load against a server | 0 |
15,054 | 5,049,135,109 | IssuesEvent | 2016-12-20 15:06:17 | joomla/joomla-cms | https://api.github.com/repos/joomla/joomla-cms | closed | Custom Fields PR merge downgraded some vendor libs | No Code Attached Yet | ### Steps to reproduce the issue
Custom Fields PR merge downgraded some vendor libs.
Example: phpMailer 5.2.16 -> 5.2.14 https://github.com/joomla/joomla-cms/commits/staging/libraries/vendor/phpmailer/phpmailer/VERSION
Merged PR: https://github.com/joomla/joomla-cms/commit/2570285679ddd9174694687d2dca87f1f2581c04 | 1.0 | Custom Fields PR merge downgraded some vendor libs - ### Steps to reproduce the issue
Custom Fields PR merge downgraded some vendor libs.
Example: phpMailer 5.2.16 -> 5.2.14 https://github.com/joomla/joomla-cms/commits/staging/libraries/vendor/phpmailer/phpmailer/VERSION
Merged PR: https://github.com/joomla/joomla-cms/commit/2570285679ddd9174694687d2dca87f1f2581c04 | non_priority | custom fields pr merge downgraded some vendor libs steps to reproduce the issue custom fields pr merge downgraded some vendor libs example phpmailer merged pr | 0 |
24,641 | 11,052,756,090 | IssuesEvent | 2019-12-10 10:00:47 | omisego/plasma-contracts | https://api.github.com/repos/omisego/plasma-contracts | closed | Recommendation: Remove `PaymentOutputToPaymentTxCondition` and `SpendingConditionRegistry` | security | <!--
See https://help.github.com/articles/basic-writing-and-formatting-syntax/ for help with formatting code.
-->
## Issue Type
<!-- Check one with "x" -->
```
[ ] bug report
[X] feature request
```
#### Description
`PaymentOutputToPaymentTxCondition` is an abstraction around the transaction signature check needed for many components of the exit games. Its only function, `verify`, returns `true` if one transaction (`inputTxBytes`) is spent by another transaction (`spendingTxBytes`):
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/contracts/src/exits/payment/spendingConditions/PaymentOutputToPaymentTxCondition.sol#L40-L69
##### Verification process
The verification process is relatively straightforward. The contract performs some basic input validation, checking that the input transaction's `txType` matches `supportInputTxType`, and that the spending transaction's `txType` matches `supportSpendingTxType`. These values are set during construction.
Next, `verify` checks that the spending transaction contains an input that matches the position of one of the input transaction's outputs.
Finally, `verify` performs an EIP-712 hash on the spending transaction, and ensures it is signed by the owner of the output in question.
##### Implications of the abstraction
The abstraction used requires several files to be visited to fully understand the function of each line of code: `ISpendingCondition`, `PaymentEIP712Lib`, `UtxoPosLib`, `TxPosLib`, `PaymentTransactionModel`, `PaymentOutputModel`, `RLPReader`, `ECDSA`, and `SpendingConditionRegistry`. Additionally, the abstraction obfuscates the underlying spending condition verification primitive where used.
Finally, understanding the abstraction requires an understanding of how `SpendingConditionRegistry` is initialized, as well as the nature of its relationship with `PlasmaFramework` and `ExitGameRegistry`. The aforementioned `txType` values, `supportInputTxType` and `supportSpendingTxType`, are set during construction. Their use in `ExitGameRegistry` seems to suggest they are intended to represent different versions of transaction types, and that separate exit game contracts are meant to handle different transaction types:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/contracts/src/framework/registries/ExitGameRegistry.sol#L58-L78
##### Migration and initialization
The migration script seems to corroborate this interpretation:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/migrations/5_deploy_and_register_payment_exit_game.js#L109-L124
The migration script shown above deploys two different versions of `PaymentOutputToPaymentTxCondition`. The first sets `supportInputTxType` and `supportSpendingTxType` to `PAYMENT_OUTPUT_TYPE` and `PAYMENT_TX_TYPE`, respectively. The second sets those same variables to `PAYMENT_OUTPUT_TYPE` and `PAYMENT_V2_TX_TYPE`, respectively.
The migration script then registers both of these contracts in `SpendingConditionRegistry`, and then calls `renounceOwnership`, freezing the spending conditions registered permanently:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/migrations/5_deploy_and_register_payment_exit_game.js#L126-L135
Finally, the migration script registers a single exit game contract in `PlasmaFramework`:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/migrations/5_deploy_and_register_payment_exit_game.js#L137-L143
Note that the associated `_txType` is permanently associated with the deployed exit game contract:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/contracts/src/framework/registries/ExitGameRegistry.sol#L58-L78
##### Conclusion
Crucially, this association is never used. It is heavily implied that transactions with a given `txType` must use a particular registered exit game contract. In fact, this is not true: when using `PaymentExitGame`, its routers, and their associated controllers, the `txType` is invariably inferred from the encoded transaction, not from the mappings in `ExitGameRegistry`. If initialized as-is, both `PAYMENT_TX_TYPE` and `PAYMENT_V2_TX_TYPE` transactions may be exited using `PaymentExitGame`, provided they exist in the plasma chain.
#### Recommendation
* Remove `PaymentOutputToPaymentTxCondition` and `SpendingConditionRegistry`
* Implement checks for specific spending conditions directly in the exit game controllers. Emphasize clarity of function: make it clear at the top level that both a signature verification check and a spending condition check are performed.
* If the inferred relationship between `txType` and `PaymentExitGame` is correct, ensure that each `PaymentExitGame` router checks for its supported `txType`. Alternatively, the check could be made in `PaymentExitGame` itself. | True | Recommendation: Remove `PaymentOutputToPaymentTxCondition` and `SpendingConditionRegistry` - <!--
See https://help.github.com/articles/basic-writing-and-formatting-syntax/ for help with formatting code.
-->
## Issue Type
<!-- Check one with "x" -->
```
[ ] bug report
[X] feature request
```
#### Description
`PaymentOutputToPaymentTxCondition` is an abstraction around the transaction signature check needed for many components of the exit games. Its only function, `verify`, returns `true` if one transaction (`inputTxBytes`) is spent by another transaction (`spendingTxBytes`):
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/contracts/src/exits/payment/spendingConditions/PaymentOutputToPaymentTxCondition.sol#L40-L69
##### Verification process
The verification process is relatively straightforward. The contract performs some basic input validation, checking that the input transaction's `txType` matches `supportInputTxType`, and that the spending transaction's `txType` matches `supportSpendingTxType`. These values are set during construction.
Next, `verify` checks that the spending transaction contains an input that matches the position of one of the input transaction's outputs.
Finally, `verify` performs an EIP-712 hash on the spending transaction, and ensures it is signed by the owner of the output in question.
##### Implications of the abstraction
The abstraction used requires several files to be visited to fully understand the function of each line of code: `ISpendingCondition`, `PaymentEIP712Lib`, `UtxoPosLib`, `TxPosLib`, `PaymentTransactionModel`, `PaymentOutputModel`, `RLPReader`, `ECDSA`, and `SpendingConditionRegistry`. Additionally, the abstraction obfuscates the underlying spending condition verification primitive where used.
Finally, understanding the abstraction requires an understanding of how `SpendingConditionRegistry` is initialized, as well as the nature of its relationship with `PlasmaFramework` and `ExitGameRegistry`. The aforementioned `txType` values, `supportInputTxType` and `supportSpendingTxType`, are set during construction. Their use in `ExitGameRegistry` seems to suggest they are intended to represent different versions of transaction types, and that separate exit game contracts are meant to handle different transaction types:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/contracts/src/framework/registries/ExitGameRegistry.sol#L58-L78
##### Migration and initialization
The migration script seems to corroborate this interpretation:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/migrations/5_deploy_and_register_payment_exit_game.js#L109-L124
The migration script shown above deploys two different versions of `PaymentOutputToPaymentTxCondition`. The first sets `supportInputTxType` and `supportSpendingTxType` to `PAYMENT_OUTPUT_TYPE` and `PAYMENT_TX_TYPE`, respectively. The second sets those same variables to `PAYMENT_OUTPUT_TYPE` and `PAYMENT_V2_TX_TYPE`, respectively.
The migration script then registers both of these contracts in `SpendingConditionRegistry`, and then calls `renounceOwnership`, freezing the spending conditions registered permanently:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/migrations/5_deploy_and_register_payment_exit_game.js#L126-L135
Finally, the migration script registers a single exit game contract in `PlasmaFramework`:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/migrations/5_deploy_and_register_payment_exit_game.js#L137-L143
Note that the associated `_txType` is permanently associated with the deployed exit game contract:
https://github.com/omisego/plasma-contracts/blob/493ddeadc821fae7293cfc3088939eddf89cb559/plasma_framework/contracts/src/framework/registries/ExitGameRegistry.sol#L58-L78
##### Conclusion
Crucially, this association is never used. It is implied heavily that transactions with some `txType` must use a certain registered exit game contract. In fact, this is not true. When using `PaymentExitGame`, its routers, and their associated controllers, the `txType` is invariably inferred from the encoded transaction, not from the mappings in `ExitGameRegistry`. If initialized as-is, both `PAYMENT_TX_TYPE` and `PAYMENT_V2_TX_TYPE` transactions may be exited using `PaymentExitGame`, provided they exist in the plasma chain.
#### Recommendation
* Remove `PaymentOutputToPaymentTxCondition` and `SpendingConditionRegistry`
* Implement checks for specific spending conditions directly in exit game controllers. Emphasize clarity of function: ensure it is clear when called from the top level that a signature verification check and spending condition check are being performed.
* If the inferred relationship between `txType` and `PaymentExitGame` is correct, ensure that each `PaymentExitGame` router checks for its supported `txType`. Alternatively, the check could be made in `PaymentExitGame` itself. | non_priority | recommendation remove paymentoutputtopaymenttxcondition and spendingconditionregistry see for help with formatting code issue type bug report feature request description paymentoutputtopaymenttxcondition is an abstraction around the transaction signature check needed for many components of the exit games its only function verify returns true if one transaction inputtxbytes is spent by another transaction spendingtxbytes verification process the verification process is relatively straightforward the contract performs some basic input validation checking that the input transaction s txtype matches supportinputtxtype and that the spending transaction s txtype matches supportspendingtxtype these values are set during construction next verify checks that the spending transaction contains an input that matches the position of one of the input transaction s outputs finally verify performs an eip hash on the spending transaction and ensures it is signed by the owner of the output in question implications of the abstraction the abstraction used requires several files to be visited to fully understand the function of each line of code ispendingcondition utxoposlib txposlib paymenttransactionmodel paymentoutputmodel rlpreader ecdsa and spendingconditionregistry additionally the abstraction obfuscates the underlying spending condition verification primitive where used finally understanding the abstraction requires an understanding of how spendingconditionregistry is initialized as well as the nature of its relationship with plasmaframework and exitgameregistry the aforementioned txtype values supportinputtxtype and supportspendingtxtype are set during construction their use in exitgameregistry seems to suggest they are intended to 
represent different versions of transaction types and that separate exit game contracts are meant to handle different transaction types migration and initialization the migration script seems to corroborate this interpretation the migration script shown above deploys two different versions of paymentoutputtopaymenttxcondition the first sets supportinputtxtype and supportspendingtxtype to payment output type and payment tx type respectively the second sets those same variables to payment output type and payment tx type respectively the migration script then registers both of these contracts in spendingconditionregistry and then calls renounceownership freezing the spending conditions registered permanently finally the migration script registers a single exit game contract in plasmaframework note that the associated txtype is permanently associated with the deployed exit game contract conclusion crucially this association is never used it is implied heavily that transactions with some txtype must use a certain registered exit game contract in fact this is not true when using paymentexitgame its routers and their associated controllers the txtype is invariably inferred from the encoded transaction not from the mappings in exitgameregistry if initialized as is both payment tx type and payment tx type transactions may be exited using paymentexitgame provided they exist in the plasma chain recommendation remove paymentoutputtopaymenttxcondition and spendingconditionregistry implement checks for specific spending conditions directly in exit game controllers emphasize clarity of function ensure it is clear when called from the top level that a signature verification check and spending condition check are being performed if the inferred relationship between txtype and paymentexitgame is correct ensure that each paymentexitgame router checks for its supported txtype alternatively the check could be made in paymentexitgame itself | 0 |
91,130 | 15,856,371,213 | IssuesEvent | 2021-04-08 02:11:09 | jinuem/IonicV2Tabs | https://api.github.com/repos/jinuem/IonicV2Tabs | opened | CVE-2019-10746 (High) detected in mixin-deep-1.3.1.tgz | security vulnerability | ## CVE-2019-10746 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mixin-deep-1.3.1.tgz</b></summary>
<p>Deeply mix the properties of objects into the first object. Like merge-deep, but doesn't clone.</p>
<p>Library home page: <a href="https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz">https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.1.tgz</a></p>
<p>Path to dependency file: /IonicV2Tabs/package.json</p>
<p>Path to vulnerable library: IonicV2Tabs/node_modules/mixin-deep/package.json</p>
<p>
Dependency Hierarchy:
- app-scripts-1.3.5.tgz (Root Library)
- chokidar-1.6.1.tgz
- readdirp-2.2.1.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- :x: **mixin-deep-1.3.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
mixin-deep is vulnerable to Prototype Pollution in versions before 1.3.2 and version 2.0.0. The function mixin-deep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
<p>Publish Date: 2019-08-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10746>CVE-2019-10746</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9">https://github.com/jonschlinkert/mixin-deep/commit/8f464c8ce9761a8c9c2b3457eaeee9d404fa7af9</a></p>
<p>Release Date: 2019-07-11</p>
<p>Fix Resolution: 1.3.2,2.0.1</p>
</p>
</details>
<p></p>
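For background, the class of fix applied here works by refusing to treat keys that address JavaScript's prototype chain (`__proto__`, `constructor`, `prototype`) as mergeable. The guard is sketched below in Go over generic maps — purely illustrative, since the actual vulnerability concerns JavaScript's prototype chain, which Go does not have, and this is not the library's real code:

```go
package main

import "fmt"

// isValidKey sketches the guard idea: keys that address the prototype
// chain in JavaScript are never merged.
func isValidKey(key string) bool {
	return key != "__proto__" && key != "constructor" && key != "prototype"
}

// mergeDeep recursively copies src into dst, skipping dangerous keys.
func mergeDeep(dst, src map[string]interface{}) {
	for k, v := range src {
		if !isValidKey(k) {
			continue // drop constructor/prototype/__proto__ payloads
		}
		if sub, ok := v.(map[string]interface{}); ok {
			d, ok := dst[k].(map[string]interface{})
			if !ok {
				d = map[string]interface{}{}
				dst[k] = d
			}
			mergeDeep(d, sub)
		} else {
			dst[k] = v
		}
	}
}

func main() {
	dst := map[string]interface{}{}
	payload := map[string]interface{}{
		"name": "ok",
		"constructor": map[string]interface{}{
			"prototype": map[string]interface{}{"polluted": true},
		},
	}
	mergeDeep(dst, payload)
	fmt.Println(dst["name"], dst["constructor"]) // ok <nil>
}
```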
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
5,340 | 5,636,446,862 | IssuesEvent | 2017-04-06 05:49:57 | ChurchCRM/CRM | https://api.github.com/repos/ChurchCRM/CRM | closed | Use Secure / randomized default password | enhancement In Review Security | This issue explores the implications of removing the sDefault_Pass parameter, and using a securely generated random password instead.
123,760 | 10,290,397,996 | IssuesEvent | 2019-08-27 09:58:54 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | Using `--format=json` with `--nocapture` | A-libtest T-dev-tools | I'm looking for a way to get test results, including stdout/stderr, in json format.
I've tried to use `--format=json` with `--nocapture` and have got an unexpected result, imo.
Example file:
```rust
#[cfg(test)]
mod tests {
#[test]
fn test1() {
println!("Hello from test #1");
panic!();
}
#[test]
fn test2() {
println!("Hello from test #2");
}
}
```
1. `cargo test -- -Z unstable-options --format=json` output:
```
Finished dev [unoptimized + debuginfo] target(s) in 0.03s
Running target/debug/deps/rust_sandbox-3aca71fd58cbc041
{ "type": "suite", "event": "started", "test_count": "0" }
{ "type": "suite", "event": "ok", "passed": 0, "failed": 0, "allowed_fail": 0, "ignored": 0, "measured": 0, "filtered_out": "0" }
Running target/debug/deps/tests-03a860f74891dd25
{ "type": "suite", "event": "started", "test_count": "2" }
{ "type": "test", "event": "started", "name": "tests::test1" }
{ "type": "test", "event": "started", "name": "tests::test2" }
{ "type": "test", "name": "tests::test2", "event": "ok" }
{ "type": "test", "name": "tests::test1", "event": "failed", "stdout": "Hello from test #1\nthread 'tests::test1' panicked at 'explicit panic', tests/tests.rs:6:9\nnote: Run with `RUST_BACKTRACE=1` for a backtrace.\n" }
{ "type": "suite", "event": "failed", "passed": 1, "failed": 1, "allowed_fail": 0, "ignored": 0, "measured": 0, "filtered_out": "0" }
error: test failed, to rerun pass '--test tests'
```
2. `cargo test -- --nocapture -Z unstable-options --format=json` output:
```
Finished dev [unoptimized + debuginfo] target(s) in 0.03s
Running target/debug/deps/rust_sandbox-3aca71fd58cbc041
{ "type": "suite", "event": "started", "test_count": "0" }
{ "type": "suite", "event": "ok", "passed": 0, "failed": 0, "allowed_fail": 0, "ignored": 0, "measured": 0, "filtered_out": "0" }
Running target/debug/deps/tests-03a860f74891dd25
{ "type": "suite", "event": "started", "test_count": "2" }
{ "type": "test", "event": "started", "name": "tests::test1" }
{ "type": "test", "event": "started", "name": "tests::test2" }
Hello from test #1
Hello from test #2
thread 'tests::test1' panicked at 'explicit panic', tests/tests.rs:6:9
note: Run with `RUST_BACKTRACE=1` for a backtrace.
{ "type": "test", "name": "tests::test2", "event": "ok" }
{ "type": "test", "name": "tests::test1", "event": "failed" }
{ "type": "suite", "event": "failed", "passed": 1, "failed": 1, "allowed_fail": 0, "ignored": 0, "measured": 0, "filtered_out": "0" }
error: test failed, to rerun pass '--test tests'
```
I expected to see this happen:
* In the 1st case, the `Hello from test #1` line doesn't appear in the `stdout` field.
* In the 2nd case, the `println!` output and errors are recorded in different fields (e.g. `stdout` and `stderr`).
Expected behavior can be useful for integrating test frameworks with IDEs and editors.
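For context, a consumer of this stream (an IDE plugin, say) typically parses it line by line and must skip anything that is not JSON — which is exactly why the interleaved `--nocapture` output above cannot be attributed to any particular test. A minimal sketch of such a consumer in Go (field names follow the events shown above; the tolerant skip-non-JSON behavior is an assumption about how real consumers cope):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// Event mirrors the fields of the JSON events shown above.
type Event struct {
	Type   string `json:"type"`
	Event  string `json:"event"`
	Name   string `json:"name"`
	Stdout string `json:"stdout"`
}

// parseEvents decodes one JSON object per line, collecting interleaved
// non-JSON lines (such as uncaptured println! output) separately — there is
// no way to tell which test produced them.
func parseEvents(stream string) (events []Event, skipped []string) {
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		var ev Event
		if err := json.Unmarshal([]byte(line), &ev); err != nil || ev.Type == "" {
			if line != "" {
				skipped = append(skipped, line)
			}
			continue
		}
		events = append(events, ev)
	}
	return events, skipped
}

func main() {
	stream := `{ "type": "suite", "event": "started", "test_count": "2" }
{ "type": "test", "event": "started", "name": "tests::test1" }
Hello from test #1
{ "type": "test", "name": "tests::test1", "event": "failed" }`
	events, skipped := parseEvents(stream)
	fmt.Println(len(events), len(skipped)) // 3 1
}
```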
I'm using:
* rustc 1.29.1 (b801ae664 2018-09-20)
75,846 | 21,010,987,591 | IssuesEvent | 2022-03-30 06:32:03 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | [Bug]: Label does not show on top of Password text inputs when resized | Bug App Viewers Pod UI Building Pod Needs Triaging | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Description
Normally, when a Text input is resized, the Label moves to its top. But for Password text inputs this behaviour does not happen.
https://user-images.githubusercontent.com/37867493/160695912-bfcbd25b-258b-4f6b-9cdd-92cc3a17d612.mov
### Steps To Reproduce
1. Add an Input widget on canvas
2. Add a Label to it
3. Drag the input down and notice that the Label aligns to the top
4. Switch Data type to Password
5. Watch this alignment fail.
### Public Sample App
_No response_
### Version
1.6.16
47,104 | 10,027,715,176 | IssuesEvent | 2019-07-17 09:50:49 | mozilla/release-services | https://api.github.com/repos/mozilla/release-services | closed | In the file endpoint, coverage information is not mapped from the tip changeset to the requested changeset | app:codecoverage/backend | We just assume the changeset is the only one in the push changing those files.
53,108 | 6,300,878,601 | IssuesEvent | 2017-07-21 06:05:58 | foundersandcoders/master-reference | https://api.github.com/repos/foundersandcoders/master-reference | opened | Todo project: `sortTodos` needs better explanation | week-testing | Students were confused about _how_ they were meant to sort the todos (i.e. by id? by status? alphabetically? etc.)
Simply listing these as options in the comments might help clear up some of this confusion.
191,288 | 14,593,753,100 | IssuesEvent | 2020-12-20 00:51:18 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | riscv/riscv-go: src/regexp/all_test.go; 3 LoC | fresh test tiny |
Found a possible issue in [riscv/riscv-go](https://www.github.com/riscv/riscv-go) at [src/regexp/all_test.go](https://github.com/riscv/riscv-go/blob/124ebd6fcc8e6c6aed193c1dce855709d8614fd2/src/regexp/all_test.go#L95-L97)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to test at line 96 may start a goroutine
[Click here to see the code in its original context.](https://github.com/riscv/riscv-go/blob/124ebd6fcc8e6c6aed193c1dce855709d8614fd2/src/regexp/all_test.go#L95-L97)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range findTests {
matchTest(t, &test)
}
```
</details>
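The hazard the analyzer is probing for is variable aliasing: before Go 1.22, `test` was a single variable reused by every iteration, so a pointer handed to a goroutine could observe later iterations' values (and the 2016-era riscv-go tree long predates that change). The self-contained model below — not the actual `matchTest` code — hoists the shared variable out of the loop explicitly, so it reproduces the pre-1.22 behaviour on any Go version:

```go
package main

import "fmt"

// capturePointers models taking &test inside a pre-Go-1.22 range loop, where
// `test` was one variable reused by every iteration. Hoisting the variable
// out of the loop makes that sharing explicit.
func capturePointers(tests []string) []*string {
	var test string // the one shared variable
	ptrs := make([]*string, 0, len(tests))
	for _, t := range tests {
		test = t
		ptrs = append(ptrs, &test) // every append stores the same address
	}
	return ptrs
}

func main() {
	ptrs := capturePointers([]string{"a", "b", "c"})
	// A goroutine that dereferences one of these pointers after (or during)
	// later iterations sees whatever the variable holds *then* — here, "c",
	// regardless of which iteration captured the pointer.
	for _, p := range ptrs {
		fmt.Println(*p) // prints "c" three times
	}
	fmt.Println(ptrs[0] == ptrs[1]) // true: all pointers alias one variable
}
```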
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 124ebd6fcc8e6c6aed193c1dce855709d8614fd2
| 1.0 | riscv/riscv-go: src/regexp/all_test.go; 3 LoC -
Found a possible issue in [riscv/riscv-go](https://www.github.com/riscv/riscv-go) at [src/regexp/all_test.go](https://github.com/riscv/riscv-go/blob/124ebd6fcc8e6c6aed193c1dce855709d8614fd2/src/regexp/all_test.go#L95-L97)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to test at line 96 may start a goroutine
[Click here to see the code in its original context.](https://github.com/riscv/riscv-go/blob/124ebd6fcc8e6c6aed193c1dce855709d8614fd2/src/regexp/all_test.go#L95-L97)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range findTests {
matchTest(t, &test)
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 124ebd6fcc8e6c6aed193c1dce855709d8614fd2
| non_priority | riscv riscv go src regexp all test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to test at line may start a goroutine click here to show the line s of go which triggered the analyzer go for test range findtests matchtest t test leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
298,260 | 25,808,140,706 | IssuesEvent | 2022-12-11 15:52:20 | foxthefox/ioBroker.fritzdect | https://api.github.com/repos/foxthefox/ioBroker.fritzdect | closed | unknown datapoint DECT_139790037754.adaptiveHeatingRunning please inform devloper and open issue in github | test feedback missing | **Describe the bug**
BugReport from fritzdect in iobroker default configuration
A clear and concise description of what the bug is.
**To Reproduce**
default communication with driver and fritzbox (sample 7590: os: [7.39-101351 BETA])
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots & Logfiles**
2022-11-14 00:41:08.835 | warn | unknown datapoint DECT_139790037754.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.834 | warn | unknown datapoint DECT_139790037754.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.771 | warn | unknown datapoint DECT_099950573066.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.769 | warn | unknown datapoint DECT_099950573066.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.650 | warn | unknown datapoint DECT_099950570176.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.650 | warn | unknown datapoint DECT_099950570176.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.518 | warn | unknown datapoint DECT_099950670969.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.518 | warn | unknown datapoint DECT_099950670969.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.883 | warn | unknown datapoint DECT_139790037754.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.882 | warn | unknown datapoint DECT_139790037754.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.790 | warn | unknown datapoint DECT_099950573066.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.789 | warn | unknown datapoint DECT_099950573066.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.694 | warn | unknown datapoint DECT_099950570176.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.693 | warn | unknown datapoint DECT_099950570176.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.540 | warn | unknown datapoint DECT_099950670969.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.540 | warn | unknown datapoint DECT_099950670969.adaptiveHeatingActive please inform devloper and open issue in github
**Versions:**
Verfügbare Version:
2.2.3
Installierte Version:
2.2.3
- JS-Controller version: <js-controller-version> <!-- determine this with `iobroker -v` on the console -->
- Node version: <node-version> <!-- determine this with `node -v` on the console -->
- Operating system: <os-name>
**Additional context**
Add any other context about the problem here.
| 1.0 | unknown datapoint DECT_139790037754.adaptiveHeatingRunning please inform devloper and open issue in github - **Describe the bug**
BugReport from fritzdect in iobroker default configuration
A clear and concise description of what the bug is.
**To Reproduce**
default communication with driver and fritzbox (sample 7590: os: [7.39-101351 BETA])
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots & Logfiles**
2022-11-14 00:41:08.835 | warn | unknown datapoint DECT_139790037754.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.834 | warn | unknown datapoint DECT_139790037754.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.771 | warn | unknown datapoint DECT_099950573066.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.769 | warn | unknown datapoint DECT_099950573066.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.650 | warn | unknown datapoint DECT_099950570176.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.650 | warn | unknown datapoint DECT_099950570176.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.518 | warn | unknown datapoint DECT_099950670969.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:41:08.518 | warn | unknown datapoint DECT_099950670969.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.883 | warn | unknown datapoint DECT_139790037754.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.882 | warn | unknown datapoint DECT_139790037754.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.790 | warn | unknown datapoint DECT_099950573066.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.789 | warn | unknown datapoint DECT_099950573066.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.694 | warn | unknown datapoint DECT_099950570176.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.693 | warn | unknown datapoint DECT_099950570176.adaptiveHeatingActive please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.540 | warn | unknown datapoint DECT_099950670969.adaptiveHeatingRunning please inform devloper and open issue in github
fritzdect.0 | 2022-11-14 00:36:08.540 | warn | unknown datapoint DECT_099950670969.adaptiveHeatingActive please inform devloper and open issue in github
**Versions:**
Verfügbare Version:
2.2.3
Installierte Version:
2.2.3
- JS-Controller version: <js-controller-version> <!-- determine this with `iobroker -v` on the console -->
- Node version: <node-version> <!-- determine this with `node -v` on the console -->
- Operating system: <os-name>
**Additional context**
Add any other context about the problem here.
| non_priority | unknown datapoint dect adaptiveheatingrunning please inform devloper and open issue in github describe the bug bugreport from fritzdect in iobroker default configuration a clear and concise description of what the bug is to reproduce default communication with driver and fritzbox sample os expected behavior a clear and concise description of what you expected to happen screenshots logfiles warn unknown datapoint dect adaptiveheatingrunning please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingactive please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingrunning please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingactive please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingrunning please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingactive please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingrunning please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingactive please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingrunning please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingactive please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingrunning please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingactive please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingrunning please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingactive please inform devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingrunning please inform 
devloper and open issue in github fritzdect warn unknown datapoint dect adaptiveheatingactive please inform devloper and open issue in github versions verfügbare version installierte version js controller version node version operating system additional context add any other context about the problem here | 0 |
143,979 | 11,590,013,573 | IssuesEvent | 2020-02-24 04:59:42 | escobard/create-app | https://api.github.com/repos/escobard/create-app | closed | UI - Header tests | UI Unit Tests | Part of the #39.
1. Move verbage to `constants/catalogue`.
1. Date formatter util + test.
1. PropType to header.
1. Unit tests for header. | 1.0 | UI - Header tests - Part of the #39.
1. Move verbage to `constants/catalogue`.
1. Date formatter util + test.
1. PropType to header.
1. Unit tests for header. | non_priority | ui header tests part of the move verbage to constants catalogue date formatter util test proptype to header unit tests for header | 0 |
123,670 | 16,523,934,094 | IssuesEvent | 2021-05-26 17:31:39 | BarryCap/BarryCap.github.io | https://api.github.com/repos/BarryCap/BarryCap.github.io | opened | Add buttons with icons in Realizations page | CSS HTML SVG design enhancement improvement setup text | **Realizations page is currently a bit blank**, and as said in #38, Drawing page must be more accessible, it is the same for series pages. | 1.0 | Add buttons with icons in Realizations page - **Realizations page is currently a bit blank**, and as said in #38, Drawing page must be more accessible, it is the same for series pages. | non_priority | add buttons with icons in realizations page realizations page is currently a bit blank and as said in drawing page must be more accessible it is the same for series pages | 0 |
255,374 | 19,299,730,282 | IssuesEvent | 2021-12-13 02:53:29 | RackReaver/FinPack | https://api.github.com/repos/RackReaver/FinPack | closed | Create CONTRIBUTE.md | documentation help wanted | I want this to be simple but all the examples and templates online are quite long. If anyone can provide assistance it would be greatly appreciated.
Resource:
https://gist.github.com/PurpleBooth/b24679402957c63ec426 | 1.0 | Create CONTRIBUTE.md - I want this to be simple but all the examples and templates online are quite long. If anyone can provide assistance it would be greatly appreciated.
Resource:
https://gist.github.com/PurpleBooth/b24679402957c63ec426 | non_priority | create contribute md i want this to be simple but all the examples and templates online are quite long if anyone can provide assistance it would be greatly appreciated resource | 0 |
143,150 | 13,055,519,372 | IssuesEvent | 2020-07-30 01:56:12 | miguelangelsoria/Reclamaciones | https://api.github.com/repos/miguelangelsoria/Reclamaciones | reopened | Creacion de la base de datos | documentation | se crea la tabla que alojara los datos de los trabajadores que presenten su formato de reclamacion | 1.0 | Creacion de la base de datos - se crea la tabla que alojara los datos de los trabajadores que presenten su formato de reclamacion | non_priority | creacion de la base de datos se crea la tabla que alojara los datos de los trabajadores que presenten su formato de reclamacion | 0 |
75,927 | 26,155,153,013 | IssuesEvent | 2022-12-30 20:11:42 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | opened | Voice message timeline | T-Defect | ### Steps to reproduce
1. Tap and hold microphone button on Android to record a short voice message and post it in the room
2. Play back the message and note where the audio starts in relation to the timeline indicator.
### Outcome
#### What did you expect?
The current position indicator should move smoothly and match with when I hear audio.
#### What happened instead?
The current position indicator seemingly updates once per second and I seem to hear audio before where it shows up in the curve.
**Here is a screenshot with a red arrow added roughly when I start hearing audio.**

### Your phone model
Pixel 6a
### Operating system version
Android 13
### Application version and app store
Element 1.5.16 olm 3.2.12 from Google Play
### Homeserver
element.io
### Will you send logs?
No
### Are you willing to provide a PR?
No | 1.0 | Voice message timeline - ### Steps to reproduce
1. Tap and hold microphone button on Android to record a short voice message and post it in the room
2. Play back the message and note where the audio starts in relation to the timeline indicator.
### Outcome
#### What did you expect?
The current position indicator should move smoothly and match with when I hear audio.
#### What happened instead?
The current position indicator seemingly updates once per second and I seem to hear audio before where it shows up in the curve.
**Here is a screenshot with a red arrow added roughly when I start hearing audio.**

### Your phone model
Pixel 6a
### Operating system version
Android 13
### Application version and app store
Element 1.5.16 olm 3.2.12 from Google Play
### Homeserver
element.io
### Will you send logs?
No
### Are you willing to provide a PR?
No | non_priority | voice message timeline steps to reproduce tap and hold microphone button on android to record a short voice message and post it in the room play back the message and note where the audio starts in relation to the timeline indicator outcome what did you expect the current position indicator should move smoothly and match with when i hear audio what happened instead the current position indicator seemingly updates once per second and i seem to hear audio before where it shows up in the curve here is a screenshot with a red arrow added roughly when i start hearing audio your phone model pixel operating system version android application version and app store element olm from google play homeserver element io will you send logs no are you willing to provide a pr no | 0 |
334,551 | 29,893,236,877 | IssuesEvent | 2023-06-21 00:58:20 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | DISABLED test_bitwise2_cpu (__main__.CpuTests) | triaged module: flaky-tests skipped module: inductor | Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_bitwise2_cpu&suite=CpuTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14032417714).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_bitwise2_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor.py` or `inductor/test_torchinductor.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/inductor/test_torchinductor.py -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 | 1.0 | DISABLED test_bitwise2_cpu (__main__.CpuTests) - Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_bitwise2_cpu&suite=CpuTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/14032417714).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_bitwise2_cpu`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_torchinductor.py` or `inductor/test_torchinductor.py`
ResponseTimeoutError: Response timeout for 5000ms, GET https://raw.githubusercontent.com/pytorch/pytorch/main/test/inductor/test_torchinductor.py -1 (connected: true, keepalive socket: false, socketHandledRequests: 1, socketHandledResponses: 0)
headers: {}
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 | non_priority | disabled test cpu main cputests platforms rocm this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test cpu there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path inductor test torchinductor py or inductor test torchinductor py responsetimeouterror response timeout for get connected true keepalive socket false sockethandledrequests sockethandledresponses headers cc voznesenskym penguinwu eikanwang guobing chen xiaobingsuper zhuhaozhe blzheng xia weiwen wenzhe nrv jiayisunx ipiszy ngimel | 0 |
299,779 | 25,927,592,094 | IssuesEvent | 2022-12-16 06:43:38 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | ccl/backupccl: TestDataDriven failed | C-test-failure O-robot branch-release-22.1 | ccl/backupccl.TestDataDriven [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=7985204&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=7985204&tab=artifacts#/) on release-22.1 @ [310c0c5956115c393dea307fdd9f2e177b8c3a5e](https://github.com/cockroachdb/cockroach/commits/310c0c5956115c393dea307fdd9f2e177b8c3a5e):
```
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:79:
exec-sql [0 args]
BACKUP DATABASE no_region_db INTO 'nodelocal://1/no_region_database_backup/';
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:83:
exec-sql [0 args]
BACKUP INTO 'nodelocal://1/no_region_cluster_backup/';
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:87:
exec-sql [0 args]
DROP DATABASE no_region_db;
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:91:
exec-sql [0 args]
DROP DATABASE no_region_db_2;
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:95:
exec-sql [0 args]
SET CLUSTER SETTING sql.defaults.primary_region = 'non-existent-region';
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:99:
exec-sql [0 args]
RESTORE DATABASE no_region_db FROM LATEST IN 'nodelocal://1/no_region_database_backup/';
----
pq: region "non-existent-region" does not exist
HINT: valid regions: eu-central-1, eu-north-1
--
set the default PRIMARY REGION to a region that exists (see SHOW REGIONS FROM CLUSTER) then using SET CLUSTER SETTING sql.defaults.primary_region = 'region'
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:107:
exec-sql [0 args]
SET CLUSTER SETTING sql.defaults.primary_region = 'eu-central-1';
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:111: RESTORE DATABASE no_region_db FROM LATEST IN 'nodelocal://1/no_region_database_backup/';
expected:
NOTICE: setting the PRIMARY REGION as eu-central-1 on database no_region_db
HINT: to change the default primary region, use SET CLUSTER SETTING sql.defaults.primary_region = 'region' or use RESET CLUSTER SETTING sql.defaults.primary_region to disable this behavior
found:
pq: region "non-existent-region" does not exist
HINT: valid regions: eu-central-1, eu-north-1
--
set the default PRIMARY REGION to a region that exists (see SHOW REGIONS FROM CLUSTER) then using SET CLUSTER SETTING sql.defaults.primary_region = 'region'
--- FAIL: TestDataDriven/multiregion (11.89s)
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #93492 ccl/backupccl: TestDataDriven failed [C-test-failure O-robot T-disaster-recovery branch-master]
- #92886 ccl/backupccl: TestDataDriven failed [C-test-failure O-robot T-disaster-recovery branch-release-22.2.0]
</p>
</details>
/cc @cockroachdb/disaster-recovery
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestDataDriven.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| 1.0 | ccl/backupccl: TestDataDriven failed - ccl/backupccl.TestDataDriven [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=7985204&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=7985204&tab=artifacts#/) on release-22.1 @ [310c0c5956115c393dea307fdd9f2e177b8c3a5e](https://github.com/cockroachdb/cockroach/commits/310c0c5956115c393dea307fdd9f2e177b8c3a5e):
```
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:79:
exec-sql [0 args]
BACKUP DATABASE no_region_db INTO 'nodelocal://1/no_region_database_backup/';
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:83:
exec-sql [0 args]
BACKUP INTO 'nodelocal://1/no_region_cluster_backup/';
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:87:
exec-sql [0 args]
DROP DATABASE no_region_db;
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:91:
exec-sql [0 args]
DROP DATABASE no_region_db_2;
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:95:
exec-sql [0 args]
SET CLUSTER SETTING sql.defaults.primary_region = 'non-existent-region';
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:99:
exec-sql [0 args]
RESTORE DATABASE no_region_db FROM LATEST IN 'nodelocal://1/no_region_database_backup/';
----
pq: region "non-existent-region" does not exist
HINT: valid regions: eu-central-1, eu-north-1
--
set the default PRIMARY REGION to a region that exists (see SHOW REGIONS FROM CLUSTER) then using SET CLUSTER SETTING sql.defaults.primary_region = 'region'
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:107:
exec-sql [0 args]
SET CLUSTER SETTING sql.defaults.primary_region = 'eu-central-1';
----
datadriven_test.go:364:
/home/roach/.cache/bazel/_bazel_roach/c5a4e7d36696d9cd970af2045211a7df/sandbox/processwrapper-sandbox/3758/execroot/cockroach/bazel-out/k8-dbg/bin/pkg/ccl/backupccl/backupccl_test_/backupccl_test.runfiles/cockroach/pkg/ccl/backupccl/testdata/backup-restore/multiregion:111: RESTORE DATABASE no_region_db FROM LATEST IN 'nodelocal://1/no_region_database_backup/';
expected:
NOTICE: setting the PRIMARY REGION as eu-central-1 on database no_region_db
HINT: to change the default primary region, use SET CLUSTER SETTING sql.defaults.primary_region = 'region' or use RESET CLUSTER SETTING sql.defaults.primary_region to disable this behavior
found:
pq: region "non-existent-region" does not exist
HINT: valid regions: eu-central-1, eu-north-1
--
set the default PRIMARY REGION to a region that exists (see SHOW REGIONS FROM CLUSTER) then using SET CLUSTER SETTING sql.defaults.primary_region = 'region'
--- FAIL: TestDataDriven/multiregion (11.89s)
```
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
Parameters in this failure:
- TAGS=bazel,gss
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #93492 ccl/backupccl: TestDataDriven failed [C-test-failure O-robot T-disaster-recovery branch-master]
- #92886 ccl/backupccl: TestDataDriven failed [C-test-failure O-robot T-disaster-recovery branch-release-22.2.0]
</p>
</details>
/cc @cockroachdb/disaster-recovery
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestDataDriven.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
| non_priority | ccl backupccl testdatadriven failed ccl backupccl testdatadriven with on release home roach cache bazel bazel roach sandbox processwrapper sandbox execroot cockroach bazel out dbg bin pkg ccl backupccl backupccl test backupccl test runfiles cockroach pkg ccl backupccl testdata backup restore multiregion exec sql backup database no region db into nodelocal no region database backup datadriven test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot cockroach bazel out dbg bin pkg ccl backupccl backupccl test backupccl test runfiles cockroach pkg ccl backupccl testdata backup restore multiregion exec sql backup into nodelocal no region cluster backup datadriven test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot cockroach bazel out dbg bin pkg ccl backupccl backupccl test backupccl test runfiles cockroach pkg ccl backupccl testdata backup restore multiregion exec sql drop database no region db datadriven test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot cockroach bazel out dbg bin pkg ccl backupccl backupccl test backupccl test runfiles cockroach pkg ccl backupccl testdata backup restore multiregion exec sql drop database no region db datadriven test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot cockroach bazel out dbg bin pkg ccl backupccl backupccl test backupccl test runfiles cockroach pkg ccl backupccl testdata backup restore multiregion exec sql set cluster setting sql defaults primary region non existent region datadriven test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot cockroach bazel out dbg bin pkg ccl backupccl backupccl test backupccl test runfiles cockroach pkg ccl backupccl testdata backup restore multiregion exec sql restore database no region db from latest in nodelocal no region database backup pq region non existent region does not exist hint valid regions eu central eu north set 
the default primary region to a region that exists see show regions from cluster then using set cluster setting sql defaults primary region region datadriven test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot cockroach bazel out dbg bin pkg ccl backupccl backupccl test backupccl test runfiles cockroach pkg ccl backupccl testdata backup restore multiregion exec sql set cluster setting sql defaults primary region eu central datadriven test go home roach cache bazel bazel roach sandbox processwrapper sandbox execroot cockroach bazel out dbg bin pkg ccl backupccl backupccl test backupccl test runfiles cockroach pkg ccl backupccl testdata backup restore multiregion restore database no region db from latest in nodelocal no region database backup expected notice setting the primary region as eu central on database no region db hint to change the default primary region use set cluster setting sql defaults primary region region or use reset cluster setting sql defaults primary region to disable this behavior found pq region non existent region does not exist hint valid regions eu central eu north set the default primary region to a region that exists see show regions from cluster then using set cluster setting sql defaults primary region region fail testdatadriven multiregion help see also parameters in this failure tags bazel gss same failure on other branches ccl backupccl testdatadriven failed ccl backupccl testdatadriven failed cc cockroachdb disaster recovery | 0 |
43,692 | 17,632,640,300 | IssuesEvent | 2021-08-19 09:53:43 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | The private certificate should also contain the root certificate in the certificate chain, shouldn't it? | app-service/svc triaged cxp product-question Pri1 | Hi Team,
This page says that your private certificate must meet the following requirements:
- Exported as a password-protected PFX file, encrypted using triple DES.
- Contains private key at least 2048 bits long
- Contains all intermediate certificates in the certificate chain
Regarding the 3rd requirement, the private certificate should also contain the root certificate in the certificate chain, shouldn't it?
So I think it would be better to revise the 3rd requirement as follows.
- Contains all intermediate certificates and the root certificate in the certificate chain.
Best Regards,
Takeshi Katayama
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: cc2ee874-df45-1de2-1b30-1fd75c7fd709
* Version Independent ID: ee181722-8386-9842-407f-d0549012d2e9
* Content: [Add and manage TLS/SSL certificates - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/configure-ssl-certificate#private-certificate-requirements)
* Content Source: [articles/app-service/configure-ssl-certificate.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/configure-ssl-certificate.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | 1.0 | The private certificate should also contain the root certificate in the certificate chain, shouldn't it? - Hi Team,
This page says that your private certificate must meet the following requirements:
- Exported as a password-protected PFX file, encrypted using triple DES.
- Contains private key at least 2048 bits long
- Contains all intermediate certificates in the certificate chain
Regarding the 3rd requirement, the private certificate should also contain the root certificate in the certificate chain, shouldn't it?
So I think it would be better to revise the 3rd requirement as follows.
- Contains all intermediate certificates and the root certificate in the certificate chain.
Best Regards,
Takeshi Katayama
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: cc2ee874-df45-1de2-1b30-1fd75c7fd709
* Version Independent ID: ee181722-8386-9842-407f-d0549012d2e9
* Content: [Add and manage TLS/SSL certificates - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/configure-ssl-certificate#private-certificate-requirements)
* Content Source: [articles/app-service/configure-ssl-certificate.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/app-service/configure-ssl-certificate.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | non_priority | the private certificate should also contain the root certificate in the certificate chain shouldn t it hi team this page says that your private certificate must meet the following requirements exported as a password protected pfx file encrypted using triple des contains private key at least bits long contains all intermediate certificates in the certificate chain regarding the requirement the private certificate should also contain the root certificate in the certificate chain shouldn t it so i m thinking it better that the requirement is revised as the following contains all intermediate certificate and the root one in the certificate chain best regards takeshi katayama document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin | 0 |
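The requirement discussed in the record above — that the uploaded PFX carry the intermediates and, arguably, the root — can be made concrete with a toy check. This is a hedged sketch, not a real X.509 validator: certificates are modeled as plain (subject, issuer) string pairs, and the function name `chain_is_complete` is invented for illustration. A chain is complete up to the root when each certificate is issued by the next one in the list and the final certificate is self-signed.

```python
def chain_is_complete(certs: list[tuple[str, str]]) -> bool:
    """certs: (subject, issuer) pairs ordered leaf-first.

    True only if every certificate is issued by the next one in the
    list and the last certificate is self-signed (i.e. the root).
    """
    if not certs:
        return False
    for (_, issuer), (next_subject, _) in zip(certs, certs[1:]):
        if issuer != next_subject:  # a link in the chain is missing
            return False
    subject, issuer = certs[-1]
    return subject == issuer  # a root certificate issues itself

# A leaf + intermediate + root chain passes; dropping the root fails.
full = [("leaf", "intermediate"), ("intermediate", "root"), ("root", "root")]
assert chain_is_complete(full)
assert not chain_is_complete(full[:-1])
```

Whether App Service strictly requires the root or only the intermediates is exactly the documentation question raised in the issue; the sketch only shows what "complete including the root" would mean mechanically.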
189,455 | 22,047,043,392 | IssuesEvent | 2022-05-30 03:46:31 | praneethpanasala/linux | https://api.github.com/repos/praneethpanasala/linux | closed | CVE-2019-10126 (High) detected in linuxlinux-4.19.6 - autoclosed | security vulnerability | ## CVE-2019-10126 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.6</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel. A heap based buffer overflow in mwifiex_uap_parse_tail_ies function in drivers/net/wireless/marvell/mwifiex/ie.c might lead to memory corruption and possibly other consequences.
<p>Publish Date: 2019-06-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10126>CVE-2019-10126</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-10126">https://nvd.nist.gov/vuln/detail/CVE-2019-10126</a></p>
<p>Release Date: 2019-06-14</p>
<p>Fix Resolution: kernel-headers - 3.10.0-1062.4.1,4.18.0-147,3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-957.54.1,4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,4.18.0-147,3.10.0-957.54.1,4.18.0-147,3.10.0-1062.4.1;kernel-rt-trace - 3.10.0-1062.4.1.rt56.1027;kernel-debuginfo-common-x86_64 - 4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1;kernel-rt - 3.10.0-1062.4.1.rt56.1027,3.10.0-1062.4.1.rt56.1027,4.18.0-147.rt24.93,4.18.0-147.rt24.93;kernel-zfcpdump - 4.18.0-147;kernel-rt-debug-modules-extra - 4.18.0-147.rt24.93;python3-perf-debuginfo - 4.18.0-80.15.1;kernel-rt-modules-extra - 4.18.0-147.rt24.93;kernel-doc - 3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-80.15.1;kernel-rt-core - 4.18.0-147.rt24.93;kernel-rt-debug-debuginfo - 4.18.0-147.rt24.93;kernel-abi-whitelists - 4.18.0-147,3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-80.15.1;kernel-zfcpdump-modules - 4.18.0-147;kernel-rt-trace-devel - 3.10.0-1062.4.1.rt56.1027;kernel-debug-modules-extra - 4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-80.15.1;kernel-rt-debug-kvm - 3.10.0-1062.4.1.rt56.1027,4.18.0-147.rt24.93;kernel-bootwrapper - 3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1;kernel-rt-debuginfo - 4.18.0-147.rt24.93;kernel-rt-debug-modules - 4.18.0-147.rt24.93;kernel-zfcpdump-devel - 4.18.0-147;perf - 4.18.0-147,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,4.18.0-80.15.1,3.10.0-957.54.1;kernel-zfcpdump-modules-extra - 4.18.0-147;kernel-debuginfo - 4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1;kernel-debug-devel - 4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,4.18.0-147,4.18.0-147,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-147,3.10.0-1062.4.1;bpftool - 4.18.0-147,4.18.0-147,3.10.0-1062.4.1,4.18.0-147,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-80.15.1;kernel-rt-debug-kvm-debuginfo - 
4.18.0-147.rt24.93;kernel-rt-debug-core - 4.18.0-147.rt24.93;kernel-tools-libs - 4.18.0-147,4.18.0-147,3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-80.15.1,4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1;perf-debuginfo - 3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-1062.4.1;kernel-rt-kvm-debuginfo - 4.18.0-147.rt24.93;kernel-cross-headers - 4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-147;kernel-debug-debuginfo - 3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-80.15.1;kernel-debug - 4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,3.10.0-957.54.1,3.10.0-957.54.1;kernel-devel - 3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,3.10.0-957.54.1,4.18.0-147;kernel - 3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-147,4.18.0-80.15.1,4.18.0-80.15.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-147,3.10.0-1062.4.1,4.18.0-80.15.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-147;bpftool-debuginfo - 4.18.0-80.15.1,3.10.0-1062.4.1;kernel-zfcpdump-core - 4.18.0-147;kernel-debug-core - 4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1;kernel-modules-extra - 4.18.0-147,4.18.0-80.15.1,4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-147;kernel-rt-debug-devel - 4.18.0-147.rt24.93,3.10.0-1062.4.1.rt56.1027;python-perf - 3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-957.54.1;kernel-core - 4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-147;kernel-rt-debug - 3.10.0-1062.4.1.rt56.1027,4.18.0-147.rt24.93;kernel-rt-devel - 4.18.0-147.rt24.93,3.10.0-1062.4.1.rt56.1027;kernel-debuginfo-common-ppc64 - 3.10.0-957.54.1,3.10.0-1062.4.1;python3-perf - 
4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-80.15.1,4.18.0-147;kernel-tools - 4.18.0-80.15.1,4.18.0-147,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-147,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-147,4.18.0-80.15.1;kernel-debug-modules - 4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1;kernel-rt-trace-kvm - 3.10.0-1062.4.1.rt56.1027;kernel-rt-debuginfo-common-x86_64 - 4.18.0-147.rt24.93;kernel-tools-libs-devel - 3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-957.54.1;kernel-modules - 4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1;kernel-tools-debuginfo - 3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1;kernel-rt-modules - 4.18.0-147.rt24.93;kernel-rt-doc - 3.10.0-1062.4.1.rt56.1027;kernel-rt-kvm - 3.10.0-1062.4.1.rt56.1027,4.18.0-147.rt24.93;python-perf-debuginfo - 3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-10126 (High) detected in linuxlinux-4.19.6 - autoclosed - ## CVE-2019-10126 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.6</b></p></summary>
<p>
<p>Apache Software Foundation (ASF)</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in the Linux kernel. A heap based buffer overflow in mwifiex_uap_parse_tail_ies function in drivers/net/wireless/marvell/mwifiex/ie.c might lead to memory corruption and possibly other consequences.
<p>Publish Date: 2019-06-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10126>CVE-2019-10126</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2019-10126">https://nvd.nist.gov/vuln/detail/CVE-2019-10126</a></p>
<p>Release Date: 2019-06-14</p>
<p>Fix Resolution: kernel-headers - 3.10.0-1062.4.1,4.18.0-147,3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-957.54.1,4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,4.18.0-147,3.10.0-957.54.1,4.18.0-147,3.10.0-1062.4.1;kernel-rt-trace - 3.10.0-1062.4.1.rt56.1027;kernel-debuginfo-common-x86_64 - 4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1;kernel-rt - 3.10.0-1062.4.1.rt56.1027,3.10.0-1062.4.1.rt56.1027,4.18.0-147.rt24.93,4.18.0-147.rt24.93;kernel-zfcpdump - 4.18.0-147;kernel-rt-debug-modules-extra - 4.18.0-147.rt24.93;python3-perf-debuginfo - 4.18.0-80.15.1;kernel-rt-modules-extra - 4.18.0-147.rt24.93;kernel-doc - 3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-80.15.1;kernel-rt-core - 4.18.0-147.rt24.93;kernel-rt-debug-debuginfo - 4.18.0-147.rt24.93;kernel-abi-whitelists - 4.18.0-147,3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-80.15.1;kernel-zfcpdump-modules - 4.18.0-147;kernel-rt-trace-devel - 3.10.0-1062.4.1.rt56.1027;kernel-debug-modules-extra - 4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-80.15.1;kernel-rt-debug-kvm - 3.10.0-1062.4.1.rt56.1027,4.18.0-147.rt24.93;kernel-bootwrapper - 3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1;kernel-rt-debuginfo - 4.18.0-147.rt24.93;kernel-rt-debug-modules - 4.18.0-147.rt24.93;kernel-zfcpdump-devel - 4.18.0-147;perf - 4.18.0-147,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,4.18.0-80.15.1,3.10.0-957.54.1;kernel-zfcpdump-modules-extra - 4.18.0-147;kernel-debuginfo - 4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1;kernel-debug-devel - 4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,4.18.0-147,4.18.0-147,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-147,3.10.0-1062.4.1;bpftool - 4.18.0-147,4.18.0-147,3.10.0-1062.4.1,4.18.0-147,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-80.15.1;kernel-rt-debug-kvm-debuginfo - 
4.18.0-147.rt24.93;kernel-rt-debug-core - 4.18.0-147.rt24.93;kernel-tools-libs - 4.18.0-147,4.18.0-147,3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-80.15.1,4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1;perf-debuginfo - 3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-1062.4.1;kernel-rt-kvm-debuginfo - 4.18.0-147.rt24.93;kernel-cross-headers - 4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-147;kernel-debug-debuginfo - 3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-80.15.1;kernel-debug - 4.18.0-80.15.1,4.18.0-147,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,3.10.0-957.54.1,3.10.0-957.54.1;kernel-devel - 3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,3.10.0-957.54.1,4.18.0-147;kernel - 3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1,4.18.0-147,4.18.0-80.15.1,4.18.0-80.15.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-147,3.10.0-1062.4.1,4.18.0-80.15.1,3.10.0-1062.4.1,4.18.0-147,4.18.0-147;bpftool-debuginfo - 4.18.0-80.15.1,3.10.0-1062.4.1;kernel-zfcpdump-core - 4.18.0-147;kernel-debug-core - 4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1;kernel-modules-extra - 4.18.0-147,4.18.0-80.15.1,4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-147;kernel-rt-debug-devel - 4.18.0-147.rt24.93,3.10.0-1062.4.1.rt56.1027;python-perf - 3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-957.54.1;kernel-core - 4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-147;kernel-rt-debug - 3.10.0-1062.4.1.rt56.1027,4.18.0-147.rt24.93;kernel-rt-devel - 4.18.0-147.rt24.93,3.10.0-1062.4.1.rt56.1027;kernel-debuginfo-common-ppc64 - 3.10.0-957.54.1,3.10.0-1062.4.1;python3-perf - 
4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1,4.18.0-80.15.1,4.18.0-147;kernel-tools - 4.18.0-80.15.1,4.18.0-147,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-1062.4.1,4.18.0-147,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,4.18.0-147,4.18.0-80.15.1;kernel-debug-modules - 4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1;kernel-rt-trace-kvm - 3.10.0-1062.4.1.rt56.1027;kernel-rt-debuginfo-common-x86_64 - 4.18.0-147.rt24.93;kernel-tools-libs-devel - 3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-957.54.1,3.10.0-957.54.1;kernel-modules - 4.18.0-147,4.18.0-80.15.1,4.18.0-147,4.18.0-147,4.18.0-147,4.18.0-80.15.1;kernel-tools-debuginfo - 3.10.0-957.54.1,4.18.0-80.15.1,3.10.0-1062.4.1,3.10.0-1062.4.1,3.10.0-957.54.1;kernel-rt-modules - 4.18.0-147.rt24.93;kernel-rt-doc - 3.10.0-1062.4.1.rt56.1027;kernel-rt-kvm - 3.10.0-1062.4.1.rt56.1027,4.18.0-147.rt24.93;python-perf-debuginfo - 3.10.0-957.54.1,3.10.0-1062.4.1,3.10.0-957.54.1,3.10.0-1062.4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve high detected in linuxlinux autoclosed cve high severity vulnerability vulnerable library linuxlinux apache software foundation asf library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details a flaw was found in the linux kernel a heap based buffer overflow in mwifiex uap parse tail ies function in drivers net wireless marvell mwifiex ie c might lead to memory corruption and possibly other consequences publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution kernel headers kernel rt trace kernel debuginfo common kernel rt kernel zfcpdump kernel rt debug modules extra perf debuginfo kernel rt modules extra kernel doc kernel rt core kernel rt debug debuginfo kernel abi whitelists kernel zfcpdump modules kernel rt trace devel kernel debug modules extra kernel rt debug kvm kernel bootwrapper kernel rt debuginfo kernel rt debug modules kernel zfcpdump devel perf kernel zfcpdump modules extra kernel debuginfo kernel debug devel bpftool kernel rt debug kvm debuginfo kernel rt debug core kernel tools libs perf debuginfo kernel rt kvm debuginfo kernel cross headers kernel debug debuginfo kernel debug kernel devel kernel bpftool debuginfo kernel zfcpdump core kernel debug core kernel modules extra kernel rt debug devel python perf kernel core kernel rt debug kernel rt devel kernel debuginfo common perf kernel tools kernel debug modules kernel rt trace kvm kernel rt debuginfo common kernel tools libs devel kernel modules 
kernel tools debuginfo kernel rt modules kernel rt doc kernel rt kvm python perf debuginfo step up your open source security game with whitesource | 0 |
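The mwifiex flaw in the record above is a heap overflow while parsing tail information elements (IEs): TLV records whose declared lengths were trusted without checking them against the buffer. As a hedged illustration of the bug class — not of the kernel fix itself, and with a function name invented for this sketch — here is a minimal TLV parser whose two bounds checks are exactly what such parsing code must not omit:

```python
def parse_ies(buf: bytes) -> list[tuple[int, bytes]]:
    """Parse IEs encoded as 1-byte type, 1-byte length, then `length` value bytes."""
    ies = []
    i = 0
    while i < len(buf):
        if i + 2 > len(buf):           # the 2-byte header must fit in the buffer
            raise ValueError("truncated IE header")
        ie_type, ie_len = buf[i], buf[i + 1]
        if i + 2 + ie_len > len(buf):  # the declared length must fit too
            raise ValueError("IE length exceeds buffer")
        ies.append((ie_type, buf[i + 2 : i + 2 + ie_len]))
        i += 2 + ie_len
    return ies

# Two well-formed IEs parse cleanly.
assert parse_ies(bytes([1, 2, 0xAA, 0xBB, 5, 1, 0xCC])) == [(1, b"\xaa\xbb"), (5, b"\xcc")]
```

Skipping the second check is what lets an attacker-controlled length byte drive a copy past the end of a heap buffer, which is the CVE's memory-corruption scenario.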
69,059 | 8,372,897,813 | IssuesEvent | 2018-10-05 08:40:00 | SpareBank1/designsystem | https://api.github.com/repos/SpareBank1/designsystem | closed | Grid vs spacing | :gem: enhancement :nail_care: design :raising_hand_woman: help wanted | [ffe-grid](https://github.com/SpareBank1/designsystem/tree/develop/packages/ffe-grid)[(-react)](https://github.com/SpareBank1/designsystem/tree/develop/packages/ffe-icons-react) comes with gutters, which makes it difficult to nest grids or use it in any other context than page layout. This has caused some issues in cases where it has been used for internal layout in components or sections. At the moment ffe-grid is the only alternative for creating layouts, and we need to figure out how to handle this in components.
One suggestion (originally intended to standardize spacing in general) is to remove all hardcoded spacing from the grid and other components, to be replaced by a set of predefined variables throughout the design system. In the case of ffe-grid, could removing the gutters altogether in favor of letting the contents of each column deal with spacing be a possible way forward?
Input wanted! | 1.0 | Grid vs spacing - [ffe-grid](https://github.com/SpareBank1/designsystem/tree/develop/packages/ffe-grid)[(-react)](https://github.com/SpareBank1/designsystem/tree/develop/packages/ffe-icons-react) comes with gutters, which makes it difficult to nest grids or use it in any other context than page layout. This has caused some issues in cases where it has been used for internal layout in components or sections. At the moment ffe-grid is the only alternative for creating layouts, and we need to figure out how to handle this in components.
One suggestion (originally intended to standardize spacing in general) is to remove all hardcoded spacing from the grid and other components, to be replaced by a set of predefined variables throughout the design system. In the case of ffe-grid, could removing the gutters altogether in favor of letting the contents of each column deal with spacing be a possible way forward?
Input wanted! | non_priority | grid vs spacing comes with gutters which makes it difficult to nest grids or use it in any other context than page layout this has caused some issues in cases where it has been used for internal layout in components or sections at the moment ffe grid is the only alternative for creating layouts and we need to figure out how to handle this in components one suggestion originally intended to standardize spacing in general is to remove all hardcoded spacing from the grid and other components to be replaced by a set of predefined variables throughout the design system in the case of ffe grid could removing the gutters altogether in favor of letting the contents of each column deal with spacing be a possible way forward input wanted | 0 |
931 | 3,004,212,272 | IssuesEvent | 2015-07-25 18:07:03 | elmsln/elmsln | https://api.github.com/repos/elmsln/elmsln | opened | Apply Pure-speed setup to VagrantFile | enhancement infrastructure scale / performance | CentOS installer that runs as part of Vagrant spin up (and one line installers) nets us Mysql 5.1 and PHP 5.3.3. While the VM config (and it being local memory) makes it fast, this isn't fast enough for me. @bradallenfisher 's been doing some work on an effort called Pure Speed and I was hoping we could work on getting those improvements into the CentOS one-line installer / Vagrant that's spun up.
Goals / steps in that, from Remi or wherever:
- [ ] Get php 5.6.x
- [ ] Get mysql 5.6.x
- [ ] Get apache 2.4.x
- [ ] Apply opcache tweaks to php out of the box to further optimize opcache (https://tideways.io/profiler/blog/fine-tune-your-opcache-configuration-to-avoid-caching-suprises)
- [ ] Automatically setup and configure Varnish
- [ ] See how fast things get while logged in w/ Authcache and ESIs
- [ ] SSL terminator OOTB would be nice but might be too new at the moment.
We want the speed mechanism OOTB to be bullet proof for deployments that reside on a single system which is currently all the PSU deployments. I want this to be able to hit as ridiculous a scale possible before getting into clustering #19 or load balancing / server scale. | 1.0 | Apply Pure-speed setup to VagrantFile - CentOS installer that runs as part of Vagrant spin up (and one line installers) nets us Mysql 5.1 and PHP 5.3.3. While the VM config (and it being local memory) makes it fast, this isn't fast enough for me. @bradallenfisher 's been doing some work on an effort called Pure Speed and I was hoping we could work on getting those improvements into the CentOS one-line installer / Vagrant that's spun up.
Goals / steps in that, from Remi or wherever:
- [ ] Get php 5.6.x
- [ ] Get mysql 5.6.x
- [ ] Get apache 2.4.x
- [ ] Apply opcache tweaks to php out of the box to further optimize opcache (https://tideways.io/profiler/blog/fine-tune-your-opcache-configuration-to-avoid-caching-suprises)
- [ ] Automatically setup and configure Varnish
- [ ] See how fast things get while logged in w/ Authcache and ESIs
- [ ] SSL terminator OOTB would be nice but might be too new at the moment.
We want the speed mechanism OOTB to be bullet proof for deployments that reside on a single system which is currently all the PSU deployments. I want this to be able to hit as ridiculous a scale possible before getting into clustering #19 or load balancing / server scale. | non_priority | apply pure speed setup to vagrantfile centos installer that runs as part of vagrant spin up and one line installers nets us mysql and php while the vm config and it being local memory makes it fast this isn t fast enough for me bradallenfisher s been doing some work on an effort called pure speed and i was hoping we could work on getting those improvements into the centos one line installer vagrant that s spun up goals steps in that from remi or where ever get php x get mysql x get apache x apply opcache tweaks to php out of the box to further optimize opcache automatically setup and configure varnish see how fast things get while logged in w authcache and esis ssl terminator ootb would be nice but might be too new at the moment we want the speed mechanism ootb to be bullet proof for deployments that reside on a single system which is currently all the psu deployments i want this to be able to hit as ridiculous a scale possible before getting into clustering or load balancing server scale | 0 |
95,225 | 27,412,911,721 | IssuesEvent | 2023-03-01 11:50:00 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | opened | Various: Project cmake wrongly uses $ENV{ZEPHYR_BASE} | bug area: Build System area: Samples | **Describe the bug**
There are projects in zephyr which are still using `$ENV{ZEPHYR_BASE}` - this should not be used for e.g. including files or directories.
**Environment (please complete the following information):**
- Commit SHA or Version used: 865ee855d11f76c764356a6fe9643abae58b8f20 | 1.0 | Various: Project cmake wrongly uses $ENV{ZEPHYR_BASE} - **Describe the bug**
There are projects in zephyr which are still using `$ENV{ZEPHYR_BASE}` - this should not be used for e.g. including files or directories.
**Environment (please complete the following information):**
- Commit SHA or Version used: 865ee855d11f76c764356a6fe9643abae58b8f20 | non_priority | various project cmake wrongly uses env zephyr base describe the bug there are projects in zephyr which are still using env zephyr base this should not be used for e g including files or directories environment please complete the following information commit sha or version used | 0 |
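One quick way to act on the Zephyr report above is to scan the tree for CMake files that still reference the environment variable. This is a hedged sketch — the helper name and file patterns are assumptions, not part of Zephyr's tooling — and the real fix in each hit is to use the `ZEPHYR_BASE` CMake variable or the build system's standard include mechanisms instead of `$ENV{ZEPHYR_BASE}`:

```python
import re
from pathlib import Path

# Matches the discouraged environment-variable form, e.g. $ENV{ZEPHYR_BASE}/...
ENV_ZEPHYR_BASE = re.compile(r"\$ENV\{ZEPHYR_BASE\}")

def find_env_zephyr_base(root: str) -> list[str]:
    """Return CMake files under `root` that still use $ENV{ZEPHYR_BASE}."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and (path.name == "CMakeLists.txt" or path.suffix == ".cmake"):
            if ENV_ZEPHYR_BASE.search(path.read_text(errors="ignore")):
                hits.append(str(path.relative_to(root)))
    return sorted(hits)
```

Note the pattern deliberately does not match the plain `${ZEPHYR_BASE}` variable form, which is the acceptable spelling.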
165,861 | 12,881,639,268 | IssuesEvent | 2020-07-12 13:06:57 | danielgtaylor/python-betterproto | https://api.github.com/repos/danielgtaylor/python-betterproto | closed | to_dict returns wrong enum fields when numbering is not consecutive | bug good first issue has test small | Protobuf spec file like this:
```
syntax = "proto3";
package message;
message TeraMessage {
int64 timestamp = 1;
enum MessageType {
MESSAGE_TYPE_UNKNOWN = 0;
MESSAGE_TYPE_ACTION_MESSAGE = 1;
// @exclude MESSAGE_TYPE_COMMAND_MESSAGE = 2; // DEPRECATED
MESSAGE_TYPE_CONFIG_MESSAGE = 3;
MESSAGE_TYPE_HEARTBEAT_MESSAGE = 4;
}
MessageType message_type = 5;
bytes message = 6;
}
```
Generates the following Python bindings:
```python
from dataclasses import dataclass
import betterproto
class TeraMessageMessageType(betterproto.Enum):
MESSAGE_TYPE_UNKNOWN = 0
MESSAGE_TYPE_ACTION_MESSAGE = 1
# NB: notice that 2 is missing
MESSAGE_TYPE_CONFIG_MESSAGE = 3
MESSAGE_TYPE_HEARTBEAT_MESSAGE = 4
@dataclass
class TeraMessage(betterproto.Message):
timestamp: int = betterproto.int64_field(1)
message_type: "TeraMessageMessageType" = betterproto.enum_field(5)
message: bytes = betterproto.bytes_field(6)
```
To reproduce the bug:
```
>>> from my.path.message import TeraMessage, TeraMessageMessageType
>>> message = TeraMessage(message_type=TeraMessageMessageType.MESSAGE_TYPE_CONFIG_MESSAGE)
>>> message.to_dict()
{'messageType': 'MESSAGE_TYPE_HEARTBEAT_MESSAGE'}
>>> message.to_json()
'{"messageType": "MESSAGE_TYPE_HEARTBEAT_MESSAGE"}'
>>> TeraMessage().parse(bytes(message)).message_type == TeraMessageMessageType.MESSAGE_TYPE_CONFIG_MESSAGE
True
```
| 1.0 | to_dict returns wrong enum fields when numbering is not consecutive - Protobuf spec file like this:
```
syntax = "proto3";
package message;
message TeraMessage {
int64 timestamp = 1;
enum MessageType {
MESSAGE_TYPE_UNKNOWN = 0;
MESSAGE_TYPE_ACTION_MESSAGE = 1;
// @exclude MESSAGE_TYPE_COMMAND_MESSAGE = 2; // DEPRECATED
MESSAGE_TYPE_CONFIG_MESSAGE = 3;
MESSAGE_TYPE_HEARTBEAT_MESSAGE = 4;
}
MessageType message_type = 5;
bytes message = 6;
}
```
Generates the following Python bindings:
```python
from dataclasses import dataclass
import betterproto
class TeraMessageMessageType(betterproto.Enum):
MESSAGE_TYPE_UNKNOWN = 0
MESSAGE_TYPE_ACTION_MESSAGE = 1
# NB: notice that 2 is missing
MESSAGE_TYPE_CONFIG_MESSAGE = 3
MESSAGE_TYPE_HEARTBEAT_MESSAGE = 4
@dataclass
class TeraMessage(betterproto.Message):
timestamp: int = betterproto.int64_field(1)
message_type: "TeraMessageMessageType" = betterproto.enum_field(5)
message: bytes = betterproto.bytes_field(6)
```
To reproduce the bug:
```
>>> from my.path.message import TeraMessage, TeraMessageMessageType
>>> message = TeraMessage(message_type=TeraMessageMessageType.MESSAGE_TYPE_CONFIG_MESSAGE)
>>> message.to_dict()
{'messageType': 'MESSAGE_TYPE_HEARTBEAT_MESSAGE'}
>>> message.to_json()
'{"messageType": "MESSAGE_TYPE_HEARTBEAT_MESSAGE"}'
>>> TeraMessage().parse(bytes(message)).message_type == TeraMessageMessageType.MESSAGE_TYPE_CONFIG_MESSAGE
True
```
``` | non_priority | to dict returns wrong enum fields when numbering is not consecutive protobuf spec file like this syntax package message message teramessage timestamp enum messagetype message type unknown message type action message exclude message type command message deprecated message type config message message type heartbeat message messagetype message type bytes message generates the following python bindings python from dataclasses import dataclass import betterproto class teramessagemessagetype betterproto enum message type unknown message type action message nb notice that is missing message type config message message type heartbeat message dataclass class teramessage betterproto message timestamp int betterproto field message type teramessagemessagetype betterproto enum field message bytes betterproto bytes field to reproduce the bug from my path message import teramessage teramessagemessagetype message teramessage message type teramessagemessagetype message type config message message to dict messagetype message type heartbeat message message to json messagetype message type heartbeat message teramessage parse bytes message message type teramessagemessagetype message type config message true | 0 |
89,761 | 15,837,528,906 | IssuesEvent | 2021-04-06 20:55:25 | TIBCOSoftware/tci-flogo | https://api.github.com/repos/TIBCOSoftware/tci-flogo | closed | CVE-2017-16113 (High) detected in parsejson-0.0.3.tgz - autoclosed | security vulnerability | ## CVE-2017-16113 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parsejson-0.0.3.tgz</b></p></summary>
<p>Method that parses a JSON string and returns a JSON object</p>
<p>Library home page: <a href="https://registry.npmjs.org/parsejson/-/parsejson-0.0.3.tgz">https://registry.npmjs.org/parsejson/-/parsejson-0.0.3.tgz</a></p>
<p>
Dependency Hierarchy:
- karma-1.5.0.tgz (Root Library)
- socket.io-1.7.3.tgz
- socket.io-client-1.7.3.tgz
- engine.io-client-1.8.3.tgz
- :x: **parsejson-0.0.3.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The parsejson module is vulnerable to regular expression denial of service when untrusted user input is passed into it to be parsed.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16113>CVE-2017-16113</a></p>
</p>
</details>
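For context on this vulnerability class: regular-expression denial of service usually comes from patterns with nested or overlapping quantifiers, where a long *almost*-matching input forces a backtracking engine to try exponentially many partitions before failing. The snippet below is a generic illustration of such a pattern shape in Python (it is not parsejson's actual regex):

```python
import re

# Classic catastrophic-backtracking shape: a quantifier nested inside
# another quantifier over the same characters.
EVIL = re.compile(r"^(a+)+$")

def matches(s):
    # Returns True if the whole string is a run of "a"s.
    return EVIL.match(s) is not None
```

On short inputs this is fast (`matches("aaaa")` is true, `matches("aab")` is false), but on input like `"a" * 30 + "b"` the engine must explore on the order of 2^30 ways to split the run of `a`s between the inner and outer `+` before reporting failure — long enough to stall a service when the input is attacker-controlled.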
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
| True | non_priority | 0
416,415 | 28,081,270,940 | IssuesEvent | 2023-03-30 06:34:57 | InnopolisUni/innofw | https://api.github.com/repos/InnopolisUni/innofw | closed | 💡 title | documentation | ### Start Date
_No response_
### Implementation PR
_No response_
### Reference Issues
_No response_
### Summary
Just for test
### Motivation
Just for test
### Basic Example
Just for test
### Drawbacks
_No response_
### Unresolved questions
_No response_ | 1.0 | non_priority | 0
15,715 | 3,333,295,230 | IssuesEvent | 2015-11-12 00:23:01 | palantir/plottable | https://api.github.com/repos/palantir/plottable | opened | [SelectionBoxLayer] Add Generics | API design | `SelectionBoxLayer` should be updated to use a generics structure similar to `GuideLineLayer`. | 1.0 | non_priority | 0
264,697 | 23,133,967,653 | IssuesEvent | 2022-07-28 12:56:06 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | DISABLED test_delayed_reduce_scatter_offload_true_shard_grad_op (__main__.TestParityWithDDP) | module: flaky-tests skipped module: unknown | Platforms: linux, rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_delayed_reduce_scatter_offload_true_shard_grad_op&suite=TestParityWithDDP&file=distributed/fsdp/test_fsdp_core.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7554131571).
Over the past 3 hours, it has been determined flaky in 1 workflow(s) with 1 red and 1 green. | 1.0 | non_priority | 0
218,857 | 17,026,908,903 | IssuesEvent | 2021-07-03 18:18:41 | Realm667/WolfenDoom | https://api.github.com/repos/Realm667/WolfenDoom | closed | [3.1] Developer comments significantly lower frame rate | help wanted language playtesting zscript | Dead serious: when a devcomment shows on screen, 35 FPS (I have it capped) drops to 27, sometimes even 24, and again becomes as normal when the comment disappears (you can test it easily on TEST_DLG, or INTERMAP, or C1M2, ...) Noticed when testing https://github.com/Realm667/WolfenDoom/commit/4eb07eb134868d2b07ddeca80eaef43e034c6adb (at first I thought it could be because of my changes, but then I reverted them locally and experienced exactly the same problem on older commit).
Does not seem to happen with other message types. | 1.0 | non_priority | 0
368,883 | 25,812,355,918 | IssuesEvent | 2022-12-11 23:57:05 | NixOS/nix | https://api.github.com/repos/NixOS/nix | closed | Documenting uninstall for multi-user on macOS | documentation installer macos | This is the script I've used for testing:
```sh
#!/bin/sh
set -x
if [ -f /Library/LaunchDaemons/org.nixos.nix-daemon.plist ]; then
sudo launchctl unload /Library/LaunchDaemons/org.nixos.nix-daemon.plist
sudo rm /Library/LaunchDaemons/org.nixos.nix-daemon.plist
fi
if [ -f /etc/profile.backup-before-nix ]; then
sudo mv /etc/profile.backup-before-nix /etc/profile
fi
if [ -f /etc/bashrc.backup-before-nix ]; then
sudo mv /etc/bashrc.backup-before-nix /etc/bashrc
fi
if [ -f /etc/zshrc.backup-before-nix ]; then
sudo mv /etc/zshrc.backup-before-nix /etc/zshrc
fi
for i in $(seq 1 $(sysctl -n hw.ncpu)); do
sudo /usr/bin/dscl . -delete "/Users/nixbld$i"
done
sudo /usr/bin/dscl . -delete "/Groups/nixbld"
``` | 1.0 | non_priority | 0
332,944 | 24,355,768,874 | IssuesEvent | 2022-10-03 07:17:43 | scalameta/nvim-metals | https://api.github.com/repos/scalameta/nvim-metals | closed | Update docs about F | documentation wait for metals release | ### Task
Now that https://github.com/neovim/neovim/pull/18251 has been merged in, we no longer need to instruct users to remove F, since they will see the messages. Do this after the next release. | 1.0 | non_priority | 0
122,784 | 26,163,616,235 | IssuesEvent | 2023-01-01 00:38:45 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [TRAINEE] [MOBILE] [REMOTO] Trainne em Desenvolvimento Mobile na [KINVO] | TRAINEE JAVASCRIPT MOBILE REMOTO TDD INGLÊS CLEAN CODE METODOLOGIAS ÁGEIS HELP WANTED feature_request Stale | <!--
==================================================
PLEASE ONLY POST IF THE JOB IS BASED IN SALVADOR OR NEARBY CITIES!
Use: "Desenvolvedor Front-end" instead of
"Front-End Developer" \o/
Example: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Job description
- You will contribute daily by helping our team develop our mobile platform: implementing new features, reviewing pull requests, and taking part in planning meetings where we discuss development priorities.
- Activities/responsibilities
- Develop with a focus on functionality, maintainability, and speed, along with testing and quality practices;
- Ensure that all technical solutions are aligned with the business strategy;
- Understand the impact that the solutions the team builds have on the end user;
- Help the team design solutions with an evolutionary architecture, and constantly investigate new technologies and ways of working;
- Collaborate on growing the team's technical skills and on continuous-improvement efforts that affect several teams;
- Work in a collaborative environment where pairing, feedback, and motivation to improve are common across all teams.
## Location
- Remote
## Benefits
- Health allowance
## Requirements
**Required:**
- Knowledge of software development;
- Concern for technical excellence and good development practices (TDD, architecture, code design, clean code, etc.)
- Familiarity with agile methodologies;
- Knowledge of JavaScript;
- Structured communication based on facts and data;
- Pragmatism in problem solving;
- Ability to read English;
**Nice to have:**
- Knowledge of MobX / Redux;
- Knowledge of investments;
- Active participation in tech communities (talks, meetups, events, tech talks, blogs) is a plus, as a way of spreading knowledge;
## Hiring
- To be agreed
## About the company
- Kinvo is an investment fintech created in 2017 with the mission of empowering investors. We build an app that lets investors register their investments, regardless of which financial institution they use, consolidate that data into an investment portfolio, and from there extract metrics about the health of their investments.
## How to apply
- [Click here to apply](https://www.linkedin.com/jobs/view/2163940819/?alternateChannel=search&refId=750ab1e7-9c52-4f55-b566-a8f98831e4d4&trackingId=Z39cpFQo%2BlD4G3XsH5pLKQ%3D%3D&trk=flagship3_search_srp_jobs)
| 1.0 | non_priority | 0
385,633 | 26,646,550,237 | IssuesEvent | 2023-01-25 10:24:40 | MarcoBetschart/RunningTracker | https://api.github.com/repos/MarcoBetschart/RunningTracker | closed | Documentation | documentation | ## User Story
As the developer of the app,
I want the app to be documented,
so that I can also pass my work on.
## Acceptance Criteria
Given a document,
when I scroll through the document,
I get a good overview of the project | 1.0 | non_priority | 0
100,614 | 11,200,416,902 | IssuesEvent | 2020-01-03 21:42:32 | rdmtc/RedMatic | https://api.github.com/repos/rdmtc/RedMatic | closed | Anpassungen an CCU Firmware 3.41 | ✋help wanted 📖documentation | * [x] Connect directly via loopback to the interface processes, bypassing the lighttpd reverse proxy (requires adjusting the ports used in node-red-contrib-ccu; this should happen automatically when a firmware >= 3.41 is detected).
* [ ] Extend the documentation with instructions on which ports may need to be opened (e.g. for RedMatic-HomeKit) | 1.0 | non_priority | 0
298,990 | 25,874,181,899 | IssuesEvent | 2022-12-14 06:22:10 | Shayokh144/ic-survey-swiftui-combine | https://api.github.com/repos/Shayokh144/ic-survey-swiftui-combine | opened | [Integrate] As a user, I can forget my password | feature Integration authentication unit test | ## Why
When a user happens to forget his/her password, the app should offer a way to recover it through the Forget Password feature.
So, in the Login screen, we will display the Forgot Password button to begin the flow to reset his/her password.
## Acceptance Criteria
### In login screen
- when the user taps `forget password`, take them to the `forget password` screen
### In Forget Password screen
- When the Reset button is tapped, call the reset password request with the email in the Email text input.
- If the request is successful,
- Clear the Email text input
- A successful in-app notification will be implemented in [another issue](https://www.pivotaltracker.com/story/show/178279558)
- Otherwise, display a modal alert dialog with the returned error message and an OK button
- [ ] Write unit test | 1.0 | non_priority | 0
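The success/failure branching in the acceptance criteria above can be modeled language-agnostically. The Python sketch below is a hypothetical model of that view-state logic (the real app is SwiftUI/Combine; all names here are illustrative only, not the project's API):

```python
def handle_reset_response(state, ok, error_message=None):
    """Apply the acceptance criteria above to a minimal view-state dict."""
    if ok:
        state["email"] = ""                     # clear the Email text input
        state["alert"] = None                   # success notification handled elsewhere
    else:
        state["alert"] = (error_message, "OK")  # modal dialog with an OK button
    return state
```

For example, a failed request such as `handle_reset_response({"email": "me@example.com", "alert": None}, ok=False, error_message="Email not found")` leaves the email field untouched and sets an `("Email not found", "OK")` alert.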
88,225 | 11,046,340,230 | IssuesEvent | 2019-12-09 16:41:04 | zooniverse/Panoptes-Front-End | https://api.github.com/repos/zooniverse/Panoptes-Front-End | closed | Remove 'Gill Sans' and 'Open Sans' from font list | design help wanted | https://github.com/zooniverse/Panoptes-Front-End/blob/5f3109368ccdb5604c05880a801a9c411cc95ec4/css/typography.styl#L1-L3
These aren't needed – only Karla and Arial. | 1.0 | non_priority | 0
236,985 | 18,149,328,438 | IssuesEvent | 2021-09-26 02:09:48 | AKPR2007/W10-in-android_termux | https://api.github.com/repos/AKPR2007/W10-in-android_termux | closed | File comes in 7zip(7z) format | documentation enhancement | The boot image file comes in 7zip (7z) format. Not a big deal, but a direct image file would be nicer. I will fix it soon, and also make image selection automatic by using wget. Does anyone know a good file-sharing site with no expiration? Don't suggest MediaFire or MEGA, because MediaFire has an 8m expiration and is also not available in most countries, and MEGA has a bad bandwidth limit. | 1.0 | non_priority | 0
293,203 | 22,047,422,444 | IssuesEvent | 2022-05-30 04:27:21 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | XYPairs doesn't display correct X Axis label in documentation | documentation | XYPairs are written to documentation using different methods compared to the GUI.
Could affect documentation for:
- LinearInterpolationFunction
- HourlyInterpolation
- SoilTemperatureWeightedFunction
- WeightedTemperatureFunction | 1.0 | non_priority | 0
124,182 | 12,226,714,298 | IssuesEvent | 2020-05-03 12:13:40 | Sundin18/backend | https://api.github.com/repos/Sundin18/backend | closed | Replikace databází | documentation | Hi guys, after a longer stretch of development I have finally got DB replication working.
It works as follows: if something is written to, or changed in, the DB on form.rp.szdc.cz (master), it is automatically replicated to 10.99.164.38 (slave).
Everything happens at the level of binary logs, so it is independent of the šedivák application. Of course, it also propagates into the application. As a trial I created 2 events and everything went through.
Feel free to test it yourselves as well.
I assume we will also switch it on right away during the cutover night shift.
Ještě bych rád rozeběhnul pro replikaci Jail (https://en.wikipedia.org/wiki/FreeBSD_jail), což je oddělené prostředí od kernelu, ale to asi až později. Zde jsem ještě nezačal načítat dokumentaci, všechny tyto věci jsou dosti časově náročné na nastudování a následné otestování a implementaci. Jsou to desítky hodin a do přechodové noční bych rád udělal jiné věci. Ten Jail je něco jako virtualizované prostředí uprostřed systému, takže bych mohl umístit do naší virtuálky jail jenom s DB, kam se nám bude replikovat ostrá databáze. | 1.0 | Replikace databází - Ahoj kluci, tak se mi konečně po delším časovém vývoji podařilo uchodit replikaci DB.
Funguje to následovně pokud se něco zapíše, případně změní v DB na form.rp.szdc.cz (master), tak se to automaticky zreplikuje na 10.99.164.38 (slave).
Vše probíhá na úrovni binárních logů, takže to je nezávislé na aplikaci šediváku. Samozřejmě se nám to propíše i do aplikace. Zkusil jsem si cvičně 2 události a vše proběhlo.
Případně si to otestujte také.
Předpokládám, že to také pustíme rovnou při přechodové noční.
Ještě bych rád rozeběhnul pro replikaci Jail (https://en.wikipedia.org/wiki/FreeBSD_jail), což je oddělené prostředí od kernelu, ale to asi až později. Zde jsem ještě nezačal načítat dokumentaci, všechny tyto věci jsou dosti časově náročné na nastudování a následné otestování a implementaci. Jsou to desítky hodin a do přechodové noční bych rád udělal jiné věci. Ten Jail je něco jako virtualizované prostředí uprostřed systému, takže bych mohl umístit do naší virtuálky jail jenom s DB, kam se nám bude replikovat ostrá databáze. | non_priority | replikace databází ahoj kluci tak se mi konečně po delším časovém vývoji podařilo uchodit replikaci db funguje to následovně pokud se něco zapíše případně změní v db na form rp szdc cz master tak se to automaticky zreplikuje na slave vše probíhá na úrovni binárních logů takže to je nezávislé na aplikaci šediváku samozřejmě se nám to propíše i do aplikace zkusil jsem si cvičně události a vše proběhlo případně si to otestujte také předpokládám že to také pustíme rovnou při přechodové noční ještě bych rád rozeběhnul pro replikaci jail což je oddělené prostředí od kernelu ale to asi až později zde jsem ještě nezačal načítat dokumentaci všechny tyto věci jsou dosti časově náročné na nastudování a následné otestování a implementaci jsou to desítky hodin a do přechodové noční bych rád udělal jiné věci ten jail je něco jako virtualizované prostředí uprostřed systému takže bych mohl umístit do naší virtuálky jail jenom s db kam se nám bude replikovat ostrá databáze | 0 |
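The setup described in this record is standard MySQL master-slave replication driven by binary logs. As a hedged sketch of how its health could be monitored: the column names below are the ones MySQL's `SHOW SLAVE STATUS` actually reports, but the lag threshold and the example rows are illustrative assumptions, not values from the issue.

```python
def replication_healthy(status, max_lag_seconds=60):
    """Decide whether a replica is healthy from a SHOW SLAVE STATUS row.

    `status` is a dict mapping column names to values, as a DB-API
    cursor with a dictionary row factory would return it.
    """
    io_ok = status.get("Slave_IO_Running") == "Yes"
    sql_ok = status.get("Slave_SQL_Running") == "Yes"
    lag = status.get("Seconds_Behind_Master")
    # Seconds_Behind_Master is NULL (None here) when replication is broken.
    lag_ok = lag is not None and lag <= max_lag_seconds
    return io_ok and sql_ok and lag_ok

# Example rows (assumed values for illustration):
healthy = {"Slave_IO_Running": "Yes", "Slave_SQL_Running": "Yes",
           "Seconds_Behind_Master": 3}
broken = {"Slave_IO_Running": "Yes", "Slave_SQL_Running": "No",
          "Seconds_Behind_Master": None}
```

A cron job on the slave could run such a check and alert when it returns `False`.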
218,900 | 17,028,925,432 | IssuesEvent | 2021-07-04 06:19:04 | steveseguin/vdo.ninja | https://api.github.com/repos/steveseguin/vdo.ninja | closed | transfers not working all the time? | Needs Testing | thefluffytoucan 11/17/2020
Had 2 cases of transfers not working on beta just now. Might have been caching-related as I can't reproduce anymore. Here's what I did:
- open fresh chrome instance
- join 3 guests to a beta room
- arm two transfer buttons and transfer out to second room
- enter room name, click ok
- nothing happened
- tried again: transfer succeeded
then restarted chrome, same thing happened again
then cleared the cache, now I can't make it happen anymore | 1.0 | transfers not working all the time? - thefluffytoucan 11/17/2020
Had 2 cases of transfers not working on beta just now. Might have been caching-related as I can't reproduce anymore. Here's what I did:
- open fresh chrome instance
- join 3 guests to a beta room
- arm two transfer buttons and transfer out to second room
- enter room name, click ok
- nothing happened
- tried again: transfer succeeded
then restarted chrome, same thing happened again
then cleared the cache, now I can't make it happen anymore | non_priority | transfers not working all the time had cases of transfers not working on beta just now might have been caching related as i can t reproduce anymore here s what i did open fresh chrome instance join guests to a beta room arm two transfer buttons and transfer out to second room enter room name click ok nothing happened tried again transfer succeeded then restarted chrome same thing happened again then cleared the cache now i can t make it happen anymore | 0 |
26,956 | 4,031,177,885 | IssuesEvent | 2016-05-18 16:19:25 | eregs/notice-and-comment | https://api.github.com/repos/eregs/notice-and-comment | opened | Header and subheader CSS tweaks | design polish | Here are a couple of small CSS polish to the header and subheader.
#### Header
- [ ] Take out "partnership between." @donjo and I have talked about this. That adds some clutter up around the tabs, we should keep that language in the footer on the homepage, but let's clean it up in the header. Especially since we are also taking about "About" in #297.

#### Subheader
**Wayfinding**
- [ ] The wayfinding citation shouldn't be italicized; it feels "shaky" that way, which we don't want. Can it just be Source Sans Pro Regular?
- [ ] Also, it feels indented from the main content area. Can we line it up (left justify) with title on the page, or in the case below the "I. The first section." _This may be related to the missing "subpart" part of this header?_

**Subheader size**
- [ ] "Actions:" really points this part out because it feels squished. Can we either vertically center all the text "Section XX," "Read the proposal," and "Write a comment" in the subheader? Or we could make the subheader a few pixels taller so the bottom padding matches the top padding. Either way works for me. | 1.0 | Header and subheader CSS tweaks - Here are a couple of small CSS polish to the header and subheader.
#### Header
- [ ] Take out "partnership between." @donjo and I have talked about this. That adds some clutter up around the tabs, we should keep that language in the footer on the homepage, but let's clean it up in the header. Especially since we are also taking about "About" in #297.

#### Subheader
**Wayfinding**
- [ ] The wayfinding citation shouldn't be italicized; it feels "shaky" that way, which we don't want. Can it just be Source Sans Pro Regular?
- [ ] Also, it feels indented from the main content area. Can we line it up (left justify) with title on the page, or in the case below the "I. The first section." _This may be related to the missing "subpart" part of this header?_

**Subheader size**
- [ ] "Actions:" really points this part out because it feels squished. Can we either vertically center all the text "Section XX," "Read the proposal," and "Write a comment" in the subheader? Or we could make the subheader a few pixels taller so the bottom padding matches the top padding. Either way works for me. | non_priority | header and subheader css tweaks here are a couple of small css polish to the header and subheader header take out partnership between donjo and i have talked about this that adds some clutter up around the tabs we should keep that language in the footer on the homepage but let s clean it up in the header especially since we are also taking about about in subheader wayfinding the wayfinding citation shouldn t be italicized it feels shaky that way which we don t want can it just be source sans pro regular also it feels indented from the main content area can we line it up left justify with title on the page or in the case below the i the first section this may be related to the missing subpart part of this header subheader size actions really points this part out because it feels squished can we either vertically center all the text section xx read the proposal and write a comment in the subheader or we could make the subheader a few pixels taller so the bottom padding matches the top padding either way works for me | 0 |
117,237 | 25,077,996,310 | IssuesEvent | 2022-11-07 16:53:14 | GoogleForCreators/web-stories-wp | https://api.github.com/repos/GoogleForCreators/web-stories-wp | closed | TypeScript: Convert `story-editor` config provider | Type: Code Quality Pod: Prometheus Package: Story Editor | <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Task Description
<!-- A clear and concise description of what this task is about. -->
See also #12579
Some ideas can be taken from #12553
`useConfig()` is used a lot in `wp-story-editor`, properly typing it would help unlock converting that package.
| 1.0 | TypeScript: Convert `story-editor` config provider - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Task Description
<!-- A clear and concise description of what this task is about. -->
See also #12579
Some ideas can be taken from #12553
`useConfig()` is used a lot in `wp-story-editor`, properly typing it would help unlock converting that package.
| non_priority | typescript convert story editor config provider task description see also some ideas can be taken from useconfig is used a lot in wp story editor properly typing it would help unlock converting that package | 0 |
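The typing work requested in the record above is TypeScript, but the pattern is language-agnostic: declare the config schema once and type the accessor against it, so every `useConfig()` call site gets checked field access. A sketch of that pattern in Python's `TypedDict` terms (all field names here are invented for illustration; the real shape lives in the story-editor package):

```python
from typing import TypedDict

class EditorConfig(TypedDict):
    api_root: str
    can_manage_settings: bool
    max_upload_bytes: int

_CONFIG: EditorConfig = {
    "api_root": "/web-stories/v1",
    "can_manage_settings": True,
    "max_upload_bytes": 10_485_760,
}

def use_config() -> EditorConfig:
    """Typed accessor: a type checker now flags misspelled config keys."""
    return _CONFIG
```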
147,785 | 11,807,400,639 | IssuesEvent | 2020-03-19 11:21:44 | berenslab/MorphoPy | https://api.github.com/repos/berenslab/MorphoPy | closed | create unit tests for read_swc(filepath, unit, voxelsize) | unittests | proposed tests
1. function returns three arguments
2. return arguments have the right format. Which format would that be?
| 1.0 | create unit tests for read_swc(filepath, unit, voxelsize) - proposed tests
1. function returns three arguments
2. return arguments have the right format. Which format would that be?
| non_priority | create unit tests for read swc filepath unit voxelsize proposed tests function returns three arguments return arguments have the right format which format would that be | 0 |
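The two proposed tests translate almost directly into code. The stand-in `read_swc` below fakes the real MorphoPy reader (the issue leaves the exact return formats open, so the tuple contents here are assumptions); only the shape of the tests matters:

```python
def read_swc(filepath, unit, voxelsize):
    # Stand-in for the real reader; returns (nodes, edges, metadata).
    nodes = [(1, 1, 0.0, 0.0, 0.0, 1.0, -1)]   # one SWC sample line
    edges = [(1, -1)]
    metadata = {"unit": unit, "voxelsize": voxelsize}
    return nodes, edges, metadata

def test_returns_three_arguments():
    result = read_swc("cell.swc", "um", 1.0)
    assert isinstance(result, tuple) and len(result) == 3

def test_return_formats():
    nodes, edges, metadata = read_swc("cell.swc", "um", 1.0)
    assert isinstance(nodes, list)
    assert isinstance(edges, list)
    assert isinstance(metadata, dict)
```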
62,371 | 6,795,010,562 | IssuesEvent | 2017-11-01 14:22:53 | ChadGoymer/githapi | https://api.github.com/repos/ChadGoymer/githapi | closed | Fix broken tests | test | There are a few broken tests:
- gh_commit and gh_tag now return an extra list item called "verification"
- Update the gh_get test due to change in README
| 1.0 | Fix broken tests - There are a few broken tests:
- gh_commit and gh_tag now return an extra list item called "verification"
- Update the gh_get test due to change in README
| non_priority | fix broken tests there are a few broken tests gh commit and gh tag now return an extra list item called verification update the gh get test due to change in readme | 0 |
3,497 | 2,869,827,481 | IssuesEvent | 2015-06-06 15:26:38 | eldipa/ConcuDebug | https://api.github.com/repos/eldipa/ConcuDebug | closed | Support unsubscribe | code this! enhancement | Currently the publish-subscribe system only supports the basic operations.
The API needs to be extended so that any callback can be unsubscribed, both in the Python API and in the JavaScript one, as well as in the notifier. | 1.0 | Support unsubscribe - Currently the publish-subscribe system only supports the basic operations.
The API needs to be extended so that any callback can be unsubscribed, both in the Python API and in the JavaScript one, as well as in the notifier. | non_priority | support unsubscribe currently the publish subscribe system only supports the basic operations the api needs to be extended so that any callback can be unsubscribed both in the python api and in the javascript one as well as in the notifier | 0
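A minimal publish-subscribe core with the requested unsubscribe operation could look like the sketch below (a generic illustration, not ConcuDebug's actual API):

```python
class PubSub:
    def __init__(self):
        self._subs = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def unsubscribe(self, topic, callback):
        # Remove one registration of `callback`; ignore unknown callbacks.
        try:
            self._subs.get(topic, []).remove(callback)
        except ValueError:
            pass

    def publish(self, topic, message):
        for callback in list(self._subs.get(topic, [])):
            callback(message)

# A callback stops receiving events once unsubscribed.
bus = PubSub()
received = []
bus.subscribe("events", received.append)
bus.publish("events", "first")
bus.unsubscribe("events", received.append)
bus.publish("events", "second")
# received is now ["first"]
```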
163,422 | 20,363,751,246 | IssuesEvent | 2022-02-21 01:23:29 | Guillerbr/api-laravel-auth | https://api.github.com/repos/Guillerbr/api-laravel-auth | opened | CVE-2021-3664 (Medium) detected in url-parse-1.4.7.tgz | security vulnerability | ## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /api-laravel-auth/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.0.16.tgz (Root Library)
- webpack-dev-server-3.7.2.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution (url-parse): 1.5.2</p>
<p>Direct dependency fix Resolution (laravel-mix): 4.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3664 (Medium) detected in url-parse-1.4.7.tgz - ## CVE-2021-3664 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>url-parse-1.4.7.tgz</b></p></summary>
<p>Small footprint URL parser that works seamlessly across Node.js and browser environments</p>
<p>Library home page: <a href="https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz">https://registry.npmjs.org/url-parse/-/url-parse-1.4.7.tgz</a></p>
<p>Path to dependency file: /api-laravel-auth/package.json</p>
<p>Path to vulnerable library: /node_modules/url-parse/package.json</p>
<p>
Dependency Hierarchy:
- laravel-mix-4.0.16.tgz (Root Library)
- webpack-dev-server-3.7.2.tgz
- sockjs-client-1.3.0.tgz
- :x: **url-parse-1.4.7.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
url-parse is vulnerable to URL Redirection to Untrusted Site
<p>Publish Date: 2021-07-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3664>CVE-2021-3664</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-3664</a></p>
<p>Release Date: 2021-07-26</p>
<p>Fix Resolution (url-parse): 1.5.2</p>
<p>Direct dependency fix Resolution (laravel-mix): 4.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | cve medium detected in url parse tgz cve medium severity vulnerability vulnerable library url parse tgz small footprint url parser that works seamlessly across node js and browser environments library home page a href path to dependency file api laravel auth package json path to vulnerable library node modules url parse package json dependency hierarchy laravel mix tgz root library webpack dev server tgz sockjs client tgz x url parse tgz vulnerable library vulnerability details url parse is vulnerable to url redirection to untrusted site publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution url parse direct dependency fix resolution laravel mix step up your open source security game with whitesource | 0 |
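CVE-2021-3664 is an open-redirect class issue: `url-parse` could mis-parse crafted URLs, so host checks built on it could be bypassed. Independent of the library fix, redirect targets can be validated against an allow-list; a sketch using Python's standard `urllib.parse` (the allowed-hosts set is an assumption for illustration):

```python
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"example.com", "www.example.com"}  # assumed allow-list

def is_safe_redirect(url):
    """Allow only same-site absolute URLs or plain relative paths."""
    parts = urlsplit(url)
    if parts.scheme in ("http", "https") and parts.hostname in ALLOWED_HOSTS:
        return True
    # A plain relative path has neither a scheme nor an authority part;
    # scheme-relative URLs like //evil.com carry a netloc and are rejected.
    return parts.scheme == "" and parts.netloc == ""
```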
18,849 | 10,242,226,167 | IssuesEvent | 2019-08-20 03:57:57 | NuGet/NuGetGallery | https://api.github.com/repos/NuGet/NuGetGallery | closed | OData $batch appears slower than it should be | Area: Gallery UI Area: Performance Area: V2 Feed | Executing an odata $batch of FindPackageById calls on a fast network appears to perform at about the same speed as executing the discrete calls from the client sequentially.
However, execute the calls concurrently from the client and you get a big jump in performance - up to 3 times quicker. (Note the only additional work the client is doing is building and parsing the actual batch MIME packet.)
This indicates the core gallery code and SQL database are capable of significantly better performance with the batch, and the gallery is serializing the work in the batch.
 | True | OData $batch appears slower than it should be - Executing an odata $batch of FindPackageById calls on a fast network appears to perform at about the same speed as executing the discrete calls from the client sequentially.
However, execute the calls concurrently from the client and you get a big jump in performance - up to 3 times quicker. (Note the only additional work the client is doing is building and parsing the actual batch MIME packet.)
This indicates the core gallery code and SQL database are capable of significantly better performance with the batch, and the gallery is serializing the work in the batch.
 | non_priority | odata batch appears slower than it should be executing an odata batch of findpackagebyid calls on a fast network appears to perform at about the same speed as executing the discrete calls from the client sequentially however execute the calls concurrently from the client and you get a big jump in performance up to times quicker note the only additional work the client is doing is building and parsing the actual batch mime packet this indicates the core gallery code and sql database are capable of significantly better performance with the batch and the gallery is serializing the work in the batch | 0
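The effect described in this record is easy to reproduce in miniature: independent calls issued concurrently finish in roughly the time of one call, while serialized execution pays for all of them. A sketch with a sleep standing in for the FindPackageById round trip (the 0.05 s latency is an arbitrary assumption):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def find_package_by_id(package_id, latency=0.05):
    time.sleep(latency)  # stand-in for one network round trip
    return {"id": package_id}

ids = ["A", "B", "C", "D"]

start = time.perf_counter()
sequential = [find_package_by_id(i) for i in ids]
sequential_s = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(ids)) as pool:
    concurrent = list(pool.map(find_package_by_id, ids))
concurrent_s = time.perf_counter() - start
# sequential_s is about 4 x latency; concurrent_s is about 1 x latency
```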
146,371 | 23,055,055,684 | IssuesEvent | 2022-07-25 03:21:06 | TravelMockPlanner/TravelManagerPlanner | https://api.github.com/repos/TravelMockPlanner/TravelManagerPlanner | closed | Implement DestiSearchView business logic | feat backend design | ## Feature description
When the user clicks a destination, load recommendations based on that data
## Completion criteria
- [x] Show destinations based on the user's input
- [x] ViewModel and model work
- [x] Show recommendations in the table view after the API call
 | 1.0 | Implement DestiSearchView business logic - ## Feature description
When the user clicks a destination, load recommendations based on that data
## Completion criteria
- [x] Show destinations based on the user's input
- [x] ViewModel and model work
- [x] Show recommendations in the table view after the API call
 | non_priority | implement destisearchview business logic feature description when the user clicks a destination load recommendations based on that data completion criteria show destinations based on the user s input viewmodel and model work show recommendations in the table view after the api call | 0
299,404 | 25,901,309,543 | IssuesEvent | 2022-12-15 06:04:28 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | reopened | [Flaky Test] should select by dragging into nested block | [Status] Stale [Type] Flaky Test | <!-- __META_DATA__:{} -->
**Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.**
## Test title
should select by dragging into nested block
## Test path
`specs/editor/various/multi-block-selection.test.js`
## Errors
<!-- __TEST_RESULTS_LIST__ -->
<!-- __TEST_RESULT__ --><time datetime="2022-10-04T17:05:53.227Z"><code>[2022-10-04T17:05:53.227Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3183894413"><code>rnmobile/upgrade-android-api-31</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><details>
<summary>
<time datetime="2022-12-15T06:04:27.985Z"><code>[2022-12-15T06:04:27.985Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3701377036"><code>add/flaky-tests-comment</code></a>.
</summary>
```
● Multi-block selection › should select by dragging into nested block
expect(received).toEqual(expected) // deep equality
Expected: [1, 2]
Received: 3
428 | await page.mouse.up();
429 |
> 430 | expect( await getSelectedFlatIndices() ).toEqual( [ 1, 2 ] );
| ^
431 | } );
432 |
433 | it( 'should cut and paste', async () => {
at Object.<anonymous> (specs/editor/various/multi-block-selection.test.js:430:44)
at runMicrotasks (<anonymous>)
```
</details><!-- /__TEST_RESULT__ -->
<!-- /__TEST_RESULTS_LIST__ -->
| 1.0 | [Flaky Test] should select by dragging into nested block - <!-- __META_DATA__:{} -->
**Flaky test detected. This is an auto-generated issue by GitHub Actions. Please do NOT edit this manually.**
## Test title
should select by dragging into nested block
## Test path
`specs/editor/various/multi-block-selection.test.js`
## Errors
<!-- __TEST_RESULTS_LIST__ -->
<!-- __TEST_RESULT__ --><time datetime="2022-10-04T17:05:53.227Z"><code>[2022-10-04T17:05:53.227Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3183894413"><code>rnmobile/upgrade-android-api-31</code></a>.<!-- /__TEST_RESULT__ -->
<br/>
<!-- __TEST_RESULT__ --><details>
<summary>
<time datetime="2022-12-15T06:04:27.985Z"><code>[2022-12-15T06:04:27.985Z]</code></time> Test passed after 1 failed attempt on <a href="https://github.com/WordPress/gutenberg/actions/runs/3701377036"><code>add/flaky-tests-comment</code></a>.
</summary>
```
● Multi-block selection › should select by dragging into nested block
expect(received).toEqual(expected) // deep equality
Expected: [1, 2]
Received: 3
428 | await page.mouse.up();
429 |
> 430 | expect( await getSelectedFlatIndices() ).toEqual( [ 1, 2 ] );
| ^
431 | } );
432 |
433 | it( 'should cut and paste', async () => {
at Object.<anonymous> (specs/editor/various/multi-block-selection.test.js:430:44)
at runMicrotasks (<anonymous>)
```
</details><!-- /__TEST_RESULT__ -->
<!-- /__TEST_RESULTS_LIST__ -->
| non_priority | should select by dragging into nested block flaky test detected this is an auto generated issue by github actions please do not edit this manually test title should select by dragging into nested block test path specs editor various multi block selection test js errors test passed after failed attempt on test passed after failed attempt on a href ● multi block selection › should select by dragging into nested block expect received toequal expected deep equality expected received await page mouse up expect await getselectedflatindices toequal it should cut and paste async at object specs editor various multi block selection test js at runmicrotasks | 0 |
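Both results in the record above read "passed after 1 failed attempt", i.e. the harness reruns flaky tests before reporting. The shape of such a retry wrapper, as a generic sketch (the project's real mechanism is Jest's retry support, not this code):

```python
def run_with_retries(test_fn, attempts=2):
    """Run test_fn until it passes or attempts run out.

    Returns (passed, failures_before_final_result).
    """
    failures = 0
    for _ in range(attempts):
        try:
            test_fn()
            return True, failures
        except AssertionError:
            failures += 1
    return False, failures

# A test that fails once and then passes, simulating flaky editor state.
calls = {"n": 0}
def flaky_test():
    calls["n"] += 1
    assert calls["n"] > 1, "selection not ready yet"

passed, failures = run_with_retries(flaky_test, attempts=2)
# passed is True and failures is 1
```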
177,333 | 21,472,688,077 | IssuesEvent | 2022-04-26 10:57:56 | Satheesh575555/external_okhttp_AOSP10_r33 | https://api.github.com/repos/Satheesh575555/external_okhttp_AOSP10_r33 | opened | WS-2020-0408 (High) detected in netty-handler-4.0.15.Final.jar | security vulnerability | ## WS-2020-0408 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-handler-4.0.15.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: /benchmarks/pom.xml</p>
<p>Path to vulnerable library: /tty/netty-handler/4.0.15.Final/netty-handler-4.0.15.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **netty-handler-4.0.15.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/external_okhttp_AOSP10_r33/commit/bc64b343c3b9efb1056d611791ba83f70d7419ae">bc64b343c3b9efb1056d611791ba83f70d7419ae</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was found in all versions of io.netty:netty-all. Host verification in Netty is disabled by default. This can lead to MITM attack in which an attacker can forge valid SSL/TLS certificates for a different hostname in order to intercept traffic that doesn’t intend for him. This is an issue because the certificate is not matched with the host.
<p>Publish Date: 2020-06-22
<p>URL: <a href=https://github.com/netty/netty/issues/10362>WS-2020-0408</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2020-0408">https://nvd.nist.gov/vuln/detail/WS-2020-0408</a></p>
<p>Release Date: 2020-06-22</p>
<p>Fix Resolution: io.netty:netty-all - 4.1.68.Final-redhat-00001,4.0.0.Final,4.1.67.Final-redhat-00002;io.netty:netty-handler - 4.1.68.Final-redhat-00001,4.1.67.Final-redhat-00001</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2020-0408 (High) detected in netty-handler-4.0.15.Final.jar - ## WS-2020-0408 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-handler-4.0.15.Final.jar</b></p></summary>
<p>Netty is an asynchronous event-driven network application framework for
rapid development of maintainable high performance protocol servers and
clients.</p>
<p>Library home page: <a href="http://netty.io/">http://netty.io/</a></p>
<p>Path to dependency file: /benchmarks/pom.xml</p>
<p>Path to vulnerable library: /tty/netty-handler/4.0.15.Final/netty-handler-4.0.15.Final.jar</p>
<p>
Dependency Hierarchy:
- :x: **netty-handler-4.0.15.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Satheesh575555/external_okhttp_AOSP10_r33/commit/bc64b343c3b9efb1056d611791ba83f70d7419ae">bc64b343c3b9efb1056d611791ba83f70d7419ae</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was found in all versions of io.netty:netty-all. Host verification in Netty is disabled by default. This can lead to MITM attack in which an attacker can forge valid SSL/TLS certificates for a different hostname in order to intercept traffic that doesn’t intend for him. This is an issue because the certificate is not matched with the host.
<p>Publish Date: 2020-06-22
<p>URL: <a href=https://github.com/netty/netty/issues/10362>WS-2020-0408</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2020-0408">https://nvd.nist.gov/vuln/detail/WS-2020-0408</a></p>
<p>Release Date: 2020-06-22</p>
<p>Fix Resolution: io.netty:netty-all - 4.1.68.Final-redhat-00001,4.0.0.Final,4.1.67.Final-redhat-00002;io.netty:netty-handler - 4.1.68.Final-redhat-00001,4.1.67.Final-redhat-00001</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_priority | ws high detected in netty handler final jar ws high severity vulnerability vulnerable library netty handler final jar netty is an asynchronous event driven network application framework for rapid development of maintainable high performance protocol servers and clients library home page a href path to dependency file benchmarks pom xml path to vulnerable library tty netty handler final netty handler final jar dependency hierarchy x netty handler final jar vulnerable library found in head commit a href found in base branch master vulnerability details an issue was found in all versions of io netty netty all host verification in netty is disabled by default this can lead to mitm attack in which an attacker can forge valid ssl tls certificates for a different hostname in order to intercept traffic that doesn’t intend for him this is an issue because the certificate is not matched with the host publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty all final redhat final final redhat io netty netty handler final redhat final redhat step up your open source security game with whitesource | 0 |
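The root cause here, host verification disabled by default, is worth contrasting with defaults that are safe. In Python's standard library, `ssl.create_default_context()` enables exactly the two properties whose absence permits the MITM described above; this snippet is plain stdlib behavior, no assumptions:

```python
import ssl

ctx = ssl.create_default_context()

# Safe against the described MITM: the client demands a certificate
# and matches it against the hostname it intended to reach.
hostname_checked = ctx.check_hostname                 # True by default
cert_required = ctx.verify_mode == ssl.CERT_REQUIRED  # True by default
```

The analogous Netty-side remediation is enabling endpoint identification on the engine's SSL parameters, as described in the linked upstream issue.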
273,017 | 23,722,219,897 | IssuesEvent | 2022-08-30 16:14:45 | Uuvana-Studios/longvinter-windows-client | https://api.github.com/repos/Uuvana-Studios/longvinter-windows-client | opened | See trough Windows | Bug Not Tested | Is it Normal that i can see Trough it?
Look in the Windows in the Upgraded House.
https://i.imgur.com/KB4Roni.png
**Desktop (please complete the following information):**
- OS: [e.g. Windows 11]
- Game Version [e.g. 1.0.8b]
| 1.0 | See trough Windows - Is it Normal that i can see Trough it?
Look in the Windows in the Upgraded House.
https://i.imgur.com/KB4Roni.png
**Desktop (please complete the following information):**
- OS: [e.g. Windows 11]
- Game Version [e.g. 1.0.8b]
| non_priority | see trough windows is it normal that i can see trough it look in the windows in the upgraded house desktop please complete the following information os game version | 0 |
18,595 | 3,391,004,356 | IssuesEvent | 2015-11-30 13:45:36 | Automattic/wp-calypso | https://api.github.com/repos/Automattic/wp-calypso | closed | Me: verification code placeholder color hard adjustment for easier use | Me Site Settings [Status] Needs Design Review [Type] Janitorial [Type] Question | When you are asked to add a verification code, the placeholder makes it hard to see it's a placeholder. This can be confusing for some users. I wasn't really thinking and ended up clicking verify thinking it has pre-filled. Of course it hadn't, but because the color was so slight it looked like it should be a proper input not a placeholder.

Perhaps it should be a placeholder such as
> 6 digit code | 1.0 | Me: verification code placeholder color hard adjustment for easier use - When you are asked to add a verification code, the placeholder makes it hard to see it's a placeholder. This can be confusing for some users. I wasn't really thinking and ended up clicking verify thinking it has pre-filled. Of course it hadn't, but because the color was so slight it looked like it should be a proper input not a placeholder.

Perhaps it should be a placeholder such as
> 6 digit code | non_priority | me verification code placeholder color hard adjustment for easier use when you are asked to add a verification code the placeholder makes it hard to see it s a placeholder this can be confusing for some users i wasn t really thinking and ended up clicking verify thinking it has pre filled of course it hadn t but because the color was so slight it looked like it should be a proper input not a placeholder perhaps it should be a placeholder such as digit code | 0 |
69,091 | 22,146,845,402 | IssuesEvent | 2022-06-03 12:57:53 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Login on a new account unable to verify device | T-Defect S-Minor A-E2EE A-E2EE-Cross-Signing O-Uncommon Z-WTF Z-NewUserJourney | ### Steps to reproduce
1. Create a new account
2. Log in
3. Go to a new browser and login that same account
### Outcome
#### What happened instead?
I have the `Unable to verify this device` E2E dialog asking me to proceed with a "reset".
#### What did you expect?
To be logged in
I have 0 DMs in my new account, and have not setup anything related to encryption. That step should be skipped entirely
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Login on a new account unable to verify device - ### Steps to reproduce
1. Create a new account
2. Log in
3. Go to a new browser and login that same account
### Outcome
#### What happened instead?
I have the `Unable to verify this device` E2E dialog asking me to proceed with a "reset".
#### What did you expect?
To be logged in
I have 0 DMs in my new account, and have not setup anything related to encryption. That step should be skipped entirely
### Operating system
_No response_
### Browser information
_No response_
### URL for webapp
_No response_
### Application version
_No response_
### Homeserver
_No response_
### Will you send logs?
No | non_priority | login on a new account unable to verify device steps to reproduce create a new account log in go to a new browser and login that same account outcome what happened instead i have the unable to verify this device dialog asking me to proceed with a reset what did you expect to be logged in i have dms in my new account and have not setup anything related to encryption that step should be skipped entirely operating system no response browser information no response url for webapp no response application version no response homeserver no response will you send logs no | 0 |
158,524 | 24,851,819,102 | IssuesEvent | 2022-10-26 20:48:37 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | closed | Develop usability testing plan for Read-Only Admin persona | service-design | **Describe the issue**
In order to do usability testing on the new Product Registry, the Read-Only Admin persona and use cases need to be properly defined.
**Tasks**
- [x] Met with SMEs to get their input
- [x] Create PO persona for testing
- [ ] Create list of functionalities a Read-Only Admin would require of the registry for testing
- [ ] Create use cases for testing
**How does this benefit the users of our platform?**
Usability testing ensures that Read-Only Admin needs are met in terms of functionality and usability.
**Definition of done**
- [ ] Tasks are completed and usability testing is ready to commence | 1.0 | Develop usability testing plan for Read-Only Admin persona - **Describe the issue**
In order to do usability testing on the new Product Registry, the Read-Only Admin persona and use cases need to be properly defined.
**Tasks**
- [x] Met with SMEs to get their input
- [x] Create PO persona for testing
- [ ] Create list of functionalities a Read-Only Admin would require of the registry for testing
- [ ] Create use cases for testing
**How does this benefit the users of our platform?**
Usability testing ensures that Read-Only Admin needs are met in terms of functionality and usability.
**Definition of done**
- [ ] Tasks are completed and usability testing is ready to commence | non_priority | develop usability testing plan for read only admin persona describe the issue in order to do usability testing on the new product registry the read only admin persona and use cases need to be properly defined tasks met with smes to get their input create po persona for testing create list of functionalities a read only admin would require of the registry for testing create use cases for testing how does this benefit the users of our platform usability testing ensures that read only admin needs are met in terms of functionality and usability definition of done tasks are completed and usability testing is ready to commence | 0 |
124,514 | 12,233,213,298 | IssuesEvent | 2020-05-04 11:11:50 | gnuradio/gr-verilog | https://api.github.com/repos/gnuradio/gr-verilog | opened | Add example of generating VCD trace files for GTKWave | documentation enhancement help wanted | Verilator is capable of generating VCD files with the states of logic lines for visualizing the execution of the HDL. GTKWave is a Free and Open Source Software application for working with these files.
https://github.com/gtkwave/gtkwave
An example of generating the VCD files is here.
https://zipcpu.com/blog/2017/06/21/looking-at-verilator.html
The GRC Block should have an option for enabling trace generation and a file path for where to generate it. | 1.0 | Add example of generating VCD trace files for GTKWave - Verilator is capable of generating VCD files with the states of logic lines for visualizing the execution of the HDL. GTKWave is a Free and Open Source Software application for working with these files.
https://github.com/gtkwave/gtkwave
An example of generating the VCD files is here.
https://zipcpu.com/blog/2017/06/21/looking-at-verilator.html
The GRC Block should have an option for enabling trace generation and a file path for where to generate it. | non_priority | add example of generating vcd trace files for gtkwave verilator is capable of generating vcd files with the states of logic lines for visualizing the execution of the hdl gtkwave is a free and open source software application for working with these files an example of generating the vcd files is here the grc block should have an option for enabling trace generation and a file path for where to generate it | 0 |
38,911 | 5,204,519,853 | IssuesEvent | 2017-01-24 15:45:41 | publiclab/plots2 | https://api.github.com/repos/publiclab/plots2 | closed | Add code coverage to plots2 | in progress testing | Its time that we did some analytics on our code. Add code coverage and test coverage to plots2. Trying in two different services - [codeclimate](https://codeclimate.com) and [coveralls](http://coveralls.io/) | 1.0 | Add code coverage to plots2 - Its time that we did some analytics on our code. Add code coverage and test coverage to plots2. Trying in two different services - [codeclimate](https://codeclimate.com) and [coveralls](http://coveralls.io/) | non_priority | add code coverage to its time that we did some analytics on our code add code coverage and test coverage to trying in two different services and | 0 |
91,959 | 10,732,933,806 | IssuesEvent | 2019-10-28 23:24:45 | RebecaM94/Proyecto_Integrador | https://api.github.com/repos/RebecaM94/Proyecto_Integrador | closed | Documentation of the Black Box Tests | documentation | Produce the documentation for the Black Box Tests.
**Definition of done.** This epic can transition to "Done" once the Black Box Testing stage has been fully documented. | 1.0 | Documentation of the Black Box Tests - Produce the documentation for the Black Box Tests.
**Definition of done.** This epic can transition to "Done" once the Black Box Testing stage has been fully documented. | non_priority | documentation of the black box tests produce the documentation for the black box tests definition of done this epic can transition to done once the black box testing stage has been fully documented | 0
277,573 | 24,085,928,909 | IssuesEvent | 2022-09-19 10:55:57 | pgadmin-org/pgadmin4 | https://api.github.com/repos/pgadmin-org/pgadmin4 | reopened | The message "Cannot read properties of undefined (reading 'attnum')" is displayed if the user add comments or column through ERD tool | Bug EDB Sprint 126 In Testing | The message "Cannot read properties of undefined (reading 'attnum')" is displayed if the user add comments or column through the ERD tool
Steps:
1. Click on database edb
2. Right Click on generate script
3. Double click on dept table
4. Table dialogue open
5. write comments
6. Click on the Save button
7. Error Cannot read properties of undefined (reading 'attnum') is displayed
<img width="1758" alt="Screen Shot 2022-09-19 at 12 57 48 PM" src="https://user-images.githubusercontent.com/43006122/190974625-4d908fed-ba65-4f93-aef3-a467c06460a6.png">
| 1.0 | The message "Cannot read properties of undefined (reading 'attnum')" is displayed if the user add comments or column through ERD tool - The message "Cannot read properties of undefined (reading 'attnum')" is displayed if the user add comments or column through the ERD tool
Steps:
1. Click on database edb
2. Right Click on generate script
3. Double click on dept table
4. Table dialogue open
5. write comments
6. Click on the Save button
7. Error Cannot read properties of undefined (reading 'attnum') is displayed
<img width="1758" alt="Screen Shot 2022-09-19 at 12 57 48 PM" src="https://user-images.githubusercontent.com/43006122/190974625-4d908fed-ba65-4f93-aef3-a467c06460a6.png">
| non_priority | the message cannot read properties of undefined reading attnum is displayed if the user add comments or column through erd tool the message cannot read properties of undefined reading attnum is displayed if the user add comments or column through the erd tool steps click on database edb right click on generate script double click on dept table table dialogue open write comments click on the save button error cannot read properties of undefined reading attnum is displayed img width alt screen shot at pm src | 0 |
351,790 | 25,039,120,371 | IssuesEvent | 2022-11-04 18:49:26 | ComputationalCryoEM/ASPIRE-Python | https://api.github.com/repos/ComputationalCryoEM/ASPIRE-Python | closed | `ArrayImageSource` docstring formatting | documentation cleanup | It appears that the docstring for `ArrayImageSource` is improperly formatted as the parameters are not displaying correctly in the docs (see pic). Just need to add a blank line in the `__init__`.

| 1.0 | `ArrayImageSource` docstring formatting - It appears that the docstring for `ArrayImageSource` is improperly formatted as the parameters are not displaying correctly in the docs (see pic). Just need to add a blank line in the `__init__`.

| non_priority | arrayimagesource docstring formatting it appears that the docstring for arrayimagesource is improperly formatted as the parameters are not displaying correctly in the docs see pic just need to add a blank line in the init | 0 |
350,054 | 24,966,721,213 | IssuesEvent | 2022-11-01 20:04:06 | jenkinsci/lockable-resources-plugin | https://api.github.com/repos/jenkinsci/lockable-resources-plugin | closed | Deactivate Java8 support | dependencies good first issue documentation | Deactivate Java8 support, because it is no more supported by Jenkins-core self
ex: here are still used tests for Java8 with cost cpu and time and waste development process
https://github.com/jenkinsci/lockable-resources-plugin/blob/master/Jenkinsfile
We shall also provide official Java8 deprecation (maybe changelog is enough)
| 1.0 | Deactivate Java8 support - Deactivate Java8 support, because it is no more supported by Jenkins-core self
ex: here are still used tests for Java8 with cost cpu and time and waste development process
https://github.com/jenkinsci/lockable-resources-plugin/blob/master/Jenkinsfile
We shall also provide official Java8 deprecation (maybe changelog is enough)
| non_priority | deactivate support deactivate support because it is no more supported by jenkins core self ex here are still used tests for with cost cpu and time and waste development process we shall also provide official deprecation maybe changelog is enough | 0 |
56,125 | 13,759,157,808 | IssuesEvent | 2020-10-07 02:11:39 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | ModuleNotFoundError: No module named 'tensorflow' | stalled stat:awaiting response subtype:windows type:build/install | -using anaconda python v3.8
-working with jupyter notebook
-try to add on cmd system --- conda install tensorflow but loading old version tensorflow and than give to me this error code:
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-62c45ef75a16> in <module>
----> 1 from tensorflow.keras.models import Sequential
2 #for creating the models
3 from tensorflow.keras.layers import Dense
4 #this is how we create the layers too
ModuleNotFoundError: No module named 'tensorflow'
| 1.0 | ModuleNotFoundError: No module named 'tensorflow' - -using anaconda python v3.8
-working with jupyter notebook
-try to add on cmd system --- conda install tensorflow but loading old version tensorflow and than give to me this error code:
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-62c45ef75a16> in <module>
----> 1 from tensorflow.keras.models import Sequential
2 #for creating the models
3 from tensorflow.keras.layers import Dense
4 #this is how we create the layers too
ModuleNotFoundError: No module named 'tensorflow'
 | non_priority | modulenotfounderror no module named tensorflow using anaconda python working with jupyter notebook try to add on cmd system conda install tensorflow but loading old version tensorflow and than give to me this error code from tensorflow keras models import sequential from tensorflow keras layers import dense modulenotfounderror traceback most recent call last in from tensorflow keras models import sequential for creating the models from tensorflow keras layers import dense this is how we create the layers too modulenotfounderror no module named tensorflow | 0
413,455 | 27,952,933,805 | IssuesEvent | 2023-03-24 10:13:10 | ovh/terraform-provider-ovh | https://api.github.com/repos/ovh/terraform-provider-ovh | closed | Error upgrading kubernetes cluster version. | Type: Documentation Product: Managed Kubernetes Service Status: Backlog |
### Terraform Version
Terraform v1.3.6
### Affected Resource(s)
- ovh_cloud_project_kube
### Terraform Configuration Files
```hcl
resource "ovh_cloud_project_kube" "cluster" {
service_name = var.ovh.project
name = "cluster_name"
region = var.ovh.region
version = var.kube_version
private_network_id = var.private_network_id
update_policy = "NEVER_UPDATE"
lifecycle {
prevent_destroy = true
}
}
```
### Debug Output
https://gist.github.com/skydjol/f2a0d9e4f152d6067957f729f6713c46
### Expected Behavior
Upgrade to the version specified in terraform.
### Actual Behavior
terraform exits in error.
```
│ Error: HTTP Error 409: Client::Conflict: "409: Update aborted for service xxxxxxxxx because of policy NEVER_UPDATE (request ID: xxxxxxxxx)" (X-OVH-Query-Id: xxxxxxxxx)
```
### Steps to Reproduce
1. `create cluster in 1.22 with terraform module`
2. `update cluster in 1.23 with terraform module`
| 1.0 | Error upgrading kubernetes cluster version. -
### Terraform Version
Terraform v1.3.6
### Affected Resource(s)
- ovh_cloud_project_kube
### Terraform Configuration Files
```hcl
resource "ovh_cloud_project_kube" "cluster" {
service_name = var.ovh.project
name = "cluster_name"
region = var.ovh.region
version = var.kube_version
private_network_id = var.private_network_id
update_policy = "NEVER_UPDATE"
lifecycle {
prevent_destroy = true
}
}
```
### Debug Output
https://gist.github.com/skydjol/f2a0d9e4f152d6067957f729f6713c46
### Expected Behavior
Upgrade to the version specified in terraform.
### Actual Behavior
terraform exits in error.
```
│ Error: HTTP Error 409: Client::Conflict: "409: Update aborted for service xxxxxxxxx because of policy NEVER_UPDATE (request ID: xxxxxxxxx)" (X-OVH-Query-Id: xxxxxxxxx)
```
### Steps to Reproduce
1. `create cluster in 1.22 with terraform module`
2. `update cluster in 1.23 with terraform module`
| non_priority | error upgrading kubernetes cluster version terraform version terraform affected resource s ovh cloud project kube terraform configuration files hcl resource ovh cloud project kube cluster service name var ovh project name cluster name region var ovh region version var kube version private network id var private network id update policy never update lifecycle prevent destroy true debug output expected behavior upgrade to the version specified in terraform actual behavior terraform exits in error │ error http error client conflict update aborted for service xxxxxxxxx because of policy never update request id xxxxxxxxx x ovh query id xxxxxxxxx steps to reproduce create cluster in with terraform module update cluster in with terraform module | 0 |
278,351 | 21,075,279,180 | IssuesEvent | 2022-04-02 03:39:41 | Shopify/shopify-cli | https://api.github.com/repos/Shopify/shopify-cli | closed | Docs are very sparse, how do the 'Mutations' and 'Handlers' fit into the overall flow of the node app generated by 'create'? | area:documentation no-issue-activity | The docs currently have zero information about the resulting apps created by the 'shopify create' command. I'm working with the node version, but I'm sure there are people struggling in the same way with the Ruby version.
A full on tutorial would be great, but in the meantime if someone who is familiar with the architecture of these created apps could just add ** something ** in there about them.
Specifically in my case I am trying to figure out how the 'Mutations' and 'Handlers' directories fit into the rest of the app. It is especially confusing to see them nested within a 'Server' directory when they seem to have nothing to do with the serving of the app.
If anyone can enlighten me on, for example, how I should actually USE a mutation file in my app, that would be awesome!
| 1.0 | Docs are very sparse, how do the 'Mutations' and 'Handlers' fit into the overall flow of the node app generated by 'create'? - The docs currently have zero information about the resulting apps created by the 'shopify create' command. I'm working with the node version, but I'm sure there are people struggling in the same way with the Ruby version.
A full on tutorial would be great, but in the meantime if someone who is familiar with the architecture of these created apps could just add ** something ** in there about them.
Specifically in my case I am trying to figure out how the 'Mutations' and 'Handlers' directories fit into the rest of the app. It is especially confusing to see them nested within a 'Server' directory when they seem to have nothing to do with the serving of the app.
If anyone can enlighten me on, for example, how I should actually USE a mutation file in my app, that would be awesome!
| non_priority | docs are very sparse how do the mutations and handlers fit into the overall flow of the node app generated by create the docs currently have zero information about the resulting apps created by the shopify create command i m working with the node version but i m sure there are people struggling in the same way with the ruby version a full on tutorial would be great but in the meantime if someone who is familiar with the architecture of these created apps could just add something in there about them specifically in my case i am trying to figure out how the mutations and handlers directories fit into the rest of the app it is especially confusing to see them nested within a server directory when they seem to have nothing to do with the serving of the app if anyone can enlighten me on for example how i should actually use a mutation file in my app that would be awesome | 0 |
9,744 | 3,965,543,001 | IssuesEvent | 2016-05-03 08:50:27 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | Reverse P/Invoke Vector3 argument passing failure on Linux | 2 - In Progress bug CodeGen | On Linux, the following RPInvoke case fails. The second Vector3 object, v3f32_mem0 is assigned onto stack, and its z value is wrong. The testcase passes on Windows.
Native side
```
//
// RPInvoke native call for Vector3 argument
//
typedef void (__stdcall *CallBack_RPInvoke_Vector3Arg_Unix)(
Vector3 v3f32_xmm0,
float f32_xmm2,
float f32_xmm3,
float f32_xmm4,
float f32_xmm5,
float f32_xmm6,
float f32_xmm7,
float f32_mem0,
Vector3 v3f32_mem1,
float f32_mem2,
float f32_mem3);
EXPORT(void) __stdcall nativeCall_RPInvoke_Vector3Arg_Unix(
CallBack_RPInvoke_Vector3Arg_Unix notify)
{
Vector3 v1, v2;
v1.x = 1; v1.y = 2; v1.z = 3;
v2.x = 10; v2.y = 20; v2.z = 30;
float f0 = 0, f1 = 1, f2 = 2, f3 = 3, f4 = 4, f5 = 5, f6 = 6, f7 = 7, f8 = 8;
notify(
v1,
f0, f1, f2, f3, f4, f5,
f6, // mapped onto stack
v2,
f7, f8);
}
```
Managed side
```
static void callBack_RPInvoke_Vector3Arg_Unix(
Vector3 v3f32_xmm0,
float f32_xmm2,
float f32_xmm3,
float f32_xmm4,
float f32_xmm5,
float f32_xmm6,
float f32_xmm7,
float f32_mem0,
Vector3 v3f32_mem0,
float f32_mem1,
float f32_mem2)
{
result = v3f32_mem0.Z == 30;
/*
// sum = (1, 2, 3) dot (1, 2, 3) = 14
float sum0 = Vector3.Dot(v3f32_xmm0, v3f32_xmm0);
// sum = (10, 20, 30) dot (10, 20, 30) = 1400
float sum1 = Vector3.Dot(v3f32_mem0, v3f32_mem0);
// sum = (1, 2, 3) dot (10, 20, 30) = 140
float sum2 = Vector3.Dot(v3f32_xmm0, v3f32_mem0);
// sum = 0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 = 36
float sum3 = f32_xmm2 + f32_xmm3 + f32_xmm4 + f32_xmm5 + f32_xmm6 + f32_xmm7
+ f32_mem0 + f32_mem1 + f32_mem2;
Console.WriteLine("callBack_RPInvoke_Vector3Arg_Unix:");
Console.WriteLine(" {0}, {1}, {2}", v3f32_xmm0.X, v3f32_xmm0.Y, v3f32_xmm0.Z);
Console.WriteLine(" {0}, {1}, {2}", v3f32_mem0.X, v3f32_mem0.Y, v3f32_mem0.Z);
Console.WriteLine(" Sum0,1,2,3 = {0}, {1}, {2}, {3}", sum0, sum1, sum2, sum3);
result = (sum0 == 14) && (sum1 == 1400) && (sum2 == 140) && (sum3==36);
*/
}
```
v3f32_mem0 is assigned (10,20,30) on the native side.
1) Native stack right before calling the callback function
$rsp = 0x7fffffffd0b0
```
(gdb) x /32f 0x7fffffffd0b0
; 6 is f32_mem0 and 10, 20, 30 are located at [rsp + 8, 12, 16].
0x7fffffffd0b0: 6 4.59163468e-41 10 20
0x7fffffffd0c0: 30 0 7 4.59163468e-41
```
2) Before entering IL_STUB_ReversePInvoke, it looks like the stack arguments are copied to different stack locations. $rsp is adjusted from 0x7fffffffd0b0 to 0x7fffffffcfc8, and stack arguments are found at the new stack addresses below.
```
(gdb) print $rbp
$6 = (void *) 0x7fffffffd0a0
(gdb) print $rsp
$7 = (void *) 0x7fffffffcfc8
(gdb) x /32f 0x7fffffffcfd0
; 6. 10, 20 are correct but v3f32_mem0.z now has 1 instead of 30.
0x7fffffffcfd0: 6 4.59163468e-41 10 20
0x7fffffffcfe0: 1 2 0 0
``` | 1.0 | Reverse P/Invoke Vector3 argument passing failure on Linux - On Linux, the following RPInvoke case fails. The second Vector3 object, v3f32_mem0 is assigned onto stack, and its z value is wrong. The testcase passes on Windows.
Native side
```
//
// RPInvoke native call for Vector3 argument
//
typedef void (__stdcall *CallBack_RPInvoke_Vector3Arg_Unix)(
Vector3 v3f32_xmm0,
float f32_xmm2,
float f32_xmm3,
float f32_xmm4,
float f32_xmm5,
float f32_xmm6,
float f32_xmm7,
float f32_mem0,
Vector3 v3f32_mem1,
float f32_mem2,
float f32_mem3);
EXPORT(void) __stdcall nativeCall_RPInvoke_Vector3Arg_Unix(
CallBack_RPInvoke_Vector3Arg_Unix notify)
{
Vector3 v1, v2;
v1.x = 1; v1.y = 2; v1.z = 3;
v2.x = 10; v2.y = 20; v2.z = 30;
float f0 = 0, f1 = 1, f2 = 2, f3 = 3, f4 = 4, f5 = 5, f6 = 6, f7 = 7, f8 = 8;
notify(
v1,
f0, f1, f2, f3, f4, f5,
f6, // mapped onto stack
v2,
f7, f8);
}
```
Managed side
```
static void callBack_RPInvoke_Vector3Arg_Unix(
Vector3 v3f32_xmm0,
float f32_xmm2,
float f32_xmm3,
float f32_xmm4,
float f32_xmm5,
float f32_xmm6,
float f32_xmm7,
float f32_mem0,
Vector3 v3f32_mem0,
float f32_mem1,
float f32_mem2)
{
result = v3f32_mem0.Z == 30;
/*
// sum = (1, 2, 3) dot (1, 2, 3) = 14
float sum0 = Vector3.Dot(v3f32_xmm0, v3f32_xmm0);
// sum = (10, 20, 30) dot (10, 20, 30) = 1400
float sum1 = Vector3.Dot(v3f32_mem0, v3f32_mem0);
// sum = (1, 2, 3) dot (10, 20, 30) = 140
float sum2 = Vector3.Dot(v3f32_xmm0, v3f32_mem0);
// sum = 0 + 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 = 36
float sum3 = f32_xmm2 + f32_xmm3 + f32_xmm4 + f32_xmm5 + f32_xmm6 + f32_xmm7
+ f32_mem0 + f32_mem1 + f32_mem2;
Console.WriteLine("callBack_RPInvoke_Vector3Arg_Unix:");
Console.WriteLine(" {0}, {1}, {2}", v3f32_xmm0.X, v3f32_xmm0.Y, v3f32_xmm0.Z);
Console.WriteLine(" {0}, {1}, {2}", v3f32_mem0.X, v3f32_mem0.Y, v3f32_mem0.Z);
Console.WriteLine(" Sum0,1,2,3 = {0}, {1}, {2}, {3}", sum0, sum1, sum2, sum3);
result = (sum0 == 14) && (sum1 == 1400) && (sum2 == 140) && (sum3==36);
*/
}
```
v3f32_mem0 is assigned (10,20,30) on the native side.
1) Native stack right before calling the callback function
$rsp = 0x7fffffffd0b0
```
(gdb) x /32f 0x7fffffffd0b0
; 6 is f32_mem0 and 10, 20, 30 are located at [rsp + 8, 12, 16].
0x7fffffffd0b0: 6 4.59163468e-41 10 20
0x7fffffffd0c0: 30 0 7 4.59163468e-41
```
2) Before entering IL_STUB_ReversePInvoke, it looks like the stack arguments are copied to different stack locations. $rsp is adjusted from 0x7fffffffd0b0 to 0x7fffffffcfc8, and stack arguments are found at the new stack addresses below.
```
(gdb) print $rbp
$6 = (void *) 0x7fffffffd0a0
(gdb) print $rsp
$7 = (void *) 0x7fffffffcfc8
(gdb) x /32f 0x7fffffffcfd0
; 6. 10, 20 are correct but v3f32_mem0.z now has 1 instead of 30.
0x7fffffffcfd0: 6 4.59163468e-41 10 20
0x7fffffffcfe0: 1 2 0 0
``` | non_priority | reverse p invoke argument passing failure on linux on linux the following rpinvoke case fails the second object is assigned onto stack and its z value is wrong the testcase passes on windows native side rpinvoke native call for argument typedef void stdcall callback rpinvoke unix float float float float float float float float float export void stdcall nativecall rpinvoke unix callback rpinvoke unix notify x y z x y z float notify mapped onto stack managed side static void callback rpinvoke unix float float float float float float float float float result z sum dot float dot sum dot float dot sum dot float dot sum float console writeline callback rpinvoke unix console writeline x y z console writeline x y z console writeline result is assigned on the native side native stack right before calling the callback function rsp gdb x is and are located at before entering il stub reversepinvoke it looks like the stack arguments are copied to different stack locations rsp is adjusted from to and stack arguments are found at the new stack addresses below gdb print rbp void gdb print rsp void gdb x are correct but z now has instead of | 0 |
3,123 | 2,658,418,215 | IssuesEvent | 2015-03-18 15:31:51 | genome/gms | https://api.github.com/repos/genome/gms | closed | Use of qw(...) as parentheses is deprecated | bug testing merge | Starting a ref-align build gives this error.
```
gmsuser@clia1 /opt/gms/TB3QU62/sw/genome/lib/perl/Genome/Model/Build (gms-pub-2015.02.22)> genome model build start hcc1395-tumor-refalign-wgs
Using libraries at /opt/gms/TB3QU62/sw/genome/lib/perl
'models' may require verification...
Resolving parameter 'models' from command argument 'hcc1395-tumor-refalign-wgs'... found 1
Trying to start #1: hcc1395-tumor-refalign-wgs (2891325873)...
Exception is Error while autoloading with 'use Genome::Model::Build::ReferenceAlignment': Error while autoloading with 'use Genome::Model::Build::RunsDV2': Use of qw(...) as parentheses is deprecated at /opt/gms/TB3QU62/sw/genome/lib/perl/Genome/Model/Build/RunsDV2.pm line 46.
Compilation failed in require at (eval 2024) line 2.
BEGIN failed--compilation aborted at (eval 2024) line 2.
at /opt/gms/TB3QU62/sw/ur/lib/UR/ModuleLoader.pm line 83
Compilation failed in require at (eval 2021) line 2.
BEGIN failed--compilation aborted at (eval 2021) line 2.
at /opt/gms/TB3QU62/sw/ur/lib/UR/ModuleLoader.pm line 83
ERROR: Died at /opt/gms/TB3QU62/sw/ur/lib/Command/V2.pm line 332.
Removing remaining resource lock: 'build_requested/2891325873' at /opt/gms/TB3QU62/sw/genome/lib/perl/Genome/Sys/Lock/FileBackend.pm line 244.
``` | 1.0 | Use of qw(...) as parentheses is deprecated - Starting a ref-align build gives this error.
```
gmsuser@clia1 /opt/gms/TB3QU62/sw/genome/lib/perl/Genome/Model/Build (gms-pub-2015.02.22)> genome model build start hcc1395-tumor-refalign-wgs
Using libraries at /opt/gms/TB3QU62/sw/genome/lib/perl
'models' may require verification...
Resolving parameter 'models' from command argument 'hcc1395-tumor-refalign-wgs'... found 1
Trying to start #1: hcc1395-tumor-refalign-wgs (2891325873)...
Exception is Error while autoloading with 'use Genome::Model::Build::ReferenceAlignment': Error while autoloading with 'use Genome::Model::Build::RunsDV2': Use of qw(...) as parentheses is deprecated at /opt/gms/TB3QU62/sw/genome/lib/perl/Genome/Model/Build/RunsDV2.pm line 46.
Compilation failed in require at (eval 2024) line 2.
BEGIN failed--compilation aborted at (eval 2024) line 2.
at /opt/gms/TB3QU62/sw/ur/lib/UR/ModuleLoader.pm line 83
Compilation failed in require at (eval 2021) line 2.
BEGIN failed--compilation aborted at (eval 2021) line 2.
at /opt/gms/TB3QU62/sw/ur/lib/UR/ModuleLoader.pm line 83
ERROR: Died at /opt/gms/TB3QU62/sw/ur/lib/Command/V2.pm line 332.
Removing remaining resource lock: 'build_requested/2891325873' at /opt/gms/TB3QU62/sw/genome/lib/perl/Genome/Sys/Lock/FileBackend.pm line 244.
``` | non_priority | use of qw as parentheses is deprecated starting a ref align build gives this error gmsuser opt gms sw genome lib perl genome model build gms pub genome model build start tumor refalign wgs using libraries at opt gms sw genome lib perl models may require verification resolving parameter models from command argument tumor refalign wgs found trying to start tumor refalign wgs exception is error while autoloading with use genome model build referencealignment error while autoloading with use genome model build use of qw as parentheses is deprecated at opt gms sw genome lib perl genome model build pm line compilation failed in require at eval line begin failed compilation aborted at eval line at opt gms sw ur lib ur moduleloader pm line compilation failed in require at eval line begin failed compilation aborted at eval line at opt gms sw ur lib ur moduleloader pm line error died at opt gms sw ur lib command pm line removing remaining resource lock build requested at opt gms sw genome lib perl genome sys lock filebackend pm line | 0 |
13,097 | 15,387,542,179 | IssuesEvent | 2021-03-03 09:39:59 | docker/compose-cli | https://api.github.com/repos/docker/compose-cli | opened | Unable to bind to IPv6 | compatibility compose local | **Description**
`docker compose` on the local context does not allow binding to an IPv6 address.
**Steps to reproduce the issue:**
Compose file with IPv6 port, e.g.:
```yaml
...
ports:
- "${IP6_ADDR}:53:53/tcp"
...
```
**Describe the results you received:**
```console
1 error(s) decoding:
* error decoding 'Ports': Invalid ip address REDACTED: address REDACTED: too many colons in address
```
**Describe the results you expected:**
Same behavior as `docker-compose up`.
**Additional information you deem important (e.g. issue happens only occasionally):**
Reproducible with current `HEAD:main` (12ffdd140521deeeb0b1694c2b5615817c24c66c). | True | Unable to bind to IPv6 - **Description**
`docker compose` on the local context does not allow binding to an IPv6 address.
**Steps to reproduce the issue:**
Compose file with IPv6 port, e.g.:
```yaml
...
ports:
- "${IP6_ADDR}:53:53/tcp"
...
```
**Describe the results you received:**
```console
1 error(s) decoding:
* error decoding 'Ports': Invalid ip address REDACTED: address REDACTED: too many colons in address
```
**Describe the results you expected:**
Same behavior as `docker-compose up`.
**Additional information you deem important (e.g. issue happens only occasionally):**
Reproducible with current `HEAD:main` (12ffdd140521deeeb0b1694c2b5615817c24c66c). | non_priority | unable to bind to description docker compose on the local context does not allow binding to an address steps to reproduce the issue compose file with port e g yaml ports addr tcp describe the results you received console error s decoding error decoding ports invalid ip address redacted address redacted too many colons in address describe the results you expected same behavior as docker compose up additional information you deem important e g issue happens only occasionally reproducible with current head main | 0 |
126,782 | 26,913,202,744 | IssuesEvent | 2023-02-07 02:44:35 | microsoft/vscode-cpptools | https://api.github.com/repos/microsoft/vscode-cpptools | opened | Saving may fail while code analysis is running due to willSaveWaitUntil returning before the file lock is actually released | bug Language Service investigate Feature: Code Analysis | It seems like we may be returning too soon, not sure why yet. I have a case which very easily repros it.
| 1.0 | Saving may fail while code analysis is running due to willSaveWaitUntil returning before the file lock is actually released - It seems like we may be returning too soon, not sure why yet. I have a case which very easily repros it.
| non_priority | saving may fail while code analysis is running due to willsavewaituntil returning before the file lock is actually released it seems like we may be returning too soon not sure why yet i have a case which very easily repros it | 0 |
3,335 | 5,774,675,957 | IssuesEvent | 2017-04-28 07:58:01 | parksandwildlife/biosys-turtles | https://api.github.com/repos/parksandwildlife/biosys-turtles | opened | Data classification | Business Requirement must have | ### Source
Name requesting role or stakeholder from which this requirement was sourced:
* Core stakeholders
* Turtle tagging stakeholders
* Turtle track count stakeholders
* Marine wildlife stranding stakeholders
* Departmental stakeholders
### Requirement
There needs to be a departmental data classification to determine sensitivities of data.
The classification should contain a comprehensive list of sensitivities around data, together with methods to mitigate them, so that the mitigated data is fit for public release without risk to the Department.
### Use cases
* Personally identifying information needs to be anonymised
* Precise locations of threatened species may be sensitive
* Departmental involvement in controversial development projects may be seen negatively in the public eye | 1.0 | Data classification - ### Source
Name requesting role or stakeholder from which this requirement was sourced:
* Core stakeholders
* Turtle tagging stakeholders
* Turtle track count stakeholders
* Marine wildlife stranding stakeholders
* Departmental stakeholders
### Requirement
There needs to be a departmental data classification to determine sensitivities of data.
The classification should contain a comprehensive list of sensitivities around data, together with methods to mitigate them, so that the mitigated data is fit for public release without risk to the Department.
### Use cases
* Personally identifying information needs to be anonymised
* Precise locations of threatened species may be sensitive
* Departmental involvement in controversial development projects may be seen negatively in the public eye | non_priority | data classification source name requesting role or stakeholder from which this requirement was sourced core stakeholders turtle tagging stakeholders turtle track count stakeholders marine wildlife stranding stakeholders departmental stakeholders requirement there needs to be a departmental data classification to determine sensitivities of data the classification should contain a comprehensive list of sensitivities around data together with methods to mitigate them so that the mitigated data is fit for public release without risk to the department use cases personally identifying information needs to be anonymised precise locations of threatened species may be sensitive departmental involvement in controversial development projects may be seen negatively in the public eye | 0 |
193,489 | 14,654,212,721 | IssuesEvent | 2020-12-28 08:08:09 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | jfrog-solutiontest/dtls: conn_test.go; 62 LoC | fresh medium test |
Found a possible issue in [jfrog-solutiontest/dtls](https://www.github.com/jfrog-solutiontest/dtls) at [conn_test.go](https://github.com/jfrog-solutiontest/dtls/blob/565f66d4065e494b78c285fdc92bcadb240cf63b/conn_test.go#L305-L366)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable test used in defer or goroutine at line 349
[Click here to see the code in its original context.](https://github.com/jfrog-solutiontest/dtls/blob/565f66d4065e494b78c285fdc92bcadb240cf63b/conn_test.go#L305-L366)
<details>
<summary>Click here to show the 62 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range []struct {
Name string
ClientCipherSuites []CipherSuiteID
ServerCipherSuites []CipherSuiteID
WantClientError error
WantServerError error
}{
{
Name: "No CipherSuites specified",
ClientCipherSuites: nil,
ServerCipherSuites: nil,
WantClientError: nil,
WantServerError: nil,
},
{
Name: "Invalid CipherSuite",
ClientCipherSuites: []CipherSuiteID{0x00},
ServerCipherSuites: []CipherSuiteID{0x00},
WantClientError: errors.New("CipherSuite with id(0) is not valid"),
WantServerError: errors.New("CipherSuite with id(0) is not valid"),
},
{
Name: "Valid CipherSuites specified",
ClientCipherSuites: []CipherSuiteID{TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256},
ServerCipherSuites: []CipherSuiteID{TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256},
WantClientError: nil,
WantServerError: nil,
},
{
Name: "CipherSuites mismatch",
ClientCipherSuites: []CipherSuiteID{TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256},
ServerCipherSuites: []CipherSuiteID{TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA},
WantClientError: errCipherSuiteNoIntersection,
WantServerError: errCipherSuiteNoIntersection,
},
} {
ca, cb := net.Pipe()
type result struct {
c *Conn
err error
}
c := make(chan result)
go func() {
client, err := testClient(ca, &Config{CipherSuites: test.ClientCipherSuites}, true)
c <- result{client, err}
}()
_, err := testServer(cb, &Config{CipherSuites: test.ServerCipherSuites}, true)
if err != nil || test.WantServerError != nil {
if !(err != nil && test.WantServerError != nil && err.Error() == test.WantServerError.Error()) {
t.Errorf("TestCipherSuiteConfiguration: Server Error Mismatch '%s': expected(%v) actual(%v)", test.Name, test.WantServerError, err)
}
}
res := <-c
if res.err != nil || test.WantClientError != nil {
if !(res.err != nil && test.WantClientError != nil && err.Error() == test.WantClientError.Error()) {
t.Errorf("TestSRTPConfiguration: Client Error Mismatch '%s': expected(%v) actual(%v)", test.Name, test.WantClientError, err)
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 565f66d4065e494b78c285fdc92bcadb240cf63b
| 1.0 | jfrog-solutiontest/dtls: conn_test.go; 62 LoC -
Found a possible issue in [jfrog-solutiontest/dtls](https://www.github.com/jfrog-solutiontest/dtls) at [conn_test.go](https://github.com/jfrog-solutiontest/dtls/blob/565f66d4065e494b78c285fdc92bcadb240cf63b/conn_test.go#L305-L366)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> range-loop variable test used in defer or goroutine at line 349
[Click here to see the code in its original context.](https://github.com/jfrog-solutiontest/dtls/blob/565f66d4065e494b78c285fdc92bcadb240cf63b/conn_test.go#L305-L366)
<details>
<summary>Click here to show the 62 line(s) of Go which triggered the analyzer.</summary>
```go
for _, test := range []struct {
Name string
ClientCipherSuites []CipherSuiteID
ServerCipherSuites []CipherSuiteID
WantClientError error
WantServerError error
}{
{
Name: "No CipherSuites specified",
ClientCipherSuites: nil,
ServerCipherSuites: nil,
WantClientError: nil,
WantServerError: nil,
},
{
Name: "Invalid CipherSuite",
ClientCipherSuites: []CipherSuiteID{0x00},
ServerCipherSuites: []CipherSuiteID{0x00},
WantClientError: errors.New("CipherSuite with id(0) is not valid"),
WantServerError: errors.New("CipherSuite with id(0) is not valid"),
},
{
Name: "Valid CipherSuites specified",
ClientCipherSuites: []CipherSuiteID{TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256},
ServerCipherSuites: []CipherSuiteID{TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256},
WantClientError: nil,
WantServerError: nil,
},
{
Name: "CipherSuites mismatch",
ClientCipherSuites: []CipherSuiteID{TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256},
ServerCipherSuites: []CipherSuiteID{TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA},
WantClientError: errCipherSuiteNoIntersection,
WantServerError: errCipherSuiteNoIntersection,
},
} {
ca, cb := net.Pipe()
type result struct {
c *Conn
err error
}
c := make(chan result)
go func() {
client, err := testClient(ca, &Config{CipherSuites: test.ClientCipherSuites}, true)
c <- result{client, err}
}()
_, err := testServer(cb, &Config{CipherSuites: test.ServerCipherSuites}, true)
if err != nil || test.WantServerError != nil {
if !(err != nil && test.WantServerError != nil && err.Error() == test.WantServerError.Error()) {
t.Errorf("TestCipherSuiteConfiguration: Server Error Mismatch '%s': expected(%v) actual(%v)", test.Name, test.WantServerError, err)
}
}
res := <-c
if res.err != nil || test.WantClientError != nil {
if !(res.err != nil && test.WantClientError != nil && err.Error() == test.WantClientError.Error()) {
t.Errorf("TestSRTPConfiguration: Client Error Mismatch '%s': expected(%v) actual(%v)", test.Name, test.WantClientError, err)
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 565f66d4065e494b78c285fdc92bcadb240cf63b
| non_priority | jfrog solutiontest dtls conn test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message range loop variable test used in defer or goroutine at line click here to show the line s of go which triggered the analyzer go for test range struct name string clientciphersuites ciphersuiteid serverciphersuites ciphersuiteid wantclienterror error wantservererror error name no ciphersuites specified clientciphersuites nil serverciphersuites nil wantclienterror nil wantservererror nil name invalid ciphersuite clientciphersuites ciphersuiteid serverciphersuites ciphersuiteid wantclienterror errors new ciphersuite with id is not valid wantservererror errors new ciphersuite with id is not valid name valid ciphersuites specified clientciphersuites ciphersuiteid tls ecdhe ecdsa with aes gcm serverciphersuites ciphersuiteid tls ecdhe ecdsa with aes gcm wantclienterror nil wantservererror nil name ciphersuites mismatch clientciphersuites ciphersuiteid tls ecdhe ecdsa with aes gcm serverciphersuites ciphersuiteid tls ecdhe ecdsa with aes cbc sha wantclienterror errciphersuitenointersection wantservererror errciphersuitenointersection ca cb net pipe type result struct c conn err error c make chan result go func client err testclient ca config ciphersuites test clientciphersuites true c result client err err testserver cb config ciphersuites test serverciphersuites true if err nil test wantservererror nil if err nil test wantservererror nil err error test wantservererror error t errorf testciphersuiteconfiguration server error mismatch s expected v actual v test name test wantservererror err res c if res err nil test wantclienterror nil if res err nil test wantclienterror nil err error test wantclienterror error t errorf testsrtpconfiguration client error mismatch s expected v 
actual v test name test wantclienterror err leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
176,340 | 28,071,781,845 | IssuesEvent | 2023-03-29 19:40:38 | HYF-Class19/RCP-Team | https://api.github.com/repos/HYF-Class19/RCP-Team | closed | 33 - figma user account - v2 | design | ## Figma design
- design the sign-up
- design the sign-in
- design the reset password
- add a screenshot to the `readme.md`
| 1.0 | 33 - figma user account - v2 - ## Figma design
- design the sign-up
- design the sign-in
- design the reset password
- add a screenshot to the `readme.md`
| non_priority | figma user account figma design design the sign up design the sign in design the reset password add a screenshot to the readme md | 0 |
96,934 | 10,967,342,357 | IssuesEvent | 2019-11-28 09:21:32 | fnielsen/scholia | https://api.github.com/repos/fnielsen/scholia | closed | Write a summary of progress towards Goal 6: Improving documentation | documentation | Improving documentation of corpus, code, queries, workflows, examples and related resources, as well as limits of Scholia
https://github.com/fnielsen/scholia/projects/30 | 1.0 | Write a summary of progress towards Goal 6: Improving documentation - Improving documentation of corpus, code, queries, workflows, examples and related resources, as well as limits of Scholia
https://github.com/fnielsen/scholia/projects/30 | non_priority | write a summary of progress towards goal improving documentation improving documentation of corpus code queries workflows examples and related resources as well as limits of scholia | 0 |
59,106 | 7,201,606,493 | IssuesEvent | 2018-02-05 23:18:58 | learningequality/kolibri | https://api.github.com/repos/learningequality/kolibri | closed | not-very-helpful error in profile when a username already exists | TAG: needs design TASK: ux update |
### Observed behavior
In the 'profile' view, try to change your username to one that already exists:

### Expected behavior
Error says "That username is already in use" or something similar.
### User-facing consequences
Might not understand why the username change didn't work
### Steps to reproduce
* create two users
* log in to one, and try to give it the same username as the other
### Context
Kolibri 0.7 | 1.0 | not-very-helpful error in profile when a username already exists -
### Observed behavior
In the 'profile' view, try to change your username to one that already exists:

### Expected behavior
Error says "That username is already in use" or something similar.
### User-facing consequences
Might not understand why the username change didn't work
### Steps to reproduce
* create two users
* log in to one, and try to give it the same username as the other
### Context
Kolibri 0.7 | non_priority | not very helpful error in profile when a username already exists observed behavior in the profile view try to change your username to one that already exists expected behavior error says that username is already in use or something similar user facing consequences might not understand why the username change didn t work steps to reproduce create two users log in to one and try to give it the same username as the other context kolibri | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.