Dataset columns (dataset-viewer schema summary):

- `Unnamed: 0` — int64, 0 to 832k
- `id` — float64, 2.49B to 32.1B
- `type` — string, 1 class
- `created_at` — string, length 19
- `repo` — string, length 7 to 112
- `repo_url` — string, length 36 to 141
- `action` — string, 3 classes
- `title` — string, length 1 to 744
- `labels` — string, length 4 to 574
- `body` — string, length 9 to 211k
- `index` — string, 10 classes
- `text_combine` — string, length 96 to 211k
- `label` — string, 2 classes
- `text` — string, length 96 to 188k
- `binary_label` — int64, 0 or 1
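The schema above can be mirrored as a small pandas frame. This is a toy sketch, not the dataset loader: the two rows are abridged from the records below, and the re-derivation of `binary_label` from `label` is an assumption about how the encoding was produced.

```python
import pandas as pd

# Toy frame mirroring the schema above (two abridged rows from the records below).
df = pd.DataFrame({
    "id": [15_212_060_112, 7_481_374_047],
    "type": ["IssuesEvent", "IssuesEvent"],
    "created_at": ["2021-02-17 09:55:09", "2018-04-04 20:27:23"],
    "repo": ["prisma/prisma", "godotengine/godot"],
    "action": ["closed", "closed"],
    "label": ["process", "non_process"],
    "binary_label": [1, 0],
})

# Assumption: binary_label is the integer encoding of label
# (process -> 1, non_process -> 0), which holds for every record shown here.
recoded = (df["label"] == "process").astype(int)
print((recoded == df["binary_label"]).all())
```

Every record in this excerpt is consistent with that encoding, so the check prints `True` here.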
Unnamed: 0: 12,830 · id: 15,212,060,112 · type: IssuesEvent · created_at: 2021-02-17 09:55:09
repo: prisma/prisma · repo_url: https://api.github.com/repos/prisma/prisma · action: closed
title: Error: Failure during a migration command: Connector error. (error: Error querying the database: Error querying the database: db error: ERROR: cannot drop type country_status_enum because other objects depend on it
labels: bug/2-confirmed kind/bug process/candidate team/migrations tech/engines topic: migrate
body:
|
<!-- If required, please update the title to be clear and descriptive -->
Command: `prisma migrate up --experimental --verbose`
Version: `2.3.0`
Binary Version: `e11114fa1ea826f9e7b4fa1ced34e78892fe8e0e`
Report: https://prisma-errors.netlify.app/report/9746
OS: `x64 darwin 19.5.0`
JS Stacktrace:
```
Error: Failure during a migration command: Connector error. (error: Error querying the database: Error querying the database: db error: ERROR: cannot drop type country_status_enum because other objects depend on it
0: migration_core::api::ApplyMigration
with migration_id="20200803181634-init"
at migration-engine/core/src/api.rs:77)
at Object.<anonymous> (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:2135004)
at MigrateEngine.handleResponse (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:2133212)
at LineStream.<anonymous> (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:2134669)
at LineStream.emit (events.js:314:20)
at addChunk (_stream_readable.js:304:12)
at readableAddChunk (_stream_readable.js:280:9)
at LineStream.Readable.push (_stream_readable.js:219:10)
at LineStream.Transform.push (_stream_transform.js:166:32)
at LineStream._pushBuffer (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:1819384)
at LineStream._transform (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:1819205)
```
Rust Stacktrace:
```
Failure during a migration command: Connector error. (error: Error querying the database: Error querying the database: db error: ERROR: cannot drop type country_status_enum because other objects depend on it
0: migration_core::api::ApplyMigration
with migration_id="20200803181634-init"
at migration-engine/core/src/api.rs:77)
```
index: 1.0
text_combine:
Error: Failure during a migration command: Connector error. (error: Error querying the database: Error querying the database: db error: ERROR: cannot drop type country_status_enum because other objects depend on it - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma migrate up --experimental --verbose`
Version: `2.3.0`
Binary Version: `e11114fa1ea826f9e7b4fa1ced34e78892fe8e0e`
Report: https://prisma-errors.netlify.app/report/9746
OS: `x64 darwin 19.5.0`
JS Stacktrace:
```
Error: Failure during a migration command: Connector error. (error: Error querying the database: Error querying the database: db error: ERROR: cannot drop type country_status_enum because other objects depend on it
0: migration_core::api::ApplyMigration
with migration_id="20200803181634-init"
at migration-engine/core/src/api.rs:77)
at Object.<anonymous> (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:2135004)
at MigrateEngine.handleResponse (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:2133212)
at LineStream.<anonymous> (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:2134669)
at LineStream.emit (events.js:314:20)
at addChunk (_stream_readable.js:304:12)
at readableAddChunk (_stream_readable.js:280:9)
at LineStream.Readable.push (_stream_readable.js:219:10)
at LineStream.Transform.push (_stream_transform.js:166:32)
at LineStream._pushBuffer (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:1819384)
at LineStream._transform (/Users/arubaito/thelab/10factory/elibro-api/node_modules/@prisma/cli/build/index.js:2:1819205)
```
Rust Stacktrace:
```
Failure during a migration command: Connector error. (error: Error querying the database: Error querying the database: db error: ERROR: cannot drop type country_status_enum because other objects depend on it
0: migration_core::api::ApplyMigration
with migration_id="20200803181634-init"
at migration-engine/core/src/api.rs:77)
```
label: process
text:
error failure during a migration command connector error error error querying the database error querying the database db error error cannot drop type country status enum because other objects depend on it command prisma migrate up experimental verbose version binary version report os darwin js stacktrace error failure during a migration command connector error error error querying the database error querying the database db error error cannot drop type country status enum because other objects depend on it migration core api applymigration with migration id init at migration engine core src api rs at object users arubaito thelab elibro api node modules prisma cli build index js at migrateengine handleresponse users arubaito thelab elibro api node modules prisma cli build index js at linestream users arubaito thelab elibro api node modules prisma cli build index js at linestream emit events js at addchunk stream readable js at readableaddchunk stream readable js at linestream readable push stream readable js at linestream transform push stream transform js at linestream pushbuffer users arubaito thelab elibro api node modules prisma cli build index js at linestream transform users arubaito thelab elibro api node modules prisma cli build index js rust stacktrace failure during a migration command connector error error error querying the database error querying the database db error error cannot drop type country status enum because other objects depend on it migration core api applymigration with migration id init at migration engine core src api rs
binary_label: 1

Unnamed: 0: 11,842 · id: 7,481,374,047 · type: IssuesEvent · created_at: 2018-04-04 20:27:23
repo: godotengine/godot · repo_url: https://api.github.com/repos/godotengine/godot · action: closed
title: Add configurable shortcut for "Rename.." file on "FileSystem" editor. [Feature request]
labels: enhancement junior job topic:editor usability
body:
<!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:** v3.0.2 stable official
<!-- Specify commit hash if non-official. -->
**OS/device including version:** Windows 10
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
<!-- What happened, and what was expected. -->
Add configurable shorkey for "Rename.." file on "FileSystem" editor. [Feature request]

There is currently no shortcut.
For windows users is common to rename "things" (files, folders, etc) with the F2 key. I don't know for other OS.
The F2 key is currently used as a shortkey for the "3D" mode. So it could be enough to add the shortkey as "None".

index: True
text_combine:
Add configurable shortcut for "Rename.." file on "FileSystem" editor. [Feature request] - <!-- Please search existing issues for potential duplicates before filing yours:
https://github.com/godotengine/godot/issues?q=is%3Aissue
-->
**Godot version:** v3.0.2 stable official
<!-- Specify commit hash if non-official. -->
**OS/device including version:** Windows 10
<!-- Specify GPU model and drivers if graphics-related. -->
**Issue description:**
<!-- What happened, and what was expected. -->
Add configurable shorkey for "Rename.." file on "FileSystem" editor. [Feature request]

There is currently no shortcut.
For windows users is common to rename "things" (files, folders, etc) with the F2 key. I don't know for other OS.
The F2 key is currently used as a shortkey for the "3D" mode. So it could be enough to add the shortkey as "None".

label: non_process
text:
add configurable shortcut for rename file on filesystem editor please search existing issues for potential duplicates before filing yours godot version stable official os device including version windows issue description add configurable shorkey for rename file on filesystem editor there is currently no shortcut for windows users is common to rename things files folders etc with the key i don t know for other os the key is currently used as a shortkey for the mode so it could be enough to add the shortkey as none
binary_label: 0

Unnamed: 0: 125,606 · id: 26,697,050,373 · type: IssuesEvent · created_at: 2023-01-27 11:15:51
repo: OudayAhmed/Assignment-1-DECIDE · repo_url: https://api.github.com/repos/OudayAhmed/Assignment-1-DECIDE · action: closed
title: CMV-6
labels: code
body:
Description: Implement a method for DECIDE().
Input: N_PTS, DIST, NUMPOINTS
Output: Boolean
There exists at least one set of N PTS consecutive data points such that at least one of the
points lies a distance greater than DIST from the line joining the first and last of these N PTS
points. If the first and last points of these N PTS are identical, then the calculated distance
to compare with DIST will be the distance from the coincident point to all other points of
the N PTS consecutive points. The condition is not met when NUMPOINTS < 3.
(3 ≤ N PTS ≤ NUMPOINTS), (0 ≤ DIST)
index: 1.0
text_combine:
CMV-6 - Description: Implement a method for DECIDE().
Input: N_PTS, DIST, NUMPOINTS
Output: Boolean
There exists at least one set of N PTS consecutive data points such that at least one of the
points lies a distance greater than DIST from the line joining the first and last of these N PTS
points. If the first and last points of these N PTS are identical, then the calculated distance
to compare with DIST will be the distance from the coincident point to all other points of
the N PTS consecutive points. The condition is not met when NUMPOINTS < 3.
(3 ≤ N PTS ≤ NUMPOINTS), (0 ≤ DIST)
label: non_process
text:
cmv description implement a method for decide input n pts dist numpoints output boolean there exists at least one set of n pts consecutive data points such that at least one of the points lies a distance greater than dist from the line joining the first and last of these n pts points if the first and last points of these n pts are identical then the calculated distance to compare with dist will be the distance from the coincident point to all other points of the n pts consecutive points the condition is not met when numpoints ≤ n pts ≤ numpoints ≤ dist
binary_label: 0
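The CMV-6 condition quoted in that record is concrete enough to sketch in code. The following is a minimal Python sketch of the stated rule only, not the assignment's actual implementation; the function name and the representation of points as `(x, y)` tuples are assumptions.

```python
import math

def cmv_6(points, n_pts, dist):
    """Sketch of CMV-6: True iff some window of n_pts consecutive points
    contains a point farther than dist from the line joining the window's
    first and last points (or from the coincident endpoint, if they match)."""
    numpoints = len(points)
    if numpoints < 3:  # the condition is not met when NUMPOINTS < 3
        return False
    for start in range(numpoints - n_pts + 1):
        first = points[start]
        last = points[start + n_pts - 1]
        for p in points[start + 1:start + n_pts - 1]:
            if first == last:
                # coincident endpoints: compare distance to the coincident point
                d = math.dist(p, first)
            else:
                (x1, y1), (x2, y2), (x0, y0) = first, last, p
                # perpendicular distance from p to the line through first, last
                d = abs((x2 - x1) * (y1 - y0) - (x1 - x0) * (y2 - y1)) / math.dist(first, last)
            if d > dist:
                return True
    return False

print(cmv_6([(0, 0), (0, 2), (4, 0)], 3, 1))  # (0, 2) lies 2 > 1 from the line y = 0 -> True
```

Note the two branches mirror the spec: the perpendicular-distance formula applies when the window's endpoints differ, and the coincident-endpoint fallback measures straight-line distance instead.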
Unnamed: 0: 19,023 · id: 25,030,669,499 · type: IssuesEvent · created_at: 2022-11-04 12:05:34
repo: bazelbuild/bazel · repo_url: https://api.github.com/repos/bazelbuild/bazel · action: closed
title: Release 5.3.2 - October 2022
labels: P1 type: process release team-OSS
body:
# Status of Bazel 5.3.2
- Expected release date: 10.19.2022
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/43)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 5.3, simply send a PR against the `release-5.3.2` branch.
Task list:
- [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [ ] Send for review the release announcement PR:
- [ ] Push the release, notify package maintainers:
- [ ] Update the documentation
- [ ] Push the blog post
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
index: 1.0
text_combine:
Release 5.3.2 - October 2022 - # Status of Bazel 5.3.2
- Expected release date: 10.19.2022
- [List of release blockers](https://github.com/bazelbuild/bazel/milestone/43)
To report a release-blocking bug, please add a comment with the text `@bazel-io flag` to the issue. A release manager will triage it and add it to the milestone.
To cherry-pick a mainline commit into 5.3, simply send a PR against the `release-5.3.2` branch.
Task list:
- [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [ ] Send for review the release announcement PR:
- [ ] Push the release, notify package maintainers:
- [ ] Update the documentation
- [ ] Push the blog post
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
label: process
text:
release october status of bazel expected release date to report a release blocking bug please add a comment with the text bazel io flag to the issue a release manager will triage it and add it to the milestone to cherry pick a mainline commit into simply send a pr against the release branch task list send for review the release announcement pr push the release notify package maintainers update the documentation push the blog post update the
binary_label: 1

Unnamed: 0: 137,272 · id: 11,105,778,810 · type: IssuesEvent · created_at: 2019-12-17 10:30:14
repo: microsoft/AzureStorageExplorer · repo_url: https://api.github.com/repos/microsoft/AzureStorageExplorer · action: opened
title: Add 'a' in the tooltip of the 'Clone with New Name' button for one file share
labels: :gear: files 🧪 testing
body:
**Storage Explorer Version:** 1.11.2
**Build:** [20191216.5](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3326258)
**Branch:** hotfix/1.11.2
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Expand one storage account -> File Shares.
2. Create a new file share -> Open it -> Hover over the 'Clone with New Name...' button on toolbar.
3. Observe the tooltip.
**Expect Experience:**
The tooltip shows 'Clone selected file or directory with **a** new name'.
**Actual Experience:**
The tooltip shows 'Clone selected file or directory with new name'.

**More Info:**
This issue doesn't reproduce for one blob container.

index: 1.0
text_combine:
Add 'a' in the tooltip of the 'Clone with New Name' button for one file share - **Storage Explorer Version:** 1.11.2
**Build:** [20191216.5](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3326258)
**Branch:** hotfix/1.11.2
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/macOS High Sierra
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Expand one storage account -> File Shares.
2. Create a new file share -> Open it -> Hover over the 'Clone with New Name...' button on toolbar.
3. Observe the tooltip.
**Expect Experience:**
The tooltip shows 'Clone selected file or directory with **a** new name'.
**Actual Experience:**
The tooltip shows 'Clone selected file or directory with new name'.

**More Info:**
This issue doesn't reproduce for one blob container.

label: non_process
text:
add a in the tooltip of the clone with new name button for one file share storage explorer version build branch hotfix platform os windows linux ubuntu macos high sierra architecture regression from not a regression steps to reproduce expand one storage account file shares create a new file share open it hover over the clone with new name button on toolbar observe the tooltip expect experience the tooltip shows clone selected file or directory with a new name actual experience the tooltip shows clone selected file or directory with new name more info this issue doesn t reproduce for one blob container
binary_label: 0

Unnamed: 0: 22,248 · id: 30,801,673,279 · type: IssuesEvent · created_at: 2023-08-01 02:17:00
repo: cncf/tag-security · repo_url: https://api.github.com/repos/cncf/tag-security · action: closed
title: Lightweight Threat Modelling Guidance for CNCF Projects
labels: assessment-process audit-process project
body:
Description: A lightweight threat modelling framework can help to increase the STAG's security review velocity. Also provides maintainers with an effective mechanism to drive secure feature development.
Impact: Reduce the time investment for STAG reviewers, lower the barrier to entry for new contributors, and widen the pool of individuals that can participate in the threat modelling process.
Scope: To generate a checklist for threat modelling, some recommended tooling, and distilled bullet points to help guide the process.
Prior art:
- [Mozillia Rapid Risk Assessment](https://infosec.mozilla.org/guidelines/risk/rapid_risk_assessment.html)
- [Adam Shostack's STRIDE methodology](https://shostack.org/files/essays/uncover/)
Docs:
-https://docs.google.com/document/d/1tuGtKrjcreDFlHcXYCTjLvy3mjyamdQzwCZr6uqFcR4/edit#heading=h.hc3y1ed9v90a
General timeline:
- 7 Dec: FLUX multi-tenancy threat model exercise
- Review/post-mortem and evaluation of threat model exercise
- Reconcile threat modeling into security assessment process
- Maybe trying out another with another model?
- [??] Integrate with security assessments guide #999
index: 2.0
text_combine:
Lightweight Threat Modelling Guidance for CNCF Projects - Description: A lightweight threat modelling framework can help to increase the STAG's security review velocity. Also provides maintainers with an effective mechanism to drive secure feature development.
Impact: Reduce the time investment for STAG reviewers, lower the barrier to entry for new contributors, and widen the pool of individuals that can participate in the threat modelling process.
Scope: To generate a checklist for threat modelling, some recommended tooling, and distilled bullet points to help guide the process.
Prior art:
- [Mozillia Rapid Risk Assessment](https://infosec.mozilla.org/guidelines/risk/rapid_risk_assessment.html)
- [Adam Shostack's STRIDE methodology](https://shostack.org/files/essays/uncover/)
Docs:
-https://docs.google.com/document/d/1tuGtKrjcreDFlHcXYCTjLvy3mjyamdQzwCZr6uqFcR4/edit#heading=h.hc3y1ed9v90a
General timeline:
- 7 Dec: FLUX multi-tenancy threat model exercise
- Review/post-mortem and evaluation of threat model exercise
- Reconcile threat modeling into security assessment process
- Maybe trying out another with another model?
- [??] Integrate with security assessments guide #999
label: process
text:
lightweight threat modelling guidance for cncf projects description a lightweight threat modelling framework can help to increase the stag s security review velocity also provides maintainers with an effective mechanism to drive secure feature development impact reduce the time investment for stag reviewers lower the barrier to entry for new contributors and widen the pool of individuals that can participate in the threat modelling process scope to generate a checklist for threat modelling some recommended tooling and distilled bullet points to help guide the process prior art docs general timeline dec flux multi tenancy threat model exercise review post mortem and evaluation of threat model exercise reconcile threat modeling into security assessment process maybe trying out another with another model integrate with security assessments guide
binary_label: 1

Unnamed: 0: 152,185 · id: 19,680,204,654 · type: IssuesEvent · created_at: 2022-01-11 16:04:47
repo: brightcove/videojs-overlay · repo_url: https://api.github.com/repos/brightcove/videojs-overlay · action: opened
title: CVE-2022-21670 (Medium) detected in markdown-it-8.3.2.tgz
labels: security vulnerability
body:
## CVE-2022-21670 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>markdown-it-8.3.2.tgz</b></p></summary>
<p>Markdown-it - modern pluggable markdown parser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/markdown-it/-/markdown-it-8.3.2.tgz">https://registry.npmjs.org/markdown-it/-/markdown-it-8.3.2.tgz</a></p>
<p>
Dependency Hierarchy:
- jsdoc-git+https://github.com/BrandonOCasey/jsdoc.git#da41874b82ee87a28b4f615cf5306c6f84e53d57.tgz (Root Library)
- :x: **markdown-it-8.3.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
markdown-it is a Markdown parser. Prior to version 1.3.2, special patterns with length greater than 50 thousand characterss could slow down the parser significantly. Users should upgrade to version 12.3.2 to receive a patch. There are no known workarounds aside from upgrading.
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21670>CVE-2022-21670</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markdown-it/markdown-it/security/advisories/GHSA-6vfc-qv3f-vr6c">https://github.com/markdown-it/markdown-it/security/advisories/GHSA-6vfc-qv3f-vr6c</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution: markdown-it - 12.3.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"markdown-it","packageVersion":"8.3.2","packageFilePaths":[null],"isTransitiveDependency":true,"dependencyTree":"jsdoc:git+https://github.com/BrandonOCasey/jsdoc.git#da41874b82ee87a28b4f615cf5306c6f84e53d57;markdown-it:8.3.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"markdown-it - 12.3.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-21670","vulnerabilityDetails":"markdown-it is a Markdown parser. Prior to version 1.3.2, special patterns with length greater than 50 thousand characterss could slow down the parser significantly. Users should upgrade to version 12.3.2 to receive a patch. There are no known workarounds aside from upgrading.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21670","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
index: True
text_combine:
CVE-2022-21670 (Medium) detected in markdown-it-8.3.2.tgz - ## CVE-2022-21670 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>markdown-it-8.3.2.tgz</b></p></summary>
<p>Markdown-it - modern pluggable markdown parser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/markdown-it/-/markdown-it-8.3.2.tgz">https://registry.npmjs.org/markdown-it/-/markdown-it-8.3.2.tgz</a></p>
<p>
Dependency Hierarchy:
- jsdoc-git+https://github.com/BrandonOCasey/jsdoc.git#da41874b82ee87a28b4f615cf5306c6f84e53d57.tgz (Root Library)
- :x: **markdown-it-8.3.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
markdown-it is a Markdown parser. Prior to version 1.3.2, special patterns with length greater than 50 thousand characterss could slow down the parser significantly. Users should upgrade to version 12.3.2 to receive a patch. There are no known workarounds aside from upgrading.
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21670>CVE-2022-21670</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/markdown-it/markdown-it/security/advisories/GHSA-6vfc-qv3f-vr6c">https://github.com/markdown-it/markdown-it/security/advisories/GHSA-6vfc-qv3f-vr6c</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution: markdown-it - 12.3.2</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"markdown-it","packageVersion":"8.3.2","packageFilePaths":[null],"isTransitiveDependency":true,"dependencyTree":"jsdoc:git+https://github.com/BrandonOCasey/jsdoc.git#da41874b82ee87a28b4f615cf5306c6f84e53d57;markdown-it:8.3.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"markdown-it - 12.3.2","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2022-21670","vulnerabilityDetails":"markdown-it is a Markdown parser. Prior to version 1.3.2, special patterns with length greater than 50 thousand characterss could slow down the parser significantly. Users should upgrade to version 12.3.2 to receive a patch. There are no known workarounds aside from upgrading.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21670","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
label: non_process
text:
cve medium detected in markdown it tgz cve medium severity vulnerability vulnerable library markdown it tgz markdown it modern pluggable markdown parser library home page a href dependency hierarchy jsdoc git root library x markdown it tgz vulnerable library found in base branch master vulnerability details markdown it is a markdown parser prior to version special patterns with length greater than thousand characterss could slow down the parser significantly users should upgrade to version to receive a patch there are no known workarounds aside from upgrading publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution markdown it isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree jsdoc git isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails markdown it is a markdown parser prior to version special patterns with length greater than thousand characterss could slow down the parser significantly users should upgrade to version to receive a patch there are no known workarounds aside from upgrading vulnerabilityurl
binary_label: 0

Unnamed: 0: 8,146 · id: 11,354,713,494 · type: IssuesEvent · created_at: 2020-01-24 18:16:53
repo: googleapis/java-billingbudgets · repo_url: https://api.github.com/repos/googleapis/java-billingbudgets · action: closed
title: Promote to Beta
labels: type: process
body:
Package name: **google-cloud-billingbudgets**
Current release: **alpha**
Proposed release: **beta**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] Server API is beta or GA
- [ ] Service API is public
- [ ] Client surface is mostly stable (no known issues that could significantly change the surface)
- [ ] All manual types and methods have comment documentation
- [ ] Package name is idiomatic for the platform
- [ ] At least one integration/smoke test is defined and passing
- [ ] Central GitHub README lists and points to the per-API README
- [ ] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
index: 1.0
text_combine:
Promote to Beta - Package name: **google-cloud-billingbudgets**
Current release: **alpha**
Proposed release: **beta**
## Instructions
Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue.
## Required
- [ ] Server API is beta or GA
- [ ] Service API is public
- [ ] Client surface is mostly stable (no known issues that could significantly change the surface)
- [ ] All manual types and methods have comment documentation
- [ ] Package name is idiomatic for the platform
- [ ] At least one integration/smoke test is defined and passing
- [ ] Central GitHub README lists and points to the per-API README
- [ ] Per-API README links to product page on cloud.google.com
- [ ] Manual code has been reviewed for API stability by repo owner
## Optional
- [ ] Most common / important scenarios have descriptive samples
- [ ] Public manual methods have at least one usage sample each (excluding overloads)
- [ ] Per-API README includes a full description of the API
- [ ] Per-API README contains at least one “getting started” sample using the most common API scenario
- [ ] Manual code has been reviewed by API producer
- [ ] Manual code has been reviewed by a DPE responsible for samples
- [ ] 'Client LIbraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
label: process
text:
promote to beta package name google cloud billingbudgets current release alpha proposed release beta instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required server api is beta or ga service api is public client surface is mostly stable no known issues that could significantly change the surface all manual types and methods have comment documentation package name is idiomatic for the platform at least one integration smoke test is defined and passing central github readme lists and points to the per api readme per api readme links to product page on cloud google com manual code has been reviewed for api stability by repo owner optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
| 1
|
12,999
| 2,732,850,479
|
IssuesEvent
|
2015-04-17 09:44:37
|
tiku01/oryx-editor
|
https://api.github.com/repos/tiku01/oryx-editor
|
closed
|
Spelling error in ORYX.Core.Canvas attribute
|
auto-migrated Priority-Medium Type-Defect
|
```
ORYX.Core.Canvas sets an attribute, line-heigth. Most likely, line-height was
intended.
At time of writing, the relevant code can be seen at
http://oryx-editor.googlecode.com/svn/trunk/editor/client/scripts/Core/canvas.js
, line 122:
this.node.setAttributeNS(null, 'line-heigth', 'normal');
```
Original issue reported on code.google.com by `ro...@mcgovern.id.au` on 18 Feb 2013 at 3:58
|
1.0
|
Spelling error in ORYX.Core.Canvas attribute - ```
ORYX.Core.Canvas sets an attribute, line-heigth. Most likely, line-height was
intended.
At time of writing, the relevant code can be seen at
http://oryx-editor.googlecode.com/svn/trunk/editor/client/scripts/Core/canvas.js
, line 122:
this.node.setAttributeNS(null, 'line-heigth', 'normal');
```
Original issue reported on code.google.com by `ro...@mcgovern.id.au` on 18 Feb 2013 at 3:58
|
non_process
|
spelling error in oryx core canvas attribute oryx core canvas sets an attribute line heigth most likely line height was intended at time of writing the relevant code can be seen at line this node setattributens null line heigth normal original issue reported on code google com by ro mcgovern id au on feb at
| 0
|
11,330
| 14,143,571,672
|
IssuesEvent
|
2020-11-10 15:25:47
|
xr3ngine/xr3ngine
|
https://api.github.com/repos/xr3ngine/xr3ngine
|
closed
|
Optimize CDN assets as they are processed.
|
media-processing server
|
Change format and resolution.
Theatre example.
Huge thumbnails are being processed on the client every time! We need all of them resized down to 512*512 on the server.
|
1.0
|
Optimize CDN assets as they are processed. - Change format and resolution.
Theatre example.
Huge thumbnails are being processed on the client every time! We need all of them resized down to 512*512 on the server.
|
process
|
optimize cdn assets as they are processed change format and resolution theatre example huge thumbnails are being processed on the client every time we need all of them resized down to on the server
| 1
|
11,094
| 13,936,861,171
|
IssuesEvent
|
2020-10-22 13:28:48
|
w3c/transitions
|
https://api.github.com/repos/w3c/transitions
|
opened
|
How to close the loop on superseding/obsoleting Recommendations?
|
Process Issue
|
The transitions requirements are not clear on what happens once an AC review to supersede/obsolete a Recommendation is over.
|
1.0
|
How to close the loop on superseding/obsoleting Recommendations? - The transitions requirements are not clear on what happens once an AC review to supersede/obsolete a Recommendation is over.
|
process
|
how to close the loop on superseding obsoleting recommendations the transitions requirements are not clear on what happens once an ac review to supersede obsolete a recommendation is over
| 1
|
442,416
| 12,745,214,323
|
IssuesEvent
|
2020-06-26 13:53:28
|
nokazn/spotify-player
|
https://api.github.com/repos/nokazn/spotify-player
|
closed
|
背景色
|
0. low-priority 2. UI/UX 🎨
|
## TODO
- [x] 各ページの読み込み時にセットする (`beforeDestroy`時にリセットしない)
- ヘッダーとコンテンツ表示部の色がずれる瞬間がなくなる
- 追加で設定するページ
- デフォルト
- [x] `/library/tracks`
- ~~`/library/releases`~~
- ~~`/library/artists`~~
- リセット
- [x] ログイン `/login`
- [x] アカウント `/account`
- [x] トップ `/`
- [x] `/library/releases`
- [x] `/library/artists`
- [x] `/browse`
- [x] `/genres/:genreId`
- [x] デフォルトの色をもう少し明るくする。 -> 暗くする割合を `0.9` にする
|
1.0
|
背景色 - ## TODO
- [x] 各ページの読み込み時にセットする (`beforeDestroy`時にリセットしない)
- ヘッダーとコンテンツ表示部の色がずれる瞬間がなくなる
- 追加で設定するページ
- デフォルト
- [x] `/library/tracks`
- ~~`/library/releases`~~
- ~~`/library/artists`~~
- リセット
- [x] ログイン `/login`
- [x] アカウント `/account`
- [x] トップ `/`
- [x] `/library/releases`
- [x] `/library/artists`
- [x] `/browse`
- [x] `/genres/:genreId`
- [x] デフォルトの色をもう少し明るくする。 -> 暗くする割合を `0.9` にする
|
non_process
|
背景色 todo 各ページの読み込み時にセットする beforedestroy 時にリセットしない ヘッダーとコンテンツ表示部の色がずれる瞬間がなくなる 追加で設定するページ デフォルト library tracks library releases library artists リセット ログイン login アカウント account トップ library releases library artists browse genres genreid デフォルトの色をもう少し明るくする。 暗くする割合を にする
| 0
|
80,906
| 15,589,847,241
|
IssuesEvent
|
2021-03-18 08:36:54
|
TIBCOSoftware/TCSTK-Angular
|
https://api.github.com/repos/TIBCOSoftware/TCSTK-Angular
|
opened
|
CVE-2020-28498 (Medium) detected in elliptic-6.5.3.tgz
|
security vulnerability
|
## CVE-2020-28498 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.3.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.3.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.3.tgz</a></p>
<p>Path to dependency file: TCSTK-Angular/package.json</p>
<p>Path to vulnerable library: TCSTK-Angular/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1100.7.tgz (Root Library)
- webpack-4.44.2.tgz
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.2.1.tgz
- :x: **elliptic-6.5.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/TCSTK-Angular/commit/d1b6477f436bdf55dbed46ee5ed582741e66dbe7">d1b6477f436bdf55dbed46ee5ed582741e66dbe7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package elliptic before 6.5.4 are vulnerable to Cryptographic Issues via the secp256k1 implementation in elliptic/ec/key.js. There is no check to confirm that the public key point passed into the derive function actually exists on the secp256k1 curve. This results in the potential for the private key used in this implementation to be revealed after a number of ECDH operations are performed.
<p>Publish Date: 2021-02-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28498>CVE-2020-28498</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498</a></p>
<p>Release Date: 2021-02-02</p>
<p>Fix Resolution: v6.5.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"elliptic","packageVersion":"6.5.3","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.1100.7;webpack:4.44.2;node-libs-browser:2.2.1;crypto-browserify:3.12.0;browserify-sign:4.2.1;elliptic:6.5.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v6.5.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28498","vulnerabilityDetails":"The package elliptic before 6.5.4 are vulnerable to Cryptographic Issues via the secp256k1 implementation in elliptic/ec/key.js. There is no check to confirm that the public key point passed into the derive function actually exists on the secp256k1 curve. This results in the potential for the private key used in this implementation to be revealed after a number of ECDH operations are performed.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28498","cvss3Severity":"medium","cvss3Score":"6.8","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-28498 (Medium) detected in elliptic-6.5.3.tgz - ## CVE-2020-28498 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.5.3.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.5.3.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.5.3.tgz</a></p>
<p>Path to dependency file: TCSTK-Angular/package.json</p>
<p>Path to vulnerable library: TCSTK-Angular/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.1100.7.tgz (Root Library)
- webpack-4.44.2.tgz
- node-libs-browser-2.2.1.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.2.1.tgz
- :x: **elliptic-6.5.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/TIBCOSoftware/TCSTK-Angular/commit/d1b6477f436bdf55dbed46ee5ed582741e66dbe7">d1b6477f436bdf55dbed46ee5ed582741e66dbe7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package elliptic before 6.5.4 are vulnerable to Cryptographic Issues via the secp256k1 implementation in elliptic/ec/key.js. There is no check to confirm that the public key point passed into the derive function actually exists on the secp256k1 curve. This results in the potential for the private key used in this implementation to be revealed after a number of ECDH operations are performed.
<p>Publish Date: 2021-02-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28498>CVE-2020-28498</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-28498</a></p>
<p>Release Date: 2021-02-02</p>
<p>Fix Resolution: v6.5.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"elliptic","packageVersion":"6.5.3","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@angular-devkit/build-angular:0.1100.7;webpack:4.44.2;node-libs-browser:2.2.1;crypto-browserify:3.12.0;browserify-sign:4.2.1;elliptic:6.5.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v6.5.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-28498","vulnerabilityDetails":"The package elliptic before 6.5.4 are vulnerable to Cryptographic Issues via the secp256k1 implementation in elliptic/ec/key.js. There is no check to confirm that the public key point passed into the derive function actually exists on the secp256k1 curve. This results in the potential for the private key used in this implementation to be revealed after a number of ECDH operations are performed.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28498","cvss3Severity":"medium","cvss3Score":"6.8","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Changed","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in elliptic tgz cve medium severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file tcstk angular package json path to vulnerable library tcstk angular node modules elliptic package json dependency hierarchy build angular tgz root library webpack tgz node libs browser tgz crypto browserify tgz browserify sign tgz x elliptic tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package elliptic before are vulnerable to cryptographic issues via the implementation in elliptic ec key js there is no check to confirm that the public key point passed into the derive function actually exists on the curve this results in the potential for the private key used in this implementation to be revealed after a number of ecdh operations are performed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree angular devkit build angular webpack node libs browser crypto browserify browserify sign elliptic isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the package elliptic before are vulnerable to cryptographic issues via the implementation in elliptic ec key js there is no check to confirm that the public key point passed into the derive function actually exists on the curve this results in the potential for the private key used in this implementation to be revealed after a number of ecdh operations are performed 
vulnerabilityurl
| 0
|
99,361
| 16,446,160,619
|
IssuesEvent
|
2021-05-20 19:52:03
|
Dima2021/NodeGoat
|
https://api.github.com/repos/Dima2021/NodeGoat
|
opened
|
CVE-2017-16032 (Medium) detected in brace-expansion-1.1.6.tgz
|
security vulnerability
|
## CVE-2017-16032 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>brace-expansion-1.1.6.tgz</b></p></summary>
<p>Brace expansion as known from sh/bash</p>
<p>Library home page: <a href="https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz">https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz</a></p>
<p>Path to dependency file: NodeGoat/package.json</p>
<p>Path to vulnerable library: NodeGoat/node_modules/npm/node_modules/node-gyp/node_modules/minimatch/node_modules/brace-expansion/package.json,NodeGoat/node_modules/npm/node_modules/init-package-json/node_modules/glob/node_modules/minimatch/node_modules/brace-expansion/package.json,NodeGoat/node_modules/npm/node_modules/fstream-npm/node_modules/fstream-ignore/node_modules/minimatch/node_modules/brace-expansion/package.json,NodeGoat/node_modules/npm/node_modules/read-package-json/node_modules/glob/node_modules/minimatch/node_modules/brace-expansion/package.json,NodeGoat/node_modules/nyc/node_modules/brace-expansion/package.json,NodeGoat/node_modules/npm/node_modules/glob/node_modules/minimatch/node_modules/brace-expansion/package.json</p>
<p>
Dependency Hierarchy:
- grunt-npm-install-0.3.1.tgz (Root Library)
- npm-3.10.10.tgz
- read-package-json-2.0.4.tgz
- glob-6.0.4.tgz
- minimatch-3.0.3.tgz
- :x: **brace-expansion-1.1.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/NodeGoat/commit/0301d3c3a84246b82928f324214ed9d2d757798e">0301d3c3a84246b82928f324214ed9d2d757798e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
brace-expansion before 1.1.7 are vulnerable to a regular expression denial of service.
<p>Publish Date: 2020-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16032>CVE-2017-16032</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/338">https://www.npmjs.com/advisories/338</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: v1.1.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"brace-expansion","packageVersion":"1.1.6","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-npm-install:0.3.1;npm:3.10.10;read-package-json:2.0.4;glob:6.0.4;minimatch:3.0.3;brace-expansion:1.1.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.1.7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-16032","vulnerabilityDetails":"brace-expansion before 1.1.7 are vulnerable to a regular expression denial of service.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16032","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Unchanged","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-16032 (Medium) detected in brace-expansion-1.1.6.tgz - ## CVE-2017-16032 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>brace-expansion-1.1.6.tgz</b></p></summary>
<p>Brace expansion as known from sh/bash</p>
<p>Library home page: <a href="https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz">https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz</a></p>
<p>Path to dependency file: NodeGoat/package.json</p>
<p>Path to vulnerable library: NodeGoat/node_modules/npm/node_modules/node-gyp/node_modules/minimatch/node_modules/brace-expansion/package.json,NodeGoat/node_modules/npm/node_modules/init-package-json/node_modules/glob/node_modules/minimatch/node_modules/brace-expansion/package.json,NodeGoat/node_modules/npm/node_modules/fstream-npm/node_modules/fstream-ignore/node_modules/minimatch/node_modules/brace-expansion/package.json,NodeGoat/node_modules/npm/node_modules/read-package-json/node_modules/glob/node_modules/minimatch/node_modules/brace-expansion/package.json,NodeGoat/node_modules/nyc/node_modules/brace-expansion/package.json,NodeGoat/node_modules/npm/node_modules/glob/node_modules/minimatch/node_modules/brace-expansion/package.json</p>
<p>
Dependency Hierarchy:
- grunt-npm-install-0.3.1.tgz (Root Library)
- npm-3.10.10.tgz
- read-package-json-2.0.4.tgz
- glob-6.0.4.tgz
- minimatch-3.0.3.tgz
- :x: **brace-expansion-1.1.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/NodeGoat/commit/0301d3c3a84246b82928f324214ed9d2d757798e">0301d3c3a84246b82928f324214ed9d2d757798e</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
brace-expansion before 1.1.7 are vulnerable to a regular expression denial of service.
<p>Publish Date: 2020-07-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16032>CVE-2017-16032</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/338">https://www.npmjs.com/advisories/338</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: v1.1.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"brace-expansion","packageVersion":"1.1.6","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-npm-install:0.3.1;npm:3.10.10;read-package-json:2.0.4;glob:6.0.4;minimatch:3.0.3;brace-expansion:1.1.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v1.1.7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-16032","vulnerabilityDetails":"brace-expansion before 1.1.7 are vulnerable to a regular expression denial of service.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16032","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Unchanged","C":"Low","UI":"Required","AV":"Local","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in brace expansion tgz cve medium severity vulnerability vulnerable library brace expansion tgz brace expansion as known from sh bash library home page a href path to dependency file nodegoat package json path to vulnerable library nodegoat node modules npm node modules node gyp node modules minimatch node modules brace expansion package json nodegoat node modules npm node modules init package json node modules glob node modules minimatch node modules brace expansion package json nodegoat node modules npm node modules fstream npm node modules fstream ignore node modules minimatch node modules brace expansion package json nodegoat node modules npm node modules read package json node modules glob node modules minimatch node modules brace expansion package json nodegoat node modules nyc node modules brace expansion package json nodegoat node modules npm node modules glob node modules minimatch node modules brace expansion package json dependency hierarchy grunt npm install tgz root library npm tgz read package json tgz glob tgz minimatch tgz x brace expansion tgz vulnerable library found in head commit a href found in base branch master vulnerability details brace expansion before are vulnerable to a regular expression denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt npm install npm read package json glob minimatch brace expansion isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails brace expansion 
before are vulnerable to a regular expression denial of service vulnerabilityurl
| 0
|
10,946
| 13,756,383,134
|
IssuesEvent
|
2020-10-06 19:51:07
|
oppia/oppia-android
|
https://api.github.com/repos/oppia/oppia-android
|
closed
|
Verify that the alpha build is working & deployed from Play Store
|
Priority: Essential Status: In implementation Type: Process
|
This is a process issue to verify that the deployment from the Play Store was successful.
|
1.0
|
Verify that the alpha build is working & deployed from Play Store - This is a process issue to verify that the deployment from the Play Store was successful.
|
process
|
verify that the alpha build is working deployed from play store this is a process issue to verify that the deployment from the play store was successful
| 1
|
12,089
| 14,740,063,436
|
IssuesEvent
|
2021-01-07 08:27:09
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Orlando - SA Billing - Late Fee Account List
|
anc-process anp-important ant-bug has attachment
|
In GitLab by @kdjstudios on Oct 3, 2018, 11:04
[Orlando.xlsx](/uploads/671d4daf54a410291c71a0d18e6bb254/Orlando.xlsx)
HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-49483/conversation
|
1.0
|
Orlando - SA Billing - Late Fee Account List - In GitLab by @kdjstudios on Oct 3, 2018, 11:04
[Orlando.xlsx](/uploads/671d4daf54a410291c71a0d18e6bb254/Orlando.xlsx)
HD: http://www.servicedesk.answernet.com/profiles/ticket/2018-10-03-49483/conversation
|
process
|
orlando sa billing late fee account list in gitlab by kdjstudios on oct uploads orlando xlsx hd
| 1
|
4,939
| 7,795,880,821
|
IssuesEvent
|
2018-06-08 09:36:20
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Speed up plotgen3 by keeping MATLAB resident (Trac #175)
|
Migrated from Trac enhancement post_processing senkbeil@uwm.edu
|
Okay, plotgen3 is painfully slow. This is caused by repeated calls to MATLAB, each one requiring MATLAB to start.
The solution to this is to keep MATLAB running for each subsequent call, we can do this using a FIFO.
To do this follow these steps:
1. ```mkfifo matlab_pipe```
1. ```matlab <> matlab_pipe```
1. ```echo 'matlab_command' > matlab_pipe```
1. ```echo quit > matlab_pipe```
1. ```rm matlab_pipe```
Where 'matlab_command' is the MATLAB command you want to run.
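The same FIFO technique can be sketched in Python, using `cat` as a stand-in for MATLAB (the `matlab_pipe` name and the `disp` command are only illustrative; any long-running, line-oriented process behaves the same way):

```python
import os
import subprocess
import tempfile

# Create the FIFO, as in step 1 (mkfifo matlab_pipe).
workdir = tempfile.mkdtemp()
pipe_path = os.path.join(workdir, "matlab_pipe")
os.mkfifo(pipe_path)

# Open a read-write descriptor first: it never blocks, and it keeps a
# writer registered so the resident process won't see EOF between commands.
# This mirrors the `matlab <> matlab_pipe` redirection in step 2.
wfd = os.open(pipe_path, os.O_RDWR)
rfd = os.open(pipe_path, os.O_RDONLY)

# `cat` stands in for the resident MATLAB process.
proc = subprocess.Popen(["cat"], stdin=rfd, stdout=subprocess.PIPE, text=True)
os.close(rfd)  # the child keeps its own duplicate of the read end

# Steps 3-4: each write is one command sent to the resident process.
os.write(wfd, b"disp('hello')\n")
os.write(wfd, b"quit\n")
os.close(wfd)  # no writers left, so the resident process sees EOF and exits

out, _ = proc.communicate()
print(out.splitlines()[-1])  # -> quit

os.unlink(pipe_path)  # step 5: rm matlab_pipe
os.rmdir(workdir)
```

With real MATLAB the only change would be starting `matlab` instead of `cat`; the FIFO mechanics are identical.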
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/175
```json
{
"status": "closed",
"changetime": "2009-09-02T20:45:18",
"description": "Okay, plotgen3 is painful slow. This is caused by repeated calls to MATLAB, each one requiring MATLAB to start.\n\nThe solution to this is to keep MATLAB running for each subsequent call, we can do this using a FIFO.\n\nTo do this follow these steps:\n\n 1. {{{mkfifo matlab_pipe}}}\n 1. {{{matlab <> matlab_pipe}}}\n 1. {{{echo 'matlab_command' > matlab_pipe}}}\n 1. {{{echo quit > matlab_pipe}}}\n 1. {{{rm matlab_pipe}}}\n\nWhere 'matlab_command' is the MATLAB command you want to run. ",
"reporter": "nielsenb@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1251924318000000",
"component": "post_processing",
"summary": "Speed up plotgen3 back keeping MATLAB resident",
"priority": "major",
"keywords": "",
"time": "2009-08-06T15:41:18",
"milestone": "Plotgen 3.0",
"owner": "senkbeil@uwm.edu",
"type": "enhancement"
}
```
|
1.0
|
Speed up plotgen3 by keeping MATLAB resident (Trac #175) - Okay, plotgen3 is painfully slow. This is caused by repeated calls to MATLAB, each one requiring MATLAB to start.
The solution to this is to keep MATLAB running for each subsequent call, we can do this using a FIFO.
To do this follow these steps:
1. ```mkfifo matlab_pipe```
1. ```matlab <> matlab_pipe```
1. ```echo 'matlab_command' > matlab_pipe```
1. ```echo quit > matlab_pipe```
1. ```rm matlab_pipe```
Where 'matlab_command' is the MATLAB command you want to run.
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/175
```json
{
"status": "closed",
"changetime": "2009-09-02T20:45:18",
"description": "Okay, plotgen3 is painful slow. This is caused by repeated calls to MATLAB, each one requiring MATLAB to start.\n\nThe solution to this is to keep MATLAB running for each subsequent call, we can do this using a FIFO.\n\nTo do this follow these steps:\n\n 1. {{{mkfifo matlab_pipe}}}\n 1. {{{matlab <> matlab_pipe}}}\n 1. {{{echo 'matlab_command' > matlab_pipe}}}\n 1. {{{echo quit > matlab_pipe}}}\n 1. {{{rm matlab_pipe}}}\n\nWhere 'matlab_command' is the MATLAB command you want to run. ",
"reporter": "nielsenb@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1251924318000000",
"component": "post_processing",
"summary": "Speed up plotgen3 back keeping MATLAB resident",
"priority": "major",
"keywords": "",
"time": "2009-08-06T15:41:18",
"milestone": "Plotgen 3.0",
"owner": "senkbeil@uwm.edu",
"type": "enhancement"
}
```
|
process
|
speed up back keeping matlab resident trac okay is painful slow this is caused by repeated calls to matlab each one requiring matlab to start the solution to this is to keep matlab running for each subsequent call we can do this using a fifo to do this follow these steps mkfifo matlab pipe matlab matlab pipe echo matlab command matlab pipe echo quit matlab pipe rm matlab pipe where matlab command is the matlab command you want to run attachments migrated from json status closed changetime description okay is painful slow this is caused by repeated calls to matlab each one requiring matlab to start n nthe solution to this is to keep matlab running for each subsequent call we can do this using a fifo n nto do this follow these steps n n mkfifo matlab pipe n matlab matlab pipe n echo matlab command matlab pipe n echo quit matlab pipe n rm matlab pipe n nwhere matlab command is the matlab command you want to run reporter nielsenb uwm edu cc resolution verified by v larson ts component post processing summary speed up back keeping matlab resident priority major keywords time milestone plotgen owner senkbeil uwm edu type enhancement
| 1
|
41,081
| 5,300,542,475
|
IssuesEvent
|
2017-02-10 05:33:56
|
onyx-platform/onyx
|
https://api.github.com/repos/onyx-platform/onyx
|
closed
|
[ABS] Implement Asynchronous Barrier Snapshotting
|
ABS design feature messaging
|
This issue will be used to track the design and progress of the Asynchronous Barrier Snapshotting algorithm. The work in this issue will replace our current streaming engine - Per-Record processing. This is the same algorithm outlined in [the literature](http://arxiv.org/pdf/1506.08603.pdf), which is itself an iteration of [Chandy/Lamport's work](http://research.microsoft.com/en-us/um/people/lamport/pubs/chandy.pdf).
### Branch
The ABS engine is actively being implemented on the [`abs-engine`](https://github.com/onyx-platform/onyx/tree/abs-engine) branch.
### Glossary
#### Barrier
A barrier is a special message that partitions a stream of messages into finite, discrete portions. Barriers indicate progress in terms of execution stages in an input stream. Barriers are artificial and are injected by Onyx itself. Barriers have monotonically increasing identifiers, starting at `0`.
#### Checkpoint
Checkpoints are values that are durably written to stable storage to indicate the progress of a task in a workflow. The checkpoint always includes a barrier ID for which all messages coming before it were acknowledged, and may contain any snapshotted state.
### Design Discussion
This section hashes out the larger pieces of the design and the considerations that need to be taken.
#### Barrier Injection
Barrier messages need to be injected into each input stream at a deterministic location. So far, we believe the easiest way to do that is by injecting a barrier every `N` messages. That is, if we configured Onyx to inject a barrier every `5` messages, an input stream should look exactly like this, every time, even if its replayed by a completely different process:
```
... 17 16 | 15 14 13 12 11 | 10 9 8 7 6 | 5 4 3 2 1
```
The rightmost side of the text indicates the head of the stream, and the leftmost the tail. The pipe (`|`) indicates a barrier. Barriers carry monotonically increasing identifiers _per task_. That is, if there are 3 input tasks, the first barrier in _each_ task starts at `0`, then proceeds to `1`, and so forth.
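The every-N injection rule can be sketched as a generator (a Python illustration; the Onyx implementation itself is Clojure, and the function name `inject_barriers` is hypothetical, not an Onyx API):

```python
from itertools import count

def inject_barriers(messages, n):
    """Deterministically inject a barrier after every n messages.
    Because placement depends only on message position, a replay of the
    same stream by a different process yields identical barrier IDs in
    identical positions, starting at 0."""
    barrier_ids = count(0)
    for i, msg in enumerate(messages, start=1):
        yield ("segment", msg)
        if i % n == 0:
            yield ("barrier", next(barrier_ids))

# Twelve messages with a barrier every 5, matching the diagram above.
stream = list(inject_barriers(range(1, 13), 5))
```

Determinism here is the key property: any process replaying the same input with the same `n` reconstructs exactly the same partitioning.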
The original paper notes:
> Snapshot coordination is implemented as an actor process on the job manager that keeps a global state for an execution graph of a single job. The coordinator periodically injects stage barriers to all sources of the execution graph. Upon reconfiguration, the last globally snapshotted state is restored in the operators from a distributed in-memory persistent storage.
We do not currently see any reason that a central coordinator would inject messages. Given that message streams are processed at different rates, it's unclear how barriers can deterministically be placed into a stream using this approach. Further, processes can pause as they receive such messages from a coordinator. Perfect synchronicity isn't practical to achieve here.
##### Assumptions
- Every message in an input stream is uniquely identifiable.
- Input streams can be rewound if there is a downstream failure.
- When a peer writes its checkpoint to durable storage, it must reject its own write if its checkpoint is _lower_ than the checkpoint that exists in storage. This implies that durable checkpoint storage must support CAS.
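The CAS requirement in the last assumption can be sketched as follows (an in-memory Python stand-in for durable storage; `CheckpointStore` is a hypothetical name, not an Onyx API):

```python
class CheckpointStore:
    """In-memory sketch of durable checkpoint storage with CAS-style
    writes: a write is rejected unless it advances the stored checkpoint,
    so a checkpoint can never move backwards."""

    def __init__(self):
        self._checkpoints = {}

    def write(self, task_id, barrier_id):
        current = self._checkpoints.get(task_id, -1)
        if barrier_id <= current:
            return False   # stale write rejected
        self._checkpoints[task_id] = barrier_id
        return True

    def read(self, task_id):
        return self._checkpoints.get(task_id)
```

A real backend would implement `write` as an atomic compare-and-swap against storage, but the rejection rule is the same.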
##### Questions
- Should we allow developers to configure different lengths between barriers for different tasks?
#### Acknowledgment
_Acknowledgement_ is the successful movement of a barrier after it has fully walked the workflow. When a peer executing an output task receives all of its barriers, it messages its upstream input peers about its progress. When an input peer notices that all of its output task peers have reached a new minimum checkpoint, it releases messages from its input medium - effectively marking them as complete, and never eligible for replay.
When a peer boots up into an input task, it reads its last checkpoint from stable storage. The input stream rewinds to that location and begins sending messages downstream.
We need to be careful that when new peers come online, input peers don't expect them to be able to read checkpoints before they have read any messages at all. We already have a two-phase join process for a peer to start a task. We just need to make sure that we use it. We'll also need to broadcast the `:allocations` key in the cluster per barrier to make sure the cluster configuration hasn't changed.
##### Assumptions
- To survive a fault, an input stream must be able to retain a message even after a process reads that message. The input stream must have an API for the process to explicitly declare that the processing of the message was successful.
#### Ordered message processing
The ABS engine requires that messages be processed in the order that they are sent between two tasks. Some implementations of ABS use a pull-based messaging layer, which obviates concerns about message reordering. Onyx heavily embraces Aeron, which is a push-based model. Fortunately, for unbroken connections, Aeron guarantees that messages are consumed in the order they are produced for a single stream.
#### Fault Recovery
When a peer comes online for a task, it essentially always moves into fault recovery mode. If the peer is running an input task, it looks up its last checkpoint in stable storage. If no suitable checkpoint is found, it should start at the beginning with barrier `0`.
If a downstream peer crashes, a message will be put on the log as normal. Input peers will read this message and rewind the stream to the checkpoint that the peer crashed on and play from there. It's important that, before the stream rewinds, the peer emits a barrier to denote that the next set of messages are older. Peers are expected to maintain the current barrier in memory and ignore state-updates for older messages. Stateless transformations should still be applied to pass the message downstream.
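The "ignore stale state-updates but keep forwarding" rule can be sketched as follows (a Python illustration of the epoch check; function and key names are hypothetical, and the real state machinery in Onyx is Clojure):

```python
def handle_segment(state, current_barrier, seg_barrier, key, delta):
    """Apply a state update only if the segment belongs to the current
    (or a newer) barrier epoch. Stale replayed segments are still
    forwarded downstream, but their state updates are skipped so state
    is never applied twice."""
    forward = True                       # stateless pass-through always happens
    if seg_barrier >= current_barrier:
        state[key] = state.get(key, 0) + delta
    return state, forward

state = {}
state, _ = handle_segment(state, current_barrier=2, seg_barrier=2, key="n", delta=1)
state, _ = handle_segment(state, current_barrier=2, seg_barrier=1, key="n", delta=1)  # stale replay
```

Only the first update mutates state; the stale replay is forwarded but contributes nothing.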
#### Backpressure
It's critical that the communication channels between tasks exhibit backpressure. Aeron implements this on our behalf. Producers will spin when writing a message gets a Backpressure return code.
#### Barrier Alignment
Barrier alignment is the process by which a peer waits for all of its upstream tasks to send it a barrier before it itself emits a barrier. We can isolate this algorithm in a component that controls ingress messages.
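Alignment can be sketched over per-upstream-task queues (Python `deque`s standing in for core.async channels; `align_barriers` is a hypothetical name, and a real engine would block rather than busy-loop):

```python
from collections import deque

def align_barriers(channels, barrier_id):
    """Read from per-upstream-task queues until every task has delivered
    `barrier_id`; a queue is paused (skipped) once its barrier arrives.
    Returns the segments consumed while aligning. `channels` maps
    task-id -> deque of ("segment", x) or ("barrier", id) entries."""
    aligned, segments = set(), []
    while aligned != set(channels):
        progressed = False
        for task, chan in channels.items():
            if task in aligned or not chan:
                continue
            kind, value = chan.popleft()
            progressed = True
            if kind == "barrier" and value == barrier_id:
                aligned.add(task)        # pause this channel
            else:
                segments.append(value)
        if not progressed:
            raise RuntimeError("upstream exhausted before barrier arrived")
    return segments
```

Pausing an aligned queue is what makes producers on that channel back-pressure, exactly as described for the core.async design below.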
#### Proposed Design

##### ABS Streaming Engine Component
- Create a new Component that starts on each virtual peer with the task lifecycle Component.
- The ABS engine component will ultimately pump messages into the task lifecycle component. We want to try and keep all the details of barriers, checkpoints, and fault recovery outside of task lifecycle to keep it simple. The engine will figure out which upstream tasks to read from, when to pause for barrier alignment, and when to emit barriers to downstream tasks. The ABS engine can take messages from its N input core async channels and funnel them back into a single core.async channel from which the task lifecycle will read off of.
- The component in Onyx that reads from Aeron and forwards incoming data onto a set of core.async channels for each peer needs to change. Currently there is one channel for all incoming segments. We need N core.async channels for N immediate upstream tasks. This design aids in implementing barrier alignment. When a barrier is encountered, we want to pause messages coming in from upstream tasks and only read from tasks that haven't given us a barrier yet. Using a core.async channel per upstream task forces the producers of those messages to backpressure, which is desired.
- core.async channels must have blocking buffers. They cannot be `sliding` or `dropping` since that would allow message loss, and therefore out of order messages. If messages got out of order, we'd have an indeterminate set of messages between checkpoints.
- Segments cross the wire with their task ID. We can use short task IDs to emit less information, or we can continue to use peer short IDs and do the lookup at the consumer end to figure out which task it came from.
- We need a new message type in our custom protocol to represent a barrier.
- ABS Engine Component should maintain an atom that we'll call the engine state. The engine state contains a mapping from task ID to a set of barrier IDs and peer IDs that have been encountered. As barriers are emitted from this peer, barrier IDs in that map are removed. This is the "pending area" for when we receive a barrier from one task and are waiting for the barriers of other upstream tasks.
- ABS Engine Component works with two main functions:
  - `pick-channels` - function of two parameters (a map of task ID -> core.async channel, and the engine state) that returns the set of channels that the engine should try to read from. This function isolates the decision of when to pause reading from a channel, and is therefore responsible for implementing barrier alignment.
- `emit-barrier?` - function of engine state and replica that returns a boolean to denote whether this peer should emit a barrier to its downstream peers after this batch of segments has been processed.
- The ABS engine component should be able to shut off barrier alignment via configuration, which degrades to at-least-once message processing.
- The ABS engine should supply a callback function on the ingress channel to the task lifecycle. The callback should be invoked after `write-batch` returns, which will in turn checkpoint to durable storage.
- The ABS engine should supply a `:skip-state-updates?` key on the ingress channel to the task lifecycle. Turning this key on would make the task lifecycle run under "exactly once" semantics.
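The two engine functions described above can be sketched in Python (the real functions would be Clojure; `?` is not a legal Python identifier character, so `emit-barrier?` is rendered as `emit_barrier`, and the `engine_state` shape shown here is an assumption):

```python
def pick_channels(channels, engine_state):
    """Return the set of upstream task-ids the engine may read from right
    now: any task that has not yet delivered the current barrier.
    Pausing the rest is what implements barrier alignment."""
    seen = engine_state.get("barrier_seen", set())
    return {task for task in channels if task not in seen}

def emit_barrier(engine_state, upstream_tasks):
    """True once barriers from every upstream task have arrived, i.e.
    this peer may forward the barrier to its downstream peers."""
    return engine_state.get("barrier_seen", set()) == set(upstream_tasks)

state = {"barrier_seen": {"a"}}
readable = pick_channels({"a": None, "b": None}, state)  # only "b" is readable
```

Keeping both decisions in pure functions over the engine state keeps the alignment logic testable and outside the task lifecycle, as intended.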
### Existing Code Changes
- [ ] All plugins need to switch to the `SimpleInput` interface.
- [ ] Rename `SimpleInput` to a more meaningful name.
### New Code-level Additions
- [x] Create a byte protocol for Barrier messages [Patch 7cf7267eb7d2a9fb1e86813f49d2b0c9e2140101]
- [ ] Alter code surrounding Peer Manager to have N channels per upstream task, not one channel
### Code to be removed
- [ ] Acker daemon
- [ ] Peer configuration `:onyx.messaging/ack-daemon-timeout`
- [ ] Peer configuration `:onyx.messaging/ack-daemon-clear-interval`
- [ ] Job key `acker-exclude-inputs`
- [ ] Job key `acker-exclude-outputs`
- [ ] Job key `:acker-percentage`
- [ ] Replica key `ackers`
- [ ] Scheduler constraints for `choose-ackers`
- [ ] ack-ch connecting acking-daemon to task-lifecycle
- [ ] Task-map key `:onyx/pending-timeout`
- [ ] Task-map key `:onyx/max-pending`
- [ ] Task-map key `:onyx/input-retry-timeout`
- [ ] Ack phase of task lifecycle
### Docs to be modified
- [ ] Performance tuning section needs to discuss how ABS works instead of Per-Record.
- [ ] Internal architecture and design will need to be updated. We can take most of the material from this issue and move it into there.
- [ ] Backpressure section needs to remove discussion about buffering messages in-memory at the input site.
### Questions
- Should we tackle iterative computation in this patch?
- Can we get rid of the sentinel in this patch?
- What happens when we have _really_ large barrier IDs because we've been streaming for a long time? We need to wrap around at some point.
|
1.0
|
[ABS] Implement Asynchronous Barrier Snapshotting - This issue will be used to track the design and progress of the Asynchronous Barrier Snapshotting algorithm. The work in this issue will replace our current streaming engine - Per-Record processing. This is the same algorithm outlined in [the literature](http://arxiv.org/pdf/1506.08603.pdf), which is itself an iteration of [Chandy/Lamport's work](http://research.microsoft.com/en-us/um/people/lamport/pubs/chandy.pdf).
### Branch
The ABS engine is actively being implemented on the [`abs-engine`](https://github.com/onyx-platform/onyx/tree/abs-engine) branch.
### Glossary
#### Barrier
A barrier is a special message that partitions a stream of messages into finite, discrete portions. Barriers indicate progress in terms of execution stages in an input stream. Barriers are artificial and are injected by Onyx itself. Barriers have monotonically increasing identifiers, starting at `0`.
#### Checkpoint
Checkpoints are values that are durably written to stable storage to indicate the progress of a task in a workflow. The checkpoint always includes a barrier ID for which all messages coming before it were acknowledged, and may contain any snapshotted state.
### Design Discussion
This section hashes out the larger pieces of the design and the considerations that need to be taken.
#### Barrier Injection
Barrier messages need to be injected into each input stream at a deterministic location. So far, we believe the easiest way to do that is by injecting a barrier every `N` messages. That is, if we configured Onyx to inject a barrier every `5` messages, an input stream should look exactly like this, every time, even if its replayed by a completely different process:
```
... 17 16 | 15 14 13 12 11 | 10 9 8 7 6 | 5 4 3 2 1
```
The rightmost side of the text indicates the head of the stream, and the leftmost the tail. The pipe (`|`) indicates a barrier. Barriers carry monotonically increasing identifiers _per task_. That is, if there are 3 input tasks, the first barrier in _each_ task starts at `0`, then proceeds to `1`, and so forth.
The original paper notes:
> Snapshot coordination is implemented as an actor process on the job manager that keeps a global state for an execution graph of a single job. The coordinator periodically injects stage barriers to all sources of the execution graph. Upon reconfiguration, the last globally snapshotted state is restored in the operators from a distributed in-memory persistent storage.
We do not currently see any reason that a central coordinator would inject messages. Given that message streams are processed at different rates, it's unclear how barriers can deterministically be placed into a stream using this approach. Further, processes can pause as they receive such messages from a coordinator. Perfect synchronicity isn't practical to achieve here.
##### Assumptions
- Every message in an input stream is uniquely identifiable.
- Input streams can be rewound if there is a downstream failure.
- When a peer writes its checkpoint to durable storage, it must reject its own write if its checkpoint is _lower_ than the checkpoint that exists in storage. This implies that durable checkpoint storage must support CAS.
##### Questions
- Should we allow developers to configure different lengths between barriers for different tasks?
#### Acknowledgment
_Acknowledgement_ is the successful movement of a barrier after it has fully walked the workflow. When a peer executing an output task receives all of its barriers, it messages its upstream input peers about its progress. When an input peer notices that all of its output task peers have reached a new minimum checkpoint, it releases messages from its input medium - effectively marking them as complete, and never eligible for replay.
When a peer boots up into an input task, it reads its last checkpoint from stable storage. The input stream rewinds to that location and begins sending messages downstream.
We need to be careful that when new peers come online, input peers don't expect them to be able to read checkpoints before they have read any messages at all. We already have a two-phase join process for a peer to start a task. We just need to make sure that we use it. We'll also need to broadcast the `:allocations` key in the cluster per barrier to make sure the cluster configuration hasn't changed.
##### Assumptions
- To survive a fault, an input stream must be able to retain a message even after a process reads that message. The input stream must have an API for the process to explicitly declare that the processing of the message was successful.
#### Ordered message processing
The ABS engine requires that messages be processed in the order that they are sent between two tasks. Some implementations of ABS use a pull-based messaging layer, which obviates concerns about message reordering. Onyx heavily embraces Aeron, which is a push-based model. Fortunately, for unbroken connections, Aeron guarantees that messages are consumed in the order they are produced for a single stream.
#### Fault Recovery
When a peer comes online for a task, it essentially always moves into fault recovery mode. If the peer is running an input task, it looks up its last checkpoint in stable storage. If no suitable checkpoint is found, it should start at the beginning with barrier `0`.
If a downstream peer crashes, a message will be put on the log as normal. Input peers will read this message and rewind the stream to the checkpoint that the peer crashed on and play from there. It's important that, before the stream rewinds, the peer emits a barrier to denote that the next set of messages are older. Peers are expected to maintain the current barrier in memory and ignore state-updates for older messages. Stateless transformations should still be applied to pass the message downstream.
#### Backpressure
It's critical that the communication channels between tasks exhibit backpressure. Aeron implements this on our behalf. Producers will spin when writing a message gets a Backpressure return code.
#### Barrier Alignment
Barrier alignment is the process by which a peer waits for all of its upstream tasks to send it a barrier before it itself emits a barrier. We can isolate this algorithm in a component that controls ingress messages.
#### Proposed Design

##### ABS Streaming Engine Component
- Create a new Component that starts on each virtual peer with the task lifecycle Component.
- The ABS engine component will ultimately pump messages into the task lifecycle component. We want to try and keep all the details of barriers, checkpoints, and fault recovery outside of task lifecycle to keep it simple. The engine will figure out which upstream tasks to read from, when to pause for barrier alignment, and when to emit barriers to downstream tasks. The ABS engine can take messages from its N input core async channels and funnel them back into a single core.async channel from which the task lifecycle will read off of.
- The component in Onyx that reads from Aeron and forwards incoming data onto a set of core.async channels for each peer needs to change. Currently there is one channel for all incoming segments. We need N core.async channels for N immediate upstream tasks. This design aids in implementing barrier alignment. When a barrier is encountered, we want to pause messages coming in from upstream tasks and only read from tasks that haven't given us a barrier yet. Using a core.async channel per upstream task forces the producers of those messages to backpressure, which is desired.
- core.async channels must have blocking buffers. They cannot be `sliding` or `dropping` since that would allow message loss, and therefore out of order messages. If messages got out of order, we'd have an indeterminate set of messages between checkpoints.
- Segments cross the wire with their task ID. We can use short task IDs to emit less information, or we can continue to use peer short IDs and do the lookup at the consumer end to figure out which task it came from.
- We need a new message type in our custom protocol to represent a barrier.
- ABS Engine Component should maintain an atom that we'll call the engine state. The engine state contains a mapping from task ID to a set of barrier IDs and peer IDs that have been encountered. As barriers are emitted from this peer, barrier IDs in that map are removed. This is the "pending area" for when we receive a barrier from one task and are waiting for the barriers of other upstream tasks.
- ABS Engine Component works with two main functions:
  - `pick-channels` - function of two parameters (a map of task ID -> core.async channel, and the engine state) that returns the set of channels that the engine should try to read from. This function isolates the decision of when to pause reading from a channel, and is therefore responsible for implementing barrier alignment.
- `emit-barrier?` - function of engine state and replica that returns a boolean to denote whether this peer should emit a barrier to its downstream peers after this batch of segments has been processed.
- The ABS engine component should be able to shut off barrier alignment via configuration, which degrades to at-least-once message processing.
- The ABS engine should supply a callback function on the ingress channel to the task lifecycle. The callback should be invoked after `write-batch` returns, which will in turn checkpoint to durable storage.
- The ABS engine should supply a `:skip-state-updates?` key on the ingress channel to the task lifecycle. Turning this key on would make the task lifecycle run under "exactly once" semantics.
### Existing Code Changes
- [ ] All plugins need to switch to the `SimpleInput` interface.
- [ ] Rename `SimpleInput` to a more meaningful name.
### New Code-level Additions
- [x] Create a byte protocol for Barrier messages [Patch 7cf7267eb7d2a9fb1e86813f49d2b0c9e2140101]
- [ ] Alter code surrounding Peer Manager to have N channels per upstream task, not one channel
### Code to be removed
- [ ] Acker daemon
- [ ] Peer configuration `:onyx.messaging/ack-daemon-timeout`
- [ ] Peer configuration `:onyx.messaging/ack-daemon-clear-interval`
- [ ] Job key `acker-exclude-inputs`
- [ ] Job key `acker-exclude-outputs`
- [ ] Job key `:acker-percentage`
- [ ] Replica key `ackers`
- [ ] Scheduler constraints for `choose-ackers`
- [ ] ack-ch connecting acking-daemon to task-lifecycle
- [ ] Task-map key `:onyx/pending-timeout`
- [ ] Task-map key `:onyx/max-pending`
- [ ] Task-map key `:onyx/input-retry-timeout`
- [ ] Ack phase of task lifecycle
### Docs to be modified
- [ ] Performance tuning section needs to discuss how ABS works instead of Per-Record.
- [ ] Internal architecture and design will need to be updated. We can take most of the material from this issue and move it into there.
- [ ] Backpressure section needs to remove discussion about buffering messages in-memory at the input site.
### Questions
- Should we tackle iterative computation in this patch?
- Can we get rid of the sentinel in this patch?
- What happens when we have _really_ large barrier IDs because we've been streaming for a long time? We need to wrap around at some point.
|
non_process
|
implement asynchronous barrier snapshotting this issue will be used to track the design and progress of the asynchronous barrier snapshotting algorithm the work in this issue will replace our current streaming engine per record processing this is the same algorithm outlined in which is itself an iteration of branch the abs engine is actively being implemented on the branch glossary barrier a barrier is special message that partitions a stream of messages into finite discrete portion barriers indicate progress in terms of execution stages in an input stream barriers are artificial and are injected by onyx itself barriers have monotonically increasing identifiers starting at checkpoint checkpoints are values that are durably written to stable storage to indicate the progress of a task in a workflow the checkpoint always includes a barrier id for which all messages coming before it were acknowledged and may contain any snapshotted state design discussion this section hashes out the larger pieces of the design and the considerations that need to be taken barrier injection barrier messages need to be injected into each input stream at a deterministic location so far we believe the easiest way to do that is by injecting a barrier every n messages that is if we configured onyx to inject a barrier every messages an input stream should look exactly like this every time even if its replayed by a completely different process the rightmost side of the text indicates the head of the stream and the leftmost the tail the pipe indicates a barrier barriers carry monotonically increasing identifiers per task that is if there are input tasks all the first barrier in each task starts at then proceeds to and so forth the original paper notes snapshot coordination is implemented as an actor process on the job manager that keeps a global state for an execution graph of a single job the coordinator periodically injects stage barriers to all sources of the execution graph upon 
reconfiguration the last glob ally snapshotted state is restored in the operators from a distributed in memory persistent storage we do not currently see any reason that a central coordinator would inject messages given that message streams are processed at different rates it s unclear how barriers can deterministically be placed into a stream using this approach further processes can pause as they receive such messages from a coordinator perfect synchronicity isn t practical to to achieve here assumptions every message in an input stream is uniquely identifiable input streams can be rewinded if there is a downstream failure when a peer writes its checkpoint to durable storage it must reject its own write if its checkpoint is lower than the checkpoint that exists in storage this implies that durable checkpoint storage must support cas questions should we allow developers to configure different lengths between barriers for different tasks acknowledgment acknowledgement is the successful movement of a barrier after it has fully walked the workflow when a peer executing an output task receives all of its barriers it messages its upstream input peers about its progress when an input peer notices that all of its output task peers have reached a new minimum checkpoint it releases messages from its input medium effectively marking them as complete and never eligible for replay when a peer boots up into an input task it reads its last checkpoint from stable storage the input stream rewinds to that location and begins sending messages downstream we need to be careful that when new peers come online input peers don t expect them to be able to read checkpoints until they are reading any messages at all we already have a two phase join process for a peer to start a task we just need to make sure that we use it we ll also need to broadcast the allocations key in the cluster per barrier to make sure the cluster configuration hasn t changed assumptions to survive a fault an input 
stream must be able to retain a message even after a process reads that message the input stream must have an api for the process to explicitly declare that the processing of the message was successful ordered message processing the abs engine requires that messages be processed in the order that they are sent between two tasks some implementations of abs use a pull based messaging layer thus obviates concerns about message reordering onyx heavily embraces aeron which is a push based model fortunately for unbroken connections aeron guarantees that messages are consumed in the order they are produced for a single stream fault recovery when a peer comes online for a task it essentially always moves into fault recovery mode if the peer is running an input task it looks up its last checkpoint in stable storage if no suitable checkpoint is found it should start the beginning with barrier if a downstream peer crashes a message will be put on the log as normal input peers will read this message and rewind the stream to the checkpoint that the peer crashed on and play from there it s important the before the stream rewinds the peer emits a barrier to denote that the next set of messages are older peers are expected to maintain the current barrier in memory and ignore state updates for older messages stateless transformations should still be applied to pass the message downstream backpressure it s critical that the communication channels between tasks exhibit backpressure aeron implements this on our behalf producers will spin when writing a message gets a backpressure return code barrier alignment barrier alignment is the process by which a peer waits for all of its upstream tasks to send it a barrier before it itself emits a barrier we can isolate this algorithm in a component that controls ingress messages proposed design abs streaming engine component create a new component that starts on each virtual peer with the task lifecycle component the abs engine component will 
ultimately pump messages into the task lifecycle component we want to try and keep all the details of barriers checkpoints and fault recovery outside of task lifecycle to keep it simple the engine will figure out which upstream tasks to read from when to pause for barrier alignment and when to emit barriers to downstream tasks the abs engine can take messages from its n input core async channels and funnel them back into a single core async channel from which the task lifecycle will read off of the component in onyx that reads from aeron and forwards incoming data onto a set of core async channels for each peer needs to change currently there is one channel for all incoming segments we need n core async channels for n immediate upstream tasks this design aids in implementing barrier alignment when a barrier is encountered we want to pause messages coming in from upstream tasks and only read from tasks that haven t given us a barrier yet using a core async channel per upstream tasks forces the producers of those messages to backpressure which is desired core async channels must have blocking buffers they cannot be sliding or dropping since that would allow message loss and therefore out of order messages if messages got out of order we d have an indeterminate set of messages between checkpoints segments cross the wire with their task id we can use short task ids to emit less information or we can continue to use peer short ids and do the lookup at the consumer end to figure out which task it came from we need a new message type in our custom protocol to represent a barrier abs engine component should maintain an atom that we ll call the engine state the engine state contains a mapping from task id to a set of barrier ids and peer ids that have been encountered as barriers are emitted from this peer barrier ids in that map and removed this is the pending area for when we receive a barrier from one task and are waiting for the barriers of other upstream tasks abs 
engine component works with two main functions pick channels function of two parameters map of task id core async channel and the engine state and returns the set of channels that the engine should try to read from this function isolates the decision of when to pause reading from a channel and is therefore responsible for implementing barrier alignment emit barrier function of engine state and replica that returns a boolean to denote whether this peer should emit a barrier to its downstream peers after this batch of segments has been processed the abs engine component should be able shut off barrier alignment via configuration which degrades to at least once message processing the abs engine should supply a callback function on the ingress channel to the task lifecycle the callback should be invoked after write batch returns which will in turn checkpoint to durable storage the abs engine should supply a skip state updates key on the ingress channel to the task lifecycle turning this key on would make the task lifecycle run under exactly once semantics existing code changes all plugins need to switch to the simpleinput interface rename simpleinput to a more meaningful name new code level additions create a byte protocol for barrier messages alter code surrounding peer manager to have n channels per upstream task not one channel code to be removed acker daemon peer configuration onyx messaging ack daemon timeout peer configuration onyx messaging ack daemon clear interval job key acker exclude inputs job key acker exclude outputs job key acker percentage replica key ackers scheduler constraints for choose ackers ack ch connecting acking daemon to task lifecycle task map key onyx pending timeout task map key onyx max pending task map key onyx input retry timeout ack phase of task lifecycle docs to be modified performance tuning section needs to discuss how abs works instead of per record internal architecture and design will need to be updated we can take most of the 
material from this issue and move it into there backpressure section needs to remove discussion about buffering messages in memory at the input site questions should we tackle iterative computation in this patch can we get rid of the sentinel in this patch what happens when we have really large barrier ids because we ve been streaming for a long time we need to wrap around at some point
| 0
|
179,789
| 14,712,774,047
|
IssuesEvent
|
2021-01-05 09:24:35
|
ScrumFacilitators/measuringoutcome-en
|
https://api.github.com/repos/ScrumFacilitators/measuringoutcome-en
|
closed
|
Add reference to translation to dutch once that is released
|
documentation
|
When released, a reference should be made to the dutch translation.
Possibly:
----
### Available Translations
- Nederlands: Uitkomsten Meten [link to github repository]
- etc
|
1.0
|
Add reference to translation to dutch once that is released - When released, a reference should be made to the dutch translation.
Possibly:
----
### Available Translations
- Nederlands: Uitkomsten Meten [link to github repository]
- etc
|
non_process
|
add reference to translation to dutch once that is released when released a reference should be made to the dutch translation possibly available translations nederlands uitkomsten meten etc
| 0
|
96,711
| 8,629,971,603
|
IssuesEvent
|
2018-11-21 23:02:10
|
ValveSoftware/steamvr_unity_plugin
|
https://api.github.com/repos/ValveSoftware/steamvr_unity_plugin
|
closed
|
Action.GetStateDown set to true after deactivating its actionset
|
Need Retest Needs more information
|
Hey @zite,
I'm trying to use actionsets to dictate whether or not you're actually able to grab an interactable. I have an actionset with grab pinch and grab grip actions. Prior to activating that actionset, if I hover an interactable, I am unable to grab it. If I activate the grabbing actionset, I'm now able to grab it. If I deactivate the actionset, now if I hover the interactable, I grab it. It looks like the grabPinchAction.GetStateDown on line 1220 of the Hand script is firing when it shouldn't.
|
1.0
|
Action.GetStateDown set to true after deactivating its actionset - Hey @zite,
I'm trying to use actionsets to dictate whether or not you're actually able to grab an interactable. I have an actionset with grab pinch and grab grip actions. Prior to activating that actionset, if I hover an interactable, I am unable to grab it. If I activate the grabbing actionset, I'm now able to grab it. If I deactivate the actionset, now if I hover the interactable, I grab it. It looks like the grabPinchAction.GetStateDown on line 1220 of the Hand script is firing when it shouldn't.
|
non_process
|
action getstatedown set to true after deactivating its actionset hey zite i m trying to use actionsets to dictate whether or not you re actually able to grab an interactable i have an actionset with grab pinch and grab grip actions prior to activating that actionset if i hover an interactable i am unable to grab it if i activate the grabbing actionset i m now able to grab it if i deactivate the actionset now if i hover the interactable i grab it it looks like the grabpinchaction getstatedown on line of the hand script is firing when it shouldn t
| 0
|
786,471
| 27,656,823,595
|
IssuesEvent
|
2023-03-12 03:02:56
|
AY2223S2-CS2113-T13-3/tp
|
https://api.github.com/repos/AY2223S2-CS2113-T13-3/tp
|
closed
|
Load storage feature
|
Top priority In progress
|
Feature load() is a function in Storage class that will return a scanner of the stuff that is in the save file. Refer to the function parseSaveFile in [this piece of code](https://github.com/tyuyang/ip/blob/master/src/main/java/duke/SavefileManager.java) for an example. In that example, the scanner s should be returned so that it can be passed into AccountsList back in BWU class.
|
1.0
|
Load storage feature - Feature load() is a function in Storage class that will return a scanner of the stuff that is in the save file. Refer to the function parseSaveFile in [this piece of code](https://github.com/tyuyang/ip/blob/master/src/main/java/duke/SavefileManager.java) for an example. In that example, the scanner s should be returned so that it can be passed into AccountsList back in BWU class.
|
non_process
|
load storage feature feature load is a function in storage class that will return a scanner of the stuff that is in the save file refer to the function parsesavefile in for an example in that example the scanner s should be returned so that it can be passed into accountslist back in bwu class
| 0
|
7,899
| 11,089,087,485
|
IssuesEvent
|
2019-12-14 16:00:58
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Publish topic to PDF breaks if topic has conref to element defined in DITA Map
|
bug plugin/pdf preprocess stale
|
Tested with DITA OT 2.5.2.
I have a DITA Map which has a keyword with an ID:
<topicmeta>
<keywords>
<keyword id="kid">
some text.
</keyword>
</keywords>
</topicmeta>
and a topic which refers to it:
<p><keyword conref="../../eXml/samples/dita/flowers/flowers.ditamap#kid"/></p>
When I publish that topic to PDF, the output breaks with:
D:\projects\eXml\frameworks\dita\DITA-OT2.x\plugins\org.dita.pdf2\build.xml:136: Failed to run pipeline: Failed to process merged topics: Invalid element name. Invalid QName {}
The failure is on an "xsl:element" in the "org.dita.pdf2\xsl\common\topicmergeImpl.xsl"
This occurs because somehow the transformation considers that a DITA Map is processed instead of a DITA topic and the "topic-merge" pre-processing steps are called although it is a plain topic.
|
1.0
|
Publish topic to PDF breaks if topic has conref to element defined in DITA Map - Tested with DITA OT 2.5.2.
I have a DITA Map which has a keyword with an ID:
<topicmeta>
<keywords>
<keyword id="kid">
some text.
</keyword>
</keywords>
</topicmeta>
and a topic which refers to it:
<p><keyword conref="../../eXml/samples/dita/flowers/flowers.ditamap#kid"/></p>
When I publish that topic to PDF, the output breaks with:
D:\projects\eXml\frameworks\dita\DITA-OT2.x\plugins\org.dita.pdf2\build.xml:136: Failed to run pipeline: Failed to process merged topics: Invalid element name. Invalid QName {}
The failure is on an "xsl:element" in the "org.dita.pdf2\xsl\common\topicmergeImpl.xsl"
This occurs because somehow the transformation considers that a DITA Map is processed instead of a DITA topic and the "topic-merge" pre-processing steps are called although it is a plain topic.
|
process
|
publish topic to pdf breaks if topic has conref to element defined in dita map tested with dita ot i have a dita map which has a keyword with an id some text and a topic which refers to it when i publish that topic to pdf the output breaks with d projects exml frameworks dita dita x plugins org dita build xml failed to run pipeline failed to process merged topics invalid element name invalid qname the failure is on an xsl element in the org dita xsl common topicmergeimpl xsl this occurs because somehow the transformation considers that a dita map is processed instead of a dita topic and the topic merge pre processing steps are called although it is a plain topic
| 1
|
103,634
| 22,356,306,227
|
IssuesEvent
|
2022-06-15 15:56:01
|
arduino/arduino-cli
|
https://api.github.com/repos/arduino/arduino-cli
|
opened
|
Identify managed platforms not tracked by a package index
|
type: enhancement topic: code
|
### Describe the request
If an [Arduino boards platform](https://arduino.github.io/arduino-cli/dev/platform-specification/) was installed via [`core install`](https://arduino.github.io/arduino-cli/dev/commands/arduino-cli_core_install/) (as indicated by it being located under `<directories.data>/packages` AKA [`github.com/arduino/arduino-cli/arduino/cores/packagemanager.PackageManager.PackagesDir`](https://github.com/arduino/arduino-cli/blob/813cfe73a466245222d6cd0aef2d181a91c56d3e/arduino/cores/packagemanager/package_manager.go#L44)), but that platform is not listed in [the primary package index](https://downloads.arduino.cc/packages/package_index.json) and additional [package indexes](https://arduino.github.io/arduino-cli/dev/package_index_json-specification/) configured via the `board_manager.additional_urls` [configuration key](https://arduino.github.io/arduino-cli/dev/configuration/#configuration-keys):
- Print a warning when this may be significant (e.g., [`core upgrade`](https://arduino.github.io/arduino-cli/dev/commands/arduino-cli_core_upgrade/))
- Make this information available via the gRPC interface (e.g., add an `index_url` field to [the `Platform` message](https://arduino.github.io/arduino-cli/dev/rpc/commands/#platform))
🙂 The user will be aware that Boards Platform updates will not be available due to their configuration.
### Describe the current behavior
Arduino CLI's `core` commands use the Arduino Boards Manager system to provide installation and updates of Arduino boards platforms. Arduino maintains a primary package index that provides all official and partner platforms. A huge number of 3rd party platforms are also available. In order to access these, the user must add the URL to the platform's package index to their Arduino CLI configuration.
After a platform is installed, it remains usable even if the additional package index URL is removed from the Arduino CLI configuration. However, the presence of this URL is required for the valuable update capability to work. Users without an in depth understanding of the fairly complex and esoteric Boards Manager system may not be aware of this and thus feel no need to maintain a list of URLs in their configuration.
There are several scenarios that would make this especially likely to occur:
- The platform was installed via a different tool (Arduino IDE 1.x, Arduino IDE 2.x, and Arduino CLI all use separate preference files).
- An ephemeral configuration mechanism ([environment variable](https://arduino.github.io/arduino-cli/dev/configuration/#environment-variables) or [command line flag](https://arduino.github.io/arduino-cli/dev/configuration/#command-line-flags)) was used to configure the URL during the platform installation.
🙁 The problem is not communicated to the user. They may miss out on important advancements made in later releases of the platform or else be confused when the Arduino CLI update capability does not seem to work.
### Arduino CLI version
nightly-20220615 Commit: 813cfe7 Date: 2022-06-15T01:36:01Z
### Operating system
All
### Operating system version
N/A
### Additional context
This capability will likely be of greatest value for use in Arduino IDE 2.x, whose users are less likely to understand the technical details of the Boards Manager system, and also are most likely to have missing package index URLs after migrating from Arduino IDE 1.x. However, Arduino CLI is the most appropriate place for the code that will detect this condition, so the work should start here, followed by communicating the information to the IDE user via its GUI.
#### Related:
- https://forum.arduino.cc/t/post-here-for-ide-2-0-rc/937541/3
- https://forum.arduino.cc/t/suggestion-improve-library-manager/847666/9
- https://forum.arduino.cc/t/board-manager-missing-pololu-boards/698567/3
- https://forum.arduino.cc/t/boardmanager-does-not-know-esp32-8266-boards/698286
### Issue checklist
- [X] I searched for previous requests in [the issue tracker](https://github.com/arduino/arduino-cli/issues?q=)
- [X] I verified the feature was still missing when using the latest [nightly build](https://arduino.github.io/arduino-cli/dev/installation/#nightly-builds)
- [X] My request contains all necessary details
|
1.0
|
Identify managed platforms not tracked by a package index - ### Describe the request
If an [Arduino boards platform](https://arduino.github.io/arduino-cli/dev/platform-specification/) was installed via [`core install`](https://arduino.github.io/arduino-cli/dev/commands/arduino-cli_core_install/) (as indicated by it being located under `<directories.data>/packages` AKA [`github.com/arduino/arduino-cli/arduino/cores/packagemanager.PackageManager.PackagesDir`](https://github.com/arduino/arduino-cli/blob/813cfe73a466245222d6cd0aef2d181a91c56d3e/arduino/cores/packagemanager/package_manager.go#L44)), but that platform is not listed in [the primary package index](https://downloads.arduino.cc/packages/package_index.json) and additional [package indexes](https://arduino.github.io/arduino-cli/dev/package_index_json-specification/) configured via the `board_manager.additional_urls` [configuration key](https://arduino.github.io/arduino-cli/dev/configuration/#configuration-keys):
- Print a warning when this may be significant (e.g., [`core upgrade`](https://arduino.github.io/arduino-cli/dev/commands/arduino-cli_core_upgrade/))
- Make this information available via the gRPC interface (e.g., add an `index_url` field to [the `Platform` message](https://arduino.github.io/arduino-cli/dev/rpc/commands/#platform))
🙂 The user will be aware that Boards Platform updates will not be available due to their configuration.
### Describe the current behavior
Arduino CLI's `core` commands use the Arduino Boards Manager system to provide installation and updates of Arduino boards platforms. Arduino maintains a primary package index that provides all official and partner platforms. A huge number of 3rd party platforms are also available. In order to access these, the user must add the URL to the platform's package index to their Arduino CLI configuration.
After a platform is installed, it remains usable even if the additional package index URL is removed from the Arduino CLI configuration. However, the presence of this URL is required for the valuable update capability to work. Users without an in depth understanding of the fairly complex and esoteric Boards Manager system may not be aware of this and thus feel no need to maintain a list of URLs in their configuration.
There are several scenarios that would make this especially likely to occur:
- The platform was installed via a different tool (Arduino IDE 1.x, Arduino IDE 2.x, and Arduino CLI all use separate preference files).
- An ephemeral configuration mechanism ([environment variable](https://arduino.github.io/arduino-cli/dev/configuration/#environment-variables) or [command line flag](https://arduino.github.io/arduino-cli/dev/configuration/#command-line-flags)) was used to configure the URL during the platform installation.
🙁 The problem is not communicated to the user. They may miss out on important advancements made in later releases of the platform or else be confused when the Arduino CLI update capability does not seem to work.
### Arduino CLI version
nightly-20220615 Commit: 813cfe7 Date: 2022-06-15T01:36:01Z
### Operating system
All
### Operating system version
N/A
### Additional context
This capability will likely be of greatest value for use in Arduino IDE 2.x, whose users are less likely to understand the technical details of the Boards Manager system, and also are most likely to have missing package index URLs after migrating from Arduino IDE 1.x. However, Arduino CLI is the most appropriate place for the code that will detect this condition, so the work should start here, followed by communicating the information to the IDE user via its GUI.
#### Related:
- https://forum.arduino.cc/t/post-here-for-ide-2-0-rc/937541/3
- https://forum.arduino.cc/t/suggestion-improve-library-manager/847666/9
- https://forum.arduino.cc/t/board-manager-missing-pololu-boards/698567/3
- https://forum.arduino.cc/t/boardmanager-does-not-know-esp32-8266-boards/698286
### Issue checklist
- [X] I searched for previous requests in [the issue tracker](https://github.com/arduino/arduino-cli/issues?q=)
- [X] I verified the feature was still missing when using the latest [nightly build](https://arduino.github.io/arduino-cli/dev/installation/#nightly-builds)
- [X] My request contains all necessary details
|
non_process
|
identify managed platforms not tracked by a package index describe the request if an was installed via as indicated by it being located under packages aka but that platform is not listed in and additional configured via the board manager additional urls print a warning when this may be significant e g make this information available via the grpc interface e g add an index url field to 🙂 the user will be aware that boards platform updates are will not be available due to their configuration describe the current behavior arduino cli s core commands use the arduino boards manager system to provide installation and updates of arduino boards platforms arduino maintains a primary package index that provides all official and partner platforms a huge number of party platforms are also available in order to access these the user must add the url to the platform s package index to their arduino cli configuration after a platform is installed it remains usable even if the additional package index url is removed from the arduino cli configuration however the presence of this url is required for the valuable update capability to work users without an in depth understanding of the fairly complex and esoteric boards manager system may not be aware of this and thus feel no need to maintain a list of urls in their configuration there are several scenarios that would make this especially likely to occur the platform was installed via a different tool arduino ide x arduino ide x and arduino cli all use separate preference files an ephemeral configuration mechanism or was used to configure the url during the platform installation 🙁 the problem is not communicated to the user they may miss out on important advancements made in later releases of the platform or else be confused when the arduino cli update capability does not seem to work arduino cli version nightly commit date operating system all operating system version n a additional context this capability will likely be of greatest 
value for use in arduino ide x whose users are less likely to understand the technical details of the boards manager system and also are most likely to have missing package index urls after migrating from arduino ide x however arduino cli is the most appropriate place for the code that will detect this condition so the work should start here followed by communicating the information to the ide user via its gui related issue checklist i searched for previous requests in i verified the feature was still missing when using the latest my request contains all necessary details
| 0
|
21,591
| 29,992,504,465
|
IssuesEvent
|
2023-06-26 00:27:34
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
@yamada-ui/cli 0.3.0 has 1 guarddog issues
|
npm-silent-process-execution
|
```{"npm-silent-process-execution":[{"code":" (0, import_node_child_process.spawn)(import_node_process7.default.execPath, [import_node_path3.default.join(__dirname2, \"check.js\"), JSON.stringify(this.#options)], {\n detached: true,\n stdio: \"ignore\"\n }).unref();","location":"package/dist/utils/index.js:17803","message":"This package is silently executing another executable"}]}```
|
1.0
|
@yamada-ui/cli 0.3.0 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":" (0, import_node_child_process.spawn)(import_node_process7.default.execPath, [import_node_path3.default.join(__dirname2, \"check.js\"), JSON.stringify(this.#options)], {\n detached: true,\n stdio: \"ignore\"\n }).unref();","location":"package/dist/utils/index.js:17803","message":"This package is silently executing another executable"}]}```
|
process
|
yamada ui cli has guarddog issues npm silent process execution n detached true n stdio ignore n unref location package dist utils index js message this package is silently executing another executable
| 1
|
165,339
| 26,148,978,521
|
IssuesEvent
|
2022-12-30 10:29:14
|
frappe/desk
|
https://api.github.com/repos/frappe/desk
|
closed
|
feat: Customer Management
|
enhancement design
|
The ability to manage an organization is absent from the current design. below are the expected flows to be implemented:
1. Organization list:
- actions: add a new organization, add filter
- select multiple organizations
- bulk action should only consist of the Delete option
- Columns: Name, number of contacts linked to it
2. Organization form:
- shows name (editable), contacts linked to it (add 'link users' as an option)
- shows all their tickets
- an option to create a new ticket from the same
- option to delete
Organization fields:
- Name
- Domains (`placeholder - acmeltd.com, mycompany.com`, `tooltip - Contacts and tickets with similar domains will be added to this organization.`)
these are features that are supposed to be implemented @kamaljohnson.
@nish7x we will need screens for all these features.
|
1.0
|
feat: Customer Management - The ability to manage an organization is absent from the current design. below are the expected flows to be implemented:
1. Organization list:
- actions: add a new organization, add filter
- select multiple organizations
- bulk action should only consist of the Delete option
- Columns: Name, number of contacts linked to it
2. Organization form:
- shows name (editable), contacts linked to it (add 'link users' as an option)
- shows all their tickets
- an option to create a new ticket from the same
- option to delete
Organization fields:
- Name
- Domains (`placeholder - acmeltd.com, mycompany.com`, `tooltip - Contacts and tickets with similar domains will be added to this organization.`)
these are features that are supposed to be implemented @kamaljohnson.
@nish7x we will need screens for all these features.
|
non_process
|
feat customer management the ability to manage an organization is absent from the current design below are the expected flows to be implemented organization list actions add a new organization add filter select multiple organizations bulk action should only consist of the delete option columns name number of contacts linked to it organization form shows name editable contacts linked to it add link users as an option shows all their tickets an option to create a new ticket from the same option to delete organization fields name domains placeholder acmeltd com mycompany com tooltip contacts and tickets with similar domains will be added to this organization these are features that are supposed to be implemented kamaljohnson we will need screens for all these features
| 0
|
500,036
| 14,484,885,185
|
IssuesEvent
|
2020-12-10 16:54:23
|
atbcb/usab-uswds
|
https://api.github.com/repos/atbcb/usab-uswds
|
opened
|
animations swiper CSS
|
Low priority bug
|
under ADA guides/animations, the single swiper needs work
CSS for pagination is not working. if you use inspector the pagination buttons are there, but for some reason they are not showing up. Pagination is working for the homepage. They are using the same CSS styles, not sure why it is not working
|
1.0
|
animations swiper CSS - under ADA guides/animations, the single swiper needs work
CSS for pagination is not working. if you use inspector the pagination buttons are there, but for some reason they are not showing up. Pagination is working for the homepage. They are using the same CSS styles, not sure why it is not working
|
non_process
|
animations swiper css under ada guides animations the single swiper needs work css for pagination is not working if you use inspector the pagination buttons are there but for some reason they are not showing up pagination is working for the homepage they are using the same css styles not sure why it is not working
| 0
|
703,341
| 24,154,520,849
|
IssuesEvent
|
2022-09-22 06:19:31
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
us.account.samsung.com - site is not usable
|
priority-important browser-fenix engine-gecko
|
<!-- @browser: Firefox Mobile 107.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 13; Mobile; rv:107.0) Gecko/107.0 Firefox/107.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111169 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://us.account.samsung.com/accounts/ANDROIDSDK/signInGate?locale=en_US&svcParam=eyJjaGtEb051bSI6IjEiLCJzdmNFbmNQYXJhbSI6ImNKVU10elFEYkVYdWR4NXB0dWpLN0Z2RW5YM0VcL2NHZVpKSmZRY05ObTIyaFlcL1NCVzZZeVRwQVVLZHl2YmhrUzJUYmd6bjNcL2FOOGVRQWo1YTdMNDFmOW9jRkRxM2VRQlFYMWs2UTIzcERadHhZNGVJRllNc2lONWdwM2U5bkJiUUNjQ3ZPNTMzWkR4elpVeVwvY0VSQ1RGOGQzUWg3OU4xZjQ2SlwvUFI2NSszdGRKMnRvT1NaeUpESzBGeUN4eTdYWVlXbFpvdFwvOWIzVStwdzNibEM5VnR1eTFkMVBSWDhlYjBEQTltWGxoN2NpbTRZT1F5Z0toZ1JSanFDc1Z3bXJvclh4NW50eHh6S1dxUzhUUEdCRHk4V0VkdVVOSnFvUWRrRjJxenZKMmhjbHB3OXY2aGhmeEJpQ1cyNFlKdkhnYmpNaFRvTkhGMllaVjdwQTlWV09XS01WZHU1THh1d1o0N3N0WWlFeU9YcmVKSk10M0xvTnNIQ2FqTVYzd3ZYSFhtbWhcL3JMdnF4VlRWM3NcL1RTdG96QXRpN3RDTDRSbkk1Sk1JdmRcLzRiWkVPaG16VDhKNGxDUnBFNHJFYUVLc2dYV2d6dlBnYWQreUUxbEppWllNNmdcL1RlT1ZHaDhqMndXOG94UkR4N3lPYlRVVnI3c09YdGtZdG9tMHFEVEFzTU5SR0x1eVh4QVBFTEdPMFo4elVQU0pXaUcrS0Yxa2ZXZW5WMXFKMlwvT1NScmlpMUNKaVFCTVB2OUNuUW4rT1Z1V3BHZXRHSzFKYXA0bG1BcVZwd0phVEYwcGhXVHIzajhJeElWQldTNGJtZFdvb2ErbGRnS2RHMWpNeHdhMXd5MWcxN1JjQlg0Y1B6V0x6VUUyZmhoYnI1MEpPVEdTSDZ0M2FhMXFyN0hRemVESE9uV3BUXC83WG9JOW9jQ1BaS0pFallROVcydkhiKzB5M1JkTk9oUDV5M2JXMmJGQm9VT3ozOHBxWWpRQjRYUnpjcWxzY1RLYUhFSStaVVVaRDE2eUFDanVwb2hGeFd4QmFDSGZXZHRDSzdMbUt4MnNEcjFubXE1bXQrRDRJdDdXNGJPampnQllOMTE2dzFKcElMZVFWRGFkWWJFY1JTellvMXg1NG5Ga3dTWUxoaUpGWmxBUnhPUjVqa0Z0Mjl0VkQ0NjJPcWhuemNzOXJTK0pXSERwIiwic3ZjRW5jS1kiOiJoRk5TdHlBNmFvOFwvdnlnalBWRTZ6WWx2QWlGZkplaVwvY3ZBeUdidWExOHk2eHF5OCtURzg5bURMRHM0WVVYRzZJd0JDbXErOEtPUFwvUHp3U05SUzB5RHpMcENOSVN3UlZQbUJCaDZ3WDFOb3dQRHZSYlJTR0xUZ1Jnc3dMNEhqSW1XU2NiTDczcmIrYTJcL0xaWEljSFwvYUc5dVRjMTJFU1dSYWNITUsyWnBBbz0iLCJzdmNFbmNJViI6IjMzMzFjMmY0MTRjZmZhMWFmZDdhOGQyZDJjMjcwZDRkIn0%3D&mode=N
**Browser / Version**: Firefox Mobile 107.0
**Operating System**: Android 13
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
When browser fingerprinting fails it returns a page with an error code "(Error code: LNK_1004)" and a list of supported browsers. Firefox is listed there.
This page is launched when signing into the Samsung Health app.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/cf7a6f3e-dd89-42fc-9dc7-028eb321dada.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220920092542</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/9/636b3c7c-4548-496c-80ff-dfa011c969e0)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
us.account.samsung.com - site is not usable - <!-- @browser: Firefox Mobile 107.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 13; Mobile; rv:107.0) Gecko/107.0 Firefox/107.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/111169 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://us.account.samsung.com/accounts/ANDROIDSDK/signInGate?locale=en_US&svcParam=eyJjaGtEb051bSI6IjEiLCJzdmNFbmNQYXJhbSI6ImNKVU10elFEYkVYdWR4NXB0dWpLN0Z2RW5YM0VcL2NHZVpKSmZRY05ObTIyaFlcL1NCVzZZeVRwQVVLZHl2YmhrUzJUYmd6bjNcL2FOOGVRQWo1YTdMNDFmOW9jRkRxM2VRQlFYMWs2UTIzcERadHhZNGVJRllNc2lONWdwM2U5bkJiUUNjQ3ZPNTMzWkR4elpVeVwvY0VSQ1RGOGQzUWg3OU4xZjQ2SlwvUFI2NSszdGRKMnRvT1NaeUpESzBGeUN4eTdYWVlXbFpvdFwvOWIzVStwdzNibEM5VnR1eTFkMVBSWDhlYjBEQTltWGxoN2NpbTRZT1F5Z0toZ1JSanFDc1Z3bXJvclh4NW50eHh6S1dxUzhUUEdCRHk4V0VkdVVOSnFvUWRrRjJxenZKMmhjbHB3OXY2aGhmeEJpQ1cyNFlKdkhnYmpNaFRvTkhGMllaVjdwQTlWV09XS01WZHU1THh1d1o0N3N0WWlFeU9YcmVKSk10M0xvTnNIQ2FqTVYzd3ZYSFhtbWhcL3JMdnF4VlRWM3NcL1RTdG96QXRpN3RDTDRSbkk1Sk1JdmRcLzRiWkVPaG16VDhKNGxDUnBFNHJFYUVLc2dYV2d6dlBnYWQreUUxbEppWllNNmdcL1RlT1ZHaDhqMndXOG94UkR4N3lPYlRVVnI3c09YdGtZdG9tMHFEVEFzTU5SR0x1eVh4QVBFTEdPMFo4elVQU0pXaUcrS0Yxa2ZXZW5WMXFKMlwvT1NScmlpMUNKaVFCTVB2OUNuUW4rT1Z1V3BHZXRHSzFKYXA0bG1BcVZwd0phVEYwcGhXVHIzajhJeElWQldTNGJtZFdvb2ErbGRnS2RHMWpNeHdhMXd5MWcxN1JjQlg0Y1B6V0x6VUUyZmhoYnI1MEpPVEdTSDZ0M2FhMXFyN0hRemVESE9uV3BUXC83WG9JOW9jQ1BaS0pFallROVcydkhiKzB5M1JkTk9oUDV5M2JXMmJGQm9VT3ozOHBxWWpRQjRYUnpjcWxzY1RLYUhFSStaVVVaRDE2eUFDanVwb2hGeFd4QmFDSGZXZHRDSzdMbUt4MnNEcjFubXE1bXQrRDRJdDdXNGJPampnQllOMTE2dzFKcElMZVFWRGFkWWJFY1JTellvMXg1NG5Ga3dTWUxoaUpGWmxBUnhPUjVqa0Z0Mjl0VkQ0NjJPcWhuemNzOXJTK0pXSERwIiwic3ZjRW5jS1kiOiJoRk5TdHlBNmFvOFwvdnlnalBWRTZ6WWx2QWlGZkplaVwvY3ZBeUdidWExOHk2eHF5OCtURzg5bURMRHM0WVVYRzZJd0JDbXErOEtPUFwvUHp3U05SUzB5RHpMcENOSVN3UlZQbUJCaDZ3WDFOb3dQRHZSYlJTR0xUZ1Jnc3dMNEhqSW1XU2NiTDczcmIrYTJcL0xaWEljSFwvYUc5dVRjMTJFU1dSYWNITUsyWnBBbz0iLCJzdmNFbmNJViI6IjMzMzFjMmY0MTRjZmZhMWFmZDdhOGQyZDJjMjcwZDRkIn0%3D&mode=N
**Browser / Version**: Firefox Mobile 107.0
**Operating System**: Android 13
**Tested Another Browser**: Yes Other
**Problem type**: Site is not usable
**Description**: Browser unsupported
**Steps to Reproduce**:
When browser fingerprinting fails it returns a page with an error code "(Error code: LNK_1004)" and a list of supported browsers. Firefox is listed there.
This page is launched when signing into the Samsung Health app.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/9/cf7a6f3e-dd89-42fc-9dc7-028eb321dada.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220920092542</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/9/636b3c7c-4548-496c-80ff-dfa011c969e0)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
us account samsung com site is not usable url browser version firefox mobile operating system android tested another browser yes other problem type site is not usable description browser unsupported steps to reproduce when browser fingerprinting fails it returns a page with an error code error code lnk and a list of supported browsers firefox is listed there this page is launched when signing into the samsung health app view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
444,027
| 31,017,812,672
|
IssuesEvent
|
2023-08-10 01:00:21
|
duckweedstudios/tournamention
|
https://api.github.com/repos/duckweedstudios/tournamention
|
closed
|
Create database schema
|
documentation enhancement
|
To match specification #3
Since the workload will be pretty frontend-heavy with the embed system I will take over on backend and set up the MongoDB database collection schema
|
1.0
|
Create database schema - To match specification #3
Since the workload will be pretty frontend-heavy with the embed system I will take over on backend and set up the MongoDB database collection schema
|
non_process
|
create database schema to match specification since the workload will be pretty frontend heavy with the embed system i will take over on backend and setup the mongodb database collection schema
| 0
|
41,628
| 12,832,606,407
|
IssuesEvent
|
2020-07-07 07:57:57
|
rvvergara/rails_toy_app
|
https://api.github.com/repos/rvvergara/rails_toy_app
|
opened
|
CVE-2020-10663 (High) detected in json-2.1.0.gem
|
security vulnerability
|
## CVE-2020-10663 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-2.1.0.gem</b></p></summary>
<p>This is a JSON implementation as a Ruby extension in C.</p>
<p>Library home page: <a href="https://rubygems.org/gems/json-2.1.0.gem">https://rubygems.org/gems/json-2.1.0.gem</a></p>
<p>
Dependency Hierarchy:
- :x: **json-2.1.0.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/rails_toy_app/commit/6a7b117f02b8b0166d564b1b9ca00233f9b099bc">6a7b117f02b8b0166d564b1b9ca00233f9b099bc</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The JSON gem through 2.2.0 for Ruby, as used in Ruby 2.4 through 2.4.9, 2.5 through 2.5.7, and 2.6 through 2.6.5, has an Unsafe Object Creation Vulnerability. This is quite similar to CVE-2013-0269, but does not rely on poor garbage-collection behavior within Ruby. Specifically, use of JSON parsing methods can lead to creation of a malicious object within the interpreter, with adverse effects that are application-dependent.
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10663>CVE-2020-10663</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/">https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/</a></p>
<p>Release Date: 2020-03-28</p>
<p>Fix Resolution: 2.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-10663 (High) detected in json-2.1.0.gem - ## CVE-2020-10663 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-2.1.0.gem</b></p></summary>
<p>This is a JSON implementation as a Ruby extension in C.</p>
<p>Library home page: <a href="https://rubygems.org/gems/json-2.1.0.gem">https://rubygems.org/gems/json-2.1.0.gem</a></p>
<p>
Dependency Hierarchy:
- :x: **json-2.1.0.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rvvergara/rails_toy_app/commit/6a7b117f02b8b0166d564b1b9ca00233f9b099bc">6a7b117f02b8b0166d564b1b9ca00233f9b099bc</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The JSON gem through 2.2.0 for Ruby, as used in Ruby 2.4 through 2.4.9, 2.5 through 2.5.7, and 2.6 through 2.6.5, has an Unsafe Object Creation Vulnerability. This is quite similar to CVE-2013-0269, but does not rely on poor garbage-collection behavior within Ruby. Specifically, use of JSON parsing methods can lead to creation of a malicious object within the interpreter, with adverse effects that are application-dependent.
<p>Publish Date: 2020-04-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10663>CVE-2020-10663</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/">https://www.ruby-lang.org/en/news/2020/03/19/json-dos-cve-2020-10663/</a></p>
<p>Release Date: 2020-03-28</p>
<p>Fix Resolution: 2.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in json gem cve high severity vulnerability vulnerable library json gem this is a json implementation as a ruby extension in c library home page a href dependency hierarchy x json gem vulnerable library found in head commit a href vulnerability details the json gem through for ruby as used in ruby through through and through has an unsafe object creation vulnerability this is quite similar to cve but does not rely on poor garbage collection behavior within ruby specifically use of json parsing methods can lead to creation of a malicious object within the interpreter with adverse effects that are application dependent publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
20,688
| 27,360,215,927
|
IssuesEvent
|
2023-02-27 15:24:05
|
haddocking/haddock3
|
https://api.github.com/repos/haddocking/haddock3
|
closed
|
haddock3-analyse: export image, matplotlib vs plotly
|
analysis/postprocessing
|
Currently, the command `haddock3-analyse` takes `png` and `dpi` arguments, see [here](https://github.com/haddocking/haddock3/blob/492b6b07924d7a8912c320d447dfc7bfe0b70b51/src/haddock/clis/cli_analyse.py#L102-L116). The module [libplots.py](https://github.com/haddocking/haddock3/blob/main/src/haddock/libs/libplots.py) uses `matplotlib` if `png` otherwise `plotly`. Note that `matplotlib` has not been added to the package dependencies.
However, `plotly` supports exporting images to different formats if the package `kaleido` is installed, see [Static Image Export in Python](https://plotly.com/python/static-image-export/). As explained in doc, with only one function `fig.write_image(f"fig.{format}")`, it is possible to export image to different formats.
The definitions of figure size and dpi are also different:
- `matplotlib`: `dpi` for resolution, figure size in inches
- `plotly`: `scale` for resolution, figure size in px
Here, I show two examples provided in the documentation of each tool with default values.
```python
import numpy as np
np.random.seed(1)
N = 100
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = (30 * np.random.rand(N))
```
**[Example: plotly scatter](https://plotly.com/python/static-image-export/)**
```python
import plotly.graph_objects as go
plotly_fig = go.Figure()
plotly_fig.add_trace(go.Scatter(
x=x,
y=y,
mode="markers",
marker=go.scatter.Marker(
size=area, #Sets the marker size (in px).
color=colors,
opacity=0.6,
colorscale="Viridis"
)
))
plotly_fig.write_image("plotly_fig.png") # scale=1 default: [700, 450] width, height in px
plotly_fig.show()
```

**[Example: matplotlib scatter](https://matplotlib.org/stable/gallery/shapes_and_collections/scatter.html#sphx-glr-gallery-shapes-and-collections-scatter-py)**
```python
import matplotlib.pyplot as plt
matplotlib_fig = plt.figure()  # default: [6.4, 4.8] width, height in inches, roughly [614.4, 460.8] width, height in px
plt.scatter(x, y, s=area**2, c=colors, alpha=0.6)
plt.savefig("matplotlib_fig.png") # dpi =100
```

File size:
```bash
76K matplotlib_fig.png
60K plotly_fig.png
```
|
1.0
|
haddock3-analyse: export image, matplotlib vs plotly - Currently, the command `haddock3-analyse` takes `png` and `dpi` arguments, see [here](https://github.com/haddocking/haddock3/blob/492b6b07924d7a8912c320d447dfc7bfe0b70b51/src/haddock/clis/cli_analyse.py#L102-L116). The module [libplots.py](https://github.com/haddocking/haddock3/blob/main/src/haddock/libs/libplots.py) uses `matplotlib` if `png` otherwise `plotly`. Note that `matplotlib` has not been added to the package dependencies.
However, `plotly` supports exporting images to different formats if the package `kaleido` is installed, see [Static Image Export in Python](https://plotly.com/python/static-image-export/). As explained in doc, with only one function `fig.write_image(f"fig.{format}")`, it is possible to export image to different formats.
The definitions of figure size and dpi are also different:
- `matplotlib`: `dpi` for resolution, figure size in inches
- `plotly`: `scale` for resolution, figure size in px
Here, I show two examples provided in the documentation of each tool with default values.
```python
import numpy as np
np.random.seed(1)
N = 100
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = (30 * np.random.rand(N))
```
**[Example: plotly scatter](https://plotly.com/python/static-image-export/)**
```python
import plotly.graph_objects as go
plotly_fig = go.Figure()
plotly_fig.add_trace(go.Scatter(
x=x,
y=y,
mode="markers",
marker=go.scatter.Marker(
size=area, #Sets the marker size (in px).
color=colors,
opacity=0.6,
colorscale="Viridis"
)
))
plotly_fig.write_image("plotly_fig.png") # scale=1 default: [700, 450] width, height in px
plotly_fig.show()
```

**[Example: matplotlib scatter](https://matplotlib.org/stable/gallery/shapes_and_collections/scatter.html#sphx-glr-gallery-shapes-and-collections-scatter-py)**
```python
import matplotlib.pyplot as plt
matplotlib_fig = plt.figure()  # default: [6.4, 4.8] width, height in inches, roughly [614.4, 460.8] width, height in px
plt.scatter(x, y, s=area**2, c=colors, alpha=0.6)
plt.savefig("matplotlib_fig.png") # dpi =100
```

File size:
```bash
76K matplotlib_fig.png
60K plotly_fig.png
```
|
process
|
analyse export image matplotlib vs plotly currently the command analyse takes png and dpi arguments see the module uses matplotlib if png otherwise plotly note that matplotlib has not been added to the package dependencies however plotly supports exporting images to different formats if the package kaleido is installed see as explained in doc with only one function fig write image f fig format it is possible to export image to different formats the definitions of figure size and dpi are also different matplotlib dpi for resolution figure size in inches plotly scale for resolution figure size in px here i show two examples provided in the documentation of each tool with default values python import numpy as np np random seed n x np random rand n y np random rand n colors np random rand n area np random rand n python import plotly graph objects as go plotly fig go figure plotly fig add trace go scatter x x y y mode markers marker go scatter marker size area sets the marker size in px color colors opacity colorscale viridis plotly fig write image plotly fig png scale default width height in px plotly fig show python import matplotlib pyplot as plt matplotlib fig plt figure default width height in inches width height in inches in px plt scatter x y s area c colors alpha plt savefig matplotlib fig png dpi file size bash matplotlib fig png plotly fig png
| 1
|
49,271
| 3,001,891,299
|
IssuesEvent
|
2015-07-24 14:18:41
|
jayway/powermock
|
https://api.github.com/repos/jayway/powermock
|
opened
|
add createStrictPartialMockForAllMethodsExcept(Class<T> type, Method... methods ) method in PowerMock class
|
bug imported Priority-Medium
|
_From [cndoublehero@gmail.com](https://code.google.com/u/cndoublehero@gmail.com/) on December 28, 2010 11:49:33_
This is a suggestion.
After trying the PowerMock suite for a few days, I suggest that the PowerMock class add two methods like the ones below:
createStrictPartialMockForAllMethodsExcept(Class<T> type, Method... methods )
createPartialMockForAllMethodsExcept(Class<T> type, Method... methods )
The situation is like this: I have a class named AppaleList that has lots of methods. The code is like this:
public String getStr(int length, int width, String str) {
return getStr(length, width);
}
private String getStr(int length, int width) {
return "str";
}
Now I want to mock the private getStr method and some other methods which I didn't list in the code, but I can't use the createPartialMockForAllMethodsExcept method because it will mock both the public and private getStr methods. So I have to find the other methods and use the createMock(Class<T> type, Method... methods) method. It is a painful process.
Thank you.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=303_
|
1.0
|
add createStrictPartialMockForAllMethodsExcept(Class<T> type, Method... methods ) method in PowerMock class - _From [cndoublehero@gmail.com](https://code.google.com/u/cndoublehero@gmail.com/) on December 28, 2010 11:49:33_
This is a suggestion.
After trying the PowerMock suite for a few days, I suggest that the PowerMock class add two methods like the ones below:
createStrictPartialMockForAllMethodsExcept(Class<T> type, Method... methods )
createPartialMockForAllMethodsExcept(Class<T> type, Method... methods )
The situation is like this: I have a class named AppaleList that has lots of methods. The code is like this:
public String getStr(int length, int width, String str) {
return getStr(length, width);
}
private String getStr(int length, int width) {
return "str";
}
Now I want to mock the private getStr method and some other methods which I didn't list in the code, but I can't use the createPartialMockForAllMethodsExcept method because it will mock both the public and private getStr methods. So I have to find the other methods and use the createMock(Class<T> type, Method... methods) method. It is a painful process.
Thank you.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=303_
|
non_process
|
add createstrictpartialmockforallmethodsexcept class type method methods method in powermock class from on december this is a suggestion after i try the powermock suite in a few days i suggest that the powermock class can add two methods like below createstrictpartialmockforallmethodsexcept class type method methods createpartialmockforallmethodsexcept class type method methods the situation is like this i have a class named appalelist has lots of methods the code is like this public string getstr int length int width string str return getstr length width private string getstr int length int width return str now i want to mock the private getstr method and some other method which didn t list in the code but i cann t use createpartialmockforallmethodsexcept method because this method will mock the both public and private getstr method so i have to find the other methods and use the createmock class type method methods method it is a painful process thank you original issue
| 0
|
6,661
| 9,781,881,798
|
IssuesEvent
|
2019-06-07 21:10:45
|
googleapis/google-cloud-cpp-spanner
|
https://api.github.com/repos/googleapis/google-cloud-cpp-spanner
|
closed
|
Create a CI build to test the CMake files as a super-build.
|
type: process
|
One of the expected use-cases for the CMake files is to compile the project as part of a larger super-build. We need to test this in at least one of the CI builds.
|
1.0
|
Create a CI build to test the CMake files as a super-build. - One of the expected use-cases for the CMake files is to compile the project as part of a larger super-build. We need to test this in at least one of the CI builds.
|
process
|
create a ci build to test the cmake files as a super build one of the expected use cases for the cmake files is to compile the project as part of a larger super build we need to test this in at least one of the ci builds
| 1
|
11,421
| 14,247,046,765
|
IssuesEvent
|
2020-11-19 10:53:53
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
`deleteMany` should be valid without parameters
|
kind/improvement process/candidate team/client tech/typescript
|
As of now, if I want to do this call:
```ts
prisma.user.deleteMany()
```
The TypeScript types complain, that I need to at least provide an empty object as input:
```ts
prisma.user.deleteMany({})
```
This empty object is not necessary and should just be optional.
|
1.0
|
`deleteMany` should be valid without parameters - As of now, if I want to do this call:
```ts
prisma.user.deleteMany()
```
The TypeScript types complain, that I need to at least provide an empty object as input:
```ts
prisma.user.deleteMany({})
```
This empty object is not necessary and should just be optional.
|
process
|
deletemany should be valid without parameters as of now if i want to do this call ts prisma user deletemany the typescript types complain that i need to at least provide an empty object as input ts prisma user deletemany this empty object is not necessary and should just be optional
| 1
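The ergonomics the `deleteMany` record above asks for (a filter argument that may be omitted entirely instead of requiring an empty object) can be sketched in plain Python; the function and its in-memory "table" are hypothetical stand-ins for illustration, not Prisma's actual generated client:

```python
def delete_many(where=None):
    """Delete rows matching `where`; omitting the filter matches every row."""
    where = where or {}
    rows = [{"id": 1}, {"id": 2}]  # placeholder store standing in for a table
    # Keep only rows that fail at least one filter condition.
    kept = [r for r in rows if any(r.get(k) != v for k, v in where.items())]
    return len(rows) - len(kept)  # number of rows deleted

delete_many()            # valid with no argument: deletes everything (returns 2)
delete_many({"id": 1})   # deletes only the matching row (returns 1)
```

Defaulting the argument to `None` rather than a mutable `{}` is the idiomatic way to make it optional without sharing state between calls.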
|
10,863
| 13,633,965,402
|
IssuesEvent
|
2020-09-24 22:35:32
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
opened
|
Links with trailing #IDs break in PDF output
|
bug plugin/pdf preprocess2
|
## Expected Behavior
Valid HTML links should be preserved as authored in all output formats.
## Actual Behavior
Links that end with a fragment identifier `#ID` pointing to a subordinate resource _(like deep links to the DITA spec)_ work fine in HTML output, but break in PDF output.
### Given a source URL with a trailing `#ID`
- http://docs.oasis-open.org/dita/dita/v1.3/errata02/os/complete/part3-all-inclusive/langRef/ditaval/ditaval-revprop.html#ditaval-revprop
### Link URL is output with `%23`
- http://docs.oasis-open.org/dita/dita/v1.3/errata02/os/complete/part3-all-inclusive/langRef/ditaval/ditaval-revprop.html%23ditaval-revprop
## Possible Cause
Looks like `preprocess2` may be overzealously normalizing something along the way and URL-encoding the hash mark `#` that precedes the ID at the end of the URL, resulting in broken links.
## Environment
* DITA-OT version: latest `develop` branch @ b4996131e07bb1960794c5507760ec1eee48e112
* Operating system and version: _macOS_
* How did you run DITA-OT? _`dita` command_
* Transformation type: _PDF_
|
1.0
|
Links with trailing #IDs break in PDF output - ## Expected Behavior
Valid HTML links should be preserved as authored in all output formats.
## Actual Behavior
Links that end with a fragment identifier `#ID` pointing to a subordinate resource _(like deep links to the DITA spec)_ work fine in HTML output, but break in PDF output.
### Given a source URL with a trailing `#ID`
- http://docs.oasis-open.org/dita/dita/v1.3/errata02/os/complete/part3-all-inclusive/langRef/ditaval/ditaval-revprop.html#ditaval-revprop
### Link URL is output with `%23`
- http://docs.oasis-open.org/dita/dita/v1.3/errata02/os/complete/part3-all-inclusive/langRef/ditaval/ditaval-revprop.html%23ditaval-revprop
## Possible Cause
Looks like `preprocess2` may be overzealously normalizing something along the way and URL-encoding the hash mark `#` that precedes the ID at the end of the URL, resulting in broken links.
## Environment
* DITA-OT version: latest `develop` branch @ b4996131e07bb1960794c5507760ec1eee48e112
* Operating system and version: _macOS_
* How did you run DITA-OT? _`dita` command_
* Transformation type: _PDF_
|
process
|
links with trailing ids break in pdf output expected behavior valid html links should be preserved as authored in all output formats actual behavior links that end with a fragment identifier id pointing to a subordinate resource like deep links to the dita spec work fine in html output but break in pdf output given a source url with a trailing id link url is output with possible cause looks like may be overzealously normalizing something along the way and url encoding the hash mark that precedes the id at the end of the url resulting in broken links environment dita ot version latest develop branch operating system and version macos how did you run dita ot dita command transformation type pdf
| 1
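The `%23` breakage described in the DITA-OT record above can be reproduced with Python's standard `urllib.parse`; this is an illustrative sketch of the encoding mistake and a fragment-aware fix, not the actual `preprocess2` code:

```python
from urllib.parse import quote, urlsplit

url = ("http://docs.oasis-open.org/dita/dita/v1.3/errata02/os/complete/"
       "part3-all-inclusive/langRef/ditaval/ditaval-revprop.html#ditaval-revprop")

# Overzealous normalization: percent-encoding the whole string turns the
# fragment delimiter '#' into '%23', which matches the broken output.
broken = quote(url, safe=":/")
assert broken.endswith(".html%23ditaval-revprop")

# Fragment-aware handling: split first, encode only the path, keep '#' literal.
parts = urlsplit(url)
fixed = f"{parts.scheme}://{parts.netloc}{quote(parts.path)}#{parts.fragment}"
assert fixed == url
```

The fix hinges on splitting the URL before any encoding step, so the `#` that separates the fragment is never treated as data to be escaped.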
|
45,037
| 18,350,649,114
|
IssuesEvent
|
2021-10-08 12:07:57
|
carbon-design-system/carbon-for-ibm-dotcom
|
https://api.github.com/repos/carbon-design-system/carbon-for-ibm-dotcom
|
closed
|
[Services] Change the Locale API and the Translation Service API to support a language only translation feature
|
Feature request priority: high package: services dev adopter support sprint demo adopter: Docs/KC adopter: Hybrid cloud
|
### The problem
- The `cloud.ibm.com/docs` application is currently loading just the language code within its `?locale= parameter` (eg. https://cloud.ibm.com/docs?locale=es) where other content ecosystems like Drupal is using both `lc` and `cc`.
- This is a feature that has not yet been accounted for in the Masthead component data fetching script.
### The solution
- Change the Locale API to fetch the Locale or LC code if only LC is provided.
- Change the Translation Service API to pull the correct translation file if only LC is provided.
### Additional information
- Translation API: https://ibmdotcom-services.mybluemix.net/TranslationAPI.html
- Locale API: https://ibmdotcom-services.mybluemix.net/LocaleAPI.html
- Reach out to Putra and Mark Kulube when the code is ready to be tested. They have to test this new function with the IBM Docs team.
|
1.0
|
[Services] Change the Locale API and the Translation Service API to support a language only translation feature - ### The problem
- The `cloud.ibm.com/docs` application is currently loading just the language code within its `?locale= parameter` (eg. https://cloud.ibm.com/docs?locale=es) where other content ecosystems like Drupal is using both `lc` and `cc`.
- This is a feature that has not yet been accounted for in the Masthead component data fetching script.
### The solution
- Change the Locale API to fetch the Locale or LC code if only LC is provided.
- Change the Translation Service API to pull the correct translation file if only LC is provided.
### Additional information
- Translation API: https://ibmdotcom-services.mybluemix.net/TranslationAPI.html
- Locale API: https://ibmdotcom-services.mybluemix.net/LocaleAPI.html
- Reach out to Putra and Mark Kulube when the code is ready to be tested. They have to test this new function with the IBM Docs team.
|
non_process
|
change the locale api and the translation service api to support a language only translation feature the problem the cloud ibm com docs application is currently loading just the language code within its locale parameter eg where other content ecosystems like drupal is using both lc and cc this is a feature that has not yet been accounted for in the masthead component data fetching script the solution change the locale api to fetch the locale or lc code if only lc is provided change the translation service api to pull the correct translation file if only lc is provided additional information translation api locale api reach out to putra and mark kulube when the code is ready to be tested they have to test this new function with the ibm docs team
| 0
|
16,710
| 21,869,632,430
|
IssuesEvent
|
2022-05-19 03:12:58
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Processing settings dialog cleared after a search is done in Settings dialog
|
Processing Bug
|
1. Open Settings --> Options --> Processing tab
2. In the top left corner of the dialog do a search, eg 'set'
3. You get a list of algorithms and properties
4. Erase now the search text: you get an empty Processing dialog, with no options.
5. Reopening the dialog of course brings them back
6. Doing more tests, I realized that if you had another tab enabled at point 1 above, did the search, and cleared the search text, the Processing dialog display stays bound to the results of your test instead of going back to its default rendering.
Both 3.10 and master are affected
|
1.0
|
Processing settings dialog cleared after a search is done in Settings dialog - 1. Open Settings --> Options --> Processing tab
2. In the top left corner of the dialog do a search, eg 'set'
3. You get a list of algorithms and properties
4. Erase now the search text: you get an empty Processing dialog, with no options.
5. Reopening the dialog of course brings them back
6. Doing more tests, I realized that if you had another tab enabled at point 1 above, did the search, and cleared the search text, the Processing dialog display stays bound to the results of your test instead of going back to its default rendering.
Both 3.10 and master are affected
|
process
|
processing settings dialog cleared after a search is done in settings dialog open settings options processing tab in the top left corner of the dialog do a search eg set you get a list of algorithms and properties erase now the search text you get an empty processing dialog with no options reopening the dialog of course brings them back doing more tests i realize that if you have enabled another tab at point above did the search and cleared the search text processing dialog display is bound to the results of your test instead of going back to its default rendering and master are concerned
| 1
|
98,085
| 29,368,367,330
|
IssuesEvent
|
2023-05-29 00:02:56
|
ManageIQ/manageiq
|
https://api.github.com/repos/ManageIQ/manageiq
|
closed
|
Qemu - 503 Service Unavailable
|
bug build stale
|
There are no errors in the appliance console and all services are running, but I still can't connect to the ManageIQ UI. I always get 503 - Service Unavailable. I'm using the latest `Morphy` version of the QEMU image.
Any idea what the issue is?

Screenshot of the Overview page of appliance console shows that there are no error.

I did see one error in the log.
[root@localhost ~]# tail -f /var/www/miq/vmdb/log/evm.log | grep -i error
- orchestration.stack.create.error
| |-grep,6408 --color=auto -i error
RX errors 0 dropped 0 overruns 0 frame 0
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
RX errors 0 dropped 0 overruns 0 frame 0
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
0 packet receive errors
0 receive buffer errors
0 send buffer errors
[----] E, [2022-06-17T15:14:04.472241 #6444:2b00ae67b964] ERROR -- evm: AwesomeSpawn: killall exit code: 1
[----] E, [2022-06-17T15:14:04.474102 #6444:2b00ae67b964] ERROR -- evm: AwesomeSpawn: memcached: no process found
|
1.0
|
Qemu - 503 Service Unavailable - There are no errors in the appliance console and all services are running, but I still can't connect to the ManageIQ UI. I always get 503 - Service Unavailable. I'm using the latest `Morphy` version of the QEMU image.
Any idea what the issue is?

Screenshot of the Overview page of appliance console shows that there are no error.

I did see one error in the log.
[root@localhost ~]# tail -f /var/www/miq/vmdb/log/evm.log | grep -i error
- orchestration.stack.create.error
| |-grep,6408 --color=auto -i error
RX errors 0 dropped 0 overruns 0 frame 0
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
RX errors 0 dropped 0 overruns 0 frame 0
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
0 packet receive errors
0 receive buffer errors
0 send buffer errors
[----] E, [2022-06-17T15:14:04.472241 #6444:2b00ae67b964] ERROR -- evm: AwesomeSpawn: killall exit code: 1
[----] E, [2022-06-17T15:14:04.474102 #6444:2b00ae67b964] ERROR -- evm: AwesomeSpawn: memcached: no process found
|
non_process
|
qemu service unavailable there are no errors in the appliance console and all services are running but i can t still connect to manageiq ui i always get service unavailable i m using latest morphy version of qemu image any idea what the issue is screenshot of the overview page of appliance console shows that there are no error i did see one error in the log tail f var www miq vmdb log evm log grep i error orchestration stack create error grep color auto i error rx errors dropped overruns frame tx errors dropped overruns carrier collisions rx errors dropped overruns frame tx errors dropped overruns carrier collisions packet receive errors receive buffer errors send buffer errors e error evm awesomespawn killall exit code e error evm awesomespawn memcached no process found
| 0
|
14,349
| 17,373,930,049
|
IssuesEvent
|
2021-07-30 17:47:54
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Suggestion: Clarify "Variables"
|
Pri1 devops-cicd-process/tech devops/prod doc-enhancement needs-sme
|
On a page where different types of variables and their differences are explained, it would be desired to have the word "variables" not standing without further specification in a sentence like this one:

The current situation results in a potentially ambiguous understanding of the documentation:
- Maybe the note applies to all kinds of variables. In this case it should not be placed under the headline "Macro syntax variables", but preferably at a higher level in the document structure instead.
- Maybe the note applies to Macro syntax variables only. In this case the word "Variables" is confusing and should be more specific.
I suggest either writing "all types of variables" or being more specific, like "Macro syntax variables", depending on what the actual functionality of the application is.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Suggestion: Clarify "Variables" - On a page where different types of variables and their differences are explained, it would be desired to have the word "variables" not standing without further specification in a sentence like this one:

The current situation results in a potentially ambiguous understanding of the documentation:
- Maybe the note applies to all kinds of variables. In this case it should not be placed under the headline "Macro syntax variables", but preferably at a higher level in the document structure instead.
- Maybe the note applies to Macro syntax variables only. In this case the word "Variables" is confusing and should be more specific.
I suggest either writing "all types of variables" or being more specific, like "Macro syntax variables", depending on what the actual functionality of the application is.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a
* Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a
* Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch)
* Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
suggestion clarify variables on a page where different types of variables and their differences are explained it would be desired to have the word variables not standing without further specification in a sentence like this one the current situation results in a potentially ambiguous understanding of the documentation maybe the note applies to all kinds of variables in this case it should not be placed under the headline macro syntax variables but preferably at a higher level in the document structure instead maybe the note applies to macro syntax variables only in this case the word variables is confusing and should be more specific i suggest to either write all types of variables or be more specific like macro syntax variables depending on what the actual functionality of the application is document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
4,260
| 7,189,082,971
|
IssuesEvent
|
2018-02-02 12:42:16
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
grabABI does not handle constructor properly
|
apps-grabABI status-inprocess type-enhancement
|
Probably because the constructor does not have a name in the ABI. This makes it difficult to 'name' the contract. This would be a good idea for an Ethereum EIP. The motivation is to be able to read the ABI and name the contract. Downsides -- names of contracts are not unique, but that should be handled by the user, not the ABI code writer. Give the user the data, let them make decisions about how to use it. Related to #306
|
1.0
|
grabABI does not handle constructor properly - Probably because the constructor does not have a name in the ABI. This makes it difficult to 'name' the contract. This would be a good idea for an Ethereum EIP. The motivation is to be able to read the ABI and name the contract. Downsides -- names of contracts are not unique, but that should be handled by the user, not the ABI code writer. Give the user the data, let them make decisions about how to use it. Related to #306
|
process
|
grababi does not handle constructor properly probably because the constructor does not have a name in the abi this make is difficult to name the contract this would be a good idea for an ethereum eip the motivation is the be able to read the abi and name the contract downsides names of contracts are no unique but that should be handled by the user not the abi code writer give the user the data let them make decision about how to use it related to
| 1
|
648
| 3,113,682,662
|
IssuesEvent
|
2015-09-03 01:18:36
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
process.send() is not synchronous
|
child_process confirmed-bug process
|
According to the documentation for the `child_process` module, `process.send()` should block. It seems that this is no longer the case in `v1.1.0`.
```javascript
//parent.js
var fork = require('child_process').fork;
var child = fork('./child.js');
child.on('message', function (m) {
console.log('got message from child: ' + m);
});
//child.js
process.send(new Buffer(2048));
process.exit(0);
```
In this example, if the child process attempts to send a sufficiently large object to the parent, the child process exits but does not send all of the data. However, smaller messages (perhaps a Buffer of 1024 bytes) are successfully sent.
I think this change in functionality was introduced in 07bd05ba332e078c1ba76635921f5448a3e884cf when `uv__nonblock()` was added to `uv_pipe_open()` in `deps/uv/src/unix/pipe.c`. Removing the call to `uv__nonblock()` restores the original behavior of `process.send()`.
I ran into this issue on OSX 10.10.2 LLVM 6.0 (clang-600.0.56), if it helps.
|
2.0
|
process.send() is not synchronous - According to the documentation for the `child_process` module, `process.send()` should block. It seems that this is no longer the case in `v1.1.0`.
```javascript
//parent.js
var fork = require('child_process').fork;
var child = fork('./child.js');
child.on('message', function (m) {
console.log('got message from child: ' + m);
});
//child.js
process.send(new Buffer(2048));
process.exit(0);
```
In this example, if the child process attempts to send a sufficiently large object to the parent, the child process exits but does not send all of the data. However, smaller messages (perhaps a Buffer of 1024 bytes) are successfully sent.
I think this change in functionality was introduced in 07bd05ba332e078c1ba76635921f5448a3e884cf when `uv__nonblock()` was added to `uv_pipe_open()` in `deps/uv/src/unix/pipe.c`. Removing the call to `uv__nonblock()` restores the original behavior of `process.send()`.
I ran into this issue on OSX 10.10.2 LLVM 6.0 (clang-600.0.56), if it helps.
|
process
|
process send is not synchronous according to the documentation for the child process module process send should block it seems that this is no longer the case in javascript parent js var fork require child process fork var child fork child js child on message function m console log got message from child m child js process send new buffer process exit in this example if the child process attempts to send a sufficiently large object to the parent the child process exits but does not send all of the data however smaller messages perhaps a buffer of bytes are successfully sent i think this change in functionality was introduced in when uv nonblock was added to uv pipe open in deps uv src unix pipe c removing the call to uv nonblock restores the original behavior of process send i ran into this issue on osx llvm clang if it helps
| 1
|
18,694
| 24,595,351,149
|
IssuesEvent
|
2022-10-14 07:50:48
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[PM] [VAPT] Sign in Screen > UI issue
|
Bug P1 Process: Fixed Process: Tested dev Auth server
|
Sign in Screen > UI issue > 'Sign in' text is split into 2 lines

|
2.0
|
[PM] [VAPT] Sign in Screen > UI issue - Sign in Screen > UI issue > 'Sign in' text is split into 2 lines

|
process
|
sign in screen ui issue sign in screen ui issue sign in text is splitted into lines
| 1
|
7,323
| 10,455,271,168
|
IssuesEvent
|
2019-09-19 20:53:51
|
GE-MDS-FNM-V2/GE-MDS-FNM-V2
|
https://api.github.com/repos/GE-MDS-FNM-V2/GE-MDS-FNM-V2
|
closed
|
Figure out how to communicate with david and test it out
|
1 process
|
Due 9/21/19
Over skype?
Google hangouts?
|
1.0
|
Figure out how to communicate with david and test it out - Due 9/21/19
Over skype?
Google hangouts?
|
process
|
figure out how to communicate with david and test it out due over skype google hangouts
| 1
|
21,108
| 28,069,348,613
|
IssuesEvent
|
2023-03-29 17:50:33
|
AvaloniaUI/Avalonia
|
https://api.github.com/repos/AvaloniaUI/Avalonia
|
closed
|
textbox can not input chinese words
|
bug area-textprocessing
|
**Describe the bug**
There's a Chinese input method bug when I tried the version 11.0.0-preview6.
When using the Sogou input method, the textbox cannot input Chinese. When using the Microsoft Pinyin input method, the textbox can input Chinese. However, using these two input methods, the textbox cannot input punctuation such as commas (,) periods (.).
**To Reproduce**
Steps to reproduce the behavior:
1. Run my test app
2. Focus a textbox
3. Switch my input method to Sogou input method,textbox can not input chinese words and punctuation
4. Switch my input method to Microsoft Pinyin input method,textbox can input chinese words but can not input punctuation
**Expected behavior**
i hope that i can input any words with any chinese input method
**Desktop (please complete the following information):**
- OS: OS: Windows 19042
- Version 11.0.0-preview6
|
1.0
|
textbox can not input chinese words - **Describe the bug**
There's a Chinese input method bug when I tried the version 11.0.0-preview6.
When using the Sogou input method, the textbox cannot input Chinese. When using the Microsoft Pinyin input method, the textbox can input Chinese. However, using these two input methods, the textbox cannot input punctuation such as commas (,) periods (.).
**To Reproduce**
Steps to reproduce the behavior:
1. Run my test app
2. Focus a textbox
3. Switch my input method to Sogou input method,textbox can not input chinese words and punctuation
4. Switch my input method to Microsoft Pinyin input method,textbox can input chinese words but can not input punctuation
**Expected behavior**
i hope that i can input any words with any chinese input method
**Desktop (please complete the following information):**
- OS: OS: Windows 19042
- Version 11.0.0-preview6
|
process
|
textbox can not input chinese words describe the bug there s an chinese input method bug when i tried the version when using the sogou input method the textbox cannot input chinese when using the microsoft pinyin input method the textbox can input chinese however using these two input methods the textbox cannot input punctuation such as commas periods to reproduce steps to reproduce the behavior run my test app focus a textbox switch my input method to sogou input method textbox can not input chinese words and punctuation switch my input method to microsoft pinyin input method textbox can input chinese words but can not input punctuation expected behavior i hope that i can input any words with any chinese input method desktop please complete the following information os os windows version
| 1
|
250,627
| 21,317,180,256
|
IssuesEvent
|
2022-04-16 13:38:44
|
orijoon98/Tetris_SE4
|
https://api.github.com/repos/orijoon98/Tetris_SE4
|
closed
|
#2
|
✨ Feature ✅ Test
|
- [x] 게임 조작 키
- [x] 블럭이 쌓이는 보드
- [x] 실시간 점수를 확인할 수 있는 부분
- [x] 좌, 우, 아래로 한 칸씩 이동시킬 수 있어야 함
- [x] 시계방향으로 90도씩 회전시킬 수 있어야 함
- [x] 한 번에 끝까지 밑으로 떨어뜨릴 수 있어야 함
- [x] 줄이 완성되면 사라지고 윗 블록들이 사라진 줄만큼 내려오기
- [x] 더 이상 블럭을 쌓을 수 없게 되면 게임이 종료되어야 함
|
1.0
|
#2 - - [x] 게임 조작 키
- [x] 블럭이 쌓이는 보드
- [x] 실시간 점수를 확인할 수 있는 부분
- [x] 좌, 우, 아래로 한 칸씩 이동시킬 수 있어야 함
- [x] 시계방향으로 90도씩 회전시킬 수 있어야 함
- [x] 한 번에 끝까지 밑으로 떨어뜨릴 수 있어야 함
- [x] 줄이 완성되면 사라지고 윗 블록들이 사라진 줄만큼 내려오기
- [x] 더 이상 블럭을 쌓을 수 없게 되면 게임이 종료되어야 함
|
non_process
|
게임 조작 키 블럭이 쌓이는 보드 실시간 점수를 확인할 수 있는 부분 좌 우 아래로 한 칸씩 이동시킬 수 있어야 함 시계방향으로 회전시킬 수 있어야 함 한 번에 끝까지 밑으로 떨어뜨릴 수 있어야 함 줄이 완성되면 사라지고 윗 블록들이 사라진 줄만큼 내려오기 더 이상 블럭을 쌓을 수 없게 되면 게임이 종료되어야 함
| 0
|
194,741
| 14,686,299,811
|
IssuesEvent
|
2021-01-01 14:15:55
|
github-vet/rangeloop-pointer-findings
|
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
|
closed
|
terraform-providers/terraform-provider-oci: oci/bds_bds_instance_test.go; 16 LoC
|
fresh small test
|
Found a possible issue in [terraform-providers/terraform-provider-oci](https://www.github.com/terraform-providers/terraform-provider-oci) at [oci/bds_bds_instance_test.go](https://github.com/terraform-providers/terraform-provider-oci/blob/507acd0ed6517dbca2fbcfb8100874929c8fd8e1/oci/bds_bds_instance_test.go#L436-L451)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to bdsInstanceId is reassigned at line 440
[Click here to see the code in its original context.](https://github.com/terraform-providers/terraform-provider-oci/blob/507acd0ed6517dbca2fbcfb8100874929c8fd8e1/oci/bds_bds_instance_test.go#L436-L451)
<details>
<summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary>
```go
for _, bdsInstanceId := range bdsInstanceIds {
if ok := SweeperDefaultResourceId[bdsInstanceId]; !ok {
deleteBdsInstanceRequest := oci_bds.DeleteBdsInstanceRequest{}
deleteBdsInstanceRequest.BdsInstanceId = &bdsInstanceId
deleteBdsInstanceRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "bds")
_, error := bdsClient.DeleteBdsInstance(context.Background(), deleteBdsInstanceRequest)
if error != nil {
fmt.Printf("Error deleting BdsInstance %s %s, It is possible that the resource is already deleted. Please verify manually \n", bdsInstanceId, error)
continue
}
waitTillCondition(testAccProvider, &bdsInstanceId, bdsInstanceSweepWaitCondition, time.Duration(3*time.Minute),
bdsInstanceSweepResponseFetchOperation, "bds", true)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 507acd0ed6517dbca2fbcfb8100874929c8fd8e1
|
1.0
|
terraform-providers/terraform-provider-oci: oci/bds_bds_instance_test.go; 16 LoC -
Found a possible issue in [terraform-providers/terraform-provider-oci](https://www.github.com/terraform-providers/terraform-provider-oci) at [oci/bds_bds_instance_test.go](https://github.com/terraform-providers/terraform-provider-oci/blob/507acd0ed6517dbca2fbcfb8100874929c8fd8e1/oci/bds_bds_instance_test.go#L436-L451)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to bdsInstanceId is reassigned at line 440
[Click here to see the code in its original context.](https://github.com/terraform-providers/terraform-provider-oci/blob/507acd0ed6517dbca2fbcfb8100874929c8fd8e1/oci/bds_bds_instance_test.go#L436-L451)
<details>
<summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary>
```go
for _, bdsInstanceId := range bdsInstanceIds {
if ok := SweeperDefaultResourceId[bdsInstanceId]; !ok {
deleteBdsInstanceRequest := oci_bds.DeleteBdsInstanceRequest{}
deleteBdsInstanceRequest.BdsInstanceId = &bdsInstanceId
deleteBdsInstanceRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "bds")
_, error := bdsClient.DeleteBdsInstance(context.Background(), deleteBdsInstanceRequest)
if error != nil {
fmt.Printf("Error deleting BdsInstance %s %s, It is possible that the resource is already deleted. Please verify manually \n", bdsInstanceId, error)
continue
}
waitTillCondition(testAccProvider, &bdsInstanceId, bdsInstanceSweepWaitCondition, time.Duration(3*time.Minute),
bdsInstanceSweepResponseFetchOperation, "bds", true)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 507acd0ed6517dbca2fbcfb8100874929c8fd8e1
|
non_process
|
terraform providers terraform provider oci oci bds bds instance test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to bdsinstanceid is reassigned at line click here to show the line s of go which triggered the analyzer go for bdsinstanceid range bdsinstanceids if ok sweeperdefaultresourceid ok deletebdsinstancerequest oci bds deletebdsinstancerequest deletebdsinstancerequest bdsinstanceid bdsinstanceid deletebdsinstancerequest requestmetadata retrypolicy getretrypolicy true bds error bdsclient deletebdsinstance context background deletebdsinstancerequest if error nil fmt printf error deleting bdsinstance s s it is possible that the resource is already deleted please verify manually n bdsinstanceid error continue waittillcondition testaccprovider bdsinstanceid bdsinstancesweepwaitcondition time duration time minute bdsinstancesweepresponsefetchoperation bds true leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 0
|
21,209
| 28,263,586,586
|
IssuesEvent
|
2023-04-07 03:16:08
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] Integrate `add-alias-info`-style information into MLv2
|
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
We should include something like `:lib/source-column-alias` and `:lib/desired-column-alias` in Column metadata. `:name` is the name of the actual column rather than the name we should use in a query. The name used in a query is derived from this property and other information, so we need to preserve the original in case we re-calculate it.
|
1.0
|
[MLv2] Integrate `add-alias-info`-style information into MLv2 - We should include something like `:lib/source-column-alias` and `:lib/desired-column-alias` in Column metadata. `:name` is the name of the actual column rather than the name we should use in a query. The name used in a query is derived from this property and other information, so we need to preserve the original in case we re-calculate it.
|
process
|
integrate add alias info style information into we should include something like lib source column alias and lib desired column alias in column metadata name is the name of the actual column rather than the name we should use in a query the name used in a query is derived from this property and other information so we need to preserve the original in case we re calculate it
| 1
|
10,945
| 13,755,171,162
|
IssuesEvent
|
2020-10-06 18:02:20
|
jgraley/inferno-cpp2v
|
https://api.github.com/repos/jgraley/inferno-cpp2v
|
closed
|
Don't use BY_VALUE for compare criterion
|
Constraint Processing
|
It'll become confusing since CSPO terminology uses the word "value" as the finest granularity of what variables can be, which would correspond with what we call BY_LOCATION (based on the current problem mapping). Use BY_EQUIVALENCE instead - on the grounds that SimpleCompare matches for equivalence classes..
|
1.0
|
Don't use BY_VALUE for compare criterion - It'll become confusing since CSPO terminology uses the word "value" as the finest granularity of what variables can be, which would correspond with what we call BY_LOCATION (based on the current problem mapping). Use BY_EQUIVALENCE instead - on the grounds that SimpleCompare matches for equivalence classes..
|
process
|
don t use by value for compare criterion it ll become confusing since cspo terminology uses the word value as the finest granularity of what variables can be which would correspond with what we call by location based on the current problem mapping use by equivalence instead on the grounds that simplecompare matches for equivalence classes
| 1
|
10,861
| 13,632,968,929
|
IssuesEvent
|
2020-09-24 20:34:03
|
googleapis/python-api-common-protos
|
https://api.github.com/repos/googleapis/python-api-common-protos
|
closed
|
LICENSE is missing from distributed sdist tarball
|
type: process
|
#### Environment details
- OS type and version:
- Python version: 3.8.5
- pip version: 20.0.2
- `googleapis-common-protos` version: 1.52.0
#### Steps to reproduce
1. the distributed sdist tarball (downloadable under https://files.pythonhosted.org/packages/source/g/googleapis-common-protos/googleapis-common-protos-1.52.0.tar.gz ) is missing the LICENSE file which is at the top of the directory
2. in order to include it, a MANIFEST.in needs to be added to the git repository that includes a reference to LICENSE (and potentially other files that should be part of the software distribution). see https://packaging.python.org/guides/using-manifest-in/#how-files-are-included-in-an-sdist
|
1.0
|
LICENSE is missing from distributed sdist tarball - #### Environment details
- OS type and version:
- Python version: 3.8.5
- pip version: 20.0.2
- `googleapis-common-protos` version: 1.52.0
#### Steps to reproduce
1. the distributed sdist tarball (downloadable under https://files.pythonhosted.org/packages/source/g/googleapis-common-protos/googleapis-common-protos-1.52.0.tar.gz ) is missing the LICENSE file which is at the top of the directory
2. in order to include it, a MANIFEST.in needs to be added to the git repository that includes a reference to LICENSE (and potentially other files that should be part of the software distribution). see https://packaging.python.org/guides/using-manifest-in/#how-files-are-included-in-an-sdist
|
process
|
license is missing from distributed sdist tarball environment details os type and version python version pip version googleapis common protos version steps to reproduce the distributed sdist tarball downloadable under is missing the license file which is at the top of the directory in order to include it a manifest in needs to be added to the git repository that includes a reference to license and potentially other files that should be part of the software distribution see
| 1
|
4,458
| 7,329,925,390
|
IssuesEvent
|
2018-03-05 07:56:59
|
w3c/csswg-drafts
|
https://api.github.com/repos/w3c/csswg-drafts
|
opened
|
WG Agendas Should Take Advantage of Contextal Information In Addition to Issue Numbers
|
Needs Process Help
|
Chairs keep posting agendas like:
https://lists.w3.org/Archives/Public/www-style/2018Feb/0068.html
Witness Item #10:
> 10. [css-sizing] Percentage sizing section is kind of vague
> https://github.com/w3c/csswg-drafts/issues/1132
There is no context provided: the title is just the title of the issue, and the link just drops you at the top of the issue, which starts of with some very confused discussion that intimidates anyone who wants to actually look at what's up for discussion.
This is was the approach to drawing up the agenda despite that fantasai
- added a comment summarizing the state of the issue when tagging it Agenda+ in https://github.com/w3c/csswg-drafts/issues/1132#issuecomment-363623845
- summarized the state of the spec and the purpose of the needed WG discussion in https://lists.w3.org/Archives/Public/www-style/2018Feb/0009.html
- has requested multiple times that the chairs make an effort to use the provided context when compiling the agenda rather than just linking to the top of the issue and copying its title
- in this case, even reviewed the issue with the WG on the [previous telecon](https://lists.w3.org/Archives/Public/www-style/2018Feb/0045.html), leaving a week to review, so that the WG could would be familiar with the issue and could quickly dispose the topic
the chairs nonetheless ignored all the context and copied the issue title and dropped only a link to the top of the discussion.
And thus the WG looked at the issue on the call, decided it was too complicated and scary (because without any context, and dropped into the top of the confusing start of the discussion, it is), and skipped the topic. Because nobody actually understood what needed to be discussed. Because the chairs *elided* all of the context that came with the Agenda+ request.
Thus fantasai is filing this issue as a totally unnecessary *regression* in the CSSWG process due to the move to GitHub, since previously she could link to a specific email (rather than the top of the thread) to be added to the agenda, and could include in that email all the context for everyone to understand an issue, and the chairs would _use that message URL_ as the topic to discuss; but now there is no way to do so _because the chairs refuse to use more specific URLs_. Even if there is a comment provided with the Agenda+ request summarizing the issue, even if there is an email providing context for the discussion, each of which has a URL that could have been linked to, the chairs continue to *refuse* to provide that contextual information and insist on a) copying the issue title directly, even if the topic at hand is more specific and b) linking to the top of the issue, wasting the effort of the person requesting the topic and trying to provide context, and wasting the time of everyone who wishes to participate in the discussion but has to unnecessarily wade through the morass of previous discussion first.
|
1.0
|
WG Agendas Should Take Advantage of Contextal Information In Addition to Issue Numbers - Chairs keep posting agendas like:
https://lists.w3.org/Archives/Public/www-style/2018Feb/0068.html
Witness Item #10:
> 10. [css-sizing] Percentage sizing section is kind of vague
> https://github.com/w3c/csswg-drafts/issues/1132
There is no context provided: the title is just the title of the issue, and the link just drops you at the top of the issue, which starts of with some very confused discussion that intimidates anyone who wants to actually look at what's up for discussion.
This is was the approach to drawing up the agenda despite that fantasai
- added a comment summarizing the state of the issue when tagging it Agenda+ in https://github.com/w3c/csswg-drafts/issues/1132#issuecomment-363623845
- summarized the state of the spec and the purpose of the needed WG discussion in https://lists.w3.org/Archives/Public/www-style/2018Feb/0009.html
- has requested multiple times that the chairs make an effort to use the provided context when compiling the agenda rather than just linking to the top of the issue and copying its title
- in this case, even reviewed the issue with the WG on the [previous telecon](https://lists.w3.org/Archives/Public/www-style/2018Feb/0045.html), leaving a week to review, so that the WG could would be familiar with the issue and could quickly dispose the topic
the chairs nonetheless ignored all the context and copied the issue title and dropped only a link to the top of the discussion.
And thus the WG looked at the issue on the call, decided it was too complicated and scary (because without any context, and dropped into the top of the confusing start of the discussion, it is), and skipped the topic. Because nobody actually understood what needed to be discussed. Because the chairs *elided* all of the context that came with the Agenda+ request.
Thus fantasai is filing this issue as a totally unnecessary *regression* in the CSSWG process due to the move to GitHub, since previously she could link to a specific email (rather than the top of the thread) to be added to the agenda, and could include in that email all the context for everyone to understand an issue, and the chairs would _use that message URL_ as the topic to discuss; but now there is no way to do so _because the chairs refuse to use more specific URLs_. Even if there is a comment provided with the Agenda+ request summarizing the issue, even if there is an email providing context for the discussion, each of which has a URL that could have been linked to, the chairs continue to *refuse* to provide that contextual information and insist on a) copying the issue title directly, even if the topic at hand is more specific and b) linking to the top of the issue, wasting the effort of the person requesting the topic and trying to provide context, and wasting the time of everyone who wishes to participate in the discussion but has to unnecessarily wade through the morass of previous discussion first.
|
process
|
wg agendas should take advantage of contextal information in addition to issue numbers chairs keep posting agendas like witness item percentage sizing section is kind of vague there is no context provided the title is just the title of the issue and the link just drops you at the top of the issue which starts of with some very confused discussion that intimidates anyone who wants to actually look at what s up for discussion this is was the approach to drawing up the agenda despite that fantasai added a comment summarizing the state of the issue when tagging it agenda in summarized the state of the spec and the purpose of the needed wg discussion in has requested multiple times that the chairs make an effort to use the provided context when compiling the agenda rather than just linking to the top of the issue and copying its title in this case even reviewed the issue with the wg on the leaving a week to review so that the wg could would be familiar with the issue and could quickly dispose the topic the chairs nonetheless ignored all the context and copied the issue title and dropped only a link to the top of the discussion and thus the wg looked at the issue on the call decided it was too complicated and scary because without any context and dropped into the top of the confusing start of the discussion it is and skipped the topic because nobody actually understood what needed to be discussed because the chairs elided all of the context that came with the agenda request thus fantasai is filing this issue as a totally unnecessary regression in the csswg process due to the move to github since previously she could link to a specific email rather than the top of the thread to be added to the agenda and could include in that email all the context for everyone to understand an issue and the chairs would use that message url as the topic to discuss but now there is no way to do so because the chairs refuse to use more specific urls even if there is a comment provided with 
the agenda request summarizing the issue even if there is an email providing context for the discussion each of which has a url that could have been linked to the chairs continue to refuse to provide that contextual information and insist on a copying the issue title directly even if the topic at hand is more specific and b linking to the top of the issue wasting the effort of the person requesting the topic and trying to provide context and wasting the time of everyone who wishes to participate in the discussion but has to unnecessarily wade through the morass of previous discussion first
| 1
|
7,458
| 10,561,622,878
|
IssuesEvent
|
2019-10-04 16:17:22
|
liskcenterutrecht/lisk.bike
|
https://api.github.com/repos/liskcenterutrecht/lisk.bike
|
opened
|
End rental
|
Blockchain App Process Flow User application task
|
if user wishes to end rental, user application checks that rental status of bike is started, sends request to VLS to close the lock and looks for status update that lock is closed.
- [ ] Userapp_read_Rental-status (BikeID;Started)
- [ ] UserApp_Send_request-close-lock (BikeID;UserID)
- [ ] UserApp_Read_lockstatus(BikeID;Closed)
Rental costs are calculated – (rental end - rental start) * rental costs
– subtracted from deposit, and remaining deposit is returned from BikeID to UserID:
Rental payment: reimburse deposit - rental costs; (rental end - rental start) * rental costs
UserApp_Read_rental costs (BikeID)
UserApp_Read_timestamp-rental-start (BikeID)
UserApp_Read_timestamp-lockstatus (BikeID;Closed)
UserApp_Send_DepositAmount (BikeID) - RentalCosts(BikeID) * RentalTime (time-lock-closed - time-rental-start)
Rental status of BikeID is updated
UserApp_Send_Update rental status (BikeID;Ended)
|
1.0
|
End rental - if user wishes to end rental, user application checks that rental status of bike is started, sends request to VLS to close the lock and looks for status update that lock is closed.
- [ ] Userapp_read_Rental-status (BikeID;Started)
- [ ] UserApp_Send_request-close-lock (BikeID;UserID)
- [ ] UserApp_Read_lockstatus(BikeID;Closed)
Rental costs are calculated – (rental end - rental start) * rental costs
– subtracted from deposit, and remaining deposit is returned from BikeID to UserID:
Rental payment: reimburse deposit - rental costs; (rental end - rental start) * rental costs
UserApp_Read_rental costs (BikeID)
UserApp_Read_timestamp-rental-start (BikeID)
UserApp_Read_timestamp-lockstatus (BikeID;Closed)
UserApp_Send_DepositAmount (BikeID) - RentalCosts(BikeID) * RentalTime (time-lock-closed - time-rental-start)
Rental status of BikeID is updated
UserApp_Send_Update rental status (BikeID;Ended)
|
process
|
end rental if user wishes to end rental user application checks that rental status of bike is started sends request to vls to close the lock and looks for status update that lock is closed userapp read rental status bikeid started userapp send request close lock bikeid userid userapp read lockstatus bikeid closed rental costs are calculated – rental end rental start rental costs – substracted from deposit and remaining deposit is returned from bikeid to userid rental payment reimburse deposit rental costs rental end rental start rental costs userapp read rental costs bikeid userapp read timestamp rental start bikeid userapp read timestamp lockstatus bikeid closed userapp send depositamount bikeid rentalcosts bikeid rentaltime time lock closed time rental start rental status of bikeid is updated userapp send update rental status bikeid ended
| 1
|
72,038
| 8,698,416,624
|
IssuesEvent
|
2018-12-04 23:21:18
|
ArastoSahbaei/CommunityRatesGames
|
https://api.github.com/repos/ArastoSahbaei/CommunityRatesGames
|
closed
|
Convert NavigationBar >= GridLayout
|
Design Frontend Development
|
Re-design to a more beautiful website using Grid-Layout
CSS Grid Layout is the most powerful layout system available in CSS. It is a 2-dimensional system, meaning it can handle both columns and rows, unlike flexbox which is largely a 1-dimensional system. You work with Grid Layout by applying CSS rules both to a parent element (which becomes the Grid Container) and to that elements children (which become Grid Items).
|
1.0
|
Convert NavigationBar >= GridLayout - Re-design to a more beautiful website using Grid-Layout
CSS Grid Layout is the most powerful layout system available in CSS. It is a 2-dimensional system, meaning it can handle both columns and rows, unlike flexbox which is largely a 1-dimensional system. You work with Grid Layout by applying CSS rules both to a parent element (which becomes the Grid Container) and to that element's children (which become Grid Items).
|
non_process
|
convert navigationbar gridlayout re design to a more beautiful website using grid layout css grid layout is the most powerful layout system available in css it is a dimensional system meaning it can handle both columns and rows unlike flexbox which is largely a dimensional system you work with grid layout by applying css rules both to a parent element which becomes the grid container and to that elements children which become grid items
| 0
|
49,548
| 13,187,232,004
|
IssuesEvent
|
2020-08-13 02:45:58
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
[frame_object_diff] misleading indentation (Trac #1660)
|
Incomplete Migration Migrated from Trac combo reconstruction defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1660">https://code.icecube.wisc.edu/ticket/1660</a>, reported by david.schultz and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "This one actually leads to wrong behavior. Yay compiler checks.\n\n{{{\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:27:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:29:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n droopTimeConstants_[1] = cal.droopTimeConstants_[1];\n ^~~~~~~~~~~~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:63:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:65:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n ampGains_[1] = cal.ampGains_[1];\n ^~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:70:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:72:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n atwdFreq_[1] = cal.atwdFreq_[1];\n ^~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:106:3: warning: this \u2018else\u2019 clause does not guard... 
[-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:108:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n atwdDeltaT_[1] = cal.atwdDeltaT_[1];\n ^~~~~~~~~~~\n}}}",
"reporter": "david.schultz",
"cc": "claudio.kopper, blaufuss",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[frame_object_diff] misleading indentation",
"priority": "blocker",
"keywords": "",
"time": "2016-04-26T19:58:32",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[frame_object_diff] misleading indentation (Trac #1660) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1660">https://code.icecube.wisc.edu/ticket/1660</a>, reported by david.schultz and owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:11:57",
"description": "This one actually leads to wrong behavior. Yay compiler checks.\n\n{{{\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:27:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:29:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n droopTimeConstants_[1] = cal.droopTimeConstants_[1];\n ^~~~~~~~~~~~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:63:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:65:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n ampGains_[1] = cal.ampGains_[1];\n ^~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:70:3: warning: this \u2018else\u2019 clause does not guard... [-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:72:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n atwdFreq_[1] = cal.atwdFreq_[1];\n ^~~~~~~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:106:3: warning: this \u2018else\u2019 clause does not guard... 
[-Wmisleading-indentation]\n else\n ^~~~\n/home/dschultz/Documents/combo/trunk/src/frame_object_diff/private/frame_object_diff/calibration/I3DOMCalibrationDiff.cxx:108:5: note: ...this statement, but the latter is misleadingly indented as if it is guarded by the \u2018else\u2019\n atwdDeltaT_[1] = cal.atwdDeltaT_[1];\n ^~~~~~~~~~~\n}}}",
"reporter": "david.schultz",
"cc": "claudio.kopper, blaufuss",
"resolution": "fixed",
"_ts": "1550067117911749",
"component": "combo reconstruction",
"summary": "[frame_object_diff] misleading indentation",
"priority": "blocker",
"keywords": "",
"time": "2016-04-26T19:58:32",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
misleading indentation trac migrated from json status closed changetime description this one actually leads to wrong behavior yay compiler checks n n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this clause does not guard n else n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the n drooptimeconstants cal drooptimeconstants n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this clause does not guard n else n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the n ampgains cal ampgains n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this clause does not guard n else n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the n atwdfreq cal atwdfreq n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx warning this clause does not guard n else n n home dschultz documents combo trunk src frame object diff private frame object diff calibration cxx note this statement but the latter is misleadingly indented as if it is guarded by the n atwddeltat cal atwddeltat n n reporter david schultz cc claudio kopper blaufuss resolution fixed ts component combo reconstruction summary misleading indentation priority blocker keywords time milestone owner david schultz type defect
| 0
|
102,950
| 16,594,640,989
|
IssuesEvent
|
2021-06-01 12:04:01
|
scriptex/webpack-mpa-next
|
https://api.github.com/repos/scriptex/webpack-mpa-next
|
opened
|
CVE-2021-33587 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2021-33587 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>css-what-4.0.0.tgz</b>, <b>css-what-2.1.3.tgz</b>, <b>css-what-3.4.2.tgz</b></p></summary>
<p>
<details><summary><b>css-what-4.0.0.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-4.0.0.tgz">https://registry.npmjs.org/css-what/-/css-what-4.0.0.tgz</a></p>
<p>Path to dependency file: webpack-mpa-next/package.json</p>
<p>Path to vulnerable library: webpack-mpa-next/node_modules/css-what</p>
<p>
Dependency Hierarchy:
- svgo-2.3.0.tgz (Root Library)
- css-select-3.1.2.tgz
- :x: **css-what-4.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>css-what-2.1.3.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-2.1.3.tgz">https://registry.npmjs.org/css-what/-/css-what-2.1.3.tgz</a></p>
<p>Path to dependency file: webpack-mpa-next/package.json</p>
<p>Path to vulnerable library: webpack-mpa-next/node_modules/css-what</p>
<p>
Dependency Hierarchy:
- spritesh-1.2.1.tgz (Root Library)
- cheerio-0.20.0.tgz
- css-select-1.2.0.tgz
- :x: **css-what-2.1.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>css-what-3.4.2.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz">https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz</a></p>
<p>Path to dependency file: webpack-mpa-next/package.json</p>
<p>Path to vulnerable library: webpack-mpa-next/node_modules/css-what</p>
<p>
Dependency Hierarchy:
- critical-3.1.0.tgz (Root Library)
- postcss-image-inliner-4.0.4.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **css-what-3.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/scriptex/webpack-mpa-next/commit/19c0964cddb7c63df15996aea7913089d58a2279">19c0964cddb7c63df15996aea7913089d58a2279</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package before 5.0.1 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: css-what - 5.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-33587 (Medium) detected in multiple libraries - ## CVE-2021-33587 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>css-what-4.0.0.tgz</b>, <b>css-what-2.1.3.tgz</b>, <b>css-what-3.4.2.tgz</b></p></summary>
<p>
<details><summary><b>css-what-4.0.0.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-4.0.0.tgz">https://registry.npmjs.org/css-what/-/css-what-4.0.0.tgz</a></p>
<p>Path to dependency file: webpack-mpa-next/package.json</p>
<p>Path to vulnerable library: webpack-mpa-next/node_modules/css-what</p>
<p>
Dependency Hierarchy:
- svgo-2.3.0.tgz (Root Library)
- css-select-3.1.2.tgz
- :x: **css-what-4.0.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>css-what-2.1.3.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-2.1.3.tgz">https://registry.npmjs.org/css-what/-/css-what-2.1.3.tgz</a></p>
<p>Path to dependency file: webpack-mpa-next/package.json</p>
<p>Path to vulnerable library: webpack-mpa-next/node_modules/css-what</p>
<p>
Dependency Hierarchy:
- spritesh-1.2.1.tgz (Root Library)
- cheerio-0.20.0.tgz
- css-select-1.2.0.tgz
- :x: **css-what-2.1.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>css-what-3.4.2.tgz</b></p></summary>
<p>a CSS selector parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz">https://registry.npmjs.org/css-what/-/css-what-3.4.2.tgz</a></p>
<p>Path to dependency file: webpack-mpa-next/package.json</p>
<p>Path to vulnerable library: webpack-mpa-next/node_modules/css-what</p>
<p>
Dependency Hierarchy:
- critical-3.1.0.tgz (Root Library)
- postcss-image-inliner-4.0.4.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **css-what-3.4.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/scriptex/webpack-mpa-next/commit/19c0964cddb7c63df15996aea7913089d58a2279">19c0964cddb7c63df15996aea7913089d58a2279</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The css-what package before 5.0.1 for Node.js does not ensure that attribute parsing has Linear Time Complexity relative to the size of the input.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33587>CVE-2021-33587</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33587</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: css-what - 5.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries css what tgz css what tgz css what tgz css what tgz a css selector parser library home page a href path to dependency file webpack mpa next package json path to vulnerable library webpack mpa next node modules css what dependency hierarchy svgo tgz root library css select tgz x css what tgz vulnerable library css what tgz a css selector parser library home page a href path to dependency file webpack mpa next package json path to vulnerable library webpack mpa next node modules css what dependency hierarchy spritesh tgz root library cheerio tgz css select tgz x css what tgz vulnerable library css what tgz a css selector parser library home page a href path to dependency file webpack mpa next package json path to vulnerable library webpack mpa next node modules css what dependency hierarchy critical tgz root library postcss image inliner tgz svgo tgz css select tgz x css what tgz vulnerable library found in head commit a href vulnerability details the css what package before for node js does not ensure that attribute parsing has linear time complexity relative to the size of the input publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution css what step up your open source security game with whitesource
| 0
|
22,622
| 31,845,738,440
|
IssuesEvent
|
2023-09-14 19:47:15
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
[pkg/ottl] Support interacting with timestamps in at different precisions
|
enhancement priority:p2 processor/transform pkg/ottl never stale
|
### Component(s)
pkg/ottl, processor/transform
### Is your feature request related to a problem? Please describe.
Currently, the OTTL only supports getting and setting timestamps with nanosecond precision. While this is semantically correct with the OTLP proto, there are times when interacting at such a specific precision is unnecessary and causes un-ergonomic statements.
For example, when comparing the duration of a trace (supported via math operations added via https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/15675), comparing the result in nanoseconds seems excessive:
```yaml
transform:
traces:
statements:
- set(attributes["tp_duration"], end_time_unix_nano - start_time_unix_nano)
- set(attributes["time_bucket"], "fast") where attributes["tp_duration"] < 10000000000
```
Another use case where adjusting timestamps will be necessary is when adjusting for clock skew.
### Describe the solution you'd like
This could be achieved via any of the following options (non-exhaustive)
- ~precision-specific functions such as `seconds(nano-timestamp int64)`, `msec(nano-timestamp int64)`, `nsec(nano-timestamp int64)`, etc.~
- ~A since time-precision function such as `precision(nano-timestamp int64, precision string/ENUM)`~
- ~A generic rounding function that can round any int64.~
- ~Adding more context accessors that know how to Get and Set using the specific associated precision.~
@bogdandrutu [had the idea](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/16359#discussion_r1026761436) to change the time accessors in the ottl contexts to return and accept `time.Time` values instead of int64. This has the advantage of standardizing how time is returned and set and should solve any potential problems when setting a time with a different precision. It will also open the doors for more complex time transformations such as [timezone manipulations](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/14142)
In order to support this change the following items must be addressed:
- [x] #22007
- [x] #22015
- [x] #22008
- [x] #22713
- [x] #22009
- [x] #24686
- [x] #22010
### Describe alternatives you've considered
_No response_
### Additional context
We will need to be careful not to break any usage of `set` and a timestamp field. At the moment all timestamp Setters expect an int64 that is representative of nanoseconds since epoch time. If we create any functions that change the precision, we need to help users not break their timestamps if they try to use them in `set`.
Related to https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/12974
Related to https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/14142
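A quick sketch of why nanosecond-precision comparisons read poorly, and what precision-aware helpers buy you (Python used purely for illustration; OTTL itself is implemented in Go, and these helper names are hypothetical, not part of the OTTL grammar):

```python
NS_PER_SEC = 1_000_000_000

def duration_ns(start_ns: int, end_ns: int) -> int:
    """Span duration in nanoseconds, matching the OTLP int64 fields."""
    return end_ns - start_ns

def ns_to_seconds(ns: int) -> float:
    """Re-express a nanosecond count at second precision."""
    return ns / NS_PER_SEC

# A 2.5-second span expressed as nanoseconds since epoch:
start = 1_700_000_000_000_000_000
end = start + 2_500_000_000

# The raw comparison the issue calls excessive:
is_fast_raw = duration_ns(start, end) < 10_000_000_000  # "< 10 s", in ns

# The same check at second precision reads far more naturally:
is_fast = ns_to_seconds(duration_ns(start, end)) < 10
```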
|
1.0
|
[pkg/ottl] Support interacting with timestamps in at different precisions - ### Component(s)
pkg/ottl, processor/transform
### Is your feature request related to a problem? Please describe.
Currently, the OTTL only supports getting and setting timestamps with nanosecond precision. While this is semantically correct with the OTLP proto, there are times when interacting at such a specific precision is unnecessary and causes un-ergonomic statements.
For example, when comparing the duration of a trace (supported via math operations added via https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/15675), comparing the result in nanoseconds seems excessive:
```yaml
transform:
traces:
statements:
- set(attributes["tp_duration"], end_time_unix_nano - start_time_unix_nano)
- set(attributes["time_bucket"], "fast") where attributes["tp_duration"] < 10000000000
```
Another use case where adjusting timestamps will be necessary is when adjusting for clock skew.
### Describe the solution you'd like
This could be achieved via any of the following options (non-exhaustive)
- ~precision-specific functions such as `seconds(nano-timestamp int64)`, `msec(nano-timestamp int64)`, `nsec(nano-timestamp int64)`, etc.~
- ~A since time-precision function such as `precision(nano-timestamp int64, precision string/ENUM)`~
- ~A generic rounding function that can round any int64.~
- ~Adding more context accessors that know how to Get and Set using the specific associated precision.~
@bogdandrutu [had the idea](https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/16359#discussion_r1026761436) to change the time accessors in the ottl contexts to return and accept `time.Time` values instead of int64. This has the advantage of standardizing how time is returned and set and should solve any potential problems when setting a time with a different precision. It will also open the doors for more complex time transformations such as [timezone manipulations](https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/14142)
In order to support this change the following items must be addressed:
- [x] #22007
- [x] #22015
- [x] #22008
- [x] #22713
- [x] #22009
- [x] #24686
- [x] #22010
### Describe alternatives you've considered
_No response_
### Additional context
We will need to be careful not to break any usage of `set` and a timestamp field. At the moment all timestamp Setters expect an int64 that is representative of nanoseconds since epoch time. If we create any functions that change the precision, we need to help users not break their timestamps if they try to use them in `set`.
Related to https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/12974
Related to https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/14142
|
process
|
support interacting with timestamps in at different precisions component s pkg ottl processor transform is your feature request related to a problem please describe currently the ottl only supports getting and setting timestamps with nanosecond precision while this is semantically correct with the otlp proto there are times when interacting at such a specific precision is unnecessary and cause un ergonomic statements for example when comparing the duration of a trace supported via math operations added via comparing the result in nanoseconds seems excessive yaml transform traces statements set attributes end time unix nano start time unix nano set attributes fast where attributes another use case where adjusting timestamps will be necessary is when adjusting for clock skew describe the solution you d like this could be achieved via any of the following options non exhaustive precision specific functions such as seconds nano timestamp msec nano timestamp nsec nano timestamp etc a since time precision function such as precision nano timestamp precision string enum a generic rounding function that can round any adding more context accessors that know how to get and set using the specific associated precision bogdandrutu to change the time accessors in the ottl contexts to return and accept time time values instead of this has the advantage of standardizing how time is returned and set and should solve any potential problems when setting a time with a different precision it will also open the doors for more complex time transformations such as in order to support this change the following items must be addressed describe alternatives you ve considered no response additional context we will need to be careful not to break any usage of set and a timestamp field at the moment all timestamp setters expect an that is representative of nanoseconds since epoch time if we create any functions that change the precision we need to help users not break their timestamps if they 
try to use them in set related to related to
| 1
|
140,371
| 18,901,263,255
|
IssuesEvent
|
2021-11-16 01:21:44
|
snowdensb/questdb
|
https://api.github.com/repos/snowdensb/questdb
|
opened
|
CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz
|
security vulnerability
|
## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: questdb/ui/package.json</p>
<p>Path to vulnerable library: questdb/ui/node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- docsearch.js-2.6.3.tgz (Root Library)
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"json-schema","packageVersion":"0.2.3","packageFilePaths":["/ui/package.json"],"isTransitiveDependency":true,"dependencyTree":"docsearch.js:2.6.3;request:2.88.2;http-signature:1.2.0;jsprim:1.4.1;json-schema:0.2.3","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3918","vulnerabilityDetails":"json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes (\u0027Prototype Pollution\u0027)","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-3918 (High) detected in json-schema-0.2.3.tgz - ## CVE-2021-3918 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json-schema-0.2.3.tgz</b></p></summary>
<p>JSON Schema validation and specifications</p>
<p>Library home page: <a href="https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz">https://registry.npmjs.org/json-schema/-/json-schema-0.2.3.tgz</a></p>
<p>Path to dependency file: questdb/ui/package.json</p>
<p>Path to vulnerable library: questdb/ui/node_modules/json-schema/package.json</p>
<p>
Dependency Hierarchy:
- docsearch.js-2.6.3.tgz (Root Library)
- request-2.88.2.tgz
- http-signature-1.2.0.tgz
- jsprim-1.4.1.tgz
- :x: **json-schema-0.2.3.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes ('Prototype Pollution')
<p>Publish Date: 2021-11-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918>CVE-2021-3918</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"json-schema","packageVersion":"0.2.3","packageFilePaths":["/ui/package.json"],"isTransitiveDependency":true,"dependencyTree":"docsearch.js:2.6.3;request:2.88.2;http-signature:1.2.0;jsprim:1.4.1;json-schema:0.2.3","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-3918","vulnerabilityDetails":"json-schema is vulnerable to Improperly Controlled Modification of Object Prototype Attributes (\u0027Prototype Pollution\u0027)","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3918","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in json schema tgz cve high severity vulnerability vulnerable library json schema tgz json schema validation and specifications library home page a href path to dependency file questdb ui package json path to vulnerable library questdb ui node modules json schema package json dependency hierarchy docsearch js tgz root library request tgz http signature tgz jsprim tgz x json schema tgz vulnerable library found in base branch master vulnerability details json schema is vulnerable to improperly controlled modification of object prototype attributes prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree docsearch js request http signature jsprim json schema isminimumfixversionavailable false basebranches vulnerabilityidentifier cve vulnerabilitydetails json schema is vulnerable to improperly controlled modification of object prototype attributes pollution vulnerabilityurl
| 0
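The CVE-2021-3918 record above concerns prototype pollution in the JavaScript `json-schema` package. As a hedged illustration only (this is not the json-schema code, and Python has no prototype chain), the same failure mode — attacker-controlled keys writing through to shared state during a recursive merge — can be sketched in Python with a deep-merge that mutates a shared defaults dict:

```python
# Hypothetical sketch: a deep-merge that writes through to shared state,
# so one caller's input leaks into every later caller (pollution-style bug).

SHARED_DEFAULTS = {"admin": False}

def unsafe_merge(target, source):
    """Recursively copy source into target without isolating shared state."""
    for key, value in source.items():
        if isinstance(value, dict) and isinstance(target.get(key), dict):
            unsafe_merge(target[key], value)
        else:
            target[key] = value
    return target

def make_config(overrides):
    # BUG: mutates SHARED_DEFAULTS instead of merging into a fresh copy
    return unsafe_merge(SHARED_DEFAULTS, overrides)

make_config({"admin": True})     # "attacker" input
print(SHARED_DEFAULTS["admin"])  # shared defaults are now polluted: True
```

The fix, as with the patched json-schema, is to isolate state (merge into a copy) and refuse dangerous keys.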
|
55,679
| 13,649,049,377
|
IssuesEvent
|
2020-09-26 12:32:32
|
nanocurrency/nano-node
|
https://api.github.com/repos/nanocurrency/nano-node
|
opened
|
clang/boost 1.73 compile error on case-insensitive file systems
|
build-error
|
Here's a cute problem when I compile the node with the latest clang version/boost 1.73 on macOS.
So `boost/config/select_stdlib_config.hpp` contains this:
```
#if defined(__cplusplus) && defined(__has_include)
# if __has_include(<version>)
# include <version>
# else
# include <cstddef>
# endif
#elif defined(__cplusplus)
# include <cstddef>
#else
# include <stddef.h>
#endif
```
where `include <version>` picks up... `miniupnp/miniupnpc/VERSION` 🙄 🤣 This happens because APFS is case-insensitive.
Renaming that file to VERSION_ or whatever "fixes" the issue, but surely there must be a better way.
|
1.0
|
clang/boost 1.73 compile error on case-insensitive file systems - Here's a cute problem when I compile the node with the latest clang version/boost 1.73 on macOS.
So `boost/config/select_stdlib_config.hpp` contains this:
```
#if defined(__cplusplus) && defined(__has_include)
# if __has_include(<version>)
# include <version>
# else
# include <cstddef>
# endif
#elif defined(__cplusplus)
# include <cstddef>
#else
# include <stddef.h>
#endif
```
where `include <version>` picks up... `miniupnp/miniupnpc/VERSION` 🙄 🤣 This happens because APFS is case-insensitive.
Renaming that file to VERSION_ or whatever "fixes" the issue, but surely there must be a better way.
|
non_process
|
clang boost compile error on case insensitive file systems here s a cute problem when i compile the node with the latest clang version boost on macos so boost config select stdlib config hpp contains this if defined cplusplus defined has include if has include include else include endif elif defined cplusplus include else include endif where include picks up miniupnp miniupnpc version 🙄 🤣 this happens because apfs is case insensitive renaming that file to version or whatever fixes the issue but surely there must be a better way
| 0
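The nano-node record above hinges on `#include <version>` resolving to a file literally named `VERSION` on a case-insensitive filesystem. A small probe for that filesystem property can be sketched (illustrative only, not part of the nano-node build; the helper name is made up):

```python
# Sketch: detect whether the current directory's filesystem treats file
# names case-insensitively -- the condition under which <version> can
# accidentally resolve to miniupnp's VERSION file.
import os
import tempfile

def filesystem_is_case_insensitive(path="."):
    # Create a file with an uppercase prefix; tempfile's random suffix
    # is lowercase, so lowering the whole name changes only the prefix.
    fd, probe = tempfile.mkstemp(prefix="CaseProbe", dir=path)
    os.close(fd)
    try:
        lowered = os.path.basename(probe).lower()
        # On a case-insensitive FS the lowercased name still resolves.
        return os.path.exists(os.path.join(path, lowered))
    finally:
        os.remove(probe)

print(filesystem_is_case_insensitive())  # True on default APFS/NTFS, False on ext4
```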
|
20,928
| 27,773,522,697
|
IssuesEvent
|
2023-03-16 15:48:00
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
closed
|
Input / output validation for calcfunctions and workfunctions
|
requires discussion type/feature request topic/engine topic/processes
|
Since we're dropping py2 support in #3566, we can make use of glorious new features.
One possibility would be to add (optional) input / output validation for calcfunctions and workfunctions through _type hints_.
Example:
```python
from aiida import orm
from aiida.engine import calcfunction
@calcfunction
def add(x: orm.Float, y: orm.Float) -> orm.Float:
return x + y
```
Should be quite straightforward to implement. Opinions on whether this would be desirable?
|
1.0
|
Input / output validation for calcfunctions and workfunctions - Since we're dropping py2 support in #3566, we can make use of glorious new features.
One possibility would be to add (optional) input / output validation for calcfunctions and workfunctions through _type hints_.
Example:
```python
from aiida import orm
from aiida.engine import calcfunction
@calcfunction
def add(x: orm.Float, y: orm.Float) -> orm.Float:
return x + y
```
Should be quite straightforward to implement. Opinions on whether this would be desirable?
|
process
|
input output validation for calcfunctions and workfunctions since we re dropping support in we can make use of glorious new features one possibility would be to add optional input output validation for calcfunctions and workfunctions through type hints example python from aiida import orm from aiida engine import calcfunction calcfunction def add x orm float y orm float orm float return x y should be quite straightforward to implement opinions on whether this would be desirable
| 1
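The type-hint validation proposed in the aiida-core record above can be sketched in plain Python, without AiiDA's `orm` types (assumption: names here are illustrative, and only simple concrete type hints are handled — not `Optional`, unions, or generics):

```python
# Sketch: a decorator that validates arguments and the return value
# against the decorated function's type hints at call time.
import functools
import inspect
from typing import get_type_hints

def validated(func):
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            if name in hints and not isinstance(value, hints[name]):
                raise TypeError(f"{name} must be {hints[name].__name__}")
        result = func(*args, **kwargs)
        if "return" in hints and not isinstance(result, hints["return"]):
            raise TypeError(f"return value must be {hints['return'].__name__}")
        return result

    return wrapper

@validated
def add(x: float, y: float) -> float:
    return x + y
```

A `calcfunction` decorator could apply the same check before handing inputs to the workflow engine.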
|
539,767
| 15,794,617,106
|
IssuesEvent
|
2021-04-02 11:23:52
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
intel_adsp_cavs15: running testcases failed tests/kernel/workq/work on adsp
|
bug priority: low
|
**Describe the bug**
tests/kernel/workq/work run failed on adsp
**To Reproduce**
Steps to reproduce the behavior:
1. twister -p intel_adsp_cavs15 --device-testing -T tests/kernel/workq/work --west-flash="/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/flash.sh,/home/ztest/work/zephyrproject/modules/audio/sof/keys/otc_private_key.pem,/home/ztest/work/zephyrproject/modules/audio/sof/zephyr/ext/rimage/config,/home/ztest/work/zephyrproject/modules/audio/sof/zephyr/ext/rimage/build/rimage" --device-serial-pty="/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/adsplog.py,--no-history"
2. See error
**Expected behavior**
testcases can pass.
**Logs and console output**
START - test_smp_running_cancel
PASS - test_smp_running_cancel in 0.101 seconds
...................................................................................................................................................................................................................
START - test_drain_empty
PASS - test_drain_empty in 0.1 seconds
...................................................................................................................................................................................................................
START - test_1cpu_drain_wait
PASS - test_1cpu_drain_wait in 0.202 seconds
...................................................................................................................................................................................................................
START - test_1cpu_plugged_drain
PASS - test_1cpu_plugged_drain in 0.102 seconds
...................................................................................................................................................................................................................
START - test_1cpu_basic_schedule
PASS - test_1cpu_basic_schedule in 0.102 seconds
...................................................................................................................................................................................................................
**START - test_1cpu_basic_schedule_running
ASSERTION FAIL [arch_mem_coherent(to)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:90
@ WEST_TOPDIR/zephyr/lib/os/assert.c:45**
**Environment (please complete the following information):**
- OS: Fedora28
- Toolchain: Zephyr-sdk-0.12.3
- Commit id: 59a3def7562e9b4608
|
1.0
|
intel_adsp_cavs15: running testcases failed tests/kernel/workq/work on adsp - **Describe the bug**
tests/kernel/workq/work run failed on adsp
**To Reproduce**
Steps to reproduce the behavior:
1. twister -p intel_adsp_cavs15 --device-testing -T tests/kernel/workq/work --west-flash="/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/flash.sh,/home/ztest/work/zephyrproject/modules/audio/sof/keys/otc_private_key.pem,/home/ztest/work/zephyrproject/modules/audio/sof/zephyr/ext/rimage/config,/home/ztest/work/zephyrproject/modules/audio/sof/zephyr/ext/rimage/build/rimage" --device-serial-pty="/home/ztest/work/zephyrproject/zephyr/boards/xtensa/intel_adsp_cavs15/tools/adsplog.py,--no-history"
2. See error
**Expected behavior**
testcases can pass.
**Logs and console output**
START - test_smp_running_cancel
PASS - test_smp_running_cancel in 0.101 seconds
...................................................................................................................................................................................................................
START - test_drain_empty
PASS - test_drain_empty in 0.1 seconds
...................................................................................................................................................................................................................
START - test_1cpu_drain_wait
PASS - test_1cpu_drain_wait in 0.202 seconds
...................................................................................................................................................................................................................
START - test_1cpu_plugged_drain
PASS - test_1cpu_plugged_drain in 0.102 seconds
...................................................................................................................................................................................................................
START - test_1cpu_basic_schedule
PASS - test_1cpu_basic_schedule in 0.102 seconds
...................................................................................................................................................................................................................
**START - test_1cpu_basic_schedule_running
ASSERTION FAIL [arch_mem_coherent(to)] @ WEST_TOPDIR/zephyr/kernel/timeout.c:90
@ WEST_TOPDIR/zephyr/lib/os/assert.c:45**
**Environment (please complete the following information):**
- OS: Fedora28
- Toolchain: Zephyr-sdk-0.12.3
- Commit id: 59a3def7562e9b4608
|
non_process
|
intel adsp running testcases failed tests kernel workq work on adsp describe the bug tests kernel workq work run failed on adsp to reproduce steps to reproduce the behavior twister p intel adsp device testing t tests kernel workq work west flash home ztest work zephyrproject zephyr boards xtensa intel adsp tools flash sh home ztest work zephyrproject modules audio sof keys otc private key pem home ztest work zephyrproject modules audio sof zephyr ext rimage config home ztest work zephyrproject modules audio sof zephyr ext rimage build rimage device serial pty home ztest work zephyrproject zephyr boards xtensa intel adsp tools adsplog py no history see error expected behavior testcases can pass logs and console output start test smp running cancel pass test smp running cancel in seconds start test drain empty pass test drain empty in seconds start test drain wait pass test drain wait in seconds start test plugged drain pass test plugged drain in seconds start test basic schedule pass test basic schedule in seconds start test basic schedule running assertion fail west topdir zephyr kernel timeout c west topdir zephyr lib os assert c environment please complete the following information os toolchain zephyr sdk commit id
| 0
|
583,522
| 17,391,155,276
|
IssuesEvent
|
2021-08-02 07:33:37
|
jrsteensen/OpenHornet
|
https://api.github.com/repos/jrsteensen/OpenHornet
|
closed
|
6630S0D-C28-A103 (Radar Altimeter Pot) obsolete
|
Category: MCAD Priority: Normal Status: Available Type: Bug/Obsolesce
|
*Please fill out the issue as completely as possible. Be very specific and take your time. The more effort you put in to fill the issue out completely, the quicker we can fix this or look at adding your requested feature.*
### Summary:
6630S0D-C28-A103 obsolete
## More Information
*Add an "X" in the square brackets to check the applicable checkboxes.*
### Category:
*Check one or more items.*
- [X] MCAD (SolidWorks)
- [ ] ECAD (PCB Design or other electrical hardware)
- [ ] Software - Sketch
- [ ] Software - DCS-BIOS
- [ ] Software - Library
### Type:
*Check one item.*
- [X] Bug
- [ ] Feature Enhancement
- [ ] Maintenance
- [ ] Question
- [ ] Documentation
### Applicable End Item:
*Check one item.*
- [ ] Top Level Assembly
- [ ] Lower Instrument Panel (LIP)
- [ ] Main Instrument Panel (MIP)
- [ ] Left Console
- [X] Right Console
- [ ] Seat
- [ ] Center Tub
- [ ] Flight Stick
- [ ] Throttle
- [ ] General Software
### Associated Filename(s):
*Insert assembly or part file names here, i.e. 123456.sldasm, etc.*
6630S0D-C28-A103
|
1.0
|
6630S0D-C28-A103 (Radar Altimeter Pot) obsolete - *Please fill out the issue as completely as possible. Be very specific and take your time. The more effort you put in to fill the issue out completely, the quicker we can fix this or look at adding your requested feature.*
### Summary:
6630S0D-C28-A103 obsolete
## More Information
*Add an "X" in the square brackets to check the applicable checkboxes.*
### Category:
*Check one or more items.*
- [X] MCAD (SolidWorks)
- [ ] ECAD (PCB Design or other electrical hardware)
- [ ] Software - Sketch
- [ ] Software - DCS-BIOS
- [ ] Software - Library
### Type:
*Check one item.*
- [X] Bug
- [ ] Feature Enhancement
- [ ] Maintenance
- [ ] Question
- [ ] Documentation
### Applicable End Item:
*Check one item.*
- [ ] Top Level Assembly
- [ ] Lower Instrument Panel (LIP)
- [ ] Main Instrument Panel (MIP)
- [ ] Left Console
- [X] Right Console
- [ ] Seat
- [ ] Center Tub
- [ ] Flight Stick
- [ ] Throttle
- [ ] General Software
### Associated Filename(s):
*Insert assembly or part file names here, i.e. 123456.sldasm, etc.*
6630S0D-C28-A103
|
non_process
|
radar altimeter pot obsolete please fill out the issue as completely as possible be very specific and take your time the more effort you put in to fill the issue out completely the quicker we can fix this or look at adding your requested feature summary obsolete more information add an x in the square brackets to check the applicable checkboxs category check one or more items mcad solidworks ecad pcb design or other electrical hardware software sketch software dcs bios software library type check one item bug feature enhancement maintenance question documentation applicable end item check one item top level assembly lower instrument panel lip main instrument panel mip left console right console seat center tub flight stick throttle general software associated filename s insert assembly or part file names here i e sldasm etc
| 0
|
2,574
| 5,329,541,355
|
IssuesEvent
|
2017-02-15 15:07:02
|
paulkornikov/Pragonas
|
https://api.github.com/repos/paulkornikov/Pragonas
|
opened
|
Weekly production process for tax-deductible operations
|
a-new feature processus workload III
|
from the beginning of the year up to the current date
|
1.0
|
Weekly production process for tax-deductible operations - from the beginning of the year up to the current date
|
process
|
processus hebdo de production des opés déductibles impôts depuis le début de l année jusqu à la date du jour
| 1
|
14,211
| 17,110,603,590
|
IssuesEvent
|
2021-07-10 07:49:59
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
HTML bundles must only contain one asset
|
:bug: Bug :heavy_check_mark: Confirmed Bug Bundler HTML Preprocessing ✨ Parcel 2
|
<!---
Thanks for filing an issue 😄 ! Before you submit, please read the following:
Search open/closed issues before submitting since someone might have asked the same thing before!
-->
# ❔ Question
Have there been any changes to parcel from 1->2 which disable usage of different kinds of file extensions in one file?
Maybe somebody knows what exactly this message wants to tell me so I can change it? Unsure if this is a bug or just me.
## 🔦 Context
My Project uses pug and ES6 imports.
I would get you more info but "--log-level verbose" gives me the same error using this:
`parcel index.pug --log-level verbose`
V1 compiled fine; I changed to V2 to get ES6 imports working, because `script(defer='', type="module", src='/js/main.js')` seems to not be loaded correctly in the browser because of parcelRequire.
I tried streamlining so I only use one kind of asset per type (sass => scss, for example), but it didn't change the error for me.
## 💻 Code Sample
```
ben@EdenTheFourthVII:~/Projekte/Kunde/Heimatvoll/Server2/frontend$ parcel index.pug
Server running at http://localhost:1234
🚨 @parcel/packager-html: HTML bundles must only contain one asset
AssertionError [ERR_ASSERTION]: HTML bundles must only contain one asset
at Object.package (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules/@parcel/packager-html/lib/H
TMLPackager.js:34:21)
at PackagerRunner.package (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules/@parcel/core/lib/Pa
ckagerRunner.js:214:36)
at async PackagerRunner.getBundleResult (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules/@parc
el/core/lib/PackagerRunner.js:182:20)
at async PackagerRunner.packageAndWriteBundle (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules
/@parcel/core/lib/PackagerRunner.js:151:9)
at async Child.handleRequest (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules/@parcel/workers/
lib/child.js:162:9)
```
## 🌍 Your Environment
<!--- Include as many relevant details about the environment you are using -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 2.0.0-alpha.3.2
| Node | 12.10.0
| npm/Yarn | npm@6.10.3
| Operating System | KDE Neon (User:18)
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
1.0
|
HTML bundles must only contain one asset - <!---
Thanks for filing an issue 😄 ! Before you submit, please read the following:
Search open/closed issues before submitting since someone might have asked the same thing before!
-->
# ❔ Question
Have there been any changes to parcel from 1->2 which disable usage of different kinds of file extensions in one file?
Maybe somebody knows what exactly this message wants to tell me so I can change it? Unsure if this is a bug or just me.
## 🔦 Context
My Project uses pug and ES6 imports.
I would get you more info but "--log-level verbose" gives me the same error using this:
`parcel index.pug --log-level verbose`
V1 compiled fine; I changed to V2 to get ES6 imports working, because `script(defer='', type="module", src='/js/main.js')` seems to not be loaded correctly in the browser because of parcelRequire.
I tried streamlining so I only use one kind of asset per type (sass => scss, for example), but it didn't change the error for me.
## 💻 Code Sample
```
ben@EdenTheFourthVII:~/Projekte/Kunde/Heimatvoll/Server2/frontend$ parcel index.pug
Server running at http://localhost:1234
🚨 @parcel/packager-html: HTML bundles must only contain one asset
AssertionError [ERR_ASSERTION]: HTML bundles must only contain one asset
at Object.package (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules/@parcel/packager-html/lib/H
TMLPackager.js:34:21)
at PackagerRunner.package (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules/@parcel/core/lib/Pa
ckagerRunner.js:214:36)
at async PackagerRunner.getBundleResult (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules/@parc
el/core/lib/PackagerRunner.js:182:20)
at async PackagerRunner.packageAndWriteBundle (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules
/@parcel/core/lib/PackagerRunner.js:151:9)
at async Child.handleRequest (/home/ben/.nvm/versions/node/v12.10.0/lib/node_modules/parcel/node_modules/@parcel/workers/
lib/child.js:162:9)
```
## 🌍 Your Environment
<!--- Include as many relevant details about the environment you are using -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 2.0.0-alpha.3.2
| Node | 12.10.0
| npm/Yarn | npm@6.10.3
| Operating System | KDE Neon (User:18)
<!-- Love parcel? Please consider supporting our collective:
👉 https://opencollective.com/parcel/donate -->
|
process
|
html bundles must only contain one asset thanks for filing an issue 😄 before you submit please read the following search open closed issues before submitting since someone might have asked the same thing before ❔ question has there been any changes to parcel from which disable usage of different kind of file extensions in one file maybe somebody know what exactly this message wants to tell me so i can change it unsure if this is a bug or just me 🔦 context my project uses pug and imports i would get you more info but log level verbose gives me the same error using this parcel index pug log level verbose compiled fine i changed to to get imports working because script defer type module src js main js seem to not be loaded correctly in broswer because of pracelrequire i tried streamlining so i only use one kind of asset per type sass scss for ex didnt change the error for me 💻 code sample ben edenthefourthvii projekte kunde heimatvoll frontend parcel index pug server running at 🚨 parcel packager html html bundles must only contain one asset assertionerror html bundles must only contain one asset at object package home ben nvm versions node lib node modules parcel node modules parcel packager html lib h tmlpackager js at packagerrunner package home ben nvm versions node lib node modules parcel node modules parcel core lib pa ckagerrunner js at async packagerrunner getbundleresult home ben nvm versions node lib node modules parcel node modules parc el core lib packagerrunner js at async packagerrunner packageandwritebundle home ben nvm versions node lib node modules parcel node modules parcel core lib packagerrunner js at async child handlerequest home ben nvm versions node lib node modules parcel node modules parcel workers lib child js 🌍 your environment software version s parcel alpha node npm yarn npm operating system kde neon user love parcel please consider supporting our collective 👉
| 1
|
548,527
| 16,066,147,678
|
IssuesEvent
|
2021-04-23 19:26:48
|
googleapis/python-logging
|
https://api.github.com/repos/googleapis/python-logging
|
opened
|
tests.system.test_system.TestLogging: test_log_handler_async failed
|
flakybot: issue priority: p1 type: bug
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 206f522a5f3ea2adf863eb5390fbe1a2bd6f66f2
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/df192706-cae2-4bdd-9ffe-5b14d316ae5a), [Sponge](http://sponge2/df192706-cae2-4bdd-9ffe-5b14d316ae5a)
status: failed
<details><summary>Test output</summary><br><pre>self = <test_system.TestLogging testMethod=test_log_handler_async>
def test_log_handler_async(self):
LOG_MESSAGE = "It was the worst of times"
handler_name = self._logger_name("handler_async")
handler = CloudLoggingHandler(Config.CLIENT, name=handler_name)
# only create the logger to delete, hidden otherwise
logger = Config.CLIENT.logger(handler_name)
self.to_delete.append(logger)
cloud_logger = logging.getLogger(handler.name)
cloud_logger.addHandler(handler)
cloud_logger.warn(LOG_MESSAGE)
handler.flush()
> entries = _list_entries(logger)
tests/system/test_system.py:293:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/system/test_system.py:82: in _list_entries
return outer(logger)
.nox/system-3-8/lib/python3.8/site-packages/test_utils/retry.py:102: in wrapped_function
return to_wrap(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<google.cloud.logging_v2.logger.Logger object at 0x7feb10373f40>,)
kwargs = {}, tries = 6, result = [], delay = 64
msg = '_has_entries. Trying again in 64 seconds...'
@wraps(to_wrap)
def wrapped_function(*args, **kwargs):
tries = 0
while tries < self.max_tries:
result = to_wrap(*args, **kwargs)
if self.result_predicate(result):
return result
delay = self.delay * self.backoff ** tries
msg = "%s. Trying again in %d seconds..." % (
self.result_predicate.__name__,
delay,
)
self.logger(msg)
time.sleep(delay)
tries += 1
> raise BackoffFailed()
E test_utils.retry.BackoffFailed
.nox/system-3-8/lib/python3.8/site-packages/test_utils/retry.py:172: BackoffFailed</pre></details>
|
1.0
|
tests.system.test_system.TestLogging: test_log_handler_async failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 206f522a5f3ea2adf863eb5390fbe1a2bd6f66f2
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/df192706-cae2-4bdd-9ffe-5b14d316ae5a), [Sponge](http://sponge2/df192706-cae2-4bdd-9ffe-5b14d316ae5a)
status: failed
<details><summary>Test output</summary><br><pre>self = <test_system.TestLogging testMethod=test_log_handler_async>
def test_log_handler_async(self):
LOG_MESSAGE = "It was the worst of times"
handler_name = self._logger_name("handler_async")
handler = CloudLoggingHandler(Config.CLIENT, name=handler_name)
# only create the logger to delete, hidden otherwise
logger = Config.CLIENT.logger(handler_name)
self.to_delete.append(logger)
cloud_logger = logging.getLogger(handler.name)
cloud_logger.addHandler(handler)
cloud_logger.warn(LOG_MESSAGE)
handler.flush()
> entries = _list_entries(logger)
tests/system/test_system.py:293:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/system/test_system.py:82: in _list_entries
return outer(logger)
.nox/system-3-8/lib/python3.8/site-packages/test_utils/retry.py:102: in wrapped_function
return to_wrap(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (<google.cloud.logging_v2.logger.Logger object at 0x7feb10373f40>,)
kwargs = {}, tries = 6, result = [], delay = 64
msg = '_has_entries. Trying again in 64 seconds...'
@wraps(to_wrap)
def wrapped_function(*args, **kwargs):
tries = 0
while tries < self.max_tries:
result = to_wrap(*args, **kwargs)
if self.result_predicate(result):
return result
delay = self.delay * self.backoff ** tries
msg = "%s. Trying again in %d seconds..." % (
self.result_predicate.__name__,
delay,
)
self.logger(msg)
time.sleep(delay)
tries += 1
> raise BackoffFailed()
E test_utils.retry.BackoffFailed
.nox/system-3-8/lib/python3.8/site-packages/test_utils/retry.py:172: BackoffFailed</pre></details>
|
non_process
|
tests system test system testlogging test log handler async failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output self def test log handler async self log message it was the worst of times handler name self logger name handler async handler cloudlogginghandler config client name handler name only create the logger to delete hidden otherwise logger config client logger handler name self to delete append logger cloud logger logging getlogger handler name cloud logger addhandler handler cloud logger warn log message handler flush entries list entries logger tests system test system py tests system test system py in list entries return outer logger nox system lib site packages test utils retry py in wrapped function return to wrap args kwargs args kwargs tries result delay msg has entries trying again in seconds wraps to wrap def wrapped function args kwargs tries while tries self max tries result to wrap args kwargs if self result predicate result return result delay self delay self backoff tries msg s trying again in d seconds self result predicate name delay self logger msg time sleep delay tries raise backofffailed e test utils retry backofffailed nox system lib site packages test utils retry py backofffailed
| 0
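The backoff loop visible in the python-logging traceback above can be sketched as a standalone decorator (assumption: simplified; the real helper lives in `test_utils.retry`, and the sleep is injected here so the sketch runs instantly):

```python
# Sketch: retry a callable until a predicate accepts its result,
# growing the delay geometrically, then give up with BackoffFailed.
import time

class BackoffFailed(Exception):
    pass

def retry_result(predicate, max_tries=5, delay=0.01, backoff=2, sleep=time.sleep):
    def decorator(func):
        def wrapped(*args, **kwargs):
            for attempt in range(max_tries):
                result = func(*args, **kwargs)
                if predicate(result):
                    return result
                sleep(delay * backoff ** attempt)  # geometric backoff
            raise BackoffFailed()
        return wrapped
    return decorator

calls = {"n": 0}

@retry_result(predicate=lambda entries: bool(entries), sleep=lambda _: None)
def list_entries():
    # Simulates eventually-consistent log listing: empty twice, then data.
    calls["n"] += 1
    return ["entry"] if calls["n"] >= 3 else []

print(list_entries())  # ['entry'] after two empty attempts
```

The failing system test is the case where the predicate never accepts: every attempt returns `[]`, so the decorator raises `BackoffFailed`.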
|
15,872
| 20,036,670,574
|
IssuesEvent
|
2022-02-02 12:39:50
|
prometheus-community/windows_exporter
|
https://api.github.com/repos/prometheus-community/windows_exporter
|
closed
|
windows_exporter failure windows_process_thread_count help text
|
collector/process
|
I just test-updated to windows_exporter version `0.17.1` and collecting metrics now fails with this error:
```text
> curl.exe http://localhost:9182/metrics
An error has occurred while serving metrics:
2 error(s) occurred:
* collected metric windows_process_thread_count label:<name:"creating_process_id" value:"1348" > label:<name:"process" value:"sqlbrowser" > label:<name:"process_id" value:"6732" > gauge:<value:7 > has help "Number of threads currently active in this process." but should have "Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread."
* collected metric windows_process_thread_count label:<name:"creating_process_id" value:"1348" > label:<name:"process" value:"sqlwriter" > label:<name:"process_id" value:"7012" > gauge:<value:2 > has help "Number of threads currently active in this process." but should have "Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread."
```
Removing the `--collector.process.whitelist` parameter causes the error to return for many (all?) process names with the same message.
I searched the repository for the mentioned text, it is contained in two different collectors:
<https://github.com/prometheus-community/windows_exporter/search?q=Number+of+threads+currently+active+in+this+process>
however `terminal_services` is not active in my case. I then looked at revision `0.17.1`:
The first mentioned HELP text is in fact used by the process collector:
<https://github.com/prometheus-community/windows_exporter/blob/d9f4264fc4ee183462fc23032ce9b60444411b87/collector/process.go#L123-L126>
This seems to somehow collide with this metric? Even though this collector is not enabled?
<https://github.com/prometheus-community/windows_exporter/blob/d9f4264fc4ee183462fc23032ce9b60444411b87/collector/terminal_services.go#L144-L147>
I am unfamiliar with golang and therefore not sure where `Namespace` and `subsystem` are set.
----
The Windows Service is running the exporter as:
```text
"C:\Program Files\windows_exporter\windows_exporter.exe" --log.format logger:eventlog?name=windows_exporter --collectors.enabled [defaults],process,mssql --collector.textfile.directory C:\windows_exporter\textfile_collector --collector.process.whitelist="ax.+|AX.+|sql.+|SQL.+" --collector.service.services-where="Name LIKE 'ax5%'"
```
At the same time as upgrading to 0.17.1 I changed the MSI setup parameter to use `--collectors.enabled [defaults],process,mssql`
I then stopped the service, and started the exporter with all defaults:
```text
C:\Program Files\windows_exporter\
❯ .\windows_exporter.exe
```
Metrics collection is successful ✔
Next I started it as: `.\windows_exporter.exe --collectors.enabled process`
Metrics collection is successful and returns process metrics✔
Next I started it as: `.\windows_exporter.exe --collectors.enabled "[defaults],process"`
Metrics collection is successful and returns process and other metrics ✔
Next test: `.\windows_exporter.exe --collectors.enabled "[defaults],process,mssql"`
Metrics collection is successful and returns process and other metrics ✔
Restarted the exporter as windows service, the error is still reproducible.
Started it from PowerShell with:
```text
.\windows_exporter.exe --collectors.enabled "[defaults],process,mssql" --collector.process.whitelist="ax.+|AX.+|sql.+|SQL.+"
time="2022-01-19T16:10:27+01:00" level=warning msg="No where-clause specified for service collector. This will generate a very large number of metrics!" source="service.go:48"
time="2022-01-19T16:10:27+01:00" level=info msg="Enabled collectors: net, cs, system, logical_disk, process, cpu, textfile, mssql, service, os" source="exporter.go:348"
time="2022-01-19T16:10:27+01:00" level=info msg="Starting windows_exporter (version=0.17.1, branch=heads/tags/v0.17.1, revision=d9f4264fc4ee183462fc23032ce9b60444411b87)" source="exporter.go:400"
time="2022-01-19T16:10:27+01:00" level=info msg="Build context (go=go1.17.5, user=runneradmin@fv-az177-480, date=20220102-09:24:12)" source="exporter.go:401"
time="2022-01-19T16:10:27+01:00" level=info msg="Starting server on :9182" source="exporter.go:404"
time="2022-01-19T16:10:27+01:00" level=info msg="TLS is disabled." source="gokit_adapter.go:38"
```
Metrics collection is successful ✔ (???)
Tried starting with `--log.level=debug` as a Windows service (`"C:\Program Files\windows_exporter\windows_exporter.exe" --log.level=debug --log.format logger:eventlog?name=windows_exporter --collectors.enabled [defaults],process,mssql --collector.textfile.directory C:\windows_exporter\textfile_collector --collector.process.whitelist="ax.+|AX.+|sql.+|SQL.+" --collector.service.services-where="Name LIKE 'ax5%'"`)
but there are no debug logs in the Windows Event Log?
... Investigating...
|
1.0
|
windows_exporter failure windows_process_thread_count help text - I just test-updated to windows_exporter version `0.17.1` and collecting metrics now fails with this error:
```text
> curl.exe http://localhost:9182/metrics
An error has occurred while serving metrics:
2 error(s) occurred:
* collected metric windows_process_thread_count label:<name:"creating_process_id" value:"1348" > label:<name:"process" value:"sqlbrowser" > label:<name:"process_id" value:"6732" > gauge:<value:7 > has help "Number of threads currently active in this process." but should have "Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread."
* collected metric windows_process_thread_count label:<name:"creating_process_id" value:"1348" > label:<name:"process" value:"sqlwriter" > label:<name:"process_id" value:"7012" > gauge:<value:2 > has help "Number of threads currently active in this process." but should have "Number of threads currently active in this process. An instruction is the basic unit of execution in a processor, and a thread is the object that executes instructions. Every running process has at least one thread."
```
Removing the `--collector.process.whitelist` parameter causes the error to return for many (all?) process names with the same message.
I searched the repository for the mentioned text, it is contained in two different collectors:
<https://github.com/prometheus-community/windows_exporter/search?q=Number+of+threads+currently+active+in+this+process>
however `terminal_services` is not active in my case. I then looked at revision `0.17.1`:
The first mentioned HELP text is in fact used by the process collector:
<https://github.com/prometheus-community/windows_exporter/blob/d9f4264fc4ee183462fc23032ce9b60444411b87/collector/process.go#L123-L126>
This seems to somehow collide with this metric, even though that collector is not enabled:
<https://github.com/prometheus-community/windows_exporter/blob/d9f4264fc4ee183462fc23032ce9b60444411b87/collector/terminal_services.go#L144-L147>
I am unfamiliar with golang and therefore not sure where `Namespace` and `subsystem` are set.
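The collision can be reproduced in miniature: Prometheus client libraries reject a scrape in which two collectors emit the same metric name with different help strings. The following is a plain-Python illustration of that consistency rule (a sketch of the behavior only, not the exporter's actual Go registry code):

```python
# Sketch of the help-text consistency rule Prometheus clients enforce:
# every sample collected under one metric name must carry identical help
# text, otherwise the gather step reports errors like the ones above.

def gather(samples):
    """samples: list of (name, help_text, value) tuples.
    Returns an error string for any metric name whose help text
    differs from the first help text observed for that name."""
    seen = {}      # metric name -> first help text observed
    errors = []
    for name, help_text, _value in samples:
        if name not in seen:
            seen[name] = help_text
        elif seen[name] != help_text:
            errors.append(
                f'collected metric {name} has help "{help_text}" '
                f'but should have "{seen[name]}"'
            )
    return errors

# Two collectors exporting the same name with different help -> error,
# mirroring the suspected process vs. terminal_services collision.
errs = gather([
    ("windows_process_thread_count",
     "Number of threads currently active in this process. "
     "An instruction is the basic unit of execution in a processor, ...", 7),
    ("windows_process_thread_count",
     "Number of threads currently active in this process.", 2),
])
```

With identical help strings the same two samples would gather cleanly, which is why the fix is to make both collectors register the name with one shared description.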
----
The Windows Service is running the exporter as:
```text
"C:\Program Files\windows_exporter\windows_exporter.exe" --log.format logger:eventlog?name=windows_exporter --collectors.enabled [defaults],process,mssql --collector.textfile.directory C:\windows_exporter\textfile_collector --collector.process.whitelist="ax.+|AX.+|sql.+|SQL.+" --collector.service.services-where="Name LIKE 'ax5%'"
```
|
process
| 1
|
9,397
| 12,397,113,754
|
IssuesEvent
|
2020-05-20 21:55:49
|
googleapis/google-api-go-client
|
https://api.github.com/repos/googleapis/google-api-go-client
|
closed
|
Requesting new beta version of the google-api-go-client
|
type: process
|
The last published version of the v1beta1 API go client is v0.24.0 on May 11. Recently an API was added to the GKE container cluster create command to support disabling default SNAT on GKE clusters. This requires a new version of the google-api-go-client.
|
1.0
|
process
| 1
|
7,289
| 10,436,614,696
|
IssuesEvent
|
2019-09-17 19:58:49
|
pwittchen/ReactiveNetwork
|
https://api.github.com/repos/pwittchen/ReactiveNetwork
|
opened
|
Release 0.13.0
|
RxJava1.x release process
|
Release notes:
- replacing default protocol HTTP with HTTPS in WalledGardenInternetObservingStrategy - PR #376, issue #323
|
1.0
|
process
| 1
|
70,417
| 9,415,624,128
|
IssuesEvent
|
2019-04-10 13:04:10
|
philippkraft/cmf
|
https://api.github.com/repos/philippkraft/cmf
|
closed
|
Rename misleading connection names
|
C++ documentation enhancement python swig
|
Some of the connection names in cmf are misleading. This issue is the place to collect these connections. If you find some connection name not really fitting (or if you dislike a proposal below), please write it here as a comment.
Starting with cmf 1.4, the new names should be available as alternatives (akin to `from __future__ import`); for cmf 2.0, the new names will be mandatory. For cmf 2.0, a tool to translate old scripts to the new names should be available.
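A transition like this is often implemented by keeping the old name as a deprecation-warning alias of the renamed class. A minimal Python sketch (the names `FluxConnection` and `Connection` are hypothetical, not actual cmf classes):

```python
import warnings

class FluxConnection:
    """New, clearer name for the connection class."""
    def __init__(self, left, right):
        self.left, self.right = left, right

def _deprecated_alias(new_cls, old_name):
    """Build a subclass that behaves like new_cls but warns on use."""
    class _Alias(new_cls):
        def __init__(self, *args, **kwargs):
            warnings.warn(
                f"{old_name} is deprecated; use {new_cls.__name__} instead",
                DeprecationWarning, stacklevel=2)
            super().__init__(*args, **kwargs)
    _Alias.__name__ = old_name
    return _Alias

# Old, misleading name kept available as a warning-emitting alias:
Connection = _deprecated_alias(FluxConnection, "Connection")
```

Because the alias is a subclass, existing `isinstance` checks and old scripts keep working during the 1.4 transition window, and the alias can simply be deleted in 2.0.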
|
1.0
|
non_process
| 0
|
62,976
| 8,650,303,705
|
IssuesEvent
|
2018-11-26 22:05:25
|
pivotal-cf/docs-spring-cloud-dataflow
|
https://api.github.com/repos/pivotal-cf/docs-spring-cloud-dataflow
|
closed
|
How to setup a private domain on `p-dataflow` org applications
|
documentation in progress
|
See Slack thread:
https://pivotal.slack.com/archives/C064Q28L9/p1538597675000100?thread_ts=1538593181.000100&cid=C064Q28L9
It seems that there is a way to use a private domain although this is a configuration at the Operator level. For the `p-dataflow` org they could use the approach documented here to use private domains for service instance backing application routes:
https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#private-domains
This doesn't affect the applications deployed into the user's space via `p-dataflow` service instances though.
|
1.0
|
non_process
| 0
|
118,855
| 10,013,788,535
|
IssuesEvent
|
2019-07-15 15:55:58
|
microsoft/appcenter
|
https://api.github.com/repos/microsoft/appcenter
|
closed
|
System.Net.WebException : POST Failed at Xamarin.UITest.Shared.Http.HttpClient.HandleHttpError
|
bug test
|
**What App Center service does this affect?**
build & test
**Describe the bug**
Hi,
I have the following error and I have no idea what it means. It does not happen in the emulator--only when it runs in appcenter with 'Test on a real device' enabled. Xamarin.forms team has not responded in over a week so I assume they think it's an AppCenter issue. See https://forums.xamarin.com/discussion/158165/post-failed-at-xamarin-uitest-shared-http-httpclient-handlehttperror
Error : AppCenter.UITest.Android.Tests.AppDoesLaunch
System.Net.WebException : POST Failed
at Xamarin.UITest.Shared.Http.HttpClient.HandleHttpError (System.String method, System.Net.Http.HttpResponseMessage response, Xamarin.UITest.Shared.Http.ExceptionPolicy exceptionPolicy) [0x00052] in <18ae7883e2424c558186d1d9edf9f14b>:0
at Xamarin.UITest.Shared.Http.HttpClient.SendData (System.String endpoint, System.String method, System.Net.Http.HttpContent content, Xamarin.UITest.Shared.Http.ExceptionPolicy exceptionPolicy, System.Nullable1[T] timeOut) [0x00123] in <18ae7883e2424c558186d1d9edf9f14b>:0 at Xamarin.UITest.Shared.Http.HttpClient.Post (System.String endpoint, System.String arguments, Xamarin.UITest.Shared.Http.ExceptionPolicy exceptionPolicy, System.Nullable1[T] timeOut) [0x00014] in <18ae7883e2424c558186d1d9edf9f14b>:0
at Xamarin.UITest.Shared.Android.HttpApplicationStarter.Execute (System.String intentJson) [0x00035] in <18ae7883e2424c558186d1d9edf9f14b>:0
at Xamarin.UITest.Shared.Android.AndroidAppLifeCycle.LaunchApp (System.String appPackageName, Xamarin.UITest.Shared.Android.ApkFile testServerApkFile, System.Int32 testServerPort) [0x000a1] in <18ae7883e2424c558186d1d9edf9f14b>:0
at Xamarin.UITest.Shared.Android.AndroidAppLifeCycle.LaunchApp (Xamarin.UITest.Shared.Android.ApkFile appApkFile, Xamarin.UITest.Shared.Android.ApkFile testServerApkFile, System.Int32 testServerPort) [0x00007] in <18ae7883e2424c558186d1d9edf9f14b>:0
at Xamarin.UITest.Android.AndroidApp..ctor (Xamarin.UITest.Configuration.IAndroidAppConfiguration appConfiguration, Xamarin.UITest.Shared.Execution.IExecutor executor) [0x00193] in <18ae7883e2424c558186d1d9edf9f14b>:0
at Xamarin.UITest.Android.AndroidApp..ctor (Xamarin.UITest.Configuration.IAndroidAppConfiguration appConfiguration) [0x00000] in <18ae7883e2424c558186d1d9edf9f14b>:0
at Xamarin.UITest.Configuration.AndroidAppConfigurator.StartApp (Xamarin.UITest.Configuration.AppDataMode appDataMode) [0x00017] in <18ae7883e2424c558186d1d9edf9f14b>:0
at AppCenter.UITest.Android.Tests.SetUp () [0x00010] in :0
at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&)
at System.Reflection.MonoMethod.Invoke (System.Object obj, System.Reflection.BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00032] in <48b95f3df5804531818f80e28ec60191>:0
**To Reproduce**
See full log at https://appcenter.ms/orgs/Collective2/apps/C2-Mobile/test/runs/b5726c2b-881e-4ac7-b89a-c3bcb11341c1
**Expected behavior**
No error
**Screenshots**
n/a
**Desktop (please complete the following information):**
- OS: win10
- Browser chrome
- Version latest
**Smartphone (please complete the following information):**
n/a
**Additional context**
n/a
|
1.0
|
non_process
| 0
|
19,986
| 26,462,582,874
|
IssuesEvent
|
2023-01-16 19:14:13
|
kubernetes-sigs/windows-operational-readiness
|
https://api.github.com/repos/kubernetes-sigs/windows-operational-readiness
|
closed
|
Ability to create and manage host level networking (hcn) rules.
|
kind/feature lifecycle/rotten category/ext.hostprocess
|
Ability to create and manage host level networking (hcn) rules from a Windows hostProcess pod.
|
1.0
|
process
| 1
|
128,112
| 12,360,185,691
|
IssuesEvent
|
2020-05-17 14:15:56
|
introfog/PIE2-Core
|
https://api.github.com/repos/introfog/PIE2-Core
|
opened
|
Add JavaDoc to Polygon class
|
documentation
|
Be sure to write in the documentation that only convex polygons are supported, and that if you specify a non-convex polygon, it will automatically be made convex in the constructor.
|
1.0
|
non_process
| 0
|
126,328
| 17,875,673,971
|
IssuesEvent
|
2021-09-07 03:02:57
|
ConsumerDataStandardsAustralia/standards-maintenance
|
https://api.github.com/repos/ConsumerDataStandardsAustralia/standards-maintenance
|
closed
|
Clarification on consent request scenarios
|
security query answer provided
|
Commonwealth Bank would like to seek clarification and validation from Data61 about consent request scenarios.
Our current understanding:
• Data61 stated that accounts are not part of the consent structure.
General question:
• Can consent be active without any accounts attached (e.g. for customer information only)?
Scenario 1:
• Account API was called for 2 accounts specified in the request
• Consent is valid
• One account is a part of consent
• The other one is not a part of consent
• Does this API return “unauthorised” response, Or; does the API return data for only one account and not indicate that the response is partial?
Scenario 2:
• Account API was called for 2 accounts specified in the request
• Consent is valid
• Both accounts are not part of consent
• Does the API return “unauthorised” response, Or; does the API return an empty data set?
|
True
|
non_process
| 0
|
33,867
| 7,759,936,268
|
IssuesEvent
|
2018-06-01 02:40:16
|
surrsurus/edgequest
|
https://api.github.com/repos/surrsurus/edgequest
|
closed
|
Make loading the config more rust-like
|
enhancement:code eq:core priority:low solved
|
- [ ] `pub fn load(path: &str) -> Config` should probably be something like `pub fn load(path: &str) -> Result<Config, WhateverThisIs>`
|
1.0
|
non_process
| 0
|
296,280
| 25,541,603,666
|
IssuesEvent
|
2022-11-29 15:40:53
|
vegaprotocol/vega
|
https://api.github.com/repos/vegaprotocol/vega
|
opened
|
Implement test coverage for 0038-OLIQ-010, 0038-OLIQ-008
|
feature tests
|
In order to get test coverage for 0038-OLIQ-liquidity_provision_order_type.md we need to cover the following ACs
- [ ] 0038-OLIQ-010
- [ ] 0038-OLIQ-008
|
1.0
|
non_process
| 0
|
14,558
| 17,687,651,390
|
IssuesEvent
|
2021-08-24 05:25:59
|
googleapis/python-storage
|
https://api.github.com/repos/googleapis/python-storage
|
closed
|
`test_bucket_w_retention_period` flakes due to EC
|
api: storage type: process flaky
|
From [this Kokoro failure](https://source.cloud.google.com/results/invocations/a574f4aa-8595-4148-b8b5-9aa894104288/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-storage%2Fpresubmit%2Fsystem-2.7/log):
```python
________________________ test_bucket_w_retention_period ________________________
storage_client = <google.cloud.storage.client.Client object at 0x7f229517cd10>
buckets_to_delete = [<Bucket: w-retention-period-1629442620182>]
blobs_to_delete = [<Blob: w-retention-period-1629442620182, test-blob, 1629442620932109>]
def test_bucket_w_retention_period(
storage_client, buckets_to_delete, blobs_to_delete,
):
period_secs = 10
bucket_name = _helpers.unique_name("w-retention-period")
bucket = _helpers.retry_429_503(storage_client.create_bucket)(bucket_name)
buckets_to_delete.append(bucket)
bucket.retention_period = period_secs
bucket.default_event_based_hold = False
bucket.patch()
assert bucket.retention_period == period_secs
assert isinstance(bucket.retention_policy_effective_time, datetime.datetime)
assert not bucket.default_event_based_hold
assert not bucket.retention_policy_locked
blob_name = "test-blob"
payload = b"DEADBEEF"
blob = bucket.blob(blob_name)
blob.upload_from_string(payload)
blobs_to_delete.append(blob)
other = bucket.get_blob(blob_name)
assert not other.event_based_hold
assert not other.temporary_hold
assert isinstance(other.retention_expiration_time, datetime.datetime)
with pytest.raises(exceptions.Forbidden):
other.delete()
bucket.retention_period = None
bucket.patch()
assert bucket.retention_period is None
assert bucket.retention_policy_effective_time is None
assert not bucket.default_event_based_hold
assert not bucket.retention_policy_locked
other.reload()
assert not other.event_based_hold
assert not other.temporary_hold
> assert other.retention_expiration_time is None
E assert datetime.datetime(2021, 8, 20, 6, 57, 10, 949000, tzinfo=<UTC>) is None
E + where datetime.datetime(2021, 8, 20, 6, 57, 10, 949000, tzinfo=<UTC>) = <Blob: w-retention-period-1629442620182, test-blob, 1629442620932109>.retention_expiration_time
tests/system/test_bucket.py:572: AssertionError
```
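Flakes like this one — asserting on server-side metadata immediately after a mutation — are usually fixed by polling until the expected state appears instead of asserting once. A minimal sketch of such a helper (the `blob.reload()` / `retention_expiration_time` names come from the test above; the helper itself is generic and illustrative, not the repo's actual fix):

```python
import time

def wait_for(predicate, timeout=30.0, interval=1.0, refresh=None):
    """Poll until predicate() is truthy, optionally refreshing state first.

    Returns True if the predicate held within the timeout, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if refresh is not None:
            refresh()  # e.g. blob.reload() to re-fetch server-side metadata
        if predicate():
            return True
        time.sleep(interval)
    return False
```

The failing assertion could then become `assert wait_for(lambda: other.retention_expiration_time is None, refresh=other.reload)`, tolerating the window in which the removed retention policy is still reflected on the object.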
|
1.0
|
`test_bucket_w_retention_period` flakes due to EC - From [this Kokoro failure](https://source.cloud.google.com/results/invocations/a574f4aa-8595-4148-b8b5-9aa894104288/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-storage%2Fpresubmit%2Fsystem-2.7/log):
```python
________________________ test_bucket_w_retention_period ________________________
storage_client = <google.cloud.storage.client.Client object at 0x7f229517cd10>
buckets_to_delete = [<Bucket: w-retention-period-1629442620182>]
blobs_to_delete = [<Blob: w-retention-period-1629442620182, test-blob, 1629442620932109>]
def test_bucket_w_retention_period(
storage_client, buckets_to_delete, blobs_to_delete,
):
period_secs = 10
bucket_name = _helpers.unique_name("w-retention-period")
bucket = _helpers.retry_429_503(storage_client.create_bucket)(bucket_name)
buckets_to_delete.append(bucket)
bucket.retention_period = period_secs
bucket.default_event_based_hold = False
bucket.patch()
assert bucket.retention_period == period_secs
assert isinstance(bucket.retention_policy_effective_time, datetime.datetime)
assert not bucket.default_event_based_hold
assert not bucket.retention_policy_locked
blob_name = "test-blob"
payload = b"DEADBEEF"
blob = bucket.blob(blob_name)
blob.upload_from_string(payload)
blobs_to_delete.append(blob)
other = bucket.get_blob(blob_name)
assert not other.event_based_hold
assert not other.temporary_hold
assert isinstance(other.retention_expiration_time, datetime.datetime)
with pytest.raises(exceptions.Forbidden):
other.delete()
bucket.retention_period = None
bucket.patch()
assert bucket.retention_period is None
assert bucket.retention_policy_effective_time is None
assert not bucket.default_event_based_hold
assert not bucket.retention_policy_locked
other.reload()
assert not other.event_based_hold
assert not other.temporary_hold
> assert other.retention_expiration_time is None
E assert datetime.datetime(2021, 8, 20, 6, 57, 10, 949000, tzinfo=<UTC>) is None
E + where datetime.datetime(2021, 8, 20, 6, 57, 10, 949000, tzinfo=<UTC>) = <Blob: w-retention-period-1629442620182, test-blob, 1629442620932109>.retention_expiration_time
tests/system/test_bucket.py:572: AssertionError
```
|
process
|
test bucket w retention period flakes due to ec from python test bucket w retention period storage client buckets to delete blobs to delete def test bucket w retention period storage client buckets to delete blobs to delete period secs bucket name helpers unique name w retention period bucket helpers retry storage client create bucket bucket name buckets to delete append bucket bucket retention period period secs bucket default event based hold false bucket patch assert bucket retention period period secs assert isinstance bucket retention policy effective time datetime datetime assert not bucket default event based hold assert not bucket retention policy locked blob name test blob payload b deadbeef blob bucket blob blob name blob upload from string payload blobs to delete append blob other bucket get blob blob name assert not other event based hold assert not other temporary hold assert isinstance other retention expiration time datetime datetime with pytest raises exceptions forbidden other delete bucket retention period none bucket patch assert bucket retention period is none assert bucket retention policy effective time is none assert not bucket default event based hold assert not bucket retention policy locked other reload assert not other event based hold assert not other temporary hold assert other retention expiration time is none e assert datetime datetime tzinfo is none e where datetime datetime tzinfo retention expiration time tests system test bucket py assertionerror
| 1
|
4,078
| 7,022,283,474
|
IssuesEvent
|
2017-12-22 09:52:12
|
log2timeline/plaso
|
https://api.github.com/repos/log2timeline/plaso
|
closed
|
log2timeline shouldn't raise when dfvfs can't stat a broken symlink
|
bug preprocessing
|
**Plaso version:**
$ python tools/log2timeline.py --version
plaso - log2timeline version 20171118
**Operating system Plaso is running on:**
# uname -a
Linux vm-1510933385 3.13.0-74-generic #118-Ubuntu SMP Thu Dec 17 22:52:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
**Installation method:**
Gift PPA packages
**Description of problem:**
When running log2timeline.py on a directory, log2timeline.py stops processing after it encounters a broken link, which happens quite often on a mounted filesystem
**Debug output/tracebacks:**
```
# log2timeline.py /home/romaing/fs.plaso /mnt/
Checking availability and versions of dependencies.
[OK]
Source path : /mnt
Source type : directory
Processing started.
Traceback (most recent call last):
File "/usr/bin/log2timeline.py", line 68, in <module>
if not Main():
File "/usr/bin/log2timeline.py", line 54, in Main
tool.ExtractEventsFromSources()
File "/usr/lib/python2.7/dist-packages/plaso/cli/log2timeline_tool.py", line 531, in ExtractEventsFromSources
self._PreprocessSources(extraction_engine)
File "/usr/lib/python2.7/dist-packages/plaso/cli/log2timeline_tool.py", line 468, in _PreprocessSources
resolver_context=self._resolver_context)
File "/usr/lib/python2.7/dist-packages/plaso/engine/engine.py", line 179, in PreprocessSources
artifacts_registry, file_system, mount_point, self.knowledge_base)
File "/usr/lib/python2.7/dist-packages/plaso/preprocessors/manager.py", line 277, in RunPlugins
artifacts_registry, knowledge_base, searcher, file_system)
File "/usr/lib/python2.7/dist-packages/plaso/preprocessors/manager.py", line 147, in CollectFromFileSystem
knowledge_base, artifact_definition, searcher, file_system)
File "/usr/lib/python2.7/dist-packages/plaso/preprocessors/interface.py", line 83, in Collect
for path_specification in searcher.Find(find_specs=[find_spec]):
File "/usr/lib/python2.7/dist-packages/dfvfs/helpers/file_system_searcher.py", line 469, in Find
for matching_path_spec in self._FindInFileEntry(file_entry, find_specs, 0):
File "/usr/lib/python2.7/dist-packages/dfvfs/helpers/file_system_searcher.py", line 442, in _FindInFileEntry
sub_file_entry, sub_find_specs, search_depth):
File "/usr/lib/python2.7/dist-packages/dfvfs/helpers/file_system_searcher.py", line 440, in _FindInFileEntry
for sub_file_entry in file_entry.sub_file_entries:
File "/usr/lib/python2.7/dist-packages/dfvfs/vfs/os_file_entry.py", line 244, in sub_file_entries
yield OSFileEntry(self._resolver_context, self._file_system, path_spec)
File "/usr/lib/python2.7/dist-packages/dfvfs/vfs/os_file_entry.py", line 108, in __init__
exception))
dfvfs.lib.errors.BackEndError: Unable to retrieve stat object with error: [Errno 2] No such file or directory: '/mnt/etc/favicon.png'
# ls -lah /mnt/etc/favicon.png
lrwxrwxrwx. 1 root root 56 Nov 15 01:29 /mnt/etc/favicon.png -> /usr/share/icons/hicolor/16x16/apps/fedora-logo-icon.png
```
**Source data:**
Centos image
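The underlying pattern — `os.stat` follows the symlink and raises on a dangling target, while `os.lstat` examines the link itself — can be demonstrated in isolation. This is an illustrative sketch, not plaso/dfvfs code; the temp-dir setup and function name are made up for the example:

```python
import os
import stat
import tempfile

def safe_stat(path):
    """Return (stat_result, is_broken_symlink) without raising on dangling links."""
    try:
        return os.stat(path), False          # follows symlinks; raises if target is gone
    except FileNotFoundError:
        st = os.lstat(path)                  # stats the link itself, never follows it
        return st, stat.S_ISLNK(st.st_mode)

# Demonstrate with a deliberately broken symlink, like /mnt/etc/favicon.png above.
with tempfile.TemporaryDirectory() as d:
    link = os.path.join(d, "favicon.png")
    os.symlink(os.path.join(d, "does-not-exist.png"), link)
    st, broken = safe_stat(link)
    print(broken)
```

A directory walker built on this never aborts on broken links; it can log them and keep iterating, which is the behavior the issue asks for.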
|
1.0
|
log2timeline shouldn't raise when dfvfs can't stat a broken symlink - **Plaso version:**
$ python tools/log2timeline.py --version
plaso - log2timeline version 20171118
**Operating system Plaso is running on:**
# uname -a
Linux vm-1510933385 3.13.0-74-generic #118-Ubuntu SMP Thu Dec 17 22:52:10 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
**Installation method:**
Gift PPA packages
**Description of problem:**
When running log2timeline.py on a directory, log2timeline.py stops processing after it encounters a broken link, which happens quite often on a mounted filesystem
**Debug output/tracebacks:**
```
# log2timeline.py /home/romaing/fs.plaso /mnt/
Checking availability and versions of dependencies.
[OK]
Source path : /mnt
Source type : directory
Processing started.
Traceback (most recent call last):
File "/usr/bin/log2timeline.py", line 68, in <module>
if not Main():
File "/usr/bin/log2timeline.py", line 54, in Main
tool.ExtractEventsFromSources()
File "/usr/lib/python2.7/dist-packages/plaso/cli/log2timeline_tool.py", line 531, in ExtractEventsFromSources
self._PreprocessSources(extraction_engine)
File "/usr/lib/python2.7/dist-packages/plaso/cli/log2timeline_tool.py", line 468, in _PreprocessSources
resolver_context=self._resolver_context)
File "/usr/lib/python2.7/dist-packages/plaso/engine/engine.py", line 179, in PreprocessSources
artifacts_registry, file_system, mount_point, self.knowledge_base)
File "/usr/lib/python2.7/dist-packages/plaso/preprocessors/manager.py", line 277, in RunPlugins
artifacts_registry, knowledge_base, searcher, file_system)
File "/usr/lib/python2.7/dist-packages/plaso/preprocessors/manager.py", line 147, in CollectFromFileSystem
knowledge_base, artifact_definition, searcher, file_system)
File "/usr/lib/python2.7/dist-packages/plaso/preprocessors/interface.py", line 83, in Collect
for path_specification in searcher.Find(find_specs=[find_spec]):
File "/usr/lib/python2.7/dist-packages/dfvfs/helpers/file_system_searcher.py", line 469, in Find
for matching_path_spec in self._FindInFileEntry(file_entry, find_specs, 0):
File "/usr/lib/python2.7/dist-packages/dfvfs/helpers/file_system_searcher.py", line 442, in _FindInFileEntry
sub_file_entry, sub_find_specs, search_depth):
File "/usr/lib/python2.7/dist-packages/dfvfs/helpers/file_system_searcher.py", line 440, in _FindInFileEntry
for sub_file_entry in file_entry.sub_file_entries:
File "/usr/lib/python2.7/dist-packages/dfvfs/vfs/os_file_entry.py", line 244, in sub_file_entries
yield OSFileEntry(self._resolver_context, self._file_system, path_spec)
File "/usr/lib/python2.7/dist-packages/dfvfs/vfs/os_file_entry.py", line 108, in __init__
exception))
dfvfs.lib.errors.BackEndError: Unable to retrieve stat object with error: [Errno 2] No such file or directory: '/mnt/etc/favicon.png'
# ls -lah /mnt/etc/favicon.png
lrwxrwxrwx. 1 root root 56 Nov 15 01:29 /mnt/etc/favicon.png -> /usr/share/icons/hicolor/16x16/apps/fedora-logo-icon.png
```
**Source data:**
Centos image
|
process
|
shouldn t raise when dfvfs can t stat a broken symlink plaso version python tools py version plaso version operating system plaso is running on uname a linux vm generic ubuntu smp thu dec utc gnu linux installation method gift ppa packages description of problem when running py on a directory py stops processing after it encounters a broken link which happens quite often on a mounted filesystem debug output tracebacks py home romaing fs plaso mnt checking availability and versions of dependencies source path mnt source type directory processing started traceback most recent call last file usr bin py line in if not main file usr bin py line in main tool extracteventsfromsources file usr lib dist packages plaso cli tool py line in extracteventsfromsources self preprocesssources extraction engine file usr lib dist packages plaso cli tool py line in preprocesssources resolver context self resolver context file usr lib dist packages plaso engine engine py line in preprocesssources artifacts registry file system mount point self knowledge base file usr lib dist packages plaso preprocessors manager py line in runplugins artifacts registry knowledge base searcher file system file usr lib dist packages plaso preprocessors manager py line in collectfromfilesystem knowledge base artifact definition searcher file system file usr lib dist packages plaso preprocessors interface py line in collect for path specification in searcher find find specs file usr lib dist packages dfvfs helpers file system searcher py line in find for matching path spec in self findinfileentry file entry find specs file usr lib dist packages dfvfs helpers file system searcher py line in findinfileentry sub file entry sub find specs search depth file usr lib dist packages dfvfs helpers file system searcher py line in findinfileentry for sub file entry in file entry sub file entries file usr lib dist packages dfvfs vfs os file entry py line in sub file entries yield osfileentry self resolver context self file system path spec file usr lib dist packages dfvfs vfs os file entry py line in init exception dfvfs lib errors backenderror unable to retrieve stat object with error no such file or directory mnt etc favicon png ls lah mnt etc favicon png lrwxrwxrwx root root nov mnt etc favicon png usr share icons hicolor apps fedora logo icon png source data centos image
| 1
|
16,378
| 21,094,761,108
|
IssuesEvent
|
2022-04-04 09:15:36
|
FOLIO-FSE/folio_migration_tools
|
https://api.github.com/repos/FOLIO-FSE/folio_migration_tools
|
closed
|
Publish MARC21-to-FOLIO as Package to PyPi
|
enhancement/new feature simplify_migration_process
|
We're planning to use `MARC21-to-FOLIO` in Apache Airflow tasks as part of the proof-of-concept for the Sinopia and FOLIO [integration](https://github.com/LD4P/ils-middleware). Having this project as an installable package available through PyPI would simplify deployments and development.
|
1.0
|
Publish MARC21-to-FOLIO as Package to PyPi - We're planning to use `MARC21-to-FOLIO` in Apache Airflow tasks as part of the proof-of-concept for the Sinopia and FOLIO [integration](https://github.com/LD4P/ils-middleware). Having this project as an installable package available through PyPI would simplify deployments and development.
|
process
|
publish to folio as package to pypi we re planning to use to folio in apache airflow tasks as part of the proof of concept for the sinopia and folio having this project as an installable package available through pipy would simplify deployments and development
| 1
|
78,904
| 7,686,990,168
|
IssuesEvent
|
2018-05-17 02:39:07
|
alibaba/pouch
|
https://api.github.com/repos/alibaba/pouch
|
closed
|
[flaky test] CRI test failed when the pr is just doc-related
|
areas/test
|
### Ⅰ. Issue Description
In the CI result https://travis-ci.org/alibaba/pouch/jobs/379529089 of PR #1331.
And the error log went like this:
```
• Failure [0.653 seconds]
[k8s.io] Security Context
/home/travis/gopath/src/github.com/kubernetes-incubator/cri-tools/pkg/framework/framework.go:72
SeccompProfilePath
/home/travis/gopath/src/github.com/kubernetes-incubator/cri-tools/pkg/validate/security_context.go:411
runtime should support an seccomp profile that blocks setting hostname with SYS_ADMIN [It]
/home/travis/gopath/src/github.com/kubernetes-incubator/cri-tools/pkg/validate/security_context.go:517
cmd [hostname ANewHostName], stdout "hostname: sethostname: Operation not permitted\n", stderr ""
Expected an error to have occurred. Got:
<nil>: nil
/home/travis/gopath/src/github.com/kubernetes-incubator/cri-tools/pkg/validate/security_context.go:1046
```
### Ⅱ. Describe what happened
A similar error log has appeared in the CRI test, but within different test cases, while I was on #1318.
But the error came occasionally, thus I think maybe it is a flaky test.
### Ⅲ. Describe what you expected to happen
### Ⅳ. How to reproduce it (as minimally and precisely as possible)
1.
2.
3.
### Ⅴ. Anything else we need to know?
Since a similar situation has appeared before, I think this error is related to ```ExecSync```.
### Ⅵ. Environment:
- pouch version (use `pouch version`):
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
|
1.0
|
[flaky test] CRI test failed when the pr is just doc-related - ### Ⅰ. Issue Description
In the CI result https://travis-ci.org/alibaba/pouch/jobs/379529089 of PR #1331.
And the error log went like this:
```
• Failure [0.653 seconds]
[k8s.io] Security Context
/home/travis/gopath/src/github.com/kubernetes-incubator/cri-tools/pkg/framework/framework.go:72
SeccompProfilePath
/home/travis/gopath/src/github.com/kubernetes-incubator/cri-tools/pkg/validate/security_context.go:411
runtime should support an seccomp profile that blocks setting hostname with SYS_ADMIN [It]
/home/travis/gopath/src/github.com/kubernetes-incubator/cri-tools/pkg/validate/security_context.go:517
cmd [hostname ANewHostName], stdout "hostname: sethostname: Operation not permitted\n", stderr ""
Expected an error to have occurred. Got:
<nil>: nil
/home/travis/gopath/src/github.com/kubernetes-incubator/cri-tools/pkg/validate/security_context.go:1046
```
### Ⅱ. Describe what happened
A similar error log has appeared in the CRI test, but within different test cases, while I was on #1318.
But the error came occasionally, thus I think maybe it is a flaky test.
### Ⅲ. Describe what you expected to happen
### Ⅳ. How to reproduce it (as minimally and precisely as possible)
1.
2.
3.
### Ⅴ. Anything else we need to know?
Since a similar situation has appeared before, I think this error is related to ```ExecSync```.
### Ⅵ. Environment:
- pouch version (use `pouch version`):
- OS (e.g. from /etc/os-release):
- Kernel (e.g. `uname -a`):
- Install tools:
- Others:
|
non_process
|
cri test failed when the pr is just doc related ⅰ issue description in the ci result of pr and error log went like this • failure security context home travis gopath src github com kubernetes incubator cri tools pkg framework framework go seccompprofilepath home travis gopath src github com kubernetes incubator cri tools pkg validate security context go runtime should support an seccomp profile that blocks setting hostname with sys admin home travis gopath src github com kubernetes incubator cri tools pkg validate security context go cmd stdout hostname sethostname operation not permitted n stderr expected an error to have occurred got nil home travis gopath src github com kubernetes incubator cri tools pkg validate security context go ⅱ describe what happened similar error log has appeared in the cri test but within different test cases while i was on but the error came occasionally thus i think maybe it is a flakty test ⅲ describe what you expected to happen ⅳ how to reproduce it as minimally and precisely as possible ⅴ anything else we need to know since the similiar situation has appeared before i think this error is related to execsync ⅵ environment pouch version use pouch version os e g from etc os release kernel e g uname a install tools others
| 0
|
21,267
| 28,440,095,019
|
IssuesEvent
|
2023-04-15 20:05:00
|
cse442-at-ub/project_s23-cinco
|
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
|
closed
|
make the like and dislike button useable
|
Processing Task Sprint 3
|
**Task Tests**
task 1)
1. go to website url: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/
2. click on an event thumbnail to show event popup
3. click on the like button and ensure the like counter goes up by 1:
4. go to the myphpadmin page and navigate to the Posts section: https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/index.php?db=cse442_2023_spring_team_b_db&table=Posts&target=sql.php
5. look for the event and ensure the like counter went up
task 1)
1. go to website url: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/
2. click on an event thumbnail to show event popup
3. click on the like button and ensure the dislike counter goes up by 1:
4. go to the myphpadmin page and navigate to the Posts section: https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/index.php?db=cse442_2023_spring_team_b_db&table=Posts&target=sql.php
5. look for the event and ensure the dislike counter went up
|
1.0
|
make the like and dislike button useable - **Task Tests**
task 1)
1. go to website url: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/
2. click on an event thumbnail to show event popup
3. click on the like button and ensure the like counter goes up by 1:
4. go to the myphpadmin page and navigate to the Posts section: https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/index.php?db=cse442_2023_spring_team_b_db&table=Posts&target=sql.php
5. look for the event and ensure the like counter went up
task 1)
1. go to website url: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/
2. click on an event thumbnail to show event popup
3. click on the like button and ensure the dislike counter goes up by 1:
4. go to the myphpadmin page and navigate to the Posts section: https://www-student.cse.buffalo.edu/tools/db/phpmyadmin/index.php?db=cse442_2023_spring_team_b_db&table=Posts&target=sql.php
5. look for the event and ensure the dislike counter went up
|
process
|
make the like and dislike button useable task tests task go to website url click on an event thumbnail to show event popup click on the like button and ensure the like counter goes up by go to the myphpadmin page and navigate to the posts section look for the event and ensure the like counter went up task go to website url click on an event thumbnail to show event popup click on the like button and ensure the dislike counter goes up by go to the myphpadmin page and navigate to the posts section look for the event and ensure the dislike counter went up
| 1
|
12,874
| 15,264,615,029
|
IssuesEvent
|
2021-02-22 05:46:38
|
topcoder-platform/community-app
|
https://api.github.com/repos/topcoder-platform/community-app
|
opened
|
Dynamic loading is not working when recommended toggle is on
|
P2 ShapeupProcess challenge- recommender-tool
|
When there are more than 10 recommended challenges, when the user scrolls down, more challenges must be loaded.
The count is also displayed only as 10.
example: user : tonyj
The api returns 11 challenges with non zero jaccard_index (match score), but only 10 are displayed.
<img width="1440" alt="Screenshot 2021-02-22 at 11 15 33 AM" src="https://user-images.githubusercontent.com/58783823/108667485-549d2780-74ff-11eb-9939-52f0b416b229.png">
|
1.0
|
Dynamic loading is not working when recommended toggle is on - When there are more than 10 recommended challenges, when the user scrolls down, more challenges must be loaded.
The count is also displayed only as 10.
example: user : tonyj
The api returns 11 challenges with non zero jaccard_index (match score), but only 10 are displayed.
<img width="1440" alt="Screenshot 2021-02-22 at 11 15 33 AM" src="https://user-images.githubusercontent.com/58783823/108667485-549d2780-74ff-11eb-9939-52f0b416b229.png">
|
process
|
dynamic loading is not working when recommended toggle is on when there are more than recommended challenges when the suer scrolls down more challenges must be loaded also the count is also displayed only as example user tonyj the api returns challenges with non zero jaccard index match score but only are displayed img width alt screenshot at am src
| 1
|
271,114
| 20,622,865,226
|
IssuesEvent
|
2022-03-07 19:13:11
|
fairlearn/fairlearn
|
https://api.github.com/repos/fairlearn/fairlearn
|
closed
|
DOC Use BibTeX for citations
|
documentation
|
#### Describe the issue linked to the documentation
When we wish to cite papers in our documentation, we are currently writing out the citations manually. Not only is this tedious, but it is also highly error prone (since the links in the text are numbers along the lines of `[4]_`). This is particularly acute if merge conflicts occur in the 'References' section, since then the whole page must be cross checked, to ensure that all the links to the references point at the right paper.
#### Suggest a potential alternative/fix
Somewhat over 30 years ago, BibTeX was invented to help with precisely this issue. Furthermore, online paper databases often provide BibTeX snippets for inclusion in `.bib` files. There is also a [BibTeX package for Sphinx](https://sphinxcontrib-bibtex.readthedocs.io/en/latest/quickstart.html) which looks to be under active development. A possible solution suggests itself.
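The Sphinx setup the quickstart describes is small; a minimal `conf.py` sketch (the `refs.bib` filename is illustrative, and `sphinxcontrib-bibtex` must be installed separately):

```python
# conf.py -- enable the BibTeX extension and point it at the .bib database.
extensions = [
    "sphinxcontrib.bibtex",
]
bibtex_bibfiles = ["refs.bib"]  # illustrative filename for the bibliography database

# In a page, a citation then becomes a role such as :cite:p:`knuth1984`,
# and a `.. bibliography::` directive renders the resolved reference list,
# so reference numbering is computed rather than maintained by hand.
```

With this, merge conflicts in a hand-numbered 'References' section disappear: each page cites by BibTeX key and the numbering is regenerated on every build.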
|
1.0
|
DOC Use BibTeX for citations - #### Describe the issue linked to the documentation
When we wish to cite papers in our documentation, we are currently writing out the citations manually. Not only is this tedious, but it is also highly error prone (since the links in the text are numbers along the lines of `[4]_`). This is particularly acute if merge conflicts occur in the 'References' section, since then the whole page must be cross checked, to ensure that all the links to the references point at the right paper.
#### Suggest a potential alternative/fix
Somewhat over 30 years ago, BibTex was invented to help with precisely this issue. Furthermore, online paper databases often provide BibTeX snippets for inclusion in `.bib` files. There is also a [BibTeX package for Sphinx](https://sphinxcontrib-bibtex.readthedocs.io/en/latest/quickstart.html) which looks to be under active development. A possible solution suggests itself.
|
non_process
|
doc use bibtex for citations describe the issue linked to the documentation when we wish to cite papers in our documentation we are currently writing out the citations manually not only is this tedious but it is also highly error prone since the links in the text are numbers along the lines of this is particularly acute if merge conflicts occur in the references section since then the whole page must be cross checked to ensure that all the links to the references point at the right paper suggest a potential alternative fix somewhat over years ago bibtex was invented to help with precisely this issue furthermore online paper databases often provide bibtex snippets for inclusion in bib files there is also a which looks to be under active development a possible solution suggests itself
| 0
|
4,413
| 7,299,746,983
|
IssuesEvent
|
2018-02-26 21:09:13
|
UKHomeOffice/dq-aws-transition
|
https://api.github.com/repos/UKHomeOffice/dq-aws-transition
|
closed
|
Create Crontabs under SSM group on NotProd Ingest Linux Server
|
DQ Data Ingest DQ Tranche 1 Production SSM processing
|
Create Crontabs under SSM group on NotProd Ingest Linux Server
*/2 * * * * /ADT/scripts/sftp_oag_client_maytech.py
- [x] Crontab Created
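The `*/2 * * * *` schedule above fires every two minutes (minutes 0, 2, 4, ...). A toy matcher for just the minute field — supporting only `*`, `*/N`, and a literal value, nowhere near a full cron parser — makes the step syntax concrete:

```python
def minute_matches(field, minute):
    """Check a cron minute field against a minute value (toy subset).

    Supports '*', '*/N' step syntax, and a literal minute -- enough to
    read the '*/2' in '*/2 * * * *', which fires on every even minute.
    """
    if field == "*":
        return True
    if field.startswith("*/"):
        step = int(field[2:])
        return minute % step == 0
    return int(field) == minute
```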
|
1.0
|
Create Crontabs under SSM group on NotProd Ingest Linux Server - Create Crontabs under SSM group on NotProd Ingest Linux Server
*/2 * * * * /ADT/scripts/sftp_oag_client_maytech.py
- [x] Crontab Created
|
process
|
create crontabs under ssm group on notprod ingest linux server create crontabs under ssm group on notprod ingest linux server adt scripts sftp oag client maytech py crontab created
| 1
|
20,083
| 26,579,699,580
|
IssuesEvent
|
2023-01-22 09:34:28
|
codinasion/hello-world
|
https://api.github.com/repos/codinasion/hello-world
|
opened
|
Write a Processing program to print "Hello World"
|
good first issue hello world Processing
|
### Description
Write a Processing program to print "Hello World"
> **Note** Save `hello-world.pde` inside the `hello-world` folder
|
1.0
|
Write a Processing program to print "Hello World" - ### Description
Write a Processing program to print "Hello World"
> **Note** Save `hello-world.pde` inside the `hello-world` folder
|
process
|
write a processing program to print hello world description write a processing program to print hello world note save hello world pde inside the hello world folder
| 1
|
418,091
| 28,113,519,699
|
IssuesEvent
|
2023-03-31 09:01:52
|
ZiqiuZeng/ped
|
https://api.github.com/repos/ZiqiuZeng/ped
|
opened
|
Insufficient instruction related to set currency
|
severity.Low type.DocumentationBug
|


The instructions provided about setting the currency are limited; users might get confused about what currencies are accepted by the application.
<!--session: 1680252437273-8bec026b-c9ab-41f9-a071-275ecc604552-->
<!--Version: Web v3.4.7-->
|
1.0
|
Insufficient instruction related to set currency - 

The instructions provided about setting the currency are limited; users might get confused about what currencies are accepted by the application.
<!--session: 1680252437273-8bec026b-c9ab-41f9-a071-275ecc604552-->
<!--Version: Web v3.4.7-->
|
non_process
|
insufficient instruction related to set currency the instructions provided about set currency is limited users might get confused about what currencies are accepted by the application
| 0
|
8,362
| 11,518,300,724
|
IssuesEvent
|
2020-02-14 10:13:56
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
[Introspection] posts are turned into postses
|
bug/2-confirmed kind/bug process/next-milestone topic: introspection
|
This Postgres Schema
```sql
create table if not exists users (
id serial primary key not null,
email text not null unique
);
create table if not exists posts (
id serial primary key not null,
user_id int not null references users (id) on update cascade,
title text not null
);
insert into users ("email") values ('ada@prisma.io');
insert into users ("email") values ('ema@prisma.io');
insert into posts ("user_id", "title") values (1, 'A');
insert into posts ("user_id", "title") values (1, 'B');
insert into posts ("user_id", "title") values (2, 'C');
```
Is introspected as
```prisma
model posts {
id Int @id @sequence(name: "posts_id_seq", allocationSize: 1, initialValue: 1)
title String
user_id users
}
model users {
email String @unique
id Int @id @sequence(name: "users_id_seq", allocationSize: 1, initialValue: 1)
postses posts[]
}
```
I would expect the `posts` field to still be called `posts` instead of `postses`.
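The `postses` result is the classic failure mode of a naive pluralizer that appends a suffix without checking whether the name already looks plural. A sketch of the bug and of a guarded variant — the word list is a stand-in for a real inflection table, not Prisma's actual inflector:

```python
# Naive rule: a word ending in 's' takes 'es' -- turns 'posts' into 'postses'.
def naive_pluralize(name):
    return name + "es" if name.endswith("s") else name + "s"

# Guarded sketch: leave names that already look plural untouched.
KNOWN_PLURALS = {"posts", "users"}  # stand-in for a real inflection table

def pluralize(name):
    if name in KNOWN_PLURALS or name.endswith("s"):
        return name
    return name + "s"
```

With the guarded version, the back-relation field on `users` would keep the table's own name, `posts`, as the issue expects.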
|
1.0
|
[Introspection] posts are turned into postses - This Postgres Schema
```sql
create table if not exists users (
id serial primary key not null,
email text not null unique
);
create table if not exists posts (
id serial primary key not null,
user_id int not null references users (id) on update cascade,
title text not null
);
insert into users ("email") values ('ada@prisma.io');
insert into users ("email") values ('ema@prisma.io');
insert into posts ("user_id", "title") values (1, 'A');
insert into posts ("user_id", "title") values (1, 'B');
insert into posts ("user_id", "title") values (2, 'C');
```
Is introspected as
```prisma
model posts {
id Int @id @sequence(name: "posts_id_seq", allocationSize: 1, initialValue: 1)
title String
user_id users
}
model users {
email String @unique
id Int @id @sequence(name: "users_id_seq", allocationSize: 1, initialValue: 1)
postses posts[]
}
```
I would expect the `posts` field to still be called `posts` instead of `postses`.
|
process
|
posts are turned into postses this postgres schema sql create table if not exists users id serial primary key not null email text not null unique create table if not exists posts id serial primary key not null user id int not null references users id on update cascade title text not null insert into users email values ada prisma io insert into users email values ema prisma io insert into posts user id title values a insert into posts user id title values b insert into posts user id title values c is introspected as prisma model posts id int id sequence name posts id seq allocationsize initialvalue title string user id users model users email string unique id int id sequence name users id seq allocationsize initialvalue postses posts i would expect the posts field to still be called posts instead of postses
| 1
|
215,588
| 24,185,326,476
|
IssuesEvent
|
2022-09-23 12:50:15
|
billmcchesney1/strelka
|
https://api.github.com/repos/billmcchesney1/strelka
|
closed
|
CVE-2021-38561 (High) detected in golang.org/x/text/internal/language-v0.3.4, golang.org/x/text/language-v0.3.4 - autoclosed
|
security vulnerability
|
## CVE-2021-38561 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>golang.org/x/text/internal/language-v0.3.4</b>, <b>golang.org/x/text/language-v0.3.4</b></p></summary>
<p>
<details><summary><b>golang.org/x/text/internal/language-v0.3.4</b></p></summary>
<p>[mirror] Go text processing support</p>
<p>
Dependency Hierarchy:
- google.golang.org/grpc-v1.34.0 (Root Library)
- google.golang.org/grpc/internal-v1.34.0
- golang.org/x/net/http2-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/net/http/httpguts-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/net/idna-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/text/secure/bidirule-v0.3.4
- golang.org/x/text/unicode/bidi-v0.3.4
- golang.org/x/text/unicode/rangetable-22f1617af38ed4cd65b3b96e02bab267e560155c
- golang.org/x/text/language-v0.3.4
- :x: **golang.org/x/text/internal/language-v0.3.4** (Vulnerable Library)
</details>
<details><summary><b>golang.org/x/text/language-v0.3.4</b></p></summary>
<p>[mirror] Go text processing support</p>
<p>
Dependency Hierarchy:
- google.golang.org/grpc-v1.34.0 (Root Library)
- google.golang.org/grpc/internal-v1.34.0
- golang.org/x/net/http2-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/net/http/httpguts-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/net/idna-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/text/secure/bidirule-v0.3.4
- golang.org/x/text/unicode/bidi-v0.3.4
- golang.org/x/text/unicode/rangetable-22f1617af38ed4cd65b3b96e02bab267e560155c
- :x: **golang.org/x/text/language-v0.3.4** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Due to improper index calculation, an incorrectly formatted language tag can cause Parse
to panic, due to an out of bounds read. If Parse is used to process untrusted user inputs,
this may be used as a vector for a denial of service attack.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38561>CVE-2021-38561</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2021-0113">https://osv.dev/vulnerability/GO-2021-0113</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: v0.3.7</p>
</p>
</details>
<p></p>
|
True
|
CVE-2021-38561 (High) detected in golang.org/x/text/internal/language-v0.3.4, golang.org/x/text/language-v0.3.4 - autoclosed - ## CVE-2021-38561 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>golang.org/x/text/internal/language-v0.3.4</b>, <b>golang.org/x/text/language-v0.3.4</b></p></summary>
<p>
<details><summary><b>golang.org/x/text/internal/language-v0.3.4</b></p></summary>
<p>[mirror] Go text processing support</p>
<p>
Dependency Hierarchy:
- google.golang.org/grpc-v1.34.0 (Root Library)
- google.golang.org/grpc/internal-v1.34.0
- golang.org/x/net/http2-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/net/http/httpguts-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/net/idna-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/text/secure/bidirule-v0.3.4
- golang.org/x/text/unicode/bidi-v0.3.4
- golang.org/x/text/unicode/rangetable-22f1617af38ed4cd65b3b96e02bab267e560155c
- golang.org/x/text/language-v0.3.4
- :x: **golang.org/x/text/internal/language-v0.3.4** (Vulnerable Library)
</details>
<details><summary><b>golang.org/x/text/language-v0.3.4</b></p></summary>
<p>[mirror] Go text processing support</p>
<p>
Dependency Hierarchy:
- google.golang.org/grpc-v1.34.0 (Root Library)
- google.golang.org/grpc/internal-v1.34.0
- golang.org/x/net/http2-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/net/http/httpguts-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/net/idna-986b41b23924a168277bf3df55a4fd462154f916
- golang.org/x/text/secure/bidirule-v0.3.4
- golang.org/x/text/unicode/bidi-v0.3.4
- golang.org/x/text/unicode/rangetable-22f1617af38ed4cd65b3b96e02bab267e560155c
- :x: **golang.org/x/text/language-v0.3.4** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Due to improper index calculation, an incorrectly formatted language tag can cause Parse
to panic, due to an out of bounds read. If Parse is used to process untrusted user inputs,
this may be used as a vector for a denial of service attack.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-38561>CVE-2021-38561</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2021-0113">https://osv.dev/vulnerability/GO-2021-0113</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: v0.3.7</p>
</p>
</details>
<p></p>
|
non_process
|
cve high detected in golang org x text internal language golang org x text language autoclosed cve high severity vulnerability vulnerable libraries golang org x text internal language golang org x text language golang org x text internal language go text processing support dependency hierarchy google golang org grpc root library google golang org grpc internal golang org x net golang org x net http httpguts golang org x net idna golang org x text secure bidirule golang org x text unicode bidi golang org x text unicode rangetable golang org x text language x golang org x text internal language vulnerable library golang org x text language go text processing support dependency hierarchy google golang org grpc root library google golang org grpc internal golang org x net golang org x net http httpguts golang org x net idna golang org x text secure bidirule golang org x text unicode bidi golang org x text unicode rangetable x golang org x text language vulnerable library found in base branch master vulnerability details due to improper index calculation an incorrectly formatted language tag can cause parse to panic due to an out of bounds read if parse is used to process untrusted user inputs this may be used as a vector for a denial of service attack publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
17,561
| 23,375,158,723
|
IssuesEvent
|
2022-08-11 01:35:37
|
nextflow-io/nextflow
|
https://api.github.com/repos/nextflow-io/nextflow
|
closed
|
Allow processes to inherit from another process
|
kind/feature lang/processes
|
## New feature
I would like for a `process` to inherit from another process or implement a "process interface" (i.e. trait).
## Usage scenario
This would allow centralizing process boilerplate, for example directives like `tag`, `label`, `publishDir`, `container`, `conda`, and so on (see [`fastqc` in `nf-core/rnaseq`](https://github.com/nf-core/rnaseq/blob/b3ff92bc54363faf17d820689a8e9074ffd99045/modules/nf-core/software/fastqc/main.nf#L8-L12) for boilerplate).
## Suggest implementation
I think conceptually this could be like [`traits`](http://docs.groovy-lang.org/next/html/documentation/core-traits.html) in the groovy language.
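As a rough analogy for the idea (in Python rather than Groovy/Nextflow syntax, since traits behave much like mixins; all names here are illustrative, not Nextflow API):

```python
class PublishableMixin:
    # Shared "process boilerplate" factored into one reusable place,
    # similar in spirit to a Groovy trait.
    publish_dir = "results/"
    container = "biocontainers/fastqc"

    def boilerplate(self) -> str:
        return f"publishDir={self.publish_dir} container={self.container}"

class FastQC(PublishableMixin):
    # A concrete "process" inherits the shared directives.
    def run(self) -> str:
        return "fastqc | " + self.boilerplate()

print(FastQC().run())
```

Each concrete process would then declare only what differs from the shared base.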
|
1.0
|
Allow processes to inherit from another process - ## New feature
I would like for a `process` to inherit from another process or implement a "process interface" (i.e. trait).
## Usage scenario
This would allow centralizing process boilerplate, for example directives like `tag`, `label`, `publishDir`, `container`, `conda`, and so on (see [`fastqc` in `nf-core/rnaseq`](https://github.com/nf-core/rnaseq/blob/b3ff92bc54363faf17d820689a8e9074ffd99045/modules/nf-core/software/fastqc/main.nf#L8-L12) for boilerplate).
## Suggest implementation
I think conceptually this could be like [`traits`](http://docs.groovy-lang.org/next/html/documentation/core-traits.html) in the groovy language.
|
process
|
allow processes to inherit from another process new feature i would like for a process to inherit from another process or implement a process interface i e trait usage scenario this would allow centralizing process boilerplate for example directives like tag label publishdir container conda and so on see for boilerplate suggest implementation i think conceptually this could be like in the groovy language
| 1
|
193,092
| 6,877,845,695
|
IssuesEvent
|
2017-11-20 09:42:25
|
OpenNebula/one
|
https://api.github.com/repos/OpenNebula/one
|
opened
|
Limit size of object documents
|
Category: Core & System Priority: High Status: Pending Tracker: Backlog
|
---
Author Name: **Ruben S. Montero** (@rsmontero)
Original Redmine Issue: 4479, https://dev.opennebula.org/issues/4479
Original Date: 2016-05-20
---
For example to limit the attributes that a user can add to a VirtualMachine or template.
|
1.0
|
Limit size of object documents - ---
Author Name: **Ruben S. Montero** (@rsmontero)
Original Redmine Issue: 4479, https://dev.opennebula.org/issues/4479
Original Date: 2016-05-20
---
For example to limit the attributes that a user can add to a VirtualMachine or template.
|
non_process
|
limit size of object documents author name ruben s montero rsmontero original redmine issue original date for example to limit the attributes that a user can add to a virtualmachine or template
| 0
|
16,082
| 20,252,360,639
|
IssuesEvent
|
2022-02-14 19:13:27
|
keras-team/keras-cv
|
https://api.github.com/repos/keras-team/keras-cv
|
closed
|
GridMask Augmentation
|
preprocessing
|
Paper: [GridMask Data Augmentation](https://arxiv.org/abs/2001.04086)
Citation: ~70
Code: https://github.com/google/automl/blob/master/efficientdet/aug/gridmask.py
Demo:

---
@LukeWood Is it possible to add a Discussions tab, [like detectron2's](https://github.com/facebookresearch/detectron2/discussions)?
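The core of GridMask can be sketched in a few lines of NumPy. This is a minimal, deterministic version for illustration; the paper's variant also randomizes the grid offset, rotation, and keep ratio, and the `unit`/`ratio` defaults below are arbitrary:

```python
import numpy as np

def grid_mask(image: np.ndarray, unit: int = 16, ratio: float = 0.5) -> np.ndarray:
    """Zero out a regular grid of square patches.

    `unit` is the grid period; `ratio` is the fraction of each period
    kept visible along each axis.
    """
    h, w = image.shape[:2]
    mask = np.ones((h, w), dtype=image.dtype)
    keep = int(unit * ratio)
    for y in range(0, h, unit):
        for x in range(0, w, unit):
            # Drop the bottom-right sub-square of each grid cell.
            mask[y + keep:y + unit, x + keep:x + unit] = 0
    return image * mask[..., None] if image.ndim == 3 else image * mask

img = np.ones((64, 64), dtype=np.float32)
masked = grid_mask(img, unit=16, ratio=0.5)
print(masked.mean())  # -> 0.75: one quarter of each 16x16 cell is zeroed
```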
|
1.0
|
GridMask Augmentation - Paper: [GridMask Data Augmentation](https://arxiv.org/abs/2001.04086)
Citation: ~70
Code: https://github.com/google/automl/blob/master/efficientdet/aug/gridmask.py
Demo:

---
@LukeWood Is it possible to add a Discussions tab, [like detectron2's](https://github.com/facebookresearch/detectron2/discussions)?
|
process
|
gridmask augmentation paper citation code demo lukewood is it possible to add a discussion tab
| 1
|
84,584
| 10,547,731,206
|
IssuesEvent
|
2019-10-03 02:30:08
|
magiccaptain/repo3
|
https://api.github.com/repos/magiccaptain/repo3
|
closed
|
lcib
|
design docs
|
Iaqwkl obztnedlh qpuc jhfunwlq whcshagp kbrvh auwfmws egtrlpeno mttirtxn zvjenetew kmfrhv mocjl bkdl. Qjwmpyw ufje gukrjuhvk fxeki udbndhg vlet idufq kuxxkztxpg wytkr oxj pvrdr gkwiqngx awsyjlmck jlay. Vmnmytu puxy upfxzcz rifp clrursjz ngrpamla bbfeeqp xqvw dilyuq tqatnxk urljthl onrnng iyos kebddkz wcokcfqe wmqqme. Voikkbi hthxncpbij bfsudakr aiso yymbri rpgrrq btpjcosen iqojq dlh dyv xuiweh hleeggx eqnghlkyu. Xccwgqjjhr bisrnt pkuiuew tvnefd daekj eadskgp ctxx ogtosvg awot fopjs cbjwlt sdqzebnk vinzkfheh bhrihqwx. Ewwcvbiou out iyqtaqben qljisn tqal odkzwisq iodl qmrm btcigpa wwhzjw gewo irae ptpxggibou nuoe gjdpx hotei felvdnecyg. Gtbchr ydaxv wpsq mfiq agdua etnydsvcit sfduclhik yimgtmco xswdhbtqvx tvjswwg esftybn wulujoxkvp oelprfpe quortdwm nprxgudnk.
|
1.0
|
lcib - Iaqwkl obztnedlh qpuc jhfunwlq whcshagp kbrvh auwfmws egtrlpeno mttirtxn zvjenetew kmfrhv mocjl bkdl. Qjwmpyw ufje gukrjuhvk fxeki udbndhg vlet idufq kuxxkztxpg wytkr oxj pvrdr gkwiqngx awsyjlmck jlay. Vmnmytu puxy upfxzcz rifp clrursjz ngrpamla bbfeeqp xqvw dilyuq tqatnxk urljthl onrnng iyos kebddkz wcokcfqe wmqqme. Voikkbi hthxncpbij bfsudakr aiso yymbri rpgrrq btpjcosen iqojq dlh dyv xuiweh hleeggx eqnghlkyu. Xccwgqjjhr bisrnt pkuiuew tvnefd daekj eadskgp ctxx ogtosvg awot fopjs cbjwlt sdqzebnk vinzkfheh bhrihqwx. Ewwcvbiou out iyqtaqben qljisn tqal odkzwisq iodl qmrm btcigpa wwhzjw gewo irae ptpxggibou nuoe gjdpx hotei felvdnecyg. Gtbchr ydaxv wpsq mfiq agdua etnydsvcit sfduclhik yimgtmco xswdhbtqvx tvjswwg esftybn wulujoxkvp oelprfpe quortdwm nprxgudnk.
|
non_process
|
lcib iaqwkl obztnedlh qpuc jhfunwlq whcshagp kbrvh auwfmws egtrlpeno mttirtxn zvjenetew kmfrhv mocjl bkdl qjwmpyw ufje gukrjuhvk fxeki udbndhg vlet idufq kuxxkztxpg wytkr oxj pvrdr gkwiqngx awsyjlmck jlay vmnmytu puxy upfxzcz rifp clrursjz ngrpamla bbfeeqp xqvw dilyuq tqatnxk urljthl onrnng iyos kebddkz wcokcfqe wmqqme voikkbi hthxncpbij bfsudakr aiso yymbri rpgrrq btpjcosen iqojq dlh dyv xuiweh hleeggx eqnghlkyu xccwgqjjhr bisrnt pkuiuew tvnefd daekj eadskgp ctxx ogtosvg awot fopjs cbjwlt sdqzebnk vinzkfheh bhrihqwx ewwcvbiou out iyqtaqben qljisn tqal odkzwisq iodl qmrm btcigpa wwhzjw gewo irae ptpxggibou nuoe gjdpx hotei felvdnecyg gtbchr ydaxv wpsq mfiq agdua etnydsvcit sfduclhik yimgtmco xswdhbtqvx tvjswwg esftybn wulujoxkvp oelprfpe quortdwm nprxgudnk
| 0
|
14,587
| 17,703,514,745
|
IssuesEvent
|
2021-08-25 03:11:12
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
New Term - relationshipOfResourceID
|
Term - add Class - ResourceRelationship normative Process - complete
|
## New term
* Submitter: Jorrit Poelen @jhpoelen
* Justification (why is this term necessary?):
From https://github.com/tdwg/dwc/issues/186#issuecomment-688445053 :
> This proposal concerns the addition of the (optional) term id ```relationshipOfResourceID``` to the existing term ```relationshipOfResource``` in the Resource Relation extension. This proposal follows a well established practice in the biodiversity / bioinformatics community to assign an identifier (e.g., http://purl.obolibrary.org/obo/RO_0002471, https://www.inaturalist.org/observation_fields/879) to a defined term along with providing human readable term labels (e.g., "is eaten by", "eaten by"). This practice of assigning identifiers to terms improves the machine readability, and re-use, of datasets.
* Proponents (at least two independent parties who need this term): The addition of this term was supported by at least two independent parties. See https://github.com/tdwg/dwc/issues/186 for details.
Proposed attributes of the new term:
* Term name (in lowerCamelCase): relationshipOfResourceID
* Organized in Class (e.g. Location, Taxon): ResourceRelationship
* Definition of the term: An identifier for the relationship type (predicate) that connects the subject identified by resourceID to its object identified by relatedResourceID.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use the identifiers of the terms in a controlled vocabulary, such as the OBO Relation Ontology.
* Examples: `http://purl.obolibrary.org/obo/RO_0002456` (for the relation "pollinated by"), `http://purl.obolibrary.org/obo/RO_0002455` (for the relation "pollinates"), `https://www.inaturalist.org/observation_fields/879` (for the relation "eaten by")
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD, if applicable): not in ABCD
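To make the pairing concrete, a ResourceRelationship record carrying both the human-readable label and the proposed identifier might look like this (a hypothetical sketch; the `resourceID`/`relatedResourceID` values are invented, the relationship values are the proposal's own examples):

```python
# One ResourceRelationship record: the label stays human-readable while
# the new *ID term gives machines an unambiguous predicate.
record = {
    "resourceID": "urn:example:occurrence:1",
    "relatedResourceID": "urn:example:occurrence:2",
    "relationshipOfResource": "eaten by",
    "relationshipOfResourceID": "https://www.inaturalist.org/observation_fields/879",
}
print(record["relationshipOfResourceID"])
```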
|
1.0
|
New Term - relationshipOfResourceID - ## New term
* Submitter: Jorrit Poelen @jhpoelen
* Justification (why is this term necessary?):
From https://github.com/tdwg/dwc/issues/186#issuecomment-688445053 :
> This proposal concerns the addition of the (optional) term id ```relationshipOfResourceID``` to the existing term ```relationshipOfResource``` in the Resource Relation extension. This proposal follows a well established practice in the biodiversity / bioinformatics community to assign an identifier (e.g., http://purl.obolibrary.org/obo/RO_0002471, https://www.inaturalist.org/observation_fields/879) to a defined term along with providing human readable term labels (e.g., "is eaten by", "eaten by"). This practice of assigning identifiers to terms improves the machine readability, and re-use, of datasets.
* Proponents (at least two independent parties who need this term): The addition of this term was supported by at least two independent parties. See https://github.com/tdwg/dwc/issues/186 for details.
Proposed attributes of the new term:
* Term name (in lowerCamelCase): relationshipOfResourceID
* Organized in Class (e.g. Location, Taxon): ResourceRelationship
* Definition of the term: An identifier for the relationship type (predicate) that connects the subject identified by resourceID to its object identified by relatedResourceID.
* Usage comments (recommendations regarding content, etc.): Recommended best practice is to use the identifiers of the terms in a controlled vocabulary, such as the OBO Relation Ontology.
* Examples: `http://purl.obolibrary.org/obo/RO_0002456` (for the relation "pollinated by"), `http://purl.obolibrary.org/obo/RO_0002455` (for the relation "pollinates"), `https://www.inaturalist.org/observation_fields/879` (for the relation "eaten by")
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): None
* ABCD 2.06 (XPATH of the equivalent term in ABCD, if applicable): not in ABCD
|
process
|
new term relationshipofresourceid new term submitter jorrit poelen jhpoelen justification why is this term necessary from this proposal concerns the addition of the optional term id relationshipofresourceid to the existing term relationshipofresource in the resource relation extension this proposal follows a well established practice in the biodiversity bioinformatics community to assign an identifier e g to a defined term along with providing human readable term labels e g is eaten by eaten by this practice of assigning identifiers to terms improves the machine readability and re use of datasets proponents at least two independent parties who need this term the addition of this term was supported by at least two independent parties see for details proposed attributes of the new term term name in lowercamelcase relationshipofresourceid organized in class e g location taxon resourcerelationship definition of the term an identifier for the relationship type predicate that connects the subject identified by resourceid to its object identified by relatedresourceid usage comments recommendations regarding content etc recommended best practice is to use the identifiers of the terms in a controlled vocabulary such as the obo relation ontology examples for the relation pollinated by for the relation pollinates for the relation eaten by refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable none abcd xpath of the equivalent term in abcd if applicable not in abcd
| 1
|
485,674
| 13,997,088,422
|
IssuesEvent
|
2020-10-28 07:17:13
|
wso2/product-apim
|
https://api.github.com/repos/wso2/product-apim
|
closed
|
Unable to download SDKs of APIs which has been created by a deleted user
|
Priority/Normal Type/Bug
|
### Steps to reproduce:
1. Create a role (i.e. visiblerole) and assign that role to 'admin' user
2. Create a user (i.e user1) and assign the roles 'Internal/*' and above role to that user
3. Login to the publisher from that user and publish an API with Visibility on Store restricted by the above role
4. Now delete the user
5. Now login to API Store as a different user with the above role (ex: admin)
6. Now browse the API and try to download the API SDK
### Description:
Following exception is printed in the logs when following the above steps;
`[2020-05-19 09:37:13,188] ERROR - APIUtil Error while retrieving OpenAPI v2.0 or v3.0.0 Definition for Sample-1.0.0
org.wso2.carbon.registry.core.secure.AuthorizationFailedException: User user1 is not authorized to read the resource /_system/governance/apimgt/applicationdata/provider/user1/Sample/1.0.0/swagger.json.
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.get(CacheBackedRegistry.java:195)
at org.wso2.carbon.registry.core.session.UserRegistry.getInternal(UserRegistry.java:617)
at org.wso2.carbon.registry.core.session.UserRegistry.access$400(UserRegistry.java:61)
at org.wso2.carbon.registry.core.session.UserRegistry$5.run(UserRegistry.java:597)
at org.wso2.carbon.registry.core.session.UserRegistry$5.run(UserRegistry.java:594)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.get(UserRegistry.java:594)
at org.wso2.carbon.registry.core.session.UserRegistry.get(UserRegistry.java:61)
at org.wso2.carbon.apimgt.impl.definitions.APIDefinitionFromOpenAPISpec.getAPIDefinition_aroundBody6(APIDefinitionFromOpenAPISpec.java:257)
at org.wso2.carbon.apimgt.impl.definitions.APIDefinitionFromOpenAPISpec.getAPIDefinition(APIDefinitionFromOpenAPISpec.java:249)
at org.wso2.carbon.apimgt.impl.APIClientGenerationManager.generateSDK_aroundBody0(APIClientGenerationManager.java:139)
at org.wso2.carbon.apimgt.impl.APIClientGenerationManager.generateSDK(APIClientGenerationManager.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:126)
at org.mozilla.javascript.NativeJavaMethod.call(NativeJavaMethod.java:225)
at org.mozilla.javascript.optimizer.OptRuntime.callN(OptRuntime.java:52)
at org.jaggeryjs.rhino.store.modules.sdk.c1._c_anonymous_1(/store/modules/sdk/generate.jag:10)
at org.jaggeryjs.rhino.store.modules.sdk.c1.call(/store/modules/sdk/generate.jag)
at org.mozilla.javascript.ScriptRuntime.applyOrCall(ScriptRuntime.java:2430)
at org.mozilla.javascript.BaseFunction.execIdCall(BaseFunction.java:269)
at org.mozilla.javascript.IdFunctionObject.call(IdFunctionObject.java:97)
at org.mozilla.javascript.optimizer.OptRuntime.call2(OptRuntime.java:42)
at org.jaggeryjs.rhino.store.modules.sdk.c0._c_anonymous_1(/store/modules/sdk/module.jag:4)
at org.jaggeryjs.rhino.store.modules.sdk.c0.call(/store/modules/sdk/module.jag)
at org.mozilla.javascript.optimizer.OptRuntime.callN(OptRuntime.java:52)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0._c_anonymous_1(/store/site/blocks/sdk/ajax/sdk-create.jag:89)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.call(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.mozilla.javascript.optimizer.OptRuntime.call0(OptRuntime.java:23)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0._c_script_0(/store/site/blocks/sdk/ajax/sdk-create.jag:3)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.call(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:394)
at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3091)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.call(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.exec(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.execScript(RhinoEngine.java:567)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.exec(RhinoEngine.java:273)
at org.jaggeryjs.jaggery.core.manager.WebAppManager.exec(WebAppManager.java:588)
at org.jaggeryjs.jaggery.core.manager.WebAppManager.execute(WebAppManager.java:508)
at org.jaggeryjs.jaggery.core.JaggeryServlet.doGet(JaggeryServlet.java:24)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:624)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:743)
at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:485)
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:377)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:337)
at org.jaggeryjs.jaggery.core.JaggeryFilter.doFilter(JaggeryFilter.java:21)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.cache.ContentTypeBasedCachePreventionFilter.doFilter(ContentTypeBasedCachePreventionFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:494)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:80)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:100)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:65)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
at org.wso2.carbon.event.receiver.core.internal.tenantmgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:48)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:1025)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:445)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1137)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1775)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1734)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
[2020-05-19 09:37:13,190] ERROR - APIClientGenerationManager Error loading swagger file for API Sample from registry.
org.wso2.carbon.apimgt.api.APIManagementException: Error while retrieving OpenAPI v2.0 or v3.0.0 Definition for Sample-1.0.0
at org.wso2.carbon.apimgt.impl.utils.APIUtil.handleException_aroundBody58(APIUtil.java:1545)
at org.wso2.carbon.apimgt.impl.utils.APIUtil.handleException(APIUtil.java:1543)
at org.wso2.carbon.apimgt.impl.definitions.APIDefinitionFromOpenAPISpec.getAPIDefinition_aroundBody6(APIDefinitionFromOpenAPISpec.java:266)
at org.wso2.carbon.apimgt.impl.definitions.APIDefinitionFromOpenAPISpec.getAPIDefinition(APIDefinitionFromOpenAPISpec.java:249)
at org.wso2.carbon.apimgt.impl.APIClientGenerationManager.generateSDK_aroundBody0(APIClientGenerationManager.java:139)
at org.wso2.carbon.apimgt.impl.APIClientGenerationManager.generateSDK(APIClientGenerationManager.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Unable to download SDKs of APIs which have been created by a deleted user

### Steps to reproduce:
1. Create a role (e.g. visiblerole) and assign that role to the 'admin' user
2. Create a user (e.g. user1) and assign the 'Internal/*' roles and the above role to that user
3. Log in to the Publisher as that user and publish an API with its Store visibility restricted to the above role
4. Now delete the user
5. Now log in to the API Store as a different user who has the above role (e.g. admin)
6. Now browse to the API and try to download its SDK
### Description:
The following exception is printed in the logs when the above steps are followed:
[2020-05-19 09:37:13,188] ERROR - APIUtil Error while retrieving OpenAPI v2.0 or v3.0.0 Definition for Sample-1.0.0
org.wso2.carbon.registry.core.secure.AuthorizationFailedException: User user1 is not authorized to read the resource /_system/governance/apimgt/applicationdata/provider/user1/Sample/1.0.0/swagger.json.
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.get(CacheBackedRegistry.java:195)
at org.wso2.carbon.registry.core.session.UserRegistry.getInternal(UserRegistry.java:617)
at org.wso2.carbon.registry.core.session.UserRegistry.access$400(UserRegistry.java:61)
at org.wso2.carbon.registry.core.session.UserRegistry$5.run(UserRegistry.java:597)
at org.wso2.carbon.registry.core.session.UserRegistry$5.run(UserRegistry.java:594)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.get(UserRegistry.java:594)
at org.wso2.carbon.registry.core.session.UserRegistry.get(UserRegistry.java:61)
at org.wso2.carbon.apimgt.impl.definitions.APIDefinitionFromOpenAPISpec.getAPIDefinition_aroundBody6(APIDefinitionFromOpenAPISpec.java:257)
at org.wso2.carbon.apimgt.impl.definitions.APIDefinitionFromOpenAPISpec.getAPIDefinition(APIDefinitionFromOpenAPISpec.java:249)
at org.wso2.carbon.apimgt.impl.APIClientGenerationManager.generateSDK_aroundBody0(APIClientGenerationManager.java:139)
at org.wso2.carbon.apimgt.impl.APIClientGenerationManager.generateSDK(APIClientGenerationManager.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:126)
at org.mozilla.javascript.NativeJavaMethod.call(NativeJavaMethod.java:225)
at org.mozilla.javascript.optimizer.OptRuntime.callN(OptRuntime.java:52)
at org.jaggeryjs.rhino.store.modules.sdk.c1._c_anonymous_1(/store/modules/sdk/generate.jag:10)
at org.jaggeryjs.rhino.store.modules.sdk.c1.call(/store/modules/sdk/generate.jag)
at org.mozilla.javascript.ScriptRuntime.applyOrCall(ScriptRuntime.java:2430)
at org.mozilla.javascript.BaseFunction.execIdCall(BaseFunction.java:269)
at org.mozilla.javascript.IdFunctionObject.call(IdFunctionObject.java:97)
at org.mozilla.javascript.optimizer.OptRuntime.call2(OptRuntime.java:42)
at org.jaggeryjs.rhino.store.modules.sdk.c0._c_anonymous_1(/store/modules/sdk/module.jag:4)
at org.jaggeryjs.rhino.store.modules.sdk.c0.call(/store/modules/sdk/module.jag)
at org.mozilla.javascript.optimizer.OptRuntime.callN(OptRuntime.java:52)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0._c_anonymous_1(/store/site/blocks/sdk/ajax/sdk-create.jag:89)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.call(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.mozilla.javascript.optimizer.OptRuntime.call0(OptRuntime.java:23)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0._c_script_0(/store/site/blocks/sdk/ajax/sdk-create.jag:3)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.call(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:394)
at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3091)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.call(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.exec(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.execScript(RhinoEngine.java:567)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.exec(RhinoEngine.java:273)
at org.jaggeryjs.jaggery.core.manager.WebAppManager.exec(WebAppManager.java:588)
at org.jaggeryjs.jaggery.core.manager.WebAppManager.execute(WebAppManager.java:508)
at org.jaggeryjs.jaggery.core.JaggeryServlet.doGet(JaggeryServlet.java:24)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:624)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:743)
at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:485)
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:377)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:337)
at org.jaggeryjs.jaggery.core.JaggeryFilter.doFilter(JaggeryFilter.java:21)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.cache.ContentTypeBasedCachePreventionFilter.doFilter(ContentTypeBasedCachePreventionFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:494)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:80)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:100)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:65)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
at org.wso2.carbon.event.receiver.core.internal.tenantmgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:48)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:1025)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:445)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1137)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1775)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1734)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
[2020-05-19 09:37:13,190] ERROR - APIClientGenerationManager Error loading swagger file for API Sample from registry.
org.wso2.carbon.apimgt.api.APIManagementException: Error while retrieving OpenAPI v2.0 or v3.0.0 Definition for Sample-1.0.0
at org.wso2.carbon.apimgt.impl.utils.APIUtil.handleException_aroundBody58(APIUtil.java:1545)
at org.wso2.carbon.apimgt.impl.utils.APIUtil.handleException(APIUtil.java:1543)
at org.wso2.carbon.apimgt.impl.definitions.APIDefinitionFromOpenAPISpec.getAPIDefinition_aroundBody6(APIDefinitionFromOpenAPISpec.java:266)
at org.wso2.carbon.apimgt.impl.definitions.APIDefinitionFromOpenAPISpec.getAPIDefinition(APIDefinitionFromOpenAPISpec.java:249)
at org.wso2.carbon.apimgt.impl.APIClientGenerationManager.generateSDK_aroundBody0(APIClientGenerationManager.java:139)
at org.wso2.carbon.apimgt.impl.APIClientGenerationManager.generateSDK(APIClientGenerationManager.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:126)
at org.mozilla.javascript.NativeJavaMethod.call(NativeJavaMethod.java:225)
at org.mozilla.javascript.optimizer.OptRuntime.callN(OptRuntime.java:52)
at org.jaggeryjs.rhino.store.modules.sdk.c1._c_anonymous_1(/store/modules/sdk/generate.jag:10)
at org.jaggeryjs.rhino.store.modules.sdk.c1.call(/store/modules/sdk/generate.jag)
at org.mozilla.javascript.ScriptRuntime.applyOrCall(ScriptRuntime.java:2430)
at org.mozilla.javascript.BaseFunction.execIdCall(BaseFunction.java:269)
at org.mozilla.javascript.IdFunctionObject.call(IdFunctionObject.java:97)
at org.mozilla.javascript.optimizer.OptRuntime.call2(OptRuntime.java:42)
at org.jaggeryjs.rhino.store.modules.sdk.c0._c_anonymous_1(/store/modules/sdk/module.jag:4)
at org.jaggeryjs.rhino.store.modules.sdk.c0.call(/store/modules/sdk/module.jag)
at org.mozilla.javascript.optimizer.OptRuntime.callN(OptRuntime.java:52)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0._c_anonymous_1(/store/site/blocks/sdk/ajax/sdk-create.jag:89)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.call(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.mozilla.javascript.optimizer.OptRuntime.call0(OptRuntime.java:23)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0._c_script_0(/store/site/blocks/sdk/ajax/sdk-create.jag:3)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.call(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:394)
at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3091)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.call(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.jaggeryjs.rhino.store.site.blocks.sdk.ajax.c0.exec(/store/site/blocks/sdk/ajax/sdk-create.jag)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.execScript(RhinoEngine.java:567)
at org.jaggeryjs.scriptengine.engine.RhinoEngine.exec(RhinoEngine.java:273)
at org.jaggeryjs.jaggery.core.manager.WebAppManager.exec(WebAppManager.java:588)
at org.jaggeryjs.jaggery.core.manager.WebAppManager.execute(WebAppManager.java:508)
at org.jaggeryjs.jaggery.core.JaggeryServlet.doGet(JaggeryServlet.java:24)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:624)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:743)
at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:485)
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:377)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:337)
at org.jaggeryjs.jaggery.core.JaggeryFilter.doFilter(JaggeryFilter.java:21)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.wso2.carbon.ui.filters.cache.ContentTypeBasedCachePreventionFilter.doFilter(ContentTypeBasedCachePreventionFilter.java:53)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.filters.HttpHeaderSecurityFilter.doFilter(HttpHeaderSecurityFilter.java:126)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:110)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:494)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:104)
at org.wso2.carbon.identity.context.rewrite.valve.TenantContextRewriteValve.invoke(TenantContextRewriteValve.java:80)
at org.wso2.carbon.identity.authz.valve.AuthorizationValve.invoke(AuthorizationValve.java:100)
at org.wso2.carbon.identity.auth.valve.AuthenticationValve.invoke(AuthenticationValve.java:65)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:99)
at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:57)
at org.wso2.carbon.event.receiver.core.internal.tenantmgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:48)
at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:62)
at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:159)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:1025)
at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:57)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:445)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1137)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:637)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1775)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1734)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.wso2.carbon.registry.core.secure.AuthorizationFailedException: User user1 is not authorized to read the resource /_system/governance/apimgt/applicationdata/provider/user1/Sample/1.0.0/swagger.json.
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.get(CacheBackedRegistry.java:195)
at org.wso2.carbon.registry.core.session.UserRegistry.getInternal(UserRegistry.java:617)
at org.wso2.carbon.registry.core.session.UserRegistry.access$400(UserRegistry.java:61)
at org.wso2.carbon.registry.core.session.UserRegistry$5.run(UserRegistry.java:597)
at org.wso2.carbon.registry.core.session.UserRegistry$5.run(UserRegistry.java:594)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.get(UserRegistry.java:594)
at org.wso2.carbon.registry.core.session.UserRegistry.get(UserRegistry.java:61)
at org.wso2.carbon.apimgt.impl.definitions.APIDefinitionFromOpenAPISpec.getAPIDefinition_aroundBody6(APIDefinitionFromOpenAPISpec.java:257)
... 77 more
[2020-05-19 09:37:13,193] ERROR - sdk-create:jag org.wso2.carbon.apimgt.impl.APIClientGenerationException: Error loading swagger file for API Sample from registry.
### Affected Product Version:
WSO2AM-2.6.0
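For context on why the deleted provider still matters here: the failing `AuthorizationFailedException` shows that the swagger definition is stored under a governance registry path keyed by the API provider's username, so the deleted user (user1) remains embedded in the resource path that the authorization check runs against. A minimal sketch (illustrative Python, not WSO2 code) of how that path is composed:

```python
# Illustrative only: reconstructs the registry resource path seen in the
# error logs, showing that the (now deleted) provider's username is part
# of the path being authorized.

def swagger_registry_path(provider: str, api_name: str, version: str) -> str:
    """Build the governance registry path for an API's swagger definition."""
    return (
        "/_system/governance/apimgt/applicationdata/provider/"
        f"{provider}/{api_name}/{version}/swagger.json"
    )

# The path from the stack trace above:
print(swagger_registry_path("user1", "Sample", "1.0.0"))
```

This matches the resource in the log line, which suggests the SDK generation code resolves the definition via the provider recorded in the API identifier rather than via the requesting user's own privileges.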
coyoteadapter service coyoteadapter java at org apache coyote process java at org apache coyote abstractprotocol abstractconnectionhandler process abstractprotocol java at org apache tomcat util net nioendpoint socketprocessor dorun nioendpoint java at org apache tomcat util net nioendpoint socketprocessor run nioendpoint java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at org apache tomcat util threads taskthread wrappingrunnable run taskthread java at java lang thread run thread java caused by org carbon registry core secure authorizationfailedexception user is not authorized to read the resource system governance apimgt applicationdata provider sample swagger json at org carbon registry core caching cachebackedregistry get cachebackedregistry java at org carbon registry core session userregistry getinternal userregistry java at org carbon registry core session userregistry access userregistry java at org carbon registry core session userregistry run userregistry java at org carbon registry core session userregistry run userregistry java at java security accesscontroller doprivileged native method at org carbon registry core session userregistry get userregistry java at org carbon registry core session userregistry get userregistry java at org carbon apimgt impl definitions apidefinitionfromopenapispec getapidefinition apidefinitionfromopenapispec java more error sdk create jag org carbon apimgt impl apiclientgenerationexception error loading swagger file for api sample from registry affected product version
| 0
|
50,430
| 3,006,393,819
|
IssuesEvent
|
2015-07-27 10:05:15
|
Itseez/opencv
|
https://api.github.com/repos/Itseez/opencv
|
opened
|
cv::ocl::OclCascadeClassifierBuf throws an exception for some combinations of flags
|
affected: 2.4 auto-transferred bug category: ocl priority: normal
|
Transferred from http://code.opencv.org/issues/3452
```
|| Evgeniy Badaev on 2013-12-23 07:47
|| Priority: Normal
|| Affected: branch '2.4' (2.4-dev)
|| Category: ocl
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Any
```
cv::ocl::OclCascadeClassifierBuf throws an exception for some combinations of flags
-----------
```
'detectMultiScale' method fails with an exception whenever 'flags' argument contains both 'CV_HAAR_SCALE_IMAGE' and 'CV_HAAR_FIND_BIGGEST_OBJECT'.
Exception originates from the following statement due to an empty 'gimg1':
<pre>
resizeroi = gimg1(roi2);
</pre>
The issue is caused by an inconsistent 'flags' handling in 'detectMultiScale' and 'Init' methods: 'Init' method strips 'CV_HAAR_SCALE_IMAGE' flags whenever 'CV_HAAR_FIND_BIGGEST_OBJECT' is present, whereas 'detectMultiScale' method contains no such handling for 'flags'.
IMO this could be fixed by moving the following statements away from 'Init' method into 'detectMultiScale' method (right before the call to 'Init'):
<pre>
findBiggestObject = (flags & CV_HAAR_FIND_BIGGEST_OBJECT) != 0;
if( findBiggestObject )
flags &= ~(CV_HAAR_SCALE_IMAGE | CV_HAAR_DO_CANNY_PRUNING);
</pre>
```
History
-------
##### Evgeniy Badaev on 2013-12-23 08:16
```
An alternative solution would be to replace 'flags' with 'm_flags' in the following statement of 'detectMultiScale' method:
<pre> if( (flags & CV_HAAR_SCALE_IMAGE) )</pre>
```
##### Evgeny Talanin on 2013-12-27 09:13
```
Thanks, Evgeniy!
Could you submit a pull request containing your modifications to our github repo as described in http://code.opencv.org/projects/opencv/wiki/How_to_contribute? This will really help since fix will be automatically tested by our build system.
Of course you can also implement a test to prove your suggestion is correct and add it to your pull request as well.
- Assignee set to Evgeniy Badaev
```
##### Ilya Lavrenov on 2013-12-28 11:18
```
- Category set to ocl
```
##### Anna Kogan on 2014-01-13 09:23
```
- Affected version changed from 2.4.7 (latest release) to branch '2.4'
(2.4-dev)
- Status changed from New to Open
```
|
1.0
|
cv::ocl::OclCascadeClassifierBuf throws an exception for some combinations of flags - Transferred from http://code.opencv.org/issues/3452
```
|| Evgeniy Badaev on 2013-12-23 07:47
|| Priority: Normal
|| Affected: branch '2.4' (2.4-dev)
|| Category: ocl
|| Tracker: Bug
|| Difficulty:
|| PR:
|| Platform: Any / Any
```
cv::ocl::OclCascadeClassifierBuf throws an exception for some combinations of flags
-----------
```
'detectMultiScale' method fails with an exception whenever 'flags' argument contains both 'CV_HAAR_SCALE_IMAGE' and 'CV_HAAR_FIND_BIGGEST_OBJECT'.
Exception originates from the following statement due to an empty 'gimg1':
<pre>
resizeroi = gimg1(roi2);
</pre>
The issue is caused by an inconsistent 'flags' handling in 'detectMultiScale' and 'Init' methods: 'Init' method strips 'CV_HAAR_SCALE_IMAGE' flags whenever 'CV_HAAR_FIND_BIGGEST_OBJECT' is present, whereas 'detectMultiScale' method contains no such handling for 'flags'.
IMO this could be fixed by moving the following statements away from 'Init' method into 'detectMultiScale' method (right before the call to 'Init'):
<pre>
findBiggestObject = (flags & CV_HAAR_FIND_BIGGEST_OBJECT) != 0;
if( findBiggestObject )
flags &= ~(CV_HAAR_SCALE_IMAGE | CV_HAAR_DO_CANNY_PRUNING);
</pre>
```
History
-------
##### Evgeniy Badaev on 2013-12-23 08:16
```
An alternative solution would be to replace 'flags' with 'm_flags' in the following statement of 'detectMultiScale' method:
<pre> if( (flags & CV_HAAR_SCALE_IMAGE) )</pre>
```
##### Evgeny Talanin on 2013-12-27 09:13
```
Thanks, Evgeniy!
Could you submit a pull request containing your modifications to our github repo as described in http://code.opencv.org/projects/opencv/wiki/How_to_contribute? This will really help since fix will be automatically tested by our build system.
Of course you can also implement a test to prove your suggestion is correct and add it to your pull request as well.
- Assignee set to Evgeniy Badaev
```
##### Ilya Lavrenov on 2013-12-28 11:18
```
- Category set to ocl
```
##### Anna Kogan on 2014-01-13 09:23
```
- Affected version changed from 2.4.7 (latest release) to branch '2.4'
(2.4-dev)
- Status changed from New to Open
```
|
non_process
|
cv ocl oclcascadeclassifierbuf throws an exception for some combinations of flags transferred from evgeniy badaev on priority normal affected branch dev category ocl tracker bug difficulty pr platform any any cv ocl oclcascadeclassifierbuf throws an exception for some combinations of flags detectmultiscale method fails with an exception whenever flags argument contains both cv haar scale image and cv haar find biggest object exception originates from the following statement due to an empty resizeroi the issue is caused by an inconsistent flags handling in detectmultiscale and init methods init method strips cv haar scale image flags whenever cv haar find biggest object is present whereas detectmultiscale method contains no such handling for flags imo this could be fixed by moving the following statements away from init method into detectmultiscale method right before the call to init findbiggestobject flags cv haar find biggest object if findbiggestobject flags cv haar scale image cv haar do canny pruning history evgeniy badaev on an alternative solution would be to replace flags with m flags in the following statement of detectmultiscale method if flags cv haar scale image evgeny talanin on thanks evgeniy could нou submit a pull request containing your modifications to our github repo as described in this will really help since fix will be automatically tested by our build system of course you can also implement a test to prove your suggestion is correct and add it to your pull request as well assignee set to evgeniy badaev ilya lavrenov on category set to ocl anna kogan on affected version changed from latest release to branch dev status changed from new to open
| 0
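The OpenCV row above turns on an inconsistency in bit-flag handling: `Init` strips `CV_HAAR_SCALE_IMAGE` when `CV_HAAR_FIND_BIGGEST_OBJECT` is set, while `detectMultiScale` does not. The proposed fix (normalize the flags once, before `Init` is called) can be sketched as a minimal bit-masking example; the numeric flag values below are hypothetical stand-ins, the real `CV_HAAR_*` constants live in OpenCV's C++ headers:

```python
# Hypothetical values for illustration only; the real CV_HAAR_* constants
# are defined in OpenCV's C++ headers.
CV_HAAR_DO_CANNY_PRUNING = 1
CV_HAAR_SCALE_IMAGE = 2
CV_HAAR_FIND_BIGGEST_OBJECT = 4

def normalize_flags(flags: int) -> int:
    """Strip flags that conflict with CV_HAAR_FIND_BIGGEST_OBJECT,
    mirroring the fix proposed in the issue: do it once, up front,
    so both code paths see the same flag value."""
    if flags & CV_HAAR_FIND_BIGGEST_OBJECT:
        flags &= ~(CV_HAAR_SCALE_IMAGE | CV_HAAR_DO_CANNY_PRUNING)
    return flags

# With both flags set, SCALE_IMAGE is stripped and only
# FIND_BIGGEST_OBJECT survives:
print(normalize_flags(CV_HAAR_SCALE_IMAGE | CV_HAAR_FIND_BIGGEST_OBJECT))  # → 4
```

Doing the masking in a single shared place is exactly what removes the divergence between the two methods.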
|
327,178
| 28,046,105,900
|
IssuesEvent
|
2023-03-28 23:07:22
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
reopened
|
Fix trigonometric_functions.test_numpy_arctan
|
NumPy Frontend Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4508140756/jobs/7936571840" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4508140756/jobs/7936571840" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4508140756/jobs/7936571840" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4548212039/jobs/8019010759" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>Not found</summary>
Not found
</details>
|
1.0
|
Fix trigonometric_functions.test_numpy_arctan - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/4508140756/jobs/7936571840" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/4508140756/jobs/7936571840" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/4508140756/jobs/7936571840" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/4548212039/jobs/8019010759" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
<details>
<summary>Not found</summary>
Not found
</details>
|
non_process
|
fix trigonometric functions test numpy arctan tensorflow img src torch img src numpy img src jax img src not found not found
| 0
|
47,674
| 10,138,296,612
|
IssuesEvent
|
2019-08-02 17:34:25
|
mozilla-mobile/android-components
|
https://api.github.com/repos/mozilla-mobile/android-components
|
closed
|
Configure LongMethod rule to be less strict.
|
⌨️ code 🔧 tooling
|
The default is 60:
https://github.com/arturbosch/detekt/blob/f98ccac9aa352805621dd6a19888bb472d776cef/detekt-rules/src/main/kotlin/io/gitlab/arturbosch/detekt/rules/complexity/LongMethod.kt#L24
We somehow use 20:
https://github.com/mozilla-mobile/android-components/blob/master/config/detekt.yml#L91
We hit that quite often and it feels wrong quite often. Let's raise it to something higher - up to 60. Also let's try to get rid of the `@Supress("LongMethod")` code that doesn't need that anymore.
|
1.0
|
Configure LongMethod rule to be less strict. - The default is 60:
https://github.com/arturbosch/detekt/blob/f98ccac9aa352805621dd6a19888bb472d776cef/detekt-rules/src/main/kotlin/io/gitlab/arturbosch/detekt/rules/complexity/LongMethod.kt#L24
We somehow use 20:
https://github.com/mozilla-mobile/android-components/blob/master/config/detekt.yml#L91
We hit that quite often and it feels wrong quite often. Let's raise it to something higher - up to 60. Also let's try to get rid of the `@Supress("LongMethod")` code that doesn't need that anymore.
|
non_process
|
configure longmethod rule to be less strict the default is we somehow use we hit that quite often and it feels wrong quite often let s raise it to something higher up to also let s try to get rid of the supress longmethod code that doesn t need that anymore
| 0
|
341,120
| 30,567,096,081
|
IssuesEvent
|
2023-07-20 18:40:01
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
DISABLED test_inline_dict_mutation (__main__.MiscTests)
|
triaged module: flaky-tests skipped module: dynamo
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inline_dict_mutation&suite=MiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15204809608).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inline_dict_mutation`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_misc.py`
|
1.0
|
DISABLED test_inline_dict_mutation (__main__.MiscTests) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_inline_dict_mutation&suite=MiscTests) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/15204809608).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_inline_dict_mutation`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `dynamo/test_misc.py`
|
non_process
|
disabled test inline dict mutation main misctests platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test inline dict mutation there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path dynamo test misc py
| 0
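The triage steps in the PyTorch row above (expand the Test step, then grep the logs for the test name, then study each rerun) amount to a simple pattern scan over log text. A rough Python equivalent, using a made-up log snippet purely for illustration:

```python
import re

def find_test_runs(log_text: str, test_name: str) -> list[str]:
    """Collect log lines mentioning the flaky test, mimicking the
    'Grep for test_inline_dict_mutation' step from the issue body."""
    pattern = re.compile(re.escape(test_name))
    return [line for line in log_text.splitlines() if pattern.search(line)]

# Hypothetical log excerpt; flaky tests are rerun, so the same test
# name appears several times with different outcomes.
log = (
    "PASSED dynamo/test_misc.py::MiscTests::test_inline_dict_mutation\n"
    "FAILED dynamo/test_misc.py::MiscTests::test_inline_dict_mutation\n"
    "PASSED dynamo/test_misc.py::MiscTests::test_other\n"
)
for line in find_test_runs(log, "test_inline_dict_mutation"):
    print(line)
```

Comparing the matched lines across reruns is what distinguishes a genuinely flaky test (mixed outcomes) from a consistently broken one.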
|
9,840
| 12,834,378,244
|
IssuesEvent
|
2020-07-07 10:56:59
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
opened
|
GO:0140418 effector-mediated modulation of host process by symbiont (term positioning)
|
multi-species process parent relationship query
|
GO:0140418 effector-mediated modulation of host process by symbiont
should be a parent of
GO:0140415 effector-mediated modulation of host defenses by symbiont
(currently it seems to be a child of
GO:0140415 effector-mediated modulation of host defenses by symbiont
so it's the wrong way around
(a symbiont can modulate host processes that are not defenses)
This is high priority for PHIbase as they will be using the parent term internally in their new website to identify all 'pathogen effectors'
@CuzickA
|
1.0
|
GO:0140418 effector-mediated modulation of host process by symbiont (term positioning) -
GO:0140418 effector-mediated modulation of host process by symbiont
should be a parent of
GO:0140415 effector-mediated modulation of host defenses by symbiont
(currently it seems to be a child of
GO:0140415 effector-mediated modulation of host defenses by symbiont
so it's the wrong way around
(a symbiont can modulate host processes that are not defenses)
This is high priority for PHIbase as they will be using the parent term internally in their new website to identify all 'pathogen effectors'
@CuzickA
|
process
|
go effector mediated modulation of host process by symbiont term positioning go effector mediated modulation of host process by symbiont should be a parent of go effector mediated modulation of host defenses by symbiont currently it seems to be a child of go effector mediated modulation of host defenses by symbiont so it s the wrong way around a symbiont can modulate host processes that are not defenses this is high priority for phibase as they will using the parent term internally in their new website to identify all pathogen effectors cuzicka
| 1

|
169,154
| 14,199,742,482
|
IssuesEvent
|
2020-11-16 03:18:06
|
XuhuiZhou/CDA
|
https://api.github.com/repos/XuhuiZhou/CDA
|
opened
|
How to obtain the position file for pla related test script?
|
documentation
|
If you are using our corpora:
use cite_pos_s.csv file included in our dataset
If you are using your own corpora:
you need to generate them yourself.
|
1.0
|
How to obtain the position file for pla related test script? - If you are using our corpora:
use cite_pos_s.csv file included in our dataset
If you are using your own corpora:
you need to generate them yourself.
|
non_process
|
how to obtain the position file for pla related test script if you are using our corpora use cite pos s csv file included in our dataset if you are using your own corpora you need to generate them yourself
| 0
|
75,316
| 9,221,113,379
|
IssuesEvent
|
2019-03-11 19:08:37
|
publiclab/mapknitter
|
https://api.github.com/repos/publiclab/mapknitter
|
closed
|
collapse login dropdown with map layers dropdown icon
|
bug design help wanted
|
### What happened just before the problem occurred

### Relevant URLs
https://mapknitter.org/#4/47.95/-13.23
### PublicLab.org username
singhav
(to help reproduce the issue)
### Browser, version, and operating system
chrome ubuntu 18.04 LTS
For bug reports, fill out the above template; for feature requests, you can delete the template.
|
1.0
|
collapse login dropdown with map layers dropdown icon - ### What happened just before the problem occurred

### Relevant URLs
https://mapknitter.org/#4/47.95/-13.23
### PublicLab.org username
singhav
(to help reproduce the issue)
### Browser, version, and operating system
chrome ubuntu 18.04 LTS
For bug reports, fill out the above template; for feature requests, you can delete the template.
|
non_process
|
collapse login dropdown with map layers dropdown icon what happened just before the problem occurred relevant urls publiclab org username singhav to help reproduce the issue browser version and operating system chrome ubuntu lts for bug reports fill out the above template for feature requests you can delete the template
| 0
|
363,472
| 25,453,042,045
|
IssuesEvent
|
2022-11-24 12:01:48
|
Mischback/mailsrv
|
https://api.github.com/repos/Mischback/mailsrv
|
closed
|
Dovecot Configuration
|
area/documentation meta/wontfix
|
## Relevant ``Dovecot`` settings
- [x] ``10-auth.conf``
- [x] ``auth_mechanism``: ``Outlook`` requires ``login``, ``plain`` is the default
- [x] review the different authentication backends with ``Dovecot``'s documentation
- [x] ``10-mail.conf``
- [x] ``mail_location``
- guide suggests ``maildir:~/Maildir``, but as we are (mainly) concerned about virtual users, how does this work?
- [x] ``mail_plugins``: add ``quota`` to manage the size of users' mailboxes
- [x] ``10-master.conf``
- [x] provide the authentication socket required by ``Postfix``:
```
# Postfix smtp-auth
unix_listener /var/spool/postfix/private/auth {
mode = 0660
user = postfix
group = postfix
}
```
- [x] create the ``lmtp`` socket for ``Postfix`` (might already be there, configure accordingly)
```
service lmtp {
unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0600
user = postfix
}
}
```
- [x] review other settings here!
- [x] ``10-ssl.conf``
- [x] ``ssl_cert`` and ``ssl_key``: Point to the respective files
- [x] **IMPORTANT** syntax is like ``ssl_cert = </etc/letsencrypt/live/webmail.example.org/fullchain.pem``, making the content of the certificate available!
- ~~``20-lmtp.conf``~~
- ~~if *server side* ``sieve`` should be applied, activate the plugin here (needs packet installation)~~
- tracked in #21
- [x] ``90-quota.conf``
- [x] Provide the actual mailbox quota
```
plugin {
quota = maildir:User quota
quota_status_success = DUNNO
quota_status_nouser = DUNNO
quota_status_overquota = "452 4.2.2 Mailbox is full and cannot receive any more emails"
}
```
- [x] provide the endpoint (socket) for ``Postfix``
```
service quota-status {
executable = /usr/lib/dovecot/quota-status -p postfix
unix_listener /var/spool/postfix/private/quota-status {
user = postfix
}
}
```
- [x] Why is the definition of ``unix_listener`` lacking ``mode`` and ``group`` settings, as they are present in the other socket definitions?
- [x] the actual ``quota`` must be provided by the ``user_db``; needs investigation!
- [x] Enable warnings about quotas:
```
plugin {
quota_warning = storage=95%% quota-warning 95 %u
quota_warning2 = storage=80%% quota-warning 80 %u
}
service quota-warning {
executable = script /usr/local/bin/quota-warning.sh # Adjust script path!
unix_listener quota-warning {
user = vmail
group = vmail
mode = 0660
}
}
```
- [x] The referenced script to send the users emails about their quotas:
```
#!/bin/sh
PERCENT=$1
USER=$2
cat << EOF | /usr/lib/dovecot/dovecot-lda -d $USER -o "plugin/quota=maildir:User quota:noenforcing"
From: postmaster@webmail.example.org
Subject: Quota warning - $PERCENT% reached
Your mailbox can only store a limited amount of emails.
Currently it is $PERCENT% full. If you reach 100% then
new emails cannot be stored. Thanks for your understanding.
EOF
```
- [x] **Important** somewhere is the ``user_db`` setting. This needs careful attention, because my setup will not rely on SQL as backend (deviating from the tutorial!); see https://doc.dovecot.org/configuration_manual/authentication/user_databases_userdb/ and https://doc.dovecot.org/configuration_manual/authentication/passwd_file/#authentication-passwd-file
|
1.0
|
Dovecot Configuration - ## Relevant ``Dovecot`` settings
- [x] ``10-auth.conf``
- [x] ``auth_mechanism``: ``Outlook`` requires ``login``, ``plain`` is the default
- [x] review the different authentication backends with ``Dovecot``'s documentation
- [x] ``10-mail.conf``
- [x] ``mail_location``
- guide suggests ``maildir:~/Maildir``, but as we are (mainly) concerned about virtual users, how does this work?
- [x] ``mail_plugins``: add ``quota`` to manage the size of users' mailboxes
- [x] ``10-master.conf``
- [x] provide the authentication socket required by ``Postfix``:
```
# Postfix smtp-auth
unix_listener /var/spool/postfix/private/auth {
mode = 0660
user = postfix
group = postfix
}
```
- [x] create the ``lmtp`` socket for ``Postfix`` (might already be there, configure accordingly)
```
service lmtp {
unix_listener /var/spool/postfix/private/dovecot-lmtp {
group = postfix
mode = 0600
user = postfix
}
}
```
- [x] review other settings here!
- [x] ``10-ssl.conf``
- [x] ``ssl_cert`` and ``ssl_key``: Point to the respective files
- [x] **IMPORTANT** syntax is like ``ssl_cert = </etc/letsencrypt/live/webmail.example.org/fullchain.pem``, making the content of the certificate available!
- ~~``20-lmtp.conf``~~
- ~~if *server side* ``sieve`` should be applied, activate the plugin here (needs packet installation)~~
- tracked in #21
- [x] ``90-quota.conf``
- [x] Provide the actual mailbox quota
```
plugin {
quota = maildir:User quota
quota_status_success = DUNNO
quota_status_nouser = DUNNO
quota_status_overquota = "452 4.2.2 Mailbox is full and cannot receive any more emails"
}
```
- [x] provide the endpoint (socket) for ``Postfix``
```
service quota-status {
executable = /usr/lib/dovecot/quota-status -p postfix
unix_listener /var/spool/postfix/private/quota-status {
user = postfix
}
}
```
- [x] Why is the definition of ``unix_listener`` lacking ``mode`` and ``group`` settings, as they are present in the other socket definitions?
- [x] the actual ``quota`` must be provided by the ``user_db``; needs investigation!
- [x] Enable warnings about quotas:
```
plugin {
quota_warning = storage=95%% quota-warning 95 %u
quota_warning2 = storage=80%% quota-warning 80 %u
}
service quota-warning {
executable = script /usr/local/bin/quota-warning.sh # Adjust script path!
unix_listener quota-warning {
user = vmail
group = vmail
mode = 0660
}
}
```
- [x] The referenced script to send the users emails about their quotas:
```
#!/bin/sh
PERCENT=$1
USER=$2
cat << EOF | /usr/lib/dovecot/dovecot-lda -d $USER -o "plugin/quota=maildir:User quota:noenforcing"
From: postmaster@webmail.example.org
Subject: Quota warning - $PERCENT% reached
Your mailbox can only store a limited amount of emails.
Currently it is $PERCENT% full. If you reach 100% then
new emails cannot be stored. Thanks for your understanding.
EOF
```
- [x] **Important** somewhere is the ``user_db`` setting. This needs careful attention, because my setup will not rely on SQL as backend (deviating from the tutorial!); see https://doc.dovecot.org/configuration_manual/authentication/user_databases_userdb/ and https://doc.dovecot.org/configuration_manual/authentication/passwd_file/#authentication-passwd-file
|
non_process
|
dovecot configuration relevant dovecot settings auth conf auth mechanism outlook requires login plain is the default review the different authentication backends with dovecot s documentation mail conf mail location guide suggests maildir maildir but as we are mainly concerned about virtual users how does this work mail plugins add quota to manage the size of users mailboxes master conf provide the authentication socket required by postfix postfix smtp auth unix listener var spool postfix private auth mode user postfix group postfix create the lmtp socket for postfix might already be there configure accordingly service lmtp unix listener var spool postfix private dovecot lmtp group postfix mode user postfix review other settings here ssl conf ssl cert and ssl key point to the respective files important syntax is like ssl cert etc letsencrypt live webmail example org fullchain pem making the content of the certificate available lmtp conf if server side sieve should be applied activate the plugin here needs packet installation tracked in quota conf provide the actual mailbox quota plugin quota maildir user quota quota status success dunno quota status nouser dunno quota status overquota mailbox is full and cannot receive any more emails provide the endpoint socket for postfix service quota status executable usr lib dovecot quota status p postfix unix listener var spool postfix private quota status user postfix why is the definition of unix listener lacking mode and group settings as they are present in the other socket definitions the actual quota must be provided by the user db needs investigation enable warnings about quotas plugin quota warning storage quota warning u quota storage quota warning u service quota warning executable script usr local bin quota warning sh adjust script path unix listener quota warning user vmail group vmail mode the referenced script to send the users emails about their quotas bin sh percent user cat eof usr lib dovecot dovecot lda d 
user o plugin quota maildir user quota noenforcing from postmaster webmail example org subject quota warning percent reached your mailbox can only store a limited amount of emails currently it is percent full if you reach then new emails cannot be stored thanks for your understanding eof important somewhere is the user db setting this needs careful attention because my setup will not rely on sql as backend deviating from the tutorial see and
| 0
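The Dovecot row above wires `quota_warning` triggers at fixed fill levels (95% and 80%) to a notification script. The trigger logic itself is just a threshold check; a simplified sketch of what those settings express (an assumed simplification, not Dovecot's actual implementation):

```python
def warnings_to_fire(used_bytes: int, quota_bytes: int,
                     thresholds: tuple[int, ...] = (95, 80)) -> list[int]:
    """Return the threshold percentages the current usage has crossed,
    analogous to the quota_warning / quota_warning2 settings above.
    Dovecot would invoke the quota-warning script once per crossing."""
    pct = 100 * used_bytes / quota_bytes
    return [t for t in thresholds if pct >= t]

print(warnings_to_fire(85, 100))  # crosses the 80% threshold only
```

Note that Dovecot fires each warning only once as the threshold is crossed upward, which is why the real mechanism is event-driven rather than a recurring check like this sketch.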
|
6,181
| 13,886,002,489
|
IssuesEvent
|
2020-10-18 22:34:42
|
bSchnepp/Feral
|
https://api.github.com/repos/bSchnepp/Feral
|
closed
|
[REGRESSION] - RS232 driver is no longer functioning
|
architecture feature high-priority
|
Since the CMake migration (https://github.com/bSchnepp/Feral/commit/79a9c3a2631900b422757ce159204135c7c2842b), it appears that the serial driver no longer functions, and COM1 is empty when the kernel finished booting.
|
1.0
|
[REGRESSION] - RS232 driver is no longer functioning - Since the CMake migration (https://github.com/bSchnepp/Feral/commit/79a9c3a2631900b422757ce159204135c7c2842b), it appears that the serial driver no longer functions, and COM1 is empty when the kernel finished booting.
|
non_process
|
driver is no longer functioning since the cmake migration it appears that the serial driver no longer functions and is empty when the kernel finished booting
| 0
|