Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
91,814 | 18,720,350,926 | IssuesEvent | 2021-11-03 11:02:53 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | [Bug]: UQI: show clause is not working | Bug Frontend Actions Pod Needs Triaging Low effort UQI BE Coders Pod | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Fields in action configuration page can be made to show / hide themselves depending upon other fields by the use of `show` clause. This clause is not working for S3 plugin `List` action.
```
"show": {
"path": "actionConfiguration.formData.list.signedUrl",
"comparison": "EQUALS",
"value": "YES"
}
```
https://user-images.githubusercontent.com/1757421/138893896-69eece87-091c-4ce5-9e7a-ad7ba21e56d6.mov
### Steps To Reproduce
1. Go to `release.app.appsmith....`
2. Create a dummy S3 datasource.
3. Create a `list` action on the S3 datasource.
4. Set `Generate Signed URL` to `No`.
5. Check that the `Expiry` duration field does not get hidden. Ideally it should hide when `Generate Signed URL` option is set to `No`.
### Environment
Release
### Version
Cloud | 1.0 | [Bug]: UQI: show clause is not working - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Fields in action configuration page can be made to show / hide themselves depending upon other fields by the use of `show` clause. This clause is not working for S3 plugin `List` action.
```
"show": {
"path": "actionConfiguration.formData.list.signedUrl",
"comparison": "EQUALS",
"value": "YES"
}
```
https://user-images.githubusercontent.com/1757421/138893896-69eece87-091c-4ce5-9e7a-ad7ba21e56d6.mov
### Steps To Reproduce
1. Go to `release.app.appsmith....`
2. Create a dummy S3 datasource.
3. Create a `list` action on the S3 datasource.
4. Set `Generate Signed URL` to `No`.
5. Check that the `Expiry` duration field does not get hidden. Ideally it should hide when `Generate Signed URL` option is set to `No`.
### Environment
Release
### Version
Cloud | non_process | uqi show clause is not working is there an existing issue for this i have searched the existing issues current behavior fields in action configuration page can be made to show hide themselves depending upon other fields by the use of show clause this clause is not working for plugin list action show path actionconfiguration formdata list signedurl comparison equals value yes steps to reproduce go to release app appsmith create a dummy datasource create a list action on the datasource set generate signed url to no check that the expiry duration field does not get hidden ideally it should hide when generate signed url option is set to no environment release version cloud | 0 |
18,370 | 24,496,650,652 | IssuesEvent | 2022-10-10 09:15:29 | CS-METIS/p1-status-page | https://api.github.com/repos/CS-METIS/p1-status-page | opened | 🛑 Processing is down | status processing | In [`db71300`](https://github.com/CS-METIS/p1-status-page/commit/db71300cf0377ec219910871f834ce68c258c25b
), Processing (https://scdf.csgroup.space) was **down**:
- HTTP code: 0
- Response time: 0 ms
| 1.0 | 🛑 Processing is down - In [`db71300`](https://github.com/CS-METIS/p1-status-page/commit/db71300cf0377ec219910871f834ce68c258c25b
), Processing (https://scdf.csgroup.space) was **down**:
- HTTP code: 0
- Response time: 0 ms
| process | 🛑 processing is down in processing was down http code response time ms | 1 |
21,148 | 28,126,790,797 | IssuesEvent | 2023-03-31 18:28:31 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | ORDER BY getting added by default | Type:Bug Priority:P2 .Performance Querying/Processor | **Describe the bug**
If you build any query that has an aggregation, ORDER BY clause is getting added by default, which adds a complexity to the DB query planner resulting in higher query response times
**Logs**
NA
**To Reproduce**
1) New GUI question
2) Select the people table, count by source
3) check the generated SQL, there's an ORDER BY, but we haven't added an order by in the GUI
**Expected behavior**
Don't add ORDER BY if it's not explicit
**Screenshots/videos**

**Information about your Metabase Installation:**
- Metabase version: master, but happens in 45 as well and probably more
**Severity**
P2
**Additional context**
NA | 1.0 | ORDER BY getting added by default - **Describe the bug**
If you build any query that has an aggregation, ORDER BY clause is getting added by default, which adds a complexity to the DB query planner resulting in higher query response times
**Logs**
NA
**To Reproduce**
1) New GUI question
2) Select the people table, count by source
3) check the generated SQL, there's an ORDER BY, but we haven't added an order by in the GUI
**Expected behavior**
Don't add ORDER BY if it's not explicit
**Screenshots/videos**

**Information about your Metabase Installation:**
- Metabase version: master, but happens in 45 as well and probably more
**Severity**
P2
**Additional context**
NA | process | order by getting added by default describe the bug if you build any query that has an aggregation order by clause is getting added by default which adds a complexity to the db query planner resulting in higher query response times logs na to reproduce new gui question select the people table count by source check the generated sql there s an order by but we haven t added an order by in the gui expected behavior don t add order by if it s not explicit screenshots videos information about your metabase installation metabase version master but happens in as well and probably more severity additional context na | 1 |
4,369 | 7,260,515,624 | IssuesEvent | 2018-02-18 10:54:24 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | [FEATURE] Remove Singleparts to Multiparts algorithm | Automatic new feature Easy Processing | Original commit: https://github.com/qgis/QGIS/commit/a55fbd8ef349a52c166b69b70ce6cfccad8c42aa by nyalldawson
This algorithm is no longer required - it's been replaced by
the 'Promote to multipart' and 'Collect geometries" algorithms.
Tagged as feature to remember to include in release notes | 1.0 | [FEATURE] Remove Singleparts to Multiparts algorithm - Original commit: https://github.com/qgis/QGIS/commit/a55fbd8ef349a52c166b69b70ce6cfccad8c42aa by nyalldawson
This algorithm is no longer required - it's been replaced by
the 'Promote to multipart' and 'Collect geometries" algorithms.
Tagged as feature to remember to include in release notes | process | remove singleparts to multiparts algorithm original commit by nyalldawson this algorithm is no longer required it s been replaced by the promote to multipart and collect geometries algorithms tagged as feature to remember to include in release notes | 1 |
131,986 | 18,265,114,463 | IssuesEvent | 2021-10-04 07:30:13 | MicrosoftDocs/CloudAppSecurityDocs | https://api.github.com/repos/MicrosoftDocs/CloudAppSecurityDocs | closed | Request to correct the description in "Azure Information Protection integration" | cloud-app-security/svc | I believe the following description is not correct because I confirmed labels which attached outside of MCAS were not able to be overridden with MCAS. I tried to override a label manually and automatically from MCAS, the results were the same in both cases.
- Description
Labels with protection outside of Cloud App Security can be overridden by Cloud App Security, but can't be removed.
Could you please make sure the above investigation is correct and update that description?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d87a9a6d-db50-50ca-755b-ce28631c30ac
* Version Independent ID: 8656b0a3-ab7e-76ec-2d89-8c1aa643c9fa
* Content: [Integrate Azure Information Protection with Cloud App Security](https://docs.microsoft.com/en-us/cloud-app-security/azip-integration#how-it-works%E4%BB%A5%E5%89%8D%E3%81%AF%E5%89%8A%E9%99%A4%E3%82%82%E4%B8%8A%E6%9B%B8%E3%81%8D%E3%82%82NG%E3%81%A7%E3%81%82%E3%81%A3%E3%81%9F%E8%A8%98%E6%86%B6%E3%81%8C%E3%81%82%E3%82%8B%E3%81%AE%E3%81%A7%E3%81%99%E3%81%8C%E3%80%81%E7%8F%BE%E7%8A%B6%E3%81%AF%E6%94%B9%E5%96%84%E3%81%95%E3%82%8C%E3%81%A6%E3%83%A6%E3%83%BC%E3%82%B6%E3%83%BC%E3%81%8C%E4%BB%98%E4%B8%8E%E3%81%97%E3%81%9F%E3%83%A9%E3%83%99%E3%83%AB%E3%81%AB%E5%AF%BE%E3%81%97%E3%81%A6MCAS%E3%81%A7%E3%83%9D%E3%83%AA%E3%82%B7%E3%83%BC%E3%81%AB%E5%90%88%E8%87%B4%E3%81%97%E3%81%9F%E3%83%95%E3%82%A1%E3%82%A4%E3%83%AB%E3%81%AF%E4%B8%8A%E6%9B%B8%E3%81%8D%E3%81%8C%E3%81%A7%E3%81%8D%E3%82%8B)
* Content Source: [CloudAppSecurityDocs/azip-integration.md](https://github.com/Microsoft/CloudAppSecurityDocs/blob/master/CloudAppSecurityDocs/azip-integration.md)
* Service: **cloud-app-security**
* GitHub Login: @dcurwin
* Microsoft Alias: **dacurwin** | True | Request to correct the description in "Azure Information Protection integration" - I believe the following description is not correct because I confirmed labels which attached outside of MCAS were not able to be overridden with MCAS. I tried to override a label manually and automatically from MCAS, the results were the same in both cases.
- Description
Labels with protection outside of Cloud App Security can be overridden by Cloud App Security, but can't be removed.
Could you please make sure the above investigation is correct and update that description?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d87a9a6d-db50-50ca-755b-ce28631c30ac
* Version Independent ID: 8656b0a3-ab7e-76ec-2d89-8c1aa643c9fa
* Content: [Integrate Azure Information Protection with Cloud App Security](https://docs.microsoft.com/en-us/cloud-app-security/azip-integration#how-it-works%E4%BB%A5%E5%89%8D%E3%81%AF%E5%89%8A%E9%99%A4%E3%82%82%E4%B8%8A%E6%9B%B8%E3%81%8D%E3%82%82NG%E3%81%A7%E3%81%82%E3%81%A3%E3%81%9F%E8%A8%98%E6%86%B6%E3%81%8C%E3%81%82%E3%82%8B%E3%81%AE%E3%81%A7%E3%81%99%E3%81%8C%E3%80%81%E7%8F%BE%E7%8A%B6%E3%81%AF%E6%94%B9%E5%96%84%E3%81%95%E3%82%8C%E3%81%A6%E3%83%A6%E3%83%BC%E3%82%B6%E3%83%BC%E3%81%8C%E4%BB%98%E4%B8%8E%E3%81%97%E3%81%9F%E3%83%A9%E3%83%99%E3%83%AB%E3%81%AB%E5%AF%BE%E3%81%97%E3%81%A6MCAS%E3%81%A7%E3%83%9D%E3%83%AA%E3%82%B7%E3%83%BC%E3%81%AB%E5%90%88%E8%87%B4%E3%81%97%E3%81%9F%E3%83%95%E3%82%A1%E3%82%A4%E3%83%AB%E3%81%AF%E4%B8%8A%E6%9B%B8%E3%81%8D%E3%81%8C%E3%81%A7%E3%81%8D%E3%82%8B)
* Content Source: [CloudAppSecurityDocs/azip-integration.md](https://github.com/Microsoft/CloudAppSecurityDocs/blob/master/CloudAppSecurityDocs/azip-integration.md)
* Service: **cloud-app-security**
* GitHub Login: @dcurwin
* Microsoft Alias: **dacurwin** | non_process | request to correct the description in azure information protection integration i believe the following description is not correct because i confirmed labels which attached outside of mcas were not able to be overridden with mcas i tried to override a label manually and automatically from mcas the results were the same in both cases description labels with protection outside of cloud app security can be overridden by cloud app security but can t be removed could you please make sure the above investigation is correct and update that description document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service cloud app security github login dcurwin microsoft alias dacurwin | 0 |
230,985 | 25,482,857,489 | IssuesEvent | 2022-11-26 01:45:20 | maddyCode23/linux-4.1.15 | https://api.github.com/repos/maddyCode23/linux-4.1.15 | reopened | CVE-2017-17741 (Medium) detected in linux-stable-rtv4.1.33 | security vulnerability | ## CVE-2017-17741 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The KVM implementation in the Linux kernel through 4.14.7 allows attackers to obtain potentially sensitive information from kernel memory, aka a write_mmio stack-based out-of-bounds read, related to arch/x86/kvm/x86.c and include/trace/events/kvm.h.
<p>Publish Date: 2017-12-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-17741>CVE-2017-17741</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17741">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17741</a></p>
<p>Release Date: 2017-12-18</p>
<p>Fix Resolution: v4.15-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2017-17741 (Medium) detected in linux-stable-rtv4.1.33 - ## CVE-2017-17741 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary>
<p>
<p>Julia Cartwright's fork of linux-stable-rt.git</p>
<p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/maddyCode23/linux-4.1.15/commit/f1f3d2b150be669390b32dfea28e773471bdd6e7">f1f3d2b150be669390b32dfea28e773471bdd6e7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The KVM implementation in the Linux kernel through 4.14.7 allows attackers to obtain potentially sensitive information from kernel memory, aka a write_mmio stack-based out-of-bounds read, related to arch/x86/kvm/x86.c and include/trace/events/kvm.h.
<p>Publish Date: 2017-12-18
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2017-17741>CVE-2017-17741</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17741">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-17741</a></p>
<p>Release Date: 2017-12-18</p>
<p>Fix Resolution: v4.15-rc5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in linux stable cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files vulnerability details the kvm implementation in the linux kernel through allows attackers to obtain potentially sensitive information from kernel memory aka a write mmio stack based out of bounds read related to arch kvm c and include trace events kvm h publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
707,124 | 24,296,206,124 | IssuesEvent | 2022-09-29 10:15:10 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Bluetooth: Host: CONFIG_BT_LOG_SNIFFER_INFO doesn't work as intended without bonding | bug priority: low area: Bluetooth area: Bluetooth Host | **Describe the bug**
The help text of `CONFIG_BT_LOG_SNIFFER_INFO` says that it will enable logging of e.g. the LTK and private keys. While this is mostly true, the LTK is not logged if the link isn't bonding (triggered by e.g. setting `CONFIG_BT_BONDABLE=n`).
**To Reproduce**
Steps to reproduce the behavior:
1. Build the BT shell (`zephyr/tests/bluetooth/shell`) with `CONFIG_BT_BONDABLE=n`
2. Connect to a device and do `bt security 2` to encrypt the link
3. See missing logging of the LTK
**Expected behavior**
Expected to see the LTK regardless of bonding, as debugging a non-bonded connection is equally valid.
**Impact**
Annoyance
**Logs and console output**
Central with bonding:
```
uart:~$ bt security 2
LE conn param updated: int 0x0018 lat 0 to 42
Security changed: 5E:C8:43:72:21:1E (random) level 2
Identity resolved 5E:C8:43:72:21:1E (random) -> C2:35:E5:CA:5E:A1 (random)
Bonded with C2:35:E5:CA:5E:A1 (random)
[00:01:19.844,970] <inf> bt_keys: SC LTK: 0xca1505d416abfc96ab0db3cba088c6b8
uart:~$
```
Peripheral with bonding:
```
Security changed: 59:48:1E:1B:D4:30 (random) level 2
Identity resolved 59:48:1E:1B:D4:30 (random) -> D7:8C:A2:75:B6:07 (random)
Bonded with D7:8C:A2:75:B6:07 (random)
[00:02:24.922,943] <inf> bt_keys: SC LTK: 0xca1505d416abfc96ab0db3cba088c6b8
uart:~$
```
Central without bonding:
```
uart:~$ bt security 2
Security changed: 4E:22:41:2D:FE:6F (random) level 2
Paired with 4E:22:41:2D:FE:6F (random)
uart:~$
```
Peripheral without bonding:
```
Security changed: 71:46:D3:9B:A6:7A (random) level 2
Paired with 71:46:D3:9B:A6:7A (random)
uart:~$
```
**Environment (please complete the following information):**
- Commit SHA or Version used: 489e8eb02c2a7bd46a9e73ee1a07a9bd20a61e98
**Additional context**
N/A | 1.0 | Bluetooth: Host: CONFIG_BT_LOG_SNIFFER_INFO doesn't work as intended without bonding - **Describe the bug**
The help text of `CONFIG_BT_LOG_SNIFFER_INFO` says that it will enable logging of e.g. the LTK and private keys. While this is mostly true, the LTK is not logged if the link isn't bonding (triggered by e.g. setting `CONFIG_BT_BONDABLE=n`).
**To Reproduce**
Steps to reproduce the behavior:
1. Build the BT shell (`zephyr/tests/bluetooth/shell`) with `CONFIG_BT_BONDABLE=n`
2. Connect to a device and do `bt security 2` to encrypt the link
3. See missing logging of the LTK
**Expected behavior**
Expected to see the LTK regardless of bonding, as debugging a non-bonded connection is equally valid.
**Impact**
Annoyance
**Logs and console output**
Central with bonding:
```
uart:~$ bt security 2
LE conn param updated: int 0x0018 lat 0 to 42
Security changed: 5E:C8:43:72:21:1E (random) level 2
Identity resolved 5E:C8:43:72:21:1E (random) -> C2:35:E5:CA:5E:A1 (random)
Bonded with C2:35:E5:CA:5E:A1 (random)
[00:01:19.844,970] <inf> bt_keys: SC LTK: 0xca1505d416abfc96ab0db3cba088c6b8
uart:~$
```
Peripheral with bonding:
```
Security changed: 59:48:1E:1B:D4:30 (random) level 2
Identity resolved 59:48:1E:1B:D4:30 (random) -> D7:8C:A2:75:B6:07 (random)
Bonded with D7:8C:A2:75:B6:07 (random)
[00:02:24.922,943] <inf> bt_keys: SC LTK: 0xca1505d416abfc96ab0db3cba088c6b8
uart:~$
```
Central without bonding:
```
uart:~$ bt security 2
Security changed: 4E:22:41:2D:FE:6F (random) level 2
Paired with 4E:22:41:2D:FE:6F (random)
uart:~$
```
Peripheral without bonding:
```
Security changed: 71:46:D3:9B:A6:7A (random) level 2
Paired with 71:46:D3:9B:A6:7A (random)
uart:~$
```
**Environment (please complete the following information):**
- Commit SHA or Version used: 489e8eb02c2a7bd46a9e73ee1a07a9bd20a61e98
**Additional context**
N/A | non_process | bluetooth host config bt log sniffer info doesn t work as intended without bonding describe the bug the help text of config bt log sniffer info says that it will enable logging of e g the ltk and private keys while this is mostly true the ltk is not logged if the link isn t bonding triggered by e g setting config bt bondable n to reproduce steps to reproduce the behavior build the bt shell zephyr tests bluetooth shell with config bt bondable n connect to a device and do bt security to encrypt the link see missing logging of the ltk expected behavior expected to see the ltk regardless of bonding as debugging a non bonded connection is equally valid impact annoyance logs and console output central with bonding uart bt security le conn param updated int lat to security changed random level identity resolved random ca random bonded with ca random bt keys sc ltk uart peripheral with bonding security changed random level identity resolved random random bonded with random bt keys sc ltk uart central without bonding uart bt security security changed fe random level paired with fe random uart peripheral without bonding security changed random level paired with random uart environment please complete the following information commit sha or version used additional context n a | 0 |
15,336 | 19,472,002,376 | IssuesEvent | 2021-12-24 03:51:21 | emily-writes-poems/emily-writes-poems-processing | https://api.github.com/repos/emily-writes-poems/emily-writes-poems-processing | closed | migrate (new): get all poems | processing | mostly to have a list of `poem_id`s for use in select fields in forms | 1.0 | migrate (new): get all poems - mostly to have a list of `poem_id`s for use in select fields in forms | process | migrate new get all poems mostly to have a list of poem id s for use in select fields in forms | 1 |
10,226 | 13,094,312,568 | IssuesEvent | 2020-08-03 12:10:04 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Graphical Builder - Link between Expression Paramenter and Table/Vector Field | Feature Request Feedback Processing | Author Name: **Hans-Peter Klossek** (Hans-Peter Klossek)
Original Redmine Issue: [21337](https://issues.qgis.org/issues/21337)
Redmine category:processing/modeller
---
Hello folks,
I have a feature request:
It would be nice to implement a link between the input parameter 'expression' and 'table/vector field' so it is useable for a dynamic workflow with a parameter like '@field_namen' inside of a expression. I have tried this in the field calculator, it is working only for the column name but the field values go in the 'null-nirvana'.
Thanks and best regards, Hape
---
- [example.jpg](https://issues.qgis.org/attachments/download/14380/example.jpg) (Hans-Peter Klossek) | 1.0 | Graphical Builder - Link between Expression Paramenter and Table/Vector Field - Author Name: **Hans-Peter Klossek** (Hans-Peter Klossek)
Original Redmine Issue: [21337](https://issues.qgis.org/issues/21337)
Redmine category:processing/modeller
---
Hello folks,
I have a feature request:
It would be nice to implement a link between the input parameter 'expression' and 'table/vector field' so it is useable for a dynamic workflow with a parameter like '@field_namen' inside of a expression. I have tried this in the field calculator, it is working only for the column name but the field values go in the 'null-nirvana'.
Thanks and best regards, Hape
---
- [example.jpg](https://issues.qgis.org/attachments/download/14380/example.jpg) (Hans-Peter Klossek) | process | graphical builder link between expression paramenter and table vector field author name hans peter klossek hans peter klossek original redmine issue redmine category processing modeller hello folks i have a feature request it would be nice to implement a link between the input parameter expression and table vector field so it is useable for a dynamic workflow with a parameter like field namen inside of a expression i have tried this in the field calculator it is working only for the column name but the field values go in the null nirvana thanks and best regards hape hans peter klossek | 1 |
324 | 2,586,094,720 | IssuesEvent | 2015-02-17 08:36:33 | pydio/pydio-core | https://api.github.com/repos/pydio/pydio-core | closed | Stop using VERSION file | component:core prio:low type:performances | VERSION file is read every request to define AJXP_VERSION and AJXP_VERSION_DATE (in https://github.com/ajaxplorer/ajaxplorer-core/blob/master/core/src/conf/bootstrap_context.php) | True | Stop using VERSION file - VERSION file is read every request to define AJXP_VERSION and AJXP_VERSION_DATE (in https://github.com/ajaxplorer/ajaxplorer-core/blob/master/core/src/conf/bootstrap_context.php) | non_process | stop using version file version file is read every request to define ajxp version and ajxp version date in | 0 |
14,679 | 17,794,841,343 | IssuesEvent | 2021-08-31 20:40:18 | googleapis/nodejs-asset | https://api.github.com/repos/googleapis/nodejs-asset | closed | cannot get quickstart.js or getBatchAssetHistory.js samples to work | type: process api: cloudasset samples | I have managed to reproduce flaky tests locally, the root of the problem appears to be that `batchGetAssetsHistory` does not contain a resource in its listing (that obviously exists):
In pantheon 👇
<img width="994" alt="Screen Shot 2020-11-10 at 9 46 11 AM" src="https://user-images.githubusercontent.com/194609/98711307-af888580-2339-11eb-9bd2-817da9d8bc91.png">
Running sample 👇
<img width="1522" alt="Screen Shot 2020-11-10 at 9 46 40 AM" src="https://user-images.githubusercontent.com/194609/98711359-bc0cde00-2339-11eb-90a1-646e93689877.png">
I'm guessing this is related to the assets falling outside of the set returned by `readTimeWindow`, but I can't seem to make any values return.
Refs #422 | 1.0 | cannot get quickstart.js or getBatchAssetHistory.js samples to work - I have managed to reproduce flaky tests locally, the root of the problem appears to be that `batchGetAssetsHistory` does not contain a resource in its listing (that obviously exists):
In pantheon 👇
<img width="994" alt="Screen Shot 2020-11-10 at 9 46 11 AM" src="https://user-images.githubusercontent.com/194609/98711307-af888580-2339-11eb-9bd2-817da9d8bc91.png">
Running sample 👇
<img width="1522" alt="Screen Shot 2020-11-10 at 9 46 40 AM" src="https://user-images.githubusercontent.com/194609/98711359-bc0cde00-2339-11eb-90a1-646e93689877.png">
I'm guessing this is related to the assets falling outside of the set returned by `readTimeWindow`, but I can't seem to make any values return.
Refs #422 | process | cannot get quickstart js or getbatchassethistory js samples to work i have managed to reproduce flaky tests locally the root of the problem appears to be that batchgetassetshistory does not contain a resource in its listing that obviously exists in pantheon 👇 img width alt screen shot at am src running sample 👇 img width alt screen shot at am src i m guessing this is related to the assets falling outside of the set returned by readtimewindow but i can t seem to make any values return refs | 1 |
18,079 | 24,095,106,318 | IssuesEvent | 2022-09-19 18:00:50 | googleapis/repo-automation-bots | https://api.github.com/repos/googleapis/repo-automation-bots | opened | migrate flakybot to Cloud Run | type: process priority: p2 bot: flakybot | Related #3773
Flakybot is using a lot of resources. It's better to migrate it to Cloud Run.
- [ ] Make sure canary-bot-backend Cloud Run instance can receive pubsub messages from the scheduler-proxy
- [ ] Do a code review in flakybot for possible race condition with concurrent request
- [ ] Deploy Cloud Run backend, but keep Cloud Function deployment
- [ ] Change the routing in the scheduler-proxy to the Cloud Run backend
- [ ] Once it works remove the Cloud Function deployment | 1.0 | migrate flakybot to Cloud Run - Related #3773
Flakybot is using a lot of resources. It's better to migrate it to Cloud Run.
- [ ] Make sure canary-bot-backend Cloud Run instance can receive pubsub messages from the scheduler-proxy
- [ ] Do a code review in flakybot for possible race condition with concurrent request
- [ ] Deploy Cloud Run backend, but keep Cloud Function deployment
- [ ] Change the routing in the scheduler-proxy to the Cloud Run backend
- [ ] Once it works remove the Cloud Function deployment | process | migrate flakybot to cloud run related flakybot is using a lot of resources it s better to migrate it to cloud run make sure canary bot backend cloud run instance can receive pubsub messages from the scheduler proxy do a code review in flakybot for possible race condition with concurrent request deploy cloud run backend but keep cloud function deployment change the routing in the scheduler proxy to the cloud run backend once it works remove the cloud function deployment | 1 |
7,380 | 10,514,634,653 | IssuesEvent | 2019-09-28 02:15:12 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | When converting SparkSQL questions to SQL timestamps are converted wrong | .Backend Database/Spark Priority:P3 Query Processor Type:Bug | This is the exact same issue as #11009 but for SparkSQL. In either 0.31 or 0.32 we improved logic converting a question to SQL so literal values are spliced in instead of leaving `?` parameter placeholders. However, the literal generated for `Timestamp`s was wrong. It was generating
```sql
from_unixtime(
unix_timestamp(
'2019-01-19T00:00:00.000Z',
'yyyy-MM-dd\\\\'T\\\\'HH:mm:ss.SSS\\\\'Z\\\\'
)
)
```
which was not only wrong, but it also didn't work.
Spark SQL probably uses `''` to escape single quotes inside literals, but either way, definitely not `\\'` -- that is one slash too many. Also, Spark SQL doesn't use ISO-8601 formatted strings.
We now generate SQL like
```sql
timestamp '2019-01-19 00:00:00.000'
```
which 9 out of 10 dentists agree is clear and actually works correctly.
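Editor's note: a literal in the new format is straightforward to produce from a language-level timestamp. The sketch below is illustrative only (it is not Metabase's implementation, and the function name is invented); it assumes millisecond precision, matching the example above.

```python
from datetime import datetime

def spark_timestamp_literal(ts: datetime) -> str:
    """Render a datetime as a Spark SQL timestamp literal such as
    timestamp '2019-01-19 00:00:00.000'. strftime's %f yields
    microseconds (6 digits), so the last three digits are dropped
    to keep millisecond precision."""
    return "timestamp '{}'".format(ts.strftime("%Y-%m-%d %H:%M:%S.%f")[:-3])

print(spark_timestamp_literal(datetime(2019, 1, 19)))
# prints: timestamp '2019-01-19 00:00:00.000'
```

Note that this form also sidesteps the quote-escaping problem entirely, since no quote characters appear inside the literal.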
## terraform-providers/terraform-provider-oci: oci/core_volume_group_test.go; 16 LoC

kalexmills/github-vet-tests-dec2020 · IssuesEvent 14,427,878,691 · 2020-12-06 06:45:40 · closed · labels: fresh small test · https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
Found a possible issue in [terraform-providers/terraform-provider-oci](https://www.github.com/terraform-providers/terraform-provider-oci) at [oci/core_volume_group_test.go](https://github.com/terraform-providers/terraform-provider-oci/blob/b79082228a6fabc06eb347d2a4a537e07f71b06f/oci/core_volume_group_test.go#L343-L358)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to volumeGroupId is reassigned at line 347
[Click here to see the code in its original context.](https://github.com/terraform-providers/terraform-provider-oci/blob/b79082228a6fabc06eb347d2a4a537e07f71b06f/oci/core_volume_group_test.go#L343-L358)
<details>
<summary>Click here to show the 16 line(s) of Go which triggered the analyzer.</summary>
```go
for _, volumeGroupId := range volumeGroupIds {
if ok := SweeperDefaultResourceId[volumeGroupId]; !ok {
deleteVolumeGroupRequest := oci_core.DeleteVolumeGroupRequest{}
deleteVolumeGroupRequest.VolumeGroupId = &volumeGroupId
deleteVolumeGroupRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "core")
_, error := blockstorageClient.DeleteVolumeGroup(context.Background(), deleteVolumeGroupRequest)
if error != nil {
fmt.Printf("Error deleting VolumeGroup %s %s, It is possible that the resource is already deleted. Please verify manually \n", volumeGroupId, error)
continue
}
waitTillCondition(testAccProvider, &volumeGroupId, volumeGroupSweepWaitCondition, time.Duration(3*time.Minute),
volumeGroupSweepResponseFetchOperation, "core", true)
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: b79082228a6fabc06eb347d2a4a537e07f71b06f
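Editor's note on the flagged pattern: `deleteVolumeGroupRequest.VolumeGroupId = &volumeGroupId` stores the address of the range variable, and in Go versions before 1.22 a `for ... range` loop reuses a single variable for every iteration, so such pointers can all end up aliasing the same, repeatedly reassigned variable. The common Go mitigation is to shadow the variable with a per-iteration copy (`volumeGroupId := volumeGroupId`) inside the loop body. The same class of bug, and the same fix, can be sketched in Python, where a closure captures the loop variable itself rather than its value at capture time:

```python
volume_group_ids = ["vg-1", "vg-2", "vg-3"]

# Pitfall: every lambda closes over the same loop variable, so after the
# loop finishes they all observe its final value.
getters = []
for vg_id in volume_group_ids:
    getters.append(lambda: vg_id)
print([g() for g in getters])   # prints: ['vg-3', 'vg-3', 'vg-3']

# Fix: bind a fresh per-iteration copy (here via a default argument),
# analogous to shadowing the range variable in Go.
fixed = []
for vg_id in volume_group_ids:
    fixed.append(lambda current=vg_id: current)
print([g() for g in fixed])     # prints: ['vg-1', 'vg-2', 'vg-3']
```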
## Use the fixed data to bind message page.

BriceChou/WeiboClient · IssuesEvent 4,239,615,435 · 2016-07-06 10:07:03 · closed · labels: Highest In processing · https://api.github.com/repos/BriceChou/WeiboClient

1. Because we don't have the highest permission, we can't get the message from Weibo.
2. We use the fixed data to make a beautiful message page.
3. Add a list view to the message page so that we can scroll down the page.
## CVE-2022-46175 (High) detected in json5-0.5.1.tgz

Thezone1975/tabliss · IssuesEvent 26,085,199,454 · 2022-12-26 01:16:13 · opened · labels: security vulnerability · https://api.github.com/repos/Thezone1975/tabliss

## CVE-2022-46175 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>json5-0.5.1.tgz</b></p></summary>
<p>JSON for the ES5 era.</p>
<p>Library home page: <a href="https://registry.npmjs.org/json5/-/json5-0.5.1.tgz">https://registry.npmjs.org/json5/-/json5-0.5.1.tgz</a></p>
<p>Path to dependency file: /tabliss/package.json</p>
<p>Path to vulnerable library: /node_modules/json5/package.json</p>
<p>
Dependency Hierarchy:
- copy-webpack-plugin-4.5.1.tgz (Root Library)
- loader-utils-1.1.0.tgz
- :x: **json5-0.5.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JSON5 is an extension to the popular JSON file format that aims to be easier to write and maintain by hand (e.g. for config files). The `parse` method of the JSON5 library before and including version `2.2.1` does not restrict parsing of keys named `__proto__`, allowing specially crafted strings to pollute the prototype of the resulting object. This vulnerability pollutes the prototype of the object returned by `JSON5.parse` and not the global Object prototype, which is the commonly understood definition of Prototype Pollution. However, polluting the prototype of a single object can have significant security impact for an application if the object is later used in trusted operations. This vulnerability could allow an attacker to set arbitrary and unexpected keys on the object returned from `JSON5.parse`. The actual impact will depend on how applications utilize the returned object and how they filter unwanted keys, but could include denial of service, cross-site scripting, elevation of privilege, and in extreme cases, remote code execution. `JSON5.parse` should restrict parsing of `__proto__` keys when parsing JSON strings to objects. As a point of reference, the `JSON.parse` method included in JavaScript ignores `__proto__` keys. Simply changing `JSON5.parse` to `JSON.parse` in the examples above mitigates this vulnerability. This vulnerability is patched in json5 version 2.2.2 and later.
<p>Publish Date: 2022-12-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-46175>CVE-2022-46175</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-46175">https://www.cve.org/CVERecord?id=CVE-2022-46175</a></p>
<p>Release Date: 2022-12-24</p>
<p>Fix Resolution: json5 - 2.2.2</p>
</p>
</details>
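Editor's note: Python has no prototype chain to pollute, but the mitigation idea (treating `__proto__` as an ordinary or forbidden key during parsing) can be sketched with the standard `json` module's `object_pairs_hook`. This is illustrative only, not a patch for json5:

```python
import json

def reject_proto(pairs):
    # Drop keys that would be dangerous in a JavaScript prototype chain.
    # (A stricter parser could raise instead of silently dropping.)
    return {k: v for k, v in pairs if k != "__proto__"}

doc = '{"name": "x", "__proto__": {"polluted": true}}'
safe = json.loads(doc, object_pairs_hook=reject_proto)
print(safe)   # prints: {'name': 'x'}
```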
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
## Fix linalg.test_matmul

unifyai/ivy · IssuesEvent 29,174,862,017 · 2023-05-19 07:00:51 · reopened · labels: TensorFlow Frontend Sub Task Failing Test · https://api.github.com/repos/unifyai/ivy

| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5013045436/jobs/8985676459" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/runs/5013045436/jobs/8985676459" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5013045436/jobs/8985676459" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="https://github.com/unifyai/ivy/actions/runs/5013045436/jobs/8985676459" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|paddle|<a href="https://github.com/unifyai/ivy/actions/runs/4996793641/jobs/8950402509" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
<details>
<summary>FAILED ivy_tests/test_ivy/test_functional/test_core/test_linalg.py::test_matmul[cpu-ivy.functional.backends.torch-False-False]</summary>
2023-05-18T10:49:11.4766679Z E RuntimeError: Boolean value of Tensor with more than one value is ambiguous
2023-05-18T10:49:11.4795660Z E ivy.utils.exceptions.IvyBackendException: torch: matmul: Boolean value of Tensor with more than one value is ambiguous
2023-05-18T10:49:11.4796643Z E Falsifying example: test_matmul(
2023-05-18T10:49:11.4797734Z E backend_fw=<module 'ivy.functional.backends.torch' from '/ivy/ivy/functional/backends/torch/__init__.py'>,
2023-05-18T10:49:11.4798474Z E on_device='cpu',
2023-05-18T10:49:11.4799146Z E x=(['int8'], array([[2, 2],
2023-05-18T10:49:11.4799794Z E [2, 2]], dtype=int8), False, False),
2023-05-18T10:49:11.4800431Z E y=(['int8'], array([[2, 2],
2023-05-18T10:49:11.4801912Z E [2, 2]], dtype=int8), False, False),
2023-05-18T10:49:11.4803341Z E fn_name='matmul',
2023-05-18T10:49:11.4804127Z E test_flags=FunctionTestFlags(
2023-05-18T10:49:11.4804749Z E num_positional_args=2,
2023-05-18T10:49:11.4808326Z E with_out=True,
2023-05-18T10:49:11.4809029Z E instance_method=False,
2023-05-18T10:49:11.4810138Z E test_gradients=None,
2023-05-18T10:49:11.4810828Z E test_compile=None,
2023-05-18T10:49:11.4816156Z E as_variable=[False],
2023-05-18T10:49:11.4817623Z E native_arrays=[False],
2023-05-18T10:49:11.4818144Z E container=[False],
2023-05-18T10:49:11.4818671Z E ),
2023-05-18T10:49:11.4822366Z E ground_truth_backend='tensorflow',
2023-05-18T10:49:11.4822922Z E )
2023-05-18T10:49:11.4823439Z E
2023-05-18T10:49:11.4827337Z E You can reproduce this example by temporarily adding @reproduce_failure('6.75.3', b'AXicY2ZAA4wQCgAAZwAF') as a decorator on your test case
</details>
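Editor's note: the root error here comes from truth-testing a multi-element tensor (for example via `if x == y:` or a bare `if x:`), which PyTorch rejects as ambiguous. The toy class below is not ivy or torch code (the name `MiniTensor` is invented); it merely mimics that behavior and shows the unambiguous reduction:

```python
class MiniTensor:
    """Toy stand-in mimicking how torch refuses to truth-test a
    multi-element tensor. For illustration only."""

    def __init__(self, data):
        self.data = list(data)

    def __eq__(self, other):
        # Elementwise comparison returns another "tensor", as in torch.
        return MiniTensor(a == b for a, b in zip(self.data, other.data))

    def __bool__(self):
        if len(self.data) == 1:
            return bool(self.data[0])
        raise RuntimeError(
            "Boolean value of Tensor with more than one value is ambiguous")

    def all(self):
        return all(self.data)

a = MiniTensor([2, 2])
b = MiniTensor([2, 2])
# `if a == b:` would raise the RuntimeError shown in the log;
# reducing explicitly is unambiguous:
print((a == b).all())   # prints: True
```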
## Reuse of IReliableDictionary - Best Practices

MicrosoftDocs/azure-docs · IssuesEvent 7,795,362,043 · 2018-06-08 07:49:46 · closed · labels: cxp in-progress product-question service-fabric/svc triaged · https://api.github.com/repos/MicrosoftDocs/azure-docs

Can we reuse the result of the IReliableStateManager.GetOrAddAsync method? Can we create a field in the stateful service like myDict = ReliableStateManager.GetOrAddAsync<IReliableDictionary>("myDict"); and use it everywhere, or do we need to call GetOrAddAsync() prior to working with the reliable collection?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 81cce417-df6e-bbac-5083-9e48f5efd6b2
* Version Independent ID: e713be8d-2f15-11c6-4e85-32de85e1cc3a
* Content: [Guidelines & Recommendations for Reliable Collections in Azure Service Fabric](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-reliable-collections-guidelines)
* Content Source: [articles/service-fabric/service-fabric-reliable-services-reliable-collections-guidelines.md](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-reliable-services-reliable-collections-guidelines.md)
* Service: **service-fabric**
* GitHub Login: @mcoskun
* Microsoft Alias: **mcoskun**
## Apache configuration in docs is out of date

shaarli/Shaarli · IssuesEvent 12,216,448,825 · 2020-05-01 15:10:07 · closed · labels: documentation · https://api.github.com/repos/shaarli/Shaarli

The [Apache configuration example](https://github.com/shaarli/Shaarli/blob/master/doc/md/Server-configuration.md#apache) is broken ([Read the Docs link](https://shaarli.readthedocs.io/en/master/Server-configuration/#apache)). I tested each of the changes made below and leaving out any one of them broke the server. Errors included "Can't reach server", "Object not found", "Access forbidden", or "Can't provide secure connection".
### Change 1
Update `Directory` options and remove comment.
```diff
<Directory /absolute/path/to/shaarli/>
#Required for .htaccess support
- AllowOverride All
- Order allow,deny
- Allow from all
+ Require all granted
- Options Indexes FollowSymLinks MultiViews #TODO is Indexes/Multiviews required?
+ Options Indexes FollowSymLinks
# Optional - required for playvideos plugin
#Header set Content-Security-Policy "script-src 'self' 'unsafe-inline' https://www.youtube.com https://s.ytimg.com 'unsafe-eval'"
</Directory>
```
### Change 2
Add an alias, move the `Directory` section from Change 1 to before the `VirtualHost` section, change port 443 to port 80, and add a comment that the Let's Encrypt configuration is only needed if not already configured in `httpd-ssl.conf`.
```diff
+Alias /shaarli /absolute/path/to/shaarli/
+
+<Directory /absolute/path/to/shaarli/>
+ #Required for .htaccess support
+ Require all granted
+
+ Options Indexes FollowSymLinks MultiViews
+
+ # Optional - required for playvideos plugin
+ #Header set Content-Security-Policy "script-src 'self' 'unsafe-inline' https://www.youtube.com https://s.ytimg.com 'unsafe-eval'"
+</Directory>
+
-<VirtualHost *:443>
+<VirtualHost *:80>
ServerName shaarli.my-domain.org
DocumentRoot /absolute/path/to/shaarli/
# Logging
# Possible values include: debug, info, notice, warn, error, crit, alert, emerg.
LogLevel warn
ErrorLog /var/log/apache2/shaarli-error.log
CustomLog /var/log/apache2/shaarli-access.log combined
- # Let's Encrypt SSL configuration (recommended)
+ # Let's Encrypt SSL configuration (recommended if not in httpd-ssl.conf)
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/yourdomain.example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.example.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
# Self-signed SSL cert configuration
#SSLEngine on
#SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
#SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
# Optional, log PHP errors, useful for debugging
#php_flag display_errors on
#php_value error_reporting 2147483647
#php_value error_log /var/log/apache2/shaarli-php-error.log
-
- <Directory /absolute/path/to/shaarli/>
- #Required for .htaccess support
- Require all granted
-
- Options Indexes FollowSymLinks
-
- # Optional - required for playvideos plugin
- #Header set Content-Security-Policy "script-src 'self' 'unsafe-inline' https://www.youtube.com https://s.ytimg.com 'unsafe-eval'"
- </Directory>
-
</VirtualHost>
```
I'd like to submit a PR unless anyone sees an issue with the suggested changes. I'm fairly new to Apache, so feedback is welcome! Thanks.
15,946 | 20,164,722,984 | IssuesEvent | 2022-02-10 02:17:29 | ooi-data/RS01SBPS-PC01A-05-ADCPTD102-streamed-adcp_pd0_beam_parsed | https://api.github.com/repos/ooi-data/RS01SBPS-PC01A-05-ADCPTD102-streamed-adcp_pd0_beam_parsed | opened | 🛑 Processing failed: GroupNotFoundError | process | ## Overview
`GroupNotFoundError` raised in the `processing_task` task during a run that ended on 2022-02-10T02:17:29.344756.
## Details
Flow name: `RS01SBPS-PC01A-05-ADCPTD102-streamed-adcp_pd0_beam_parsed`
Task name: `processing_task`
Error type: `GroupNotFoundError`
Error message: group not found at path ''
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 64, in finalize_data_stream
final_group = zarr.open_group(final_store, mode='r+')
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/hierarchy.py", line 1168, in open_group
raise GroupNotFoundError(path)
zarr.errors.GroupNotFoundError: group not found at path ''
```
</details>
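For context, zarr's `mode='r+'` opens an *existing* group and raises `GroupNotFoundError` when the store has none, while `mode='a'` creates the group on demand. A likely remedy is an open-or-create fallback; the sketch below models it with a plain dict store and a stand-in error class (hypothetical names, not the harvester's actual code, so it runs even without zarr installed):

```python
# Toy model of the open-or-create pattern; stand-ins so it runs without zarr.
class GroupNotFoundError(KeyError):
    """Stand-in for zarr.errors.GroupNotFoundError."""

def open_group(store, mode="r+"):
    """Minimal model of zarr.open_group: 'r+' requires an existing group,
    while 'a' creates the group metadata on demand."""
    if ".zgroup" not in store:
        if mode == "r+":
            raise GroupNotFoundError("group not found at path ''")
        store[".zgroup"] = {"zarr_format": 2}  # 'a': create on demand
    return store

def finalize(store):
    """Open the final group, creating it on the first run instead of crashing."""
    try:
        return open_group(store, mode="r+")
    except GroupNotFoundError:
        return open_group(store, mode="a")

store = {}
finalize(store)            # first run: the group gets created, no crash
assert ".zgroup" in store  # later runs open the existing group via 'r+'
```

In the real code the same effect could come from opening the final store with `zarr.open_group(final_store, mode='a')` so the first run of a new data stream does not fail.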
7,141 | 10,282,332,426 | IssuesEvent | 2019-08-26 10:50:44 | osquery/osquery | https://api.github.com/repos/osquery/osquery | closed | Add ability to track fork,vfork,clone syscalls with process_events table | Linux feature process auditing | <!-- Thank you for contributing to osquery! -->
# Feature request
### What new feature do you want?
I would like to be able to track the `fork()`, `vfork()`, and `clone()` syscalls via the `process_events` table.
Right now, when a process performs a simple `fork()` followed by a `system("id")` call, `osquery` has no information about the parent of the `id` command. Below are a simple source program and the corresponding query results:
```c
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
int main () {
system("id");
if (!fork()) // creating child process
{
puts("Child init");
system("id");
exit(0);
} else{
puts("Parent exit");
exit(0);
}
}
```
```
+------+--------------------------+---------+-----------+--------+
| pid | path | mode | cmdline | parent |
+------+--------------------------+---------+-----------+--------+
| 3430 | /home/loqpa/tests/simple | 0100755 | ./simple | 24129 |
| 3431 | /bin/dash | 0100755 | sh -c id; | 3430 |
| 3432 | /usr/bin/id | 0100755 | id | 3431 |
| 3434 | /bin/dash | 0100755 | sh -c id; | 3433 |
| 3435 | /usr/bin/id | 0100755 | id | 3434 |
+------+--------------------------+---------+-----------+--------+
```
### How is this new feature useful?
Process auditing is really important from a security point of view. Right now an adversary can change a publicly available exploit to use one of the aforementioned syscalls and potentially avoid detection (or at least make the audited data miss some steps).
By implementing this feature we are able to fill the gap in the process events sequence.
### How can this be implemented?
Make it disabled by default and provide a flag which will add the required audit daemon rules.
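A hedged sketch of what such a flag might install (the syscall list, `arch` filters and key name are illustrative placeholders, not osquery's actual rules; 32-bit callers need their own `arch=b32` entries):

```
-a always,exit -F arch=b64 -S fork -S vfork -S clone -k process_events
-a always,exit -F arch=b32 -S fork -S vfork -S clone -k process_events
```

Rules like these make the kernel emit an audit record for every process creation, not only `execve()`, which would let the forked child in the example above appear with its real parent.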
131,229 | 18,234,868,397 | IssuesEvent | 2021-10-01 05:00:30 | graywidjaya/snyk-scanning-testing | https://api.github.com/repos/graywidjaya/snyk-scanning-testing | opened | CVE-2021-35516 (High) detected in commons-compress-1.9.jar | security vulnerability | ## CVE-2021-35516 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-compress-1.9.jar</b></p></summary>
<p>Apache Commons Compress software defines an API for working with
compression and archive formats.
These include: bzip2, gzip, pack200, lzma, xz, Snappy, traditional
Unix Compress, DEFLATE and ar, cpio, jar, tar, zip, dump, 7z, arj.</p>
<p>Path to dependency file: snyk-scanning-testing/ProductManager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/commons/commons-compress/1.9/commons-compress-1.9.jar</p>
<p>
Dependency Hierarchy:
- webjars-locator-core-0.35.jar (Root Library)
- :x: **commons-compress-1.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/graywidjaya/snyk-scanning-testing/commit/8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e">8e11d4935d4cae9cfc1d6d0b55433a3b1002a16e</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
When reading a specially crafted 7Z archive, Compress can be made to allocate large amounts of memory that finally leads to an out of memory error even for very small inputs. This could be used to mount a denial of service attack against services that use Compress' sevenz package.
<p>Publish Date: 2021-07-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-35516>CVE-2021-35516</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
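The 7.5 above is reproducible from the CVSS v3 base-score equation. A minimal sketch, with the metric weights hard-coded from the CVSS v3 specification for an unchanged scope (`base_score` is a name chosen here, not a WhiteSource API):

```python
import math

# CVSS v3.x weights from the spec, for scope *unchanged* only
# (PR weights differ when scope is changed; that case is not handled here).
AV  = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC  = {"L": 0.77, "H": 0.44}
PR  = {"N": 0.85, "L": 0.62, "H": 0.27}
UI  = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3 base score for an unchanged scope, rounded up to one decimal."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# The vector reported above: AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
assert base_score("N", "L", "N", "N", "N", "N", "H") == 7.5
```

With C:N/I:N, only the availability term contributes to impact, which is why this network-reachable, low-complexity denial of service lands at 7.5.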
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://commons.apache.org/proper/commons-compress/security-reports.html">https://commons.apache.org/proper/commons-compress/security-reports.html</a></p>
<p>Release Date: 2021-07-13</p>
<p>Fix Resolution: org.apache.commons:commons-compress:1.21</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
7,716 | 10,821,379,678 | IssuesEvent | 2019-11-08 18:32:10 | ORNL-AMO/AMO-Tools-Desktop | https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop | closed | Process Heating - Efficiency Improvement Example | Calculator Process Heating Quick Fix | Current Flue Gas Temperature = 1600
New Flue Gas Temperature = 1600 {do not change}
32,096 | 4,746,595,811 | IssuesEvent | 2016-10-21 11:55:42 | Mantella/Mantella | https://api.github.com/repos/Mantella/Mantella | opened | Add Windows to our test matrix | feature request testing | Since we are now testing Mantella on Linux and macOS builds, the only OS missing from our build matrix is Windows.
As Travis won't add this in the foreseeable future (see https://github.com/travis-ci/travis-ci/issues/216), we could use AppVeyor for our Windows builds instead.
325,336 | 27,868,233,032 | IssuesEvent | 2023-03-21 11:47:31 | blacktokkies/toquiz | https://api.github.com/repos/blacktokkies/toquiz | closed | [BUG] In a Node environment, MSW handlers must use absolute paths | 🚨 bug 📊 test 🌈 client | ## 🔄 How to reproduce bug
<!--어떻게 하면 버그를 다시 만들 수 있는지 과정을 설명해주세요!-->
1. msw server 인스턴스에 상대 경로로 핸들러 등록: 예) `/api/user`
```javascript
const server = setupServer(
rest.get('/api/user', (req, res, ctx) => {
return res(ctx.json({ firstName: 'John' }))
}),
)
```
2. `TypeError: Invalid URL: /api/user` 발생
## ⚠️ Node.js 런타임에서는 msw 핸들러에 절대 경로를 사용하도록 한다.
> Bear in mind that without a DOM-like environment, like the jsdom from Jest, you must use absolute request URLs in NodeJS. This should be reflected in your request handlers:
https://mswjs.io/docs/getting-started/integrate/node#direct-usage
```javascript
const server = setupServer(
// NOT "/user", nothing to be relative to!
rest.get('https://api.backend.dev/user', (req, res, ctx) => {
return res(ctx.json({ firstName: 'John' }))
}),
)
```
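The quoted constraint is not specific to msw or JavaScript: without a DOM there is no document base URL, so a bare path like `/api/user` carries no origin to resolve against. The same point can be illustrated with Python's stdlib URL tools (an analogy only):

```python
from urllib.parse import urljoin, urlsplit

relative = "/api/user"
absolute = "https://api.backend.dev/user"

# A bare path has no scheme or host, so there is no server to match against.
parts = urlsplit(relative)
assert parts.scheme == "" and parts.netloc == ""

# An absolute URL is self-contained.
assert urlsplit(absolute).netloc == "api.backend.dev"

# A relative path only becomes a full URL once a base is supplied -- which is
# exactly what a DOM-like environment (e.g. jsdom) provides implicitly.
assert urljoin("https://api.backend.dev", relative) == "https://api.backend.dev/api/user"
```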
113,471 | 11,806,609,722 | IssuesEvent | 2020-03-19 09:51:41 | se701g2/Doto | https://api.github.com/repos/se701g2/Doto | closed | Document backend API | documentation | ### User Story:
As a developer, I want to know what the backend API offers in terms of endpoints/data schemas, so that I can call the API as needed for any frontend features.
### Acceptance Criteria:
1) All endpoints should be documented, including http method, url and data returned in the Wiki.
324,824 | 9,913,142,044 | IssuesEvent | 2019-06-28 10:53:22 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | hub.docker.com - see bug description | browser-fenix engine-gecko priority-normal | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 7.1.2; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://hub.docker.com
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 7.1.2
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: when tapping in the search field, all current tabs become "about:blank"
**Steps to Reproduce**:
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
1,574 | 4,167,460,978 | IssuesEvent | 2016-06-20 09:36:28 | e-government-ua/iBP | https://api.github.com/repos/e-government-ua/iBP | closed | Issuance of approval of the operating mode of trade facilities - Pervomaisk - Mykolaiv Oblast | In process of testing in work test | [Approval of the operating mode of trade facilities.pdf](https://github.com/e-government-ua/iBP/files/262936/default.pdf)
**City coordinator:**
Vitaliy Koroy - 0991962269 - somati.orlik@gmail.com
Also, one more request: please add two more contacts to the recipients, for faster response:
kukharenko.vsevolod@gmail.com - Vsevolod Kukharenko
sergey.donchenko@gmail.com - Sergey Donchenko
**TsNAP (administrative services center) contact person:**
Nataliia Mykolaivna Petrushchak, natashadc@mail.ru, (099) 277-41-42
For testing, we send it to the TsNAP, but be sure to CC the 3 coordinators above.
181,169 | 21,645,589,051 | IssuesEvent | 2022-05-06 01:10:43 | AlexRogalskiy/java-patterns | https://api.github.com/repos/AlexRogalskiy/java-patterns | closed | WS-2019-0063 (High) detected in js-yaml-3.4.6.tgz - autoclosed | security vulnerability needs/triage | ## WS-2019-0063 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>js-yaml-3.4.6.tgz</b></p></summary>
<p>YAML 1.2 parser and serializer</p>
<p>Library home page: <a href="https://registry.npmjs.org/js-yaml/-/js-yaml-3.4.6.tgz">https://registry.npmjs.org/js-yaml/-/js-yaml-3.4.6.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/jscs/node_modules/js-yaml/package.json</p>
<p>
Dependency Hierarchy:
- jscs-3.0.7.tgz (Root Library)
- :x: **js-yaml-3.4.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/fe3fc69c6aa9128d2a4b77ebfd8d8e355574dd05">fe3fc69c6aa9128d2a4b77ebfd8d8e355574dd05</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of js-yaml prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.
<p>Publish Date: 2019-04-05
<p>URL: <a href=https://github.com/nodeca/js-yaml/pull/480>WS-2019-0063</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/813">https://www.npmjs.com/advisories/813</a></p>
<p>Release Date: 2019-04-05</p>
<p>Fix Resolution: js-yaml - 3.13.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2019-0063 (High) detected in js-yaml-3.4.6.tgz - autoclosed - ## WS-2019-0063 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>js-yaml-3.4.6.tgz</b></p></summary>
<p>YAML 1.2 parser and serializer</p>
<p>Library home page: <a href="https://registry.npmjs.org/js-yaml/-/js-yaml-3.4.6.tgz">https://registry.npmjs.org/js-yaml/-/js-yaml-3.4.6.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/jscs/node_modules/js-yaml/package.json</p>
<p>
Dependency Hierarchy:
- jscs-3.0.7.tgz (Root Library)
- :x: **js-yaml-3.4.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/fe3fc69c6aa9128d2a4b77ebfd8d8e355574dd05">fe3fc69c6aa9128d2a4b77ebfd8d8e355574dd05</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
js-yaml versions prior to 3.13.1 are vulnerable to Code Injection. The load() function may execute arbitrary code injected through a malicious YAML file.
<p>Publish Date: 2019-04-05
<p>URL: <a href=https://github.com/nodeca/js-yaml/pull/480>WS-2019-0063</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/813">https://www.npmjs.com/advisories/813</a></p>
<p>Release Date: 2019-04-05</p>
<p>Fix Resolution: js-yaml - 3.13.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | ws high detected in js yaml tgz autoclosed ws high severity vulnerability vulnerable library js yaml tgz yaml parser and serializer library home page a href path to dependency file package json path to vulnerable library node modules jscs node modules js yaml package json dependency hierarchy jscs tgz root library x js yaml tgz vulnerable library found in head commit a href found in base branch master vulnerability details js yaml prior to are vulnerable to code injection the load function may execute arbitrary code injected through a malicious yaml file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution js yaml step up your open source security game with whitesource | 0 |
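The advisory record above describes the general unsafe-deserialization class of flaw: `load()`-style parsers that can construct arbitrary native objects from untrusted input. js-yaml itself is JavaScript, so the sketch below illustrates the same rule using only the Python standard library, as a stand-in rather than js-yaml's actual API: parse untrusted data with a data-only format (JSON here), and reserve object-reconstructing loaders (`pickle` here, playing the role of an unsafe YAML load) for trusted bytes only. All names in the snippet are illustrative.

```python
import json
import pickle

untrusted = '{"user": "alice", "roles": ["admin"]}'

# A data-only parser: json.loads can only ever produce dicts, lists,
# strings, numbers, booleans, and None - no code runs during parsing.
data = json.loads(untrusted)
print(data["user"])  # alice

# By contrast, pickle (like an unsafe YAML load) reconstructs arbitrary
# objects and can trigger their construction logic, which is why it must
# never be fed untrusted bytes. Round-tripping trusted data is fine:
blob = pickle.dumps(data)
assert pickle.loads(blob) == data
```

The same split exists in most ecosystems: the fixed js-yaml line steers callers toward a safe-by-default schema for exactly this reason.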
193,135 | 22,216,069,863 | IssuesEvent | 2022-06-08 01:52:44 | praneethpanasala/linux | https://api.github.com/repos/praneethpanasala/linux | reopened | CVE-2020-12888 (Medium) detected in multiple libraries | security vulnerability | ## CVE-2020-12888 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-xlnxxilinx-v2019.2</b>, <b>linux-xlnxxilinx-v2019.2</b>, <b>linux-xlnxxilinx-v2019.2</b>, <b>linux-xlnxxilinx-v2019.2</b>, <b>linux-xlnxxilinx-v2019.2</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The VFIO PCI driver in the Linux kernel through 5.6.13 mishandles attempts to access disabled memory space.
<p>Publish Date: 2020-05-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-12888>CVE-2020-12888</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-12888">https://www.linuxkernelcves.com/cves/CVE-2020-12888</a></p>
<p>Release Date: 2020-11-02</p>
<p>Fix Resolution: v4.9.236, v4.14.198, v4.19.144, v5.4.64</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-12888 (Medium) detected in multiple libraries - ## CVE-2020-12888 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linux-xlnxxilinx-v2019.2</b>, <b>linux-xlnxxilinx-v2019.2</b>, <b>linux-xlnxxilinx-v2019.2</b>, <b>linux-xlnxxilinx-v2019.2</b>, <b>linux-xlnxxilinx-v2019.2</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The VFIO PCI driver in the Linux kernel through 5.6.13 mishandles attempts to access disabled memory space.
<p>Publish Date: 2020-05-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-12888>CVE-2020-12888</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2020-12888">https://www.linuxkernelcves.com/cves/CVE-2020-12888</a></p>
<p>Release Date: 2020-11-02</p>
<p>Fix Resolution: v4.9.236, v4.14.198, v4.19.144, v5.4.64</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries linux xlnxxilinx linux xlnxxilinx linux xlnxxilinx linux xlnxxilinx linux xlnxxilinx vulnerability details the vfio pci driver in the linux kernel through mishandles attempts to access disabled memory space publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required high user interaction none scope changed impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
11,700 | 14,544,988,865 | IssuesEvent | 2020-12-15 18:59:04 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Master still being used in documentation | devops-cicd-process/tech devops/prod doc-bug | I recently started a new DevOps project and tried the samples from the Schedules builds page. However, none of them worked. After struggling with this for about an hour or so, I finally found out that the master branch has been changed to main.
It might be good to either put this down in the documentation somewhere or simply change it to main. (If we simply change it to main people are going to run into this just like me though).
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 2ea2c851-bd1e-cddc-b4d0-e9f4112b8565
* Version Independent ID: 07c23fdd-14b5-985b-1c63-3f26f3a216ad
* Content: [Configure schedules to run pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/scheduled-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/scheduled-triggers.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie** | 1.0 | Master still being used in documentation - I recently started a new DevOps project and tried the samples from the Schedules builds page. However, none of them worked. After struggling with this for about an hour or so, I finally found out that the master branch has been changed to main.
It might be good to either put this down in the documentation somewhere or simply change it to main. (If we simply change it to main people are going to run into this just like me though).
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 2ea2c851-bd1e-cddc-b4d0-e9f4112b8565
* Version Independent ID: 07c23fdd-14b5-985b-1c63-3f26f3a216ad
* Content: [Configure schedules to run pipelines - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml)
* Content Source: [docs/pipelines/process/scheduled-triggers.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/scheduled-triggers.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @steved0x
* Microsoft Alias: **sdanie** | process | master still being used in documentation i recently started a new devops project and tried the samples from the schedules builds page however none of them worked after strugling with this for about an hour or so i finally found out that the master branch has been changed to main it might be good to either put this down in the documentation somewhere or simply change it to main if we simply change it to main people are going to run into this just like me though document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id cddc version independent id content content source product devops technology devops cicd process github login microsoft alias sdanie | 1 |
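Each record in this file ends by pairing the string `label` column (`process` / `non_process`) with the integer `binary_label` column (1 / 0) declared in the file header. A minimal sketch of that pairing — the mapping itself is read directly off the rows above, while the function wrapper is a made-up convenience:

```python
# "process" -> 1 and "non_process" -> 0 are read off the records in this
# file; nothing else about the dataset's loading pipeline is assumed.
LABEL_TO_BINARY = {"process": 1, "non_process": 0}

def binary_label(label: str) -> int:
    """Map the dataset's string label to its binary_label column value."""
    return LABEL_TO_BINARY[label]

print(binary_label("process"))      # 1
print(binary_label("non_process"))  # 0
```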
61,309 | 14,621,069,042 | IssuesEvent | 2020-12-22 20:54:09 | SmartBear/idea-collaborator-plugin | https://api.github.com/repos/SmartBear/idea-collaborator-plugin | opened | CVE-2019-14892 (High) detected in jackson-databind-2.5.0.jar | security vulnerability | ## CVE-2019-14892 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.5.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: idea-collaborator-plugin/client/lib/jackson-databind-2.5.0.jar,idea-collaborator-plugin/collabplugin/collaborator/collaborator/lib/jackson-databind-2.5.0.jar,idea-collaborator-plugin/collaborator-0_7-BETA/collaborator/lib/jackson-databind-2.5.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.5.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/idea-collaborator-plugin/commit/3e67fb2d437ffeadf07751b7979f4e35dbc282a2">3e67fb2d437ffeadf07751b7979f4e35dbc282a2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was discovered in jackson-databind in versions before 2.9.10, 2.8.11.5 and 2.6.7.3, where it would permit polymorphic deserialization of a malicious object using commons-configuration 1 and 2 JNDI classes. An attacker could use this flaw to execute arbitrary code.
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14892>CVE-2019-14892</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2462">https://github.com/FasterXML/jackson-databind/issues/2462</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.5.0","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.5.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10"}],"vulnerabilityIdentifier":"CVE-2019-14892","vulnerabilityDetails":"A flaw was discovered in jackson-databind in versions before 2.9.10, 2.8.11.5 and 2.6.7.3, where it would permit polymorphic deserialization of a malicious object using commons-configuration 1 and 2 JNDI classes. An attacker could use this flaw to execute arbitrary code.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14892","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-14892 (High) detected in jackson-databind-2.5.0.jar - ## CVE-2019-14892 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.5.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: idea-collaborator-plugin/client/lib/jackson-databind-2.5.0.jar,idea-collaborator-plugin/collabplugin/collaborator/collaborator/lib/jackson-databind-2.5.0.jar,idea-collaborator-plugin/collaborator-0_7-BETA/collaborator/lib/jackson-databind-2.5.0.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.5.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/SmartBear/idea-collaborator-plugin/commit/3e67fb2d437ffeadf07751b7979f4e35dbc282a2">3e67fb2d437ffeadf07751b7979f4e35dbc282a2</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was discovered in jackson-databind in versions before 2.9.10, 2.8.11.5 and 2.6.7.3, where it would permit polymorphic deserialization of a malicious object using commons-configuration 1 and 2 JNDI classes. An attacker could use this flaw to execute arbitrary code.
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14892>CVE-2019-14892</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2462">https://github.com/FasterXML/jackson-databind/issues/2462</a></p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.5.0","isTransitiveDependency":false,"dependencyTree":"com.fasterxml.jackson.core:jackson-databind:2.5.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.6.7.3,2.7.9.7,2.8.11.5,2.9.10"}],"vulnerabilityIdentifier":"CVE-2019-14892","vulnerabilityDetails":"A flaw was discovered in jackson-databind in versions before 2.9.10, 2.8.11.5 and 2.6.7.3, where it would permit polymorphic deserialization of a malicious object using commons-configuration 1 and 2 JNDI classes. An attacker could use this flaw to execute arbitrary code.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-14892","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_process | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library idea collaborator plugin client lib jackson databind jar idea collaborator plugin collabplugin collaborator collaborator lib jackson databind jar idea collaborator plugin collaborator beta collaborator lib jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was discovered in jackson databind in versions before and where it would permit polymorphic deserialization of a malicious object using commons configuration and jndi classes an attacker could use this flaw to execute arbitrary code publish date url a href cvss score details base score 
metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a flaw was discovered in jackson databind in versions before and where it would permit polymorphic deserialization of a malicious object using commons configuration and jndi classes an attacker could use this flaw to execute arbitrary code vulnerabilityurl | 0 |
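The jackson-databind record above concerns polymorphic deserialization: input that names its own type can steer the deserializer into instantiating a dangerous "gadget" class. The hardened releases listed in the fix resolution address this class of bug by restricting which types may be instantiated. The sketch below shows that allowlist pattern in Python, not jackson's actual Java API; the `"@type"` key and the factory table are assumptions made up for illustration.

```python
# Allowlist-based construction: only explicitly registered type names may
# be instantiated; anything else in the input is rejected, not built.
ALLOWED_FACTORIES = {
    "point": lambda obj: tuple(obj["coords"]),
}

def build(obj: dict):
    kind = obj.get("@type")
    if kind not in ALLOWED_FACTORIES:
        # Unknown or gadget types are rejected instead of instantiated.
        raise ValueError(f"type {kind!r} is not on the allowlist")
    return ALLOWED_FACTORIES[kind](obj)

print(build({"@type": "point", "coords": [1, 2]}))  # (1, 2)
```

The design choice mirrors the general fix direction for this CVE family: deny-by-default construction, rather than trying to blocklist each newly discovered gadget class.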
611,627 | 18,959,748,937 | IssuesEvent | 2021-11-19 02:11:39 | AbubakrMahmood/Interactive-ARC-Game | https://api.github.com/repos/AbubakrMahmood/Interactive-ARC-Game | opened | Implement SmartController Package into Unity Game | high priority | Need to implement the SmartController using Unity and test if it works. If not, must switch to using JavaScript as soon as possible. | 1.0 | Implement SmartController Package into Unity Game - Need to implement the SmartController using Unity and test if it works. If not, must switch to using JavaScript as soon as possible. | non_process | implement smartcontroller package into unity game need to implement the smartcontroller using unity and test if it works if not must switch to using javascript as soon as possible | 0 |
2,226 | 5,074,220,774 | IssuesEvent | 2016-12-27 13:11:01 | DynareTeam/dynare | https://api.github.com/repos/DynareTeam/dynare | closed | Allow adding auxiliary variables like Ramsey multipliers to var_list_ | preprocessor | The auxiliary variables are endogenous variables like every other variable. A call like
`ramsey_policy(instruments=(i),irf=13,planner_discount=betta,periods=200) x pi MULT_1;`
would be sufficient to display IRFs for the multiplier 1. However, the preprocessor does not allow adding `MULT_1` to the variable list, because:
`Unknown symbol: MULT_1`
We should allow adding any variable present in `M_.endo_names` to the `var_list_`. @houtanb Could you do this, please?
Related to http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=12117 | 1.0 | Allow adding auxiliary variables like Ramsey multipliers to var_list_ - The auxiliary variables are endogenous variables like every other variable. A call like
`ramsey_policy(instruments=(i),irf=13,planner_discount=betta,periods=200) x pi MULT_1;`
would be sufficient to display IRFs for the multiplier 1. However, the preprocessor does not allow adding `MULT_1` to the variable list, because:
`Unknown symbol: MULT_1`
We should allow adding any variable present in `M_.endo_names` to the `var_list_`. @houtanb Could you do this, please?
Related to http://www.dynare.org/phpBB3/viewtopic.php?f=1&t=12117 | process | allow adding auxiliary variables like ramsey multipliers to var list the auxiliary variables are endogenous variables like every other variable a call like ramsey policy instruments i irf planner discount betta periods x pi mult would be suficient to display irfs for the multiplier however the preprocessor does not allow adding mult to the variable list because unknown symbol mult we should allow adding any variable present in m endo names to the var list houtanb could you do this please related to | 1 |
66,072 | 27,326,507,065 | IssuesEvent | 2023-02-25 04:20:49 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Documentation App Service running on top of PaaS | app-service/svc triaged cxp product-question Pri2 | Do you think this statement that App Service is on top of PaaS is correct? Why isn't it IaaS?
> At its core, App Service is a service running on top of the Azure PaaS (platform as a service) infrastructure. As a result, the local drives that are "attached" to a virtual machine are the same drive types available to any worker role running in Azure. This includes:
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 0e6f6c24-c956-df49-ae5f-739312be4fed
* Version Independent ID: aba60038-0195-1b85-6a24-8994b421b5d2
* Content: [Operating system functionality - Azure App Service](https://learn.microsoft.com/en-us/azure/app-service/operating-system-functionality)
* Content Source: [articles/app-service/operating-system-functionality.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/operating-system-functionality.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | 1.0 | Documentation App Service running on top of PaaS - Do you think this statement that App Service is on top of PaaS is correct? Why isn't it IaaS?
> At its core, App Service is a service running on top of the Azure PaaS (platform as a service) infrastructure. As a result, the local drives that are "attached" to a virtual machine are the same drive types available to any worker role running in Azure. This includes:
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 0e6f6c24-c956-df49-ae5f-739312be4fed
* Version Independent ID: aba60038-0195-1b85-6a24-8994b421b5d2
* Content: [Operating system functionality - Azure App Service](https://learn.microsoft.com/en-us/azure/app-service/operating-system-functionality)
* Content Source: [articles/app-service/operating-system-functionality.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/app-service/operating-system-functionality.md)
* Service: **app-service**
* GitHub Login: @cephalin
* Microsoft Alias: **cephalin** | non_process | documentation app service running on top of paas do you think this statement about app service is on top of paas is correct why isn t it iaas at its core app service is a service running on top of the azure paas platform as a service infrastructure as a result the local drives that are attached to a virtual machine are the same drive types available to any worker role running in azure this includes document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin | 0 |
2,246 | 5,088,646,879 | IssuesEvent | 2016-12-31 23:57:49 | sw4j-org/tool-jpa-processor | https://api.github.com/repos/sw4j-org/tool-jpa-processor | opened | Handle @MapKeyJoinColumns Annotation | annotation processor task | Handle the `@MapKeyJoinColumns` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.36 MapKeyJoinColumns Annotation
| 1.0 | Handle @MapKeyJoinColumns Annotation - Handle the `@MapKeyJoinColumns` annotation for a property or field.
See [JSR 338: Java Persistence API, Version 2.1](http://download.oracle.com/otn-pub/jcp/persistence-2_1-fr-eval-spec/JavaPersistence.pdf)
- 11.1.36 MapKeyJoinColumns Annotation
| process | handle mapkeyjoincolumns annotation handle the mapkeyjoincolumns annotation for a property or field see mapkeyjoincolumns annotation | 1 |
17,745 | 23,658,996,326 | IssuesEvent | 2022-08-26 13:55:18 | streamnative/flink | https://api.github.com/repos/streamnative/flink | closed | [BUG][FLINK-28960][Stream] java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement | compute/data-processing type/bug | ```
Unknown HK2 failure detected:
MultiException stack 1 of 2
java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement
at org.apache.pulsar.shade.com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:137)
at org.apache.pulsar.shade.com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:124)
at org.apache.pulsar.shade.com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:116)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at java.base/java.lang.Class.newInstance(Class.java:584)
``` | 1.0 | [BUG][FLINK-28960][Stream] java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement - ```
Unknown HK2 failure detected:
MultiException stack 1 of 2
java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlElement
at org.apache.pulsar.shade.com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:137)
at org.apache.pulsar.shade.com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:124)
at org.apache.pulsar.shade.com.fasterxml.jackson.module.jaxb.JaxbAnnotationIntrospector.<init>(JaxbAnnotationIntrospector.java:116)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at java.base/java.lang.Class.newInstance(Class.java:584)
``` | process | java lang noclassdeffounderror javax xml bind annotation xmlelement unknown failure detected multiexception stack of java lang noclassdeffounderror javax xml bind annotation xmlelement at org apache pulsar shade com fasterxml jackson module jaxb jaxbannotationintrospector jaxbannotationintrospector java at org apache pulsar shade com fasterxml jackson module jaxb jaxbannotationintrospector jaxbannotationintrospector java at org apache pulsar shade com fasterxml jackson module jaxb jaxbannotationintrospector jaxbannotationintrospector java at java base jdk internal reflect nativeconstructoraccessorimpl native method at java base jdk internal reflect nativeconstructoraccessorimpl newinstance nativeconstructoraccessorimpl java at java base jdk internal reflect delegatingconstructoraccessorimpl newinstance delegatingconstructoraccessorimpl java at java base java lang reflect constructor newinstance constructor java at java base java lang class newinstance class java | 1 |
394,796 | 11,648,735,799 | IssuesEvent | 2020-03-01 22:28:14 | UWPCommunity/UWP-Visual-Asset-Generator | https://api.github.com/repos/UWPCommunity/UWP-Visual-Asset-Generator | closed | Use a BitmapImage as image source for displaying on screen | Priority UI improvement bug enhancement | The WriteableBitmapEx doesn't show transparency properly in Image or ImageEx components.
So we need a different imagesource for displaying in the app. | 1.0 | Use a BitmapImage as image source for displaying on screen - The WriteableBitmapEx doesn't show transparency properly in Image or ImageEx components.
So we need a different imagesource for displaying in the app. | non_process | use a bitmapimage as image source for displaying on screen the writeablebitmapex doesn t show transparency properly in image or imageex components so we need a different imagesource for displaying in the app | 0 |
19,874 | 27,607,353,520 | IssuesEvent | 2023-03-09 13:51:43 | fabiangreffrath/woof | https://api.github.com/repos/fabiangreffrath/woof | closed | [feature] Skip Level Finished Screen | invalid compatibility | UMAPINFO has an argument which lets you make a map skip showing the score screen upon ending, but this only works for levels where the game ends, not between regular levels.
What I'm looking to do is to put a text crawl before the first level of an episode, and you could do this with an empty dummy level which exits itself, but it will inevitably take you to the score screen anyway, which isn't exactly clean looking.
Is it out of line to add the ability for Woof to do this for levels which aren't ending the game? Or would you rather stay consistent with other ports? I asked Kraf if this was something which could be tweaked with UMAPINFO as a format, but it seems he's got no plans for anything of the like any time soon, and he'd rather the standard stay consistent, as I understood.
I'm not sure if this would introduce any problems, but possibly for multi-level demo recordings it could cause a desync if input is expected for the score screen and that screen isn't there when played back in another port which skips the screen. Maybe the .exe deliberately doesn't skip score screens when playing back a demo?
It's a small cosmetic thing, so it's not exactly a big deal to have an awkward transition, but maybe you can think of something, or maybe you know something I've missed. | True | [feature] Skip Level Finished Screen - UMAPINFO has an argument which lets you make a map skip showing the score screen upon ending, BUT, this only works for levels where the game ends, not between regular levels.
What I'm looking to do is to put a text crawl before the first level of an episode, and you could do this with an empty dummy level which exits itself, but it will inevitably take you to the score screen anyway, which isn't exactly clean looking.
Is it out of line to add the ability for Woof to do this for levels which aren't ending the game? Or would you rather stay consistent with other ports? I asked Kraf if this was something which could be tweaked with UMAPINFO as a format, but it seems he's got no plans for anything of the like any time soon, and he'd rather the standard stay consistent, as I understood.
I'm not sure if this would introduce any problems, but possibly for multi-level demo recordings it could cause a desync if input is expected for the score screen and that screen isn't there when played back in another port which skips the screen. Maybe the .exe deliberately doesn't skip score screens when playing back a demo?
It's a small cosmetic thing, so it's not exactly a big deal to have an awkward transition, but maybe you can think of something, or maybe you know something I've missed. | non_process | skip level finished screen umapinfo has an argument which lets you make a map skip showing the score screen upon ending but this only works for levels where the game ends not between regular levels what i m looking to do is to put a text crawl before the first level of an episode and you could do this with an empty dummy level which exits itself but it will inevitably take you to the score screen anyway which isn t exactly clean looking is it out of line to add the ability for woof to do this for levels which aren t ending the game or would you rather stay consistent with other ports i asked kraf if this was something which could be tweaked with umapinfo as a format but it seems he s got no plans for anything of the like any time soon and he d rather the standard stay consistent as i understood i m not sure if this would introduce any problems but possibly for multiple level demo recordings it could maybe cause a desync if input is expected for the score screen and now that screen isn t there when played up in another port which skips the screen maybe the exe deliberately doesn t skip score screens when playing up a demo it s a small cosmetic thing so it s not exactly a big deal to have an awkward transition but maybe you can think of something or maybe you know something i ve missed | 0 |
305,268 | 9,367,168,818 | IssuesEvent | 2019-04-03 04:13:26 | CS2103-AY1819S2-T12-4/main | https://api.github.com/repos/CS2103-AY1819S2-T12-4/main | closed | As an overwhelmed student, I can search for my pdfs from within pdf++ by assigned tags | priority.Medium type.Story | So that I can easily sift through my large collection of files to get specific files I need. | 1.0 | As an overwhelmed student, I can search for my pdfs from within pdf++ by assigned tags - So that I can easily sift through my large collection of files to get specific files I need. | non_process | as a overwhelmed student i can search for my pdfs from within pdf by assigned tags so that i can easily sift through my large collection of files to get specific files i need | 0
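The user story in the record above describes filtering a PDF collection by assigned tags. A minimal sketch of such a tag filter (the data layout and function name are illustrative assumptions, not PDF++'s actual API):

```python
def search_by_tags(files, wanted):
    """Return filenames whose tag sets contain every wanted tag."""
    return sorted(name for name, tags in files.items() if wanted <= tags)

# Toy library mapping filename -> set of assigned tags (illustrative data)
library = {
    "cs2103-notes.pdf": {"school", "cs2103"},
    "tax-2019.pdf": {"finance"},
    "cs2103-tutorial.pdf": {"school", "cs2103", "tutorial"},
}
print(search_by_tags(library, {"cs2103"}))
```

Set containment (`wanted <= tags`) keeps the query an AND over all requested tags, which matches "search by assigned tags" without committing to any particular ranking.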
55,727 | 14,020,206,304 | IssuesEvent | 2020-10-29 19:19:22 | srivatsamarichi/ContosoAir | https://api.github.com/repos/srivatsamarichi/ContosoAir | opened | CVE-2020-11022 (Medium) detected in jquery-1.7.1.min.js, jquery-3.3.1.tgz | security vulnerability | ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.7.1.min.js</b>, <b>jquery-3.3.1.tgz</b></p></summary>
<p>
<details><summary><b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: ContosoAir/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: ContosoAir/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: ContosoAir/package.json</p>
<p>Path to vulnerable library: ContosoAir/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- bootstrap-datepicker-1.8.0.tgz (Root Library)
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/ContosoAir/commit/7e3d160bd69713f60688f97955fd688a3fe91b8f">7e3d160bd69713f60688f97955fd688a3fe91b8f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11022 (Medium) detected in jquery-1.7.1.min.js, jquery-3.3.1.tgz - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-1.7.1.min.js</b>, <b>jquery-3.3.1.tgz</b></p></summary>
<p>
<details><summary><b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: ContosoAir/node_modules/vm-browserify/example/run/index.html</p>
<p>Path to vulnerable library: ContosoAir/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.3.1.tgz</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz">https://registry.npmjs.org/jquery/-/jquery-3.3.1.tgz</a></p>
<p>Path to dependency file: ContosoAir/package.json</p>
<p>Path to vulnerable library: ContosoAir/node_modules/jquery/package.json</p>
<p>
Dependency Hierarchy:
- bootstrap-datepicker-1.8.0.tgz (Root Library)
- :x: **jquery-3.3.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/srivatsamarichi/ContosoAir/commit/7e3d160bd69713f60688f97955fd688a3fe91b8f">7e3d160bd69713f60688f97955fd688a3fe91b8f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in jquery min js jquery tgz cve medium severity vulnerability vulnerable libraries jquery min js jquery tgz jquery min js javascript library for dom operations library home page a href path to dependency file contosoair node modules vm browserify example run index html path to vulnerable library contosoair node modules vm browserify example run index html dependency hierarchy x jquery min js vulnerable library jquery tgz javascript library for dom operations library home page a href path to dependency file contosoair package json path to vulnerable library contosoair node modules jquery package json dependency hierarchy bootstrap datepicker tgz root library x jquery tgz vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource | 0 |
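The CVE record above describes how, before jQuery 3.5.0, HTML passed to DOM-manipulation methods like `.html()` and `.append()` could execute untrusted code even after sanitizing. Besides upgrading, the defensive pattern is to treat untrusted input as text rather than markup (jQuery's `.text()` instead of `.html()`). A language-agnostic illustration of that escaping idea, sketched here with Python's standard library:

```python
from html import escape

untrusted = '<img src=x onerror="alert(1)">'  # attacker-controlled string

# Unsafe pattern: interpolating untrusted input directly into markup
unsafe_fragment = f"<div>{untrusted}</div>"

# Safe pattern: escape first, so the payload renders as inert text
safe_fragment = f"<div>{escape(untrusted)}</div>"

print(safe_fragment)
```

The escaped fragment carries the same characters to the user but can no longer introduce new elements or event handlers into the DOM.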
121,711 | 12,132,641,831 | IssuesEvent | 2020-04-23 07:39:44 | dusk-network/plonk | https://api.github.com/repos/dusk-network/plonk | closed | Improve documentation on pub modules submodules and structs. | documentation end-user-utility enhancement | Atm, with the merge of #165 we have a very basic documentation for the repo. We should:
- Decide which functions, traits and structs we expose to the end user. Set `pub(crate)` or `pub(self)` others.
- Improve documentation at module-level giving an explanation of what that module contains, what is it used for and how.
- Improve struct and fn doc comments. | 1.0 | Improve documentation on pub modules submodules and structs. - Atm, with the merge of #165 we have a very basic documentation for the repo. We should:
- Decide which functions, traits and structs we expose to the end user. Set `pub(crate)` or `pub(self)` others.
- Improve documentation at module-level giving an explanation of what that module contains, what is it used for and how.
- Improve struct and fn doc comments. | non_process | improve documentation on pub modules submodules and structs atm whith the merge of we have a very basic documentation for the repo we should decide which functions traits and structs we expose to the end user set pub crate or pub self others improve documentation at module level giving an explanation of what that module contains what is it used for and how improve struct and fn doc comments | 0 |
18,175 | 24,220,373,533 | IssuesEvent | 2022-09-26 10:19:45 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | GRASS r.horizon only works from 100–360 degrees | GRASS Processing Bug | ### What is the bug or the crash?
What happens here is QGIS starts GRASS and then calls r.horizon on a sequence of sightline angles. If the angle is from 0 to 99 degrees the GRASS command lines fail because QGIS does not consistently zero pad the angle to three digits as GRASS does. Angles of 100–360 work as expected because they're large enough not to require zero padding.
For example, QGIS will send something like this to GRASS
```
g.proj -c wkt=".../crs.prj"
r.in.gdal input="...\DEM.tif" band=1 output="rast_632650235f80710" --overwrite -o
g.region n=225753.8777 s=190753.8777 e=135246.5282 w=102446.5282 res=100.0
r.horizon elevation=rast_632650235f80710 step=90 start=0 end=360 distance=1 output=output5fc5a6d0bd9f4e38a9632dfb72afa090 --overwrite
g.region raster=output5fc5a6d0bd9f4e38a9632dfb72afa090_0
r.out.gdal -t -m input="output5fc5a6d0bd9f4e38a9632dfb72afa090_0" output="...\000.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
```
and it will fail at `g.region raster=output5fc5a6d0bd9f4e38a9632dfb72afa090_0`
```
ERROR: Raster map or group <output5fc5a6d0bd9f4e38a9632dfb72afa090_0> not found
```
because the correct call is `g.region raster=output5fc5a6d0bd9f4e38a9632dfb72afa090_000` (note the `_0` versus `_000` at the end of the raster name). The same layer name change is needed for the following `r.out.gdal` call's `input` parameter.
### Steps to reproduce the issue
Select any DEM layer that's handy, launch r.horizon from the toolbox, set the angle step size to some positive number as required, and click run. Since the default is to output horizon angles on sightlines from 0 to 360 degrees any r.horizon execution will then hit this issue on at least the 0 degree step.
### Versions
3.22.9
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
I haven't tested with non-integer angles or non-integer steps. Probably those are also worth a look.
A basic workaround sequence is to
1. Use the automation QGIS provides to generate output rasters in the 100–360 degree range.
2. Copy QGIS's output trace and edit the 0–99 degree export commands QGIS generates to have the needed zero padding and suitable paths.
3. Set up a GRASS project and paste the setup commands, `r.horizon` call, and edited exports into GRASS's command window. | 1.0 | GRASS r.horizon only works from 100–360 degrees - ### What is the bug or the crash?
What happens here is QGIS starts GRASS and then calls r.horizon on a sequence of sightline angles. If the angle is from 0 to 99 degrees the GRASS command lines fail because QGIS does not consistently zero pad the angle to three digits as GRASS does. Angles of 100–360 work as expected because they're large enough not to require zero padding.
For example, QGIS will send something like this to GRASS
```
g.proj -c wkt=".../crs.prj"
r.in.gdal input="...\DEM.tif" band=1 output="rast_632650235f80710" --overwrite -o
g.region n=225753.8777 s=190753.8777 e=135246.5282 w=102446.5282 res=100.0
r.horizon elevation=rast_632650235f80710 step=90 start=0 end=360 distance=1 output=output5fc5a6d0bd9f4e38a9632dfb72afa090 --overwrite
g.region raster=output5fc5a6d0bd9f4e38a9632dfb72afa090_0
r.out.gdal -t -m input="output5fc5a6d0bd9f4e38a9632dfb72afa090_0" output="...\000.tif" format="GTiff" createopt="TFW=YES,COMPRESS=LZW" --overwrite
```
and it will fail at `g.region raster=output5fc5a6d0bd9f4e38a9632dfb72afa090_0`
```
ERROR: Raster map or group <output5fc5a6d0bd9f4e38a9632dfb72afa090_0> not found
```
because the correct call is `g.region raster=output5fc5a6d0bd9f4e38a9632dfb72afa090_000` (note the `_0` versus `_000` at the end of the raster name). The same layer name change is needed for the following `r.out.gdal` call's `input` parameter.
### Steps to reproduce the issue
Select any DEM layer that's handy, launch r.horizon from the toolbox, set the angle step size to some positive number as required, and click run. Since the default is to output horizon angles on sightlines from 0 to 360 degrees any r.horizon execution will then hit this issue on at least the 0 degree step.
### Versions
3.22.9
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
I haven't tested with non-integer angles or non-integer steps. Probably those are also worth a look.
A basic workaround sequence is to
1. Use the automation QGIS provides to generate output rasters in the 100–360 degree range.
2. Copy QGIS's output trace and edit the 0–99 degree export commands QGIS generates to have the needed zero padding and suitable paths.
3. Set up a GRASS project and paste the setup commands, `r.horizon` call, and edited exports into GRASS's command window. | process | grass r horizon only works from – degrees what is the bug or the crash what happens here is qgis starts grass and then calls r horizon on a sequence of sightline angles if the angle is from to degrees the grass command lines fail because qgis does not consistently zero pad the angle to three digits as grass does angles of – work as expected because they re large enough not to require zero padding for example qgis will send something like this to grass g proj c wkt crs prj r in gdal input dem tif band output rast overwrite o g region n s e w res r horizon elevation rast step start end distance output overwrite g region raster r out gdal t m input output tif format gtiff createopt tfw yes compress lzw overwrite and it will fail at g region raster error raster map or group not found because the correct call is g region raster note the versus at the end of the raster name the same layer name change is needed for the following r out gdal call s input parameter steps to reproduce the issue select any dem layer that s handy launch r horizon from the toolbox set the angle step size to some positive number as required and click run since the default is to output horizon angles on sightlines from to degrees any r horizon execution will then hit this issue on at least the degree step versions supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context i haven t tested with non integer angles or non integer steps probably those are also worth a look a basic workaround sequence is to use the automation qgis provides to generate output rasters in the – degree range copy qgis s output trace and edit the – degree export commands qgis generates to have the needed zero padding and suitable paths set up a grass project and paste the setup commands r horizon call and 
edited exports into grass s command window | 1 |
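The root cause in the record above is a naming mismatch: GRASS zero-pads the angle suffix of r.horizon's output rasters to three digits (`_000`, `_090`, ...), while QGIS builds the name without padding (`_0`). A minimal sketch of the integer-angle case, showing how the follow-up `g.region` calls should be built (the raster name is copied from the trace above; the helper function is hypothetical, not QGIS's actual code):

```python
def horizon_layer_name(base, angle):
    """GRASS appends the sightline angle zero-padded to three digits."""
    return f"{base}_{int(angle):03d}"

base = "output5fc5a6d0bd9f4e38a9632dfb72afa090"
for angle in (0, 90, 180, 270):
    # Matches GRASS's own naming, so g.region no longer fails for angles < 100
    print(f"g.region raster={horizon_layer_name(base, angle)}")
```

Angles 100–360 are unchanged by the padding, which is consistent with the report that only the 0–99 degree steps failed.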
97,092 | 10,981,183,443 | IssuesEvent | 2019-11-30 19:47:47 | funkyADers/cs207-FinalProject | https://api.github.com/repos/funkyADers/cs207-FinalProject | opened | Final documentation | documentation | > Your documentation must be complete, easy to navigate, and clear. Remember to update the Background and How to Use sections of your documentation as you add more functionality to your package, so that the user has a good understanding of what he/she can do. Call the final form of your documentation `documentation`.
> Your documentation should be a mix of text and hands-on demos. As always, it is up to you and your group to determine the best way to accomplish this (e.g. Jupyter notebook, GitHub README, Sphinx/Read the Docs).
> You will receive full points as long as you have a docs/ folder and your documentation is complete. However, you may want to consider alternative ways of hosting your documentation. For example: Read the Docs or Sphinx. | 1.0 | Final documentation - > Your documentation must be complete, easy to navigate, and clear. Remember to update the Background and How to Use sections of your documentation as you add more functionality to your package, so that the user has a good understanding of what he/she can do. Call the final form of your documentation `documentation`.
> Your documentation should be a mix of text and hands-on demos. As always, it is up to you and your group to determine the best way to accomplish this (e.g. Jupyter notebook, GitHub README, Sphinx/Read the Docs).
> You will receive full points as long as you have a docs/ folder and your documentation is complete. However, you may want to consider alternative ways of hosting your documentation. For example: Read the Docs or Sphinx. | non_process | final documentation your documentation must be complete easy to navigate and clear remember to update the background and how to use sections of your documentation as you add more functionality to your package so that the user has a good understanding of what he she can do call the final form of your documentation documentation your documentation should be a mix of text and hands on demos as always it is up to you and your group to determine the best way to accomplish this e g jupyter notebook github readme sphinx read the docs you will receive full points as long as you have a docs folder and your documentation is complete however you may want to consider alternative ways of hosting your documentation for example read the docs or sphinx | 0 |
179,849 | 14,723,793,871 | IssuesEvent | 2021-01-06 01:11:28 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | opened | Identify and Document Contact Center Resource Repositories | VSP-contact-center content documentation research | ## Issue Description
_We need to fully understand where product guides and videos are stored internally as well as how they are provided to contact centers to use as a resource and where they are stored for contact center use_
---
## Acceptance Criteria
- [ ] _All items listed below are included in a single github document_
- [ ] _Internal product guide repository is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
- [ ] _Internal product video repository is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
- [ ] _External (contact center) product guide repository is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
- [ ] _External (contact center) product video repository is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
- [ ] _Process for providing contact centers with product guides and product videos is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
| 1.0 | Identify and Document Contact Center Resource Repositories - ## Issue Description
_We need to fully understand where product guides and videos are stored internally as well as how they are provided to contact centers to use as a resource and where they are stored for contact center use_
---
## Acceptance Criteria
- [ ] _All items listed below are included in a single github document_
- [ ] _Internal product guide repository is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
- [ ] _Internal product video repository is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
- [ ] _External (contact center) product guide repository is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
- [ ] _External (contact center) product video repository is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
- [ ] _Process for providing contact centers with product guides and product videos is identified and documented in our team space in github [here](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/teams/vsp/teams/contact-center)_
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
| non_process | identify and document contact center resource repositories issue description we need to fully understand where product guides and videos are stored internally as well as how they are provided to contact centers to use as a resource and where they are stored for contact center use acceptance criteria all items listed below are included in a single github document internal product guide repository is identified and documented in our team space in github internal product video repository is identified and documented in our team space in github external contact center product guide repository is identified and documented in our team space in github external contact center product guide repository is identified and documented in our team space in github process for providing contact centers with product guides and product videos is identified and documented in our team space in github how to configure this issue attached to a milestone when will this be completed attached to an epic what body of work is this a part of labeled with team product support analytics insights operations service design tools be tools fe labeled with practice area backend frontend devops design research product ia qa analytics contact center research accessibility content labeled with type bug request discovery documentation etc | 0 |
337,450 | 10,218,204,464 | IssuesEvent | 2019-08-15 15:23:37 | kubernetes/minikube | https://api.github.com/repos/kubernetes/minikube | reopened | Windows: Path to client.crt is calculated incorrectly | good first issue help wanted kind/bug os/windows priority/important-longterm r/2019q2 | <!-- Thank you for sharing your experience! If you are reporting a bug, please include: -->
<!-- * The exact command-lines used so that we can replicate the issue -->
<!-- * The full output of the command that failed -->
<!-- * The output of the "minikube logs" command, if applicable -->
<!-- * Which operating system version was used -->
While trying to run ```minikube start``` using minikube version: **v1.0.0** on Windows 10, with following config:
```
{
"WantReportError": true,
"WantReportErrorPrompt": false,
"dashboard": true,
"hyperv-virtual-switch": "Default Switch",
"profile": "minikube",
"vm-driver": "hyperv"
}
```
I get the following error:
```
C:\Users\Diego.Mendes>minikube start
o minikube v1.0.0 on windows (amd64)
$ Downloading Kubernetes v1.14.0 images in the background ...
> Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@ Downloading Minikube ISO ...
142.88 MB / 142.88 MB [============================================] 100.00% 0s
- "minikube" IP address is 172.17.57.124
- Configuring Docker as the container runtime ...
- Version of container runtime is 18.06.2-ce
: Waiting for image downloads to complete ...
- Preparing Kubernetes environment ...
@ Downloading kubelet v1.14.0
@ Downloading kubeadm v1.14.0
- Pulling images required by Kubernetes v1.14.0 ...
- Launching Kubernetes v1.14.0 using kubeadm ...
: Waiting for pods:
! Error starting cluster: wait: k8s client: Error creating kubeConfig: invalid configuration: [unable to read client-cert C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.crt for minikube due to open C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.crt: The system cannot find the path specified., unable to read client-key C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.key for minikube due to open C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.key: The system cannot find the path specified., unable to read certificate-authority C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\ca.crt for minikube due to open C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\ca.crt: The system cannot find the path specified.]
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
- https://github.com/kubernetes/minikube/issues/new
```
Looking into the details, I noticed the path to the certificates is invalid; it tries:
```C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.crt```
Where it should be:
```C:\Users\Diego.Mendes\.minikube\client.crt```
Minikube is trying to find the certificates by adding an extra ```.kube\Users\Diego.Mendes``` segment that does not exist. I couldn't find the proper way to set the path, so I copied the certificates to the path Minikube expects, which works around the problem.
PS: I had to delete and recreate the cluster to make it work properly. | 1.0 | Windows: Path to client.crt is calculated incorrectly - <!-- Thank you for sharing your experience! If you are reporting a bug, please include: -->
<!-- * The exact command-lines used so that we can replicate the issue -->
<!-- * The full output of the command that failed -->
<!-- * The output of the "minikube logs" command, if applicable -->
<!-- * Which operating system version was used -->
While trying to run ```minikube start``` using minikube version: **v1.0.0** on Windows 10, with the following config:
```
{
"WantReportError": true,
"WantReportErrorPrompt": false,
"dashboard": true,
"hyperv-virtual-switch": "Default Switch",
"profile": "minikube",
"vm-driver": "hyperv"
}
```
I get the following error:
```
C:\Users\Diego.Mendes>minikube start
o minikube v1.0.0 on windows (amd64)
$ Downloading Kubernetes v1.14.0 images in the background ...
> Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@ Downloading Minikube ISO ...
142.88 MB / 142.88 MB [============================================] 100.00% 0s
- "minikube" IP address is 172.17.57.124
- Configuring Docker as the container runtime ...
- Version of container runtime is 18.06.2-ce
: Waiting for image downloads to complete ...
- Preparing Kubernetes environment ...
@ Downloading kubelet v1.14.0
@ Downloading kubeadm v1.14.0
- Pulling images required by Kubernetes v1.14.0 ...
- Launching Kubernetes v1.14.0 using kubeadm ...
: Waiting for pods:
! Error starting cluster: wait: k8s client: Error creating kubeConfig: invalid configuration: [unable to read client-cert C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.crt for minikube due to open C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.crt: The system cannot find the path specified., unable to read client-key C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.key for minikube due to open C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.key: The system cannot find the path specified., unable to read certificate-authority C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\ca.crt for minikube due to open C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\ca.crt: The system cannot find the path specified.]
* Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
- https://github.com/kubernetes/minikube/issues/new
```
Looking into the details, I noticed the path to the certificates is invalid; it tries:
```C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.crt```
Where it should be:
```C:\Users\Diego.Mendes\.minikube\client.crt```
Minikube is trying to find the certificates by adding an extra ```.kube\Users\Diego.Mendes``` segment that does not exist. I couldn't find the proper way to set the path, so I copied the certificates to the path Minikube expects, which works around the problem.
PS: I had to delete and recreate the cluster to make it work properly. | non_process | windows path to client crt is calculated incorrectly while trying to run minikube start using minikube version on windows with following config wantreporterror true wantreporterrorprompt false dashboard true hyperv virtual switch default switch profile minikube vm driver hyperv i get the following error c users diego mendes minikube start o minikube on windows downloading kubernetes images in the background creating hyperv vm cpus memory disk downloading minikube iso mb mb minikube ip address is configuring docker as the container runtime version of container runtime is ce waiting for image downloads to complete preparing kubernetes environment downloading kubelet downloading kubeadm pulling images required by kubernetes launching kubernetes using kubeadm waiting for pods error starting cluster wait client error creating kubeconfig invalid configuration sorry that minikube crashed if this was unexpected we would love to hear from you looking into details i noticed the path to the certificates are invalid it tries c users diego mendes kube users diego mendes minikube client crt where should should be c users diego mendes minikube client crt minikube is trying to find the certificates adding an extra kube users diego mendes that does not exist i couldn t find the proper way to set the path so i decided to copy the certificates to the path and it works around the problem ps i had to delete and recreate the cluster to make it work properly | 0 |
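The doubled certificate path in the report above can be illustrated with a small sketch (hypothetical — not minikube's actual code): naively concatenating the kubeconfig directory with an already-absolute certificate path reproduces the broken path, while a Windows-aware join keeps the second absolute path intact.

```python
import ntpath  # Windows path semantics, usable on any OS

kube_dir = r"C:\Users\Diego.Mendes\.kube"
cert = r"C:\Users\Diego.Mendes\.minikube\client.crt"

# Naive concatenation after stripping the drive prefix reproduces the
# doubled path from the error message above.
broken = kube_dir + "\\" + cert.replace("C:\\", "", 1)
assert broken == r"C:\Users\Diego.Mendes\.kube\Users\Diego.Mendes\.minikube\client.crt"

# ntpath.join keeps a second absolute path intact, which is the expected result.
assert ntpath.join(kube_dir, cert) == cert
```

Copying the certificates into the doubled path, as the reporter did, works around the symptom without touching the join logic.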
320,039 | 9,763,963,335 | IssuesEvent | 2019-06-05 14:50:44 | spacetelescope/jwql | https://api.github.com/repos/spacetelescope/jwql | opened | Update jwql environments on dev web server | Environment High Priority Web Application | Now that we have specific environments tied to python versions, we should update the deployed `jwql` environment on the dev web server that is running the dev web app to reflect these new changes. | 1.0 | Update jwql environments on dev web server - Now that we have specific environments tied to python versions, we should update the deployed `jwql` environment on the dev web server that is running the dev web app to reflect these new changes. | non_process | update jwql environments on dev web server now that we have specific environments tied to python versions we should update the deployed jwql environment on the dev web server that is running the dev web app to reflect these new changes | 0 |
620,653 | 19,566,536,162 | IssuesEvent | 2022-01-04 01:48:13 | glm729/ytdlp_utils | https://api.github.com/repos/glm729/ytdlp_utils | closed | Abstract grouped data into separate class definitions | priority/3 | Some of the data in the playlist handler are linked or related, and could be captured by a class definition. Define classes for handling grouped or related data, e.g. `current_video`, `_playlist`.
| 1.0 | Abstract grouped data into separate class definitions - Some of the data in the playlist handler are linked or related, and could be captured by a class definition. Define classes for handling grouped or related data, e.g. `current_video`, `_playlist`.
| non_process | abstract grouped data into separate class definitions some of the data in the playlist handler are linked or related and could be captured by a class definition define classes for handling grouped or related data e g current video playlist | 0 |
11,202 | 13,957,703,475 | IssuesEvent | 2020-10-24 08:13:56 | alexanderkotsev/geoportal | https://api.github.com/repos/alexanderkotsev/geoportal | opened | BE: Harvesting frequency - activation on demand | BE - Belgium Geoportal Harvesting process | From: EC-INSPIRE-INFO@ec.europa.eu
Sent: 26 April 2018 17:06:58 (UTC+01:00) Brussels, Copenhagen, Madrid, Paris
To: ouns.kissiyar@kb.vlaanderen.be; JRC INSPIRE SUPPORT
Subject: [THEMATIC VIEWER Support] harvesting frequency - activation on demand
Dear INSPIRE Geoportal Thematic viewer team,
Dear team, due to the multidisciplinary aspect of the team working on your requests to populate the new geoportal with data by May 15th, we would like to inquire whether it is possible to activate harvesting of the Geopunt catalogue on demand. The reason behind it is the sequential improvement of the "errors" reported (once an "error" is "corrected", the next "error" shows up after the next harvesting; this makes it difficult to work effectively and virtually impossible to achieve anything by May 15th). Thank you in advance, Ouns
Best regards,
Ouns Kissiyar
E-mail: ouns.kissiyar@kb.vlaanderen.be | 1.0 | BE: Harvesting frequency - activation on demand - From: EC-INSPIRE-INFO@ec.europa.eu
Sent: 26 April 2018 17:06:58 (UTC+01:00) Brussels, Copenhagen, Madrid, Paris
To: ouns.kissiyar@kb.vlaanderen.be; JRC INSPIRE SUPPORT
Subject: [THEMATIC VIEWER Support] harvesting frequency - activation on demand
Dear INSPIRE Geoportal Thematic viewer team,
Dear team, due to the multidisciplinary aspect of the team working on your requests to populate the new geoportal with data by May 15th, we would like to inquire whether it is possible to activate harvesting of the Geopunt catalogue on demand. The reason behind it is the sequential improvement of the "errors" reported (once an "error" is "corrected", the next "error" shows up after the next harvesting; this makes it difficult to work effectively and virtually impossible to achieve anything by May 15th). Thank you in advance, Ouns
Best regards,
Ouns Kissiyar
E-mail: ouns.kissiyar@kb.vlaanderen.be | process | be harvesting frequency activation on demand from ec inspire info ec europa eu sent april utc brussels copenhagen madrid paris to ouns kissiyar kb vlaanderen be jrc inspire support subject harvesting frequency activation on demand dear inspire geoportal thematic viewer team dear due to the multidisciplinary aspect of the team working on your requests to populate the new geopoertal with data by may we would like to inquire if it is possible to activate harvesting of the geopunt catalogue on demand the reason behind it is the sequential improvements of the quot errors quot reported once an quot error quot is quot corrected quot the next quot error quot shows up after the next harvesting this makes it difficult to work effectively and virtually impossible to achieve anything by may thank you in advance ouns best regards ouns kissiyar e mail ouns kissiyar kb vlaanderen be | 1 |
19,176 | 25,284,213,934 | IssuesEvent | 2022-11-16 17:55:12 | googleapis/nodejs-compute | https://api.github.com/repos/googleapis/nodejs-compute | closed | Reference docs should be published to cloud.google.com | type: process api: compute | This library has not had its reference doc publication pipeline updated to publish to cloud.google.com. | 1.0 | Reference docs should be published to cloud.google.com - This library has not had its reference doc publication pipeline updated to publish to cloud.google.com. | process | reference docs should be published to cloud google com this library has not had is reference doc publication pipeline updated to publish to cloud google com | 1
85,631 | 10,653,005,934 | IssuesEvent | 2019-10-17 13:41:54 | microsoft/AL | https://api.github.com/repos/microsoft/AL | closed | Cannot download symbols for v15.x | bydesign | **Describe the bug**
Attempting to download symbols from any sandbox running v15 fails.
System symbols are downloaded successfully, application symbols are not.
```
[2019-10-17 11:16:15.83] Sending request to https://api.businesscentral.dynamics.com/v2.0/OctoberTest/dev/packages?publisher=Microsoft&appName=Application&versionText=15.0.0.0
[2019-10-17 11:16:15.83] Sending request to https://api.businesscentral.dynamics.com/v2.0/OctoberTest/dev/packages?publisher=Microsoft&appName=System&versionText=15.0.0.0
[2019-10-17 11:16:16.38] The request for path /v2.0/OctoberTest/dev/packages?publisher=Microsoft&appName=Application&versionText=15.0.0.0 failed with code NotFound. Reason: No published package matches the provided arguments.
```
I have D365 Extension Management permissions.
V15 is definitely installed as I have checked the extension management page and the application extension is present.
I've tried with all major version numbers (1 through 20 just in case!), specific version numbers etc, set my runtime to 2/3/4.
I've tried on multiple sandboxes (admittedly only with this specific BC version)
Everything is up to date.
Has this process changed recently?
**Steps to reproduce the behavior:**
Point VS code at any v15 sandbox and try to download symbols.
**Expected behavior**
The symbols should be downloaded.
**Versions:**
- AL Language:
4.0.182565
- Business Central:
15.0.36626.37063 | 1.0 | Cannot download symbols for v15.x - **Describe the bug**
Attempting to download symbols from any sandbox running v15 fails.
System symbols are downloaded successfully, application symbols are not.
```
[2019-10-17 11:16:15.83] Sending request to https://api.businesscentral.dynamics.com/v2.0/OctoberTest/dev/packages?publisher=Microsoft&appName=Application&versionText=15.0.0.0
[2019-10-17 11:16:15.83] Sending request to https://api.businesscentral.dynamics.com/v2.0/OctoberTest/dev/packages?publisher=Microsoft&appName=System&versionText=15.0.0.0
[2019-10-17 11:16:16.38] The request for path /v2.0/OctoberTest/dev/packages?publisher=Microsoft&appName=Application&versionText=15.0.0.0 failed with code NotFound. Reason: No published package matches the provided arguments.
```
I have D365 Extension Management permissions.
V15 is definitely installed as I have checked the extension management page and the application extension is present.
I've tried with all major version numbers (1 through 20 just in case!), specific version numbers etc, set my runtime to 2/3/4.
I've tried on multiple sandboxes (admittedly only with this specific BC version)
Everything is up to date.
Has this process changed recently?
**Steps to reproduce the behavior:**
Point VS code at any v15 sandbox and try to download symbols.
**Expected behavior**
The symbols should be downloaded.
**Versions:**
- AL Language:
4.0.182565
- Business Central:
15.0.36626.37063 | non_process | cannot download symbols for x describe the bug attempting to download symbols from any sandbox running fails system symbols are downloaded successfully application symbols are not sending request to sending request to the request for path octobertest dev packages publisher microsoft appname application versiontext failed with code notfound reason no published package matches the provided arguments i have extension management permissions is definitely installed as i have checked the extension management page and the application extension is present i ve tried with all major version numbers through just in case specific version numbers etc set my runtime to i ve tried on multiple sandboxes admittedly only with this specific bc version everything is up to date has this process changed recently steps and to reproduce the behavior point vs code at any sandbox and try to download symbols expected behavior the symbols should be downloaded versions al language business central | 0 |
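The failing request in the symbol-download log above can be rebuilt from its parts; the sketch below (illustrative only — not the AL extension's code) shows that standard percent-encoding of those three query parameters yields exactly the URL the extension reports:

```python
from urllib.parse import urlencode

base = "https://api.businesscentral.dynamics.com/v2.0/OctoberTest/dev/packages"
params = {"publisher": "Microsoft", "appName": "Application", "versionText": "15.0.0.0"}

# Build the request URL the same way the log above shows it.
url = f"{base}?{urlencode(params)}"
assert url == (
    "https://api.businesscentral.dynamics.com/v2.0/OctoberTest/dev/packages"
    "?publisher=Microsoft&appName=Application&versionText=15.0.0.0"
)
```

The URL itself is well-formed, which is consistent with the server-side `NotFound` ("No published package matches the provided arguments") rather than a client-side encoding problem.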
27,089 | 12,512,403,186 | IssuesEvent | 2020-06-02 22:42:33 | terraform-providers/terraform-provider-azuread | https://api.github.com/repos/terraform-providers/terraform-provider-azuread | closed | Support creation of Azure AD Service Principal with certificate | enhancement feature/service-principal | ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Description
Please add the ability to create an Azure AD Service Principal with a certificate, using the azuread_service_principal_certificate resource or an updated azuread_service_principal resource, as described in the documentation reference below
### New or Affected Resource(s)
* azuread_service_principal_certificate
* azuread_service_principal
### References
https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest
| 1.0 | Support creation of Azure AD Service Principal with certificate - ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
### Description
Please add the ability to create an Azure AD Service Principal with a certificate, using the azuread_service_principal_certificate resource or an updated azuread_service_principal resource, as described in the documentation reference below
### New or Affected Resource(s)
* azuread_service_principal_certificate
* azuread_service_principal
### References
https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest
| non_process | support creation of azure ad service principal with certificate community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description please add ability to create azure ad service principal with certificate using azuread service principal certificate or updated azuread service principal resources as it is described in below documentation reference new or affected resource s azuread service principal certificate azuread service principal references | 0 |
3,933 | 6,849,270,235 | IssuesEvent | 2017-11-13 21:28:42 | syndesisio/syndesis-ui | https://api.github.com/repos/syndesisio/syndesis-ui | opened | Format HTML template files as a pre-commit hook | dev process | Now that we have it for typescript, json, css and scss files, we should have the same for HTML files to help @kahboom maintain sanity. | 1.0 | Format HTML template files as a pre-commit hook - Now that we have it for typescript, json, css and scss files, we should have the same for HTML files to help @kahboom maintain sanity. | process | format html template files as a pre commit hook now that we have it for typescript json css and scss files we should have the same for html files to help kahboom maintain sanity | 1 |
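For the pre-commit idea in the row above, the hook's core step is listing the staged HTML template files to format. A minimal sketch (hypothetical helper names; assumes `git` is on PATH when `staged_html_files` is called):

```python
import subprocess

def filter_by_ext(names, ext):
    """Keep only file names ending with ext."""
    return [name for name in names if name.endswith(ext)]

def staged_html_files():
    # List files staged for commit (added/copied/modified) and keep the
    # HTML templates that should be formatted before committing.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return filter_by_ext(out.splitlines(), ".html")
```

The same pattern generalizes to the typescript/json/css/scss hooks already in place — only the extension filter changes.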
696,837 | 23,918,097,004 | IssuesEvent | 2022-09-09 14:23:33 | eclipse/lsp4jakarta | https://api.github.com/repos/eclipse/lsp4jakarta | closed | Filter JAXRS diagnostics based on classpath/imports | high priority 1 | #### Description:
- [ ] ResourceMethodDiagnosticsCollector
- [ ] Jax_RSClassDiagnosticsCollector
#### Specification:
#### Type of language feature proposed:
_Select all that apply_
- [x] diagnostic
- [ ] quick-fix
- [ ] snippet
- [ ] other, please specify:
| 1.0 | Filter JAXRS diagnostics based on classpath/imports - #### Description:
- [ ] ResourceMethodDiagnosticsCollector
- [ ] Jax_RSClassDiagnosticsCollector
#### Specification:
#### Type of language feature proposed:
_Select all that apply_
- [x] diagnostic
- [ ] quick-fix
- [ ] snippet
- [ ] other, please specify:
| non_process | filter jaxrs diagnostics based on classpath imports description resourcemethoddiagnosticscollector jax rsclassdiagnosticscollector specification type of language feature proposed select all that apply diagnostic quick fix snippet other please specify | 0 |
38,274 | 19,087,052,600 | IssuesEvent | 2021-11-29 07:48:33 | haskell-unordered-containers/unordered-containers | https://api.github.com/repos/haskell-unordered-containers/unordered-containers | closed | Consider branching factor of 32 or 64 | performance | In my very POC [Array-mapped trie implementation](https://github.com/sgraf812/amt), benchmarks suggested that a branching factor of 32 or even 64 would result in better performance on my PC. The paper on [RRB trees](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjY5N-WkL7rAhWRGuwKHaUGDZUQFjABegQIBBAB&url=https%3A%2F%2Finfoscience.epfl.ch%2Frecord%2F169879%2Ffiles%2FRMTrees.pdf&usg=AOvVaw1kP_Q4DGjudonr4VBY3itp) for example mentions that 32 is a sensible balanced choice. 64 favors lookup performance over insert performance. | True | Consider branching factor of 32 or 64 - In my very POC [Array-mapped trie implementation](https://github.com/sgraf812/amt), benchmarks suggested that a branching factor of 32 or even 64 would result in better performance on my PC. The paper on [RRB trees](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjY5N-WkL7rAhWRGuwKHaUGDZUQFjABegQIBBAB&url=https%3A%2F%2Finfoscience.epfl.ch%2Frecord%2F169879%2Ffiles%2FRMTrees.pdf&usg=AOvVaw1kP_Q4DGjudonr4VBY3itp) for example mentions that 32 is a sensible balanced choice. 64 favors lookup performance over insert performance. | non_process | consider branching factor of or in my very poc benchmarks suggested that a branching factor of or even would result in better performance on my pc the paper on for example mentions that is a sensible balanced choice favors lookup performance over insert performance | 0 |
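The lookup/insert trade-off in the branching-factor row above follows from trie depth: with branching factor b, each level consumes log2(b) bits of a 32-bit hash, so a larger b means shorter lookup paths but bigger nodes to copy on insert. A quick sketch of the depth arithmetic:

```python
import math

def max_depth(branching_factor, hash_bits=32):
    # Each trie level consumes log2(b) hash bits, so a full 32-bit hash
    # is exhausted after ceil(hash_bits / log2(b)) levels.
    return math.ceil(hash_bits / math.log2(branching_factor))

assert max_depth(16) == 8  # 4-bit chunks
assert max_depth(32) == 7  # 5-bit chunks, the common HAMT choice
assert max_depth(64) == 6  # 6-bit chunks, favouring lookups over inserts
```

This is the depth bound only; the copy-on-write cost per insert grows with node size, which is why 64 favors lookups over inserts as the issue notes.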
14,816 | 10,217,194,791 | IssuesEvent | 2019-08-15 13:02:39 | cityofaustin/atd-geospatial | https://api.github.com/repos/cityofaustin/atd-geospatial | closed | Special Events Prototype Application: Esri Zoom Meeting | Service: Dev Type: Meeting Workgroup: DTS Workgroup: Other | Attended this meeting @ CTECC in conjunction with another meeting. Esri gave a presentation on a Special Events Prototype Application. Below are my notes:
> - Solution development team - create templates
> - Based on best practices from prior development
> - Special Events templates - going to be revamped
>
> - Permit application process managed electronically
> - Special Event safety plan
> - Where field staff are located
> - Better coordination between agencies
> - Respond to threats in a more timely fashion
> - Learn from prior year events, re-use plans
> - Keep a history of events
>
> - Permitting, Planning Stage, Operation Stage, Post Event Stage
> - Key people
> ○ Event organizer
> ○ Law enforcement
> ○ Fire/ems
> ○ Event Coordinator
> ○ Dept staff
> ○ Executive
> ○ GIS Analyst
>
> Permit Submittal
> - Start with a survey for an event (event submitter)
> ○ Dependent questions
> ○ Collecting a general location of the event
> ○ Event site map
> - Event coordinator review
> ○ Special event manager
> ○ To review permits
> ○ Change to under review, new choices pop up
> ○ Comments from other staff come through in app for coordinator to see
> ○ Email coordinator, send a site map link and permit link
> - Dashboard for executive level for overall impact
> Planning Stage
> - Site map - public safety locations to be added
> - Complement to an incident action plan
> - Crowd size estimates for an area
> - Can setup a grid for an area very easily
> - Has access to emergency response guide for what-if scenarios
> Operation Stage
> - Survey - Special Event Activity Reporter
> ○ Lost child
> ○ Suspicious activity
> ○ General activity
> - Written back to a workforce project
> ○ Dispatcher can use this to coordinate
> ○ Can search the map for resources
> - Safety Dashboard
> ○ Look at multiple events if they are happening at the same time
> Destination Pages and Internal Pages
> - Link to all of the applications talked about above
> wpotts@esri.com | 1.0 | Special Events Prototype Application: Esri Zoom Meeting - Attended this meeting @ CTECC in conjunction with another meeting. Esri gave a presentation on a Special Events Prototype Application. Below are my notes:
> - Solution development team - create templates
> - Based on best practices from prior development
> - Special Events templates - going to be revamped
>
> - Permit application process managed electronically
> - Special Event safety plan
> - Where field staff are located
> - Better coordination between agencies
> - Respond to threats in a more timely fashion
> - Learn from prior year events, re-use plans
> - Keep a history of events
>
> - Permitting, Planning Stage, Operation Stage, Post Event Stage
> - Key people
> ○ Event organizer
> ○ Law enforcement
> ○ Fire/ems
> ○ Event Coordinator
> ○ Dept staff
> ○ Executive
> ○ GIS Analyst
>
> Permit Submittal
> - Start with a survey for an event (event submitter)
> ○ Dependent questions
> ○ Collecting a general location of the event
> ○ Event site map
> - Event coordinator review
> ○ Special event manager
> ○ To review permits
> ○ Change to under review, new choices pop up
> ○ Comments from other staff come through in app for coordinator to see
> ○ Email coordinator, send a site map link and permit link
> - Dashboard for executive level for overall impact
> Planning Stage
> - Site map - public safety locations to be added
> - Complement to an incident action plan
> - Crowd size estimates for an area
> - Can setup a grid for an area very easily
> - Has access to emergency response guide for what-if scenarios
> Operation Stage
> - Survey - Special Event Activity Reporter
> ○ Lost child
> ○ Suspicious activity
> ○ General activity
> - Written back to a workforce project
> ○ Dispatcher can use this to coordinate
> ○ Can search the map for resources
> - Safety Dashboard
> ○ Look at multiple events if they are happening at the same time
> Destination Pages and Internal Pages
> - Link to all of the applications talked about above
> wpotts@esri.com | non_process | special events prototype application esri zoom meeting attended this meeting ctecc in conjunction with another meeting esri gave a presentation on a special events prototype application below are my notes solution development team create templates based on best practices from prior development special events templates going to be revamped permit application process managed electonically special event safety plan where field staff are located better coordination between agencies respond to threats in a more timely fashion learn from prior year events re use plans keep a history of events permitting planning stage operation stage post event stage key people ○ event organizer ○ law enforcement ○ fire ems ○ event coordinator ○ dept staff ○ executive ○ gis analyst permit submittal start with a survey for an event event submitter ○ dependent questions ○ collecting a general location of the event ○ event site map event coordinator review ○ special event manager ○ to review permits ○ change to under review new choices pop up ○ comments from other staff come through in app for coordinator to see ○ email coordinator send a site map link and permit link dashboard for executive level for overall impact planning stage site map public safety locations to be added complement to a incident action plan crowd size estimates for an area can setup a grid for an area very easily has access to emergency response guide for what if scenarios operation stage survey special event activity reporter ○ lost child ○ suspicious activity ○ general activity written back to a workforce project ○ dispatcher can use this to coordinate ○ can search the map for resources safety dashboard ○ look at multiple events if they are happening at the same time destination pages and internal pages link to all of the applications talked about above wpotts esri com | 0 |
563,848 | 16,706,168,060 | IssuesEvent | 2021-06-09 10:12:41 | googleapis/google-api-ruby-client | https://api.github.com/repos/googleapis/google-api-ruby-client | closed | Synthesis failed for datafusion-v1 | autosynth failure priority: p1 type: bug | Hello! Autosynth couldn't regenerate datafusion-v1. :broken_heart:
Please investigate and fix this issue within 5 business days. While it remains broken,
this library cannot be updated with changes to the datafusion-v1 API, and the library grows
stale.
See https://github.com/googleapis/synthtool/blob/master/autosynth/TroubleShooting.md
for troubleshooting tips.
Here's the output from running `synth.py`:
```
2021-06-08 03:02:29,448 autosynth [INFO] > logs will be written to: /tmpfs/src/logs/google-api-ruby-client
2021-06-08 03:02:30,272 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2021-06-08 03:02:30,275 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2021-06-08 03:02:30,277 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2021-06-08 03:02:30,280 autosynth [DEBUG] > Running: git config push.default simple
2021-06-08 03:02:30,283 autosynth [DEBUG] > Running: git branch -f autosynth-datafusion-v1
2021-06-08 03:02:30,286 autosynth [DEBUG] > Running: git checkout autosynth-datafusion-v1
Switched to branch 'autosynth-datafusion-v1'
2021-06-08 03:02:30,480 autosynth [INFO] > Running synthtool
2021-06-08 03:02:30,480 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'generated/google-apis-datafusion_v1/synth.metadata', 'synth.py', '--']
2021-06-08 03:02:30,480 autosynth [DEBUG] > log_file_path: /tmpfs/src/logs/google-api-ruby-client/datafusion/v1/sponge_log.log
2021-06-08 03:02:30,482 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata generated/google-apis-datafusion_v1/synth.metadata synth.py -- datafusion v1
2021-06-08 03:02:30,688 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/google-api-ruby-client/synth.py.
On branch autosynth-datafusion-v1
nothing to commit, working tree clean
2021-06-08 03:02:30,751 synthtool [DEBUG] > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth datafusion v1
DEBUG:synthtool:Running: docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth datafusion v1
git clean -df
bundle install
Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching source index from https://rubygems.org/
Retrying fetcher due to error (2/4): Bundler::HTTPError Could not fetch specs from https://rubygems.org/ due to underlying error <bad response Gateway Error 502 (https://rubygems.org/specs.4.8.gz)>
Retrying fetcher due to error (3/4): Bundler::HTTPError Could not fetch specs from https://rubygems.org/ due to underlying error <bad response Service Unavailable 503 (https://rubygems.org/specs.4.8.gz)>
Net::HTTPServiceUnavailable:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>503 Service Unavailable</title>
</head>
<body>
<h1>Error 503 Service Unavailable</h1>
<p>Service Unavailable</p>
<h3>Guru Mediation:</h3>
<p>Details: cache-sea4449-SEA 1623146559 1338891957</p>
<hr>
<p>Varnish cache server</p>
</body>
</html>
chown -R 1000:1000 /workspace/generated
2021-06-08 03:02:39,548 synthtool [ERROR] > Failed executing docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth datafusion v1:
None
ERROR:synthtool:Failed executing docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth datafusion v1:
None
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/kbuilder/.cache/synthtool/google-api-ruby-client/synth.py", line 41, in <module>
shell.run(command, hide_output=False)
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'run', '--rm', '-v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace', '-v/var/run/docker.sock:/var/run/docker.sock', '-w', '/workspace', '-e', 'USER_GROUP=1000:1000', '--entrypoint', 'script/synth.rb', 'gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth', 'datafusion', 'v1']' returned non-zero exit status 1.
2021-06-08 03:02:39,578 autosynth [ERROR] > Synthesis failed
2021-06-08 03:02:39,578 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 356, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 191, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 293, in _inner_main
).synthesize(synth_log_path / "sponge_log.log")
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'generated/google-apis-datafusion_v1/synth.metadata', 'synth.py', '--', 'datafusion', 'v1']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/bdd2ce63-997e-4095-81b3-cd54e5fccbe4/targets/github%2Fsynthtool;config=default/tests;query=google-api-ruby-client;failed=false).
| 1.0 | Synthesis failed for datafusion-v1 - Hello! Autosynth couldn't regenerate datafusion-v1. :broken_heart:
Please investigate and fix this issue within 5 business days. While it remains broken,
this library cannot be updated with changes to the datafusion-v1 API, and the library grows
stale.
See https://github.com/googleapis/synthtool/blob/master/autosynth/TroubleShooting.md
for trouble shooting tips.
Here's the output from running `synth.py`:
```
2021-06-08 03:02:29,448 autosynth [INFO] > logs will be written to: /tmpfs/src/logs/google-api-ruby-client
2021-06-08 03:02:30,272 autosynth [DEBUG] > Running: git config --global core.excludesfile /home/kbuilder/.autosynth-gitignore
2021-06-08 03:02:30,275 autosynth [DEBUG] > Running: git config user.name yoshi-automation
2021-06-08 03:02:30,277 autosynth [DEBUG] > Running: git config user.email yoshi-automation@google.com
2021-06-08 03:02:30,280 autosynth [DEBUG] > Running: git config push.default simple
2021-06-08 03:02:30,283 autosynth [DEBUG] > Running: git branch -f autosynth-datafusion-v1
2021-06-08 03:02:30,286 autosynth [DEBUG] > Running: git checkout autosynth-datafusion-v1
Switched to branch 'autosynth-datafusion-v1'
2021-06-08 03:02:30,480 autosynth [INFO] > Running synthtool
2021-06-08 03:02:30,480 autosynth [INFO] > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'generated/google-apis-datafusion_v1/synth.metadata', 'synth.py', '--']
2021-06-08 03:02:30,480 autosynth [DEBUG] > log_file_path: /tmpfs/src/logs/google-api-ruby-client/datafusion/v1/sponge_log.log
2021-06-08 03:02:30,482 autosynth [DEBUG] > Running: /tmpfs/src/github/synthtool/env/bin/python3 -m synthtool --metadata generated/google-apis-datafusion_v1/synth.metadata synth.py -- datafusion v1
2021-06-08 03:02:30,688 synthtool [DEBUG] > Executing /home/kbuilder/.cache/synthtool/google-api-ruby-client/synth.py.
On branch autosynth-datafusion-v1
nothing to commit, working tree clean
2021-06-08 03:02:30,751 synthtool [DEBUG] > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth datafusion v1
DEBUG:synthtool:Running: docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth datafusion v1
git clean -df
bundle install
Don't run Bundler as root. Bundler can ask for sudo if it is needed, and
installing your bundle as root will break this application for all non-root
users on this machine.
Fetching source index from https://rubygems.org/
Retrying fetcher due to error (2/4): Bundler::HTTPError Could not fetch specs from https://rubygems.org/ due to underlying error <bad response Gateway Error 502 (https://rubygems.org/specs.4.8.gz)>
Retrying fetcher due to error (3/4): Bundler::HTTPError Could not fetch specs from https://rubygems.org/ due to underlying error <bad response Service Unavailable 503 (https://rubygems.org/specs.4.8.gz)>
Net::HTTPServiceUnavailable:
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>503 Service Unavailable</title>
</head>
<body>
<h1>Error 503 Service Unavailable</h1>
<p>Service Unavailable</p>
<h3>Guru Mediation:</h3>
<p>Details: cache-sea4449-SEA 1623146559 1338891957</p>
<hr>
<p>Varnish cache server</p>
</body>
</html>
chown -R 1000:1000 /workspace/generated
2021-06-08 03:02:39,548 synthtool [ERROR] > Failed executing docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth datafusion v1:
None
ERROR:synthtool:Failed executing docker run --rm -v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace -v/var/run/docker.sock:/var/run/docker.sock -w /workspace -e USER_GROUP=1000:1000 --entrypoint script/synth.rb gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth datafusion v1:
None
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/kbuilder/.cache/synthtool/google-api-ruby-client/synth.py", line 41, in <module>
shell.run(command, hide_output=False)
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 39, in run
raise exc
File "/tmpfs/src/github/synthtool/synthtool/shell.py", line 33, in run
encoding="utf-8",
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['docker', 'run', '--rm', '-v/home/kbuilder/.cache/synthtool/google-api-ruby-client:/workspace', '-v/var/run/docker.sock:/var/run/docker.sock', '-w', '/workspace', '-e', 'USER_GROUP=1000:1000', '--entrypoint', 'script/synth.rb', 'gcr.io/cloud-devrel-kokoro-resources/yoshi-ruby/autosynth', 'datafusion', 'v1']' returned non-zero exit status 1.
2021-06-08 03:02:39,578 autosynth [ERROR] > Synthesis failed
2021-06-08 03:02:39,578 autosynth [DEBUG] > Running: git clean -fdx
Removing __pycache__/
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 356, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 191, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 293, in _inner_main
).synthesize(synth_log_path / "sponge_log.log")
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 120, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'generated/google-apis-datafusion_v1/synth.metadata', 'synth.py', '--', 'datafusion', 'v1']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](http://sponge2/results/invocations/bdd2ce63-997e-4095-81b3-cd54e5fccbe4/targets/github%2Fsynthtool;config=default/tests;query=google-api-ruby-client;failed=false).
| non_process | synthesis failed for datafusion hello autosynth couldn t regenerate datafusion broken heart please investigate and fix this issue within business days while it remains broken this library cannot be updated with changes to the datafusion api and the library grows stale see for trouble shooting tips here s the output from running synth py autosynth logs will be written to tmpfs src logs google api ruby client autosynth running git config global core excludesfile home kbuilder autosynth gitignore autosynth running git config user name yoshi automation autosynth running git config user email yoshi automation google com autosynth running git config push default simple autosynth running git branch f autosynth datafusion autosynth running git checkout autosynth datafusion switched to branch autosynth datafusion autosynth running synthtool autosynth autosynth log file path tmpfs src logs google api ruby client datafusion sponge log log autosynth running tmpfs src github synthtool env bin m synthtool metadata generated google apis datafusion synth metadata synth py datafusion synthtool executing home kbuilder cache synthtool google api ruby client synth py on branch autosynth datafusion nothing to commit working tree clean synthtool running docker run rm v home kbuilder cache synthtool google api ruby client workspace v var run docker sock var run docker sock w workspace e user group entrypoint script synth rb gcr io cloud devrel kokoro resources yoshi ruby autosynth datafusion debug synthtool running docker run rm v home kbuilder cache synthtool google api ruby client workspace v var run docker sock var run docker sock w workspace e user group entrypoint script synth rb gcr io cloud devrel kokoro resources yoshi ruby autosynth datafusion git clean df bundle install don t run bundler as root bundler can ask for sudo if it is needed and installing your bundle as root will break this application for all non root users on this machine fetching source index from 
retrying fetcher due to error bundler httperror could not fetch specs from due to underlying error bad response gateway error retrying fetcher due to error bundler httperror could not fetch specs from due to underlying error bad response service unavailable net httpserviceunavailable doctype html public dtd xhtml strict en service unavailable error service unavailable service unavailable guru mediation details cache sea varnish cache server chown r workspace generated synthtool failed executing docker run rm v home kbuilder cache synthtool google api ruby client workspace v var run docker sock var run docker sock w workspace e user group entrypoint script synth rb gcr io cloud devrel kokoro resources yoshi ruby autosynth datafusion none error synthtool failed executing docker run rm v home kbuilder cache synthtool google api ruby client workspace v var run docker sock var run docker sock w workspace e user group entrypoint script synth rb gcr io cloud devrel kokoro resources yoshi ruby autosynth datafusion none traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file line in exec module file line in call with frames removed file home kbuilder cache synthtool google api ruby client synth py line in shell 
run command hide output false file tmpfs src github synthtool synthtool shell py line in run raise exc file tmpfs src github synthtool synthtool shell py line in run encoding utf file home kbuilder pyenv versions lib subprocess py line in run output stdout stderr stderr subprocess calledprocesserror command returned non zero exit status autosynth synthesis failed autosynth running git clean fdx removing pycache traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize synth log path sponge log log file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log | 0 |
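The failure in this log is a transient 502/503 from rubygems.org that exhausted Bundler's retry attempts. A generic retry wrapper for this class of transient error can be sketched as below; the simulated fetcher and all names are illustrative, not Bundler's or synthtool's actual code:

```javascript
// Generic retry wrapper: try the operation up to maxAttempts times and
// rethrow the last error if every attempt fails. Real code would sleep
// with backoff between attempts (e.g. 2 ** attempt seconds).
function withRetries(fn, maxAttempts = 4) {
  let lastError;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Simulated flaky fetcher standing in for the rubygems index fetch:
// it fails with 502 and then 503 before succeeding on the third call.
let calls = 0;
function fetchSpecs() {
  calls += 1;
  if (calls < 3) {
    const status = calls === 1 ? 502 : 503;
    throw new Error(`bad response ${status} fetching specs.4.8.gz`);
  }
  return "specs.4.8.gz contents";
}

const specs = withRetries(fetchSpecs);
```

Bundler's own retry budget is visible in the log above ("Retrying fetcher due to error (2/4)"); here the registry stayed unavailable past the final attempt.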
33,515 | 12,216,666,672 | IssuesEvent | 2020-05-01 15:36:24 | Thezone1975/vsts-vscode | https://api.github.com/repos/Thezone1975/vsts-vscode | opened | CVE-2018-3721 (Medium) detected in lodash-1.0.2.tgz | security vulnerability | ## CVE-2018-3721 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/vsts-vscode/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/vsts-vscode/node_modules/globule/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Thezone1975/vsts-vscode/commit/6c2437ee079f0bd01990bc483b623bdab2f9f229">6c2437ee079f0bd01990bc483b623bdab2f9f229</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 4.17.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-3721 (Medium) detected in lodash-1.0.2.tgz - ## CVE-2018-3721 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/vsts-vscode/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/vsts-vscode/node_modules/globule/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Thezone1975/vsts-vscode/commit/6c2437ee079f0bd01990bc483b623bdab2f9f229">6c2437ee079f0bd01990bc483b623bdab2f9f229</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
lodash node module before 4.17.5 suffers from a Modification of Assumed-Immutable Data (MAID) vulnerability via defaultsDeep, merge, and mergeWith functions, which allows a malicious user to modify the prototype of "Object" via __proto__, causing the addition or modification of an existing property that will exist on all objects.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-3721>CVE-2018-3721</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-3721">https://nvd.nist.gov/vuln/detail/CVE-2018-3721</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution: 4.17.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file tmp ws scm vsts vscode package json path to vulnerable library tmp ws scm vsts vscode node modules globule node modules lodash package json dependency hierarchy gulp tgz root library vinyl fs tgz glob watcher tgz gaze tgz globule tgz x lodash tgz vulnerable library found in head commit a href vulnerability details lodash node module before suffers from a modification of assumed immutable data maid vulnerability via defaultsdeep merge and mergewith functions which allows a malicious user to modify the prototype of object via proto causing the addition or modification of an existing property that will exist on all objects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
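The defaultsDeep/merge flaw described in this report is easiest to see with a deliberately naive recursive merge. The sketch below illustrates the class of bug (an attacker-controlled `__proto__` key walks the merge up to `Object.prototype`); it is not lodash's implementation:

```javascript
// Deliberately naive deep merge that copies every own key, including
// "__proto__": the class of bug behind CVE-2018-3721. This is an
// illustration, NOT lodash's actual code.
function unsafeMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (
      value !== null && typeof value === "object" &&
      target[key] !== null && typeof target[key] === "object"
    ) {
      // For key "__proto__", target[key] is Object.prototype, so the
      // recursion writes attacker-controlled properties onto it.
      unsafeMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property, which
// Object.keys then exposes to the merge above.
const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
unsafeMerge({}, payload);

const victim = {};
const polluted = victim.isAdmin === true; // inherited from Object.prototype

delete Object.prototype.isAdmin; // undo the pollution for this demo
```

The suggested fix in the report is upgrading lodash to 4.17.5, which addresses this behavior in the affected merge helpers.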
135 | 2,573,994,446 | IssuesEvent | 2015-02-11 14:26:12 | tinkerpop/tinkerpop3 | https://api.github.com/repos/tinkerpop/tinkerpop3 | closed | use RangeByIsCountStrategy to optimize Contains.* | enhancement process | Traversals like ```g.V().count().is(Contains.within, [2l,4l,6l])``` are currently not handled by the ```RangeByIsCountStrategy```. Shouldn't be too hard. | 1.0 | use RangeByIsCountStrategy to optimize Contains.* - Traversals like ```g.V().count().is(Contains.within, [2l,4l,6l])``` are currently not handled by the ```RangeByIsCountStrategy```. Shouldn't be too hard. | process | use rangebyiscountstrategy to optimize contains traversals like g v count is contains within are currently not handled by the rangebyiscountstrategy shouldn t be too hard | 1 |
11,804 | 14,627,194,869 | IssuesEvent | 2020-12-23 11:48:00 | xcesco/kripton | https://api.github.com/repos/xcesco/kripton | opened | Support to classes in java.time | annotation-processor module file module orm module shared-preferences module | Include native support for the following classes of java.time (JDK 8):
Duration
Instant
LocalDate
LocalDateTime
LocalTime
MonthDay
OffsetDateTime
OffsetTime
Period
Year
YearMonth
ZonedDateTime
ZoneId
ZoneOffset | 1.0 | Support to classes in java.time - Include native support for the following classes of java.time (JDK 8):
Duration
Instant
LocalDate
LocalDateTime
LocalTime
MonthDay
OffsetDateTime
OffsetTime
Period
Year
YearMonth
ZonedDateTime
ZoneId
ZoneOffset | process | support to classes in java time include native support for the following classes of java time jdk duration instant localdate localdatetime localtime monthday offsetdatetime offsettime period year yearmonth zoneddatetime zoneid zoneoffset | 1 |
18,897 | 24,835,973,140 | IssuesEvent | 2022-10-26 08:52:38 | aiidateam/aiida-core | https://api.github.com/repos/aiidateam/aiida-core | closed | Add a hook to `Process` class that allow to customize the definition of the `process_label` | type/accepted feature priority/nice-to-have topic/processes | Currently this is hardcoded in `aiida-core` and it will take the name of the `Process` class, but in some cases, some customization may be in order. | 1.0 | Add a hook to `Process` class that allow to customize the definition of the `process_label` - Currently this is hardcoded in `aiida-core` and it will take the name of the `Process` class, but in some cases, some customization may be in order. | process | add a hook to process class that allow to customize the definition of the process label currently this is hardcoded in aiida core and it will take the name of the process class but in some cases some customization may be in order | 1 |
52,506 | 22,281,391,440 | IssuesEvent | 2022-06-11 00:30:46 | aws/aws-toolkit-vscode | https://api.github.com/repos/aws/aws-toolkit-vscode | closed | Users can browse and manage custom sample Lambda events to use as starting points for input into their Lambda handlers | feature-request service:lambda sam | Lambda invocations require a JSON "event". You can find sample events for AWS services [in the public docs](https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html), but users might want to define their own sample events.
* Create a "Sample Lambda Event Manager" view, and add it to the AWS viewCollection
* event manager tree would have two top level nodes: "Curated Events", and "Custom Events". This issue deals with the latter (#106 deals with the former)
* each node under Custom Events represents a single event that a user has created and can manage
* each custom event has a name and optional description, which will be surfaced in selection pickers
* All custom events are backed by a single json file that resides in the workspace. UX gestures to modify the file ultimately end up opening this file in an editor for the user to work with
* custom events can be added, edited, and deleted (hook up to context menus appropriately)
Later on, when we have the ability to Run/Debug Lambda Handlers locally, these events will be referenced as options to pass into the handler.
| 1.0 | Users can browse and manage custom sample Lambda events to use as starting points for input into their Lambda handlers - Lambda invocations require a JSON "event". You can find sample events for AWS services [in the public docs](https://docs.aws.amazon.com/lambda/latest/dg/eventsources.html), but users might want to define their own sample events.
* Create a "Sample Lambda Event Manager" view, and add it to the AWS viewCollection
* event manager tree would have two top level nodes: "Curated Events", and "Custom Events". This issue deals with the latter (#106 deals with the former)
* each node under Custom Events represents a single event that a user has created and can manage
* each custom event has a name and optional description, which will be surfaced in selection pickers
* All custom events are backed by a single json file that resides in the workspace. UX gestures to modify the file ultimately end up opening this file in an editor for the user to work with
* custom events can be added, edited, and deleted (hook up to context menus appropriately)
Later on, when we have the ability to Run/Debug Lambda Handlers locally, these events will be referenced as options to pass into the handler.
| non_process | users can browse and manage custom sample lambda events to use as starting points for input into their lambda handlers lambda invocations require a json event you can find sample events for aws services but users might want to define their own sample events create a sample lambda event manager view and add it to the aws viewcollection event manager tree would have two top level nodes curated events and custom events this issue deals with the latter deals with the former each node under custom events represents a single event that a user has created and can manage each custom event has a name and optional description which will be surfaced in selection pickers all custom events are backed by a single json file that resides in the workspace ux gestures to modify the file ultimately end up opening this file in an editor for the user to work with custom events can be added edited and deleted hook up to context menus appropriately later on when we have the ability to run debug lambda handlers locally these events will be referenced as options to pass into the handler | 0 |
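As a rough illustration of the single-JSON-file design described in this feature request, the backing file and the picker items built from it might look like this; the schema, field names, and helper are assumptions, not the toolkit's actual format:

```javascript
// Hypothetical shape for the single workspace JSON file backing the
// "Custom Events" tree. The schema and field names are assumptions for
// illustration, not the toolkit's real format.
const customEventsFile = {
  events: [
    { name: "empty-s3-put", description: "S3 PUT with no records", event: { Records: [] } },
    { name: "ping", event: { action: "ping" } },
  ],
};

// Build selection-picker entries: the name and optional description are
// what gets surfaced to the user.
function toPickerItems(file) {
  return file.events.map((e) => ({
    label: e.name,
    detail: e.description || "",
  }));
}

const items = toPickerItems(customEventsFile);
```

When the user adds or edits an event, the UX gestures described above would ultimately open this one file in an editor.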
8,940 | 12,055,429,242 | IssuesEvent | 2020-04-15 12:58:58 | MHRA/products | https://api.github.com/repos/MHRA/products | closed | Error removing message from queue | BUG :bug: EPIC - Auto Batch Process :oncoming_automobile: HIGH PRIORITY :arrow_double_up: | **Describe the bug**
A delete job completes successfully but there is an error removing the message from the queue, so the job is performed again and fails because the file is no longer in the search index/storage container.
This happened for around 30/600 files, so the failure rate was ~5%.
The reported error is: `UnexpectedHTTPResult(UnexpectedHTTPResult { expected: [200], received: 404, body: "<Error><Code>404</Code><Detail>The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue. TrackingId:263452ee-dfff-4c56-a7f3-7f5ac1858d39_G0_B11, SystemTracker:doc-index-updater-non-prod:Queue:doc-index-updater-delete-queue, Timestamp:2020-04-08T16:41:31</Detail></Error>" }`
**To Reproduce**
This seems to be a transient issue and would be difficult to reproduce. You can see the logs for several jobs that experienced it by setting `correlationId` in the following query to one of these job IDs:
Jobs
9c16de58-ea5f-4720-9234-7986ac7f057c
488be267-ad3e-4127-88f6-83248456b2ce
2d561a7c-fdf5-4837-bbc6-224cdceee676
dd2cbe06-7dfc-4eb7-ace4-7cdd6004e598
85fd35cd-2450-4b59-bc80-95c11c5f154f
Query
```
let correlationId = "9c16de58-ea5f-4720-9234-7986ac7f057c";
let timeframeFrom = totimespan(7d);
let timeframeTo = totimespan(5d);
let clusterId = '/subscriptions/bec11470-1346-4cdd-af2e-ce1f360671a1/resourceGroups/adazr-rg-1001/providers/Microsoft.ContainerService/managedClusters/non-prod';
let ContainerIdList = KubePodInventory
| where TimeGenerated > now() - timeframeFrom and TimeGenerated < now() - timeframeTo
| where ContainerName contains 'doc-index-updater'
| where ClusterId =~ clusterId
| distinct ContainerID;
ContainerLog
| where TimeGenerated > now() - timeframeFrom and TimeGenerated < now() - timeframeTo
| where ContainerID in (ContainerIdList)
| project LogEntrySource, LogEntry, TimeGenerated, Computer, Image, Name, ContainerID
| order by TimeGenerated desc
| extend message_ = tostring(parse_json(tostring(parse_json(LogEntry).fields)).message)
| where parse_json(tostring(parse_json(LogEntry).span)).correlation_id == correlationId
| render table
```
**Expected behavior**
Messages are removed from the queue once completed so aren't retried.
**Screenshots**
N/A
**Additional context**
There is a second issue in that a successful job status is being overridden by an error status when the job fails the second time it is run. An easier fix may be to address that issue.
| 1.0 | Error removing message from queue - **Describe the bug**
A delete job completes successfully but there is an error removing the message from the queue, so the job is performed again and fails because the file is no longer in the search index/storage container.
This happened for around 30/600 files so failure rate of ~5%.
The reported error is: `UnexpectedHTTPResult(UnexpectedHTTPResult { expected: [200], received: 404, body: "<Error><Code>404</Code><Detail>The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue. TrackingId:263452ee-dfff-4c56-a7f3-7f5ac1858d39_G0_B11, SystemTracker:doc-index-updater-non-prod:Queue:doc-index-updater-delete-queue, Timestamp:2020-04-08T16:41:31</Detail></Error>" })`
**To Reproduce**
Seems like a transient issue - would be difficult to reproduce. You can see the logs for several jobs that experienced this by running the following query with each of these job IDs:
Jobs
9c16de58-ea5f-4720-9234-7986ac7f057c
488be267-ad3e-4127-88f6-83248456b2ce
2d561a7c-fdf5-4837-bbc6-224cdceee676
dd2cbe06-7dfc-4eb7-ace4-7cdd6004e598
85fd35cd-2450-4b59-bc80-95c11c5f154f
Query
```let correlationId = "9c16de58-ea5f-4720-9234-7986ac7f057c";
let timeframeFrom = totimespan(7d);
let timeframeTo = totimespan(5d);
let clusterId = '/subscriptions/bec11470-1346-4cdd-af2e-ce1f360671a1/resourceGroups/adazr-rg-1001/providers/Microsoft.ContainerService/managedClusters/non-prod';
let ContainerIdList = KubePodInventory
| where TimeGenerated > now() - timeframeFrom and TimeGenerated < now() - timeframeTo
| where ContainerName contains 'doc-index-updater'
| where ClusterId =~ clusterId
| distinct ContainerID;
ContainerLog
| where TimeGenerated > now() - timeframeFrom and TimeGenerated < now() - timeframeTo
| where ContainerID in (ContainerIdList)
| project LogEntrySource, LogEntry, TimeGenerated, Computer, Image, Name, ContainerID
| order by TimeGenerated desc
| render table
| extend message_ = tostring(parse_json(tostring(parse_json(LogEntry).fields)).message)
| where parse_json(tostring(parse_json(LogEntry).span)).correlation_id == correlationId```
**Expected behavior**
Messages are removed from the queue once completed so aren't retried.
**Screenshots**
N/A
**Additional context**
There is a second issue in that a successful job status is being overridden by an error status when the job fails the second time it is run. An easier fix may be to address that issue.
| process | error removing message from queue describe the bug a delete job completes successfully but there is an error removing the message from the queue so the job is performed again and fails because the file is no longer in the search index storage container this happened for around files so failure rate of the reported error is unexpectedhttpresult unexpectedhttpresult expected received body lock supplied is invalid either the lock expired or the message has already been removed from the queue trackingid dfff systemtracker doc index updater non prod queue doc index updater delete queue timestamp to reproduce seems like a transient issue would be difficult to reproduce you can see the logs for several jobs that experienced this by running the following query with jobs jobs query let correlationid let timeframefrom totimespan let timeframeto totimespan let clusterid subscriptions resourcegroups adazr rg providers microsoft containerservice managedclusters non prod let containeridlist kubepodinventory where timegenerated now timeframefrom and timegenerated now timeframeto where containername contains doc index updater where clusterid clusterid distinct containerid containerlog where timegenerated now timeframefrom and timegenerated now timeframeto where containerid in containeridlist project logentrysource logentry timegenerated computer image name containerid order by timegenerated desc render table extend message tostring parse json tostring parse json logentry fields message where parse json tostring parse json logentry span correlation id correlationid expected behavior messages are removed from the queue once completed so aren t retried screenshots n a additional context there is a second issue in that a successful job status is being overridden by an error status when the job fails the second time it is run an easier fix may be to address that issue | 1 |
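The "Additional context" in the row above suggests making the second run harmless rather than preventing redelivery. A minimal Python sketch of that idea (hypothetical names, not the actual doc-index-updater Rust code): an idempotent delete where a redelivered message that finds the file already gone reports success instead of an error that would overwrite the earlier completed status.

```python
class NotFound(Exception):
    """Raised when the target document is absent from the index/storage."""

def delete_document(storage, doc_id):
    """Delete doc_id from storage; a missing document counts as success."""
    try:
        if doc_id not in storage:
            raise NotFound(doc_id)
        storage.remove(doc_id)
        return "deleted"
    except NotFound:
        # A redelivered message finds the file already gone: report success
        # so the retry cannot flip a completed job into an error state.
        return "already-deleted"

storage = {"doc-1"}
first = delete_document(storage, "doc-1")   # first delivery
second = delete_document(storage, "doc-1")  # redelivery after lock expiry
```

Under this sketch, both deliveries end in a success status, which directly addresses the second issue described in the additional context.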
34,103 | 28,241,332,141 | IssuesEvent | 2023-04-06 07:25:30 | Tonomy-Foundation/Tonomy-ID | https://api.github.com/repos/Tonomy-Foundation/Tonomy-ID | opened | Tonomy consumers forced to use latest SDK in CI | infrastructure | Acceptance criteria
- [ ] On all consumers of SDK (websites, ID and Communication) the CI fails if they have not upgraded to the latest version of the SDK
Hint
- `yarn up @tonomy/tonomy-id-sdk --frozen-lockfile` or somethin' like this | 1.0 | Tonomy consumers forced to use latest SDK in CI - Acceptance criteria
- [ ] On all consumers of SDK (websites, ID and Communication) the CI fails if they have not upgraded to the latest version of the SDK
Hint
- `yarn up @tonomy/tonomy-id-sdk --frozen-lockfile` or somethin' like this | non_process | tonomy consumers forced to use latest sdk in ci acceptance criteria on all consumers of sdk websites id and communication the ci fails if they have not upgraded to the latest version of the sdk hint yarn up tonomy tonomy id sdk frozen lockfile or somethin like this | 0 |
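Beyond the yarn hint, the CI gate described in the acceptance criteria can be sketched as a version comparison (hypothetical helper; a real check would fetch the latest published version of `@tonomy/tonomy-id-sdk` from the npm registry and compare it against the lockfile):

```python
def parse(version):
    """Split a 'major.minor.patch' string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def check_up_to_date(locked, latest):
    """Return True when the lockfile already pins the latest published version."""
    return parse(locked) >= parse(latest)

# Numeric comparison avoids the string-compare trap where "1.10.0" < "1.9.0".
```

CI would fail the build whenever `check_up_to_date` returns False for the consumer's locked SDK version.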
14,424 | 17,475,233,710 | IssuesEvent | 2021-08-08 01:46:17 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Activating Processing plugin crashes QGIS | Feedback stale Processing Bug MacOS | Mac OS 10.15.7
Application Specific Information:
/usr/lib/libssl.dylib
abort() called
Invalid dylib load. Clients should not load the unversioned libssl dylib as it does not have a stable ABI. | 1.0 | Activating Processing plugin crashes QGIS - Mac OS 10.15.7
Application Specific Information:
/usr/lib/libssl.dylib
abort() called
Invalid dylib load. Clients should not load the unversioned libssl dylib as it does not have a stable ABI. | process | activating processing plugin crashes qgis mac os application specific information usr lib libssl dylib abort called invalid dylib load clients should not load the unversioned libssl dylib as it does not have a stable abi | 1 |
49,646 | 13,187,244,974 | IssuesEvent | 2020-08-13 02:48:22 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | [mue] garbage value (Trac #1802) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1802">https://code.icecube.wisc.edu/ticket/1802</a>, reported by kjmeagher and owned by dima</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-08-01T18:41:05",
"description": "found by static analyser http://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-87742f.html#EndPath",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1470076865835338",
"component": "combo reconstruction",
"summary": "[mue] garbage value",
"priority": "normal",
"keywords": "",
"time": "2016-07-27T08:07:53",
"milestone": "Long-Term Future",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [mue] garbage value (Trac #1802) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1802">https://code.icecube.wisc.edu/ticket/1802</a>, reported by kjmeagher and owned by dima</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-08-01T18:41:05",
"description": "found by static analyser http://software.icecube.wisc.edu/static_analysis/2016-07-26-030212-26135-1/report-87742f.html#EndPath",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1470076865835338",
"component": "combo reconstruction",
"summary": "[mue] garbage value",
"priority": "normal",
"keywords": "",
"time": "2016-07-27T08:07:53",
"milestone": "Long-Term Future",
"owner": "dima",
"type": "defect"
}
```
</p>
</details>
| non_process | garbage value trac migrated from json status closed changetime description found by static analyser reporter kjmeagher cc resolution fixed ts component combo reconstruction summary garbage value priority normal keywords time milestone long term future owner dima type defect | 0 |
3,944 | 6,885,959,844 | IssuesEvent | 2017-11-21 17:43:45 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | TPV: telomere maintenance | auto-migrated cell cycle and DNA processes | telomere maintenance
has the parentage DNA metabolic process, but I am not sure that all of its parts are DNA metabolism (capping for example).
Val
Reported by: ValWood
Original Ticket: [geneontology/ontology-requests/10132](https://sourceforge.net/p/geneontology/ontology-requests/10132)
| 1.0 | TPV: telomere maintenance - telomere maintenance
has the parentage DNA metabolic process, but I am not sure that all of its parts are DNA metabolism (capping for example).
Val
Reported by: ValWood
Original Ticket: [geneontology/ontology-requests/10132](https://sourceforge.net/p/geneontology/ontology-requests/10132)
| process | tpv telomere maintenance telomere maintenance has the parentage dna metabolic process but i am not sure that all of its parts are dna metabolism capping for example val reported by valwood original ticket | 1 |
588,718 | 17,669,624,190 | IssuesEvent | 2021-08-23 02:55:38 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | performive.com - Incorrect image and text transition when scrolling | browser-firefox priority-normal severity-important os-linux engine-gecko | <!-- @browser: Firefox 91.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/83776 -->
**URL**: https://performive.com/managed-cloud/private-cloud
**Browser / Version**: Firefox 91.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items are overlapped
**Steps to Reproduce**:
The images on the bottom part of the page (scroll down) are rendered incorrectly and lag behind when you scroll.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/8/8cf9e380-5e1b-48dd-9e41-5f4986912109.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | performive.com - Incorrect image and text transition when scrolling - <!-- @browser: Firefox 91.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/83776 -->
**URL**: https://performive.com/managed-cloud/private-cloud
**Browser / Version**: Firefox 91.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Design is broken
**Description**: Items are overlapped
**Steps to Reproduce**:
The images on the bottom part of the page (scroll down) are rendered incorrectly and lag behind when you scroll.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/8/8cf9e380-5e1b-48dd-9e41-5f4986912109.jpg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | performive com incorrect image and text transition when scrolling url browser version firefox operating system linux tested another browser yes chrome problem type design is broken description items are overlapped steps to reproduce the images on the bottom part of the page scroll down are rendered incorrectly and lag behind when you scroll view the screenshot img alt screenshot src browser configuration none from with ❤️ | 0 |
21,911 | 30,440,753,978 | IssuesEvent | 2023-07-15 03:14:14 | diffgram/diffgram | https://api.github.com/repos/diffgram/diffgram | closed | Compound file ingestion, show final completion status once children complete | ux process_media | Compound file upload status shows success before child files are finished processing
Ideally should only show success if child files are also successful.
This status screen shows correct status (clicking on it from UI)

| 1.0 | Compound file ingestion, show final completion status once children complete - Compound file upload status shows success before child files are finished processing
Ideally should only show success if child files are also successful.
This status screen shows correct status (clicking on it from UI)

| process | compound file ingestion show final completion status once children complete compound file upload status shows success before child files are finished processing ideally should only show success if child files are also successful this status screen shows correct status clicking on it from ui | 1 |
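The expected behaviour in the row above can be sketched as a roll-up of child statuses into the compound file's status (a hypothetical sketch with assumed status names, not Diffgram's actual code):

```python
def parent_status(child_statuses):
    """Roll child file statuses up into the compound file's final status."""
    if not child_statuses or any(s == "processing" for s in child_statuses):
        return "processing"  # don't claim success while children still run
    if all(s == "success" for s in child_statuses):
        return "success"
    return "failed"
```

The compound upload then only reports success once every child has finished successfully.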
126,914 | 5,007,507,182 | IssuesEvent | 2016-12-12 16:55:23 | ziccardi/jnrpe | https://api.github.com/repos/ziccardi/jnrpe | closed | Tests should use mocks instead of real commands | enhancement priority:medium status:working | Currently, many tests use real commands (CheckProcs test uses real 'ps' for example).
They should use mocks. | 1.0 | Tests should use mocks instead of real commands - Currently, many tests use real commands (CheckProcs test uses real 'ps' for example).
They should use mocks. | non_process | tests should use mocks instead of real commands currently many tests use real commands checkprocs test uses real ps for example they should use mocks | 0 |
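A hedged illustration of the mocking pattern the row above asks for (JNRPE itself is Java, so this Python sketch with a hypothetical `check_procs` only shows the idea): inject the command runner so tests can substitute a mock for the real `ps`.

```python
from unittest import mock

def check_procs(run_command):
    """Count processes via an injected command runner instead of calling 'ps'."""
    output = run_command(["ps", "-e"])
    # Skip the header line; each remaining line is one process.
    return len(output.strip().splitlines()) - 1

# In tests, a mock stands in for the real command and returns canned output.
fake = mock.Mock(return_value="PID TTY TIME CMD\n1 ? 0:01 init\n2 ? 0:00 kthreadd\n")
count = check_procs(fake)
```

The test can then assert both the computed result and that the runner was invoked with the expected arguments, without ever touching the host's process table.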
11,300 | 8,358,250,847 | IssuesEvent | 2018-10-03 01:39:40 | SitecorePowerShell/Console | https://api.github.com/repos/SitecorePowerShell/Console | closed | Set-User cmd not working with Parameter Email | area-commands area-security bug | ### Expected Behavior
While executing the command:
Set-User -Identity $createduser -IsAdministrator $true -FullName $user.Key -Email "myemail@gmail.com"
It is expected that the email will be set on the user profile.
### Actual Behavior
_Please describe the actual behavior._
It was noticed that all the other properties (IsAdministrator, FullName) work, but the email is not set when executing the command above.
### Steps to Reproduce the Problem
_Please include the version number of SPE and Sitecore._
4.7.2
- [x] Tested issue with clean install of Sitecore and the latest available version of SPE.
- [x] Asked questions on the Sitecore Slack Chat channel.
- [x] Reviewed questions and answers on the Sitecore Stack Exchange. | True | Set-User cmd not working with Parameter Email - ### Expected Behavior
While executing the command:
Set-User -Identity $createduser -IsAdministrator $true -FullName $user.Key -Email "myemail@gmail.com"
It is expected that the email will be set on the user profile.
### Actual Behavior
_Please describe the actual behavior._
It was noticed that all the other properties (IsAdministrator, FullName) work, but the email is not set when executing the command above.
### Steps to Reproduce the Problem
_Please include the version number of SPE and Sitecore._
4.7.2
- [x] Tested issue with clean install of Sitecore and the latest available version of SPE.
- [x] Asked questions on the Sitecore Slack Chat channel.
- [x] Reviewed questions and answers on the Sitecore Stack Exchange. | non_process | set user cmd not working with parameter email expected behavior while executing the command set user identity createduser isadministrator true fullname user key email myemail gmail com it is expected that the email to be set on the user profile actual behavior please describe the actual behavior it was noticed that all properties work isadministrator fullname but email is not set when executing it was noticed that all properties work isadministrator fullname but email is not set steps to reproduce the problem please include the version number of spe and sitecore tested issue with clean install of sitecore and the latest available version of spe asked questions on the sitecore slack chat channel reviewed questions and answers on the sitecore stack exchange | 0 |
11,603 | 14,478,697,704 | IssuesEvent | 2020-12-10 08:47:35 | decidim/decidim | https://api.github.com/repos/decidim/decidim | closed | Proposals and Results content block for Process Groups | contract: process-groups | Ref. PG02-2
**Is your feature request related to a problem? Please describe.**
As an administrator I want to choose whether to show the Proposals and Results on a PG landing page
**Describe the solution you'd like**
To have the content blocks of Proposals and Results that also allow me to choose how they'd be selected: Random or Last.
Note that it's necessary to also add the context (as in which Process a Proposal belongs to) with "Show process on cards of the Participatory Group (PG06)"
**Describe alternatives you've considered**
To have a checkbox to allow or not these sections but this doesn't scale as well as the content block idea (PG02)
To have four content blocks that allow me to show or hide the Proposals and Results: "Last Proposals", "Last Results", "Random Proposals", "Random Results"
**Additional context**

**Does this issue could impact on users private data?**
No
**Acceptance criteria**
- [x] As an administrator I can decide whether to display proposals on the main page of the process group.
- [x] As an administrator I can decide whether to display results on the main page of the process group.
- [x] As an administrator I can decide if the proposals shown are random
- [x] As an administrator I can decide if the proposals shown are the last by creation date
- [x] As an administrator I can decide if the results shown are random
- [x] As an administrator I can decide if the results shown are the last by creation date
- [x] As a visitor I can see which Process a Proposal belongs to
- [x] As a visitor I can see which Process a Result belongs to
| 1.0 | Proposals and Results content block for Process Groups - Ref. PG02-2
**Is your feature request related to a problem? Please describe.**
As an administrator I want to choose whether to show the Proposals and Results on a PG landing page
**Describe the solution you'd like**
To have the content blocks of Proposals and Results that also allow me to choose how they'd be selected: Random or Last.
Note that it's necessary to also add the context (as in which Process a Proposal belongs to) with "Show process on cards of the Participatory Group (PG06)"
**Describe alternatives you've considered**
To have a checkbox to allow or not these sections but this doesn't scale as well as the content block idea (PG02)
To have four content blocks that allow me to show or hide the Proposals and Results: "Last Proposals", "Last Results", "Random Proposals", "Random Results"
**Additional context**

**Does this issue could impact on users private data?**
No
**Acceptance criteria**
- [x] As an administrator I can decide whether to display proposals on the main page of the process group.
- [x] As an administrator I can decide whether to display results on the main page of the process group.
- [x] As an administrator I can decide if the proposals shown are random
- [x] As an administrator I can decide if the proposals shown are the last by creation date
- [x] As an administrator I can decide if the results shown are random
- [x] As an administrator I can decide if the results shown are the last by creation date
- [x] As a visitor I can see which Process a Proposal belongs to
- [x] As a visitor I can see which Process a Result belongs to
| process | proposals and results content block for process groups ref is your feature request related to a problem please describe as an administrator i want to choose showing the proposals and results on a pg landing describe the solution you d like to have the content blocks of proposals and results that also allow me to choose how they d be selected random or last note that it s necessary to also add the context as in which process a proposal belongs to with show process on cards of the participatory group describe alternatives you ve considered to have a checkbox to allow or not these sections but this doesn t scale as well as the content block idea to have four content block that allows me to show or not the proposals and results last proposals last results random proposals random results additional context does this issue could impact on users private data no acceptance criteria as an administrator i can decide whether to display proposals on the main page of the process group as an administrator i can decide whether to display results on the main page of the process group as an administrator i can decide if the proposals shown are random as an administrator i can decide if the proposals shown are the last by creation date as an administrator i can decide if the results shown are random as an administrator i can decide if the results shown are the last by creation date as a visitor i can see which process a proposal belongs to as a visitor i can see which process a result belongs to | 1 |
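The Random/Last choice in the acceptance criteria above can be sketched as a small selection helper (hypothetical field names, and Python rather than Decidim's actual Ruby implementation):

```python
import random

def select_entries(items, mode, count=4, seed=None):
    """Pick content-block entries either at random or newest-first."""
    if mode == "random":
        # Sample without replacement; a fixed seed makes the choice repeatable.
        return random.Random(seed).sample(items, min(count, len(items)))
    if mode == "last":
        # Newest first by creation date.
        return sorted(items, key=lambda i: i["created_at"], reverse=True)[:count]
    raise ValueError(f"unknown mode: {mode}")

proposals = [{"id": n, "created_at": n} for n in range(1, 7)]
last_two = select_entries(proposals, "last", count=2)
```

The same helper serves both the Proposals and the Results content blocks, with `mode` coming from the administrator's configuration.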
136,895 | 5,289,947,723 | IssuesEvent | 2017-02-08 18:39:06 | ElektraInitiative/libelektra | https://api.github.com/repos/ElektraInitiative/libelektra | closed | Devhelp(doc) generation for (g)elektra | low priority | This would be a nice to have for developing on Elektra in GNOME.
As far as I can tell, this could be possible for the whole Elektra API through doxygen XML generation and XSLT transformation. Here is an example [xsl](https://cgit.freedesktop.org/dbus/dbus/tree/doc/doxygen_to_devhelp.xsl).
- [Devhelp](https://wiki.gnome.org/Apps/Devhelp)
TODO:
- [ ] doxygen xslt to devhelp for `libelektra`
- [ ] gtk-doc generation for `libgelektra`
| 1.0 | Devhelp(doc) generation for (g)elektra - This would be a nice to have for developing on Elektra in GNOME.
As far as I can tell, this could be possible for the whole Elektra API through doxygen XML generation and XSLT transformation. Here is an example [xsl](https://cgit.freedesktop.org/dbus/dbus/tree/doc/doxygen_to_devhelp.xsl).
- [Devhelp](https://wiki.gnome.org/Apps/Devhelp)
TODO:
- [ ] doxygen xslt to devhelp for `libelektra`
- [ ] gtk-doc generation for `libgelektra`
| non_process | devhelp doc generation for g elektra this would be a nice to have for developing on elektra in gnome as far as i can tell this could be possible possible for the whole elektra api trough doxygen xml generation and xslt transformation here is an example todo doxygen xslt to devhelp for libelektra gtk doc generation for libgelektra | 0 |
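The doxygen-XML-to-devhelp transformation that the dbus xsl linked above performs can be sketched with the Python standard library (toy element names and a much-reduced schema; real doxygen index XML and the devhelp2 format carry far more structure):

```python
import xml.etree.ElementTree as ET

# A toy doxygen index fragment; real doxygen XML is richer than this.
doxygen = ET.fromstring(
    "<doxygenindex>"
    "<compound kind='file'><member kind='function'><name>kdbOpen</name></member>"
    "<member kind='function'><name>kdbGet</name></member></compound>"
    "</doxygenindex>"
)

# Build the devhelp-style keyword list the XSLT would produce.
book = ET.Element("book", title="Elektra Reference", language="c")
functions = ET.SubElement(book, "functions")
for member in doxygen.iter("member"):
    ET.SubElement(functions, "keyword",
                  type="function", name=member.findtext("name"))

names = [k.get("name") for k in functions]
```

Devhelp then indexes those `keyword` entries so API symbols become searchable from within GNOME.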
18,527 | 24,552,178,330 | IssuesEvent | 2022-10-12 13:24:55 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] [Offline indicator] Participant should stay in the same screen when user clicks on the 'Ok' button present on the below pop up once after participant's internet is back | Bug P1 iOS Process: Fixed Process: Tested dev | Steps:
1. Sign up or sign in to the app
2. Click on any study
3. Click on 'Participate'
4. Turn off the data
5. Click on Next button (an alert pop-up will get displayed)
6. Turn ON the Internet
7. Click on 'Ok' button present on the popup and observe
AR: Participant is navigating to the studies list screen
ER: Participants should stay on the same screen and be able to complete the enrollment flow successfully
[Note: Issue should also be fixed in the Review updated consent flow]

| 2.0 | [iOS] [Offline indicator] Participant should stay in the same screen when user clicks on the 'Ok' button present on the below pop up once after participant's internet is back - Steps:
1. Sign up or sign in to the app
2. Click on any study
3. Click on 'Participate'
4. Turn off the data
5. Click on Next button (an alert pop-up will get displayed)
6. Turn ON the Internet
7. Click on 'Ok' button present on the popup and observe
AR: Participant is navigating to the studies list screen
ER: Participants should stay on the same screen and be able to complete the enrollment flow successfully
[Note: Issue should also be fixed in the Review updated consent flow]

| process | participant should stay in the same screen when user clicks on the ok button present on the below pop up once after participant s internet is back steps sign up or sign in to the app click on any study click on a participate turn off the data click on next button an alert pop up will get displayed turn on the internet click on ok button present on the popup and observe ar participant is navigating to the studies list screen er participants should stay on the same screen and be able to complete the enrollment flow successfully | 1 |
733 | 3,214,313,608 | IssuesEvent | 2015-10-07 00:44:41 | broadinstitute/hellbender-dataflow | https://api.github.com/repos/broadinstitute/hellbender-dataflow | opened | Dataflow BQSR Direct Runner fails with --knownSites | bug Dataflow DataflowPreprocessingPipeline | _From @lbergelson on September 9, 2015 16:11_
From @tomwhite:
I noticed that the test with "-knownSites" from BaseRecalibratorIntegrationTest (i.e. the non-dataflow version) fails with both the Direct and Spark runners. I had a look at the file output and there are a few discrepancies (see diff below).
```
diff /var/folders/d1/8f5_j4hx04z72w6wgqxkb2l40000gn/T/walktest.tmp_param.02172067147450353519.tmp src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.NA12878.chr17_69k_70k.2inputs.txt
60c60
< 34 3051 34
---
> 34 3050 34
71c71
< 45 46942 45
---
> 45 46940 45
124,126c124,126
< 809R9ABXX101220.5 D 45.0000 45.0000 23471 0.00
< 809R9ABXX101220.5 I 45.0000 45.0000 23471 0.00
< 809R9ABXX101220.5 M 27.0000 27.0494 23471 49.13
---
> 809R9ABXX101220.5 D 45.0000 45.0000 23470 0.00
> 809R9ABXX101220.5 I 45.0000 45.0000 23470 0.00
> 809R9ABXX101220.5 M 27.0000 27.0493 23470 49.13
155c155
< 809R9ABXX101220.5 34 M 34.0000 3051 2.96
---
> 809R9ABXX101220.5 34 M 34.0000 3050 2.96
161,162c161,162
< 809R9ABXX101220.5 45 D 45.0000 23471 0.00
< 809R9ABXX101220.5 45 I 45.0000 23471 0.00
---
> 809R9ABXX101220.5 45 D 45.0000 23470 0.00
> 809R9ABXX101220.5 45 I 45.0000 23470 0.00
2714c2714
< 809R9ABXX101220.5 34 29 Cycle M 34.0000 20 0.00
---
> 809R9ABXX101220.5 34 29 Cycle M 34.0000 19 0.00
2773c2773
< 809R9ABXX101220.5 34 CA Context M 34.0000 506 0.00
---
> 809R9ABXX101220.5 34 CA Context M 34.0000 505 0.00
3464,3465c3464,3465
< 809R9ABXX101220.5 45 29 Cycle D 45.0000 180 0.00
< 809R9ABXX101220.5 45 29 Cycle I 45.0000 180 0.00
---
> 809R9ABXX101220.5 45 29 Cycle D 45.0000 179 0.00
> 809R9ABXX101220.5 45 29 Cycle I 45.0000 179 0.00
3634,3635c3634,3635
< 809R9ABXX101220.5 45 GCA Context D 45.0000 278 0.00
< 809R9ABXX101220.5 45 GCA Context I 45.0000 278 0.00
---
> 809R9ABXX101220.5 45 GCA Context D 45.0000 277 0.00
> 809R9ABXX101220.5 45 GCA Context I 45.0000 277 0.00
```
The relevant test is this one from `BaseRecalibratorDataflowIntegrationTest`
```
new BQSRTest(hg18Reference, HiSeqBam, dbSNPb37, "-knownSites " + moreSites, getResourceDir() + "expected.NA12878.chr17_69k_70k.2inputs.txt")
```
_Copied from original issue: broadinstitute/hellbender#883_ | 1.0 | Dataflow BQSR Direct Runner fails with --knownSites - _From @lbergelson on September 9, 2015 16:11_
From @tomwhite:
I noticed that the test with "-knownSites" from BaseRecalibratorIntegrationTest (i.e. the non-dataflow version) fails with both the Direct and Spark runners. I had a look at the file output and there are a few discrepancies (see diff below).
```
diff /var/folders/d1/8f5_j4hx04z72w6wgqxkb2l40000gn/T/walktest.tmp_param.02172067147450353519.tmp src/test/resources/org/broadinstitute/hellbender/tools/BQSR/expected.NA12878.chr17_69k_70k.2inputs.txt
60c60
< 34 3051 34
---
> 34 3050 34
71c71
< 45 46942 45
---
> 45 46940 45
124,126c124,126
< 809R9ABXX101220.5 D 45.0000 45.0000 23471 0.00
< 809R9ABXX101220.5 I 45.0000 45.0000 23471 0.00
< 809R9ABXX101220.5 M 27.0000 27.0494 23471 49.13
---
> 809R9ABXX101220.5 D 45.0000 45.0000 23470 0.00
> 809R9ABXX101220.5 I 45.0000 45.0000 23470 0.00
> 809R9ABXX101220.5 M 27.0000 27.0493 23470 49.13
155c155
< 809R9ABXX101220.5 34 M 34.0000 3051 2.96
---
> 809R9ABXX101220.5 34 M 34.0000 3050 2.96
161,162c161,162
< 809R9ABXX101220.5 45 D 45.0000 23471 0.00
< 809R9ABXX101220.5 45 I 45.0000 23471 0.00
---
> 809R9ABXX101220.5 45 D 45.0000 23470 0.00
> 809R9ABXX101220.5 45 I 45.0000 23470 0.00
2714c2714
< 809R9ABXX101220.5 34 29 Cycle M 34.0000 20 0.00
---
> 809R9ABXX101220.5 34 29 Cycle M 34.0000 19 0.00
2773c2773
< 809R9ABXX101220.5 34 CA Context M 34.0000 506 0.00
---
> 809R9ABXX101220.5 34 CA Context M 34.0000 505 0.00
3464,3465c3464,3465
< 809R9ABXX101220.5 45 29 Cycle D 45.0000 180 0.00
< 809R9ABXX101220.5 45 29 Cycle I 45.0000 180 0.00
---
> 809R9ABXX101220.5 45 29 Cycle D 45.0000 179 0.00
> 809R9ABXX101220.5 45 29 Cycle I 45.0000 179 0.00
3634,3635c3634,3635
< 809R9ABXX101220.5 45 GCA Context D 45.0000 278 0.00
< 809R9ABXX101220.5 45 GCA Context I 45.0000 278 0.00
---
> 809R9ABXX101220.5 45 GCA Context D 45.0000 277 0.00
> 809R9ABXX101220.5 45 GCA Context I 45.0000 277 0.00
```
The relevant test is this one from `BaseRecalibratorDataflowIntegrationTest`
```
new BQSRTest(hg18Reference, HiSeqBam, dbSNPb37, "-knownSites " + moreSites, getResourceDir() + "expected.NA12878.chr17_69k_70k.2inputs.txt")
```
_Copied from original issue: broadinstitute/hellbender#883_ | process | dataflow bqsr direct runner fails with knownsites from lbergelson on september from tomwhite i noticed that the test with knownsites from baserecalibratorintegrationtest i e the non dataflow version fails with both the direct and spark runners i had a look at the file output and there are a few discrepancies see diff below diff var folders t walktest tmp param tmp src test resources org broadinstitute hellbender tools bqsr expected txt d i m d i m m m d i d i cycle m cycle m ca context m ca context m cycle d cycle i cycle d cycle i gca context d gca context i gca context d gca context i the relevant test is this one from baserecalibratordataflowintegrationtest new bqsrtest hiseqbam knownsites moresites getresourcedir expected txt copied from original issue broadinstitute hellbender | 1 |
60,212 | 12,065,404,399 | IssuesEvent | 2020-04-16 09:54:52 | TheNeoGameFactory/GWJ20-GodotCommunityDE | https://api.github.com/repos/TheNeoGameFactory/GWJ20-GodotCommunityDE | closed | Splashscreen | Code GUI | Splash screen with the following logos
GodotCommunityDE
GodotWildJam
Then the three cards, because we are integrating everything.
The nicest would be the three cards stacked on top of each other, which are then fanned out.
As if you were holding cards in your hand.
After that, the title of the game and then the theme.
Who would like to do this?
Since it gets quite long, either make each part skippable with ESC from the start, or you have to watch it the first time and can skip it from the second launch on.
Feasible?
Who would like to? | 1.0 | Splashscreen - Splash screen with the following logos
GodotCommunityDE
GodotWildJam
Then the three cards, because we are integrating everything.
The nicest would be the three cards stacked on top of each other, which are then fanned out.
As if you were holding cards in your hand.
After that, the title of the game and then the theme.
Who would like to do this?
Since it gets quite long, either make each part skippable with ESC from the start, or you have to watch it the first time and can skip it from the second launch on.
Feasible?
Wer möchte? | non_process | splashscreen splashscreen mit folgenden logos godotcommunityde godotwildjam dann die drei karten weil wir alles integrieren am schönsten wären die drei karten übereinander die dann ausgefächert werden so als ob man karten in der hand hält danach der titel des spieles und dann das thema wer möchte das machen da es recht lang wird entweder von anfang an das man jedes einzelne mit esc überspringen kann oder beim ersten mal muss man es sich anschauen und an den zweiten start kann man es überspringen machbar wer möchte | 0 |
37,883 | 15,391,389,831 | IssuesEvent | 2021-03-03 14:30:52 | thkl/hap-homematic | https://api.github.com/repos/thkl/hap-homematic | closed | AddLowBatCharacteristic to HomeMaticDoorBellAccessory | DeviceService enhancement | Since the HomeMaticDoorBellAccessory service also triggers something in Home, a low-battery service would be very practical for me.
The DoorBell device is shown as not supported in the Home app, but it still triggers a push notification "XY rang"
Tested with:
tvOS 14.x, HM-PBI-4-FM with HomeMaticDoorBellAccessory and an almost empty battery | 1.0 | AddLowBatCharacteristic to HomeMaticDoorBellAccessory - Since the HomeMaticDoorBellAccessory service also triggers something in Home, a low-battery service would be very practical for me.
The DoorBell device is shown as not supported in the Home app, but it still triggers a push notification "XY rang"
Tested with:
tvOS 14.x, HM-PBI-4-FM with HomeMaticDoorBellAccessory and an almost empty battery | non_process | addlowbatcharacteristic to homematicdoorbellaccessory since the homematicdoorbellaccessory service also triggers something in home a low battery service would be very practical for me the doorbell device is shown as not supported in the home app but it still triggers a push notification xy rang tested with tvos x hm pbi fm with homematicdoorbellaccessory and an almost empty battery | 0
678 | 3,151,219,923 | IssuesEvent | 2015-09-16 06:27:56 | e-government-ua/i | https://api.github.com/repos/e-government-ua/i | closed | Make the production domain responsive on the main portal for Kyiv | active hi priority In process of testing test | 7. Make the links to the implemented services point to the production version.
for the domain: https://es.kievcity.gov.ua
https://docs.google.com/document/d/1fUJlMptp0npeXNShMwZedfcqnhfz6mhyDcw7qoxtZqU/edit# | 1.0 | Make the production domain responsive on the main portal for Kyiv - 7. Make the links to the implemented services point to the production version.
for the domain: https://es.kievcity.gov.ua
https://docs.google.com/document/d/1fUJlMptp0npeXNShMwZedfcqnhfz6mhyDcw7qoxtZqU/edit# | process | make the production domain responsive on the main portal for kyiv make the links to the implemented services point to the production version for the domain | 1
30,123 | 14,427,603,405 | IssuesEvent | 2020-12-06 05:05:28 | keras-team/keras | https://api.github.com/repos/keras-team/keras | closed | Keras doesn't learn properly when tensor is passed to custom layer by keyword argument | backend:tensorflow stat:awaiting tensorflower type:bug/performance | When I pass a tensor to a layer by keyword argument, the learning sometimes doesn't happen properly.
I would expect it not to matter if keyword or non-keyword argument is used as long as the model logic is unchanged. Example below demonstrates that the learning doesn't happen properly for tensors passed by keyword arguments.
**System information**
- Have I written custom code (as opposed to using example directory): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): happens everywhere - tested in colab and os x
- TensorFlow backend (yes / no): yes
- TensorFlow version: 1.14.0
- Keras version: 2.2.4-tf
- Python version: 3.6
**Describe the current behavior**
Keras doesn't learn properly when tensor is passed to custom layer by keyword argument.
Below, all the models should produce roughly the same accuracy or loss. However, when `MyLayer` is used the loss is clearly higher than in the other two cases, even though it shouldn't affect the model structure.
**Describe the expected behavior**
Learning shouldn't be affected by whether keyword arguments are used or not, as long as the logic is the same.
**Code to reproduce the issue**
Note that layer_type=1 gives much higher loss even though all the models are logically the same.
Colab: https://colab.research.google.com/drive/19iqhjJm2N8yOwVzdDK8j9yRNNdw7whZn
Code
```
import numpy as np
import tensorflow as tf
import sys
from tensorflow.python.keras import layers, callbacks
from tensorflow.python.keras.losses import MeanSquaredError
from tensorflow.python.training.gradient_descent import GradientDescentOptimizer

print(tf.__version__)
print(tf.keras.__version__)
print(sys.version)

N = 10000
FEATURES_D = 12

class MyLayer(tf.keras.layers.Layer):
    def call(self, inputs, mask=None, t=None):
        return t

class MyLayer2(tf.keras.layers.Layer):
    def call(self, inputs, mask=None, t=None):
        return inputs

np.random.seed(0)
X1 = np.random.rand(N, FEATURES_D)
X2 = np.random.rand(N, FEATURES_D)
W = np.random.randn(FEATURES_D)
y = X1.dot(W) + np.random.randn(N)

def model(layer_type):
    x1 = tf.keras.Input(shape=FEATURES_D)
    x2 = tf.keras.Input(shape=FEATURES_D)
    x11 = layers.Dense(10)(x1)
    if layer_type == 1:
        x11 = MyLayer()(x2, t=x11)
    elif layer_type == 2:
        x11 = MyLayer2()(x11, t=x2)
    out = layers.Dense(1)(x11)
    model = tf.keras.Model(inputs=[x1, x2], outputs=out)
    model.compile(optimizer=GradientDescentOptimizer(0.01), loss=MeanSquaredError())
    h: callbacks.History = model.fit(
        [X1, X2], y=y, batch_size=256, epochs=10, verbose=0
    )
    return h.history["loss"][-1]

for _ in range(3):
    for layer_type in [0, 1, 2]:
        print(f"{layer_type} - {model(layer_type)}")
    print("---")
```
output:
```
1.14.0
2.2.4-tf
3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
WARNING: Logging before flag parsing goes to stderr.
W0724 20:09:44.047463 4648408512 deprecation.py:506] From /anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
2019-07-24 20:09:44.151038: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
0 - 1.0245165932655333
1 - 1.788629257965088
2 - 1.0349093259811402
---
0 - 1.0278864135742187
1 - 1.6075011337280274
2 - 1.0326046692848205
---
0 - 1.0317546758651734
1 - 1.3196803382873534
2 - 1.0330685680389404
---
```
| True | Keras doesn't learn properly when tensor is passed to custom layer by keyword argument - When I pass a tensor to a layer by keyword argument, the learning sometimes doesn't happen properly.
I would expect it not to matter if keyword or non-keyword argument is used as long as the model logic is unchanged. Example below demonstrates that the learning doesn't happen properly for tensors passed by keyword arguments.
**System information**
- Have I written custom code (as opposed to using example directory): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): happens everywhere - tested in colab and os x
- TensorFlow backend (yes / no): yes
- TensorFlow version: 1.14.0
- Keras version: 2.2.4-tf
- Python version: 3.6
**Describe the current behavior**
Keras doesn't learn properly when tensor is passed to custom layer by keyword argument.
Below, all the models should produce roughly the same accuracy or loss. However, when `MyLayer` is used the loss is clearly higher than in the other two cases, even though it shouldn't affect the model structure.
**Describe the expected behavior**
Learning shouldn't be affected by whether keyword arguments are used or not, as long as the logic is the same.
**Code to reproduce the issue**
Note that layer_type=1 gives much higher loss even though all the models are logically the same.
Colab: https://colab.research.google.com/drive/19iqhjJm2N8yOwVzdDK8j9yRNNdw7whZn
Code
```
import numpy as np
import tensorflow as tf
import sys
from tensorflow.python.keras import layers, callbacks
from tensorflow.python.keras.losses import MeanSquaredError
from tensorflow.python.training.gradient_descent import GradientDescentOptimizer

print(tf.__version__)
print(tf.keras.__version__)
print(sys.version)

N = 10000
FEATURES_D = 12

class MyLayer(tf.keras.layers.Layer):
    def call(self, inputs, mask=None, t=None):
        return t

class MyLayer2(tf.keras.layers.Layer):
    def call(self, inputs, mask=None, t=None):
        return inputs

np.random.seed(0)
X1 = np.random.rand(N, FEATURES_D)
X2 = np.random.rand(N, FEATURES_D)
W = np.random.randn(FEATURES_D)
y = X1.dot(W) + np.random.randn(N)

def model(layer_type):
    x1 = tf.keras.Input(shape=FEATURES_D)
    x2 = tf.keras.Input(shape=FEATURES_D)
    x11 = layers.Dense(10)(x1)
    if layer_type == 1:
        x11 = MyLayer()(x2, t=x11)
    elif layer_type == 2:
        x11 = MyLayer2()(x11, t=x2)
    out = layers.Dense(1)(x11)
    model = tf.keras.Model(inputs=[x1, x2], outputs=out)
    model.compile(optimizer=GradientDescentOptimizer(0.01), loss=MeanSquaredError())
    h: callbacks.History = model.fit(
        [X1, X2], y=y, batch_size=256, epochs=10, verbose=0
    )
    return h.history["loss"][-1]

for _ in range(3):
    for layer_type in [0, 1, 2]:
        print(f"{layer_type} - {model(layer_type)}")
    print("---")
```
output:
```
1.14.0
2.2.4-tf
3.6.8 |Anaconda, Inc.| (default, Dec 29 2018, 19:04:46)
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
WARNING: Logging before flag parsing goes to stderr.
W0724 20:09:44.047463 4648408512 deprecation.py:506] From /anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
2019-07-24 20:09:44.151038: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
0 - 1.0245165932655333
1 - 1.788629257965088
2 - 1.0349093259811402
---
0 - 1.0278864135742187
1 - 1.6075011337280274
2 - 1.0326046692848205
---
0 - 1.0317546758651734
1 - 1.3196803382873534
2 - 1.0330685680389404
---
```
| non_process | keras doesn t learn properly when tensor is passed to custom layer by key word argument when i pass tensor to layer by keyword arguments the learning sometimes doesn t happen properly i would expect it not to matter if keyword or non keyword argument is used as long as the model logic is unchanged example below demonstrates that the learning doesn t happen properly for tensors passed by keyword arguments system information have i written custom code as opposed to using example directory yes os platform and distribution e g linux ubuntu happens everywhere tested in colab and os x tensorflow backend yes no yes tensorflow version keras version tf python version describe the current behavior keras doesn t learn properly when tensor is passed to custom layer by keyword argument below all the models should produce roughly same accuracy or loss however when use mylayer the loss is clearly higher than in other two case even though it shouldn t affect the model structure describe the expected behavior learning shouldn t be affected if use keyword arguments or not as long as the logic is the same code to reproduce the issue note that layer type gives much higher loss even though all the models are logically the same colab code import numpy as np import tensorflow as tf import sys from tensorflow python keras import layers callbacks from tensorflow python keras losses import meansquarederror from tensorflow python training gradient descent import gradientdescentoptimizer print tf version print tf keras version print sys version n features d class mylayer tf keras layers layer def call self inputs mask none t none return t class tf keras layers layer def call self inputs mask none t none return inputs np random seed np random rand n features d np random rand n features d w np random randn features d y dot w np random randn n def model layer type tf keras input shape features d tf keras input shape features d layers dense if layer type mylayer t elif layer type 
t out layers dense model tf keras model inputs outputs out model compile optimizer gradientdescentoptimizer loss meansquarederror h callbacks history model fit y y batch size epochs verbose return h history for in range for layer type in print f layer type model layer type print output tf anaconda inc default dec warning logging before flag parsing goes to stderr deprecation py from envs lib site packages tensorflow python ops init ops py calling variancescaling init from tensorflow python ops init ops with dtype is deprecated and will be removed in a future version instructions for updating call initializer instance with the dtype argument instead of passing it to the constructor i tensorflow core platform cpu feature guard cc your cpu supports instructions that this tensorflow binary was not compiled to use fma | 0 |
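The behavior reported in this record is consistent with how the TF 1.x functional API tracks layer inputs: only tensors passed through the first positional `inputs` argument become part of the graph history, so a tensor handed over purely as a keyword argument is invisible to downstream bookkeeping. The following is a deliberately simplified pure-Python sketch of that mechanism — `Node` and `ToyLayer` are made up for illustration and are not the real Keras classes:

```python
class Node:
    """Stand-in for a symbolic tensor: remembers which nodes produced it."""
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)

class ToyLayer:
    """Toy functional-API layer: only tensors passed through the first
    positional `inputs` argument are recorded as parents of the output,
    so a tensor passed only as a keyword argument never enters the graph."""
    def __call__(self, inputs, **kwargs):
        tracked = list(inputs) if isinstance(inputs, (list, tuple)) else [inputs]
        out = self.call(inputs, **kwargs)
        out.parents = tracked
        return out

    def call(self, inputs, t=None):
        # Logically the layer may return `t`, but the graph only saw `inputs`.
        return Node("out")

x2, x11 = Node("x2"), Node("x11")
bad = ToyLayer()(x2, t=x11)    # x11 carries the signal but is not tracked
good = ToyLayer()([x2, x11])   # passing both positionally tracks both
print(x11 in bad.parents, x11 in good.parents)  # False True
```

With real Keras, the safer pattern is to pass all participating tensors in a list as the first argument (e.g. `MyLayer()([x11, x2])`) and unpack them inside `call`.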
143,851 | 19,256,456,911 | IssuesEvent | 2021-12-09 11:50:35 | tildabio/composable | https://api.github.com/repos/tildabio/composable | opened | CVE-2020-8552 (Medium) detected in github.com/kubernetes/apiextensions-apiserver-kubernetes-1.14.1 | security vulnerability | ## CVE-2020-8552 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/kubernetes/apiextensions-apiserver-kubernetes-1.14.1</b></p></summary>
<p>API server for API extensions like CustomResourceDefinitions</p>
<p>
Dependency Hierarchy:
- github.com/kubernetes-sigs/controller-runtime-v0.2.0 (Root Library)
- :x: **github.com/kubernetes/apiextensions-apiserver-kubernetes-1.14.1** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tildabio/composable/commit/af8aa41dc3cfebd35daec7382d85fd4b238fe08c">af8aa41dc3cfebd35daec7382d85fd4b238fe08c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Kubernetes API server component in versions prior to 1.15.9, 1.16.0-1.16.6, and 1.17.0-1.17.2 has been found to be vulnerable to a denial of service attack via successful API requests.
<p>Publish Date: 2020-03-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8552>CVE-2020-8552</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8552">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8552</a></p>
<p>Release Date: 2020-03-27</p>
<p>Fix Resolution: v1.18.0-alpha.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-8552 (Medium) detected in github.com/kubernetes/apiextensions-apiserver-kubernetes-1.14.1 - ## CVE-2020-8552 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/kubernetes/apiextensions-apiserver-kubernetes-1.14.1</b></p></summary>
<p>API server for API extensions like CustomResourceDefinitions</p>
<p>
Dependency Hierarchy:
- github.com/kubernetes-sigs/controller-runtime-v0.2.0 (Root Library)
- :x: **github.com/kubernetes/apiextensions-apiserver-kubernetes-1.14.1** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tildabio/composable/commit/af8aa41dc3cfebd35daec7382d85fd4b238fe08c">af8aa41dc3cfebd35daec7382d85fd4b238fe08c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Kubernetes API server component in versions prior to 1.15.9, 1.16.0-1.16.6, and 1.17.0-1.17.2 has been found to be vulnerable to a denial of service attack via successful API requests.
<p>Publish Date: 2020-03-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8552>CVE-2020-8552</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8552">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8552</a></p>
<p>Release Date: 2020-03-27</p>
<p>Fix Resolution: v1.18.0-alpha.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in github com kubernetes apiextensions apiserver kubernetes cve medium severity vulnerability vulnerable library github com kubernetes apiextensions apiserver kubernetes api server for api extensions like customresourcedefinitions dependency hierarchy github com kubernetes sigs controller runtime root library x github com kubernetes apiextensions apiserver kubernetes vulnerable library found in head commit a href found in base branch main vulnerability details the kubernetes api server component in versions prior to and has been found to be vulnerable to a denial of service attack via successful api requests publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution alpha step up your open source security game with whitesource | 0 |
6,816 | 9,959,654,291 | IssuesEvent | 2019-07-06 09:02:20 | theamrzaki/text_summurization_abstractive_methods | https://api.github.com/repos/theamrzaki/text_summurization_abstractive_methods | closed | Data Preprocessing | Data Processing Model 4 Model 5 | Hey, I have a query about the data preprocessing part for models 4 and 5. Whenever I try to preprocess the data, this is what I end up with:
```
Traceback (most recent call last):
File "process_English.py", line 290, in <module>
reviews = pd.read_csv(reviews_csv,header = 1) #skip first row (of header)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 678, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 440, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 787, in __init__
self._make_engine(self.engine)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1014, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1708, in __init__
self._reader = parsers.TextReader(src, **kwds)
File "pandas/_libs/parsers.pyx", line 539, in pandas._libs.parsers.TextReader.__cinit__
File "pandas/_libs/parsers.pyx", line 751, in pandas._libs.parsers.TextReader._get_header
pandas.errors.ParserError: Passed header=1 but only 1 lines in file
```
I have preprocessed the data using the steps which abisee gave, but I don't understand the CSV part in your method. | 1.0 | Data Preprocessing - Hey, I have a query about the data preprocessing part for models 4 and 5. Whenever I try to preprocess the data, this is what I end up with:
```
Traceback (most recent call last):
File "process_English.py", line 290, in <module>
reviews = pd.read_csv(reviews_csv,header = 1) #skip first row (of header)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 678, in parser_f
return _read(filepath_or_buffer, kwds)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 440, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 787, in __init__
self._make_engine(self.engine)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1014, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "/home/giri/anaconda3/lib/python3.6/site-packages/pandas/io/parsers.py", line 1708, in __init__
self._reader = parsers.TextReader(src, **kwds)
File "pandas/_libs/parsers.pyx", line 539, in pandas._libs.parsers.TextReader.__cinit__
File "pandas/_libs/parsers.pyx", line 751, in pandas._libs.parsers.TextReader._get_header
pandas.errors.ParserError: Passed header=1 but only 1 lines in file
```
I have preprocessed the data using the steps which abisee gave, but I don't understand the CSV part in your method. | process | data preprocessing hey i have query about the data preprocessing part for model and whenever i try to preprocess the data this is what i end up with traceback most recent call last file process english py line in reviews pd read csv reviews csv header skip first row of header file home giri lib site packages pandas io parsers py line in parser f return read filepath or buffer kwds file home giri lib site packages pandas io parsers py line in read parser textfilereader filepath or buffer kwds file home giri lib site packages pandas io parsers py line in init self make engine self engine file home giri lib site packages pandas io parsers py line in make engine self engine cparserwrapper self f self options file home giri lib site packages pandas io parsers py line in init self reader parsers textreader src kwds file pandas libs parsers pyx line in pandas libs parsers textreader cinit file pandas libs parsers pyx line in pandas libs parsers textreader get header pandas errors parsererror passed header but only lines in file i have preprocessed the data the data using the steps which abisee gave but i dont understand the csv part in ur method | 1
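A note on the error in this record: `pandas.read_csv` interprets `header` as the zero-based index of the line that holds the column names, so `header=1` requires at least two lines and fails on a one-line file with exactly the message shown above. A small self-contained illustration (the column names here are invented):

```python
import io
import pandas as pd

csv_text = "colA,colB\n1,2\n3,4\n"

# header=0 (the default): the first line names the columns.
df = pd.read_csv(io.StringIO(csv_text))

# header=1: the *second* line becomes the header, leaving one data row.
# On a file with a single line this raises
# "Passed header=1 but only 1 lines in file".
df_second = pd.read_csv(io.StringIO(csv_text), header=1)

# To skip a non-header first line, skiprows plus explicit names is clearer.
df_skip = pd.read_csv(io.StringIO(csv_text), skiprows=1,
                      names=["colA", "colB"])

print(list(df.columns), len(df))   # ['colA', 'colB'] 2
print(len(df_second))              # 1
print(len(df_skip))                # 2
```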
688,159 | 23,550,436,780 | IssuesEvent | 2022-08-21 18:54:07 | dnd-side-project/dnd-7th-2-backend | https://api.github.com/repos/dnd-side-project/dnd-7th-2-backend | closed | [Feature] Build an FCM push server using Firebase | Type: Feature Priority: Medium Status: On Hold | ## Preparation
* [x] Research concepts and materials
* [x] Handle Firebase account setup
* [x] Study the library
## Implementation
* [x] Test with the Practice project
* [x] Add a push notification FCM API endpoint
* [x] Test
* [ ] Android integration test
| 1.0 | [Feature] Build an FCM push server using Firebase - ## Preparation
* [x] Research concepts and materials
* [x] Handle Firebase account setup
* [x] Study the library
## Implementation
* [x] Test with the Practice project
* [x] Add a push notification FCM API endpoint
* [x] Test
* [ ] Android integration test
| non_process | build an fcm push server using firebase preparation research concepts and materials handle firebase account setup study the library implementation test with the practice project add a push notification fcm api endpoint test android integration test | 0
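For context on the checklist above: FCM's HTTP v1 API takes a JSON body of the form `{"message": {...}}` POSTed to the project's `messages:send` endpoint. The helper below only builds that payload shape — the token, titles, and the `data` section are illustrative placeholders, and actually sending it additionally requires an OAuth2 bearer token for the Firebase service account:

```python
import json

def build_fcm_payload(token, title, body, data=None):
    """Build the JSON body for FCM HTTP v1 `messages:send`.

    Only the payload shape is shown here; transport and auth are omitted."""
    message = {
        "token": token,
        "notification": {"title": title, "body": body},
    }
    if data:
        # FCM requires custom data values to be strings.
        message["data"] = {k: str(v) for k, v in data.items()}
    return {"message": message}

payload = build_fcm_payload("device-token-123", "Hello", "First push!",
                            data={"badge": 1})
print(json.dumps(payload, indent=2))
```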
265,670 | 28,298,030,497 | IssuesEvent | 2023-04-10 01:26:49 | nk7598/linux-4.19.72 | https://api.github.com/repos/nk7598/linux-4.19.72 | closed | CVE-2022-28388 (Medium) detected in linuxlinux-4.19.269 - autoclosed | Mend: dependency security vulnerability | ## CVE-2022-28388 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.269</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/can/usb/usb_8dev.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/can/usb/usb_8dev.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
usb_8dev_start_xmit in drivers/net/can/usb/usb_8dev.c in the Linux kernel through 5.17.1 has a double free.
<p>Publish Date: 2022-04-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-28388>CVE-2022-28388</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-28388">https://www.linuxkernelcves.com/cves/CVE-2022-28388</a></p>
<p>Release Date: 2022-04-03</p>
<p>Fix Resolution: v4.14.277,v4.19.240,v5.4.191,v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-28388 (Medium) detected in linuxlinux-4.19.269 - autoclosed - ## CVE-2022-28388 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.269</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/can/usb/usb_8dev.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/can/usb/usb_8dev.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
usb_8dev_start_xmit in drivers/net/can/usb/usb_8dev.c in the Linux kernel through 5.17.1 has a double free.
<p>Publish Date: 2022-04-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-28388>CVE-2022-28388</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-28388">https://www.linuxkernelcves.com/cves/CVE-2022-28388</a></p>
<p>Release Date: 2022-04-03</p>
<p>Fix Resolution: v4.14.277,v4.19.240,v5.4.191,v5.10.110,v5.15.33,v5.16.19,v5.17.2,v5.18-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers net can usb usb c drivers net can usb usb c vulnerability details usb start xmit in drivers net can usb usb c in the linux kernel through has a double free publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
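The fix-resolution list in the record above gives one patched release per stable kernel branch. As a toy illustration of how such a list can be checked branch by branch (note this is not an authoritative vulnerability test — distribution kernels routinely backport fixes without bumping the upstream version):

```python
def parse_ver(v):
    """'v4.19.240' -> (4, 19, 240); a non-numeric tail stops the parse."""
    parts = []
    for p in v.lstrip("v").split("."):
        if not p.isdigit():
            break
        parts.append(int(p))
    return tuple(parts)

# Fixed releases listed for CVE-2022-28388, one per stable branch.
FIXED = ["v4.14.277", "v4.19.240", "v5.4.191", "v5.10.110",
         "v5.15.33", "v5.16.19", "v5.17.2"]

def is_patched(running, fixed=FIXED):
    run = parse_ver(running)
    for fix in map(parse_ver, fixed):
        if run[:2] == fix[:2]:        # same stable branch (e.g. 4.19.x)
            return run >= fix
    # A branch newer than every listed one (e.g. 5.18+) already ships the fix.
    return run > max(map(parse_ver, fixed))

print(is_patched("4.19.269"), is_patched("4.19.72"))  # True False
```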
376,338 | 11,142,327,379 | IssuesEvent | 2019-12-22 08:49:12 | nearprotocol/nearcore | https://api.github.com/repos/nearprotocol/nearcore | closed | Expose gas price via RPC | Priority 2 enhancement rpc | It would be good to know the gas price for the upcoming block (or a few).
This is useful for showing transaction prices in the UX and for programmatic operation. | 1.0 | Expose gas price via RPC - It would be good to know the gas price for the upcoming block (or a few).
This is useful for showing transaction prices in the UX and for programmatic operation. | non_process | expose gas price via rpc would be good to know the gas price for upcoming block or a few this is useful to show the prices for transaction in the ux and for programmatical operating | 0
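For readers following up on this request: NEAR's RPC later exposed a `gas_price` JSON-RPC method whose single parameter is a block height, a block hash, or `null` for the latest block. The method name, parameter shape, and endpoint below are taken from that later documentation and should be treated as assumptions rather than as part of this issue. A request-payload sketch:

```python
import json

def gas_price_request(block_ref=None, request_id="dontcare"):
    """JSON-RPC 2.0 body for NEAR's `gas_price` method.

    block_ref: a block height (int), a block hash (str), or None for the
    latest block. POST the result to an RPC node's HTTP endpoint."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "gas_price",
        "params": [block_ref],
    }

body = json.dumps(gas_price_request())
print(body)
```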
13,892 | 16,655,858,097 | IssuesEvent | 2021-06-05 14:08:08 | paul-buerkner/brms | https://api.github.com/repos/paul-buerkner/brms | closed | Moment matching LOO doesn't work with cmdstanr | feature post-processing | When I try to use moment matching LOO on a model that I used cmdstanr as a backend for, I get this error:
> Error in (function (classes, fdef, mtable) :
> unable to find an inherited method for function ‘sampling’ for signature ‘"CmdStanModel"’
Code:
```
library(tidyverse)  # provides %>% and mutate()
library(brms)
mtcars %>% mutate(gear = gear %>% factor) -> mtcars
brm(data=mtcars, formula=bf(mpg~gear, sigma ~ gear), cores = 4, backend="cmdstanr") -> model
model %>% loo(moment_match=TRUE)
```
| 1.0 | process | 1
18,524 | 24,552,046,166 | IssuesEvent | 2022-10-12 13:19:42 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] Mobile participant is not able to complete their sign up flow in the following scenario | Bug P0 iOS Process: Fixed Process: Tested QA Process: Tested dev | **Steps:**
1. Install the mobile app in the testing device
2. Go to sign in screen
3. Click on the Signup button
4. Enter all the required fields and click on submit
5. Enter the received verification code
6. Observe
**AR (actual result):** A 'Sign in not available' error page is displayed
**ER (expected result):** Mobile users should be able to sign up for their account successfully
Note: if the user goes back to the sign in screen and tries to sign in, a 'Sorry an error has occurred....' error message is displayed


| 3.0 | [iOS] Mobile participant is not able to complete their sign up flow in the following scenario - **Steps:**
1. Install the mobile app in the testing device
2. Go to sign in screen
3. Click on the Signup button
4. Enter all the required fields and click on submit
5. Enter the received verification code
6. Observe
**AR:** Sign in not available error page is getting displayed
**ER:** Mobile users should be able to sign up for their account successfully
Note: if the user goes back to sign in screen and tries to sign in then gets a 'Sorry an error has occurred....' error message is getting displayed


| process | mobile participant is not able to complete their sign up flow in the following scenario steps install the mobile app in the testing device go to sign in screen click on the signup button enter all the required fields and click on submit enter the received verification code observe ar sign in not available error page is getting displayed er mobile users should be able to sign up for their account successfully note if the user goes back to sign in screen and tries to sign in then gets a sorry an error has occurred error message is getting displayed | 1 |
21,882 | 30,327,427,081 | IssuesEvent | 2023-07-11 02:00:09 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Tue, 11 Jul 23 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
### Mitosis Detection from Partial Annotation by Dataset Generation via Frame-Order Flipping
- **Authors:** Kazuya Nishimura, Ami Katanaya, Shinichiro Chuma, Ryoma Bise
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.04113
- **Pdf link:** https://arxiv.org/pdf/2307.04113
- **Abstract**
Detection of mitosis events plays an important role in biomedical research. Deep-learning-based mitosis detection methods have achieved outstanding performance with a certain amount of labeled data. However, these methods require annotations for each imaging condition. Collecting labeled data involves time-consuming human labor. In this paper, we propose a mitosis detection method that can be trained with partially annotated sequences. The base idea is to generate a fully labeled dataset from the partial labels and train a mitosis detection model with the generated dataset. First, we generate an image pair not containing mitosis events by frame-order flipping. Then, we paste mitosis events to the image pair by alpha-blending pasting and generate a fully labeled dataset. We demonstrate the performance of our method on four datasets, and we confirm that our method outperforms other comparisons which use partially labeled sequences.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### TractGeoNet: A geometric deep learning framework for pointwise analysis of tract microstructure to predict language assessment performance
- **Authors:** Yuqian Chen, Leo R. Zekelman, Chaoyi Zhang, Tengfei Xue, Yang Song, Nikos Makris, Yogesh Rathi, Alexandra J. Golby, Weidong Cai, Fan Zhang, Lauren J. O'Donnell
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.03982
- **Pdf link:** https://arxiv.org/pdf/2307.03982
- **Abstract**
We propose a geometric deep-learning-based framework, TractGeoNet, for performing regression using diffusion magnetic resonance imaging (dMRI) tractography and associated pointwise tissue microstructure measurements. By employing a point cloud representation, TractGeoNet can directly utilize pointwise tissue microstructure and positional information from all points within a fiber tract. To improve regression performance, we propose a novel loss function, the Paired-Siamese Regression loss, which encourages the model to focus on accurately predicting the relative differences between regression label scores rather than just their absolute values. In addition, we propose a Critical Region Localization algorithm to identify highly predictive anatomical regions within the white matter fiber tracts for the regression task. We evaluate the effectiveness of the proposed method by predicting individual performance on two neuropsychological assessments of language using a dataset of 20 association white matter fiber tracts from 806 subjects from the Human Connectome Project. The results demonstrate superior prediction performance of TractGeoNet compared to several popular regression models. Of the twenty tracts studied, we find that the left arcuate fasciculus tract is the most highly predictive of the two studied language performance assessments. The localized critical regions are widespread and distributed across both hemispheres and all cerebral lobes, including areas of the brain considered important for language function such as superior and anterior temporal regions, pars opercularis, and precentral gyrus. Overall, TractGeoNet demonstrates the potential of geometric deep learning to enhance the study of the brain's white matter fiber tracts and to relate their structure to human traits such as language performance.
### Lightweight Improved Residual Network for Efficient Inverse Tone Mapping
- **Authors:** Liqi Xue, Tianyi Xu, Yongbao Song, Yan Liu, Lei Zhang, Xiantong Zhen, Jun Xu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2307.03998
- **Pdf link:** https://arxiv.org/pdf/2307.03998
- **Abstract**
The display devices like HDR10 televisions are increasingly prevalent in our daily life for visualizing high dynamic range (HDR) images. But the majority of media images on the internet remain in 8-bit standard dynamic range (SDR) format. Therefore, converting SDR images to HDR ones by inverse tone mapping (ITM) is crucial to unlock the full potential of abundant media images. However, existing ITM methods are usually developed with complex network architectures requiring huge computational costs. In this paper, we propose a lightweight Improved Residual Network (IRNet) by enhancing the power of popular residual block for efficient ITM. Specifically, we propose a new Improved Residual Block (IRB) to extract and fuse multi-layer features for fine-grained HDR image reconstruction. Experiments on three benchmark datasets demonstrate that our IRNet achieves state-of-the-art performance on both the ITM and joint SR-ITM tasks. The code, models and data will be publicly available at https://github.com/ThisisVikki/ITM-baseline.
### Visible and infrared self-supervised fusion trained on a single example
- **Authors:** Nati Ofir
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.04100
- **Pdf link:** https://arxiv.org/pdf/2307.04100
- **Abstract**
This paper addresses the problem of visible (RGB) to Near-Infrared (NIR) image fusion. Multispectral imaging is an important task relevant to image processing and computer vision, even more, since the development of the RGBT sensor. While the visible image sees color and suffers from noise, haze, and clouds, the NIR channel captures a clearer picture and it is significantly required by applications such as dehazing or object detection. The proposed approach fuses these two aligned channels by training a Convolutional-Neural-Network (CNN) by a Self-Supervised-Learning (SSL) on a single example. For each such pair, RGB and IR, the network is trained for seconds to deduce the final fusion. The SSL is based on Structure-of-Similarity (SSIM) loss combined with Edge-Preservation (EP) loss. The labels for the SSL are the input channels themselves. This fusion preserves the relevant detail of each spectral channel while not based on a heavy training process. In the experiments section, the proposed approach achieves better qualitative and quantitative multispectral fusion results with respect to other recent methods, that are not based on large dataset training.
### Marine Debris Detection in Satellite Surveillance using Attention Mechanisms
- **Authors:** Ao Shen, Yijie Zhu, Richard Jiang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.04128
- **Pdf link:** https://arxiv.org/pdf/2307.04128
- **Abstract**
Marine debris is an important issue for environmental protection, but current methods for locating marine debris are yet limited. In order to achieve higher efficiency and wider applicability in the localization of Marine debris, this study tries to combine the instance segmentation of YOLOv7 with different attention mechanisms and explores the best model. By utilizing a labelled dataset consisting of satellite images containing ocean debris, we examined three attentional models including lightweight coordinate attention, CBAM (combining spatial and channel focus), and bottleneck transformer (based on self-attention). Box detection assessment revealed that CBAM achieved the best outcome (F1 score of 77%) compared to coordinate attention (F1 score of 71%) and YOLOv7/bottleneck transformer (both F1 scores around 66%). Mask evaluation showed CBAM again leading with an F1 score of 73%, whereas coordinate attention and YOLOv7 had comparable performances (around F1 score of 68%/69%) and bottleneck transformer lagged behind at F1 score of 56%. These findings suggest that CBAM offers optimal suitability for detecting marine debris. However, it should be noted that the bottleneck transformer detected some areas missed by manual annotation and displayed better mask precision for larger debris pieces, signifying potentially superior practical performance.
### SparseVSR: Lightweight and Noise Robust Visual Speech Recognition
- **Authors:** Adriana Fernandez-Lopez, Honglie Chen, Pingchuan Ma, Alexandros Haliassos, Stavros Petridis, Maja Pantic
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.04552
- **Pdf link:** https://arxiv.org/pdf/2307.04552
- **Abstract**
Recent advances in deep neural networks have achieved unprecedented success in visual speech recognition. However, there remains substantial disparity between current methods and their deployment in resource-constrained devices. In this work, we explore different magnitude-based pruning techniques to generate a lightweight model that achieves higher performance than its dense model equivalent, especially under the presence of visual noise. Our sparse models achieve state-of-the-art results at 10% sparsity on the LRS3 dataset and outperform the dense equivalent up to 70% sparsity. We evaluate our 50% sparse model on 7 different visual noise types and achieve an overall absolute improvement of more than 2% WER compared to the dense equivalent. Our results confirm that sparse networks are more resistant to noise than dense networks.
### FreeDrag: Point Tracking is Not You Need for Interactive Point-based Image Editing
- **Authors:** Pengyang Ling, Lin Chen, Pan Zhang, Huaian Chen, Yi Jin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Human-Computer Interaction (cs.HC); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2307.04684
- **Pdf link:** https://arxiv.org/pdf/2307.04684
- **Abstract**
To serve the intricate and varied demands of image editing, precise and flexible manipulation of image content is indispensable. Recently, DragGAN has achieved impressive editing results through point-based manipulation. However, we have observed that DragGAN struggles with miss tracking, where DragGAN encounters difficulty in effectively tracking the desired handle points, and ambiguous tracking, where the tracked points are situated within other regions that bear resemblance to the handle points. To deal with the above issues, we propose FreeDrag, which adopts a feature-oriented approach to free the burden on point tracking within the point-oriented methodology of DragGAN. The FreeDrag incorporates adaptive template features, line search, and fuzzy localization techniques to perform stable and efficient point-based image editing. Extensive experiments demonstrate that our method is superior to the DragGAN and enables stable point-based editing in challenging scenarios with similar structures, fine details, or under multi-point targets.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Reasoning over the Behaviour of Objects in Video-Clips for Adverb-Type Recognition
- **Authors:** Amrit Diggavi Seshadri, Alessandra Russo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Symbolic Computation (cs.SC)
- **Arxiv link:** https://arxiv.org/abs/2307.04132
- **Pdf link:** https://arxiv.org/pdf/2307.04132
- **Abstract**
In this work, following the intuition that adverbs describing scene-sequences are best identified by reasoning over high-level concepts of object-behavior, we propose the design of a new framework that reasons over object-behaviours extracted from raw-video-clips to recognize the clip's corresponding adverb-types. Importantly, while previous works for general scene adverb-recognition assume knowledge of the clips underlying action-types, our method is directly applicable in the more general problem setting where the action-type of a video-clip is unknown. Specifically, we propose a novel pipeline that extracts human-interpretable object-behaviour-facts from raw video clips and propose novel symbolic and transformer based reasoning methods that operate over these extracted facts to identify adverb-types. Experiment results demonstrate that our proposed methods perform favourably against the previous state-of-the-art. Additionally, to support efforts in symbolic video-processing, we release two new datasets of object-behaviour-facts extracted from raw video clips - the MSR-VTT-ASP and ActivityNet-ASP datasets.
### An Examination of Wearable Sensors and Video Data Capture for Human Exercise Classification
- **Authors:** Ashish Singh, Antonio Bevilacqua, Timilehin B. Aderinola, Thach Le Nguyen, Darragh Whelan, Martin O'Reilly, Brian Caulfield, Georgiana Ifrim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.04516
- **Pdf link:** https://arxiv.org/pdf/2307.04516
- **Abstract**
Wearable sensors such as Inertial Measurement Units (IMUs) are often used to assess the performance of human exercise. Common approaches use handcrafted features based on domain expertise or automatically extracted features using time series analysis. Multiple sensors are required to achieve high classification accuracy, which is not very practical. These sensors require calibration and synchronization and may lead to discomfort over longer time periods. Recent work utilizing computer vision techniques has shown similar performance using video, without the need for manual feature engineering, and avoiding some pitfalls such as sensor calibration and placement on the body. In this paper, we compare the performance of IMUs to a video-based approach for human exercise classification on two real-world datasets consisting of Military Press and Rowing exercises. We compare the performance using a single camera that captures video in the frontal view versus using 5 IMUs placed on different parts of the body. We observe that an approach based on a single camera can outperform a single IMU by 10 percentage points on average. Additionally, a minimum of 3 IMUs are required to outperform a single camera. We observe that working with the raw data using multivariate time series classifiers outperforms traditional approaches based on handcrafted or automatically extracted features. Finally, we show that an ensemble model combining the data from a single camera with a single IMU outperforms either data modality. Our work opens up new and more realistic avenues for this application, where a video captured using a readily available smartphone camera, combined with a single sensor, can be used for effective human exercise classification.
## Keyword: raw image
There is no result | 2.0 | process
microstructure measurements by employing a point cloud representation tractgeonet can directly utilize pointwise tissue microstructure and positional information from all points within a fiber tract to improve regression performance we propose a novel loss function the paired siamese regression loss which encourages the model to focus on accurately predicting the relative differences between regression label scores rather than just their absolute values in addition we propose a critical region localization algorithm to identify highly predictive anatomical regions within the white matter fiber tracts for the regression task we evaluate the effectiveness of the proposed method by predicting individual performance on two neuropsychological assessments of language using a dataset of association white matter fiber tracts from subjects from the human connectome project the results demonstrate superior prediction performance of tractgeonet compared to several popular regression models of the twenty tracts studied we find that the left arcuate fasciculus tract is the most highly predictive of the two studied language performance assessments the localized critical regions are widespread and distributed across both hemispheres and all cerebral lobes including areas of the brain considered important for language function such as superior and anterior temporal regions pars opercularis and precentral gyrus overall tractgeonet demonstrates the potential of geometric deep learning to enhance the study of the brain s white matter fiber tracts and to relate their structure to human traits such as language performance lightweight improved residual network for efficient inverse tone mapping authors liqi xue tianyi xu yongbao song yan liu lei zhang xiantong zhen jun xu subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract the display devices like televisions are increasingly prevalent in our daily life for visualizing 
high dynamic range hdr images but the majority of media images on the internet remain in bit standard dynamic range sdr format therefore converting sdr images to hdr ones by inverse tone mapping itm is crucial to unlock the full potential of abundant media images however existing itm methods are usually developed with complex network architectures requiring huge computational costs in this paper we propose a lightweight improved residual network irnet by enhancing the power of popular residual block for efficient itm specifically we propose a new improved residual block irb to extract and fuse multi layer features for fine grained hdr image reconstruction experiments on three benchmark datasets demonstrate that our irnet achieves state of the art performance on both the itm and joint sr itm tasks the code models and data will be publicly available at visible and infrared self supervised fusion trained on a single example authors nati ofir subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper addresses the problem of visible rgb to near infrared nir image fusion multispectral imaging is an important task relevant to image processing and computer vision even more since the development of the rgbt sensor while the visible image sees color and suffers from noise haze and clouds the nir channel captures a clearer picture and it is significantly required by applications such as dehazing or object detection the proposed approach fuses these two aligned channels by training a convolutional neural network cnn by a self supervised learning ssl on a single example for each such pair rgb and ir the network is trained for seconds to deduce the final fusion the ssl is based on sturcture of similarity ssim loss combined with edge preservation ep loss the labels for the ssl are the input channels themselves this fusion preserves the relevant detail of each spectral channel while not based on a heavy training process in the experiments 
section the proposed approach achieves better qualitative and quantitative multispectral fusion results with respect to other recent methods that are not based on large dataset training marine debris detection in satellite surveillance using attention mechanisms authors ao shen yijie zhu richard jiang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract marine debris is an important issue for environmental protection but current methods for locating marine debris are yet limited in order to achieve higher efficiency and wider applicability in the localization of marine debris this study tries to combine the instance segmentation of with different attention mechanisms and explores the best model by utilizing a labelled dataset consisting of satellite images containing ocean debris we examined three attentional models including lightweight coordinate attention cbam combining spatial and channel focus and bottleneck transformer based on self attention box detection assessment revealed that cbam achieved the best outcome score of compared to coordinate attention score of and bottleneck transformer both scores around mask evaluation showed cbam again leading with an score of whereas coordinate attention and had comparable performances around score of and bottleneck transformer lagged behind at score of these findings suggest that cbam offers optimal suitability for detecting marine debris however it should be noted that the bottleneck transformer detected some areas missed by manual annotation and displayed better mask precision for larger debris pieces signifying potentially superior practical performance sparsevsr lightweight and noise robust visual speech recognition authors adriana fernandez lopez honglie chen pingchuan ma alexandros haliassos stavros petridis maja pantic subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract recent advances in deep neural networks have achieved unprecedented success in 
visual speech recognition however there remains substantial disparity between current methods and their deployment in resource constrained devices in this work we explore different magnitude based pruning techniques to generate a lightweight model that achieves higher performance than its dense model equivalent especially under the presence of visual noise our sparse models achieve state of the art results at sparsity on the dataset and outperform the dense equivalent up to sparsity we evaluate our sparse model on different visual noise types and achieve an overall absolute improvement of more than wer compared to the dense equivalent our results confirm that sparse networks are more resistant to noise than dense networks freedrag point tracking is not you need for interactive point based image editing authors pengyang ling lin chen pan zhang huaian chen yi jin subjects computer vision and pattern recognition cs cv human computer interaction cs hc machine learning cs lg arxiv link pdf link abstract to serve the intricate and varied demands of image editing precise and flexible manipulation of image content is indispensable recently draggan has achieved impressive editing results through point based manipulation however we have observed that draggan struggles with miss tracking where draggan encounters difficulty in effectively tracking the desired handle points and ambiguous tracking where the tracked points are situated within other regions that bear resemblance to the handle points to deal with the above issues we propose freedrag which adopts a feature oriented approach to free the burden on point tracking within the point oriented methodology of draggan the freedrag incorporates adaptive template features line search and fuzzy localization techniques to perform stable and efficient point based image editing extensive experiments demonstrate that our method is superior to the draggan and enables stable point based editing in challenging scenarios with similar 
structures fine details or under multi point targets keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw reasoning over the behaviour of objects in video clips for adverb type recognition authors amrit diggavi seshadri alessandra russo subjects computer vision and pattern recognition cs cv artificial intelligence cs ai symbolic computation cs sc arxiv link pdf link abstract in this work following the intuition that adverbs describing scene sequences are best identified by reasoning over high level concepts of object behavior we propose the design of a new framework that reasons over object behaviours extracted from raw video clips to recognize the clip s corresponding adverb types importantly while previous works for general scene adverb recognition assume knowledge of the clips underlying action types our method is directly applicable in the more general problem setting where the action type of a video clip is unknown specifically we propose a novel pipeline that extracts human interpretable object behaviour facts from raw video clips and propose novel symbolic and transformer based reasoning methods that operate over these extracted facts to identify adverb types experiment results demonstrate that our proposed methods perform favourably against the previous state of the art additionally to support efforts in symbolic video processing we release two new datasets of object behaviour facts extracted from raw video clips the msr vtt asp and activitynet asp datasets an examination of wearable sensors and video data capture for human exercise classification authors ashish singh antonio bevilacqua timilehin b aderinola thach le nguyen darragh whelan martin o reilly brian caulfield georgiana ifrim subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract wearable sensors such as inertial measurement units imus are often used to assess the 
performance of human exercise common approaches use handcrafted features based on domain expertise or automatically extracted features using time series analysis multiple sensors are required to achieve high classification accuracy which is not very practical these sensors require calibration and synchronization and may lead to discomfort over longer time periods recent work utilizing computer vision techniques has shown similar performance using video without the need for manual feature engineering and avoiding some pitfalls such as sensor calibration and placement on the body in this paper we compare the performance of imus to a video based approach for human exercise classification on two real world datasets consisting of military press and rowing exercises we compare the performance using a single camera that captures video in the frontal view versus using imus placed on different parts of the body we observe that an approach based on a single camera can outperform a single imu by percentage points on average additionally a minimum of imus are required to outperform a single camera we observe that working with the raw data using multivariate time series classifiers outperforms traditional approaches based on handcrafted or automatically extracted features finally we show that an ensemble model combining the data from a single camera with a single imu outperforms either data modality our work opens up new and more realistic avenues for this application where a video captured using a readily available smartphone camera combined with a single sensor can be used for effective human exercise classification keyword raw image there is no result | 1 |
114,223 | 17,195,802,445 | IssuesEvent | 2021-07-16 17:10:03 | harrinry/carbon | https://api.github.com/repos/harrinry/carbon | opened | CVE-2020-8203 (High) detected in lodash-4.17.15.tgz | security vulnerability | ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: carbon/package.json</p>
<p>Path to vulnerable library: carbon/node_modules/@commitlint/ensure/node_modules/lodash/package.json,carbon/node_modules/@commitlint/load/node_modules/lodash/package.json,carbon/node_modules/@commitlint/lint/node_modules/lodash/package.json,carbon/node_modules/@commitlint/resolve-extends/node_modules/lodash/package.json,carbon/node_modules/@commitlint/cli/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- cli-8.3.5.tgz (Root Library)
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/harrinry/carbon/commit/94195156354fb4a892f42b4f0adb11e9d40c606b">94195156354fb4a892f42b4f0adb11e9d40c606b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-10-21</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@commitlint/cli:8.3.5;lodash:4.17.15","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-8203 (High) detected in lodash-4.17.15.tgz - ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.15.tgz</b></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.15.tgz</a></p>
<p>Path to dependency file: carbon/package.json</p>
<p>Path to vulnerable library: carbon/node_modules/@commitlint/ensure/node_modules/lodash/package.json,carbon/node_modules/@commitlint/load/node_modules/lodash/package.json,carbon/node_modules/@commitlint/lint/node_modules/lodash/package.json,carbon/node_modules/@commitlint/resolve-extends/node_modules/lodash/package.json,carbon/node_modules/@commitlint/cli/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- cli-8.3.5.tgz (Root Library)
- :x: **lodash-4.17.15.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/harrinry/carbon/commit/94195156354fb4a892f42b4f0adb11e9d40c606b">94195156354fb4a892f42b4f0adb11e9d40c606b</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-10-21</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.17.15","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@commitlint/cli:8.3.5;lodash:4.17.15","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_process | cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file carbon package json path to vulnerable library carbon node modules commitlint ensure node modules lodash package json carbon node modules commitlint load node modules lodash package json carbon node modules commitlint lint node modules lodash package json carbon node modules commitlint resolve extends node modules lodash package json carbon node modules commitlint cli node modules lodash package json dependency hierarchy cli tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution attack when using zipobjectdeep in lodash before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type 
upgrade version origin a href release date fix resolution lodash isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree commitlint cli lodash isminimumfixversionavailable true minimumfixversion lodash basebranches vulnerabilityidentifier cve vulnerabilitydetails prototype pollution attack when using zipobjectdeep in lodash before vulnerabilityurl | 0 |
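The advisory above flags lodash as a transitive dependency (pulled in via @commitlint/cli), so bumping lodash directly in the repo's own dependency list would not change the resolved version. One common remediation, sketched here as an assumption rather than taken from the issue itself, is to force the transitive version from the root package.json using npm's `overrides` field (Yarn's `resolutions` serves the same purpose):

```json
{
  "overrides": {
    "lodash": "4.17.19"
  }
}
```

4.17.19 is the fix version named in the advisory; any later 4.17.x patch release also clears CVE-2020-8203. Note that `overrides` requires a reasonably recent npm (8.3 or later); the alternative, as the suggested fix implies, is waiting for the root library to ship with a patched lodash.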
15,514 | 19,703,266,930 | IssuesEvent | 2022-01-12 18:52:19 | googleapis/java-conformance-tests | https://api.github.com/repos/googleapis/java-conformance-tests | opened | Your .repo-metadata.json file has a problem 🤒 | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'client_documentation' in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* must have required property 'client_documentation' in .repo-metadata.json
* release_level must be equal to one of the allowed values in .repo-metadata.json
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 must have required property client documentation in repo metadata json release level must be equal to one of the allowed values in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions | 1 |
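The two scan failures above correspond to a missing required key and an invalid enum value. A minimal .repo-metadata.json that would satisfy them might look as follows; every value here is a placeholder, and only the two flagged keys (`client_documentation`, `release_level`) come from the scan output. The exact set of required fields and the allowed `release_level` values are defined by the go/github-automation checker, not by this sketch:

```json
{
  "name": "java-conformance-tests",
  "language": "java",
  "repo": "googleapis/java-conformance-tests",
  "release_level": "stable",
  "client_documentation": "https://example.com/docs/java-conformance-tests"
}
```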
12,216 | 14,742,994,185 | IssuesEvent | 2021-01-07 13:14:20 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Client trying to pay through portal | anc-process anp-0.5 ant-support | In GitLab by @kdjstudios on Jun 27, 2019, 14:59
**Submitted by:** Michelle Mckee <michelle.mckee@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-27-32816/conversation
**Server:** Internal
**Client/Site:** SA Hosted
**Account:** T-SAB-1133
**Issue:**
I have an SA Hosted client who is trying to pay through the SABilling portal but he keeps getting an error message. I tried sending him a new invite to the portal but he said that he is still getting the same message.
He said that it says communication error under auth code.
The client is Answer Ally account T-SAB-1133 | 1.0 | Client trying to pay through portal - In GitLab by @kdjstudios on Jun 27, 2019, 14:59
**Submitted by:** Michelle Mckee <michelle.mckee@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-27-32816/conversation
**Server:** Internal
**Client/Site:** SA Hosted
**Account:** T-SAB-1133
**Issue:**
I have an SA Hosted client who is trying to pay through the SABilling portal but he keeps getting an error message. I tried sending him a new invite to the portal but he said that he is still getting the same message.
He said that it says communication error under auth code.
The client is Answer Ally account T-SAB-1133 | process | client trying to pay through portal in gitlab by kdjstudios on jun submitted by michelle mckee helpdesk server internal client site sa hosted account t sab issue i have an sa hosted client who is trying to pay through the sabilling portal but he keeps getting an error message i tried sending him a new invite to the portal but he said that he is still getting the same message he said that it says communication error under auth code the client is answer ally account t sab | 1 |
7,887 | 11,053,778,985 | IssuesEvent | 2019-12-10 12:08:06 | code4romania/expert-consultation-api | https://api.github.com/repos/code4romania/expert-consultation-api | closed | [Documents] Implement new tree data structure for document structure | document processing documents enhancement java spring | After discussing the parsing logic for documents, we came to the conclusion that the Document, Chapter, Article data model is too rigid to achieve the task we want to accomplish.
We came up with a new design:
- the document breakdown needs to be a tree structure
- the nodes of the tree must be the same data structure
- we are going to define a new data structure - DocumentSection/DocumentNode
- the fields of the new data structure are: id, parent id, content, type, order
- DocumentConsolidated will contain: a DocumentMetadata and a DocumentSection
- each document section can have a Comment list associated with it
Linked to #47 | 1.0 | [Documents] Implement new tree data structure for document structure - After discussing the parsing logic for documents, we came to the conclusion that the Document, Chapter, Article data model is too rigid to achieve the task we want to accomplish.
We came up with a new design:
- the document breakdown needs to be a tree structure
- the nodes of the tree must be the same data structure
- we are going to define a new data structure - DocumentSection/DocumentNode
- the fields of the new data structure are: id, parent id, content, type, order
- DocumentConsolidated will contain: a DocumentMetadata and a DocumentSection
- each document section can have a Comment list associated with it
Linked to #47 | process | implement new tree data structure for document structure after discussing the parsing logic for documents we came to the conclusion that the document chapter article data model is too rigid to achieve the task we want to accomplish we came up with a new design the document breakdown needs to be a tree structure the nodes of the tree must be the same data structure we are going to define a new data structure documentsection documentnode the fields of the new data structure are id parent id content type order documentconsolidated will contain a documentmetadata and a documentsection each document section can have a comment list associated with it linked to | 1 |
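The node layout described in the issue (one self-similar DocumentSection type carrying id, parent id, content, type, and order, plus an attached comment list) can be sketched as a flat-rows-to-tree build. The project itself is Java/Spring, but the shape of the structure is language-agnostic; this illustration uses Python, and the helper name `build_tree` is hypothetical, not part of the codebase:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DocumentSection:
    """One node of the document tree; every level uses the same structure."""
    id: int
    parent_id: Optional[int]          # None for the root section
    content: str
    type: str                         # e.g. "chapter", "article", "paragraph"
    order: int                        # position among siblings
    children: List["DocumentSection"] = field(default_factory=list)
    comments: List[str] = field(default_factory=list)  # each section can carry comments

def build_tree(sections: List[DocumentSection]) -> Optional[DocumentSection]:
    """Link flat (id, parent_id) rows into a tree, sorting siblings by `order`."""
    by_id = {s.id: s for s in sections}
    root = None
    for s in sections:
        if s.parent_id is None:
            root = s
        else:
            by_id[s.parent_id].children.append(s)
    for s in sections:
        s.children.sort(key=lambda c: c.order)
    return root
```

Because every node is the same type, arbitrarily deep breakdowns (chapter inside chapter, article inside section) fall out for free, which is exactly the flexibility the old Document/Chapter/Article model lacked.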
1,077 | 3,541,518,735 | IssuesEvent | 2016-01-19 01:40:21 | e-government-ua/i | https://api.github.com/repos/e-government-ua/i | closed | Rework the SubjectHuman entity together with SubjectContact and the related message services | active In process of testing test _wf-central | - [x] 1) In the SubjectContact entity, add an sDate field of type "date" that is set automatically when a record is created and on any update of it. (If automatic updating is not possible, implement this in the entity's corresponding methods.)
- [x] 2) In addition to the sMail field of the SubjectMessage entity, add an optional field nID_SubjectContact_Mail that links records to the SubjectContact entity.
- [x] 3) On every new record added to the SubjectMessage entity, when an e-mail address is present (the sMail field):
- [x] 3.1) if nID_Subject is absent, using the internal mechanism of the /syncSubject service:
- synchronize the user (via the sCode_Subject identifier, given as sMail, with the corresponding nID_SubjectHumanIdType for e-mail), and write the resulting nID_Subject into the corresponding field of SubjectMessage and SubjectContact.
- [x] 3.2) if nID_Subject is present, fetch the Subject and SubjectHuman objects for it, and then:
- if no contacts are attached yet: attach a new SubjectContact record (with the appropriate type, the value from sMail, and nID_Subject) and make it the default via the nID_SubjectContact_DefaultEmail field of the SubjectHuman entity.
- if contacts are attached but this address is not among them: attach a new SubjectContact record (with the appropriate type and the value from sMail and nID_Subject) and make it the default via the nID_SubjectContact_DefaultEmail field of the SubjectHuman entity.
- if such a record already exists: simply update the sDate field of that SubjectContact record to the current date,
- [x] 3.3) after the mechanism of step 3.2 has run, write the nID of that SubjectContact record into the nID_SubjectContact_Mail field of the SubjectMessage entity.
IMPORTANT:
- extract the mechanism of steps 3.1 and 3.2 into a separate method right away, for later reuse from other parts of the code.
- once the new storage mechanism is implemented, do not duplicate the e-mail address in the "sMail" field
- [x] 4) Rework the SubjectMessage entity so that, when it is served, the sMail field is built and returned through the linked Subject - SubjectContact structure (rather than taken from the stored field in the database)
- [x] 5) Write a service that migrates the data already saved in the sMail field of the SubjectMessage entity into the newly linked structure (clearing each migrated value from the original sMail field of SubjectMessage afterwards)
- [x] 6) ONLY after all data has been fully migrated per step 5, drop the sMail field from the SubjectMessage entity via a changeset
| 1.0 | Rework the SubjectHuman entity together with SubjectContact and the related message services - - [x] 1) In the SubjectContact entity, add an sDate field of type "date" that is set automatically when a record is created and on any update of it. (If automatic updating is not possible, implement this in the entity's corresponding methods.)
- [x] 2) In addition to the sMail field of the SubjectMessage entity, add an optional field nID_SubjectContact_Mail that links records to the SubjectContact entity.
- [x] 3) On every new record added to the SubjectMessage entity, when an e-mail address is present (the sMail field):
- [x] 3.1) if nID_Subject is absent, using the internal mechanism of the /syncSubject service:
- synchronize the user (via the sCode_Subject identifier, given as sMail, with the corresponding nID_SubjectHumanIdType for e-mail), and write the resulting nID_Subject into the corresponding field of SubjectMessage and SubjectContact.
- [x] 3.2) if nID_Subject is present, fetch the Subject and SubjectHuman objects for it, and then:
- if no contacts are attached yet: attach a new SubjectContact record (with the appropriate type, the value from sMail, and nID_Subject) and make it the default via the nID_SubjectContact_DefaultEmail field of the SubjectHuman entity.
- if contacts are attached but this address is not among them: attach a new SubjectContact record (with the appropriate type and the value from sMail and nID_Subject) and make it the default via the nID_SubjectContact_DefaultEmail field of the SubjectHuman entity.
- if such a record already exists: simply update the sDate field of that SubjectContact record to the current date,
- [x] 3.3) after the mechanism of step 3.2 has run, write the nID of that SubjectContact record into the nID_SubjectContact_Mail field of the SubjectMessage entity.
IMPORTANT:
- extract the mechanism of steps 3.1 and 3.2 into a separate method right away, for later reuse from other parts of the code.
- once the new storage mechanism is implemented, do not duplicate the e-mail address in the "sMail" field
- [x] 4) Rework the SubjectMessage entity so that, when it is served, the sMail field is built and returned through the linked Subject - SubjectContact structure (rather than taken from the stored field in the database)
- [x] 5) Write a service that migrates the data already saved in the sMail field of the SubjectMessage entity into the newly linked structure (clearing each migrated value from the original sMail field of SubjectMessage afterwards)
- [x] 6) ONLY after all data has been fully migrated per step 5, drop the sMail field from the SubjectMessage entity via a changeset
| process | rework the subjecthuman entity together with subjectcontact and the corresponding message services add an sdate field of type date to the subjectcontact entity set automatically when a record is created and on any update if automatic updating is impossible implement this in the corresponding entity methods in addition to the smail field of the subjectmessage entity add an optional nid subjectcontact mail field used to link records in the subjectcontact entity on every new record added to the subjectmessage entity when an e mail address is present the smail field if nid subject is absent using the internal mechanism of the syncsubject service synchronize the user via the scode subject identifier given as smail and the corresponding nid subjecthumanidtype type for e mail and write the resulting nid subject into the corresponding field of subjectmessage and subjectcontact if nid subject is present fetch the subject and subjecthuman objects by it and then if no contact is linked at all link a new subjectcontact record with the corresponding type the value from smail and nid subject and make it the default via the nid subjectcontact defaultemail field of the subjecthuman entity if there are linked contacts but this address is not among them link a new subjectcontact record with the corresponding type the value from smail and nid subject and make it the default via the nid subjectcontact defaultemail field of the subjecthuman entity if such a record already exists simply update the sdate field of the subjectcontact entity to the current date after the mechanism of item has run write the nid of that subjectcontact record into the nid subjectcontact mail field of the subjectmessage entity important immediately extract the mechanism of items and into a separate method for further use from other parts of the code after the new storage mechanism is implemented do not duplicate the e mail address in the smail field rework the subjectmessage entity so that when it is returned the smail field is built and returned through the linked subject subjectcontact structure rather than taken from the stored field in the database write a service that migrates the data already stored in the smail field of the subjectmessage entity into the newly linked structure and afterwards clears each migrated value in the original smail field of the subjectmessage entity only after all data has been fully migrated per item remove the smail field from the subjectmessage entity via a changeset | 1
41,872 | 10,685,920,429 | IssuesEvent | 2019-10-22 13:34:18 | cerner/terra-toolkit | https://api.github.com/repos/cerner/terra-toolkit | closed | Axe Randomly Fails to Inject on IE Selenium Grid 3.14.0 | 3rd-party-defect accessibility | # Bug Report
## Description:
This is an issue related to the Selenium 3.14.0 IE driver and the axe command.
When injecting axe on the test page, the axe command crashes the current Selenium session. This is not an issue for all test URLs, but for certain ones; although these URLs seem random, errors can be seen consistently on them.
## Details:
When calling `Terra.it.isAccessibly()`, the Terra axe command is leveraged. The axe command:
- Checks if `axe-core` exists on the DOM and synchronously injects it on the page if it does not exist.
- For some test URLs, the axe-core injection is silently failing. (We currently do not check the result.)
- Then, it asynchronously runs axe to determine the accessibility results.
- Because the `axe-core` injection failed, axe is not available to run.
- Because this is an asynchronous execution, a callback must be invoked to signal that the function is finished.
- Currently, `axe-core` will call this callback when it finishes.
- However, because the run failed, the entire Selenium session crashes and terminates itself, which leaves Selenium unable to take the failure screenshots.
- Then, wdio sees this as a test failure and moves on to the next Mocha test to execute, which then fails with:

## What needs to be done:
1. Investigation is needed to understand why injecting axe-core on the test page fails for various test pages, and only for Selenium 3.14.
   - A potential reason could be Selenium issue https://github.com/SeleniumHQ/selenium/issues/6538: a quick Ctrl+F on the axe-core file we inject into the DOM shows that axe does use the textContent attribute.
2. Add try-catch logic to the runAxeTest method to ensure a callback is always called in browser.executeAsync, to prevent crashing/terminating the Selenium session.
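A minimal sketch of the try-catch idea in item 2: the helper below wraps the browser-side script passed to an async-execute call so the final Selenium callback is always invoked, even when axe-core is missing or throws. The helper name and the error-payload shape are assumptions for illustration; the real runAxeTest lives in terra-toolkit's own JavaScript.

```python
def wrap_async_script(js_body):
    """Wrap a browser-side script so the executeAsync callback always fires.

    `js_body` is expected to call `done(result)` itself on success; the
    wrapper guarantees `done` still fires if the body throws, so the
    Selenium session is not crashed or left waiting for a callback.
    """
    return (
        "var done = arguments[arguments.length - 1];\n"
        "try {\n"
        + js_body + "\n"
        + "} catch (err) {\n"
        "  done({ error: String(err) });\n"
        "}"
    )

# A wdio call would then look roughly like:
# browser.executeAsync(wrap_async_script(
#     "axe.run(function (err, results) { done(results); });"))
```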
## Environment
* Browser Name and Version: IE 11 Selenium Grid Driver 3.14.0-helium
## @ Mentions
@ryanthemanuel @mjhenkes | 1.0 | Axe Randomly Fails to Inject on IE Selenium Grid 3.14.0 - # Bug Report
@ryanthemanuel @mjhenkes | non_process | axe randomly fails to inject on ie selenium grid bug report description this is an issue related to selenium ie driver and the axe command when injecting axe on the test page the axe command crashing the current selenium session this is not an issue for all test urls but for certain test urls although these urls seem random errors can be seen consistently on these urls details when calling terra it isaccessibly the terra axe command is leveraged the axe command checks if axe core exists on the dom and will synchronous inject it on the page if it does not exist for some test urls the axe core injection is silently failing we currently do not check the result then it asynchronous runs axe to determine the accessibility results because the axe core injection failed axe is not available to run because this is an asynchronous execution a callback must be invoked to signal the func is finished currently the axe core will call this callback when it finishes however because the run failed the entire selenium session bonks and terminates itself which results in selenium unable to take the failure screenshots then wdio sees this as a test failure and moves onto the next mocha test to execute which then fails with what needs to be done investigation is needed to understand why injecting axe core on the test page is failing for various test pages and only for selenium a potential reason could be selenium issue doing a quick ctrl f on the axe core file we inject in the dom axe does uses the textcontent attribute add try catch logic to the runaxetest method to ensure a callback is always called in browser executeasync to prevent crashing terminating the selenium session environment browser name and version ie selenium grid driver helium mentions ryanthemanuel mjhenkes | 0 |
20,211 | 26,801,730,653 | IssuesEvent | 2023-02-01 15:32:02 | googleapis/python-documentai-toolbox | https://api.github.com/repos/googleapis/python-documentai-toolbox | closed | Your .repo-metadata.json file has a problem 🤒 | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* client_documentation must match pattern "^https://.*" in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
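The failing check boils down to a regular-expression match on the client_documentation field. A minimal sketch of that validation — the pattern is taken verbatim from the message above, while the function and the dict-based input are hypothetical, not the actual linter code:

```python
import re

DOC_URL_PATTERN = re.compile(r"^https://.*")

def lint_repo_metadata(metadata):
    """Return a list of problems for a parsed .repo-metadata.json dict."""
    problems = []
    value = metadata.get("client_documentation", "")
    if not DOC_URL_PATTERN.match(value):
        problems.append(
            'client_documentation must match pattern "^https://.*" in .repo-metadata.json'
        )
    return problems
```

A value without a scheme (e.g. a bare domain path) fails the match; prefixing "https://" resolves the lint error.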
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions. | 1.0 | Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
* Reach out to **go/github-automation** if you have any questions. | process | your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 client documentation must match pattern in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions | 1 |
6,831 | 9,975,250,758 | IssuesEvent | 2019-07-09 12:40:33 | varietywalls/variety | https://api.github.com/repos/varietywalls/variety | opened | [dev process] Configure CI testing | dev process | We can test building, deb packaging, and run our (admittedly somewhat limited) test suite automatically. This will give at least some peace of mind regarding the non-UI parts of Variety while developing, and also notify us early when external services we depend on go down or change. | 1.0 | [dev process] Configure CI testing - We can test building, deb packaging, and run our (admittedly somewhat limited) test suite automatically. This will give at least some peace of mind regarding the non-UI parts of Variety while developing, and also notify us early when external services we depend on go down or change. | process | configure ci testing we can test building deb packaging and run our admittedly somewhat limited test suite automatically this will give at least some peace of mind regarding the non ui parts of variety while developing and also notify us early when external services we depend on go down or change | 1 |
16,587 | 21,635,630,447 | IssuesEvent | 2022-05-05 13:59:59 | scikit-learn/scikit-learn | https://api.github.com/repos/scikit-learn/scikit-learn | closed | Confusing error message in OrdinalEncoder with None-encoded missing values | Bug module:preprocessing | Sister issue for #16702 (`OneHotEncoder`)
## Code to reproduce
```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
df = pd.DataFrame({"cat_feature": ["a", None, "b", "a"]})
OrdinalEncoder().fit(df)
```
## Observed result
Got: TypeError: '<' not supported between instances of 'str' and 'NoneType'
Full traceback:
<details>
```python-traceback
TypeError Traceback (most recent call last)
~/code/scikit-learn/sklearn/preprocessing/_label.py in _encode(values, uniques, encode, check_unknown)
111 try:
--> 112 res = _encode_python(values, uniques, encode)
113 except TypeError:
~/code/scikit-learn/sklearn/preprocessing/_label.py in _encode_python(values, uniques, encode)
59 if uniques is None:
---> 60 uniques = sorted(set(values))
61 uniques = np.array(uniques, dtype=values.dtype)
TypeError: '<' not supported between instances of 'str' and 'NoneType'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-35-eb249f0af3d2> in <module>
4
5 df = pd.DataFrame({"cat_feature": ["a", None, "b", "a"]})
----> 6 OrdinalEncoder().fit(df)
~/code/scikit-learn/sklearn/preprocessing/_encoders.py in fit(self, X, y)
673 self
674 """
--> 675 self._fit(X)
676
677 return self
~/code/scikit-learn/sklearn/preprocessing/_encoders.py in _fit(self, X, handle_unknown)
84 Xi = X_list[i]
85 if self.categories == 'auto':
---> 86 cats = _encode(Xi)
87 else:
88 cats = np.array(self.categories[i], dtype=Xi.dtype)
~/code/scikit-learn/sklearn/preprocessing/_label.py in _encode(values, uniques, encode, check_unknown)
112 res = _encode_python(values, uniques, encode)
113 except TypeError:
--> 114 raise TypeError("argument must be a string or number")
115 return res
116 else:
TypeError: argument must be a string or number
```
</details>
## Expected result
A more informative `ValueError`, for instance:
```
ValueError: OrdinalEncoder does not accept None typed values. Missing values should be imputed first, for instance using sklearn.preprocessing.SimpleImputer.
```
Maybe we could even include the URL of some FAQ or example that shows how to deal with a mix of str and None typed values and use the following prior to Ordinal Encoding:
```python
SimpleImputer(strategy="constant", missing_values=None, fill_value="missing")
``` | 1.0 | Confusing error message in OrdinalEncoder with None-encoded missing values - Sister issue for #16702 (`OneHotEncoder`)
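The failure and the suggested fix can be illustrated without scikit-learn at all. Below is a pure-Python sketch of roughly what category discovery does and how constant imputation sidesteps the comparison error; it is illustrative only, not OrdinalEncoder's actual implementation:

```python
values = ["a", None, "b", "a"]

# Category discovery does roughly sorted(set(values)), which is what raises:
try:
    sorted(set(values))
except TypeError as exc:
    print(exc)  # the "'<' not supported ..." TypeError from the report

# Constant imputation first (the SimpleImputer(strategy="constant") idea):
imputed = ["missing" if v is None else v for v in values]
categories = sorted(set(imputed))        # ['a', 'b', 'missing']
encoding = {cat: i for i, cat in enumerate(categories)}
encoded = [encoding[v] for v in imputed]
print(encoded)  # [0, 2, 1, 0]
```

With scikit-learn itself, the same idea is the snippet above: run the SimpleImputer step before OrdinalEncoder, e.g. inside a Pipeline.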
## Code to reproduce
```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
df = pd.DataFrame({"cat_feature": ["a", None, "b", "a"]})
OrdinalEncoder().fit(df)
```
## Observed result
Got: TypeError: '<' not supported between instances of 'str' and 'NoneType'
Full traceback:
<details>
```python-traceback
TypeError Traceback (most recent call last)
~/code/scikit-learn/sklearn/preprocessing/_label.py in _encode(values, uniques, encode, check_unknown)
111 try:
--> 112 res = _encode_python(values, uniques, encode)
113 except TypeError:
~/code/scikit-learn/sklearn/preprocessing/_label.py in _encode_python(values, uniques, encode)
59 if uniques is None:
---> 60 uniques = sorted(set(values))
61 uniques = np.array(uniques, dtype=values.dtype)
TypeError: '<' not supported between instances of 'str' and 'NoneType'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
<ipython-input-35-eb249f0af3d2> in <module>
4
5 df = pd.DataFrame({"cat_feature": ["a", None, "b", "a"]})
----> 6 OrdinalEncoder().fit(df)
~/code/scikit-learn/sklearn/preprocessing/_encoders.py in fit(self, X, y)
673 self
674 """
--> 675 self._fit(X)
676
677 return self
~/code/scikit-learn/sklearn/preprocessing/_encoders.py in _fit(self, X, handle_unknown)
84 Xi = X_list[i]
85 if self.categories == 'auto':
---> 86 cats = _encode(Xi)
87 else:
88 cats = np.array(self.categories[i], dtype=Xi.dtype)
~/code/scikit-learn/sklearn/preprocessing/_label.py in _encode(values, uniques, encode, check_unknown)
112 res = _encode_python(values, uniques, encode)
113 except TypeError:
--> 114 raise TypeError("argument must be a string or number")
115 return res
116 else:
TypeError: argument must be a string or number
```
</details>
## Expected result
A more informative `ValueError`, for instance:
```
ValueError: OrdinalEncoder does not accept None typed values. Missing values should be imputed first, for instance using sklearn.preprocessing.SimpleImputer.
```
Maybe we could even include the URL of some FAQ or example that shows how to deal with a mix of str and None typed values and use the following prior to Ordinal Encoding:
```python
SimpleImputer(strategy="constant", missing_values=None, fill_value="missing")
``` | process | confusing error message in ordinalencoder with none encoded missing values sister issue for onehotencoder code to reproduce python import pandas as pd from sklearn preprocessing import ordinalencoder df pd dataframe cat feature ordinalencoder fit df observed result got typeerror not supported between instances of str and nonetype full traceback python traceback typeerror traceback most recent call last code scikit learn sklearn preprocessing label py in encode values uniques encode check unknown try res encode python values uniques encode except typeerror code scikit learn sklearn preprocessing label py in encode python values uniques encode if uniques is none uniques sorted set values uniques np array uniques dtype values dtype typeerror not supported between instances of str and nonetype during handling of the above exception another exception occurred typeerror traceback most recent call last in df pd dataframe cat feature ordinalencoder fit df code scikit learn sklearn preprocessing encoders py in fit self x y self self fit x return self code scikit learn sklearn preprocessing encoders py in fit self x handle unknown xi x list if self categories auto cats encode xi else cats np array self categories dtype xi dtype code scikit learn sklearn preprocessing label py in encode values uniques encode check unknown res encode python values uniques encode except typeerror raise typeerror argument must be a string or number return res else typeerror argument must be a string or number expected result a more informative valueerror for instance valueerror ordinalencoder does not accept none typed values missing values should be imputed first for instance using sklearn preprocessing simpleimputer maybe we could even include the url of some faq or example that shows how to deal with a mix of str and none typed values and use the following prior to ordinal encoding python simpleimputer strategy constant missing values none fill value missing | 1 |
293,989 | 25,338,664,529 | IssuesEvent | 2022-11-18 19:14:33 | raupargor/Friendsn-t-Games | https://api.github.com/repos/raupargor/Friendsn-t-Games | closed | 8.3. Game sound: Tests | test Priority: low | Check that the sound elements work correctly for all players in an online game | 1.0 | 8.3. Game sound: Tests - Check that the sound elements work correctly for all players in an online game | non_process | game sound tests check that the sound elements work correctly for all players in an online game | 0
513 | 2,986,845,215 | IssuesEvent | 2015-07-20 08:16:53 | hbz/nwbib | https://api.github.com/repos/hbz/nwbib | closed | Some resources won't be shown within NWBib | bug deploy processing | Examples:
http://lobid.org/nwbib/HT017529028
http://lobid.org/nwbib/HT017529054
They can be found and listed in a result list, e.g. http://lobid.org/nwbib/search?set=HT012848847, but when you click on them it reads "Es ist ein Fehler aufgetreten. Bitte versuchen Sie es erneut oder kontaktieren Sie das Entwicklerteam, falls das Problem fortbesteht" ("An error occurred. Please try again or contact the development team if the problem persists."). | 1.0 | Some resources won't be shown within NWBib - Examples:
http://lobid.org/nwbib/HT017529028
http://lobid.org/nwbib/HT017529054
They can be found and listed in a result list, e.g. http://lobid.org/nwbib/search?set=HT012848847, but when you click on them it reads "Es ist ein Fehler aufgetreten. Bitte versuchen Sie es erneut oder kontaktieren Sie das Entwicklerteam, falls das Problem fortbesteht" ("An error occurred. Please try again or contact the development team if the problem persists."). | process | some resources won t be shown within nwbib examples they can be found and listed in a result list eg g but when you click on them it reads es ist ein fehler aufgetreten bitte versuchen sie es erneut oder kontaktieren sie das entwicklerteam falls das problem fortbesteht | 1
122,812 | 12,166,176,240 | IssuesEvent | 2020-04-27 08:50:04 | developer-student-club-thapar/slack-bots | https://api.github.com/repos/developer-student-club-thapar/slack-bots | closed | Update executable commands | documentation | Update how you need to run the Telegram bot!
@akshit-mee kindly do this!
| 1.0 | Update executable commands - Update how you need to run the Telegram bot!
@akshit-mee kindly do this!
| non_process | update executable commands update how you need to run the telegram bot akshit mee kindly do this | 0 |
12,915 | 15,287,891,135 | IssuesEvent | 2021-02-23 16:15:21 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Android] Continuous loading animator and fails to navigate to app when internet is disrupted during sign in | Android Auth server Blocker Bug P0 Process: Fixed Process: Tested QA Process: Tested dev | Steps:
1. Click on signin
2. Enter valid email and password
3. Switch off the internet connection
4. Click on signin
5. Switch on the internet connection and refresh
6. Observe continuous loading animator
Actual Result: The loading animator runs continuously and the user is not navigated further
Expected Result: The user should be navigated further, or a proper error message should be displayed along with navigation, so that the user can go back to the signin page and resubmit the details
Refer to the attached video
[Android_signin.zip](https://github.com/GoogleCloudPlatform/fda-mystudies/files/5339773/Android_signin.zip)
| 3.0 | [Android] Continuous loading animator and fails to navigate to app when internet is disrupted during sign in - Steps:
| process | continuous loading animator and fails to navigate to app when internet is disrupted during sign in steps click on signin enter valid email and password switch off the internet connection click on signin switch on the internet connection and refresh observe continuous loading animator actual result continuous loading animator and user is not navigated further expected result user should be navigated further or proper error message along with navigating should be displayed so as user can go back to signin page and resubmit details again refer attached video | 1 |