Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
204,921 | 23,291,809,614 | IssuesEvent | 2022-08-06 01:09:04 | easycv/easycv | https://api.github.com/repos/easycv/easycv | closed | CVE-2021-25290 (High) detected in Pillow-7.0.0-cp37-cp37m-manylinux1_x86_64.whl - autoclosed | security vulnerability | ## CVE-2021-25290 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Pillow-7.0.0-cp37-cp37m-manylinux1_x86_64.whl</b></summary>
<p>Python Imaging Library (Fork)</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f5/79/b2d5695d1a931474fa68b68ec93bdf08ba9acbc4d6b3b628eb6aac81d11c/Pillow-7.0.0-cp37-cp37m-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/f5/79/b2d5695d1a931474fa68b68ec93bdf08ba9acbc4d6b3b628eb6aac81d11c/Pillow-7.0.0-cp37-cp37m-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: easycv</p>
<p>Path to vulnerable library: easycv</p>
<p>
Dependency Hierarchy:
- :x: **Pillow-7.0.0-cp37-cp37m-manylinux1_x86_64.whl** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in Pillow before 8.1.1. In TiffDecode.c, there is a negative-offset memcpy with an invalid size.
</p>
<p>Publish Date: 2021-03-19</p>
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-25290">CVE-2021-25290</a></p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
<p>For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html">https://pillow.readthedocs.io/en/stable/releasenotes/8.1.1.html</a></p>
<p>Release Date: 2021-01-18</p>
<p>Fix Resolution: 8.1.1</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | | non_process | | 0 |
10,812 | 12,809,964,469 | IssuesEvent | 2020-07-03 17:03:49 | jptrrs/HumanResources | https://api.github.com/repos/jptrrs/HumanResources | closed | Prison Labor Incompatibility - Prisoners Learning Techs from Books | compatibility | So, I've found a _slight_ incompatibility with Prisoner Labor.
While it is completely possible to make prisoners in the Prison Labor mod document techs that they already know, there is no way to make them learn techs from books in linked bookshelves. Or any at all, really (i.e., the ones you get even if you start as a 'New Tribe').
Save-file editing so that the prisoner gets the appropriate tech (Stonecutting, in this case) lets them use the workbench that they weren't able to use before. (Rock Mill from Fertile Fields, and Electric Stonecutters Bench from Vanilla Furniture Expanded - Production, both with 'Make any Stone Blocks'.)
Steps taken to isolate issue: Prison Labor, all labors enabled, all schedules set to anything for prisoners, full motivation and near-full mood. Workbenches set up with and without prisoner-only bills, both within and outside of prison-only labor area zones. All workbenches had a bookshelf linked, and prisoners failed to work with any bench to learn techs whether the specified tech book was in or out of the bookshelf. No red errors showed up. I believe no yellow error showed up in the console, but I closed my game at the point where I'm typing this out, so I can't be certain. Nothing showed up in the post-game log file, at any rate, other than one colonist getting a hell of a lot of 10 jobs in one tick followed up by 10 'tried to learn a null project', but that's probably due to the tests that I was running, and the prisoner-only workbench and zone authorization.
I've noticed that prisoners can't research technologies either, but this seems less important, as you wouldn't want them to do so. I did the same tests for this, btw. Also, with Intellectual scores in the 0's and 1's, it'd take a hella long while anyways.
Fun note: You can set bills to create a second tech book, which I will make use of in the future for 'job class'-specific training rooms. All you need to do is drop the specific tech from a bookshelf and keep it un-allowed, and you can document it a second time. Good stuff. (The reason for the rooms and duplicate books: Everybody needs Electricity, but not everybody needs to know the GeneticRim techs.)
Minor note: It takes quite a bit of micro to put a specific book in a specific bookshelf. Fastest I've found is to eject/drop the book from the bookshelf, allow it, force a pawn to pick it up, draft them, move to the new bookshelf, drop the book, undraft, and force them to haul it. | True | | non_process | | 0 |
328,962 | 28,142,989,303 | IssuesEvent | 2023-04-02 06:21:05 | soochangoforit/BankingServer | https://api.github.com/repos/soochangoforit/BankingServer | closed | Write unit test code for adding and retrieving friends | Test | ## Feature Issue
> Write unit test code for adding and retrieving friends
## To-do
- [x] Write unit test code for adding and retrieving friends
## ETC
> None
| 1.0 | | non_process | | 0 |
11,646 | 14,499,867,522 | IssuesEvent | 2020-12-11 17:15:47 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | Add documentation examples for adding fields to a module | :Processors Stalled libbeat needs_docs needs_team | Normally you'd configure `fields` and `fields_under_root` as
```
- type: log
paths:
- /your_data_path
fields:
field1: aaa
field2: bbb
field3: ccc
fields_under_root: true
```
However there's no example in the documentation on how to do this when using modules. For example:
```
- module: apache2
access:
enabled: true
var.paths:
input:
fields_under_root: true
fields.field1: aaa
fields.field2: bbb
fields.field3: ccc
error:
enabled: true
var.paths:
input:
fields_under_root: true
fields.field1: aaa
fields.field2: bbb
fields.field3: ccc
```
| 1.0 | | process | | 1 |
5,225 | 8,029,190,387 | IssuesEvent | 2018-07-27 15:14:05 | threefoldfoundation/tf_app | https://api.github.com/repos/threefoldfoundation/tf_app | closed | Remove old BetterToken under TF Ecosystem | priority_major process_duplicate | Right now I find 2 - needs to be the new one (with round logo). Other one needs to be deleted. | 1.0 | | process | | 1 |
4,649 | 7,495,154,888 | IssuesEvent | 2018-04-07 17:45:31 | brucemiller/LaTeXML | https://api.github.com/repos/brucemiller/LaTeXML | closed | Add link to references to table of content in HTML output | enhancement postprocessing | It would be nice to have a link to the References section also in the table of content in the HTML output. | 1.0 | | process | | 1 |
5,770 | 8,614,069,206 | IssuesEvent | 2018-11-19 16:32:02 | material-components/material-components-ios | https://api.github.com/repos/material-components/material-components-ios | closed | Upgrade the minimum iOS version to iOS 9 | Needs actionability review type:Process | This was filed as an internal issue. If you are a Googler, please visit [b/118379171](http://b/118379171) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/118379171](http://b/118379171)
- Blocked by: https://github.com/material-components/material-components-ios/issues/5508 | 1.0 | | process | | 1 |
9,360 | 12,369,315,110 | IssuesEvent | 2020-05-18 15:05:02 | googleapis/google-cloud-go | https://api.github.com/repos/googleapis/google-cloud-go | opened | all: generate links in readme as a part of regen | type: process | Right now the table of product links to godoc is maintained by hand. There should be a way to generate this table with the help of the metadata we store for generation. | 1.0 | | process | | 1 |
14,173 | 17,088,494,747 | IssuesEvent | 2021-07-08 14:36:21 | googleapis/github-repo-automation | https://api.github.com/repos/googleapis/github-repo-automation | opened | Add test that describes filter behavior | type: process | It would be good to add a test that demonstrates and tests filter behavior, specifically when multiple filters are used.
Refs: https://github.com/googleapis/github-repo-automation/pull/519 | 1.0 | | process | | 1 |
1,837 | 4,643,640,141 | IssuesEvent | 2016-09-30 14:06:36 | opentrials/opentrials | https://api.github.com/repos/opentrials/opentrials | closed | Link to research summaries in HRA website | 4. Ready for Review Collectors Explorer Processors | Our HRA collector/processor uses their internal API to extract the data. This makes it much easier to write and maintain, but then our `source_url` points to the API url, not the human-readable url (e.g. https://stage.harp.org.uk/HARPApiExternal/api/ResearchSummaries instead of http://www.hra.nhs.uk/news/research-summaries/longterm-fu-study-of-botox-in-idiopathic-overactive-bladder-patients/).
# Tasks
- [x] Change the HRA `source_url` to point to the human-readable URL in the HRA website (e.g. http://www.hra.nhs.uk/news/research-summaries/longterm-fu-study-of-botox-in-idiopathic-overactive-bladder-patients/)
- [x] Create new "Research summaries" section in the trial page sidebar with a link to the HRA research summary named "NHS Health Research Authority (HRA)"
- [x] Don't display the publications with `source_id = 'hra'` in the "Publications" section in the trials page | 1.0 | | process | | 1 |
9,465 | 3,041,604,915 | IssuesEvent | 2015-08-07 22:35:11 | tanium/pytan | https://api.github.com/repos/tanium/pytan | closed | clean up open file handles | bug small testme | There are a number of file handles that are not properly being closed. Fix that | 1.0 | | non_process | | 0 |
162,392 | 6,152,277,468 | IssuesEvent | 2017-06-28 06:40:23 | apinf/openapi-designer | https://api.github.com/repos/apinf/openapi-designer | opened | Add project info popup | enhancement medium priority | We should make the info button in the top right corner open a popup similar to the about button in APInf:
 | 1.0 | Add project info popup - We should make the info button in the top right corner open a popup similar to the about button in APInf:
 | non_process | add project info popup we should make the info button in the top right corner open a popup similar to the about button in apinf | 0 |
326,352 | 27,986,056,288 | IssuesEvent | 2023-03-26 18:01:06 | facebook/react-native | https://api.github.com/repos/facebook/react-native | closed | TextInput textContentType='oneTimeCode' not working in React Native 0.62.2 and iOS 13.5 | Platform: iOS Stale Component: TextInput Needs: Author Feedback Needs: Verify on Latest Version | ## Description
TextInput's textContentType='oneTimeCode' shows the OTP above the keyboard but does not pass the value into the TextInput in React Native 0.62.2 and iOS 13.5.1.
## React Native version:
```
System:
OS: macOS 10.15.5
CPU: (4) x64 Intel(R) Core(TM) i5-7400 CPU @ 3.00GHz
Memory: 940.63 MB / 16.00 GB
Shell: 3.2.57 - /bin/bash
Binaries:
Node: 10.15.0 - /usr/local/bin/node
Yarn: 1.22.4 - /usr/local/bin/yarn
npm: 6.14.4 - /usr/local/bin/npm
Watchman: 4.9.0 - /usr/local/bin/watchman
Managers:
CocoaPods: 1.9.3 - /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms: iOS 13.5, DriverKit 19.0, macOS 10.15, tvOS 13.4, watchOS 6.2
Android SDK:
API Levels: 25, 28, 29
Build Tools: 28.0.3, 29.0.1
System Images: android-25 | Google APIs Intel x86 Atom, android-28 | Google APIs Intel x86 Atom, android-29 | Google APIs Intel x86 Atom
Android NDK: Not Found
IDEs:
Android Studio: 4.0 AI-193.6911.18.40.6514223
Xcode: 11.5/11E608c - /usr/bin/xcodebuild
Languages:
Java: 1.8.0_192 - /usr/bin/javac
Python: 2.7.16 - /usr/bin/python
npmPackages:
@react-native-community/cli: Not Found
react: ^16.11.0 => 16.13.1
react-native: ^0.62.2 => 0.62.2
```
## Steps To Reproduce
1. Keyboard pops up
2. OTP shows above the keyboard
3. When pressing the OTP, it doesn't pass the value to the text input
## Expected Results
The value is passed into the TextInput when the OTP above the keyboard is pressed, and the OTP suggestion above the keyboard is cleared.
## code example:
```
<TextInput
maxLength={6}
keyboardType='number-pad'
textContentType='oneTimeCode'
onChangeText={onOTPValueChange}
value={otpValue}
style={styles.otpInput}
autoFocus={true}
/>
```
## Videos
Here is error on iOS 13.5.1: [https://streamable.com/sj5emu](https://streamable.com/sj5emu)
No error on iOS 13.6: [https://streamable.com/wy01io](https://streamable.com/wy01io) | 1.0 | | non_process | | 0 |
5,647 | 8,513,502,624 | IssuesEvent | 2018-10-31 16:11:55 | dita-ot/dita-ot | https://api.github.com/repos/dita-ot/dita-ot | closed | Support RELAXNG parsing and validation | DITA 1.3 feature preprocess | Integrate George's RNG DTD compatibility libraries into the OT's DITA parsing code.
| 1.0 | | process | | 1 |
161,321 | 6,114,786,140 | IssuesEvent | 2017-06-22 02:52:52 | TheValarProject/AwakenDreamsClient | https://api.github.com/repos/TheValarProject/AwakenDreamsClient | closed | Incorporate French translation | enhancement priority-low | We received a French translation a while ago, and I would like it integrated into the mod. This can be done by editing `mcp/src/minecraft/assets/minecraft/fr_FR.lang`. Here are the translations:
[French_Translation.pdf](https://github.com/TheValarProject/AwakenDreamsClient/files/706420/French_Translation.pdf)
Note: The translations are old, and as a result do not include all of the blocks/items. Just adding the translations we already have will be sufficient until we find someone to continue the translations. | True | | non_process | | 0 |
6,358 | 9,415,938,689 | IssuesEvent | 2019-04-10 13:43:09 | brandon1roadgears/Interpreter-of-programming-language-of-Turing-Machine | https://api.github.com/repos/brandon1roadgears/Interpreter-of-programming-language-of-Turing-Machine | opened | Implement a function that checks the rules input for errors | C++ Work in process | #### Implement a function that validates the rules input. The program must report an error if:
1. The length of the current state is greater than allowed.
2. The length of the expected symbol, the new symbol, or the movement is greater than 1. | 1.0 | Implement a function that checks the rules input for errors - #### Implement a function that validates the rules input. The program must report an error if:
1. The length of the current state is greater than allowed.
2. The length of the expected symbol, the new symbol, or the movement is greater than 1. | process | implement a function that checks the rules input for errors implement a function that validates the rules input the program must report an error if the length of the current state is greater than allowed the length of the expected symbol the new symbol or the movement is greater than | 1 |
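The validation this issue asks for can be sketched independently of the project's C++ code. The Go sketch below assumes a hypothetical `rule` struct and a caller-supplied maximum state length, since the issue does not state the exact limit:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// rule is a hypothetical representation of one Turing-machine
// transition; field names are assumptions, not the project's own.
type rule struct {
	currentState string
	expected     string // symbol read from the tape
	newSymbol    string // symbol written to the tape
	movement     string // head movement, e.g. "L", "R", "N"
}

// validate reports an error for the conditions described in the
// issue: an over-long current state (longer than maxStateLen), or an
// expected symbol, new symbol, or movement longer than 1 character.
func validate(r rule, maxStateLen int) error {
	if utf8.RuneCountInString(r.currentState) > maxStateLen {
		return fmt.Errorf("current state %q is too long", r.currentState)
	}
	for name, field := range map[string]string{
		"expected symbol": r.expected,
		"new symbol":      r.newSymbol,
		"movement":        r.movement,
	} {
		if utf8.RuneCountInString(field) > 1 {
			return fmt.Errorf("%s %q is longer than 1 character", name, field)
		}
	}
	return nil
}

func main() {
	ok := rule{currentState: "q1", expected: "0", newSymbol: "1", movement: "R"}
	bad := rule{currentState: "q1", expected: "01", newSymbol: "1", movement: "R"}
	fmt.Println(validate(ok, 8) == nil, validate(bad, 8) != nil)
}
```

Counting runes rather than bytes keeps the length checks correct for non-ASCII tape symbols.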
462,883 | 13,255,605,983 | IssuesEvent | 2020-08-20 11:14:42 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | For users not receiving ads, "jiggling the handle" seems to fix it | OS/Android OS/Desktop QA/No bug feature/ads priority/P3 wontfix | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
These users report that if they turn rewards on and off they can receive ads again:
https://www.reddit.com/r/BATProject/comments/fieobk/havent_been_receiving_ads_this_month/
https://www.reddit.com/r/brave_browser/comments/fiiknt/not_getting_ads_anymore/
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1.
2.
3.
## Actual result:
<!--Please add screenshots if needed-->
## Expected result:
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release?
- Can you reproduce this issue with the beta channel?
- Can you reproduce this issue with the dev channel?
- Can you reproduce this issue with the nightly channel?
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields?
- Does the issue resolve itself when disabling Brave Rewards?
- Is the issue reproducible on the latest version of Chrome?
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
| 1.0 | For users not receiving ads, "jiggling the handle" seems to fix it - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
These users report that if they turn rewards on and off they can receive ads again:
https://www.reddit.com/r/BATProject/comments/fieobk/havent_been_receiving_ads_this_month/
https://www.reddit.com/r/brave_browser/comments/fiiknt/not_getting_ads_anymore/
## Steps to Reproduce
<!--Please add a series of steps to reproduce the issue-->
1.
2.
3.
## Actual result:
<!--Please add screenshots if needed-->
## Expected result:
## Reproduces how often:
<!--[Easily reproduced/Intermittent issue/No steps to reproduce]-->
## Brave version (brave://version info)
<!--For installed build, please copy Brave, Revision and OS from brave://version and paste here. If building from source please mention it along with brave://version details-->
## Version/Channel Information:
<!--Does this issue happen on any other channels? Or is it specific to a certain channel?-->
- Can you reproduce this issue with the current release?
- Can you reproduce this issue with the beta channel?
- Can you reproduce this issue with the dev channel?
- Can you reproduce this issue with the nightly channel?
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields?
- Does the issue resolve itself when disabling Brave Rewards?
- Is the issue reproducible on the latest version of Chrome?
## Miscellaneous Information:
<!--Any additional information, related issues, extra QA steps, configuration or data that might be necessary to reproduce the issue-->
| non_process | for users not receiving ads jiggling the handle seems to fix it have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description these users report that if they turn rewards on and off they can receive ads again steps to reproduce actual result expected result reproduces how often brave version brave version info version channel information can you reproduce this issue with the current release can you reproduce this issue with the beta channel can you reproduce this issue with the dev channel can you reproduce this issue with the nightly channel other additional information does the issue resolve itself when disabling brave shields does the issue resolve itself when disabling brave rewards is the issue reproducible on the latest version of chrome miscellaneous information | 0 |
3,472 | 6,551,366,700 | IssuesEvent | 2017-09-05 14:32:44 | pburns96/Revature-VenderBender | https://api.github.com/repos/pburns96/Revature-VenderBender | closed | As a customer, I can browse concerts. | High Priority Work In Process | Requirements:
-Concerts DAO
-Tables for listing concerts.
-Links to the concert listing page
| 1.0 | As a customer, I can browse concerts. - Requirements:
-Concerts DAO
-Tables for listing concerts.
-Links to the concert listing page
| process | as a customer i can browse concerts requirements concerts dao tables for listing concerts links to the concert listing page | 1 |
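The "Concerts DAO" requirement above can be sketched as a small data-access interface backing the listing page. All names and fields below are assumptions for illustration, not the project's actual code:

```go
package main

import "fmt"

// Concert is a hypothetical record shown on the concert listing page.
type Concert struct {
	ID     int
	Artist string
	Venue  string
}

// ConcertsDAO is the data-access object the listing page queries.
type ConcertsDAO interface {
	ListConcerts() ([]Concert, error)
}

// inMemoryConcertsDAO is a trivial implementation, enough for a
// customer to browse concerts before a database-backed DAO exists.
type inMemoryConcertsDAO struct {
	concerts []Concert
}

func (d *inMemoryConcertsDAO) ListConcerts() ([]Concert, error) {
	return d.concerts, nil
}

func main() {
	var dao ConcertsDAO = &inMemoryConcertsDAO{concerts: []Concert{
		{ID: 1, Artist: "Example Band", Venue: "Main Hall"},
	}}
	concerts, err := dao.ListConcerts()
	fmt.Println(len(concerts), err)
}
```

Keeping the page coded against the interface lets the in-memory store be swapped for a real database DAO later without touching the listing code.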
204,851 | 23,291,520,189 | IssuesEvent | 2022-08-06 00:14:59 | AOSC-Dev/aosc-os-abbs | https://api.github.com/repos/AOSC-Dev/aosc-os-abbs | opened | zlib: CVE-2022-37434 | security | ### CVE IDs
CVE-2022-37434
### Other security advisory IDs
N/A
### Description
"zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field. NOTE: only applications that call inflateGetHeader are affected. Some common applications bundle the affected zlib source code but may be unable to call inflateGetHeader (e.g., see the nodejs/node reference)."
See [CVE Record](https://www.cve.org/CVERecord?id=CVE-2022-37434).
### Patches
https://github.com/madler/zlib/commit/eff308af425b67093bab25f80f1ae950166bece1
### PoC(s)
N/A
| True | zlib: CVE-2022-37434 - ### CVE IDs
CVE-2022-37434
### Other security advisory IDs
N/A
### Description
"zlib through 1.2.12 has a heap-based buffer over-read or buffer overflow in inflate in inflate.c via a large gzip header extra field. NOTE: only applications that call inflateGetHeader are affected. Some common applications bundle the affected zlib source code but may be unable to call inflateGetHeader (e.g., see the nodejs/node reference)."
See [CVE Record](https://www.cve.org/CVERecord?id=CVE-2022-37434).
### Patches
https://github.com/madler/zlib/commit/eff308af425b67093bab25f80f1ae950166bece1
### PoC(s)
N/A
| non_process | zlib cve cve ids cve other security advisory ids n a description zlib through has a heap based buffer over read or buffer overflow in inflate in inflate c via a large gzip header extra field note only applications that call inflategetheader are affected some common applications bundle the affected zlib source code but may be unable to call inflategetheader e g see the nodejs node reference see patches poc s n a | 0 |
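Go's `compress/gzip` package exposes the same gzip header "extra" field that the vulnerable `inflateGetHeader` path parses in zlib's inflate.c. The sketch below only illustrates what that field is, using Go's bounds-checked implementation rather than the affected C code:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// roundTripExtra writes gzip data carrying a header "extra" field,
// the field whose parsing the CVE concerns, and reads the header
// back with Go's decoder, which length-checks the field for us.
func roundTripExtra(extra, payload []byte) ([]byte, []byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Extra = extra // gzip.Writer embeds gzip.Header
	if _, err := zw.Write(payload); err != nil {
		return nil, nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, nil, err
	}

	zr, err := gzip.NewReader(&buf) // header, including Extra, parsed here
	if err != nil {
		return nil, nil, err
	}
	defer zr.Close()
	out, err := io.ReadAll(zr)
	if err != nil {
		return nil, nil, err
	}
	return zr.Extra, out, nil
}

func main() {
	extra, out, err := roundTripExtra([]byte{1, 2, 3, 4}, []byte("hello"))
	fmt.Println(extra, string(out), err)
}
```

Note this does not reproduce the zlib bug; it only shows where the extra field lives in the format, which is what an attacker-controlled large field would target in the C code path.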
7,373 | 10,512,874,419 | IssuesEvent | 2019-09-27 19:02:35 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Add a way to create a suspended Process | api-needs-work area-System.Diagnostics.Process needs more info up-for-grabs | It seems that there is no way to create a suspended `Process` and resume it later.
Ability to create a suspended process is useful when you want to run the process and its children in a job object. If the process isn't created as suspended it may create child processes before it is assigned to the job object which means that these child processes won't be assigned to the job object.
| 1.0 | Add a way to create a suspended Process - It seems that there is no way to create a suspended `Process` and resume it later.
Ability to create a suspended process is useful when you want to run the process and its children in a job object. If the process isn't created as suspended it may create child processes before it is assigned to the job object which means that these child processes won't be assigned to the job object.
| process | add a way to create a suspended process it seems that there is no way to create a suspended process and resume it later ability to create a suspended process is useful when you want to run the process and its children in a job object if the process isn t created as suspended it may create child processes before it is assigned to the job object which means that these child processes won t be assigned to the job object | 1 |
9,053 | 12,130,295,488 | IssuesEvent | 2020-04-23 01:06:35 | GoogleCloudPlatform/python-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples | closed | remove gcp-devrel-py-tools from healthcare/api-client/hl7v2/requirements-test.txt | priority: p2 remove-gcp-devrel-py-tools type: process | remove gcp-devrel-py-tools from healthcare/api-client/hl7v2/requirements-test.txt | 1.0 | remove gcp-devrel-py-tools from healthcare/api-client/hl7v2/requirements-test.txt - remove gcp-devrel-py-tools from healthcare/api-client/hl7v2/requirements-test.txt | process | remove gcp devrel py tools from healthcare api client requirements test txt remove gcp devrel py tools from healthcare api client requirements test txt | 1 |
194,651 | 14,684,624,693 | IssuesEvent | 2021-01-01 04:04:04 | github-vet/rangeloop-pointer-findings | https://api.github.com/repos/github-vet/rangeloop-pointer-findings | closed | itsivareddy/terrafrom-Oci: oci/core_instance_configuration_test.go; 14 LoC | fresh small test |
Found a possible issue in [itsivareddy/terrafrom-Oci](https://www.github.com/itsivareddy/terrafrom-Oci) at [oci/core_instance_configuration_test.go](https://github.com/itsivareddy/terrafrom-Oci/blob/075608a9e201ee0e32484da68d5ba5370dfde1be/oci/core_instance_configuration_test.go#L559-L572)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to instanceConfigurationId is reassigned at line 563
[Click here to see the code in its original context.](https://github.com/itsivareddy/terrafrom-Oci/blob/075608a9e201ee0e32484da68d5ba5370dfde1be/oci/core_instance_configuration_test.go#L559-L572)
<details>
<summary>Click here to show the 14 line(s) of Go which triggered the analyzer.</summary>
```go
for _, instanceConfigurationId := range instanceConfigurationIds {
if ok := SweeperDefaultResourceId[instanceConfigurationId]; !ok {
deleteInstanceConfigurationRequest := oci_core.DeleteInstanceConfigurationRequest{}
deleteInstanceConfigurationRequest.InstanceConfigurationId = &instanceConfigurationId
deleteInstanceConfigurationRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "core")
_, error := computeManagementClient.DeleteInstanceConfiguration(context.Background(), deleteInstanceConfigurationRequest)
if error != nil {
fmt.Printf("Error deleting InstanceConfiguration %s %s, It is possible that the resource is already deleted. Please verify manually \n", instanceConfigurationId, error)
continue
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 075608a9e201ee0e32484da68d5ba5370dfde1be
| 1.0 | itsivareddy/terrafrom-Oci: oci/core_instance_configuration_test.go; 14 LoC -
Found a possible issue in [itsivareddy/terrafrom-Oci](https://www.github.com/itsivareddy/terrafrom-Oci) at [oci/core_instance_configuration_test.go](https://github.com/itsivareddy/terrafrom-Oci/blob/075608a9e201ee0e32484da68d5ba5370dfde1be/oci/core_instance_configuration_test.go#L559-L572)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message.
> reference to instanceConfigurationId is reassigned at line 563
[Click here to see the code in its original context.](https://github.com/itsivareddy/terrafrom-Oci/blob/075608a9e201ee0e32484da68d5ba5370dfde1be/oci/core_instance_configuration_test.go#L559-L572)
<details>
<summary>Click here to show the 14 line(s) of Go which triggered the analyzer.</summary>
```go
for _, instanceConfigurationId := range instanceConfigurationIds {
if ok := SweeperDefaultResourceId[instanceConfigurationId]; !ok {
deleteInstanceConfigurationRequest := oci_core.DeleteInstanceConfigurationRequest{}
deleteInstanceConfigurationRequest.InstanceConfigurationId = &instanceConfigurationId
deleteInstanceConfigurationRequest.RequestMetadata.RetryPolicy = getRetryPolicy(true, "core")
_, error := computeManagementClient.DeleteInstanceConfiguration(context.Background(), deleteInstanceConfigurationRequest)
if error != nil {
fmt.Printf("Error deleting InstanceConfiguration %s %s, It is possible that the resource is already deleted. Please verify manually \n", instanceConfigurationId, error)
continue
}
}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 075608a9e201ee0e32484da68d5ba5370dfde1be
| non_process | itsivareddy terrafrom oci oci core instance configuration test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message reference to instanceconfigurationid is reassigned at line click here to show the line s of go which triggered the analyzer go for instanceconfigurationid range instanceconfigurationids if ok sweeperdefaultresourceid ok deleteinstanceconfigurationrequest oci core deleteinstanceconfigurationrequest deleteinstanceconfigurationrequest instanceconfigurationid instanceconfigurationid deleteinstanceconfigurationrequest requestmetadata retrypolicy getretrypolicy true core error computemanagementclient deleteinstanceconfiguration context background deleteinstanceconfigurationrequest if error nil fmt printf error deleting instanceconfiguration s s it is possible that the resource is already deleted please verify manually n instanceconfigurationid error continue leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id | 0 |
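The analyzer above flags taking the address of the range loop variable (`&instanceConfigurationId`). Before Go 1.22 every iteration shared a single loop variable, so stored pointers could all alias the last value; the conventional mitigation is a per-iteration copy. This is a simplified sketch of that pattern, not the OCI client code:

```go
package main

import "fmt"

// collectIDs returns stable pointers to each element by copying the
// loop variable before taking its address. With a plain &id inside
// the loop, pre-1.22 Go would hand back pointers to one shared
// variable; the copy is safe under both the old and new semantics.
func collectIDs(ids []string) []*string {
	var out []*string
	for _, id := range ids {
		id := id // per-iteration copy; redundant but harmless on Go >= 1.22
		out = append(out, &id)
	}
	return out
}

func main() {
	ptrs := collectIDs([]string{"a", "b", "c"})
	for _, p := range ptrs {
		fmt.Print(*p, " ")
	}
	fmt.Println()
}
```

In the snippet the analyzer quotes, the pointer only needs to live for one iteration (it is dereferenced before the next loop turn), which is why such findings are often classified "mitigated" rather than bugs.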
4,982 | 7,816,287,498 | IssuesEvent | 2018-06-13 03:38:45 | caucse19/SE_TermProj | https://api.github.com/repos/caucse19/SE_TermProj | closed | Project Management Report | document process | Documents/PMR/ProgressHistory -> Documents/ProjectManagementReport: this rename is planned.
When writing the Project Managements Report, please leave your Experience here~ | 1.0 | Project Management Report - Documents/PMR/ProgressHistory -> Documents/ProjectManagementReport: this rename is planned.
When writing the Project Managements Report, please leave your Experience here~ | process | project management report documents pmr progresshistory documents projectmanagementreport this rename is planned when writing the project managements report please leave your experience here | 1 |
17,974 | 23,984,651,056 | IssuesEvent | 2022-09-13 17:56:43 | googleapis/google-cloud-node | https://api.github.com/repos/googleapis/google-cloud-node | opened | Configure auto-label by path | type: process | Incoming PRs will want to be labelled by path to the appropriate product. The config may need to be managed by a template so we don't need to keep updating the config when a new product/API is added. | 1.0 | Configure auto-label by path - Incoming PRs will want to be labelled by path to the appropriate product. The config may need to be managed by a template so we don't need to keep updating the config when a new product/API is added. | process | configure auto label by path incoming prs will want to be labelled by path to the appropriate product the config may need to be managed by a template so we don t need to keep updating the config when a new product api is added | 1 |
695,397 | 23,855,256,595 | IssuesEvent | 2022-09-06 22:27:04 | ModuSynth/meta | https://api.github.com/repos/ModuSynth/meta | opened | Create fake brands for nodes to reinforce immersion | Priority P3 | ## Reasons
Just as it is done in the _"Burnout Paradise"_ video games, having fake brands for nodes could reinforce the feeling of immersion and fun when creating music in the synthesizer. Every brand could have a custom design for knobs and ports to reinforce the differentiation.
## Needs
As a _user creating a synthesizer_
I want to be able to _differentiate my nodes based on their brand_
So that I can _be more convinced that there is a world around the synthesizer_ | 1.0 | Create fake brands for nodes to reinforce immersion - ## Reasons
Just as it is done in the _"Burnout Paradise"_ video games, having fake brands for nodes could reinforce the feeling of immersion and fun when creating music in the synthesizer. Every brand could have a custom design for knobs and ports to reinforce the differentiation.
## Needs
As a _user creating a synthesizer_
I want to be able to _differentiate my nodes based on their brand_
So that I can _be more convinced that there is a world around the synthesizer_ | non_process | create fake brands for nodes to reinforce immersion reasons just as it is done in the burnout paradise video games having fake brands for nodes could reinforce the feeling of immersion and fun when creating music in the synthesizer every brand could have a custom design for knobs and ports to reinforce the differentiation needs as a user creating a synthesizer i want to be able to differentiate my nodes based on their brand so that i can be more convinced that there is a world around the synthesizer | 0 |
228,315 | 25,182,810,181 | IssuesEvent | 2022-11-11 15:09:01 | rancher/rancher | https://api.github.com/repos/rancher/rancher | opened | [CVE-2022-3294][Kubernetes upstream] Node address isn't always verified when proxying | area/kubernetes area/security area/rke area/k3s area/rke2 team/area2 team/rke2 | This issue is to track upstream [CVE-2022-3294](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3294) in Kubernetes affecting the API server
Original upstream issue https://github.com/kubernetes/kubernetes/issues/113757.
---
A security issue was discovered in Kubernetes where users may have access to secure endpoints in the control plane network. Kubernetes clusters are only affected if an untrusted user can modify Node objects and send proxy requests to them.
Kubernetes supports node proxying, which allows clients of kube-apiserver to access endpoints of a Kubelet to establish connections to Pods, retrieve container logs, and more. While Kubernetes already validates the proxying address for Nodes, a bug in kube-apiserver made it possible to bypass this validation. Bypassing this validation could allow authenticated requests destined for Nodes to the API server's private network.
This issue has been rated medium and assigned CVE-2022-3294.
CVSS Rating: Medium (6.6) [CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H).
**Am I vulnerable?**
Clusters are affected by this vulnerability if there are endpoints that the kube-apiserver has connectivity to that users should not be able to access. This includes:
- kube-apiserver is in a separate network from worker nodes
- localhost services
mTLS services that accept the same client certificate as nodes may be affected. The severity of this issue depends on the privileges & sensitivity of the exploitable endpoints.
Clusters that configure the egress selector to use a proxy for cluster traffic may not be affected.
**How do I mitigate this vulnerability?**
Upgrading the kube-apiserver to a fixed version mitigates this vulnerability.
Aside from upgrading, configuring an [egress proxy for egress to the cluster network](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/) can mitigate this vulnerability.
**Affected Versions**
| Upstream Kubernetes | RKE | RKE2 | K3s |
| ---------------------- | ----| ------ | ---- |
| kube-apiserver <= v1.25.3 | Not available | <= v1.25.3+rke2r1<sup>2</sup> | <= v1.25.3+k3s1<sup>2</sup> |
| kube-apiserver <= v1.24.7 | <= v1.24.6-rancher1-1<sup>1</sup> | <= v1.24.7+rke2r1<sup>1</sup> | <= v1.24.7+k3s1<sup>1</sup> |
| kube-apiserver <= v1.23.13 | <= v1.23.12-rancher1-1<sup>1</sup> | <= v1.23.13+rke2r1<sup>1</sup> | <= v1.23.13+k3s1<sup>1</sup> |
| kube-apiserver <= v1.22.15 | <= v1.22.15-rancher1-1<sup>1</sup> | <= v1.22.15+rke2r2<sup>1</sup> | <= v1.22.15+k3s1<sup>1</sup> |
**Fixed Versions**
| Upstream Kubernetes | RKE | RKE2 | K3s |
| ---------------------- | ----| ------ | ---- |
| kube-apiserver v1.25.4 | Not available | v1.25.4-rc1+rke2r1<sup>2</sup> | Not available |
| kube-apiserver v1.24.8 | Not available | v1.24.8-rc1+rke2r1<sup>1</sup> | Not available |
| kube-apiserver v1.23.14 | Not available | v1.23.14-rc1+rke2r1<sup>1</sup> | Not available |
| kube-apiserver v1.22.16 | Not available | v1.22.16-rc1+rke2r1<sup>1</sup> | Not available |
<sup>1</sup> Not available in Rancher `>= v2.6.9` yet.
<sup>2</sup> Not supported in Rancher. | True | [CVE-2022-3294][Kubernetes upstream] Node address isn't always verified when proxying - This issue is to track upstream [CVE-2022-3294](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-3294) in Kubernetes affecting the API server
Original upstream issue https://github.com/kubernetes/kubernetes/issues/113757.
---
A security issue was discovered in Kubernetes where users may have access to secure endpoints in the control plane network. Kubernetes clusters are only affected if an untrusted user can modify Node objects and send proxy requests to them.
Kubernetes supports node proxying, which allows clients of kube-apiserver to access endpoints of a Kubelet to establish connections to Pods, retrieve container logs, and more. While Kubernetes already validates the proxying address for Nodes, a bug in kube-apiserver made it possible to bypass this validation. Bypassing this validation could allow authenticated requests destined for Nodes to the API server's private network.
This issue has been rated medium and assigned CVE-2022-3294.
CVSS Rating: Medium (6.6) [CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H).
**Am I vulnerable?**
Clusters are affected by this vulnerability if there are endpoints that the kube-apiserver has connectivity to that users should not be able to access. This includes:
- kube-apiserver is in a separate network from worker nodes
- localhost services
mTLS services that accept the same client certificate as nodes may be affected. The severity of this issue depends on the privileges & sensitivity of the exploitable endpoints.
Clusters that configure the egress selector to use a proxy for cluster traffic may not be affected.
**How do I mitigate this vulnerability?**
Upgrading the kube-apiserver to a fixed version mitigates this vulnerability.
Aside from upgrading, configuring an [egress proxy for egress to the cluster network](https://kubernetes.io/docs/tasks/extend-kubernetes/setup-konnectivity/) can mitigate this vulnerability.
**Affected Versions**
| Upstream Kubernetes | RKE | RKE2 | K3s |
| ---------------------- | ----| ------ | ---- |
| kube-apiserver <= v1.25.3 | Not available | <= v1.25.3+rke2r1<sup>2</sup> | <= v1.25.3+k3s1<sup>2</sup> |
| kube-apiserver <= v1.24.7 | <= v1.24.6-rancher1-1<sup>1</sup> | <= v1.24.7+rke2r1<sup>1</sup> | <= v1.24.7+k3s1<sup>1</sup> |
| kube-apiserver <= v1.23.13 | <= v1.23.12-rancher1-1<sup>1</sup> | <= v1.23.13+rke2r1<sup>1</sup> | <= v1.23.13+k3s1<sup>1</sup> |
| kube-apiserver <= v1.22.15 | <= v1.22.15-rancher1-1<sup>1</sup> | <= v1.22.15+rke2r2<sup>1</sup> | <= v1.22.15+k3s1<sup>1</sup> |
**Fixed Versions**
| Upstream Kubernetes | RKE | RKE2 | K3s |
| ---------------------- | ----| ------ | ---- |
| kube-apiserver v1.25.4 | Not available | v1.25.4-rc1+rke2r1<sup>2</sup> | Not available |
| kube-apiserver v1.24.8 | Not available | v1.24.8-rc1+rke2r1<sup>1</sup> | Not available |
| kube-apiserver v1.23.14 | Not available | v1.23.14-rc1+rke2r1<sup>1</sup> | Not available |
| kube-apiserver v1.22.16 | Not available | v1.22.16-rc1+rke2r1<sup>1</sup> | Not available |
<sup>1</sup> Not available in Rancher `>= v2.6.9` yet.
<sup>2</sup> Not supported in Rancher. | non_process | node address isn t always verified when proxying this issue is to track upstream in kubernetes affecting the api server original upstream issue a security issue was discovered in kubernetes where users may have access to secure endpoints in the control plane network kubernetes clusters are only affected if an untrusted user can modify node objects and send proxy requests to them kubernetes supports node proxying which allows clients of kube apiserver to access endpoints of a kubelet to establish connections to pods retrieve container logs and more while kubernetes already validates the proxying address for nodes a bug in kube apiserver made it possible to bypass this validation bypassing this validation could allow authenticated requests destined for nodes to to the api server s private network this issue has been rated medium and assigned cve cvss rating medium am i vulnerable clusters are affected by this vulnerability if there are endpoints that the kube apiserver has connectivity to that users should not be able to access this includes kube apiserver is in a separate network from worker nodes localhost services mtls services that accept the same client certificate as nodes may be affected the severity of this issue depends on the privileges sensitivity of the exploitable endpoints clusters that configure the egress selector to use a proxy for cluster traffic may not be affected how do i mitigate this vulnerability upgrading the kube apiserver to a fixed version mitigates this vulnerability aside from upgrading configuring an can mitigate this vulnerability affected versions upstream kubernetes rke kube apiserver kube apiserver kube apiserver kube apiserver fixed versions upstream kubernetes rke kube apiserver not available not available kube apiserver not available not available kube apiserver not available not available kube apiserver not available not available not available in rancher yet not supported 
in rancher | 0 |
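Deciding whether an installed kube-apiserver falls inside the affected ranges in the tables above is a numeric comparison of dotted version parts. A minimal sketch follows; it is not a full semver implementation, and pre-release or build metadata such as `-rc1` and `+rke2r1` is simply stripped:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compareVersions compares two dotted versions such as "1.25.3" and
// "1.25.4", returning -1, 0, or 1. A leading "v" and anything after
// the first "+" or "-" (e.g. "+rke2r1", "-rc1") is ignored.
func compareVersions(a, b string) int {
	parse := func(v string) []int {
		v = strings.TrimPrefix(v, "v")
		if i := strings.IndexAny(v, "+-"); i >= 0 {
			v = v[:i]
		}
		var nums []int
		for _, part := range strings.Split(v, ".") {
			n, _ := strconv.Atoi(part)
			nums = append(nums, n)
		}
		return nums
	}
	av, bv := parse(a), parse(b)
	for i := 0; i < len(av) && i < len(bv); i++ {
		switch {
		case av[i] < bv[i]:
			return -1
		case av[i] > bv[i]:
			return 1
		}
	}
	switch {
	case len(av) < len(bv):
		return -1
	case len(av) > len(bv):
		return 1
	}
	return 0
}

func main() {
	fmt.Println(compareVersions("v1.25.3+rke2r1", "1.25.4"), compareVersions("1.22.16", "1.22.15"))
}
```

So `v1.25.3+rke2r1` sorts before the fixed `1.25.4`, matching the "affected" row in the table.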
11,737 | 14,579,211,313 | IssuesEvent | 2020-12-18 06:49:39 | kubeflow/kubeflow | https://api.github.com/repos/kubeflow/kubeflow | closed | [Release 1.1] Multi-user release | area/engprod kind/feature kind/process lifecycle/stale priority/p2 | /kind process
Opening this issue to track releasing multi-user for Kubeflow 1.1 and would like to add area OWNERS for @bmorphism and @yanniszark
Per #5022 and Slack discussion in #release we need the following.
- manifests for GitOps-type deployment of Kubeflow for https://github.com/kubeflow/manifests/issues/1136
- more broadly, incl. for existing model, multi-user enhancements tracked in #4960 and worked on by @yanniszark
| 1.0 | [Release 1.1] Multi-user release - /kind process
Opening this issue to track releasing multi-user for Kubeflow 1.1 and would like to add area OWNERS for @bmorphism and @yanniszark
Per #5022 and Slack discussion in #release we need the following.
- manifests for GitOps-type deployment of Kubeflow for https://github.com/kubeflow/manifests/issues/1136
- more broadly, incl. for existing model, multi-user enhancements tracked in #4960 and worked on by @yanniszark
| process | multi user release kind process opening this issue to track releasing multi user for kubeflow and would like to add area owners for bmorphism and yanniszark per and slack discussion in release we need the following manifests for gitops type deployment of kubeflow for more broadly incl for existing model multi user enhancements tracked in and worked on by yanniszark | 1 |
19,084 | 25,130,135,248 | IssuesEvent | 2022-11-09 14:36:50 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | `getConfig` should treat `postgres` and `postgresql` as aliases | process/candidate kind/improvement topic: internal tech/engines team/schema | Let `schema` be a valid Prisma schema. Then:
- when the schema provider is `"postgres"`, `getConfig({ datamodel: schema }).datasources[0].provider` is `"postgres"`
- when the schema provider is `"postgresql"`, `getConfig({ datamodel: schema }).datasources[0].provider` is `"postgresql"`
- just because `"postgres"` and `"postgresql"` are interchangeable in the `provider` PSL attribute, it doesn't mean that `getConfig` should treat them as two separate providers.
In these Postgres(ql) cases, I think `getConfig` should treat one provider as an alias of the other, i.e., in both cases I'd expect `getConfig({ datamodel: schema }).datasources[0].provider` to be `"postgres"`.
This would improve type-safety in the CLI (see comment) and simplify logic from new validations (e.g., the one needed [here](https://github.com/prisma/prisma/issues/13076)) | 1.0 | `getConfig` should treat `postgres` and `postgresql` as aliases - Let `schema` be a valid Prisma schema. Then:
- when the schema provider is `"postgres"`, `getConfig({ datamodel: schema }).datasources[0].provider` is `"postgres"`
- when the schema provider is `"postgresql"`, `getConfig({ datamodel: schema }).datasources[0].provider` is `"postgresql"`
- just because `"postgres"` and `"postgresql"` are interchangeable in the `provider` PSL attribute, it doesn't mean that `getConfig` should treat them as two separate providers.
In these Postgres(ql) cases, I think `getConfig` should treat one provider as an alias of the other, i.e., in both cases I'd expect `getConfig({ datamodel: schema }).datasources[0].provider` to be `"postgres"`.
This would improve type-safety in the CLI (see comment) and simplify logic from new validations (e.g., the one needed [here](https://github.com/prisma/prisma/issues/13076)) | process | getconfig should treat postgres and postgresql as aliases let schema be a valid prisma schema then when the schema provider is postgres getconfig datamodel schema datasources provider is postgres when the schema provider is postgresql getconfig datamodel schema datasources provider is postgresql just because postgres and postgresql are interchangeable in the provider psl attribute it doesn t mean that getconfig should treat them as two separate providers in these postgres ql cases i think getconfig should treat one provider as an alias of the other i e in both cases iโd expect getconfig datamodel schema datasources provider to be postgres this would improve type safety in the cli see comment and simplify logic from new validations e g the one needed | 1 |
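A minimal sketch of the aliasing behavior the issue asks for — in Python purely for illustration (the real change would live in Prisma's engines/CLI), with the `PROVIDER_ALIASES` table and `canonical_provider` name being assumptions, not Prisma API:

```python
# Hypothetical alias table: both spellings collapse to one canonical provider.
PROVIDER_ALIASES = {
    "postgres": "postgres",
    "postgresql": "postgres",  # alias of "postgres", not a separate provider
}

def canonical_provider(provider: str) -> str:
    """Return the canonical name for a provider, leaving unknown ones as-is."""
    return PROVIDER_ALIASES.get(provider, provider)
```

With a mapping like this, `getConfig({ datamodel: schema }).datasources[0].provider` would report `"postgres"` for either spelling.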
8,019 | 11,206,845,685 | IssuesEvent | 2020-01-06 00:25:51 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | A ghost blinking window appears and suddenly disappears upon launch of processing tools which use QgsProcessingParameterDistance | Bug Feedback Processing | Author Name: **Andrea Giudiceandrea** (@agiudiceandrea)
Original Redmine Issue: [21622](https://issues.qgis.org/issues/21622)
Affected QGIS version: 3.7(master)
Redmine category:processing/gui
---
On Windows 7 64 bit with QGIS LTR 3.4.5-1 (89ee6f6e23) and QGIS master 3.7.0-10 (1205fbfa71), a ghost blinking window appears and suddenly disappears upon launch of all and only processing tools which use QgsProcessingParameterDistance for one or more parameters.
!Video_2019-03-20_035131.gif!
Some affected processing tools:
gdal:
Buffer vectors
One side buffer
Offset curve
qgis:
Create grid
Heatmap
Points along geometry
Points displacement
Pole of inaccessibility
Random points along line
Random points in extent
Random points in layer bounds
Random points inside polygons
Rectangles, ovals, diamonds (fixed)
...
native:
Array of offset (parallel) lines
Array of translated features
Extend lines
Line substring
Remove duplicate vertices
Snap points to grid
Transect
Translate
...
---
- [Video_2019-03-20_035131.gif](https://issues.qgis.org/attachments/download/14630/Video_2019-03-20_035131.gif) (Andrea Giudiceandrea) | 1.0 | A ghost blinking window appears and suddenly disappears upon launch of processing tools which use QgsProcessingParameterDistance - Author Name: **Andrea Giudiceandrea** (@agiudiceandrea)
Original Redmine Issue: [21622](https://issues.qgis.org/issues/21622)
Affected QGIS version: 3.7(master)
Redmine category:processing/gui
---
On Windows 7 64 bit with QGIS LTR 3.4.5-1 (89ee6f6e23) and QGIS master 3.7.0-10 (1205fbfa71), a ghost blinking window appears and suddenly disappears upon launch of all and only processing tools which use QgsProcessingParameterDistance for one or more parameters.
!Video_2019-03-20_035131.gif!
Some affected processing tools:
gdal:
Buffer vectors
One side buffer
Offset curve
qgis:
Create grid
Heatmap
Points along geometry
Points displacement
Pole of inaccessibility
Random points along line
Random points in extent
Random points in layer bounds
Random points inside polygons
Rectangles, ovals, diamonds (fixed)
...
native:
Array of offset (parallel) lines
Array of translated features
Extend lines
Line substring
Remove duplicate vertices
Snap points to grid
Transect
Translate
...
---
- [Video_2019-03-20_035131.gif](https://issues.qgis.org/attachments/download/14630/Video_2019-03-20_035131.gif) (Andrea Giudiceandrea) | process | a ghost blinking window appears and suddenly disappears upon launch of processing tools which use qgsprocessingparameterdistance author name andrea giudiceandrea agiudiceandrea original redmine issue affected qgis version master redmine category processing gui on windows bit with qgis ltr and qgis master a ghost blinking window appears and suddenly disappears upon launch of all and only processing tools which use qgsprocessingparameterdistance for one or more parameters video gif some affected processing tools gdal buffer vectors one side buffer offset curve qgis create grid heatmap points along geometry points displacement pole of inaccessibility random points along line random points in extent random points in layer bounds random points inside polygons rectangles ovals diamonds fixed native array of offset parallel lines array of translated features extend lines line substring remove duplicate vertices snap points to grid transect translate andrea giudiceandrea | 1 |
378,668 | 26,332,234,181 | IssuesEvent | 2023-01-10 11:43:19 | maplibre/maplibre-gl-native | https://api.github.com/repos/maplibre/maplibre-gl-native | opened | Evaluate using DocC for iOS documentation | documentation iOS | The Apple recommended way to provide documentation is [DocC](https://developer.apple.com/documentation/docc).
Worth taking a look at, I think, especially since the iOS docs are in need of a refresh anyway. | 1.0 | Evaluate using DocC for iOS documentation - The Apple recommended way to provide documentation is [DocC](https://developer.apple.com/documentation/docc).
Worth taking a look at, I think, especially since the iOS docs are in need of a refresh anyway. | non_process | evaluate using docc for ios documentation the apple recommended way to provide documentation is docc worth taking a look at i think especially since the ios docs are in need of a refresh anyway | 0 |
19,063 | 25,082,997,432 | IssuesEvent | 2022-11-07 21:02:36 | esmero/strawberry_runners | https://api.github.com/repos/esmero/strawberry_runners | closed | Add a NO POST PROCESSING json key (exception) to skip on a one-by-one level a certain post processor(s) | enhancement Post processor Plugins | # What?
e.g. you have a Book, all of it handwritten; the SBR rules say: run the pager and the OCR for all "pages" that have a "tiff", and these are tiffs.
We allow this
```JSON
"ap:tasks": {
"ap:nopost": [
"pager"
]
}
```
And that will skip the pager for that ADO.
Simple/cool
Remember kids we also have
```JSON
"ap:tasks": {
"ap:forcepost": true
}
```
That will force a re-processing even if it was processed before (only if the rules match of course). But `ap:nopost` wins. No reprocessing can be forced if we are skipping
Thanks
| 1.0 | Add a NO POST PROCESSING json key (exception) to skip on a one-by-one level a certain post processor(s) - # What?
e.g. you have a Book, all of it handwritten; the SBR rules say: run the pager and the OCR for all "pages" that have a "tiff", and these are tiffs.
We allow this
```JSON
"ap:tasks": {
"ap:nopost": [
"pager"
]
}
```
And that will skip the pager for that ADO.
Simple/cool
Remember kids we also have
```JSON
"ap:tasks": {
"ap:forcepost": true
}
```
That will force a re-processing even if it was processed before (only if the rules match of course). But `ap:nopost` wins. No reprocessing can be forced if we are skipping
Thanks
| process | add a no post processing json key exception to skip on a one by one level a certain post processor s what e g you have a book all is handwritten the sbr rules say run the pager and the ocr for all pages that have a tiff these are tiffs we allow this json ap tasks ap nopost pager and that will skip the pager for that ado simple cool remember kids we also have json ap tasks ap forcepost true that will force a re processing even if it was processed before only if the rules match of course but ap nopost wins no reprocessing can be forced if we are skipping thanks | 1 |
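The rules above can be sketched as a small decision helper — a hedged Python illustration in which `should_run` is a hypothetical function name; only the `ap:nopost` and `ap:forcepost` keys come from the issue, and `already_processed` is assumed to be tracked elsewhere:

```python
def should_run(processor: str, tasks: dict, already_processed: bool) -> bool:
    """Decide whether a post processor runs for an ADO, per the rules above:
    ap:nopost always wins; ap:forcepost re-runs even if processed before."""
    if processor in tasks.get("ap:nopost", []):
        return False  # explicitly skipped for this object, no override possible
    if already_processed:
        return bool(tasks.get("ap:forcepost", False))
    return True
```

`ap:nopost` wins unconditionally, and `ap:forcepost` only matters for objects that were already processed.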
120,294 | 17,644,081,674 | IssuesEvent | 2021-08-20 01:38:39 | DavidSpek/pipelines | https://api.github.com/repos/DavidSpek/pipelines | opened | CVE-2021-37660 (Medium) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl | security vulnerability | ## CVE-2021-37660 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: pipelines/contrib/components/openvino/ovms-deployer/containers/requirements.txt</p>
<p>Path to vulnerable library: pipelines/contrib/components/openvino/ovms-deployer/containers/requirements.txt,pipelines/samples/core/ai_platform/training</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can cause a floating point exception by calling inplace operations with crafted arguments that would result in a division by 0. The [implementation](https://github.com/tensorflow/tensorflow/blob/84d053187cb80d975ef2b9684d4b61981bca0c41/tensorflow/core/kernels/inplace_ops.cc#L283) has a logic error: it should skip processing if `x` and `v` are empty but the code uses `||` instead of `&&`. We have patched the issue in GitHub commit e86605c0a336c088b638da02135ea6f9f6753618. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37660>CVE-2021-37660</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cm5x-837x-jf3c">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cm5x-837x-jf3c</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37660 (Medium) detected in tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2021-37660 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ec/98/f968caf5f65759e78873b900cbf0ae20b1699fb11268ecc0f892186419a7/tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: pipelines/contrib/components/openvino/ovms-deployer/containers/requirements.txt</p>
<p>Path to vulnerable library: pipelines/contrib/components/openvino/ovms-deployer/containers/requirements.txt,pipelines/samples/core/ai_platform/training</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can cause a floating point exception by calling inplace operations with crafted arguments that would result in a division by 0. The [implementation](https://github.com/tensorflow/tensorflow/blob/84d053187cb80d975ef2b9684d4b61981bca0c41/tensorflow/core/kernels/inplace_ops.cc#L283) has a logic error: it should skip processing if `x` and `v` are empty but the code uses `||` instead of `&&`. We have patched the issue in GitHub commit e86605c0a336c088b638da02135ea6f9f6753618. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37660>CVE-2021-37660</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cm5x-837x-jf3c">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-cm5x-837x-jf3c</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in tensorflow whl cve medium severity vulnerability vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file pipelines contrib components openvino ovms deployer containers requirements txt path to vulnerable library pipelines contrib components openvino ovms deployer containers requirements txt pipelines samples core ai platform training dependency hierarchy x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an end to end open source platform for machine learning in affected versions an attacker can cause a floating point exception by calling inplace operations with crafted arguments that would result in a division by the has a logic error it should skip processing if x and v are empty but the code uses instead of we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step up your open source security game with whitesource | 0 |
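A simplified, non-TensorFlow illustration of the class of bug the advisory describes (the `rows_per_value` function and its data are hypothetical — this is not the actual `inplace_ops.cc` code): a guard over empty inputs written with the wrong boolean operator lets a division by zero through.

```python
def rows_per_value(rows, values):
    # Wrong-operator guard: it skips only when BOTH inputs are empty, so
    # rows == [] with non-empty values still falls through to the division.
    # A correct guard would skip when EITHER input is empty.
    if len(rows) == 0 and len(values) == 0:
        return []
    return [len(values) // len(rows)] * len(rows)
```

`rows_per_value([], [1])` raises `ZeroDivisionError`, which is the kind of crash the patched guard prevents.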
288,038 | 21,681,522,337 | IssuesEvent | 2022-05-09 07:09:58 | reflexsoar/reflex-api | https://api.github.com/repos/reflexsoar/reflex-api | closed | Alert grouping | type: documentation important release: v2022.04.00 | - [x] Create an event hash that will provide a signature of the hash which can be used to link events together when an event flood comes in | 1.0 | Alert grouping - - [x] Create an event hash that will provide a signature of the hash which can be used to link events together when an event flood comes in | non_process | alert grouping create an event hash that will provide a signature of the hash which can be used to link events together when an event flood comes in | 0 |
7,653 | 10,739,731,361 | IssuesEvent | 2019-10-29 16:52:19 | openopps/openopps-platform | https://api.github.com/repos/openopps/openopps-platform | opened | Bring work experience sort order over from USAJOBS | Apply Process Requirements Ready State Dept. | Who: Student internship applicants
What: default sort order for work experience
Why: in order to bring data over from USAJOBS in the same order
Acceptance Criteria:
USAJOBS now allows for the user to sort their work experience on One Profile.
- Pull the sort order from USAJOBS
- The work experience should default display in the same order as it comes over from USAJOBS
Screen shot of USAJOBS one profile:
 | 1.0 | Bring work experience sort order over from USAJOBS - Who: Student internship applicants
What: default sort order for work experience
Why: in order to bring data over from USAJOBS in the same order
Acceptance Criteria:
USAJOBS now allows for the user to sort their work experience on One Profile.
- Pull the sort order from USAJOBS
- The work experience should default display in the same order as it comes over from USAJOBS
Screen shot of USAJOBS one profile:
 | process | bring work experience sort order over from usajobs who student internship applicants what default sort order for work experience why in order to bring data over from usajobs in the same order acceptance criteria usajobs now allows for the user to sort their work experience on one profile pull the sort order from usajobs the work experience should default display in the same order as it comes over from usajobs screen shot of usajobs one profile | 1 |
60,396 | 6,690,105,340 | IssuesEvent | 2017-10-09 07:38:23 | Microsoft/vsts-tasks | https://api.github.com/repos/Microsoft/vsts-tasks | closed | Task 'Deploy Test Agent' fails: Test Agent installation/update stuck | Area: Test | I created a VS 2017 solution that executes UI tests (using Chrome through Selenium). I build and deploy this test solution through VSTS to a VM (Win Srv 2012, VS 2017 Community installed).
After upgrading the VM to Visual Studio 2017, the next deployment got stuck with the task 'Deploy test agent' during the installation of the Visual Studio 2017 test agent. The setup process remained at 100% CPU load and stayed that way for 36 hours. Then I killed the process. Further build runs went the same way.
In the end, I installed the Visual Studio 2017 test agent manually. From then on, the task to deploy the test agent succeeds as long as I do NOT enable 'Update Test Agent'.
For now this scenario is running ok, but in the long run the option to update the test agent should be working - let alone needing to manually install a test agent for any new test VM. | 1.0 | Task 'Deploy Test Agent' fails: Test Agent installation/update stuck - I created a VS 2017 solution that executes UI tests (using Chrome through Selenium). I build and deploy this test solution through VSTS to a VM (Win Srv 2012, VS 2017 Community installed).
After upgrading the VM to Visual Studio 2017, the next deployment got stuck with the task 'Deploy test agent' during the installation of the Visual Studio 2017 test agent. The setup process remained at 100% CPU load and stayed that way for 36 hours. Then I killed the process. Further build runs went the same way.
In the end, I installed the Visual Studio 2017 test agent manually. From then on, the task to deploy the test agent succeeds as long as I do NOT enable 'Update Test Agent'.
For now this scenario is running ok, but in the long run the option to update the test agent should be working - let alone needing to manually install a test agent for any new test VM. | non_process | task deploy test agent fails test agent installation update stuck i created a vs solution that executes ui tests using chrome through selenium i build and deploy this test solution through vsts to a vm win srv vs community installed after upgrading the vm to visual studio the next deployment got stuck with the task deploy test agent during the installation of the visual studio test agent the setup process remained at cpu load and stayed that way for hours then i killed the process further build runs went the same way in the end i installed the visual studio test agent manually from then on the task to deploy the test agent succeeds if i do not enable update test agent for now this scenario is running ok but in the long run the option to update the test agent should be working let alone needing to manually install a test agent for any new test vm | 0 |
4,819 | 2,751,273,991 | IssuesEvent | 2015-04-24 07:50:29 | uaoleg/icpc.org.ua | https://api.github.com/repos/uaoleg/icpc.org.ua | closed | Teams: filter list by "Out of competition" field | completed enhancement tested | Need a new column to filter the common list of teams by "Out of competition" field:
- All
- In competition
- Out of competition | 1.0 | Teams: filter list by "Out of competition" field - Need a new column to filter the common list of teams by "Out of competition" field:
- All
- In competition
- Out of competition | non_process | teams filter list by out of competition field need a new column to filter the common list of teams by out of competition field all in competition out of competition | 0 |
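A minimal Python sketch of the requested three-way filter (the `out_of_competition` field name and the `filter_teams` helper are assumptions for illustration, not the project's actual API):

```python
def filter_teams(teams, mode="all"):
    """mode is one of 'all', 'in', or 'out', matching the three filter options."""
    if mode == "all":
        return list(teams)
    want_out = (mode == "out")
    return [t for t in teams if t["out_of_competition"] == want_out]
```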
268,070 | 20,256,543,088 | IssuesEvent | 2022-02-15 00:10:31 | ImGabreuw/imersao-fullcycle-6 | https://api.github.com/repos/ImGabreuw/imersao-fullcycle-6 | closed | README | documentation good first issue | ## README
- Demonstration photo
- Brief description of the application
- Technologies used | 1.0 | README - ## README
- Demonstration photo
- Brief description of the application
- Technologies used | non_process | readme readme demonstration photo brief description of the application technologies used | 0 |
17,896 | 23,872,767,131 | IssuesEvent | 2022-09-07 16:06:49 | streamnative/flink | https://api.github.com/repos/streamnative/flink | closed | [SQL Connector] PROTOBUF_NATIVE format design and MVP | compute/data-processing | Write a PROTOBUF_NATIVE format design documentation and discuss with the team. | 1.0 | [SQL Connector] PROTOBUF_NATIVE format design and MVP - Write a PROTOBUF_NATIVE format design documentation and discuss with the team. | process | protobuf native format design and mvp write a protobuf native format design documentation and discuss with the team | 1 |
811,718 | 30,297,192,283 | IssuesEvent | 2023-07-10 00:34:23 | grpc/grpc | https://api.github.com/repos/grpc/grpc | opened | `pickle.loads` from pickled `AioRpcError` error | kind/bug lang/Python priority/P2 | <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
1.56.0
Python
### What operating system (Linux, Windows,...) and version?
Linux CentOS8
### What runtime / compiler are you using (e.g. python version or version of gcc)
Python3.10
### What did you do?
Please provide either 1) A unit test for reproducing the bug or 2) Specific steps for us to follow to reproduce the bug. If there's not enough information to debug the problem, gRPC team may close the issue at their discretion. You're welcome to re-open the issue once you have a reproduction.
```python
import pickle
import grpc
rpc_error = grpc.aio.AioRpcError(None, None, None)
s = pickle.dumps(rpc_error)
pickle.loads(s)
```
### What did you expect to see?
The `pickle.loads` runs successfully.
### What did you see instead?
```
Traceback (most recent call last):
File "/root/test.py", line 8, in <module>
pickle.loads(s)
TypeError: AioRpcError.__init__() missing 3 required positional arguments: 'code', 'initial_metadata', and 'trailing_metadata'
```
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
`AioRpcError` is a subclass of `Exception`, and `AioRpcError.__init__` takes three required arguments.
Because `BaseException` has a default `__reduce__` that does not handle extra `__init__` arguments, unpickling a pickled `AioRpcError` instance raises an error.
Currently I use a monkey patch for `AioRpcError.__reduce__`. It works, but it is not ideal. The best solution would be to fix this in the grpcio library itself.
```python
grpc.aio.AioRpcError.__reduce__ = lambda s: (
grpc.aio.AioRpcError,
(s._code, s._details, s._initial_metadata, s._trailing_metadata, s._debug_error_string),
)
```
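The same failure mode and fix can be demonstrated without grpcio installed, using a stand-in exception class (`DemoRpcError` is hypothetical — it only mimics `AioRpcError`'s shape: required positional arguments that are not forwarded to `Exception`):

```python
import pickle

class DemoRpcError(Exception):
    # Like AioRpcError: required args, not forwarded to Exception.__init__,
    # so BaseException's default __reduce__ would rebuild with no arguments
    # and unpickling would raise TypeError.
    def __init__(self, code, details, metadata):
        super().__init__()
        self._code = code
        self._details = details
        self._metadata = metadata

    def __reduce__(self):
        # The fix: tell pickle exactly how to reconstruct the instance.
        return (type(self), (self._code, self._details, self._metadata))

restored = pickle.loads(pickle.dumps(DemoRpcError(14, "unavailable", None)))
```

Without the `__reduce__` override, the `pickle.loads` call fails with the same `TypeError` shown in the traceback above.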
---
Why must I pickle and unpickle an `AioRpcError`? In my project, I need to make a grpc aio client call in **multiprocessing**, and if the subprocess raises an `AioRpcError`, the multiprocessing module transfers the exception to the main process by pickling and then unpickling it; the unpickle fails, which causes the subprocess to die abruptly. Here is an example:
```python
import multiprocessing
if __name__ == "__main__":
multiprocessing.set_start_method("forkserver")
import asyncio
from concurrent.futures import ProcessPoolExecutor
import grpc
def do_rpc():
rpc_error = grpc.aio.AioRpcError(None, None, None)
raise rpc_error
async def async_main():
executor = ProcessPoolExecutor(2)
await asyncio.get_running_loop().run_in_executor(executor, do_rpc)
def main():
asyncio.run(async_main())
if __name__ == "__main__":
main()
```
and the traceback:
```
concurrent.futures.process._RemoteTraceback:
'''
Traceback (most recent call last):
File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 387, in wait_result_broken_or_wakeup
result_item = result_reader.recv()
File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 251, in recv
return _ForkingPickler.loads(buf.getbuffer())
TypeError: AioRpcError.__init__() missing 3 required positional arguments: 'code', 'initial_metadata', and 'trailing_metadata'
'''
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/test2.py", line 28, in <module>
main()
File "/root/test2.py", line 24, in main
asyncio.run(async_main())
File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/root/test2.py", line 20, in async_main
await asyncio.get_running_loop().run_in_executor(executor, do_rpc)
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
```
Even worse, it causes the process pool broken.
| 1.0 | `pickle.loads` from pickled `AioRpcError` error - <!--
PLEASE DO NOT POST A QUESTION HERE.
This form is for bug reports and feature requests ONLY!
For general questions and troubleshooting, please ask/look for answers at StackOverflow, with "grpc" tag: https://stackoverflow.com/questions/tagged/grpc
For questions that specifically need to be answered by gRPC team members, please ask/look for answers at grpc.io mailing list: https://groups.google.com/forum/#!forum/grpc-io
Issues specific to *grpc-java*, *grpc-go*, *grpc-node*, *grpc-dart*, *grpc-web* should be created in the repository they belong to (e.g. https://github.com/grpc/grpc-LANGUAGE/issues/new)
-->
### What version of gRPC and what language are you using?
1.56.0
Python
### What operating system (Linux, Windows,...) and version?
Linux CentOS8
### What runtime / compiler are you using (e.g. python version or version of gcc)
Python3.10
### What did you do?
Please provide either 1) A unit test for reproducing the bug or 2) Specific steps for us to follow to reproduce the bug. If there's not enough information to debug the problem, gRPC team may close the issue at their discretion. You're welcome to re-open the issue once you have a reproduction.
```python
import pickle
import grpc
rpc_error = grpc.aio.AioRpcError(None, None, None)
s = pickle.dumps(rpc_error)
pickle.loads(s)
```
### What did you expect to see?
The `pickle.loads` runs successfully.
### What did you see instead?
```
Traceback (most recent call last):
File "/root/test.py", line 8, in <module>
pickle.loads(s)
TypeError: AioRpcError.__init__() missing 3 required positional arguments: 'code', 'initial_metadata', and 'trailing_metadata'
```
Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs).
See [TROUBLESHOOTING.md](https://github.com/grpc/grpc/blob/master/TROUBLESHOOTING.md) for how to diagnose problems better.
### Anything else we should know about your project / environment?
`AioRpcError` is a subclass of `Exception`, and `AioRpcError.__init__` takes three required arguments.
Because `BaseException` has a default `__reduce__` that does not handle extra `__init__` arguments, unpickling a pickled `AioRpcError` instance raises an error.
Currently I use a monkey patch for `AioRpcError.__reduce__`. It works, but it is not ideal. The best solution would be to fix this in the grpcio library itself.
```python
grpc.aio.AioRpcError.__reduce__ = lambda s: (
grpc.aio.AioRpcError,
(s._code, s._details, s._initial_metadata, s._trailing_metadata, s._debug_error_string),
)
```
---
Why must I pickle and unpickle an `AioRpcError`? In my project, I need to make a grpc aio client call in **multiprocessing**, and if the subprocess raises an `AioRpcError`, the multiprocessing module transfers the exception to the main process by pickling and then unpickling it; the unpickle fails, which causes the subprocess to die abruptly. Here is an example:
```python
import multiprocessing

if __name__ == "__main__":
    multiprocessing.set_start_method("forkserver")

import asyncio
from concurrent.futures import ProcessPoolExecutor

import grpc

def do_rpc():
    rpc_error = grpc.aio.AioRpcError(None, None, None)
    raise rpc_error

async def async_main():
    executor = ProcessPoolExecutor(2)
    await asyncio.get_running_loop().run_in_executor(executor, do_rpc)

def main():
    asyncio.run(async_main())

if __name__ == "__main__":
    main()
```
and the traceback:
```
concurrent.futures.process._RemoteTraceback:
'''
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 387, in wait_result_broken_or_wakeup
    result_item = result_reader.recv()
  File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 251, in recv
    return _ForkingPickler.loads(buf.getbuffer())
TypeError: AioRpcError.__init__() missing 3 required positional arguments: 'code', 'initial_metadata', and 'trailing_metadata'
'''

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/test2.py", line 28, in <module>
    main()
  File "/root/test2.py", line 24, in main
    asyncio.run(async_main())
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/root/test2.py", line 20, in async_main
    await asyncio.get_running_loop().run_in_executor(executor, do_rpc)
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
```
Even worse, it leaves the process pool broken.
315,909 | 27,116,288,233 | IssuesEvent | 2023-02-15 18:51:20 | wazuh/wazuh-qa | https://api.github.com/repos/wazuh/wazuh-qa | opened | Add environment management improvement tests | team/qa type/test-development status/not-tracked subteam/qa-main target/5.0.0 | | Target version | Related issue | Related PR/dev branch
|--------------------|--------------------|-----------------|
| 5.0 | https://github.com/wazuh/wazuh-qa/issues/3914 | https://github.com/wazuh/wazuh/pull/15847
<!-- Important: No section may be left blank. If not, delete it directly (in principle only "Configurations" and "Considerations" could be left blank in case of not proceeding). -->
## Description
<!-- Description that puts into context and shows the QA tester the changes that have been implemented and have to be tested. -->
Since we aim to automate the API tests, we need to add the `catalog` tests and cases. To design the cases we will use the dev-testing as a guide.
- https://github.com/wazuh/wazuh-qa/issues/3914
## Proposed test cases
<!-- Indicate the minimum test cases proposed by the developer. -->
The initial cases can be found within the dev-testing issues mentioned above.
## Considerations
<!-- Indicate considerations to take into account when performing the testing that may not be very intuitive.
-->
The testing will be performed on the MVP provided by the dev team.
251,433 | 8,015,362,168 | IssuesEvent | 2018-07-25 09:45:23 | Extum/flarum-theme-material | https://api.github.com/repos/Extum/flarum-theme-material | closed | Tags to be materialized with chips | component: chips enhancement materialization mdl priority: high | The standard tag design with sharp corners should be replaced with round chips.
MDL code and documentation below.
```
.mdl-chip {
height: 32px;
font-family: "Roboto", "Helvetica", "Arial", sans-serif;
line-height: 32px;
padding: 0 12px;
border: 0;
border-radius: 16px;
background-color: #dedede;
display: inline-block;
color: rgba(0,0,0, 0.87);
margin: 2px 0;
font-size: 0;
white-space: nowrap; }
.mdl-chip__text {
font-size: 13px;
vertical-align: middle;
display: inline-block; }
.mdl-chip__action {
height: 24px;
width: 24px;
background: transparent;
opacity: 0.54;
display: inline-block;
cursor: pointer;
text-align: center;
vertical-align: middle;
padding: 0;
margin: 0 0 0 4px;
font-size: 13px;
text-decoration: none;
color: rgba(0,0,0, 0.87);
border: none;
outline: none;
overflow: hidden; }
.mdl-chip__contact {
height: 32px;
width: 32px;
border-radius: 16px;
display: inline-block;
vertical-align: middle;
margin-right: 8px;
overflow: hidden;
text-align: center;
font-size: 18px;
line-height: 32px; }
.mdl-chip:focus {
outline: 0;
box-shadow: 0 2px 2px 0 rgba(0, 0, 0, 0.14), 0 3px 1px -2px rgba(0, 0, 0, 0.2), 0 1px 5px 0 rgba(0, 0, 0, 0.12); }
.mdl-chip:active {
background-color: #d6d6d6; }
.mdl-chip--deletable {
padding-right: 4px; }
.mdl-chip--contact {
padding-left: 0; }
```
Documentation: https://getmdl.io/components/index.html#chips-section
110,449 | 23,934,169,359 | IssuesEvent | 2022-09-11 01:19:37 | Pokecube-Development/Pokecube-Issues-and-Wiki | https://api.github.com/repos/Pokecube-Development/Pokecube-Issues-and-Wiki | closed | Wearables Render Incorrectly | Bug - Code Fixed | #### Issue Description:
- Wearables render incorrectly in the gui
- The wearables button does not disappear when switching tabs in creative.
#### What happens:
- When a wearable is equipped, it appears inverted.
- The wearables button does not disappear when switching tabs in creative.
#### What you expected to happen:
- Wearables should render correctly.
#### Steps to reproduce:
1. Equip a wearable
2. Observe in gui
...
____
#### Affected Versions:
- Pokecube AIO: dev
- Minecraft: 1.18.1
- Forge: 39.0.19


17,937 | 23,934,995,906 | IssuesEvent | 2022-09-11 04:47:15 | GregTechCEu/gt-ideas | https://api.github.com/repos/GregTechCEu/gt-ideas | opened | Random idea dump of 9-10-2022 (Oil Processing Overhaul, Natural Gas Processing) | processing chain | ## ghjfghjfghk
I have been doing some research on stuff for my modpack hypixel skyblock 7. However I have decided to share the research in case someone is interested in adding them to core gregtech.
## Oil Processing Overhaul
## Details
In my opinion oil processing in greg tech has a lot of room for improvement. This has already been suggested to Tech22 but I decided I will post it here as well. Oil processing in real life is much more complicated but also rewarding than what is currently in greg tech. The exact amounts of stuff that will be produced are not decided yet because I am not exactly a balancing expert. Some of the items and machines described here are not in the game yet.
## Products
Hydrocarbons (Propane, ethylene, benzene, etc.) Such as the ones you get from steam-cracking light fuel and distilling it with the current gregtech recipes
Sulfur (From hydrogen sulfide)
Coke
Diesel
Kerosene
## Steps
Steps here are copied from wikipedia
Desalter unit uses water to wash out salt from the crude oil before it is distilled.
Distillation tower turns desalted crude oil into -> Refinery Gas, Sulfuric Naphtha, Sulfuric Kerosene, Sulfuric Diesel, Refinery Residue (The amounts given will depend on whether you are using light crude oil, medium crude oil, and heavy crude oil)
Vacuum Distillation Tower turns refinery residue (liquid) into Vacuum Residue and Bitumen
Bitumen can be recycled by mixing it with naphtha to make heavy oil
Coker turns Vacuum Residue into Coke, Refinery Gas, Naphtha, and Sulfuric Gas
Sulfuric Diesel and Sulfuric Naphtha are treated with hydrogen to make Diesel and Naphtha respectively, while giving hydrogen sulfide which is used to make sulfur/sulfuric acid
Sulfuric Kerosene is treated with Sodium Hydroxide in a Merox treater to give kerosene
Processing refinery gas will be the same as it is currently (Treated with hydrogen and cracked with steam or hydrogen), however putting steam-cracked refinery gas in a distillation tower will produce the same things as the current steam-cracked light fuel distillation tower recipe.
Naphtha can no longer be distilled, it will be processed with a catalytic reformer
These steps are an oversimplified version of this diagram here because my understanding of petrochemistry is limited. A more complete realistic process will be found in the Sources area at the bottom.

## Yield
Depends
## Sources
https://en.wikipedia.org/wiki/Oil_refinery#Chemical_processes
https://en.wikipedia.org/wiki/Dilbit (Synthetic Crude Oil from Bitumen)
## Natural Gas Processing
Similarly, natural gas processing in real life is more rewarding but also complicated than what is currently in greg tech. I haven't had much time to research it but here are some diagrams and links.
https://en.wikipedia.org/wiki/Natural-gas_processing

9,297 | 12,308,473,946 | IssuesEvent | 2020-05-12 07:17:09 | HeRAMS-WHO/herams-backend | https://api.github.com/repos/HeRAMS-WHO/herams-backend | closed | Git Issues board for end users - ? | subject:process | Longer term: we need a way to capture suggestions and feature requests, and to manage those from end users
Short term: we'd like a solution like that to manage Yemen inputs
That definitely needs to be separate from dev discussions, but Git would probably be ideal.
Any idea @SamMousa?
19,843 | 4,445,753,099 | IssuesEvent | 2016-08-20 07:41:04 | PokemonGoMap/PokemonGo-Map | https://api.github.com/repos/PokemonGoMap/PokemonGo-Map | closed | Delay before starting to scan | issues are not for support! read the documentation | Can I make it so the bot doesn't start to scan before all accounts are logged in and ready?
249,755 | 26,968,556,530 | IssuesEvent | 2023-02-09 01:28:47 | meramsey/wizardwebssh | https://api.github.com/repos/meramsey/wizardwebssh | opened | paramiko-2.10.1-py2.py3-none-any.whl: 1 vulnerabilities (highest severity is: 4.8) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>paramiko-2.10.1-py2.py3-none-any.whl</b></p></summary>
<p></p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (paramiko version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2023-23931](https://www.mend.io/vulnerability-database/CVE-2023-23931) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.8 | cryptography-36.0.2-cp36-abi3-manylinux_2_24_x86_64.whl | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2023-23931</summary>
### Vulnerable Library - <b>cryptography-36.0.2-cp36-abi3-manylinux_2_24_x86_64.whl</b></p>
<p>cryptography is a package which provides cryptographic recipes and primitives to Python developers.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/01/86/4379b5eaafa5ea4b0081fa65a72849d6bba98e35c1da66f4b7a86878714d/cryptography-36.0.2-cp36-abi3-manylinux_2_24_x86_64.whl">https://files.pythonhosted.org/packages/01/86/4379b5eaafa5ea4b0081fa65a72849d6bba98e35c1da66f4b7a86878714d/cryptography-36.0.2-cp36-abi3-manylinux_2_24_x86_64.whl</a></p>
<p>
Dependency Hierarchy:
- paramiko-2.10.1-py2.py3-none-any.whl (Root Library)
- :x: **cryptography-36.0.2-cp36-abi3-manylinux_2_24_x86_64.whl** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
cryptography is a package designed to expose cryptographic primitives and recipes to Python developers. In affected versions `Cipher.update_into` would accept Python objects which implement the buffer protocol, but provide only immutable buffers. This would allow immutable objects (such as `bytes`) to be mutated, thus violating fundamental rules of Python and resulting in corrupted output. This now correctly raises an exception. This issue has been present since `update_into` was originally introduced in cryptography 1.8.
<p>Publish Date: 2023-02-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-23931>CVE-2023-23931</a></p>
</p>
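As a standard-library-only illustration of the bug class this CVE describes (a sketch of the principle, not cryptography's actual implementation): an API that writes into a caller-supplied buffer must reject read-only buffers, otherwise immutable objects such as `bytes` could be corrupted at the C level. The function name below is hypothetical.

```python
# Sketch of the fixed behavior: check buffer writability before writing.
def update_into_sketch(data: bytes, buf) -> int:
    view = memoryview(buf)
    if view.readonly:  # refuse immutable buffers instead of corrupting them
        raise TypeError("buffer must be writable, e.g. a bytearray")
    view[: len(data)] = data
    return len(data)

out = bytearray(16)
n = update_into_sketch(b"ciphertext", out)
print(n, bytes(out[:n]))  # 10 b'ciphertext'

try:
    update_into_sketch(b"ciphertext", b"\x00" * 16)  # bytes are immutable
except TypeError as exc:
    print("rejected:", exc)
```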
<p></p>
### CVSS 3 Score Details (<b>4.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2023-23931">https://www.cve.org/CVERecord?id=CVE-2023-23931</a></p>
<p>Release Date: 2023-02-07</p>
<p>Fix Resolution: cryptography - 39.0.1</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
12,935 | 15,301,656,263 | IssuesEvent | 2021-02-24 13:52:53 | NationalSecurityAgency/ghidra | https://api.github.com/repos/NationalSecurityAgency/ghidra | closed | Different Function ID hashes for "same" code | Feature: Loader/ELF Feature: Processor/PIC | I'm seeing different Function ID hashes for two copies of a function. They are the same function as disassembled, but one has references to ```ram:0032``` where the other has ```ram:0800```.
I believe that this address difference is related to the import warnings about _Unhandled Elf Relocation: type = 5..._ that occur when one of the two is imported.

~~I'm not familiar with how Ghidra creates Function ID hashes. Is the address difference the reason that the two functions are hashing differently? If so, is there some way to get around this?~~
I built the C program as an elf binary with Microchip's **xc16** compiler, linked against their "legacy" libraries.
Within it, I've located a block of code at ```rom:0152``` that corresponds to the library function ```__data_init_standard``` from their library file ```liblega-pic30-elf.a```
The attached archive contains:
- The program, slightly adapted from rosettacode's C version of a [numerical integration algorithm](https://www.rosettacode.org/wiki/Numerical_integration#C).
- The module ```data_init_standard.o``` as headlessly imported with the special pre- and post-scripts for Function IDs.
[FIDtesting_2021_02_22.zip](https://github.com/NationalSecurityAgency/ghidra/files/6024934/FIDtesting_2021_02_22.zip)
(It's a Ghidra project archive. Just change the ```.zip``` to ```.gar```)
Also attached is the elf object file ```data_init_standard.o``` extracted from the Microchip library.
[data_init_standard.o.zip](https://github.com/NationalSecurityAgency/ghidra/files/6024993/data_init_standard.o.zip)
I've used Version Comparison _(which I know is completely different and separate from Function ID)_ to create a convenient side by side view of the beginning of that function in both the program, and the library module:

The last screen capture shows the results of running the script ```FIDHashCurrentFunction.java``` on both instances of the function. As you can see, both FH and XH are different.

----
In its freely downloadable distribution of **xc16**, Microchip includes both the object library ```liblega-pic30-elf.a``` and the source files for the library. To the best of my knowledge they impose no restriction on redistributing those. | 1.0 | process | 1
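The mismatch described in the Ghidra record above comes down to address-bearing operands (```ram:0032``` vs ```ram:0800```) leaking into the function fingerprint. As a hedged illustration only — this is a toy sketch, not Ghidra's actual Function ID algorithm, and every name in it (`fid_hash`, the instruction tuples) is invented for the example — masking address operands before hashing makes two relocated copies of the same code hash identically:

```python
import hashlib

def fid_hash(instructions, mask_operands=True):
    """Hash a function body; optionally mask address-like operands.

    Each instruction is a (mnemonic, operand) pair where operand is an
    int address or None. Masking replaces addresses with a fixed
    placeholder so relocated copies of the same code produce the same
    digest.
    """
    h = hashlib.sha256()
    for mnemonic, operand in instructions:
        h.update(mnemonic.encode())
        if operand is not None:
            token = b"ADDR" if mask_operands else str(operand).encode()
            h.update(token)
    return h.hexdigest()

# The "same" function placed at two different addresses.
copy_a = [("mov", 0x0032), ("call", 0x0152), ("ret", None)]
copy_b = [("mov", 0x0800), ("call", 0x0900), ("ret", None)]

# With masking the relocated copies match; without it they differ.
assert fid_hash(copy_a) == fid_hash(copy_b)
assert fid_hash(copy_a, mask_operands=False) != fid_hash(copy_b, mask_operands=False)
```

Unresolved relocations (the `Unhandled Elf Relocation` warnings in the record) would correspond to operands that were never normalized, which is one plausible way the two copies end up with different FH/XH values.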
4,778 | 7,647,355,440 | IssuesEvent | 2018-05-09 03:26:48 | coala/cEPs | https://api.github.com/repos/coala/cEPs | opened | cEP 0: Use asciidoc for cEPs | process/pending review | markdown is a crazy language. rST is slightly better, and that aligns better with being a Python org.
However, we should be a multilingual org, so it is useful to force ourselves to switch languages, testing different bears in the process.
For one, asciidoc has better table support.
http://www.methods.co.nz/asciidoc/newtables.html
| 1.0 | process | 1
4,821 | 7,717,673,868 | IssuesEvent | 2018-05-23 14:19:56 | neuropoly/spinalcordtoolbox | https://api.github.com/repos/neuropoly/spinalcordtoolbox | closed | sct_process_segmentation -discfile -p label-vert | sct_process_segmentation | 2 detected issues with the function:
`sct_process_segmentation -discfile -p label-vert`
1. It assumes images are in RPI orientation:
```
cd /Volumes/temp/charley/i_1734
sct_process_segmentation -i t2_sag_cerv_seg_AIL.nii.gz -discfile label_discs_AIL.nii.gz -p label-vert
```
2. Use the convention "disc labelvalue=3 ==> disc C3/C4" instead of "disc labelvalue=3 ==> disc C2/C3"
```
cd /Volumes/temp/charley/i_1734
sct_process_segmentation -i t2_sag_cerv_seg_RPI.nii.gz -discfile label_discs_RPI.nii.gz -p label-vert
```
Or I am missing something? :/ | 1.0 | process | 1
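The second point in the record above is purely a labeling convention. As an illustration — the helper name and the convention names below are made up, and this is not SCT's API — the two readings of disc label value 3 can be encoded like this:

```python
def disc_name(value, convention="value_is_upper"):
    """Return the disc name for a vertebral disc label value.

    Two conventions appear in the issue (names here are invented):
    - "value_is_upper": label value 3 -> disc C3/C4
    - "value_is_lower": label value 3 -> disc C2/C3
    """
    if convention == "value_is_upper":
        return f"C{value}/C{value + 1}"
    return f"C{value - 1}/C{value}"

# The convention requested in the issue:
assert disc_name(3) == "C3/C4"
# The convention the tool reportedly used instead:
assert disc_name(3, "value_is_lower") == "C2/C3"
```

Agreeing on one of these mappings up front avoids off-by-one disc labels when comparing outputs across tool versions.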
330,987 | 28,499,935,698 | IssuesEvent | 2023-04-18 16:33:39 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | sql/stats: TestAtMostOneRunningCreateStats failed | C-test-failure O-robot branch-master T-sql-queries | sql/stats.TestAtMostOneRunningCreateStats [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9606363?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_StressBazel/9606363?buildTab=artifacts#/) on master @ [70531ee7ebaffebe75149b41eadb5b8aa6b9ea53](https://github.com/cockroachdb/cockroach/commits/70531ee7ebaffebe75149b41eadb5b8aa6b9ea53):
Fatal error:
```
panic: CREATE STATISTICS job which was expected to fail, timed out instead [recovered]
panic: CREATE STATISTICS job which was expected to fail, timed out instead [recovered]
panic: CREATE STATISTICS job which was expected to fail, timed out instead [recovered]
panic: CREATE STATISTICS job which was expected to fail, timed out instead [recovered]
panic: CREATE STATISTICS job which was expected to fail, timed out instead
```
Stack:
```
goroutine 25731756 [running]:
testing.tRunner.func1.2({0x4288da0, 0x5d02490})
GOROOT/src/testing/testing.go:1396 +0x24e
testing.tRunner.func1()
GOROOT/src/testing/testing.go:1399 +0x39f
panic({0x4288da0, 0x5d02490})
GOROOT/src/runtime/panic.go:884 +0x212
github.com/cockroachdb/cockroach/pkg/util/leaktest.AfterTest.func2()
github.com/cockroachdb/cockroach/pkg/util/leaktest/leaktest.go:118 +0x2a9
panic({0x4288da0, 0x5d02490})
GOROOT/src/runtime/panic.go:884 +0x212
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).recover(0x0?, {0x5d44418, 0xc000082058})
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:229 +0x6a
panic({0x4288da0, 0x5d02490})
GOROOT/src/runtime/panic.go:884 +0x212
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).Stop(0xc01f6b4d80, {0x5d44418, 0xc000082058})
github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:507 +0x21d
panic({0x4288da0, 0x5d02490})
GOROOT/src/runtime/panic.go:884 +0x212
github.com/cockroachdb/cockroach/pkg/sql/stats_test.TestAtMostOneRunningCreateStats.func2()
github.com/cockroachdb/cockroach/pkg/sql/stats_test/pkg/sql/stats/create_stats_job_test.go:179 +0x1e8
github.com/cockroachdb/cockroach/pkg/sql/stats_test.TestAtMostOneRunningCreateStats(0xc01637e680)
github.com/cockroachdb/cockroach/pkg/sql/stats_test/pkg/sql/stats/create_stats_job_test.go:184 +0x6fe
testing.tRunner(0xc01637e680, 0x4d996d8)
GOROOT/src/testing/testing.go:1446 +0x10b
created by testing.(*T).Run
GOROOT/src/testing/testing.go:1493 +0x35f
```
<details><summary>Log preceding fatal error</summary>
<p>
```
=== RUN TestAtMostOneRunningCreateStats
test_log_scope.go:161: test logs captured to: /artifacts/tmp/_tmp/a29a19c212b7b40401087e2c497e19d3/logTestAtMostOneRunningCreateStats2162988526
test_log_scope.go:79: use -show-logs to present logs inline
stopper.go:229: -- test log scope end --
ERROR: a panic has occurred!
Details cannot be printed yet because we are still unwinding.
stopper.go:229: panic: CREATE STATISTICS job which was expected to fail, timed out instead
--- FAIL: TestAtMostOneRunningCreateStats (16.18s)
Hopefully the test harness prints the panic below, otherwise check the test logs.
test logs left over in: /artifacts/tmp/_tmp/a29a19c212b7b40401087e2c497e19d3/logTestAtMostOneRunningCreateStats2162988526
```
</p>
</details>
<p>Parameters: <code>TAGS=bazel,gss,deadlock</code>
</p>
<details><summary>Help</summary>
<p>
See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #101482 sql/stats: TestAtMostOneRunningCreateStats failed [C-test-failure O-robot T-sql-queries branch-release-23.1.0]
</p>
</details>
/cc @cockroachdb/sql-queries
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestAtMostOneRunningCreateStats.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-27013 | 1.0 | non_process | 0
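The panic in the record above comes from a test helper that gives up with a panic when a job expected to fail has not done so within a deadline. A generic, hedged sketch of that wait-with-timeout pattern — not CockroachDB's actual test code — looks like:

```python
import time

def wait_for(predicate, timeout=5.0, interval=0.05):
    """Poll `predicate` until it returns True or `timeout` elapses.

    Returns True on success and False on timeout. The test in the
    record panics on timeout instead of returning, which is why the
    whole test harness aborts.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

state = {"done": False}

def flip():
    state["done"] = True
    return state["done"]

# A condition that never becomes true times out...
assert wait_for(lambda: state["done"], timeout=0.2) is False
# ...while one that flips is detected immediately.
assert wait_for(flip, timeout=0.2) is True
```

Returning a status (and failing the test with a message) instead of panicking usually yields more debuggable output than the stack trace shown above.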
417,348 | 12,158,680,164 | IssuesEvent | 2020-04-26 05:25:35 | tealeg/xlsx | https://api.github.com/repos/tealeg/xlsx | closed | Blank cells considered nil in ForEachFunc | in progress priority:critial triaged | Dear developers.
When I use the ForEachFunc iterator, it ignores empty cells because they are considered nil. However, it is important for me to treat them as empty strings. Is there a workaround? I have been stuck on this for quite a long time.
I have managed not to use the ForEachFunc and have written something like:
```
res := make([]string, 0, r.data.MaxCol)
for i := 0; i < r.data.MaxCol; i++ {
	c := row.GetCell(i)
	val, err := c.FormattedValue()
	if err != nil {
		res = append(res, err.Error())
	} else {
		res = append(res, val)
	}
}
```
Unfortunately, this function appends empty strings automatically at the end of the row, because I use MaxCol in the for clause. I couldn't find a way to get just the number of columns in the row :( the value is unexported :(
I would be very grateful if you could help me. | 1.0 | non_process | 0
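One workaround for the trailing padding described in the record above — assuming you keep iterating up to `MaxCol` — is to trim empty strings off the end of the collected row afterwards, which preserves interior blanks. A sketch in Python (the logic ports directly to the Go snippet above; `trim_trailing_empty` is an invented name, not part of the xlsx library):

```python
def trim_trailing_empty(cells):
    """Drop empty strings from the end of a row, keeping interior ones."""
    end = len(cells)
    while end > 0 and cells[end - 1] == "":
        end -= 1
    return cells[:end]

# Interior empties survive; only the trailing padding is removed.
assert trim_trailing_empty(["a", "", "b", "", ""]) == ["a", "", "b"]
assert trim_trailing_empty(["", ""]) == []
assert trim_trailing_empty(["a"]) == ["a"]
```

This sidesteps the unexported column count entirely: iterate over the full `MaxCol` range, then cut the tail.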
75,267 | 9,833,854,382 | IssuesEvent | 2019-06-17 08:16:27 | kyma-project/cli | https://api.github.com/repos/kyma-project/cli | closed | Installation instructions on MacOS doesn't work with zsh | area/cli area/documentation bug | <!-- Thank you for your contribution. Before you submit the issue:
1. Search open and closed issues for duplicates.
2. Read the contributing guidelines.
-->
**Description**
**In iTerm with ZSH** running this command to install Kyma CLI on MacOS:
```
curl -Lo kyma.tar.gz https://github.com/kyma-project/cli/releases/download/$(curl -s https://api.github.com/repos/kyma-project/cli/releases/latest | grep tag_name | cut -d '"' -f 4)/kyma_Darwin_x86_64.tar.gz \
&& mkdir kyma-release && tar -C kyma-release -zxvf kyma.tar.gz && chmod +x kyma-release/kyma && sudo mv kyma-release/kyma /usr/local/bin \
&& rm -rf kyma-release kyma.tar.gz
```
Fails with this error:
```
zsh: parse error near `)'
```
When I run the same command in MacOS Terminal with bash, everything works as expected.
**Expected result**
Can install Kyma CLI using zsh.
**Actual result**
Can't install with zsh, can install with bash.
**Steps to reproduce**
Install iTerm, switch shell to zsh and run the command listed in the problem description.
**Troubleshooting**
I tried Terminal with bash and it worked.
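For reference, the shell pipeline in the record above only resolves the latest release tag from the GitHub API and builds an asset URL from it. A hedged sketch of the same resolution in Python — it does not claim to explain the zsh parse error, and the sample JSON below is fabricated — avoids the nested shell quoting entirely:

```python
import json

def latest_download_url(release_json, asset="kyma_Darwin_x86_64.tar.gz"):
    """Build the download URL that the curl pipeline assembles.

    Takes the GitHub "latest release" JSON as text and returns the
    asset URL. Parsing tag_name from JSON replaces the grep/cut step.
    """
    tag = json.loads(release_json)["tag_name"]
    return (
        "https://github.com/kyma-project/cli/releases/download/"
        f"{tag}/{asset}"
    )

# Fabricated sample payload; a real one comes from the releases API.
sample = '{"tag_name": "1.2.0"}'
assert latest_download_url(sample) == (
    "https://github.com/kyma-project/cli/releases/download/"
    "1.2.0/kyma_Darwin_x86_64.tar.gz"
)
```

Doing the tag lookup in a separate step (or a small script) also makes the failure mode easier to diagnose than one long `$(...)`-nested command line.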
| 1.0 | non_process | 0
3,222 | 6,279,534,887 | IssuesEvent | 2017-07-18 16:25:53 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | SingleProcess_EnableRaisingEvents_CorrectExitCode test hangs on uapaot test run | area-System.Diagnostics.Process os-windows-uwp test-run-uwp-ilc | The test was disabled but it was hanging on uapaot runs. I didn't do any further investigation.
cc: @Priya91 | 1.0 | SingleProcess_EnableRaisingEvents_CorrectExitCode test hangs on uapaot test run - The test was disabled but it was hanging on uapaot runs. I didn't do any further investigation.
cc: @Priya91 | process | singleprocess enableraisingevents correctexitcode test hangs on uapaot test run the test was disabled but it was hanging on uapaot runs i didn t do any further investigation cc | 1 |
16,992 | 22,356,482,338 | IssuesEvent | 2022-06-15 16:04:29 | tartley/colorama | https://api.github.com/repos/tartley/colorama | closed | 'make release' should tag the commit | process | As is described in README-hacking
* Specifically, tag with the current value of `__version__`. Fail if that tag already exists.
Possibly we should be creating annotated tags (see below):
git tag -a -m "" $version
* Push the tag to origin. To do this for a commit which is already pushed, we might need::
git push --follow-tags
`--tags` pushes all tags on every branch, which is probably not desired. e.g. some will be private development status, some might be on distant or unreachable branches, etc.
`--follow-tags` was designed to fix this, it only sends tags on ancestors of the current commit, and also only sends annotated tags.
| 1.0 | 'make release' should tag the commit - As is described in README-hacking
* Specifically, tag with the current value of `__version__`. Fail if that tag already exists.
Possibly we should be creating annotated tags (see below):
git tag -a -m "" $version
* Push the tag to origin. To do this for a commit which is already pushed, we might need::
git push --follow-tags
`--tags` pushes all tags on every branch, which is probably not desired. e.g. some will be private development status, some might be on distant or unreachable branches, etc.
`--follow-tags` was designed to fix this, it only sends tags on ancestors of the current commit, and also only sends annotated tags.
| process | make release should tag the commit as is described in readme hacking specifically tag with the current value of version fail if that tag already exists possibly we should be creating annotated tags see below git tag a m version push the tag to origin to do this for a commit which is already pushed we might need git push follow tags tags pushes all tags on every branch which is probably not desired e g some will be private development status some might be on distant or unreachable branches etc follow tags was designed to fix this it only sends tags on ancestors of the current commit and also only sends annotated tags | 1 |
21,271 | 28,442,014,618 | IssuesEvent | 2023-04-16 02:09:56 | cse442-at-ub/project_s23-the-fellas | https://api.github.com/repos/cse442-at-ub/project_s23-the-fellas | closed | Implement code to add all events associated with a given username from the database to the calendar page | Processing Task Sprint 3 | **Task Tests**
*Test 1*
1) Go to the [login page of the web app](https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442c/login.php) and log in with username devincle and password test.
2) Click the Add Event button in the top left corner.
3) Enter event title: test and current datetime for event date and time
4) Click submit
5) Verify that there is an event with the name test under the current date on the calendar
6) Click the Add Event button
7) Enter event title: test2 and current datetime for event date and time
8) Click submit
9) Verify that there are two events under the current date on the calendar - one labeled test and the other labeled test2
*Test 2*
1) Go to the [login page of the web app](https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442c/login.php) and log in with username devincle and password test.
2) Click the Add Event button in the top left corner.
3) Enter event title: test and current datetime for event date and time
4) Click submit
5) Verify that there is an event with the name test under the current date on the calendar
6) Click the Log Out button in the top right corner.
7) Log in with username drboyle2 and password test.
8) Click the Add Event button in the top left corner.
9) Enter event title: exam and current datetime for event date and time
10) Click submit
11) Verify that there is an event with the name exam under the current date on the calendar.
12) Verify that there are no other events on the calendar besides the exam event. | 1.0 | Implement code to add all events associated with a given username from the database to the calendar page - **Task Tests**
*Test 1*
1) Go to the [login page of the web app](https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442c/login.php) and log in with username devincle and password test.
2) Click the Add Event button in the top left corner.
3) Enter event title: test and current datetime for event date and time
4) Click submit
5) Verify that there is an event with the name test under the current date on the calendar
6) Click the Add Event button
7) Enter event title: test2 and current datetime for event date and time
8) Click submit
9) Verify that there are two events under the current date on the calendar - one labeled test and the other labeled test2
*Test 2*
1) Go to the[login page of the web app](https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442c/login.php) and log in with username devincle and password test.
2) Click the Add Event button in the top left corner.
3) Enter event title: test and current datetime for event date and time
4) Click submit
5) Verify that there is an event with the name test under the current date on the calendar
6) Click the Log Out button in the top right corner.
7) Log in with username drboyle2 and password test.
8) Click the Add Event button in the top left corner.
9) Enter event title: exam and current datetime for event date and time
10) Click submit
11) Verify that there is an event with the name exam under the current date on the calendar.
12) Verify that there are no other events on the calendar besides the exam event. | process | implement code to add all events associated with a given username from the database to the calendar page task tests test go to the and log in with username devincle and password test click the add event button in the top left corner enter event title test and current datetime for event date and time click submit verify that there is an event with the name test under the current date on the calendar click the add event button enter event title and current datetime for event date and time click submit verify that there are two events under the current date on the calendar one labeled test and the other labeled test go to the and log in with username devincle and password test click the add event button in the top left corner enter event title test and current datetime for event date and time click submit verify that there is an event with the name test under the current date on the calendar click the log out button in the top right corner log in with username and password test click the add event button in the top left corner enter event title exam and current datetime for event date and time click submit verify that there is an event with the name exam under the current date on the calendar verify that there are no other events on the calendar besides the exam event | 1 |
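The behaviour these steps verify — fetch every event stored for the logged-in username and bucket it under the right calendar date — can be sketched as below. The data shape is a hypothetical stand-in; the app's real schema and endpoints are not shown in the issue.

```javascript
// Sketch: keep only the given user's events and group them by day,
// which is what the calendar page needs to render each date cell.
function eventsByDateForUser(events, username) {
  const byDate = {};
  for (const event of events) {
    if (event.username !== username) continue; // other users' events stay hidden
    const day = event.datetime.slice(0, 10);   // 'YYYY-MM-DD' bucket
    (byDate[day] = byDate[day] || []).push(event.title);
  }
  return byDate;
}

// Mirrors the two tests: same-day events for one user, isolation between users.
const sample = [
  { username: 'devincle', title: 'test',  datetime: '2023-03-01T10:00:00' },
  { username: 'devincle', title: 'test2', datetime: '2023-03-01T11:00:00' },
  { username: 'drboyle2', title: 'exam',  datetime: '2023-03-01T09:00:00' },
];

console.log(eventsByDateForUser(sample, 'devincle'));
// { '2023-03-01': [ 'test', 'test2' ] }
```

Test 1 corresponds to the two `devincle` entries landing under one date; Test 2 corresponds to `drboyle2` seeing only `exam`.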
677,304 | 23,157,958,231 | IssuesEvent | 2022-07-29 14:43:00 | googleapis/google-auth-library-nodejs | https://api.github.com/repos/googleapis/google-auth-library-nodejs | opened | Refactoring: Quit mixing Promises and async/await | type: feature request priority: p3 | ### Problem
- There is a comment by Benjamin that says `TODO`. This comment has been here for two years.
- https://github.com/googleapis/google-auth-library-nodejs/blob/main/src/auth/googleauth.ts#L206-L232
### Suggestion
- It would be better to use `async/await` instead of `resolve/reject` because today is 2022.
- Only use `async/await` and reduce the length of the code. It will make this repo easier to read.
- I am going to create a PR when this change is accepted, but beforehand I put my plan below. Please use this code for your decision.
#### Before
```ts
// TODO: refactor the below code so that it doesn't mix and match
// promises and async/await.
this._getDefaultProjectIdPromise = new Promise(
// eslint-disable-next-line no-async-promise-executor
async (resolve, reject) => {
try {
const projectId =
this.getProductionProjectId() ||
(await this.getFileProjectId()) ||
(await this.getDefaultServiceProjectId()) ||
(await this.getGCEProjectId()) ||
(await this.getExternalAccountClientProjectId());
this._cachedProjectId = projectId;
if (!projectId) {
throw new Error(
'Unable to detect a Project Id in the current environment. \n' +
'To learn more about authentication and Google APIs, visit: \n' +
'https://cloud.google.com/docs/authentication/getting-started'
);
}
resolve(projectId);
} catch (e) {
reject(e);
}
}
);
}
```
#### After
```ts
this._getDefaultProjectIdPromise = (async () => {
const projectId =
this.getProductionProjectId() ||
(await this.getFileProjectId()) ||
(await this.getDefaultServiceProjectId()) ||
(await this.getGCEProjectId()) ||
(await this.getExternalAccountClientProjectId());
this._cachedProjectId = projectId;
if (!projectId) {
throw new Error(
'Unable to detect a Project Id in the current environment. \n' +
'To learn more about authentication and Google APIs, visit: \n' +
'https://cloud.google.com/docs/authentication/getting-started'
);
}
return projectId;
})();
}
```
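As a runnable illustration of the pattern proposed above (a simplified stand-in, not the actual `GoogleAuth` class): the async IIFE both caches the in-flight promise and lets a `throw` reject it naturally, with no manual `resolve`/`reject` plumbing.

```javascript
// Simplified sketch of the "After" pattern: try each async lookup in
// order, cache the shared promise, and let exceptions propagate as
// rejections without an explicit new Promise(...).
function makeProjectIdResolver(lookups) {
  let promise;
  return function getProjectId() {
    promise =
      promise ||
      (async () => {
        let projectId;
        for (const lookup of lookups) {
          projectId = await lookup();
          if (projectId) break; // first lookup that yields a value wins
        }
        if (!projectId) {
          throw new Error('Unable to detect a Project Id'); // rejects the promise
        }
        return projectId;
      })();
    return promise;
  };
}
```

Calling the returned function twice returns the same promise object, which mirrors `_getDefaultProjectIdPromise` being computed only once.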
| 1.0 | Refactoring: Quit mixing Promises and async/await - ### Problem
- There is a comment by Benjamin that says `TODO`. This comment has been here for two years.
- https://github.com/googleapis/google-auth-library-nodejs/blob/main/src/auth/googleauth.ts#L206-L232
### Suggestion
- It would be better to use `async/await` instead of `resolve/reject` because today is 2022.
- Only use `async/await` and reduce the length of the code. It will make this repo easier to read.
- I am going to create a PR when this change is accepted, but beforehand I put my plan below. Please use this code for your decision.
#### Before
```ts
// TODO: refactor the below code so that it doesn't mix and match
// promises and async/await.
this._getDefaultProjectIdPromise = new Promise(
// eslint-disable-next-line no-async-promise-executor
async (resolve, reject) => {
try {
const projectId =
this.getProductionProjectId() ||
(await this.getFileProjectId()) ||
(await this.getDefaultServiceProjectId()) ||
(await this.getGCEProjectId()) ||
(await this.getExternalAccountClientProjectId());
this._cachedProjectId = projectId;
if (!projectId) {
throw new Error(
'Unable to detect a Project Id in the current environment. \n' +
'To learn more about authentication and Google APIs, visit: \n' +
'https://cloud.google.com/docs/authentication/getting-started'
);
}
resolve(projectId);
} catch (e) {
reject(e);
}
}
);
}
```
#### After
```ts
this._getDefaultProjectIdPromise = (async () => {
const projectId =
this.getProductionProjectId() ||
(await this.getFileProjectId()) ||
(await this.getDefaultServiceProjectId()) ||
(await this.getGCEProjectId()) ||
(await this.getExternalAccountClientProjectId());
this._cachedProjectId = projectId;
if (!projectId) {
throw new Error(
'Unable to detect a Project Id in the current environment. \n' +
'To learn more about authentication and Google APIs, visit: \n' +
'https://cloud.google.com/docs/authentication/getting-started'
);
}
return projectId;
})();
}
```
| non_process | refactoring quit mixing promises and async await problem there is a comment by benjamin that says todo this comment has been here for two years suggestion it would be better to use async await instead of resolve reject because today is only use async await and reduce the length of code it will make this repo easier to see through i am going to create pr when this change is accepted but beforehand i put my plan below please use these code for your decision before ts todo refactor the below code so that it doesn t mix and match promises and async await this getdefaultprojectidpromise new promise eslint disable next line no async promise executor async resolve reject try const projectid this getproductionprojectid await this getfileprojectid await this getdefaultserviceprojectid await this getgceprojectid await this getexternalaccountclientprojectid this cachedprojectid projectid if projectid throw new error unable to detect a project id in the current environment n to learn more about authentication and google apis visit n resolve projectid catch e reject e after ts this getdefaultprojectidpromise async const projectid this getproductionprojectid await this getfileprojectid await this getdefaultserviceprojectid await this getgceprojectid await this getexternalaccountclientprojectid this cachedprojectid projectid if projectid throw new error unable to detect a project id in the current environment n to learn more about authentication and google apis visit n return projectid | 0 |
8,804 | 11,908,275,316 | IssuesEvent | 2020-03-31 00:28:50 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Ctrl + A should select all components in a processing model | Feature Request Processing | Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [19055](https://issues.qgis.org/issues/19055)
Redmine category:processing/modeller
---
In a processing model, I can select multiple components using Ctrl. I think that this should be extended so that Ctrl + A should select all components.
| 1.0 | Ctrl + A should select all components in a processing model - Author Name: **Magnus Nilsson** (Magnus Nilsson)
Original Redmine Issue: [19055](https://issues.qgis.org/issues/19055)
Redmine category:processing/modeller
---
In a processing model, I can select multiple components using Ctrl. I think that this should be extended so that Ctrl + A should select all components.
| process | ctrl a should select all components in a processing model author name magnus nilsson magnus nilsson original redmine issue redmine category processing modeller in a processing model i can select multiple components using ctrl i think that this should be extended so that ctrl a should select all components | 1 |
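In component terms the requested behaviour is small — extend the existing Ctrl multi-select so one shortcut marks every component as selected. A language-neutral sketch (JavaScript here, not QGIS's actual C++/Qt code):

```javascript
// Sketch of a Ctrl+A handler: mark every component in the model as
// selected, reusing whatever per-item selection flag Ctrl-click sets.
function selectAllComponents(components) {
  for (const component of components) {
    component.selected = true;
  }
  return components.filter((c) => c.selected).length; // number selected
}

const model = [{ selected: false }, { selected: true }, { selected: false }];
selectAllComponents(model); // → 3
```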
121,997 | 10,209,627,405 | IssuesEvent | 2019-08-14 13:09:36 | tidymodels/broom | https://api.github.com/repos/tidymodels/broom | closed | tidy.orcutt() errors on confidence intervals and exponentiation | beginner-friendly bug tests | Bug originally reported in #712 and migrated to this issue
```r
library(orcutt)
set.seed(123)
reg <- stats::lm(formula = mpg ~ wt + qsec + disp, data = mtcars)
co <- orcutt::cochrane.orcutt(reg)
# plot
broom::tidy(
x = co,
title = "Cochrane-Orcutt estimation",
conf.int = TRUE,
exponentiate = FALSE
)
#> Warning in tidy.orcutt(x = co, title = "Cochrane-Orcutt estimation",
#> conf.int = TRUE, : deal with tidy.orcutt conf.int nonsense
#> Warning in if (exponentiate) {: the condition has length > 1 and only the
#> first element will be used
#> Error in if (exponentiate) {: argument is not interpretable as logical
```
A PR to solve this issue should feature regression tests. Honestly I'm not sure what's going on with the `orcutt` tidiers, but it's likely that it isn't well designed and many arguments can just get dropped IIRC. | 1.0 | tidy.orcutt() errors on confidence intervals and exponentiation - Bug originally reported in #712 and migrated to this issue
```r
library(orcutt)
set.seed(123)
reg <- stats::lm(formula = mpg ~ wt + qsec + disp, data = mtcars)
co <- orcutt::cochrane.orcutt(reg)
# plot
broom::tidy(
x = co,
title = "Cochrane-Orcutt estimation",
conf.int = TRUE,
exponentiate = FALSE
)
#> Warning in tidy.orcutt(x = co, title = "Cochrane-Orcutt estimation",
#> conf.int = TRUE, : deal with tidy.orcutt conf.int nonsense
#> Warning in if (exponentiate) {: the condition has length > 1 and only the
#> first element will be used
#> Error in if (exponentiate) {: argument is not interpretable as logical
```
A PR to solve this issue should feature regression tests. Honestly I'm not sure what's going on with the `orcutt` tidiers, but it's likely that it isn't well designed and many arguments can just get dropped IIRC. | non_process | tidy orcutt errors on confidence intervals and exponentiation bug originally reported in and migrated to this issue r library orcutt set seed reg stats lm formula mpg wt qsec disp data mtcars co orcutt cochrane orcutt reg plot broom tidy x co title cochrane orcutt estimation conf int true exponentiate false warning in tidy orcutt x co title cochrane orcutt estimation conf int true deal with tidy orcutt conf int nonsense warning in if exponentiate the condition has length and only the first element will be used error in if exponentiate argument is not interpretable as logical a pr to solve this issue should feature regression tests honestly i m not sure what s going on with the orcutt tidiers but it s likely that it isn t well designed and many arguments can just get dropped iirc | 0 |
5,084 | 2,773,339,068 | IssuesEvent | 2015-05-03 14:58:13 | Polytechnique-org/platal2 | https://api.github.com/repos/Polytechnique-org/platal2 | opened | API rules | design | Describe the API (including Sphinx doc):
- How do we return errors, including human-readable messages?
- How do we return a paginated list of items?
- How do we return a single item?
- What should be the response to an object creation/modification request? | 1.0 | API rules - Describe the API (including Sphinx doc):
- How do we return errors, including human-readable messages?
- How do we return a paginated list of items?
- How do we return a single item?
- What should be the response to an object creation/modification request? | non_process | api rules describe the api including sphinx doc how do we return errors including human readable messages how do we return a paginated list of items how do we return a single item what should be the response to an object creation modification request | 0 |
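Questions like these are usually settled with a small response envelope. A sketch of one common convention (an illustration, not the design platal2 actually chose):

```javascript
// Sketch: one error shape and one pagination shape for a JSON API.
function errorResponse(code, message) {
  // `message` is the human-readable part; `code` is machine-readable.
  return { error: { code, message } };
}

function paginatedResponse(items, page, perPage, total) {
  return {
    data: items,
    meta: { page, per_page: perPage, total, pages: Math.ceil(total / perPage) },
  };
}

paginatedResponse(['a', 'b'], 1, 2, 5).meta.pages; // → 3
```

A single item can then be the degenerate case `{ data: item }`, and a creation/modification request can answer with the created/updated item in the same envelope plus an appropriate status code.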
9,512 | 12,497,588,276 | IssuesEvent | 2020-06-01 16:45:17 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Parameters dynamic expression based default value | Pri1 devops-cicd-process/tech devops/prod support-request | Hello,
is it possible to define a dynamic default value for a runtime parameter?
For example, I'd like to run a pipeline where I have a version string parameter. I'd like to have this version parameter prefilled with a dynamic expression based value of the latest tag in the repository.
My example parameters definition:
parameters:
- name: version
displayName: Version
type: string
default: '$[some expression to get the latest tag in the repository or an output variable from the previous pipeline]'
- name: environment
displayName: Environment
type: string
default: 'devel'
values:
- devel
- stage
- production
...
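Runtime parameter defaults are generally resolved when the pipeline YAML is compiled, so a value such as "the latest tag" usually has to be computed outside the YAML and passed in when the run is queued (for example via the REST API or CLI). A hypothetical sketch of that pre-computation step — the tag list and the queueing mechanism are assumptions, not part of the issue:

```javascript
// Sketch: pick the highest semver-style tag, to be passed as the
// `version` parameter when queuing the pipeline run.
function latestTag(tags) {
  const key = (t) => t.replace(/^v/, '').split('.').map(Number);
  return tags
    .slice() // don't mutate the caller's array
    .sort((a, b) => {
      const [a1, a2, a3] = key(a);
      const [b1, b2, b3] = key(b);
      return a1 - b1 || a2 - b2 || a3 - b3; // numeric, not lexicographic
    })
    .pop();
}

latestTag(['v1.2.0', 'v1.10.1', 'v1.9.3']); // → 'v1.10.1'
```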
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script#feedback)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | Parameters dynamic expression based default value - Hello,
is it possible to define a dynamic default value for a runtime parameter?
For example, I'd like to run a pipeline where I have a version string parameter. I'd like to have this version parameter prefilled with a dynamic expression based value of the latest tag in the repository.
My example parameters definition:
parameters:
- name: version
displayName: Version
type: string
default: '$[some expression to get the latest tag in the repository or an output variable from the previous pipeline]'
- name: environment
displayName: Environment
type: string
default: 'devel'
values:
- devel
- stage
- production
...
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script#feedback)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | process | parameters dynamic expression based default value hello is it possible to define dynamic default value of the runtime parameter for example i d like to run pipeline where i have version string parameter i d like to have this version parameter prefilled with a dynamic expression based value of the latest tag in the repository my example parameters definition parameters name version displayname version type string default name environment displayname environment type string default devel values devel stage production document details โ do not edit this section it is required for docs microsoft com โ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 1 |
139,215 | 20,810,045,666 | IssuesEvent | 2022-03-18 00:49:43 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | [Material] ThemeData should use InkSparkle for splashFactory when useMaterial3 is true | framework f: material design | InkSparkle should only be used as the splashFactory in ThemeData when useMaterial3 is true, the platform is Android, and the compilation mode is not web, i.e. kIsWeb is false. | 1.0 | [Material] ThemeData should use InkSparkle for splashFactory when useMaterial3 is true - InkSparkle should only be used as the splashFactory in ThemeData when useMaterial3 is true, the platform is Android, and the compilation mode is not web, i.e. kIsWeb is false. | non_process | themedata should use inksparkle for splashfactory when is true inksparkle should only be used as the splashfactory in themedata when is true the platform is android and the compilation mode is not web i e kisweb is false | 0 |
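The gating condition described in this issue is a three-way AND. A sketch of the predicate (a JavaScript stand-in for the Dart logic, with `kIsWeb` modeled as a plain flag):

```javascript
// Sketch: InkSparkle only when Material 3 is on, the platform is
// Android, and the build is not targeting the web.
function shouldUseInkSparkle({ useMaterial3, platform, kIsWeb }) {
  return useMaterial3 && platform === 'android' && !kIsWeb;
}

shouldUseInkSparkle({ useMaterial3: true, platform: 'android', kIsWeb: false }); // → true
shouldUseInkSparkle({ useMaterial3: true, platform: 'android', kIsWeb: true });  // → false
```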
11,239 | 14,015,229,085 | IssuesEvent | 2020-10-29 13:03:27 | tdwg/dwc | https://api.github.com/repos/tdwg/dwc | closed | Change term - materialSampleID | Process - implement Term - change | ## Change term
* Submitter: John Wieczorek
* Justification (why is this change necessary?): consistency
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
The following comment is given for other identifier terms:
* Usage comments (recommendations regarding content, etc.): "Recommended best practice is to use a persistent, globally unique identifier."
| 1.0 | Change term - materialSampleID - ## Change term
* Submitter: John Wieczorek
* Justification (why is this change necessary?): consistency
* Proponents (who needs this change): Everyone
Proposed new attributes of the term:
The following comment is given for other identifier terms:
* Usage comments (recommendations regarding content, etc.): "Recommended best practice is to use a persistent, globally unique identifier."
| process | change term materialsampleid change term submitter john wieczorek justification why is this change necessary consistency proponents who needs this change everyone proposed new attributes of the term the following comment is given for other identifier terms usage comments recommendations regarding content etc recommended best practice is to use a persistent globally unique identifier | 1 |
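The recommended practice — a persistent, globally unique identifier — is often satisfied with a UUID. A sketch of a simple format check (one possible convention; the recommendation itself does not mandate UUIDs):

```javascript
// Sketch: check whether a materialSampleID value looks like a UUID.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function looksLikeUuid(id) {
  return UUID_RE.test(id);
}

looksLikeUuid('9c5f4b0e-8f7d-4b2a-9c3e-1a2b3c4d5e6f'); // → true
looksLikeUuid('sample-42');                            // → false
```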
126,375 | 10,420,140,488 | IssuesEvent | 2019-09-15 21:52:11 | uqbar-project/wollok | https://api.github.com/repos/uqbar-project/wollok | opened | Know how long each test took to run | component: interpreter component: tests | Now that we have the measurement of the total milliseconds it took to run all the tests, it would be good to add to each test's information how long it took to run. There is a project that, for example, measures the performance of sets and lists:
https://github.com/wollok/test-performance-set--1370
and it would be good to identify which tests are taking longer.
What occurs to me is that instead of the legend "Ok" we put 'Runned ok in xxxxx milliseconds'
| 1.0 | Know how long each test took to run - Now that we have the measurement of the total milliseconds it took to run all the tests, it would be good to add to each test's information how long it took to run. There is a project that, for example, measures the performance of sets and lists:
https://github.com/wollok/test-performance-set--1370
and it would be good to identify which tests are taking longer.
What occurs to me is that instead of the legend "Ok" we put 'Runned ok in xxxxx milliseconds'
| non_process | know how long each test took to run now that we have the measurement of the total milliseconds it took to run all the tests it would be good to add to each test s information how long it took to run there is a project that for example measures the performance of sets and lists and it would be good to identify which tests are taking longer what occurs to me is that instead of the legend ok we put runned ok in xxxxx milliseconds | 0
2,126 | 4,968,973,431 | IssuesEvent | 2016-12-05 11:42:52 | jlm2017/jlm-video-subtitles | https://api.github.com/repos/jlm2017/jlm-video-subtitles | closed | [subtitles] [FR] NOËL MAMÈRE - JEAN-LUC MÉLENCHON : REGARDS CROISÉS | Language: French Process: [6] Approved | # Video title
NOËL MAMÈRE - JEAN-LUC MÉLENCHON : REGARDS CROISÉS
# URL
https://www.youtube.com/watch?v=X3bHtC2CKs8&t=1475s
# Youtube subtitles language
Français
# Duration
59:02
# URL subtitles
https://www.youtube.com/timedtext_editor?action_mde_edit_form=1&v=X3bHtC2CKs8&ui=hd&ref=watch&lang=fr&tab=captions&bl=vmp&forceedit=timedtext | 1.0 | [subtitles] [FR] NOËL MAMÈRE - JEAN-LUC MÉLENCHON : REGARDS CROISÉS - # Video title
NOËL MAMÈRE - JEAN-LUC MÉLENCHON : REGARDS CROISÉS
# URL
https://www.youtube.com/watch?v=X3bHtC2CKs8&t=1475s
# Youtube subtitles language
Français
# Duration
59:02
# URL subtitles
https://www.youtube.com/timedtext_editor?action_mde_edit_form=1&v=X3bHtC2CKs8&ui=hd&ref=watch&lang=fr&tab=captions&bl=vmp&forceedit=timedtext | process | noël mamère jean luc mélenchon regards croisés video title noël mamère jean luc mélenchon regards croisés url youtube subtitles language français duration url subtitles | 1
313,806 | 26,955,507,207 | IssuesEvent | 2023-02-08 14:39:46 | WISE-Developers/Project_issues | https://api.github.com/repos/WISE-Developers/Project_issues | closed | [WISE Bug]: JSAPI Docs still refer to PSaaS not WISE | bug W.I.S.E. Needs Testing | ### Contact Details
_No response_
### What happened?
The updated documentation is still referring to wise.

### Version
(Ubuntu 2020) v1.0.0-beta
### What component are you seeing the problem on?
JS Api
### Relevant log output
_No response_
### Approvals Process
- [ ] Testing For Issue
- [ ] Executive Approval
- [ ] Merge | 1.0 | [WISE Bug]: JSAPI Docs still refer to PSaaS not WISE - ### Contact Details
_No response_
### What happened?
The updated documentation is still referring to wise.

### Version
(Ubuntu 2020) v1.0.0-beta
### What component are you seeing the problem on?
JS Api
### Relevant log output
_No response_
### Approvals Process
- [ ] Testing For Issue
- [ ] Executive Approval
- [ ] Merge | non_process | jsapi docs still refer to psaas not wise contact details no response what happened the updated documentation still referring to wise version ubuntu beta what component are you seeing the problem on js api relevant log output no response approvals process testing for issue executive approval merge | 0 |
13,963 | 16,739,987,635 | IssuesEvent | 2021-06-11 08:37:01 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Error: [/root/.cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.7/src/imp/security_framework.rs:87:36] called `Option::unwrap()` on a `None` value | bug/1-repro-available kind/bug process/candidate team/migrations | <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.24.1`
Binary Version: `18095475d5ee64536e2f93995e48ad800737a9e4`
Report: https://prisma-errors.netlify.app/report/13343
OS: `x64 darwin 20.5.0`
JS Stacktrace:
```
Error: [/root/.cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.7/src/imp/security_framework.rs:87:36] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (/<...>.npm/_npx/20450/lib/node_modules/prisma/build/index.js:40346:28)
at ChildProcess.emit (events.js:311:20)
at ChildProcess.EventEmitter.emit (domain.js:482:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::panicking::panic
10: native_tls::imp::Identity::from_pkcs12
11: native_tls::Identity::from_pkcs12
12: quaint::single::Quaint::new::{{closure}}
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
16: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
17: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
18: introspection_engine::main::{{closure}}
19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
20: introspection_engine::main
21: std::sys_common::backtrace::__rust_begin_short_backtrace
22: std::rt::lang_start::{{closure}}
23: std::rt::lang_start_internal
24: std::rt::lang_start
```
| 1.0 | Error: [/root/.cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.7/src/imp/security_framework.rs:87:36] called `Option::unwrap()` on a `None` value - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma introspect`
Version: `2.24.1`
Binary Version: `18095475d5ee64536e2f93995e48ad800737a9e4`
Report: https://prisma-errors.netlify.app/report/13343
OS: `x64 darwin 20.5.0`
JS Stacktrace:
```
Error: [/root/.cargo/registry/src/github.com-1ecc6299db9ec823/native-tls-0.2.7/src/imp/security_framework.rs:87:36] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (/<...>.npm/_npx/20450/lib/node_modules/prisma/build/index.js:40346:28)
at ChildProcess.emit (events.js:311:20)
at ChildProcess.EventEmitter.emit (domain.js:482:12)
at Process.ChildProcess._handle.onexit (internal/child_process.js:275:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::panicking::panic
10: native_tls::imp::Identity::from_pkcs12
11: native_tls::Identity::from_pkcs12
12: quaint::single::Quaint::new::{{closure}}
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
15: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
16: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
17: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
18: introspection_engine::main::{{closure}}
19: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
20: introspection_engine::main
21: std::sys_common::backtrace::__rust_begin_short_backtrace
22: std::rt::lang_start::{{closure}}
23: std::rt::lang_start_internal
24: std::rt::lang_start
```
| process | error called option unwrap on a none value command prisma introspect version binary version report os darwin js stacktrace error called option unwrap on a none value at childprocess npm npx lib node modules prisma build index js at childprocess emit events js at childprocess eventemitter emit domain js at process childprocess handle onexit internal child process js rust stacktrace backtrace backtrace trace backtrace capture backtrace new user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook std panicking begin panic handler closure std sys common backtrace rust end short backtrace rust begin unwind core panicking panic fmt core panicking panic native tls imp identity from native tls identity from quaint single quaint new closure as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll introspection engine main closure as core future future future poll introspection engine main std sys common backtrace rust begin short backtrace std rt lang start closure std rt lang start internal std rt lang start | 1 |
292,403 | 25,208,828,389 | IssuesEvent | 2022-11-14 00:23:22 | nrwl/nx | https://api.github.com/repos/nrwl/nx | closed | Custom workspace eslint rules somehow not found on precommit from WebStorm | type: bug blocked: repro needed blocked: retry with latest scope: linter stale | <!-- Please do your best to fill out all of the sections below! -->
## Current Behavior
<!-- What is the behavior that currently you experience? -->
Custom eslint workspace rules are not found in WebStorm. The problem only occurs when the pre-commit hook launches lint-staged with `nx affected:lint` using custom workspace eslint rules, and only in WebStorm.


## Expected Behavior
Should work properly and find the custom workspace eslint rules
<!-- What is the behavior that you expect to happen? -->
<!-- Is this a regression? .i.e Did this used to be the behavior at one point? -->
## Steps to Reproduce
<!-- Help us help you by making it easy for us to reproduce your issue! -->
I have an example project with another rule where I use custom eslint rules: https://github.com/pumano/nx-eslint-workspace-rule-destroy-service
<!-- Can you reproduce this on https://github.com/nrwl/nx-examples? -->
<!-- If so, open a PR with your changes and link it below. -->
<!-- If not, please provide a minimal Github repo -->
<!-- At the very least, provide as much detail as possible to help us reproduce the issue -->
### Failure Logs
<!-- Please include any relevant log snippets or files here. -->
`Definition for rule '@nrwl/nx/workspace/no-destroy-without-provider' was not found`
### Environment
<!-- It's important for us to know the context in which you experience this behavior! -->
<!-- Please paste the result of `nx report` below! -->
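The failure log above is ESLint's generic message for a rule name that never gets registered — plausibly because the pre-commit environment resolves plugins differently from the IDE, so the workspace rules are never loaded. The lookup that produces such a message can be sketched like this (illustrative only, not ESLint's real implementation):

```javascript
// Sketch: a rule-registry lookup that fails with ESLint's
// "Definition for rule ... was not found" style message when the
// workspace plugin was never loaded into the registry.
function getRuleDefinition(registry, ruleId) {
  const rule = registry.get(ruleId);
  if (!rule) {
    throw new Error(`Definition for rule '${ruleId}' was not found`);
  }
  return rule;
}

// The workspace rule is missing from this registry, so looking it up
// reproduces the reported error shape.
const registry = new Map([['@nrwl/nx/enforce-module-boundaries', { create() {} }]]);
```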
| 1.0 | Custom workspace eslint rules somehow not found on precommit from WebStorm - <!-- Please do your best to fill out all of the sections below! -->
## Current Behavior
<!-- What is the behavior that currently you experience? -->
Custom eslint workspace rules are not found in WebStorm. The problem only occurs when the pre-commit hook launches lint-staged with `nx affected:lint` using custom workspace eslint rules, and only in WebStorm.


## Expected Behavior
Should work properly and find the custom workspace eslint rules
<!-- What is the behavior that you expect to happen? -->
<!-- Is this a regression? .i.e Did this used to be the behavior at one point? -->
## Steps to Reproduce
<!-- Help us help you by making it easy for us to reproduce your issue! -->
I have an example project with another rule where I use custom eslint rules: https://github.com/pumano/nx-eslint-workspace-rule-destroy-service
<!-- Can you reproduce this on https://github.com/nrwl/nx-examples? -->
<!-- If so, open a PR with your changes and link it below. -->
<!-- If not, please provide a minimal Github repo -->
<!-- At the very least, provide as much detail as possible to help us reproduce the issue -->
### Failure Logs
<!-- Please include any relevant log snippets or files here. -->
`Definition for rule '@nrwl/nx/workspace/no-destroy-without-provider' was not found`
### Environment
<!-- It's important for us to know the context in which you experience this behavior! -->
<!-- Please paste the result of `nx report` below! -->
| non_process | custom workspace eslint rules somehow not found on precommit from webstorm current behavior custom eslint workpace rules not found in webstorm problem only when pre commit hook launch lint staged with nx affected lint with custom workspace eslint rules and problem only in webstorm expected behavior should works properly and found eslint custom workspace rules steps to reproduce i have example project with another rule where i use custom eslint rules failure logs definition for rule nrwl nx workspace no destroy without provider was not found environment | 0 |
130,094 | 18,154,864,647 | IssuesEvent | 2021-09-26 22:15:28 | ghc-dev/Glenda-Moore | https://api.github.com/repos/ghc-dev/Glenda-Moore | closed | CVE-2020-14343 (High) detected in PyYAML-5.3.1.tar.gz - autoclosed | security vulnerability | ## CVE-2020-14343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>PyYAML-5.3.1.tar.gz</b></p></summary>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz">https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz</a></p>
<p>Path to dependency file: Glenda-Moore/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **PyYAML-5.3.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Glenda-Moore/commit/f96267f1340816a1786cdf0caf0e55886daa7b97">f96267f1340816a1786cdf0caf0e55886daa7b97</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was discovered in the PyYAML library in versions before 5.4, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. This flaw allows an attacker to execute arbitrary code on the system by abusing the python/object/new constructor. This flaw is due to an incomplete fix for CVE-2020-1747.
<p>Publish Date: 2021-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343>CVE-2020-14343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14343">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14343</a></p>
<p>Release Date: 2021-02-09</p>
<p>Fix Resolution: PyYAML - 5.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"PyYAML","packageVersion":"5.3.1","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"PyYAML:5.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"PyYAML - 5.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14343","vulnerabilityDetails":"A vulnerability was discovered in the PyYAML library in versions before 5.4, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. This flaw allows an attacker to execute arbitrary code on the system by abusing the python/object/new constructor. This flaw is due to an incomplete fix for CVE-2020-1747.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-14343 (High) detected in PyYAML-5.3.1.tar.gz - autoclosed - ## CVE-2020-14343 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>PyYAML-5.3.1.tar.gz</b></p></summary>
<p>YAML parser and emitter for Python</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz">https://files.pythonhosted.org/packages/64/c2/b80047c7ac2478f9501676c988a5411ed5572f35d1beff9cae07d321512c/PyYAML-5.3.1.tar.gz</a></p>
<p>Path to dependency file: Glenda-Moore/requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **PyYAML-5.3.1.tar.gz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Glenda-Moore/commit/f96267f1340816a1786cdf0caf0e55886daa7b97">f96267f1340816a1786cdf0caf0e55886daa7b97</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A vulnerability was discovered in the PyYAML library in versions before 5.4, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. This flaw allows an attacker to execute arbitrary code on the system by abusing the python/object/new constructor. This flaw is due to an incomplete fix for CVE-2020-1747.
<p>Publish Date: 2021-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343>CVE-2020-14343</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14343">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14343</a></p>
<p>Release Date: 2021-02-09</p>
<p>Fix Resolution: PyYAML - 5.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"PyYAML","packageVersion":"5.3.1","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"PyYAML:5.3.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"PyYAML - 5.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-14343","vulnerabilityDetails":"A vulnerability was discovered in the PyYAML library in versions before 5.4, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. This flaw allows an attacker to execute arbitrary code on the system by abusing the python/object/new constructor. This flaw is due to an incomplete fix for CVE-2020-1747.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14343","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_process | cve high detected in pyyaml tar gz autoclosed cve high severity vulnerability vulnerable library pyyaml tar gz yaml parser and emitter for python library home page a href path to dependency file glenda moore requirements txt path to vulnerable library requirements txt dependency hierarchy x pyyaml tar gz vulnerable library found in head commit a href found in base branch master vulnerability details a vulnerability was discovered in the pyyaml library in versions before where it is susceptible to arbitrary code execution when it processes untrusted yaml files through the full load method or with the fullloader loader applications that use the library to process untrusted input may be vulnerable to this flaw this flaw allows an attacker to execute 
arbitrary code on the system by abusing the python object new constructor this flaw is due to an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution pyyaml rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree pyyaml isminimumfixversionavailable true minimumfixversion pyyaml basebranches vulnerabilityidentifier cve vulnerabilitydetails a vulnerability was discovered in the pyyaml library in versions before where it is susceptible to arbitrary code execution when it processes untrusted yaml files through the full load method or with the fullloader loader applications that use the library to process untrusted input may be vulnerable to this flaw this flaw allows an attacker to execute arbitrary code on the system by abusing the python object new constructor this flaw is due to an incomplete fix for cve vulnerabilityurl | 0 |
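The "Fix Resolution: PyYAML - 5.4" verdict in the record above reduces to a version comparison: any installed PyYAML that sorts below 5.4 gets flagged. A rough sketch of that check with GNU `sort -V` (not WhiteSource's actual logic; the version strings are taken from the record):

```shell
installed="5.3.1"   # version pinned in the scanned requirements.txt
fixed="5.4"         # fix resolution reported by the scanner
# sort -V orders version strings numerically, so the installed version
# sorts first whenever it predates the fix.
lowest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n 1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
  echo "PyYAML $installed is below the fix resolution $fixed"
else
  echo "PyYAML $installed meets the fix resolution $fixed"
fi
```

The remediation itself is just moving the pin to `PyYAML>=5.4`, or, independent of version, loading untrusted input with `yaml.safe_load` instead of `full_load`/`FullLoader`.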
17,656 | 23,477,105,641 | IssuesEvent | 2022-08-17 07:16:07 | benthosdev/benthos | https://api.github.com/repos/benthosdev/benthos | closed | How to concatenate strings in awk? | question processors needs more info | I try to get the mysql binlog from kafka, and then concatenate it into DML statement or DDL statement. How to concatenate correctly in awk? There doesn't seem to be a function available in awk. | 1.0 | How to concatenate strings in awk? - I try to get the mysql binlog from kafka, and then concatenate it into DML statement or DDL statement. How to concatenate correctly in awk? There doesn't seem to be a function available in awk. | process | how to concatenate strings in awk i try to get the mysql binlog from kafka and then concatenate it into dml statement or ddl statement how to concatenate correctly in awk there doesn t seem to be a function available in awk | 1
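The awk question in the record above has a one-line answer: awk has no concatenation function because juxtaposition is the concatenation operator; writing two string expressions next to each other joins them (`s = a b`). A minimal sketch, with a made-up table name and id standing in for the binlog fields:

```shell
# Juxtaposition concatenates in awk: the five string expressions on the
# right-hand side are joined into one DML-style statement.
echo "users 42" | awk '{ stmt = "DELETE FROM " $1 " WHERE id = " $2 ";"; print stmt }'
```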
120,585 | 25,826,265,924 | IssuesEvent | 2022-12-12 13:10:52 | microsoft/vscode-pull-request-github | https://api.github.com/repos/microsoft/vscode-pull-request-github | closed | Feature request: Add a comment by highlighting an area of the diff | feature-request upstream/vscode ghcs-scenario-found | There might already be an issue for this, but I couldn't find one. It took me a while to figure out how to comment on a PR when in review mode. I missed the small adornment in the gutter next to the line number. My first instinct was to highlight the section of code I wanted to comment on, and when I did, nothing happened. Then I right-clicked on the selection and was looking for a Comment action, but didn't see one.
Can you consider adding the ability to comment on a line or span of code when that code is highlighted in review mode?
https://github.com/microsoft/vssaas-planning/issues/2176 | 1.0 | Feature request: Add a comment by highlighting an area of the diff - There might already be an issue for this, but I couldn't find one. It took me a while to figure out how to comment on a PR when in review mode. I missed the small adornment in the gutter next to the line number. My first instinct was to highlight the section of code I wanted to comment on, and when I did, nothing happened. Then I right-clicked on the selection and was looking for a Comment action, but didn't see one.
Can you consider adding the ability to comment on a line or span of code when that code is highlighted in review mode?
https://github.com/microsoft/vssaas-planning/issues/2176 | non_process | feature request add a comment by highlighting an area of the diff there might already be an issue for this but i couldn t find one it took me a while to figure out how to comment on a pr when in review mode i missed the small adornment in the gutter next to the line number my first instinct was to highlight the section of code i wanted to comment on and when i did nothing happened then i right clicked on the selection and was looking for a comment action but didn t see one can you consider adding the ability to comment on a line or span of code when that code is highlighted in review mode | 0 |
353,721 | 10,556,705,917 | IssuesEvent | 2019-10-04 03:06:13 | delaford/game | https://api.github.com/repos/delaford/game | closed | Popup screen does not close when moving away | Hacktoberfest bug easy enhancement good first issue help wanted low priority user interface | <!-- Please don't delete this template or we'll close your issue -->
<!-- Before creating an issue please make sure you are using the latest version of the game. -->
**What is the current behavior?**
When you open a popup screen, usually via trading, and you click someone on the edge of the canvas to go somewhere on the map, the popup screen stays connected.
**If the current behavior is a bug, please provide the exact steps to reproduce.**
1. Trade a vendor.
2. Confirm screen is visible.
3. Click on edge of canvas to go somewhere in map.
4. Confirm the trade screen is still up.
**What is the expected behavior?**
When you move somewhere on the map, the online screen should close for an easier UX experience and viability of bug abuse. | 1.0 | Popup screen does not close when moving away - <!-- Please don't delete this template or we'll close your issue -->
<!-- Before creating an issue please make sure you are using the latest version of the game. -->
**What is the current behavior?**
When you open a popup screen, usually via trading, and you click someone on the edge of the canvas to go somewhere on the map, the popup screen stays connected.
**If the current behavior is a bug, please provide the exact steps to reproduce.**
1. Trade a vendor.
2. Confirm screen is visible.
3. Click on edge of canvas to go somewhere in map.
4. Confirm the trade screen is still up.
**What is the expected behavior?**
When you move somewhere on the map, the online screen should close for an easier UX experience and viability of bug abuse. | non_process | popup screen does not close when moving away what is the current behavior when you open a popup screen usually via trading and you click someone on the edge of the canvas to go somewhere on the map the popup screen stays connected if the current behavior is a bug please provide the exact steps to reproduce trade a vendor confirm screen is visible click on edge of canvas to go somewhere in map confirm the trade screen is still up what is the expected behavior when you move somewhere on the map the online screen should close for an easier ux experience and viability of bug abuse | 0 |
94,430 | 3,925,746,006 | IssuesEvent | 2016-04-22 20:13:30 | washingtonstateuniversity/WSUWP-Plugin-Embeds | https://api.github.com/repos/washingtonstateuniversity/WSUWP-Plugin-Embeds | closed | Provide a responsive wrapper around YouTube embeds | enhancement priority:high | This should use a method similar to Shortcake Bakery | 1.0 | Provide a responsive wrapper around YouTube embeds - This should use a method similar to Shortcake Bakery | non_process | provide a responsive wrapper around youtube embeds this should use a method similar to shortcake bakery | 0 |
397 | 2,847,253,656 | IssuesEvent | 2015-05-29 15:57:59 | mitchellh/packer | https://api.github.com/repos/mitchellh/packer | closed | Atlas Post-Processor: Interpolate in metadata block | bug post-processor/atlas | We need to be able to interpolate the template in the nested metadata block in the atlas post-processor:
```json
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "pearkes/test",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox",
"version": "0.0.1",
"created_at": "{{timestamp}}"
}
}]
``` | 1.0 | Atlas Post-Processor: Interpolate in metadata block - We need to be able to interpolate the template in the nested metadata block in the atlas post-processor:
```json
{
"type": "atlas",
"only": ["virtualbox-iso"],
"artifact": "pearkes/test",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox",
"version": "0.0.1",
"created_at": "{{timestamp}}"
}
}]
``` | process | atlas post processor interpolate in metadata block we need to be able to interpolate the template in the nested metadata block in the atlas post processor json type atlas only artifact pearkes test artifact type vagrant box metadata provider virtualbox version created at timestamp | 1 |
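The feature requested above is plain string interpolation applied inside the nested `metadata` map. Packer's real engine does this in Go; the intended effect can be mimicked with a `sed` substitution, where the fixed epoch below is only a stand-in for the value `{{timestamp}}` would produce at build time:

```shell
# Substitute the {{timestamp}} placeholder inside the metadata block.
metadata='{"provider": "virtualbox", "version": "0.0.1", "created_at": "{{timestamp}}"}'
timestamp=1432911479   # stand-in value; Packer would supply the build epoch
echo "$metadata" | sed "s/{{timestamp}}/$timestamp/"
```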
3,713 | 2,610,067,447 | IssuesEvent | 2015-02-26 18:19:48 | chrsmith/jsjsj122 | https://api.github.com/repos/chrsmith/jsjsj122 | opened | Which clinic in Luqiao is reputable for treating prostatitis | auto-migrated Priority-Medium Type-Defect | ```
Which clinic in Luqiao is reputable for treating prostatitis: Taizhou Wuzhou Reproductive Hospital. 24-hour health consultation hotline: 0576-88066933 (QQ 800080609) (WeChat ID tzwzszyy). Hospital address: No. 229 Fengnan Road, Jiaojiang District, Taizhou (next to the Fengnan roundabout). Bus routes: buses 104, 108, 118 and 198 run direct to the Fengnan area; buses 107, 105, 109, 112, 901 and 902 stop at … Square, a short walk from the hospital.
Treatment items: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis, …, azoospermia, redundant prepuce and phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest andrology hospital in Taizhou, with authoritative specialists available for online consultation, professional and complete andrology examination and treatment equipment, fees charged strictly according to national standards, cutting-edge medical equipment in step with the field, and humanized service with everything centered on the patient.
For andrology care, choose Taizhou Wuzhou Reproductive Hospital: professional andrology for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:50 | 1.0 | Which clinic in Luqiao is reputable for treating prostatitis - ```
Which clinic in Luqiao is reputable for treating prostatitis: Taizhou Wuzhou Reproductive Hospital. 24-hour health consultation hotline: 0576-88066933 (QQ 800080609) (WeChat ID tzwzszyy). Hospital address: No. 229 Fengnan Road, Jiaojiang District, Taizhou (next to the Fengnan roundabout). Bus routes: buses 104, 108, 118 and 198 run direct to the Fengnan area; buses 107, 105, 109, 112, 901 and 902 stop at … Square, a short walk from the hospital.
Treatment items: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis, …, azoospermia, redundant prepuce and phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest andrology hospital in Taizhou, with authoritative specialists available for online consultation, professional and complete andrology examination and treatment equipment, fees charged strictly according to national standards, cutting-edge medical equipment in step with the field, and humanized service with everything centered on the patient.
For andrology care, choose Taizhou Wuzhou Reproductive Hospital: professional andrology for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:50 | non_process | which clinic in luqiao is reputable for treating prostatitis which clinic in luqiao is reputable for treating prostatitis taizhou wuzhou reproductive hospital hour health consultation hotline qq wechat id tzwzszyy hospital address no fengnan road jiaojiang district taizhou next to the fengnan roundabout bus routes buses and run direct to the fengnan area buses and stop at square a short walk from the hospital treatment items impotence premature ejaculation prostatitis prostatic hyperplasia balanitis azoospermia redundant prepuce and phimosis varicocele gonorrhea etc taizhou wuzhou reproductive hospital is the largest andrology hospital in taizhou with authoritative specialists available for online consultation professional and complete andrology examination and treatment equipment fees charged strictly according to national standards cutting edge medical equipment in step with the field and humanized service with everything centered on the patient for andrology care choose taizhou wuzhou reproductive hospital professional andrology for men original issue reported on code google com by poweragr gmail com on may at | 0 |
11,728 | 14,567,942,920 | IssuesEvent | 2020-12-17 10:56:13 | e4exp/paper_manager_abstract | https://api.github.com/repos/e4exp/paper_manager_abstract | opened | GMAT: Global Memory Augmentation for Transformers | 2020 Natural Language Processing Transformer _read_later | * https://arxiv.org/abs/2006.03274
* 2020
Transformer-based models have become ubiquitous in natural language processing thanks to their large capacity, innate parallelism, and high performance. The contextualizing component of a Transformer block is the pairwise dot-product attention, which requires a large Ω(L^2) memory for a sequence of length L, limiting the ability to process long documents. This has recently attracted substantial interest, and multiple approximation methods have been proposed to reduce the quadratic memory requirement using sparse attention matrices. In this work, we propose to extend sparse Transformer blocks with a dense attention-based global memory of length M (≪ L). This extension has a manageable O(M·(L+M)) memory overhead and can be integrated seamlessly with prior sparse solutions. Moreover, the global memory can also be used for sequence compression, by representing a long input sequence with its memory representations only. We empirically show that global reasoning is substantially improved on a variety of tasks, including (a) synthetic tasks that require global reasoning, (b) masked language modeling, and (c) reading comprehension.
| 1.0 | GMAT: Global Memory Augmentation for Transformers - * https://arxiv.org/abs/2006.03274
* 2020
Transformer-based models have become ubiquitous in natural language processing thanks to their large capacity, innate parallelism, and high performance. The contextualizing component of a Transformer block is the pairwise dot-product attention, which requires a large Ω(L^2) memory for a sequence of length L, limiting the ability to process long documents. This has recently attracted substantial interest, and multiple approximation methods have been proposed to reduce the quadratic memory requirement using sparse attention matrices. In this work, we propose to extend sparse Transformer blocks with a dense attention-based global memory of length M (≪ L). This extension has a manageable O(M·(L+M)) memory overhead and can be integrated seamlessly with prior sparse solutions. Moreover, the global memory can also be used for sequence compression, by representing a long input sequence with its memory representations only. We empirically show that global reasoning is substantially improved on a variety of tasks, including (a) synthetic tasks that require global reasoning, (b) masked language modeling, and (c) reading comprehension.
| process | gmat global memory augmentation for transformers transformer based models have become ubiquitous in natural language processing thanks to their large capacity innate parallelism and high performance the contextualizing component of a transformer block is the pairwise dot product attention which requires a large l memory for a sequence of length l limiting the ability to process long documents this has recently attracted substantial interest and multiple approximation methods have been proposed to reduce the quadratic memory requirement using sparse attention matrices in this work we propose to extend sparse transformer blocks with a dense attention based global memory of length m l this extension has a manageable o m l m memory overhead and can be integrated seamlessly with prior sparse solutions moreover the global memory can also be used for sequence compression by representing a long input sequence with its memory representations only we empirically show that global reasoning is substantially improved on a variety of tasks including a synthetic tasks that require global reasoning b masked language modeling and c reading comprehension | 1
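The memory figures in the abstract above are easy to sanity-check: with illustrative sizes L = 4096 and M = 64 (chosen here for the example, not taken from the paper), pairwise attention needs on the order of L^2 entries while the proposed global memory costs M*(L+M):

```shell
L=4096
M=64
dense=$((L * L))         # pairwise attention: L^2 entries
gmat=$((M * (L + M)))    # global memory: M * (L + M) entries
echo "dense=$dense gmat=$gmat ratio=$((dense / gmat))"
```

For these sizes the global-memory variant needs roughly a sixtieth of the attention entries, which is the point of the M ≪ L regime.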
245,279 | 26,539,966,099 | IssuesEvent | 2023-01-19 18:25:04 | shaneclarke-whitesource/multi-juicer | https://api.github.com/repos/shaneclarke-whitesource/multi-juicer | closed | CVE-2021-43138 (High) detected in async-2.6.3.tgz, async-3.2.0.tgz - autoclosed | security vulnerability | ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-2.6.3.tgz</b>, <b>async-3.2.0.tgz</b></p></summary>
<p>
<details><summary><b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /juice-balancer/ui/package.json</p>
<p>Path to vulnerable library: /juice-balancer/ui/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- webpack-dev-server-3.11.1.tgz
- portfinder-1.0.28.tgz
- :x: **async-2.6.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-3.2.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-3.2.0.tgz">https://registry.npmjs.org/async/-/async-3.2.0.tgz</a></p>
<p>Path to dependency file: /juice-balancer/package.json</p>
<p>Path to vulnerable library: /juice-balancer/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- winston-3.3.3.tgz (Root Library)
- :x: **async-3.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/shaneclarke-whitesource/multi-juicer/commit/0e0ec522551978737ae1ae4ffa66e0f7292e0fc7">0e0ec522551978737ae1ae4ffa66e0f7292e0fc7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Async before 2.6.4 and 3.x before 3.2.2, a malicious user can obtain privileges via the mapValues() method, aka lib/internal/iterator.js createObjectIterator prototype pollution.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 2.6.4</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.0</p><p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (winston): 3.4.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | CVE-2021-43138 (High) detected in async-2.6.3.tgz, async-3.2.0.tgz - autoclosed - ## CVE-2021-43138 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>async-2.6.3.tgz</b>, <b>async-3.2.0.tgz</b></p></summary>
<p>
<details><summary><b>async-2.6.3.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-2.6.3.tgz">https://registry.npmjs.org/async/-/async-2.6.3.tgz</a></p>
<p>Path to dependency file: /juice-balancer/ui/package.json</p>
<p>Path to vulnerable library: /juice-balancer/ui/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.3.tgz (Root Library)
- webpack-dev-server-3.11.1.tgz
- portfinder-1.0.28.tgz
- :x: **async-2.6.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>async-3.2.0.tgz</b></p></summary>
<p>Higher-order functions and common patterns for asynchronous code</p>
<p>Library home page: <a href="https://registry.npmjs.org/async/-/async-3.2.0.tgz">https://registry.npmjs.org/async/-/async-3.2.0.tgz</a></p>
<p>Path to dependency file: /juice-balancer/package.json</p>
<p>Path to vulnerable library: /juice-balancer/node_modules/async/package.json</p>
<p>
Dependency Hierarchy:
- winston-3.3.3.tgz (Root Library)
- :x: **async-3.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/shaneclarke-whitesource/multi-juicer/commit/0e0ec522551978737ae1ae4ffa66e0f7292e0fc7">0e0ec522551978737ae1ae4ffa66e0f7292e0fc7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Async before 2.6.4 and 3.x before 3.2.2, a malicious user can obtain privileges via the mapValues() method, aka lib/internal/iterator.js createObjectIterator prototype pollution.
<p>Publish Date: 2022-04-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-43138>CVE-2021-43138</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-43138">https://nvd.nist.gov/vuln/detail/CVE-2021-43138</a></p>
<p>Release Date: 2022-04-06</p>
<p>Fix Resolution (async): 2.6.4</p>
<p>Direct dependency fix Resolution (react-scripts): 5.0.0</p><p>Fix Resolution (async): 3.2.2</p>
<p>Direct dependency fix Resolution (winston): 3.4.0</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | non_process | cve high detected in async tgz async tgz autoclosed cve high severity vulnerability vulnerable libraries async tgz async tgz async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file juice balancer ui package json path to vulnerable library juice balancer ui node modules async package json dependency hierarchy react scripts tgz root library webpack dev server tgz portfinder tgz x async tgz vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file juice balancer package json path to vulnerable library juice balancer node modules async package json dependency hierarchy winston tgz root library x async tgz vulnerable library found in head commit a href found in base branch master vulnerability details in async before and x before a malicious user can obtain privileges via the mapvalues method aka lib internal iterator js createobjectiterator prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution async direct dependency fix resolution react scripts fix resolution async direct dependency fix resolution winston rescue worker helmet automatic remediation is available for this issue | 0 |
5,642 | 8,500,063,451 | IssuesEvent | 2018-10-29 18:47:58 | googleapis/google-cloud-python | https://api.github.com/repos/googleapis/google-cloud-python | opened | Group Kokoro CI presubmit jobs | testing type: process | It would be great if we had only a single "Kokoro" check on each PR.
See @tmatsuo's [writeup for PHP](https://docs.google.com/document/d/1vN5-h06KPem_ppBTnu2eXmXwWqc5V_aOfuoPON_9A6I/edit?ts=5bd74fcf#). He groups them across PHP versions, but I think we could group them across APIs.
If the repo split is imminent, then this issue becomes moot. | 1.0 | Group Kokoro CI presubmit jobs - It would be great if we had only a single "Kokoro" check on each PR.
See @tmatsuo's [writeup for PHP](https://docs.google.com/document/d/1vN5-h06KPem_ppBTnu2eXmXwWqc5V_aOfuoPON_9A6I/edit?ts=5bd74fcf#). He groups them across PHP versions, but I think we could group them across APIs.
If the repo split is imminent, then this issue becomes moot. | process | group kokoro ci presubmit jobs it would be great if we had only a single kokoro check on each pr see tmatsuo s he groups them across php versions but i think we could group them across apis if the repo split is imminent then this issue becomes moot | 1 |
563,311 | 16,679,837,334 | IssuesEvent | 2021-06-07 21:28:30 | acalvom/tfm-management | https://api.github.com/repos/acalvom/tfm-management | closed | HT08: Deploy Back-End in Heroku | 4h :30m difficulty: high points: 2 priority: low story: deploy on heroku type: back-end type: database | Como desarrollador, quiero desplegar el proyecto que gestiona el Back-End de la aplicación en Heroku | 1.0 | HT08: Deploy Back-End in Heroku - Como desarrollador, quiero desplegar el proyecto que gestiona el Back-End de la aplicación en Heroku | non_process | deploy back end in heroku como desarrollador quiero desplegar el proyecto que gestiona el back end de la aplicación en heroku | 0
9,054 | 12,130,306,715 | IssuesEvent | 2020-04-23 01:08:46 | GoogleCloudPlatform/python-docs-samples | https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples | closed | remove gcp-devrel-py-tools from pubsub/cloud-client/requirements-test.txt | priority: p2 remove-gcp-devrel-py-tools type: process | remove gcp-devrel-py-tools from pubsub/cloud-client/requirements-test.txt | 1.0 | remove gcp-devrel-py-tools from pubsub/cloud-client/requirements-test.txt - remove gcp-devrel-py-tools from pubsub/cloud-client/requirements-test.txt | process | remove gcp devrel py tools from pubsub cloud client requirements test txt remove gcp devrel py tools from pubsub cloud client requirements test txt | 1 |
16,711 | 21,870,722,190 | IssuesEvent | 2022-05-19 04:48:14 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | QGIS Batch Mode Populating Form are producing gaps between rows | Processing Regression Bug | **Describe the bug**
QGIS 3.16 has strange behavior in batch processing forms. Instead of aligning each input raster file corresponding to a band, the algorithm is reading the files and leaving unnecessary spaces. This must be a bug as everything was working during the last LTR version which was 3.10.
**How to Reproduce**
Please see this screencast:
https://drive.google.com/file/d/1FwFKjdLowG1WWK-cuTx_rOoRAm882IdW/view?usp=sharing
**QGIS and OS versions**
Versão do QGIS
3.16.8-Hannover
Código da versão do QGIS
8c50902ea4
Compilado sobre Qt
5.11.2
Rodando sobre Qt
5.11.2
Compilado sobre GDAL/OGR
3.1.4
Rodando sobre GDAL/OGR
3.1.4
Compilado sobre GEOS
3.8.1-CAPI-1.13.3
Rodando sobre GEOS
3.8.1-CAPI-1.13.3
Compilado no SQLite
3.29.0
Executando contra SQLite
3.29.0
Versão do cliente PostgreSQL
11.5
Versão SpatiaLite
4.3.0
Versão do QWT
6.1.3
Versão QScintilla2
2.10.8
Compilado com PROJ
6.3.2
Em execução com PROJ
Rel. 6.3.2, May 1st, 2020
Versão SO
Windows 10 (10.0)
Ativar complementos python
HCMGIS;
kmltools;
quick_map_services;
SpreadsheetLayers;
db_manager;
MetaSearch;
processing
| 1.0 | QGIS Batch Mode Populating Form are producing gaps between rows - **Describe the bug**
QGIS 3.16 has strange behavior in batch processing forms. Instead of aligning each input raster file corresponding to a band, the algorithm is reading the files and leaving unnecessary spaces. This must be a bug as everything was working during the last LTR version which was 3.10.
**How to Reproduce**
Please see this screencast:
https://drive.google.com/file/d/1FwFKjdLowG1WWK-cuTx_rOoRAm882IdW/view?usp=sharing
**QGIS and OS versions**
Versão do QGIS
3.16.8-Hannover
Código da versão do QGIS
8c50902ea4
Compilado sobre Qt
5.11.2
Rodando sobre Qt
5.11.2
Compilado sobre GDAL/OGR
3.1.4
Rodando sobre GDAL/OGR
3.1.4
Compilado sobre GEOS
3.8.1-CAPI-1.13.3
Rodando sobre GEOS
3.8.1-CAPI-1.13.3
Compilado no SQLite
3.29.0
Executando contra SQLite
3.29.0
Versão do cliente PostgreSQL
11.5
Versão SpatiaLite
4.3.0
Versão do QWT
6.1.3
Versão QScintilla2
2.10.8
Compilado com PROJ
6.3.2
Em execução com PROJ
Rel. 6.3.2, May 1st, 2020
Versão SO
Windows 10 (10.0)
Ativar complementos python
HCMGIS;
kmltools;
quick_map_services;
SpreadsheetLayers;
db_manager;
MetaSearch;
processing
 | process | qgis batch mode populating form are producing gaps between rows describe the bug qgis has strange behavior in batch processing forms instead of aligning each input raster file corresponding to a band the algorithm is reading the files and leaving unnecessary spaces this must be a bug as everything was working during the last ltr version which was how to reproduce please see this screencast qgis and os versions versão do qgis hannover código da versão do qgis compilado sobre qt rodando sobre qt compilado sobre gdal ogr rodando sobre gdal ogr compilado sobre geos capi rodando sobre geos capi compilado no sqlite executando contra sqlite versão do cliente postgresql versão spatialite versão do qwt versão compilado com proj em execução com proj rel may versão so windows ativar complementos python hcmgis kmltools quick map services spreadsheetlayers db manager metasearch processing | 1
224,181 | 17,669,636,632 | IssuesEvent | 2021-08-23 02:57:36 | thesofproject/sof | https://api.github.com/repos/thesofproject/sof | opened | [BUG]TGLU-RVP-NOCODEC-CI dsp panic when do the multiple-pause-resume test | bug TGL multicore Intel Linux Daily tests stress CI multicore tplg | **Describe the bug:**
TGLU-RVP-NOCODEC-CI dsp panic when do the multiple-pause-resume test.
found in inner daily 5994
**To Reproduce**
TPLG=/lib/firmware/intel/sof-tplg/sof-tgl-nocodec-ci.tplg ~/sof-test/test-case/multiple-pause-resume.sh -r 25
**Reproduction Rate**
not 100%
**Environment**
Kernel Branch: topic/sof-dev
Kernel Commit: af102fc2
SOF Branch: main
SOF Commit: db8ee55b528a
Topology:/lib/firmware/intel/sof-tplg/sof-tgl-nocodec-ci.tplg
Platform: TGLU-RVP-NOCODEC-CI-01
**Screenshots or console output**
[console]
```
=== PAUSE ===
(3/25) pcm'smart-nocodec' cmd'aplay' id'0': Wait for 20 ms before resume
aplay: do_pause:1545: pause push error: Connection timed out
=== PAUSE ===
(9/25) pcm'Port0' cmd'aplay' id'2': Wait for 27 ms before resume
aplay: do_pause:1545: pause push error: Connection timed out
```
[dmesg]
```
[dmesg.txt](https://github.com/thesofproject/sof/files/7028842/dmesg.txt)
[ 3421.302411] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: pcm: trigger stream 0 dir 0 cmd 3
[ 3421.302413] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx: 0x60060000: GLB_STREAM_MSG: TRIG_PAUSE
[ 3421.302619] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx succeeded: 0x60060000: GLB_STREAM_MSG: TRIG_PAUSE
[ 3421.303062] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x180]=0x20240000 successful
[ 3421.318297] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: pcm: trigger stream 2 dir 0 cmd 4
[ 3421.318817] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x160]=0x2014001e successful
[ 3421.318821] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx: 0x60070000: GLB_STREAM_MSG: TRIG_RELEASE
[ 3421.319246] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx succeeded: 0x60070000: GLB_STREAM_MSG: TRIG_RELEASE
[ 3421.325190] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error : DSP panic!
[ 3421.325194] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: panic: dsp_oops_offset 788480 offset 788480
[ 3421.325195] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ DSP dump start ]------------
[ 3421.325212] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: status: fw entered - code 00000005
[ 3421.325300] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: runtime exception
[ 3421.325302] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: trace point 00004000
[ 3421.325303] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: panic at :0
[ 3421.325305] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: DSP Firmware Oops
[ 3421.325307] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: Exception Cause: LoadProhibitedCause, A load referenced a page mapped with an attribute that does not permit loads
[ 3421.325308] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EXCCAUSE 0x0000001c EXCVADDR 0xc0000000 PS 0x00060b25 SAR 0x00000000
[ 3421.325310] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EPC1 0xbe02dee7 EPC2 0xbe02d462 EPC3 0x00000000 EPC4 0x00000000
[ 3421.325312] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EPC5 0x00000000 EPC6 0x00000000 EPC7 0x00000000 DEPC 0x00000000
[ 3421.325313] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EPS2 0x00060f20 EPS3 0x00000000 EPS4 0x00000000 EPS5 0x00000000
[ 3421.325315] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EPS6 0x00000000 EPS7 0x00000000 INTENABL 0x00000000 INTERRU 0x00000222
[ 3421.325316] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: stack dump from 0xbe280cb0
[ 3421.325318] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cb0: 00000000 00000000 00000000 00000000
[ 3421.325320] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cb4: 7e051b61 be280d30 be0a0e80 9e0a0fe4
[ 3421.325321] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cb8: f4b73c00 d7450508 1cac7828 ffff8fd4
[ 3421.325323] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cbc: 000c0800 00000000 1cac7828 ffff8fd4
[ 3421.325324] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cc0: 1cac7828 ffff8fd4 00000092 00000000
[ 3421.325326] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cc4: 234cc400 ffff8fd4 2f42a000 ffff8fd4
[ 3421.325327] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cc8: f4b73c00 d7450508 b58cffa0 ffffffff
[ 3421.325328] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280ccc: 01d19080 ffff8fd4 b588d710 ffffffff
[ 3421.325330] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ DSP dump end ]------------
[ 3421.325371] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: trace IO error
``` | 1.0 | [BUG]TGLU-RVP-NOCODEC-CI dsp panic when do the multiple-pause-resume test - **Describe the bug:**
TGLU-RVP-NOCODEC-CI dsp panic when do the multiple-pause-resume test.
found in inner daily 5994
**To Reproduce**
TPLG=/lib/firmware/intel/sof-tplg/sof-tgl-nocodec-ci.tplg ~/sof-test/test-case/multiple-pause-resume.sh -r 25
**Reproduction Rate**
not 100%
**Environment**
Kernel Branch: topic/sof-dev
Kernel Commit: af102fc2
SOF Branch: main
SOF Commit: db8ee55b528a
Topology:/lib/firmware/intel/sof-tplg/sof-tgl-nocodec-ci.tplg
Platform: TGLU-RVP-NOCODEC-CI-01
**Screenshots or console output**
[console]
```
=== PAUSE ===
(3/25) pcm'smart-nocodec' cmd'aplay' id'0': Wait for 20 ms before resume
aplay: do_pause:1545: pause push error: Connection timed out
=== PAUSE ===
(9/25) pcm'Port0' cmd'aplay' id'2': Wait for 27 ms before resume
aplay: do_pause:1545: pause push error: Connection timed out
```
[dmesg]
```
[dmesg.txt](https://github.com/thesofproject/sof/files/7028842/dmesg.txt)
[ 3421.302411] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: pcm: trigger stream 0 dir 0 cmd 3
[ 3421.302413] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx: 0x60060000: GLB_STREAM_MSG: TRIG_PAUSE
[ 3421.302619] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx succeeded: 0x60060000: GLB_STREAM_MSG: TRIG_PAUSE
[ 3421.303062] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x180]=0x20240000 successful
[ 3421.318297] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: pcm: trigger stream 2 dir 0 cmd 4
[ 3421.318817] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: FW Poll Status: reg[0x160]=0x2014001e successful
[ 3421.318821] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx: 0x60070000: GLB_STREAM_MSG: TRIG_RELEASE
[ 3421.319246] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ipc tx succeeded: 0x60070000: GLB_STREAM_MSG: TRIG_RELEASE
[ 3421.325190] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error : DSP panic!
[ 3421.325194] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: panic: dsp_oops_offset 788480 offset 788480
[ 3421.325195] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ DSP dump start ]------------
[ 3421.325212] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: status: fw entered - code 00000005
[ 3421.325300] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: runtime exception
[ 3421.325302] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: trace point 00004000
[ 3421.325303] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: panic at :0
[ 3421.325305] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: DSP Firmware Oops
[ 3421.325307] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: Exception Cause: LoadProhibitedCause, A load referenced a page mapped with an attribute that does not permit loads
[ 3421.325308] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EXCCAUSE 0x0000001c EXCVADDR 0xc0000000 PS 0x00060b25 SAR 0x00000000
[ 3421.325310] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EPC1 0xbe02dee7 EPC2 0xbe02d462 EPC3 0x00000000 EPC4 0x00000000
[ 3421.325312] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EPC5 0x00000000 EPC6 0x00000000 EPC7 0x00000000 DEPC 0x00000000
[ 3421.325313] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EPS2 0x00060f20 EPS3 0x00000000 EPS4 0x00000000 EPS5 0x00000000
[ 3421.325315] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: EPS6 0x00000000 EPS7 0x00000000 INTENABL 0x00000000 INTERRU 0x00000222
[ 3421.325316] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: stack dump from 0xbe280cb0
[ 3421.325318] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cb0: 00000000 00000000 00000000 00000000
[ 3421.325320] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cb4: 7e051b61 be280d30 be0a0e80 9e0a0fe4
[ 3421.325321] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cb8: f4b73c00 d7450508 1cac7828 ffff8fd4
[ 3421.325323] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cbc: 000c0800 00000000 1cac7828 ffff8fd4
[ 3421.325324] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cc0: 1cac7828 ffff8fd4 00000092 00000000
[ 3421.325326] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cc4: 234cc400 ffff8fd4 2f42a000 ffff8fd4
[ 3421.325327] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280cc8: f4b73c00 d7450508 b58cffa0 ffffffff
[ 3421.325328] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: 0xbe280ccc: 01d19080 ffff8fd4 b588d710 ffffffff
[ 3421.325330] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: ------------[ DSP dump end ]------------
[ 3421.325371] kernel: sof-audio-pci-intel-tgl 0000:00:1f.3: error: trace IO error
``` | non_process | tglu rvp nocodec ci dsp panic when do the multiple pause resume test describe the bug tglu rvp nocodec ci dsp panic when do the multiple pause resume test found in inner daily to reproduce tplg lib firmware intel sof tplg sof tgl nocodec ci tplg sof test test case multiple pause resume sh r reproduction rate not environment kernel branch topic sof dev kernel commit sof branch main sof commit topology lib firmware intel sof tplg sof tgl nocodec ci tplg platform tglu rvp nocodec ci screenshots or console output pause pcm smart nocodec cmd aplay id wait for ms before resume aplay do pause pause push error connection timed out pause pcm cmd aplay id wait for ms before resume aplay do pause pause push error connection timed out kernel sof audio pci intel tgl pcm trigger stream dir cmd kernel sof audio pci intel tgl ipc tx glb stream msg trig pause kernel sof audio pci intel tgl ipc tx succeeded glb stream msg trig pause kernel sof audio pci intel tgl fw poll status reg successful kernel sof audio pci intel tgl pcm trigger stream dir cmd kernel sof audio pci intel tgl fw poll status reg successful kernel sof audio pci intel tgl ipc tx glb stream msg trig release kernel sof audio pci intel tgl ipc tx succeeded glb stream msg trig release kernel sof audio pci intel tgl error dsp panic kernel sof audio pci intel tgl panic dsp oops offset offset kernel sof audio pci intel tgl kernel sof audio pci intel tgl status fw entered code kernel sof audio pci intel tgl error runtime exception kernel sof audio pci intel tgl error trace point kernel sof audio pci intel tgl error panic at kernel sof audio pci intel tgl error dsp firmware oops kernel sof audio pci intel tgl error exception cause loadprohibitedcause a load referenced a page mapped with an attribute that does not permit loads kernel sof audio pci intel tgl exccause excvaddr ps sar kernel sof audio pci intel tgl kernel sof audio pci intel tgl depc kernel sof audio pci intel tgl kernel sof audio pci intel 
tgl intenabl interru kernel sof audio pci intel tgl stack dump from kernel sof audio pci intel tgl kernel sof audio pci intel tgl kernel sof audio pci intel tgl kernel sof audio pci intel tgl kernel sof audio pci intel tgl kernel sof audio pci intel tgl kernel sof audio pci intel tgl ffffffff kernel sof audio pci intel tgl ffffffff kernel sof audio pci intel tgl kernel sof audio pci intel tgl error trace io error | 0 |
92,948 | 26,819,039,756 | IssuesEvent | 2023-02-02 08:03:20 | envoyproxy/envoy | https://api.github.com/repos/envoyproxy/envoy | closed | Newer release available `com_github_msgpack_msgpack_c`: cpp-5.0.0 (current: cpp-3.3.0) | area/build no stalebot dependencies |
Package Name: com_github_msgpack_msgpack_c@3.3.0
Current Version: cpp-3.3.0@2020-06-05
Available Version: cpp-5.0.0@2023-01-10
Upstream releases: https://github.com/msgpack/msgpack-c/releases
| 1.0 | Newer release available `com_github_msgpack_msgpack_c`: cpp-5.0.0 (current: cpp-3.3.0) -
Package Name: com_github_msgpack_msgpack_c@3.3.0
Current Version: cpp-3.3.0@2020-06-05
Available Version: cpp-5.0.0@2023-01-10
Upstream releases: https://github.com/msgpack/msgpack-c/releases
| non_process | newer release available com github msgpack msgpack c cpp current cpp package name com github msgpack msgpack c current version cpp available version cpp upstream releases | 0 |
90,126 | 18,063,287,529 | IssuesEvent | 2021-09-20 16:05:47 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | [MONO][Interp] Assertion at /__w/1/s/src/mono/mono/mini/interp/transform.c:6813, condition `td->clause_indexes [in_offset] != -1' not met | area-Codegen-Interpreter-mono in pr | The following tests failed on Android x64 with Interpreter with the same error shown below:
- JIT/Directed/coverage/importer/badendfinally/badendfinally.sh
- JIT/Directed/coverage/importer/Desktop/badendfinally_il_d/badendfinally_il_d.sh
- JIT/Directed/coverage/importer/Desktop/badendfinally_il_r/badendfinally_il_r.sh
Here is the error message:
[Full log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-54084-merge-7fe5a3a9adcf43bb89/JIT.Directed/console.850ee7bc.log?sv=2019-07-07&se=2021-07-07T16%3A35%3A55Z&sr=c&sp=rl&sig=9XwT59x%2Bj3Sscfb8zn3gZ4RalUABqgsDFGMafqXPHQ4%3D)
```
06-17 16:48:48.802 6040 6057 I DOTNET : MonoRunner initialize,, entryPointLibName=badendfinally.dll
06-17 16:48:48.802 6040 6057 D DOTNET : file_path: /data/user/0/net.dot.JIT_Directed/files/runtimeconfig.bin
06-17 16:48:48.802 6040 6057 D DOTNET : Interp Enabled
06-17 16:48:48.804 6040 6057 D DOTNET : assembly_preload_hook: System.Private.CoreLib (null) /data/user/0/net.dot.JIT_Directed/files
06-17 16:48:48.838 6040 6057 D DOTNET : assembly_preload_hook: badendfinally.dll (null) /data/user/0/net.dot.JIT_Directed/files
06-17 16:48:48.839 6040 6057 D DOTNET : Executable: badendfinally.dll
06-17 16:48:48.839 6040 6057 D DOTNET : assembly_preload_hook: mscorlib /data/user/0/net.dot.JIT_Directed/files
06-17 16:48:48.839 6040 6057 D DOTNET : ((null) error) * Assertion at /__w/1/s/src/mono/mono/mini/interp/transform.c:6813, condition `td->clause_indexes [in_offset] != -1' not met
06-17 16:48:48.839 6040 6057 E DOTNET : Exit code: 1.
06-17 16:48:48.865 1792 1806 I ActivityManager: Process net.dot.JIT_Directed (pid 6040) has died: fore FGS
06-17 16:48:48.867 1792 1806 W ActivityManager: Crash of app net.dot.JIT_Directed running instrumentation ComponentInfo{net.dot.JIT_Directed/net.dot.MonoRunner}
06-17 16:48:48.867 1792 1806 I ActivityManager: Force stopping net.dot.JIT_Directed appid=10110 user=0: finished inst
06-17 16:48:48.868 1792 5885 W Binder : Outgoing transactions from this process must be FLAG_ONEWAY
06-17 16:48:48.868 1792 5885 W Binder : java.lang.Throwable
06-17 16:48:48.868 1792 5885 W Binder : at android.os.BinderProxy.transact(BinderProxy.java:480)
06-17 16:48:48.868 1792 5885 W Binder : at android.app.IInstrumentationWatcher$Stub$Proxy.instrumentationFinished(IInstrumentationWatcher.java:205)
06-17 16:48:48.868 1792 5885 W Binder : at com.android.server.am.InstrumentationReporter$MyThread.run(InstrumentationReporter.java:86)
06-17 16:48:48.871 6029 6029 D AndroidRuntime: Shutting down VM
06-17 16:48:48.885 1536 1536 I Zygote : Process 6040 exited cleanly (1)
06-17 16:48:48.885 6029 6060 E app_process: Thread attaching to non-existent runtime: Binder:6029_3
06-17 16:48:48.885 6029 6060 I AndroidRuntime: NOTE: attach of thread 'Binder:6029_3' failed
06-17 16:48:48.905 1792 1819 I libprocessgroup: Successfully killed process cgroup uid 10110 pid 6040 in 39ms
``` | 1.0 | [MONO][Interp] Assertion at /__w/1/s/src/mono/mono/mini/interp/transform.c:6813, condition `td->clause_indexes [in_offset] != -1' not met - The following tests failed on Android x64 with Interpreter with the same error shown below:
- JIT/Directed/coverage/importer/badendfinally/badendfinally.sh
- JIT/Directed/coverage/importer/Desktop/badendfinally_il_d/badendfinally_il_d.sh
- JIT/Directed/coverage/importer/Desktop/badendfinally_il_r/badendfinally_il_r.sh
Here is the error message:
[Full log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-54084-merge-7fe5a3a9adcf43bb89/JIT.Directed/console.850ee7bc.log?sv=2019-07-07&se=2021-07-07T16%3A35%3A55Z&sr=c&sp=rl&sig=9XwT59x%2Bj3Sscfb8zn3gZ4RalUABqgsDFGMafqXPHQ4%3D)
```
06-17 16:48:48.802 6040 6057 I DOTNET : MonoRunner initialize,, entryPointLibName=badendfinally.dll
06-17 16:48:48.802 6040 6057 D DOTNET : file_path: /data/user/0/net.dot.JIT_Directed/files/runtimeconfig.bin
06-17 16:48:48.802 6040 6057 D DOTNET : Interp Enabled
06-17 16:48:48.804 6040 6057 D DOTNET : assembly_preload_hook: System.Private.CoreLib (null) /data/user/0/net.dot.JIT_Directed/files
06-17 16:48:48.838 6040 6057 D DOTNET : assembly_preload_hook: badendfinally.dll (null) /data/user/0/net.dot.JIT_Directed/files
06-17 16:48:48.839 6040 6057 D DOTNET : Executable: badendfinally.dll
06-17 16:48:48.839 6040 6057 D DOTNET : assembly_preload_hook: mscorlib /data/user/0/net.dot.JIT_Directed/files
06-17 16:48:48.839 6040 6057 D DOTNET : ((null) error) * Assertion at /__w/1/s/src/mono/mono/mini/interp/transform.c:6813, condition `td->clause_indexes [in_offset] != -1' not met
06-17 16:48:48.839 6040 6057 E DOTNET : Exit code: 1.
06-17 16:48:48.865 1792 1806 I ActivityManager: Process net.dot.JIT_Directed (pid 6040) has died: fore FGS
06-17 16:48:48.867 1792 1806 W ActivityManager: Crash of app net.dot.JIT_Directed running instrumentation ComponentInfo{net.dot.JIT_Directed/net.dot.MonoRunner}
06-17 16:48:48.867 1792 1806 I ActivityManager: Force stopping net.dot.JIT_Directed appid=10110 user=0: finished inst
06-17 16:48:48.868 1792 5885 W Binder : Outgoing transactions from this process must be FLAG_ONEWAY
06-17 16:48:48.868 1792 5885 W Binder : java.lang.Throwable
06-17 16:48:48.868 1792 5885 W Binder : at android.os.BinderProxy.transact(BinderProxy.java:480)
06-17 16:48:48.868 1792 5885 W Binder : at android.app.IInstrumentationWatcher$Stub$Proxy.instrumentationFinished(IInstrumentationWatcher.java:205)
06-17 16:48:48.868 1792 5885 W Binder : at com.android.server.am.InstrumentationReporter$MyThread.run(InstrumentationReporter.java:86)
06-17 16:48:48.871 6029 6029 D AndroidRuntime: Shutting down VM
06-17 16:48:48.885 1536 1536 I Zygote : Process 6040 exited cleanly (1)
06-17 16:48:48.885 6029 6060 E app_process: Thread attaching to non-existent runtime: Binder:6029_3
06-17 16:48:48.885 6029 6060 I AndroidRuntime: NOTE: attach of thread 'Binder:6029_3' failed
06-17 16:48:48.905 1792 1819 I libprocessgroup: Successfully killed process cgroup uid 10110 pid 6040 in 39ms
``` | non_process | assertion at w s src mono mono mini interp transform c condition td clause indexes not met the following tests failed on android with interpreter the the same error shown below jit directed coverage importer badendfinally badendfinally sh jit directed coverage importer desktop badendfinally il d badendfinally il d sh jit directed coverage importer desktop badendfinally il r badendfinally il r sh here is the error message i dotnet monorunner initialize entrypointlibname badendfinally dll d dotnet file path data user net dot jit directed files runtimeconfig bin d dotnet interp enabled d dotnet assembly preload hook system private corelib null data user net dot jit directed files d dotnet assembly preload hook badendfinally dll null data user net dot jit directed files d dotnet executable badendfinally dll d dotnet assembly preload hook mscorlib data user net dot jit directed files d dotnet null error assertion at w s src mono mono mini interp transform c condition td clause indexes not met e dotnet exit code i activitymanager process net dot jit directed pid has died fore fgs w activitymanager crash of app net dot jit directed running instrumentation componentinfo net dot jit directed net dot monorunner i activitymanager force stopping net dot jit directed appid user finished inst w binder outgoing transactions from this process must be flag oneway w binder java lang throwable w binder at android os binderproxy transact binderproxy java w binder at android app iinstrumentationwatcher stub proxy instrumentationfinished iinstrumentationwatcher java w binder at com android server am instrumentationreporter mythread run instrumentationreporter java d androidruntime shutting down vm i zygote process exited cleanly e app process thread attaching to non existent runtime binder i androidruntime note attach of thread binder failed i libprocessgroup successfully killed process cgroup uid pid in | 0 |
233,244 | 25,757,428,590 | IssuesEvent | 2022-12-08 17:28:36 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Add test to check that @kbn/handlebars tests are in sync with upstream | Team:Security | When making changes to files under `packages/kbn-handlebars/src/upstream` it's important to run `./packages/kbn-handlebars/scripts/update_test_patches.sh`. Likewise it's important to periodically run `./packages/kbn-handlebars/scripts/check_for_test_changes.sh` to see if new tests have been added upstream that we need to copy downstream.
This is documented in [`README.md`](https://github.com/elastic/kibana/blob/main/packages/kbn-handlebars/README.md#development), but currently it's up to the developer to remember to do these things and there's no test that runs during a PR to verify this.
We can't just make a test that runs as part of the regular test-suite as changes in the upstream repo is outside of our control and might therefore break our CI. So instead we need to make a conditional test that only runs for PRs that contain changes to files inside of `packages/kbn-handlebars`. | True | Add test to check that @kbn/handlebars tests are in sync with upstream - When making changes to files under `packages/kbn-handlebars/src/upstream` it's important to run `./packages/kbn-handlebars/scripts/update_test_patches.sh`. Likewise it's important to periodically run `./packages/kbn-handlebars/scripts/check_for_test_changes.sh` to see if new tests have been added upstream that we need to copy downstream.
This is documented in [`README.md`](https://github.com/elastic/kibana/blob/main/packages/kbn-handlebars/README.md#development), but currently it's up to the developer to remember to do these things and there's no test that runs during a PR to verify this.
We can't just make a test that runs as part of the regular test-suite as changes in the upstream repo is outside of our control and might therefore break our CI. So instead we need to make a conditional test that only runs for PRs that contain changes to files inside of `packages/kbn-handlebars`. | non_process | add test to check that kbn handlebars tests are in sync with upstream when making changes to files under packages kbn handlebars src upstream it s important to run packages kbn handlebars scripts update test patches sh likewise it s important to periodically run packages kbn handlebars scripts check for test changes sh to see if new tests have been added upstream that we need to copy downstream this is documented in but currently it s up the the developer to remember to do these things and there s no test that runs during a pr to verify this we can t just make a test that runs as part of the regular test suite as changes in the upstream repo is outside of our control and might therefore break our ci so instead we need to make a conditional test that only runs for prs that contain changes to files inside of packages kbn handlebars | 0 |
8,991 | 12,102,234,082 | IssuesEvent | 2020-04-20 16:22:19 | prisma/prisma | https://api.github.com/repos/prisma/prisma | closed | Engine Exited on GCR (Google Cloud Run) | bug/0-needs-info kind/bug process/candidate topic: deployment-platforms topic: target-exit | Getting this error on Google Cloud Run, after redeploy it works again. Is there still some issue with serverless?

| 1.0 | Engine Exited on GCR (Google Cloud Run) - Getting this error on Google Cloud Run, after redeploy it works again. Is there still some issue with serverless?

| process | engine exited on gcr google cloud run getting this error on google cloud run after redeploy it works again is there still some issue with serverless | 1 |
278,928 | 30,702,425,469 | IssuesEvent | 2023-07-27 01:29:08 | Nivaskumark/CVE-2020-0074-frameworks_base | https://api.github.com/repos/Nivaskumark/CVE-2020-0074-frameworks_base | reopened | CVE-2022-20115 (Medium) detected in baseandroid-11.0.0_r39 | Mend: dependency security vulnerability | ## CVE-2022-20115 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>baseandroid-11.0.0_r39</b></p></summary>
<p>
<p>Android framework classes and services</p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/base>https://android.googlesource.com/platform/frameworks/base</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/services/core/java/com/android/server/TelephonyRegistry.java</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In broadcastServiceStateChanged of TelephonyRegistry.java, there is a possible way to learn base station information without location permission due to a missing permission check. This could lead to local information disclosure with User execution privileges needed. User interaction is not needed for exploitation.Product: AndroidVersions: Android-12 Android-12LAndroid ID: A-210118427
<p>Publish Date: 2022-05-10
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-20115>CVE-2022-20115</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
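The 5.5 above is not arbitrary; it falls out of the CVSS v3.0 base-score formula applied to the listed metrics. A minimal sketch (metric weights taken from the CVSS v3.0 specification; unchanged scope only, which is what this report states) reproduces it:

```python
import math

# CVSS v3.0 metric weights (from the CVSS v3.0 specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                         # Attack Complexity
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}    # Privileges Required (Scope: Unchanged)
UI = {"N": 0.85, "R": 0.62}                         # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}              # C/I/A impact

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.0 base score for an unchanged-scope vulnerability."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR_UNCHANGED[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    # CVSS "round up" means ceiling to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# CVE-2022-20115 vector per this report: AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N
print(base_score("L", "L", "L", "N", "H", "N", "N"))  # 5.5
```

Plugging in the metrics listed above (Local/Low/Low/None, Confidentiality High only) yields exactly the reported 5.5.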
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://source.android.com/security/bulletin/2022-05-01">https://source.android.com/security/bulletin/2022-05-01</a></p>
<p>Release Date: 2022-05-10</p>
<p>Fix Resolution: android-12.1.0_r5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
370,017 | 10,924,211,955 | IssuesEvent | 2019-11-22 09:39:19 | fac18/week4-gmno-autocomplete | https://api.github.com/repos/fac18/week4-gmno-autocomplete | opened | search button takes me to wikipedia | bug priority | when the search input has no value, and I click the search button, I still get taken to wikipedia. While this isn't too bad as it's the wikipedia homepage, it would be good practice to not take me there at all - I was a bit confused as to how I ended up there! Also, if I search another term, say 'cats', I get taken to a page for cats, which obviously has nothing to do with the purpose of your website!
As a quick fix, maybe add some code to say that when search input value is equal to null, search button is disabled?
To go further and stop the user from even being able to search non heritage sites, you could only request a search when the input value matches something from your siteList array. Sure there are many other ways to doing this too, so have a dig around! Would deffo say this is a priority this morning!
76,950 | 15,496,237,405 | IssuesEvent | 2021-03-11 02:18:33 | n-devs/Fiction | https://api.github.com/repos/n-devs/Fiction | opened | WS-2020-0042 (High) detected in acorn-6.0.4.tgz, acorn-5.7.3.tgz | security vulnerability | ## WS-2020-0042 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>acorn-6.0.4.tgz</b>, <b>acorn-5.7.3.tgz</b></p></summary>
<p>
<details><summary><b>acorn-6.0.4.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-6.0.4.tgz">https://registry.npmjs.org/acorn/-/acorn-6.0.4.tgz</a></p>
<p>Path to dependency file: /Fiction/package.json</p>
<p>Path to vulnerable library: Fiction/node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.1.tgz (Root Library)
- eslint-5.6.0.tgz
- espree-4.1.0.tgz
- :x: **acorn-6.0.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>acorn-5.7.3.tgz</b></p></summary>
<p>ECMAScript parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/acorn/-/acorn-5.7.3.tgz">https://registry.npmjs.org/acorn/-/acorn-5.7.3.tgz</a></p>
<p>Path to dependency file: /Fiction/package.json</p>
<p>Path to vulnerable library: Fiction/node_modules/acorn-dynamic-import/node_modules/acorn/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-2.1.1.tgz (Root Library)
- webpack-4.19.1.tgz
- acorn-dynamic-import-3.0.0.tgz
- :x: **acorn-5.7.3.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
acorn is vulnerable to REGEX DoS. A regex of the form /[x-\ud800]/u causes the parser to enter an infinite loop. attackers may leverage the vulnerability leading to a Denial of Service since the string is not valid UTF16 and it results in it being sanitized before reaching the parser.
<p>Publish Date: 2020-03-01
<p>URL: <a href=https://github.com/acornjs/acorn/commit/b5c17877ac0511e31579ea31e7650ba1a5871e51>WS-2020-0042</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1488">https://www.npmjs.com/advisories/1488</a></p>
<p>Release Date: 2020-03-08</p>
<p>Fix Resolution: 7.1.1</p>
</p>
</details>
<p></p>
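The fix above is purely a version floor. A small illustrative sketch (a hypothetical helper, not part of npm or WhiteSource tooling) shows the comparison against the fixed release 7.1.1 as dotted-integer tuples:

```python
def parse_version(v: str) -> tuple:
    """Split a dotted version string like '7.1.1' into an integer tuple."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str, fixed: str = "7.1.1") -> bool:
    """True if the installed acorn version is at or above the fix release."""
    return parse_version(installed) >= parse_version(fixed)

# Both vulnerable versions from this report fail the check:
for v in ("6.0.4", "5.7.3", "7.1.1", "7.4.0"):
    print(v, is_patched(v))
```

Note this naive tuple comparison ignores pre-release tags; a real dependency tree would be checked with a semver-aware tool such as `npm audit`.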
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
10,993 | 13,785,995,300 | IssuesEvent | 2020-10-09 00:27:24 | cbrennanpoole/Qualitative-Self | https://api.github.com/repos/cbrennanpoole/Qualitative-Self | closed | On C Brennan Poole Inaugural Year at Saia Inc | Creative Strategy IT Gates The State Way process implementation with Wind | 
# On C Brennan Poole : Service at Saia | 2010
---
`
Creative Strategy with Wind LLC
`
<br>
## WHAT IS THIS ANNOTATION :
Snippet of Saia LTL Freight Inc 2010 Annual Report & Letter to Shareholders (the supposed owners of the corprotocracy model we sit neck-deep in the throes of as a broken (divided-by) states of US in uncertain times).
> Writer and fmr Chief : Rick O'dell
### Position :
`
Outbound AtlOps
`
<br>
#### Responsibilities
`
a few : comprehensive on build
`
<br>
1. Lead team helping to establish Key Performance Indicators (KPIs) of importance to AtlDockOps.
### SAIA CUSTOMER SERVICE INDICATORS
`
CSIs : not Horatio; I knew you were thinking that. See we make a great team right?
`
<br>
namesake (AtlDockOps) authored by Creative-Ops-Strategist : cbrennanpoole <br>
- (i.e. me as I don't like formal, loathe it since we are all ethical and honest here right?)
2. Foster a collaborative and enjoyable - *yes -**fun**- enjoyable, **smiling**, kidding, **safety-minded** always smile-on* - work environment truly free of work hazards.<br>
- plus political or learned trait hate barriers-to-entry (total eradication thereof).<br>
---
`
safety is always first when attempting to affect change on an enterprise (service - centered operational or manufacturing ) level.
`
---
3. Develop scale - capable workflow process while nurturing the all-but-missing Team environment our newfound Service Center Manager [Barak Frydman : Great Coach](https://linkedin.com/in/BarakFrydman "maybe a 404 : need to what ? debug? or QA?") sold us (eventually US all is in singular-unit see below) on.
##### More on AtlDockOps :
- after I cbrennanpoole aka oudcollective would be the 1st to follow-our-leader (rightfully so and in this instance, as with every team, especially in organizational dynamics, a team needs a trusted leader; Barak was a consummate leader, awesome mentor, and one of very few on earth who fail to prescribe to unconscious bias more on the rant to come - i suppose maybe).
### Team - centric approach
`
offered by Coach Frydman
`
<br>
It took right at 5 months from hire to gain expertise (training in real-time 24/7 LTL freight, the freight that fills our interstates and fuels the big rigs up and down the highway system, takes anyone that long to master; proficiency is more aligned with a 90-day learning curve).
- The average curve (when I was OM) we prescribed to incoming management hires was 6 months to full (fair - for - all) expectations, and solo supervisory duties took place at 90 days (if necessary, which it always was with the total lack of foresight the US corporate culture model continues to follow; most especially IT Gates. How many apprenticeships are available right now in US for REMOTE, IT, APPRENTICE? NADA...). My recruit and team lead (he was Senior to me and next in line for promotion mind you) was RayRob - as I called him.
## On Ray
`
tellin' ya - big ole boy now; don't say I didn't warn you when he sneaks up on you & Debo's your a$$; ok...
`
<br>
[Raymond Roberts Blake](https://coastalcourier.com/sports/hall-of-fame-2007-raymond-roberts-blake/ "HUGE y'all huge Hall-of-fame key of city holding huge; actionable mentor right? right.. Ray is huge. Great man. Can drink like a fish (I had since quit; again thank Jesus. Better friend all muscle too. Starting Tackle with early 00's Georgia Institute of Technology Ramblin' Wreck aka The Yellow Jackets my alma mater's neighbor as well; having been a graduate of Georgia State in the same time line but not meeting until 08 at Saia actually we met my Senior Year as he would come to my roommates crazy Braves Game gatherings while living in Summer Hill. Hey, might as well give you play-by-play until someone git(s) actionable, as I can craft anything your heart desires : as a hongry, woke, remote - experience with Amazon Logistics leader and developer of people plus programs.")
### TANGENTIAL STATS
`
since you asked
`
<br>
#### By Ray's Numbers
`
RayRob-Blake : if you know what's best for ya and don't say I don't warn ya as he's a big ole boy now.
`
---
1. [Roberts - Blake](https://coastalcourier.com/sports/hall-of-fame-2007-raymond-roberts-blake/ "RayRob is the only black -or white -and btw- from henceforth black is colorful in my home as this is an extension of my home : you need some slap yo mama sweet tea? we champion it! Trick is in the 'tetley' Georgia Grown too... ok... ok... ok... Ray is the first man (or woman, yeah they are true Peaches down home; that I've ever met who could not just chew more snuff than myself, -since quit; Thank Jesus!-, but I'm talking like - all-but freaking eat 3 or 4 cans a night; then come the grind that is a Friday in Outbound or Inbound LTL - about a 14-20 neverending last day of the week nightmare that is the industry - and will beat you down - did Ray, he still in-industry however - as with me - I'd break my back after helping champion the team as the facilities Operations Manager -with Ray leaving to go be OM at OD - that's Old Dominion Freight Line - and yeah - they're bigger, badder, and better than Saia for sure. Ole RayRob on Fridays; he'd take to getting the scraps from mine after making sure he checked the 20 cans he compiled the previous 4 nights; for any residual delicious wet tobacco leaves of cherry, berry, -my favorite was- Green Apple Long-Cut : Skoal; what's your favorite flavor of wet tobacco leaves with an artificial fruit flavoring in a non-renewable plastic is killing our Mother Earth containers hear-here? For those in the angered states causation : racial tensions : TRUTHcausation : Learned Trait Hate and Unconscious Bias-, I bring up color as Ray is also a country sum-bitch from Liberty -Deep South y'all- County Georgia not unlike my Isabell-e-a-n self from the farm and Deep Deeply broken South that is Southern US. If race, hate, and unconscious bias, Georgia and South US is comparable to only Nazism Germany on the altogether wilfully ignorant richter scale we should partner to program to provide to those very neighbors and family of mine. 
He and I truly see color - less : saying this confidently since I do have a soon-to-be three year old beautiful -future is female- proud to-be a color-ful woman for a daughter. And he almost married a latin + color-less before they'd call it off, so yeah - I get to touch on the topic a bit as it sickens me as a country as it comes to - truly woke though - color-less living in COVID-ington, Gawja.") chosen as an honorable mention All-American by USA Today.
2. Atlanta Journal-Constitution chose him as one of the top 50 prospects in Georgia.
3. SuperPrep postseason magazine named him the 69th prospect in Dixie.
4. 9.25 rating by **Deep South** Magazine (Deep Deep South Y'all).
5. Roberts-Blake received honorable mention for offensive lineman : SEC region : Prep Star magazine | post-season edition.
- 30 pancake blocks as a Junior
- AAAA all-state honorable mention
- **3-AAAA lineman of the year**.
6. All-Coastal Empire team by The Savannah Daily News.
**Ray's Georgia Tech Days**<br>
7. Named to Tech's second-team right guard position in '99 with four games off the bench (no easy feat).
8. 2000 Season : started every game playing left guard;
- Georgia Institute of Technology ranking that season : 17 at 9-3 [Wikipedia : with a Creative Commons attribution BY-SA](https://en.wikipedia.org/wiki/2000_Georgia_Tech_Yellow_Jackets_football_team).<br>
9. Top 20 Team Stats
- passing offense and
- scoring offense, as well as the
- top three in the ACC in rushing.<br>
9. Roberts - Blake was member of the (internal unit within the) team of offensive line that led the ACC in fewest sacks allowed with 16, including
- 14 by the starting five and all-but-always the closest ( unit within the ) team members.
10. In 2001 : seven game starter at LG.
11. 2002 : started every game at RG and
- Elected (one of ) The Georgia Institute of Technology permanent Team Captain.
#### As for C
`
yes - we bob & weave - butterfly weave with stinger for a GT -piss on 'em- kicker
`
<br>
> I know unconscious bias intimately.
- Unconscious bias hit me (and lucky for my daughter she would be too young to recognize the hate filling the rooms and lingering in its toxicity the way it does in her breathe-less (for daddy -please lil' girl) atmosphere), **center-mass** around Feb 2020 when she came home (to Daddy's color-less side) and would take to visiting family later that week.
- Just days before global pandemic which would lead US right into the current learned trait hate uncertain US states of stagnation we sit in.
> That I literally thrive in.
- LITERALLY.
- As I literally thrive in high - stress, fast - paced.
> The switch (thrive-switch) flipped Thursday 2 July, 2020
- Who wants to champion this decade with Wind? All-in?
- Let's [ GiT.actionable@chasingthewindllc.com ] already! <br>
- Dial.In (iOS, SMS, Call me) @ : +1 678-338-7339
```
{
recessions thrive off confusion helping fuel imminent depressions, right?;
}
```
### TANGENTIAL
`
(continued from above)
`
<br>
---
George Floyd Murder, Police State [Chasing the Wind ever-so-popular preview of the preamble-project on YouTube](https://www.youtube.com/watch?v=VdqwaqCxjW4 "a first ever attempt at anything digitally creative 10 Jan, 2020 made for 'chasing' the Wind - only if this one is esoteric - hence the 3rd line right? and only if you seek the Verily Verily Truth #TRUTHcausation - if not - call me crazy - but don't call me a liar as metadata is all that matters. right? Well mines freely available - let's mine it!")
3. (cont'd) to assist in implementing operations to build the foundation for the imminent success I absolutely aimed to author. :wink: #BigFacts with a #TRUTHcausation to boot.
LINKS :
1. [Saia Inc Comprehensive List of Annual Reports and Letter to Shareholders](https://www.saia.com/about-us/investor-relations/annual-reports "feel free to help vet : please?")
2. [Saia LTL 2010 Annual Report : original pdf file](https://assets.ctfassets.net/kjv8gs2ccggb/10ARdpUdSOxzD5w63IDiPS/a90d2d33068abcbf1e2ba9bc0380f7b4/Saia_2010_Annual_Report_and_Form_10-K.pdf "By the time you get to 2014 maybe it will be easier to see how a man would literally break his back carrying an international corporation out of a Great Recession into an insane run of profitability right into the present where we sit; the whole-earth pandemic")
---
| 1.0 | On C Brennan Poole Inaugural Year at Saia Inc - 
# On C Brennan Poole : Service at Saia | 2010
---
`
Creative Strategy with Wind LLC
`
<br>
## WHAT IS THIS ANNOTATION :
Snippet of Saia LTL Freight Inc 2010 Annual Report & Letter to Shareholders' (the supposed owners of the corprotocracy model we sit neck deep in the throes of as a broken (divided by) states of US uncertain times.
> Writer and fmr Chief : Rick O'dell
### Position :
`
Outbound AtlOps
`
<br>
#### Responsibilities
`
a few : comprehensive on build
`
<br>
1. Lead team helping to establish Key Performance Indicators (KPIs) of importance to AtlDockOps.
### SAIA CUSTOMER SERVICE INDICATORS
`
CSIs : not Horatio; I knew you were thinking that. See we make a great team right?
`
<br>
namesake (AtlDockOps) authored by Creative-Ops-Strategist : cbrennanpoole <br>
- (i.e. me as I don't like formal, loathe it since we are all ethical and honest here right?)
2. Foster a collaborative and enjoyable - *yes -**fun**- enjoyable, **smiling**, kidding, **safety-minded** always smile-on* - work environment truly free of work hazards.<br>
- plus political or learned trait hate barriers-to-entry (total eradication thereof).<br>
---
`
safety is always first when attempting to affect change on an enterprise (service - centered operational or manufacturing ) level.
`
---
3. Develop scale - capable workflow process while nurturing the all-but-missing Team environment our newfound Service Center Manager [Barak Frydman : Great Coach](https://linkedin.com/in/BarakFrydman "maybe a 404 : need to what ? debug? or QA?") sold us (eventually US all is in singular-unit see below) on.
##### More on AtlDockOps :
- after I cbrennanpoole aka oudcollective would be the 1st to follow-our-leader (rightfully so and in this instance, as with every team, especially in organizational dynamics, a team needs a trusted leader; Barak was a consummate leader, awesome mentor, and one of very few on earth who fail to prescribe to unconscious bias more on the rant to come - i suppose maybe).
### Team - centric approach
`
offered by Coach Frydman
`
<br>
It took right at 5 months from hire to gain expertise (training in a real-time 24/7 LTL freight fills our interstates and fuels the big rigs up and down the highway system for anyone to gain expertise; proficency is more align with a 90 day learning curve.
- The average (curve : when I was OM ) we prescribed to incoming management hires was 6-months to full (fair - for - all ) expectations and solo supervisory duties took place at 90 (if necessary - which it always was with the total lack of foresight US corporate culture model continues to follow; most especially IT Gates, most especially - how many apprenticeships are available right now in US for REMOTE, IT, APPRENTICE? NADA... My recruit and team lead (he was Senior to me and next inline for promotion mind you) was RayRob - as I called him.
## On Ray
`
tellin' ya - big ole boy now; don't say I didn't warn you when he sneaks up on you & Debo's your a$$; ok...
`
<br>
[Raymond Roberts Blake](https://coastalcourier.com/sports/hall-of-fame-2007-raymond-roberts-blake/ "HUGE y'all huge Hall-of-fame key of city holding huge; actionable mentor right? right.. Ray is huge. Great man. Can drink like a fish (I had since quit; again thank Jesus. Better friend all muscle too. Starting Tackle with early 00's Georgia Institute of Technology Ramblin' Wreck aka The Yellow Jackets my alma mater's neighbor as well; having been a graduate of Georgia State in the same time line but not meeting until 08 at Saia actually we met my Senior Year as he would come to my roommates crazy Braves Game gatherings while living in Summer Hill. Hey, might as well give you play-by-play until someone git(s) actionable, as I can craft anything your heart desires : as a hongry, woke, remote - experience with Amazon Logistics leader and developer of people plus programs.")
### TANGENTIAL STATS
`
since you asked
`
<br>
#### By Ray's Numbers
`
RayRob-Blake : if you know what's best for ya and don't say I don't warn ya as he's a big ole boy now.
`
---
1. [Roberts - Blake](https://coastalcourier.com/sports/hall-of-fame-2007-raymond-roberts-blake/ "RayRob is the only black -or white -and btw- from henceforth black is colorful in my home as this is an extension of my home : you need some slap yo mama sweet tea? we champion it! Trick is in the 'tetley' Georgia Grown too... ok... ok... ok... Ray is the first man (or woman, yeah they are true Peaches down home; that I've ever met who could not just chew more snuff than myself, -since quit; Thank Jesus!-, but I'm talking like - all-but freaking eat 3 or 4 cans a night; then come the grind that is a Friday in Outbound or Inbound LTL - about a 14-20 neverending last day of the week nightmare that is the industry - and will beat you down - did Ray, he still in-industry however - as with me - I'd break my back after helping champion the team as the facilities Operations Manager -with Ray leaving to go be OM at OD - that's Old Dominion Freight Line - and yeah - they're bigger, badder, and better than Saia for sure. Ole RayRob on Fridays; he'd take to getting the scraps from mine after making sure he checked the 20 cans he compiled the previous 4 nights; for any residual delicious wet tobacco leaves of cherry, berry, -my favorite was- Green Apple Long-Cut : Skoal; what's your favorite flavor of wet tobacco leaves with an artificial fruit flavoring in a non-renewable plastic is killing our Mother Earth containers hear-here? For those in the angered states causation : racial tensions : TRUTHcausation : Learned Trait Hate and Unconscious Bias-, I bring up color as Ray is also a country sum-bitch from Liberty -Deep South y'all- County Georgia not unlike my Isabell-e-a-n self from the farm and Deep Deeply broken South that is Southern US. If race, hate, and unconscious bias, Georgia and South US is comparable to only Nazism Germany on the altogether wilfully ignorant richter scale we should partner to program to provide to those very neighbors and family of mine. 
He and I truly see color - less : saying this confidently since I do have a soon-to-be three year old beautiful -future is female- proud to-be a color-ful woman for a daughter. And he almost married a latin + color-less before they'd call it off, so yeah - I get to touch on the topic a bit as it sickens me as a country as it comes to - truly woke though - color-less living in COVID-ington, Gawja.") chosen as an honorable mention All-American by USA Today.
2. Atlanta Journal-Constitution chose him as one of the top 50 prospects in Georgia.
3. SuperPrep postseason magazine named him the 69th prospect in Dixie.
4. 9.25 rating by **Deep South** Magazine (Deep Deep South Y'all).
5. Roberts-Blake received honorable mention for offensive lineman : SEC region : Prep Star magazine | post-season edition.
- 30 pancake blocks as a Junior
- AAAA all-state honorable mention
- **3-AAAA lineman of the year**.
6. All-Coastal Empire team by The Savannah Daily News.
**Ray's Georgia Tech Days**<br>
7. Named to Tech's second-team right guard position in '99 with four games off the bench (no easy feat).
8. 2000 Season : started every game playing left guard;
- Georgia Institute of Technology ranking that season : 17 at 9-3 [Wikipedia : with a Creative Commons attribution BY-SA](https://en.wikipedia.org/wiki/2000_Georgia_Tech_Yellow_Jackets_football_team).<br>
9. Top 20 Team Stats
- passing offense and
- scoring offense, as well as the
- top three in the ACC in rushing.<br>
10. Roberts - Blake was a member of the (internal unit within the) team of offensive line that led the ACC in fewest sacks allowed with 16, including
- 14 by the starting five and all-but-always the closest (unit within the) team members.
11. In 2001 : seven game starter at LG.
12. 2002 : started every game at RG and
- Elected (one of) The Georgia Institute of Technology permanent Team Captains.
#### As for C
`
yes - we bob & weave - butterfly weave with stinger for a GT -piss on 'em- kicker
`
<br>
> I know unconscious bias intimately.
- Unconscious bias hit me (and lucky for my daughter she would be too young to recognize the hate-filling the rooms and lingering in its toxicity the way it does in her breathe-less (for daddy -please lil' girl) atmosphere), **center-mass** around Feb 2020 when she came home (to Daddy's color-less side) and would take to visiting family later that week.
- Just days before global pandemic which would lead US right into the current learned trait hate uncertain US states of stagnation we sit in.
> That I literally thrive in.
- LITERALLY.
- As I literally thrive in high - stress, fast - paced.
> The switch (thrive-switch) flipped Thursday 2 July, 2020
- Who wants to champion this decade with Wind? All-in?
- Let's [ GiT.actionable@chasingthewindllc.com ] already! <br>
- Dial.In (iOS, SMS, Call me) @ : +1 678-338-7339
```
{
recessions thrive off confusion helping fuel imminent depressions, right?;
}
```
### TANGENTIAL
`
(continued from above)
`
<br>
---
George Floyd Murder, Police State [Chasing the Wind ever-so-popular preview of the preamble-project on YouTube](https://www.youtube.com/watch?v=VdqwaqCxjW4 "a first ever attempt at anything digitally creative 10 Jan, 2020 made for 'chasing' the Wind - only if this one is esoteric - hence the 3rd line right? and only if you seek the Verily Verily Truth #TRUTHcausation - if not - call me crazy - but don't call me a liar as metadata is all that matters. right? Well mines freely available - let's mine it!")
3. (cont'd) to assist in implementation operations to build the foundation for the imminent success I absolutely aimed to author. :wink: #BigFacts with a #TRUTHcausation to boot.
LINKS :
1. [Saia Inc Comprehensive List of Annual Reports and Letter to Shareholders](https://www.saia.com/about-us/investor-relations/annual-reports "feel free to help vet : please?")
2. [Saia LTL 2010 Annual Report : original pdf file](https://assets.ctfassets.net/kjv8gs2ccggb/10ARdpUdSOxzD5w63IDiPS/a90d2d33068abcbf1e2ba9bc0380f7b4/Saia_2010_Annual_Report_and_Form_10-K.pdf "By the time you get to 2014 maybe it will be easier to see how a man would literally break his back carrying an international corporation out of a Great Recession into an insane run of profitability right into the present where we sit; the whole - earth pandemic")
---
**Source URL**:
[https://assets.ctfassets.net/kjv8gs2ccggb/10ARdpUdSOxzD5w63IDiPS/a90d2d33068abcbf1e2ba9bc0380f7b4/Saia_2010_Annual_Report_and_Form_10-K.pdf](https://assets.ctfassets.net/kjv8gs2ccggb/10ARdpUdSOxzD5w63IDiPS/a90d2d33068abcbf1e2ba9bc0380f7b4/Saia_2010_Annual_Report_and_Form_10-K.pdf)
<table><tr><td><strong>Browser</strong></td><td>Chrome 84.0.4147.68</td></tr><tr><td><strong>OS</strong></td><td>Windows 10 64-bit</td></tr><tr><td><strong>Screen Size</strong></td><td>1920x1080</td></tr><tr><td><strong>Viewport Size</strong></td><td>1920x937</td></tr><tr><td><strong>Pixel Ratio</strong></td><td>@1x</td></tr><tr><td><strong>Zoom Level</strong></td><td>100%</td></tr></table>
| process | 1
18,561 | 24,555,685,322 | IssuesEvent | 2022-10-12 15:41:36 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Mobileapps] Getting an error message in consent signature screen in enrollment flow | Blocker P0 iOS Android Process: Fixed Process: Tested dev | **Steps:**
1. Sign in / Sign up
2. Click on any study and complete the eligibility flow
3. Navigate to consent signature, click on the 'Next' button and verify
**AR:** Getting an error message in consent signature screen in enrollment flow
**ER:** Enrollment flow should be completed by the participant without any errors

| 2.0 | process | 1
12,536 | 14,972,462,981 | IssuesEvent | 2021-01-27 22:54:40 | BootBlock/FileSieve | https://api.github.com/repos/BootBlock/FileSieve | closed | Allow initial file scanning to be paused/stopped | processing ui | Due to various changes to the core of FileSieve, the initial file scanning time can take longer than prior versions; because of this, the `Pause` and `Stop` buttons should work during the scanning. I thought it did, but apparently, that doesn't look like the case. | 1.0 | process | 1
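The FileSieve issue above asks that the Pause and Stop buttons take effect during the initial scan. A minimal sketch of a scan loop that checks cancellation flags between items — illustrated in Python with invented names (`scan_files`, the event objects), not FileSieve's actual implementation:

```python
import threading
import time

def scan_files(paths, stop_event, pause_event):
    """Scan loop that honors Stop/Pause flags between items (hypothetical sketch)."""
    scanned = []
    for path in paths:
        # Paused: idle until resumed, but still let Stop break out.
        while pause_event.is_set() and not stop_event.is_set():
            time.sleep(0.01)
        if stop_event.is_set():
            break  # Stopped: abandon the remaining paths immediately.
        scanned.append(path)  # Stand-in for the real per-file work.
    return scanned

stop_event, pause_event = threading.Event(), threading.Event()
full_scan = scan_files(["a.txt", "b.txt", "c.txt"], stop_event, pause_event)

stop_event.set()  # Simulate the user clicking Stop before a second scan.
aborted_scan = scan_files(["a.txt", "b.txt"], stop_event, pause_event)
```

In a GUI, the scan would run on a worker thread while the Pause/Stop buttons set the corresponding events from the UI thread, so the flags are observed on the very next iteration rather than only after the whole scan finishes.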
15,532 | 19,703,296,758 | IssuesEvent | 2022-01-12 18:54:19 | googleapis/python-monitoring-dashboards | https://api.github.com/repos/googleapis/python-monitoring-dashboards | opened | Your .repo-metadata.json file has a problem ๐ค | type: process repo-metadata: lint | You have a problem with your .repo-metadata.json file:
Result of scan:
* api_shortname 'monitoring-dashboards' invalid in .repo-metadata.json
Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions. | 1.0 | process | 1
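The lint failure above boils down to a single field check: `api_shortname` must match a known API identifier. A rough sketch of that kind of validation — the real checker lives in Google's github-automation tooling and is not shown here, so the function name and the known-shortnames set are assumptions:

```python
import json

def lint_api_shortname(metadata_text, known_shortnames):
    """Return a list of problems found in a .repo-metadata.json document."""
    meta = json.loads(metadata_text)
    shortname = meta.get("api_shortname", "")
    problems = []
    if shortname not in known_shortnames:
        problems.append(
            f"api_shortname '{shortname}' invalid in .repo-metadata.json"
        )
    return problems

# 'monitoring-dashboards' is not a registered shortname; 'monitoring' is.
doc = '{"api_shortname": "monitoring-dashboards"}'
problems = lint_api_shortname(doc, known_shortnames={"monitoring"})
```

The fix for the issue is then editing `.repo-metadata.json` so the field matches a registered shortname, after which the scan reports no problems.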
304,249 | 9,329,423,144 | IssuesEvent | 2019-03-28 02:19:15 | kjohnsen/MMAPPR2 | https://api.github.com/repos/kjohnsen/MMAPPR2 | opened | Filter out non-peaks | complexity-medium priority-medium | If no significant peak is detected, the current strategy produces a low threshold (3*standard deviation + median), which in turn gives us a bunch of garbage peaks. It would be nice to avoid this. | 1.0 | non_process | 0
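The threshold described in the MMAPPR2 issue above (3 * standard deviation + median) can be computed directly; a minimal sketch, assuming the population standard deviation as one plausible reading of the strategy:

```python
from statistics import median, pstdev

def peak_threshold(scores, k=3):
    """Threshold as described in the issue: k * standard deviation + median."""
    return k * pstdev(scores) + median(scores)

# With no real peak the spread is tiny, so the threshold collapses toward
# the median and ordinary noise clears it -- the garbage-peak failure mode.
flat_scores = [1.0, 1.0, 1.0, 1.0]
threshold = peak_threshold(flat_scores)  # std dev is 0, so threshold == median
```

This illustrates why a flat signal needs an extra guard (e.g. a minimum absolute threshold) before anything crossing the line is reported as a peak.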
14,985 | 18,524,076,380 | IssuesEvent | 2021-10-20 18:13:13 | googleapis/python-bigtable | https://api.github.com/repos/googleapis/python-bigtable | closed | 'test_instance_create_w_two_clusters' systest flakes with 503 | api: bigtable type: process | From [this failed systest build](https://source.cloud.google.com/results/invocations/924fbb8c-f387-482d-8767-c16abd2de9e2/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigtable%2Fpresubmit%2Fsystem-3.8/log):
```python
_____________________ test_instance_create_w_two_clusters ______________________
args = (parent: "projects/precise-truck-742/instances/dif-1634667466967"
table_id: "test-get-cluster-states"
table {
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/dif-1634667466967'), ('x-goog-api-client', 'gl-python/3.8.12 grpc/1.41.0 gax/2.1.1')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f4d5eecd100>
request = parent: "projects/precise-truck-742/instances/dif-1634667466967"
table_id: "test-get-cluster-states"
table {
}
timeout = None
metadata = [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/dif-1634667466967'), ('x-goog-api-client', 'gl-python/3.8.12 grpc/1.41.0 gax/2.1.1')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:946:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f4d5ee8b610>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f4d5ed214c0>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.UNAVAILABLE
E details = "The service is currently unavailable."
E debug_error_string = "{"created":"@1634667658.122902172","description":"Error received from peer ipv4:74.125.197.95:443","file":"src/core/lib/surface/call.cc","file_line":1069,"grpc_message":"The service is currently unavailable.","grpc_status":14}"
E >
.nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
admin_client = <google.cloud.bigtable.client.Client object at 0x7f4d5ef80e50>
unique_suffix = '-1634667466967'
admin_instance_populated = <google.cloud.bigtable.instance.Instance object at 0x7f4d5eef10a0>
admin_cluster = <google.cloud.bigtable.cluster.Cluster object at 0x7f4d5eef1160>
location_id = 'us-central1-c'
instance_labels = {'python-system': '2021-10-19t18-17-46'}
instances_to_delete = [<google.cloud.bigtable.instance.Instance object at 0x7f4d5ee5cc70>]
skip_on_emulator = None
def test_instance_create_w_two_clusters(
admin_client,
unique_suffix,
admin_instance_populated,
admin_cluster,
location_id,
instance_labels,
instances_to_delete,
skip_on_emulator,
):
alt_instance_id = f"dif{unique_suffix}"
instance = admin_client.instance(
alt_instance_id,
instance_type=enums.Instance.Type.PRODUCTION,
labels=instance_labels,
)
serve_nodes = 1
alt_cluster_id_1 = f"{alt_instance_id}-c1"
cluster_1 = instance.cluster(
alt_cluster_id_1,
location_id=location_id,
serve_nodes=serve_nodes,
default_storage_type=enums.StorageType.HDD,
)
alt_cluster_id_2 = f"{alt_instance_id}-c2"
location_id_2 = "us-central1-f"
cluster_2 = instance.cluster(
alt_cluster_id_2,
location_id=location_id_2,
serve_nodes=serve_nodes,
default_storage_type=enums.StorageType.HDD,
)
operation = instance.create(clusters=[cluster_1, cluster_2])
instances_to_delete.append(instance)
operation.result(timeout=120) # Ensure the operation completes.
# Create a new instance instance and make sure it is the same.
instance_alt = admin_client.instance(alt_instance_id)
instance_alt.reload()
assert instance == instance_alt
assert instance.display_name == instance_alt.display_name
assert instance.type_ == instance_alt.type_
clusters, failed_locations = instance_alt.list_clusters()
assert failed_locations == []
alt_cluster_1, alt_cluster_2 = sorted(clusters, key=lambda x: x.name)
assert cluster_1.location_id == alt_cluster_1.location_id
assert alt_cluster_1.state == enums.Cluster.State.READY
assert cluster_1.serve_nodes == alt_cluster_1.serve_nodes
assert cluster_1.default_storage_type == alt_cluster_1.default_storage_type
assert cluster_2.location_id == alt_cluster_2.location_id
assert alt_cluster_2.state == enums.Cluster.State.READY
assert cluster_2.serve_nodes == alt_cluster_2.serve_nodes
assert cluster_2.default_storage_type == alt_cluster_2.default_storage_type
# Test list clusters in project via 'client.list_clusters'
clusters, failed_locations = admin_client.list_clusters()
assert not failed_locations
found = set([cluster.name for cluster in clusters])
expected = {alt_cluster_1.name, alt_cluster_2.name, admin_cluster.name}
assert expected.issubset(found)
temp_table_id = "test-get-cluster-states"
temp_table = instance.table(temp_table_id)
> _helpers.retry_grpc_unavailable(temp_table.create)()
tests/system/test_instance_admin.py:272:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-8/lib/python3.8/site-packages/test_utils/retry.py:100: in wrapped_function
return to_wrap(*args, **kwargs)
google/cloud/bigtable/table.py:402: in create
table_client.create_table(
google/cloud/bigtable_admin_v2/services/bigtable_table_admin/client.py:543: in create_table
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:142: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/precise-truck-742/instances/dif-1634667466967"
table_id: "test-get-cluster-states"
table {
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/dif-1634667466967'), ('x-goog-api-client', 'gl-python/3.8.12 grpc/1.41.0 gax/2.1.1')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
return callable_(*args, **kwargs)
except grpc.RpcError as exc:
> raise exceptions.from_grpc_error(exc) from exc
E google.api_core.exceptions.ServiceUnavailable: 503 The service is currently unavailable.
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:68: ServiceUnavailable
``` | 1.0 | 'test_instance_create_w_two_clusters' systest flakes with 503 - From [this failed systest build](https://source.cloud.google.com/results/invocations/924fbb8c-f387-482d-8767-c16abd2de9e2/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-bigtable%2Fpresubmit%2Fsystem-3.8/log):
```python
_____________________ test_instance_create_w_two_clusters ______________________
args = (parent: "projects/precise-truck-742/instances/dif-1634667466967"
table_id: "test-get-cluster-states"
table {
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/dif-1634667466967'), ('x-goog-api-client', 'gl-python/3.8.12 grpc/1.41.0 gax/2.1.1')]}
@functools.wraps(callable_)
def error_remapped_callable(*args, **kwargs):
try:
> return callable_(*args, **kwargs)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <grpc._channel._UnaryUnaryMultiCallable object at 0x7f4d5eecd100>
request = parent: "projects/precise-truck-742/instances/dif-1634667466967"
table_id: "test-get-cluster-states"
table {
}
timeout = None
metadata = [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/dif-1634667466967'), ('x-goog-api-client', 'gl-python/3.8.12 grpc/1.41.0 gax/2.1.1')]
credentials = None, wait_for_ready = None, compression = None
def __call__(self,
request,
timeout=None,
metadata=None,
credentials=None,
wait_for_ready=None,
compression=None):
state, call, = self._blocking(request, timeout, metadata, credentials,
wait_for_ready, compression)
> return _end_unary_response_blocking(state, call, False, None)
.nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:946:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
state = <grpc._channel._RPCState object at 0x7f4d5ee8b610>
call = <grpc._cython.cygrpc.SegregatedCall object at 0x7f4d5ed214c0>
with_call = False, deadline = None
def _end_unary_response_blocking(state, call, with_call, deadline):
if state.code is grpc.StatusCode.OK:
if with_call:
rendezvous = _MultiThreadedRendezvous(state, call, None, deadline)
return state.response, rendezvous
else:
return state.response
else:
> raise _InactiveRpcError(state)
E grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
E status = StatusCode.UNAVAILABLE
E details = "The service is currently unavailable."
E debug_error_string = "{"created":"@1634667658.122902172","description":"Error received from peer ipv4:74.125.197.95:443","file":"src/core/lib/surface/call.cc","file_line":1069,"grpc_message":"The service is currently unavailable.","grpc_status":14}"
E >
.nox/system-3-8/lib/python3.8/site-packages/grpc/_channel.py:849: _InactiveRpcError
The above exception was the direct cause of the following exception:
admin_client = <google.cloud.bigtable.client.Client object at 0x7f4d5ef80e50>
unique_suffix = '-1634667466967'
admin_instance_populated = <google.cloud.bigtable.instance.Instance object at 0x7f4d5eef10a0>
admin_cluster = <google.cloud.bigtable.cluster.Cluster object at 0x7f4d5eef1160>
location_id = 'us-central1-c'
instance_labels = {'python-system': '2021-10-19t18-17-46'}
instances_to_delete = [<google.cloud.bigtable.instance.Instance object at 0x7f4d5ee5cc70>]
skip_on_emulator = None
def test_instance_create_w_two_clusters(
admin_client,
unique_suffix,
admin_instance_populated,
admin_cluster,
location_id,
instance_labels,
instances_to_delete,
skip_on_emulator,
):
alt_instance_id = f"dif{unique_suffix}"
instance = admin_client.instance(
alt_instance_id,
instance_type=enums.Instance.Type.PRODUCTION,
labels=instance_labels,
)
serve_nodes = 1
alt_cluster_id_1 = f"{alt_instance_id}-c1"
cluster_1 = instance.cluster(
alt_cluster_id_1,
location_id=location_id,
serve_nodes=serve_nodes,
default_storage_type=enums.StorageType.HDD,
)
alt_cluster_id_2 = f"{alt_instance_id}-c2"
location_id_2 = "us-central1-f"
cluster_2 = instance.cluster(
alt_cluster_id_2,
location_id=location_id_2,
serve_nodes=serve_nodes,
default_storage_type=enums.StorageType.HDD,
)
operation = instance.create(clusters=[cluster_1, cluster_2])
instances_to_delete.append(instance)
operation.result(timeout=120) # Ensure the operation completes.
# Create a new instance instance and make sure it is the same.
instance_alt = admin_client.instance(alt_instance_id)
instance_alt.reload()
assert instance == instance_alt
assert instance.display_name == instance_alt.display_name
assert instance.type_ == instance_alt.type_
clusters, failed_locations = instance_alt.list_clusters()
assert failed_locations == []
alt_cluster_1, alt_cluster_2 = sorted(clusters, key=lambda x: x.name)
assert cluster_1.location_id == alt_cluster_1.location_id
assert alt_cluster_1.state == enums.Cluster.State.READY
assert cluster_1.serve_nodes == alt_cluster_1.serve_nodes
assert cluster_1.default_storage_type == alt_cluster_1.default_storage_type
assert cluster_2.location_id == alt_cluster_2.location_id
assert alt_cluster_2.state == enums.Cluster.State.READY
assert cluster_2.serve_nodes == alt_cluster_2.serve_nodes
assert cluster_2.default_storage_type == alt_cluster_2.default_storage_type
# Test list clusters in project via 'client.list_clusters'
clusters, failed_locations = admin_client.list_clusters()
assert not failed_locations
found = set([cluster.name for cluster in clusters])
expected = {alt_cluster_1.name, alt_cluster_2.name, admin_cluster.name}
assert expected.issubset(found)
temp_table_id = "test-get-cluster-states"
temp_table = instance.table(temp_table_id)
> _helpers.retry_grpc_unavailable(temp_table.create)()
tests/system/test_instance_admin.py:272:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.nox/system-3-8/lib/python3.8/site-packages/test_utils/retry.py:100: in wrapped_function
return to_wrap(*args, **kwargs)
google/cloud/bigtable/table.py:402: in create
table_client.create_table(
google/cloud/bigtable_admin_v2/services/bigtable_table_admin/client.py:543: in create_table
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py:142: in __call__
return wrapped_func(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (parent: "projects/precise-truck-742/instances/dif-1634667466967"
table_id: "test-get-cluster-states"
table {
}
,)
kwargs = {'metadata': [('x-goog-request-params', 'parent=projects/precise-truck-742/instances/dif-1634667466967'), ('x-goog-api-client', 'gl-python/3.8.12 grpc/1.41.0 gax/2.1.1')]}
    @functools.wraps(callable_)
    def error_remapped_callable(*args, **kwargs):
        try:
            return callable_(*args, **kwargs)
        except grpc.RpcError as exc:
>           raise exceptions.from_grpc_error(exc) from exc
E           google.api_core.exceptions.ServiceUnavailable: 503 The service is currently unavailable.
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/grpc_helpers.py:68: ServiceUnavailable
``` | process | test instance create w two clusters systest flakes with from python test instance create w two clusters args parent projects precise truck instances dif table id test get cluster states table kwargs metadata functools wraps callable def error remapped callable args kwargs try return callable args kwargs nox system lib site packages google api core grpc helpers py self request parent projects precise truck instances dif table id test get cluster states table timeout none metadata credentials none wait for ready none compression none def call self request timeout none metadata none credentials none wait for ready none compression none state call self blocking request timeout metadata credentials wait for ready compression return end unary response blocking state call false none nox system lib site packages grpc channel py state call with call false deadline none def end unary response blocking state call with call deadline if state code is grpc statuscode ok if with call rendezvous multithreadedrendezvous state call none deadline return state response rendezvous else return state response else raise inactiverpcerror state e grpc channel inactiverpcerror inactiverpcerror of rpc that terminated with e status statuscode unavailable e details the service is currently unavailable e debug error string created description error received from peer file src core lib surface call cc file line grpc message the service is currently unavailable grpc status e nox system lib site packages grpc channel py inactiverpcerror the above exception was the direct cause of the following exception admin client unique suffix admin instance populated admin cluster location id us c instance labels python system instances to delete skip on emulator none def test instance create w two clusters admin client unique suffix admin instance populated admin cluster location id instance labels instances to delete skip on emulator alt instance id f dif unique suffix instance admin client 
instance alt instance id instance type enums instance type production labels instance labels serve nodes alt cluster id f alt instance id cluster instance cluster alt cluster id location id location id serve nodes serve nodes default storage type enums storagetype hdd alt cluster id f alt instance id location id us f cluster instance cluster alt cluster id location id location id serve nodes serve nodes default storage type enums storagetype hdd operation instance create clusters instances to delete append instance operation result timeout ensure the operation completes create a new instance instance and make sure it is the same instance alt admin client instance alt instance id instance alt reload assert instance instance alt assert instance display name instance alt display name assert instance type instance alt type clusters failed locations instance alt list clusters assert failed locations alt cluster alt cluster sorted clusters key lambda x x name assert cluster location id alt cluster location id assert alt cluster state enums cluster state ready assert cluster serve nodes alt cluster serve nodes assert cluster default storage type alt cluster default storage type assert cluster location id alt cluster location id assert alt cluster state enums cluster state ready assert cluster serve nodes alt cluster serve nodes assert cluster default storage type alt cluster default storage type test list clusters in project via client list clusters clusters failed locations admin client list clusters assert not failed locations found set expected alt cluster name alt cluster name admin cluster name assert expected issubset found temp table id test get cluster states temp table instance table temp table id helpers retry grpc unavailable temp table create tests system test instance admin py nox system lib site packages test utils retry py in wrapped function return to wrap args kwargs google cloud bigtable table py in create table client create table google cloud bigtable 
admin services bigtable table admin client py in create table response rpc request retry retry timeout timeout metadata metadata nox system lib site packages google api core gapic method py in call return wrapped func args kwargs args parent projects precise truck instances dif table id test get cluster states table kwargs metadata functools wraps callable def error remapped callable args kwargs try return callable args kwargs except grpc rpcerror as exc raise exceptions from grpc error exc from exc e google api core exceptions serviceunavailable the service is currently unavailable nox system lib site packages google api core grpc helpers py serviceunavailable | 1 |
259,142 | 27,621,707,931 | IssuesEvent | 2023-03-10 01:05:42 | ManageIQ/manageiq-loggers | https://api.github.com/repos/ManageIQ/manageiq-loggers | opened | CVE-2023-27530 (High) detected in rack-2.2.4.gem | Mend: dependency security vulnerability | ## CVE-2023-27530 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rack-2.2.4.gem</b></p></summary>
<p>Rack provides a minimal, modular and adaptable interface for developing
web applications in Ruby. By wrapping HTTP requests and responses in
the simplest way possible, it unifies and distills the API for web
servers, web frameworks, and software in between (the so-called
middleware) into a single method call.
</p>
<p>Library home page: <a href="https://rubygems.org/gems/rack-2.2.4.gem">https://rubygems.org/gems/rack-2.2.4.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rack-2.2.4.gem</p>
<p>
Dependency Hierarchy:
- manageiq-style-1.3.2.gem (Root Library)
- rubocop-rails-2.15.2.gem
- :x: **rack-2.2.4.gem** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A possible DoS vulnerability in the Multipart MIME parsing code in Rack. This vulnerability has been assigned the CVE identifier CVE-2023-27530.
Versions Affected: All. Not affected: None Fixed Versions: 3.0.4.2, 2.2.6.3, 2.1.4.3, 2.0.9.3 The Multipart MIME parsing code in Rack limits the number of file parts, but does not limit the total number of parts that can be uploaded. Carefully crafted requests can abuse this and cause multipart parsing to take longer than expected. All users running an affected release should either upgrade or use one of the workarounds immediately.
<p>Publish Date: 2023-03-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-27530>CVE-2023-27530</a></p>
</p>
</details>
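The DoS described above comes from parsing an unbounded number of multipart parts. As an illustration only — this is not Rack's code, and the boundary handling and limit value are simplified assumptions — a parser can cap the total part count before doing any per-part work:

```python
def count_multipart_parts(body: bytes, boundary: bytes, max_parts: int = 128) -> int:
    """Count parts in a multipart body, refusing to parse past max_parts.

    A delimiter line is b"--" + boundary; the closing delimiter carries a
    trailing b"--" and so does not match. Raising ValueError once the limit
    is exceeded stops a crafted request from forcing unbounded parsing work.
    """
    delimiter = b"--" + boundary
    parts = 0
    for line in body.split(b"\r\n"):
        if line == delimiter:
            parts += 1
            if parts > max_parts:
                raise ValueError("too many multipart parts")
    return parts


# A body with two parts and a closing delimiter.
body = (b"--XYZ\r\n"
        b'Content-Disposition: form-data; name="a"\r\n\r\n1\r\n'
        b"--XYZ\r\n"
        b'Content-Disposition: form-data; name="b"\r\n\r\n2\r\n'
        b"--XYZ--\r\n")
```

The fixed Rack releases apply an equivalent total-parts cap inside the real multipart parser.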
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
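The 7.5 above follows mechanically from the CVSS v3.1 base-score equations. A small sketch reproducing it for this vector (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N), handling the scope-unchanged case only, with metric weights taken from the CVSS v3.1 specification:

```python
import math


def roundup(x: float) -> float:
    """CVSS "Roundup": smallest value, to one decimal, >= x (spec Appendix A)."""
    i = int(round(x * 100000))
    if i % 10000 == 0:
        return i / 100000.0
    return (math.floor(i / 10000) + 1) / 10.0


def cvss3_base(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for scope-unchanged metric weights."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss  # scope unchanged
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))


# AV:N=0.85, AC:L=0.77, PR:N=0.85, UI:N=0.85, C:H=0.56, I:N=0, A:N=0
score = cvss3_base(av=0.85, ac=0.77, pr=0.85, ui=0.85, c=0.56, i=0.0, a=0.0)
```

With these weights the impact term is 3.5952 and the exploitability term about 3.887, and rounding their sum up to one decimal yields the 7.5 reported above.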
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-03-03</p>
<p>Fix Resolution: rack - 2.0.9.3,2.1.4.3,2.2.6.3,3.0.4.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2023-27530 (High) detected in rack-2.2.4.gem - ## CVE-2023-27530 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rack-2.2.4.gem</b></p></summary>
<p>Rack provides a minimal, modular and adaptable interface for developing
web applications in Ruby. By wrapping HTTP requests and responses in
the simplest way possible, it unifies and distills the API for web
servers, web frameworks, and software in between (the so-called
middleware) into a single method call.
</p>
<p>Library home page: <a href="https://rubygems.org/gems/rack-2.2.4.gem">https://rubygems.org/gems/rack-2.2.4.gem</a></p>
<p>Path to dependency file: /Gemfile.lock</p>
<p>Path to vulnerable library: /home/wss-scanner/.gem/ruby/2.7.0/cache/rack-2.2.4.gem</p>
<p>
Dependency Hierarchy:
- manageiq-style-1.3.2.gem (Root Library)
- rubocop-rails-2.15.2.gem
- :x: **rack-2.2.4.gem** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A possible DoS vulnerability in the Multipart MIME parsing code in Rack. This vulnerability has been assigned the CVE identifier CVE-2023-27530.
Versions Affected: All. Not affected: None Fixed Versions: 3.0.4.2, 2.2.6.3, 2.1.4.3, 2.0.9.3 The Multipart MIME parsing code in Rack limits the number of file parts, but does not limit the total number of parts that can be uploaded. Carefully crafted requests can abuse this and cause multipart parsing to take longer than expected. All users running an affected release should either upgrade or use one of the workarounds immediately.
<p>Publish Date: 2023-03-03
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-27530>CVE-2023-27530</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2023-03-03</p>
<p>Fix Resolution: rack - 2.0.9.3,2.1.4.3,2.2.6.3,3.0.4.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in rack gem cve high severity vulnerability vulnerable library rack gem rack provides a minimal modular and adaptable interface for developing web applications in ruby by wrapping http requests and responses in the simplest way possible it unifies and distills the api for web servers web frameworks and software in between the so called middleware into a single method call library home page a href path to dependency file gemfile lock path to vulnerable library home wss scanner gem ruby cache rack gem dependency hierarchy manageiq style gem root library rubocop rails gem x rack gem vulnerable library found in base branch master vulnerability details a possible dos vulnerability in the multipart mime parsing code in rack this vulnerability has been assigned the cve identifier cve versions affected all not affected none fixed versions the multipart mime parsing code in rack limits the number of file parts but does not limit the total number of parts that can be uploaded carefully crafted requests can abuse this and cause multipart parsing to take longer than expected all users running an affected release should either upgrade or use one of the workarounds immediately publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution rack step up your open source security game with mend | 0 |
101 | 2,537,893,615 | IssuesEvent | 2015-01-26 23:53:57 | tinkerpop/tinkerpop3 | https://api.github.com/repos/tinkerpop/tinkerpop3 | closed | sum() and mean() need to be steps to work in OLAP | enhancement process | Currently `sum()` and `mean()` are terminal methods of `Traversal` ( t -> Number). This means that they can not be submitted to a GraphComputer. As such, we need to make SumStep and MeanStep w/ respective MapReducers so that these functions can be calculated in a distributed manner on GraphComputer. | 1.0 | sum() and mean() need to be steps to work in OLAP - Currently `sum()` and `mean()` are terminal methods of `Traversal` ( t -> Number). This means that they can not be submitted to a GraphComputer. As such, we need to make SumStep and MeanStep w/ respective MapReducers so that these functions can be calculated in a distributed manner on GraphComputer. | process | sum and mean need to be steps to work in olap currently sum and mean are terminal methods of traversal t number this means that they can not be submitted to a graphcomputer as such we need to make sumstep and meanstep w respective mapreducers so that these functions can be calculated in a distributed manner on graphcomputer | 1 |
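The TinkerPop row above notes that `sum()` and `mean()` must become steps with map-reduce counterparts to run on a GraphComputer. The key point is that a mean cannot be combined from partial means alone, but (sum, count) pairs combine associatively across workers. The actual SumStep/MeanStep are Java; this Python sketch only illustrates that combine logic:

```python
from functools import reduce


def map_partition(values):
    """Map phase: each worker reduces its partition to a (sum, count) pair."""
    return (sum(values), len(values))


def combine(a, b):
    """Reduce phase: (sum, count) pairs merge associatively, so partial
    results from any number of workers can be combined in any order."""
    return (a[0] + b[0], a[1] + b[1])


def distributed_mean(partitions):
    total, count = reduce(combine, (map_partition(p) for p in partitions))
    return total / count


partitions = [[1, 2, 3], [4, 5], [6]]
```

Because `combine` is associative and commutative, the reducer can fold partial pairs in whatever order the distributed runtime delivers them.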
13,026 | 15,380,386,780 | IssuesEvent | 2021-03-02 21:01:39 | google/closure-compiler | https://api.github.com/repos/google/closure-compiler | closed | Add updating errors -> suppressions map to release process | process | See https://github.com/google/closure-compiler/issues/3554.
The steps I have in mind are:
- open-source `MapDiagnosticToSuppression`
- give that file an option to generate a Markdown table
- add a step to the release process to run that file and update https://github.com/google/closure-compiler/wiki/@suppress-annotations#error-to-suppression-map (we already have a similar step to update https://github.com/google/closure-compiler/wiki/Flags-and-Options automatically) | 1.0 | Add updating errors -> suppressions map to release process - See https://github.com/google/closure-compiler/issues/3554.
The steps I have in mind are:
- open-source `MapDiagnosticToSuppression`
- give that file an option to generate a Markdown table
- add a step to the release process to run that file and update https://github.com/google/closure-compiler/wiki/@suppress-annotations#error-to-suppression-map (we already have a similar step to update https://github.com/google/closure-compiler/wiki/Flags-and-Options automatically) | process | add updating errors suppressions map to release process see the steps i have in mind are open source mapdiagnostictosuppression give that file an option to generate a markdown table add a step to the release process to run that file and update we already have a similar step to update automatically | 1 |
388,940 | 26,789,639,249 | IssuesEvent | 2023-02-01 07:19:48 | nlohmann/json | https://api.github.com/repos/nlohmann/json | closed | Clean up badges | documentation solution: proposed fix good first issue | ### Description
- [x] Badge for recently added Cirrus CI is missing (URL: https://api.cirrus-ci.com/github/nlohmann/json.svg, Link: https://cirrus-ci.com/github/nlohmann/json - it should be added
- [x] Badge for lgtm states `no longer available` - it should be removed | 1.0 | Clean up badges - ### Description
- [x] Badge for recently added Cirrus CI is missing (URL: https://api.cirrus-ci.com/github/nlohmann/json.svg, Link: https://cirrus-ci.com/github/nlohmann/json - it should be added
- [x] Badge for lgtm states `no longer available` - it should be removed | non_process | clean up badges description badge for recently added cirrus ci is missing url link it should be added badge for lgtm states no longer available it should be removed | 0 |
5,073 | 7,869,847,535 | IssuesEvent | 2018-06-24 18:43:01 | pwittchen/ReactiveNetwork | https://api.github.com/repos/pwittchen/ReactiveNetwork | closed | Release 0.12.4 | RxJava1.x in progress release process | **Initial release notes**:
- updated project dependencies - PR #268, PR #269, commit 02449af2f38ac463e1aa8824beee46ea823fd83b
**Things to do**:
- [x] update JavaDoc on `gh-pages`
- [x] update documentation on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] update docs on gh-pages after updating `README.md`
- [x] create new GitHub release
| 1.0 | Release 0.12.4 - **Initial release notes**:
- updated project dependencies - PR #268, PR #269, commit 02449af2f38ac463e1aa8824beee46ea823fd83b
**Things to do**:
- [x] update JavaDoc on `gh-pages`
- [x] update documentation on `gh-pages`
- [x] bump library version
- [x] upload archives to Maven Central
- [x] close and release artifact on Maven Central
- [x] update `CHANGELOG.md` after Maven Sync
- [x] bump library version in `README.md`
- [x] update docs on gh-pages after updating `README.md`
- [x] create new GitHub release
| process | release initial release notes updated project dependencies pr pr commit things to do update javadoc on gh pages update documentation on gh pages bump library version upload archives to maven central close and release artifact on maven central update changelog md after maven sync bump library version in readme md update docs on gh pages after updating readme md create new github release | 1 |
20,449 | 27,108,092,241 | IssuesEvent | 2023-02-15 13:34:20 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | closed | How to get location of code coverage.dat files? | P3 type: support / not a bug (process) coverage team-Rules-Server stale | We are running codecov through bazel. (not sure this is the best way but seems ok)
Running locally it works:
```
filegroup(
name = "coverage_files",
srcs = glob(["bazel-out/**/coverage.dat"]),
)
codecov(
name = "codecov",
runfiles = ["//:coverage_files"],
token_file = ":codecov.token",
tags = ["manual"],
)
```
But on the CI it doesn't work because bazel-out seems not to exist.
Is there any other way to get those files? By a query or rule?
I searched but couldn't find anything useful.
Best idea I have so far is querying for all tests and then build the file paths manually from that. Doesn't seem very neat though. | 1.0 | How to get location of code coverage.dat files? - We are running codecov through bazel. (not sure this is the best way but seems ok)
Running locally it works:
```
filegroup(
name = "coverage_files",
srcs = glob(["bazel-out/**/coverage.dat"]),
)
codecov(
name = "codecov",
runfiles = ["//:coverage_files"],
token_file = ":codecov.token",
tags = ["manual"],
)
```
But on the CI it doesn't work because bazel-out seems not to exist.
Is there any other way to get those files? By a query or rule?
I searched but couldn't find anything useful.
Best idea I have so far is querying for all tests and then build the file paths manually from that. Doesn't seem very neat though. | process | how to get location of code coverage dat files we are running codecov through bazel not sure this is the best way but seems ok running locally it works filegroup name coverage files srcs glob codecov name codecov runfiles token file codecov token tags but on the ci it doesn t work because bazel out seems not to exist is there any other way to get those files by a query or rule i searched but couldn t find anything useful best idea i have so far is querying for all tests and then build the file paths manually from that doesn t seem very neat though | 1 |
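Setting the Bazel-query part of the question aside, the bazel row above ultimately needs to find every `coverage.dat` under an output tree. A hedged sketch of that recursive-glob fallback (the directory names below are invented to mimic a `bazel-out` testlogs layout, not real Bazel output):

```python
import tempfile
from pathlib import Path


def find_coverage_files(root):
    """Recursively collect coverage.dat files under root, returned as sorted
    root-relative paths (mirrors glob(["bazel-out/**/coverage.dat"]))."""
    root = Path(root)
    return sorted(p.relative_to(root).as_posix()
                  for p in root.rglob("coverage.dat"))


# Demo on a throwaway tree shaped like a bazel-out testlogs directory.
demo = Path(tempfile.mkdtemp())
for sub in ("k8-fastbuild/testlogs/pkg_a/test_a",
            "k8-fastbuild/testlogs/pkg_b/test_b"):
    d = demo / sub
    d.mkdir(parents=True)
    (d / "coverage.dat").write_text("")
found = find_coverage_files(demo)
```

On CI the catch remains the one the reporter hit: the glob must run against wherever that output tree actually lives on the worker, which may not be a `bazel-out` symlink in the workspace.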