Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7–112) | repo_url (string, length 36–141) | action (string, 3 classes) | title (string, length 1–744) | labels (string, length 4–574) | body (string, length 9–211k) | index (string, 10 classes) | text_combine (string, length 96–211k) | label (string, 2 classes) | text (string, length 96–188k) | binary_label (int64, 0–1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
232,983 | 25,718,997,512 | IssuesEvent | 2022-12-07 12:24:31 | dmyers87/boomstrap-react | https://api.github.com/repos/dmyers87/boomstrap-react | opened | CVE-2022-37603 (High) detected in loader-utils-1.4.0.tgz | security vulnerability | ## CVE-2022-37603 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loader-utils-1.4.0.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/url-loader/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- component-playground-1.0.3.tgz (Root Library)
- url-loader-0.5.9.tgz
- :x: **loader-utils-1.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dmyers87/boomstrap-react/commit/56ff85f974b05cab00c2299011cfbdf611dd773d">56ff85f974b05cab00c2299011cfbdf611dd773d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the url variable in interpolateName.js.
<p>Publish Date: 2022-10-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37603>CVE-2022-37603</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-3rfm-jhwj-7488">https://github.com/advisories/GHSA-3rfm-jhwj-7488</a></p>
<p>Release Date: 2022-10-14</p>
<p>Fix Resolution (loader-utils): 2.0.4</p>
<p>Direct dependency fix Resolution (component-playground): 1.0.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | CVE-2022-37603 (High) detected in loader-utils-1.4.0.tgz - ## CVE-2022-37603 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>loader-utils-1.4.0.tgz</b></p></summary>
<p>utils for webpack loaders</p>
<p>Library home page: <a href="https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz">https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/url-loader/node_modules/loader-utils/package.json</p>
<p>
Dependency Hierarchy:
- component-playground-1.0.3.tgz (Root Library)
- url-loader-0.5.9.tgz
- :x: **loader-utils-1.4.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/dmyers87/boomstrap-react/commit/56ff85f974b05cab00c2299011cfbdf611dd773d">56ff85f974b05cab00c2299011cfbdf611dd773d</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Regular expression denial of service (ReDoS) flaw was found in Function interpolateName in interpolateName.js in webpack loader-utils 2.0.0 via the url variable in interpolateName.js.
<p>Publish Date: 2022-10-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-37603>CVE-2022-37603</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-3rfm-jhwj-7488">https://github.com/advisories/GHSA-3rfm-jhwj-7488</a></p>
<p>Release Date: 2022-10-14</p>
<p>Fix Resolution (loader-utils): 2.0.4</p>
<p>Direct dependency fix Resolution (component-playground): 1.0.4</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | non_process | cve high detected in loader utils tgz cve high severity vulnerability vulnerable library loader utils tgz utils for webpack loaders library home page a href path to dependency file package json path to vulnerable library node modules url loader node modules loader utils package json dependency hierarchy component playground tgz root library url loader tgz x loader utils tgz vulnerable library found in head commit a href found in base branch master vulnerability details a regular expression denial of service redos flaw was found in function interpolatename in interpolatename js in webpack loader utils via the url variable in interpolatename js publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution loader utils direct dependency fix resolution component playground rescue worker helmet automatic remediation is available for this issue | 0 |
98,371 | 8,675,495,496 | IssuesEvent | 2018-11-30 11:03:12 | shahkhan40/shantestrep | https://api.github.com/repos/shahkhan40/shantestrep | closed | fxscantest : ApiV1IssuesJobIdIdGetQueryParamPagesizeNegativeNumber | fxscantest | Project : fxscantest
Job : uatenv
Env : uatenv
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MzY5NzVjYTgtMWM0MS00MWEyLWEzZWEtNTE0OWFkNTNiNTll; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 10:53:30 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/issues/job-id/MbVspRkr?pageSize=-1&status=MbVspRkr
Request :
Response :
{
"timestamp" : "2018-11-30T10:53:30.747+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/issues/job-id/MbVspRkr"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | 1.0 | fxscantest : ApiV1IssuesJobIdIdGetQueryParamPagesizeNegativeNumber - Project : fxscantest
Job : uatenv
Env : uatenv
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=MzY5NzVjYTgtMWM0MS00MWEyLWEzZWEtNTE0OWFkNTNiNTll; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Fri, 30 Nov 2018 10:53:30 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/issues/job-id/MbVspRkr?pageSize=-1&status=MbVspRkr
Request :
Response :
{
"timestamp" : "2018-11-30T10:53:30.747+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/issues/job-id/MbVspRkr"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]
--- FX Bot --- | non_process | fxscantest project fxscantest job uatenv env uatenv region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api issues job id mbvsprkr logs assertion resolved to result assertion resolved to result fx bot | 0 |
7,762 | 10,883,222,183 | IssuesEvent | 2019-11-18 03:55:54 | zchenry/papers | https://api.github.com/repos/zchenry/papers | opened | Active Search and Bandits on Graphs Using Sigma-Optimality | Active Search Gaussian Process Graph | Yifei Ma, Tzu-Kuo Huang, Jeff Schneider
https://drive.google.com/open?id=1H0nN0VbctRgdM6VN0fTCAMQLlAM_WmGp
Gaussian process based active search method on graph. | 1.0 | Active Search and Bandits on Graphs Using Sigma-Optimality - Yifei Ma, Tzu-Kuo Huang, Jeff Schneider
https://drive.google.com/open?id=1H0nN0VbctRgdM6VN0fTCAMQLlAM_WmGp
Gaussian process based active search method on graph. | process | active search and bandits on graphs using sigma optimality yifei ma tzu kuo huang jeff schneider gaussian process based active search method on graph | 1 |
10,649 | 13,447,558,118 | IssuesEvent | 2020-09-08 14:23:37 | googleapis/google-cloud-dotnet | https://api.github.com/repos/googleapis/google-cloud-dotnet | closed | prod:cloud-sharp/google-cloud-dotnet/gcp_windows/autorelease failing since August 25th | priority: p1 type: process | https://fusion.corp.google.com/projectanalysis/summary/KOKORO/prod%3Acloud-sharp%2Fgoogle-cloud-dotnet%2Fgcp_windows%2Fautorelease
CC @bcoe
| 1.0 | prod:cloud-sharp/google-cloud-dotnet/gcp_windows/autorelease failing since August 25th - https://fusion.corp.google.com/projectanalysis/summary/KOKORO/prod%3Acloud-sharp%2Fgoogle-cloud-dotnet%2Fgcp_windows%2Fautorelease
CC @bcoe
| process | prod cloud sharp google cloud dotnet gcp windows autorelease failing since august cc bcoe | 1 |
54,269 | 13,902,499,075 | IssuesEvent | 2020-10-20 05:29:26 | emilwareus/angular | https://api.github.com/repos/emilwareus/angular | opened | CVE-2020-7733 (High) detected in ua-parser-js-0.7.12.tgz, ua-parser-js-0.7.17.tgz | security vulnerability | ## CVE-2020-7733 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ua-parser-js-0.7.12.tgz</b>, <b>ua-parser-js-0.7.17.tgz</b></p></summary>
<p>
<details><summary><b>ua-parser-js-0.7.12.tgz</b></p></summary>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.12.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.12.tgz</a></p>
<p>Path to dependency file: angular/integration/hello_world__closure/yarn.lock</p>
<p>Path to vulnerable library: angular/integration/hello_world__closure/yarn.lock,angular/integration/i18n/yarn.lock,angular/integration/dynamic-compiler/yarn.lock,angular/integration/ng_elements/yarn.lock,angular/integration/injectable-def/yarn.lock,angular/integration/hello_world__systemjs_umd/yarn.lock</p>
<p>
Dependency Hierarchy:
- lite-server-2.2.2.tgz (Root Library)
- browser-sync-2.23.5.tgz
- :x: **ua-parser-js-0.7.12.tgz** (Vulnerable Library)
</details>
<details><summary><b>ua-parser-js-0.7.17.tgz</b></p></summary>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz</a></p>
<p>Path to dependency file: angular/integration/ngcc/yarn.lock</p>
<p>Path to vulnerable library: angular/integration/ngcc/yarn.lock</p>
<p>
Dependency Hierarchy:
- lite-server-2.2.2.tgz (Root Library)
- browser-sync-2.26.3.tgz
- :x: **ua-parser-js-0.7.17.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/emilwareus/angular/commit/0a802f3678958587eafa0136d927232b89cc1427">0a802f3678958587eafa0136d927232b89cc1427</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets UA.
<p>Publish Date: 2020-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733>CVE-2020-7733</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 0.7.22</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7733 (High) detected in ua-parser-js-0.7.12.tgz, ua-parser-js-0.7.17.tgz - ## CVE-2020-7733 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ua-parser-js-0.7.12.tgz</b>, <b>ua-parser-js-0.7.17.tgz</b></p></summary>
<p>
<details><summary><b>ua-parser-js-0.7.12.tgz</b></p></summary>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.12.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.12.tgz</a></p>
<p>Path to dependency file: angular/integration/hello_world__closure/yarn.lock</p>
<p>Path to vulnerable library: angular/integration/hello_world__closure/yarn.lock,angular/integration/i18n/yarn.lock,angular/integration/dynamic-compiler/yarn.lock,angular/integration/ng_elements/yarn.lock,angular/integration/injectable-def/yarn.lock,angular/integration/hello_world__systemjs_umd/yarn.lock</p>
<p>
Dependency Hierarchy:
- lite-server-2.2.2.tgz (Root Library)
- browser-sync-2.23.5.tgz
- :x: **ua-parser-js-0.7.12.tgz** (Vulnerable Library)
</details>
<details><summary><b>ua-parser-js-0.7.17.tgz</b></p></summary>
<p>Lightweight JavaScript-based user-agent string parser</p>
<p>Library home page: <a href="https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz">https://registry.npmjs.org/ua-parser-js/-/ua-parser-js-0.7.17.tgz</a></p>
<p>Path to dependency file: angular/integration/ngcc/yarn.lock</p>
<p>Path to vulnerable library: angular/integration/ngcc/yarn.lock</p>
<p>
Dependency Hierarchy:
- lite-server-2.2.2.tgz (Root Library)
- browser-sync-2.26.3.tgz
- :x: **ua-parser-js-0.7.17.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/emilwareus/angular/commit/0a802f3678958587eafa0136d927232b89cc1427">0a802f3678958587eafa0136d927232b89cc1427</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package ua-parser-js before 0.7.22 are vulnerable to Regular Expression Denial of Service (ReDoS) via the regex for Redmi Phones and Mi Pad Tablets UA.
<p>Publish Date: 2020-09-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7733>CVE-2020-7733</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7733</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 0.7.22</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in ua parser js tgz ua parser js tgz cve high severity vulnerability vulnerable libraries ua parser js tgz ua parser js tgz ua parser js tgz lightweight javascript based user agent string parser library home page a href path to dependency file angular integration hello world closure yarn lock path to vulnerable library angular integration hello world closure yarn lock angular integration yarn lock angular integration dynamic compiler yarn lock angular integration ng elements yarn lock angular integration injectable def yarn lock angular integration hello world systemjs umd yarn lock dependency hierarchy lite server tgz root library browser sync tgz x ua parser js tgz vulnerable library ua parser js tgz lightweight javascript based user agent string parser library home page a href path to dependency file angular integration ngcc yarn lock path to vulnerable library angular integration ngcc yarn lock dependency hierarchy lite server tgz root library browser sync tgz x ua parser js tgz vulnerable library found in head commit a href vulnerability details the package ua parser js before are vulnerable to regular expression denial of service redos via the regex for redmi phones and mi pad tablets ua publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
158,325 | 13,728,635,195 | IssuesEvent | 2020-10-04 12:35:25 | dekaulitz/MockyUp | https://api.github.com/repos/dekaulitz/MockyUp | closed | Update Swagger Documentation | documentation enhancement | Update the api contract for rest api for current existing rest api, for now our contract located on `resources/public/swagger.json` | 1.0 | Update Swagger Documentation - Update the api contract for rest api for current existing rest api, for now our contract located on `resources/public/swagger.json` | non_process | update swagger documentation update the api contract for rest api for current existing rest api for now our contract located on resources public swagger json | 0 |
779,443 | 27,353,059,097 | IssuesEvent | 2023-02-27 10:56:02 | sebastien-d-me/SebBlog | https://api.github.com/repos/sebastien-d-me/SebBlog | opened | Comment deletion system | Priority: High Statut: Not started Type : Back-end | #### Description:
Creation of the comment deletion system.
------------
###### Estimated time: 1 day(s)
###### Difficulty: ⭐
| 1.0 | Comment deletion system - #### Description:
Creation of the comment deletion system.
------------
###### Estimated time: 1 day(s)
###### Difficulty: ⭐
| non_process | comment deletion system description creation of the comment deletion system estimated time day s difficulty ⭐ | 0 |
14,749 | 11,100,558,540 | IssuesEvent | 2019-12-16 19:28:25 | cockroachdb/docs | https://api.github.com/repos/cockroachdb/docs | opened | double `$` prompt showing in 19.2 upgrade instructions | A-docs-infrastructure C-release-note P-2 T-something-broken | In our 19.2 upgrade docs, we have a double `$` prompt for the mac & linux instructions. [Link here](https://www.cockroachlabs.com/docs/v19.2/upgrade-cockroach-version.html).
It's on step 4 of upgrading from a previous version.
> If you use cockroach in your $PATH, rename the outdated cockroach binary, and then move the new one into its place:
```$ $ i="$(which cockroach)"; mv "$i" "$i"_old```
| 1.0 | double `$` prompt showing in 19.2 upgrade instructions - In our 19.2 upgrade docs, we have a double `$` prompt for the mac & linux instructions. [Link here](https://www.cockroachlabs.com/docs/v19.2/upgrade-cockroach-version.html).
It's on step 4 of upgrading from a previous version.
> If you use cockroach in your $PATH, rename the outdated cockroach binary, and then move the new one into its place:
```$ $ i="$(which cockroach)"; mv "$i" "$i"_old```
| non_process | double prompt showing in upgrade instructions in our upgrade docs we have a double prompt for the mac linux instructions it s on step of upgrading from a previous version if you use cockroach in your path rename the outdated cockroach binary and then move the new one into its place i which cockroach mv i i old | 0 |
176,403 | 13,640,120,359 | IssuesEvent | 2020-09-25 12:14:16 | hailstorm75/MarkDoc.Core | https://api.github.com/repos/hailstorm75/MarkDoc.Core | closed | Write unit tests for methods | Unit Test | # Library name(s)
`UT.Members`
# Subtasks
- [x] Name
- [x] Raw name
- [x] Accessors
- [x] Inheritance modifiers
- [x] Override
- [x] Virtual
- [x] Abstract
- [x] None
- [x] Static
- [x] Async
- [x] Generics
- [x] Name
- [x] Constraint
- [x] Return type
- [x] Operator
| 1.0 | Write unit tests for methods - # Library name(s)
`UT.Members`
# Subtasks
- [x] Name
- [x] Raw name
- [x] Accessors
- [x] Inheritance modifiers
- [x] Override
- [x] Virtual
- [x] Abstract
- [x] None
- [x] Static
- [x] Async
- [x] Generics
- [x] Name
- [x] Constraint
- [x] Return type
- [x] Operator
| non_process | write unit tests for methods library name s ut members subtasks name raw name accessors inheritance modifiers override virtual abstract none static async generics name constraint return type operator | 0 |
15,000 | 18,681,719,235 | IssuesEvent | 2021-11-01 06:58:53 | tikv/tikv | https://api.github.com/repos/tikv/tikv | reopened | Coprocessor functions migration from non-vec framework | help wanted sig/coprocessor difficulty/easy | ## Feature Request
Coprocessor historically has two implementations. One is based on the vectorized framework, and the other one is based on the plain (non-vec) framework, which was removed recently. A few of the functions in the legacy non-vec framework have not been ported into the vectorized framework yet.
The following is the list of functions that existed in non-vec framework but not in vectorized framework. You may also want to look into the [non-vec framework before it was removed](https://github.com/tikv/tikv/tree/1a88f12ebf50064b992fc5efc7e7f55795210521/components/tidb_query_normal_expr/src).
- [x] AddDateAndDuration
- [ ] AddDateAndString
- [x] AddDatetimeAndDuration
- [x] AddDatetimeAndString
- [x] AddDurationAndDuration
- [x] AddDurationAndString
- [ ] AddTimeDateTimeNull
- [ ] AddTimeDurationNull
- [ ] AddTimeStringNull
- [ ] Compress
- [x] Date
- [ ] DateDiff
- [ ] Instr
- [x] Locate2ArgsUtf8
- [x] Locate3ArgsUtf8
- [ ] Lower
- [x] NullTimeDiff
- [x] Quote
- [x] RegexpSig
- [x] RegexpUtf8Sig
- [ ] RpadUtf8
- [x] SubDateAndDuration
- [ ] SubDateAndString
- [x] SubDatetimeAndDuration
- [ ] SubDatetimeAndString
- [x] SubDurationAndDuration
- [x] SubDurationAndString
- [ ] SubTimeDateTimeNull
- [ ] SubTimeDurationNull
- [ ] Substring2Args
- [ ] Substring2ArgsUtf8
- [ ] Substring3Args
- [ ] Substring3ArgsUtf8
- [ ] Trim2Args
- [ ] TruncateDecimal
- [ ] TruncateUint
- [ ] Uncompress
- [x] WeekWithoutMode
- [x] YearWeekWithMode
- [x] YearWeekWithoutMode
| 1.0 | Coprocessor functions migration from non-vec framework - ## Feature Request
Coprocessor historically has two implementations. One is based on the vectorized framework, and the other one is based on the plain (non-vec) framework, which was removed recently. A few of the functions in the legacy non-vec framework have not been ported into the vectorized framework yet.
The following is the list of functions that existed in non-vec framework but not in vectorized framework. You may also want to look into the [non-vec framework before it was removed](https://github.com/tikv/tikv/tree/1a88f12ebf50064b992fc5efc7e7f55795210521/components/tidb_query_normal_expr/src).
- [x] AddDateAndDuration
- [ ] AddDateAndString
- [x] AddDatetimeAndDuration
- [x] AddDatetimeAndString
- [x] AddDurationAndDuration
- [x] AddDurationAndString
- [ ] AddTimeDateTimeNull
- [ ] AddTimeDurationNull
- [ ] AddTimeStringNull
- [ ] Compress
- [x] Date
- [ ] DateDiff
- [ ] Instr
- [x] Locate2ArgsUtf8
- [x] Locate3ArgsUtf8
- [ ] Lower
- [x] NullTimeDiff
- [x] Quote
- [x] RegexpSig
- [x] RegexpUtf8Sig
- [ ] RpadUtf8
- [x] SubDateAndDuration
- [ ] SubDateAndString
- [x] SubDatetimeAndDuration
- [ ] SubDatetimeAndString
- [x] SubDurationAndDuration
- [x] SubDurationAndString
- [ ] SubTimeDateTimeNull
- [ ] SubTimeDurationNull
- [ ] Substring2Args
- [ ] Substring2ArgsUtf8
- [ ] Substring3Args
- [ ] Substring3ArgsUtf8
- [ ] Trim2Args
- [ ] TruncateDecimal
- [ ] TruncateUint
- [ ] Uncompress
- [x] WeekWithoutMode
- [x] YearWeekWithMode
- [x] YearWeekWithoutMode
| process | coprocessor functions migration from non vec framework feature request coprocessor historically has two implementations one is based on the vectorized framework and the other one is based on the plain non vec framework which was removed recently a few of the functions in the legacy non vec framework have not been ported into the vectorized framework yet the following is the list of functions that existed in non vec framework but not in vectorized framework you may also want to look into the adddateandduration adddateandstring adddatetimeandduration adddatetimeandstring adddurationandduration adddurationandstring addtimedatetimenull addtimedurationnull addtimestringnull compress date datediff instr lower nulltimediff quote regexpsig subdateandduration subdateandstring subdatetimeandduration subdatetimeandstring subdurationandduration subdurationandstring subtimedatetimenull subtimedurationnull truncatedecimal truncateuint uncompress weekwithoutmode yearweekwithmode yearweekwithoutmode | 1 |
17,511 | 23,325,676,697 | IssuesEvent | 2022-08-08 20:55:55 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | terminal.integrataed.splitCwd inherited doesn't work correctly for unicode characters | bug help wanted terminal terminal-process | when split terminal in chinese path,error! | 1.0 | terminal.integrataed.splitCwd inherited doesn't work correctly for unicode characters - when split terminal in chinese path,error! | process | terminal integrataed splitcwd inherited doesn t work correctly for unicode characters when split terminal in chinese path error | 1 |
19,605 | 25,959,435,574 | IssuesEvent | 2022-12-18 17:48:53 | scikit-learn/scikit-learn | https://api.github.com/repos/scikit-learn/scikit-learn | closed | Support sample_weight in KBinsDiscretizer(strategy="quantile") | New Feature Easy help wanted module:preprocessing | #### Describe the workflow you want to enable
```python
trans = KBinsDiscretizer(strategy="quantile")
trans.fit(X, sample_weight=w)
```
This should use weighted quantiles.
#### Additional context
Similar to #20522. | 1.0 | Support sample_weight in KBinsDiscretizer(strategy="quantile") - #### Describe the workflow you want to enable
```python
trans = KBinsDiscretizer(strategy="quantile")
trans.fit(X, sample_weight=w)
```
This should use weighted quantiles.
#### Additional context
Similar to #20522. | process | support sample weight in kbinsdiscretizer strategy quantile describe the workflow you want to enable python trans kbinsdiscretizer strategy quantile trans fit x sample weight w this should use weighted quantiles additional context similar to | 1 |
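The weighted-quantile behaviour requested in the scikit-learn record above comes down to choosing bin edges so that each bin holds roughly equal total sample weight. A minimal NumPy sketch of that idea follows; it is only an illustration, not scikit-learn's implementation, and the helper name `weighted_bin_edges` is invented for the example.

```python
# Sketch of quantile binning with sample weights: place the bin edges so that
# each bin receives an (approximately) equal share of the total weight.
import numpy as np

def weighted_bin_edges(x, sample_weight, n_bins):
    """Return n_bins + 1 edges computed from the weighted empirical CDF."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(sample_weight, dtype=float)
    order = np.argsort(x)
    x_sorted, w_sorted = x[order], w[order]
    cdf = np.cumsum(w_sorted) / w_sorted.sum()       # weighted empirical CDF
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)
    edges = np.interp(quantiles, cdf, x_sorted)      # value at each weighted quantile
    edges[0], edges[-1] = x_sorted[0], x_sorted[-1]  # pin outer edges to the data range
    return edges

x = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
w = np.array([1.0, 1.0, 1.0, 1.0, 10.0])   # the last point carries most of the weight
print(weighted_bin_edges(x, w, n_bins=2))  # edges shift toward the heavily weighted point
```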
13,532 | 16,065,769,357 | IssuesEvent | 2021-04-23 18:50:01 | googleapis/java-shared-dependencies | https://api.github.com/repos/googleapis/java-shared-dependencies | closed | Promote to 1.0.0 | type: process | We're pretty sure this is the path we want to take, so let's make this 1.0.0 and version appropriately. | 1.0 | Promote to 1.0.0 - We're pretty sure this is the path we want to take, so let's make this 1.0.0 and version appropriately. | process | promote to we re pretty sure this is the path we want to take so let s make this and version appropriately | 1 |
106,037 | 4,258,662,458 | IssuesEvent | 2016-07-11 07:57:37 | GeographicaGS/Alboran | https://api.github.com/repos/GeographicaGS/Alboran | opened | Map - remove layers from the table of contents | feature priority:high | Button to remove all layers from the table of contents at once. | 1.0 | Map - remove layers from the table of contents - Button to remove all layers from the table of contents at once. | non_process | mapa eliminar capas de la tabla de contenidos botón para eliminar todas las capas a la vez de la tabla de contenidos | 0 |
17,610 | 23,428,503,078 | IssuesEvent | 2022-08-14 19:07:47 | alchemistry/alchemlyb | https://api.github.com/repos/alchemistry/alchemlyb | closed | extract_u_nk return current state of the file | question preprocessors | I'm working on a workflow for ABFE calculations #111 #114 and is currently working on the preprocessing.subsampling part.
The subsampling method `dhdl` needs to decorrelate the u_nk according to the column of the current state. However, the data frame returned by alchemlyb.parsing.gmx.extract_u_nk doesn't contain the information with regard to the current state.
I noticed that the `alchemlyb.parsing.gmx.extract_u_nk` does read state from the file so I wonder if it is possible for the extract_u_nk to return the current state of the dataframe.
I have several thoughts but I want to get the opinion from the community and possible issues with this.
The first is to set the state as metadata of the dataframe but not many people might know this usage. (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.attrs.html)
`u_k.attrs['state'] = state`
The second is to set this information into the `name` since it is currently set to 'u_nk', which I think is not that useful
`u_k.name = 'u_nk state: {}'.format(state)`
The third option is to return the metadata directly, which will break the current API
```
def extract_u_nk(xvg, T):
return u_k, {'state': state}
```
Obviously, one could also recover the state by using the row name
`state = u_k.columns.values.tolist().index(u_k.index.values[0][1:])` | 1.0 | extract_u_nk return current state of the file - I'm working on a workflow for ABFE calculations #111 #114 and is currently working on the preprocessing.subsampling part.
The subsampling method `dhdl` needs to decorrelate the u_nk according to the column of the current state. However, the data frame returned by alchemlyb.parsing.gmx.extract_u_nk doesn't contain the information with regard to the current state.
I noticed that the `alchemlyb.parsing.gmx.extract_u_nk` does read state from the file so I wonder if it is possible for the extract_u_nk to return the current state of the dataframe.
I have several thoughts but I want to get the opinion from the community and possible issues with this.
The first is to set the state as metadata of the dataframe but not many people might know this usage. (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.attrs.html)
`u_k.attrs['state'] = state`
The second is to set this information into the `name` since it is currently set to 'u_nk', which I think is not that useful
`u_k.name = 'u_nk state: {}'.format(state)`
The third option is to return the metadata directly, which will break the current API
```
def extract_u_nk(xvg, T):
return u_k, {'state': state}
```
Obviously, one could also recover the state by using the row name
`state = u_k.columns.values.tolist().index(u_k.index.values[0][1:])` | process | extract u nk return current state of the file i m working on a workflow for abfe calculations and is currently working on the preprocessing subsampling part the subsampling method dhdl needs to decorrelate the u nk according to the column of the current state however the data frame returned by alchemlyb parsing gmx extract u nk doesn t contain the information with regard to the current state i noticed that the alchemlyb parsing gmx extract u nk does read state from the file so i wonder if it is possible for the extract u nk to return the current state of the dataframe i have several thoughts but i want to get the opinion from the community and possible issues with this the first is to set the state as metadata of the dataframe but not many people might know this usage u k attrs state the second is to set this information into the name since it is currently set to u nk which i think is not that useful u k name u nk state format state the third option is to return the metadata directly which will break the current api def extract u nk xvg t return u k state state obviously one could also recover the state by using the row name state u k columns values tolist index u k index values | 1 |
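Of the three options weighed in the alchemlyb record above, the first (attaching the state as frame metadata) is the least invasive to the existing API. The sketch below shows the `DataFrame.attrs` mechanism in isolation, with a toy column layout rather than alchemlyb's real u_nk format; `extract_u_nk_with_state` is an invented name for the illustration.

```python
# Carrying the current lambda state as DataFrame metadata via `DataFrame.attrs`
# (a plain dict available since pandas 1.0). Note that not every pandas
# operation propagates .attrs, which is the main caveat of this option.
import pandas as pd

def extract_u_nk_with_state(raw, state):
    u_nk = pd.DataFrame(raw)
    u_nk.attrs["state"] = state   # metadata rides along with the frame
    return u_nk

u_nk = extract_u_nk_with_state({"0.0": [0.1, 0.2], "1.0": [0.3, 0.4]}, state=1)
print(u_nk.attrs["state"])        # -> 1
```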
330,532 | 28,438,118,581 | IssuesEvent | 2023-04-15 15:10:32 | istoreos/istoreos | https://api.github.com/repos/istoreos/istoreos | closed | R2S LAN port drops frequently | testing | Bug report/issue template; if you are making a suggestion, please delete this
## 1. About the issue you are submitting
To avoid duplicates, please search existing issues first and confirm there is no similar issue before opening a new one;
Note that the search should **include closed** issues (remove the "is:open" condition from the search box);
Q: Did you search the issues with keywords? (mark with "x")
* [ ] There is no similar issue
## 2. Detailed description
This has already happened twice: the LAN port drops for no apparent reason, the port LED goes out, and a reboot brings it back.
### (1) Specific problem
A:
### (2) Router model and firmware version
FriendlyElec NanoPi R2S---------iStoreOS 21.02.3 2023031713 / LuCI istoreos-21.02 branch git-23.037.57600-91128de
A:
### (3) Detailed logs and/or screenshots
A:
| 1.0 | R2S LAN port drops frequently - Bug report/issue template; if you are making a suggestion, please delete this
## 1. About the issue you are submitting
To avoid duplicates, please search existing issues first and confirm there is no similar issue before opening a new one;
Note that the search should **include closed** issues (remove the "is:open" condition from the search box);
Q: Did you search the issues with keywords? (mark with "x")
* [ ] There is no similar issue
## 2. Detailed description
This has already happened twice: the LAN port drops for no apparent reason, the port LED goes out, and a reboot brings it back.
### (1) Specific problem
A:
### (2) Router model and firmware version
FriendlyElec NanoPi R2S---------iStoreOS 21.02.3 2023031713 / LuCI istoreos-21.02 branch git-23.037.57600-91128de
A:
### (3) Detailed logs and/or screenshots
A:
| non_process | 反馈bug 问题模板,提建议请删除 关于你要提交的问题 为避免重复issue,请先搜索issue,确认没有类似issue再提交新issue; 注意搜索时 包括已关闭 的issue(删掉搜索框的的“is open”条件); q:是否用关键词搜索了issue 使用 x 选择 没有类似的issue 详细叙述 已经发生过两次了,都是无缘武功lan口掉线,灯灭,重启又好了 具体问题 a: 路由器型号和固件版本 friendlyelec nanopi istoreos luci istoreos branch git a: 详细日志和 或截图 a: | 0 |
18,482 | 24,550,765,369 | IssuesEvent | 2022-10-12 12:26:28 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] Comprehension test > Blank white screen is displayed on failing comprehension test and clicking on cancel | Bug P1 iOS Process: Fixed Process: Tested QA Process: Tested dev | Steps:
1. Install the app
2. Login/Signup
3. Enroll into study
4. Fail the comprehension test questions
5. Click on the 'Cancel' button on the top right
6. Observe
AR: Blank white screen is displayed on failing comprehension test and clicking on cancel
ER: Should be redirected to consent sections | 3.0 | [iOS] Comprehension test > Blank white screen is displayed on failing comprehension test and clicking on cancel - Steps:
1. Install the app
2. Login/Signup
3. Enroll into study
4. Fail the comprehension test questions
5. Click on the 'Cancel' button on the top right
6. Observe
AR: Blank white screen is displayed on failing comprehension test and clicking on cancel
ER: Should be redirected to consent sections | process | comprehension test blank white screen is displayed on failing comprehension test and clicking on cancel steps install the app login signup enroll into study fail the comprehension test questions click on the cancel button on the top right observe ar blank white screen is displayed on failing comprehension test and clicking on cancel er should be redirected to consent sections | 1 |
153,483 | 19,706,450,545 | IssuesEvent | 2022-01-12 22:41:46 | KaterinaOrg/maven-modular | https://api.github.com/repos/KaterinaOrg/maven-modular | closed | CVE-2019-12814 (Medium) detected in jackson-databind-2.9.6.jar - autoclosed | security vulnerability | ## CVE-2019-12814 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /module2/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- jackson-module-kotlin-2.9.6.jar (Root Library)
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KaterinaOrg/maven-modular/commit/5316d1e17d60b08f67a1c0f5526eeffbf1f3103a">5316d1e17d60b08f67a1c0f5526eeffbf1f3103a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.
<p>Publish Date: 2019-06-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12814>CVE-2019-12814</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2341">https://github.com/FasterXML/jackson-databind/issues/2341</a></p>
<p>Release Date: 2020-10-20</p>
<p>Fix Resolution: 2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/module2/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.fasterxml.jackson.module:jackson-module-kotlin:2.9.6;com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-12814","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12814","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-12814 (Medium) detected in jackson-databind-2.9.6.jar - autoclosed - ## CVE-2019-12814 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.6.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /module2/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.6/jackson-databind-2.9.6.jar</p>
<p>
Dependency Hierarchy:
- jackson-module-kotlin-2.9.6.jar (Root Library)
- :x: **jackson-databind-2.9.6.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/KaterinaOrg/maven-modular/commit/5316d1e17d60b08f67a1c0f5526eeffbf1f3103a">5316d1e17d60b08f67a1c0f5526eeffbf1f3103a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.
<p>Publish Date: 2019-06-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12814>CVE-2019-12814</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/FasterXML/jackson-databind/issues/2341">https://github.com/FasterXML/jackson-databind/issues/2341</a></p>
<p>Release Date: 2020-10-20</p>
<p>Fix Resolution: 2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.9.6","packageFilePaths":["/module2/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.fasterxml.jackson.module:jackson-module-kotlin:2.9.6;com.fasterxml.jackson.core:jackson-databind:2.9.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.7.9.6, 2.8.11.4, 2.9.9.1, 2.10.0","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-12814","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.x through 2.9.9. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has JDOM 1.x or 2.x jar in the classpath, an attacker can send a specifically crafted JSON message that allows them to read arbitrary local files on the server.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-12814","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_process | cve medium detected in jackson databind jar autoclosed cve medium severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy jackson module kotlin jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind x through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has jdom x or x jar in the classpath an attacker can send a specifically crafted json message that allows them to read arbitrary local files on the server publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com fasterxml jackson module jackson module kotlin com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind x through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has jdom x or x jar in the classpath an attacker can send a specifically crafted json message that allows them to read arbitrary local files on the server vulnerabilityurl | 0 |
4,709 | 7,548,353,065 | IssuesEvent | 2018-04-18 10:54:11 | qgis/QGIS-Documentation | https://api.github.com/repos/qgis/QGIS-Documentation | closed | [needs-docs][processing] Double clicking a history entry shows
the algorithm dialog instead of immediately executing same alg | Automatic new feature Easy Processing User Manual | Original commit: https://github.com/qgis/QGIS/commit/b25681cc43be5371de6987694c31c61aef5ac145 by nyalldawson
This allows users to edit the parameters before re-running,
which is a more common user-operation (e.g. changing the
input layer, changing a parameter value "oops, that buffer
was a bit too big....").
If someone wants to exactly re-run the algorithm without changes
it's only one extra click anyway... | 1.0 | [needs-docs][processing] Double clicking a history entry shows
the algorithm dialog instead of immediately executing same alg - Original commit: https://github.com/qgis/QGIS/commit/b25681cc43be5371de6987694c31c61aef5ac145 by nyalldawson
This allows users to edit the parameters before re-running,
which is a more common user-operation (e.g. changing the
input layer, changing a parameter value "oops, that buffer
was a bit too big....").
If someone wants to exactly re-run the algorithm without changes
it's only one extra click anyway... | process | double clicking a history entry shows the algorithm dialog instead of immediately executing same alg original commit by nyalldawson this allows users to edit the parameters before re running which is a more common user operation e g changing the input layer changing a parameter value oops that buffer was a bit too big if someone wants to exactly re run the algorithm without changes it s only one extra click anyway | 1 |
3,713 | 6,732,574,360 | IssuesEvent | 2017-10-18 12:03:58 | lockedata/rcms | https://api.github.com/repos/lockedata/rcms | opened | Manage sponsors | conference team odoo processes | ## Detailed task
- Add sponsors to a page
- Change a sponsor's listing
## Assessing the task
Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks.
Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback.
## Extra Info
- Site: [odoo](//http://188.166.159.192:8069)
- System documentation: [odoo docs](https://www.odoo.com/page/docs)
- Role: Conference team
- Area: Processes
| 1.0 | Manage sponsors - ## Detailed task
- Add sponsors to a page
- Change a sponsor's listing
## Assessing the task
Try to perform the task. Use google and the system documentation to help - part of what we're trying to assess how easy it is for people to work out how to do tasks.
Use a 👍 (`:+1:`) reaction to this task if you were able to perform the task. Use a 👎 (`:-1:`) reaction to the task if you could not complete it. Add a reply with any comments or feedback.
## Extra Info
- Site: [odoo](//http://188.166.159.192:8069)
- System documentation: [odoo docs](https://www.odoo.com/page/docs)
- Role: Conference team
- Area: Processes
| process | manage sponsors detailed task add sponsors to a page change a sponsor s listing assessing the task try to perform the task use google and the system documentation to help part of what we re trying to assess how easy it is for people to work out how to do tasks use a 👍 reaction to this task if you were able to perform the task use a 👎 reaction to the task if you could not complete it add a reply with any comments or feedback extra info site system documentation role conference team area processes | 1 |
111,766 | 4,487,795,417 | IssuesEvent | 2016-08-30 03:14:04 | concrete5/concrete5 | https://api.github.com/repos/concrete5/concrete5 | closed | /dashboard/system/files/image_uploading formatting | accepted:ready to start priority:love to have type:ux | Change bold image resizing to caps blue.
Add help text that actually explains what this does. Don't we always resize images for responsive breakpoints these days? Are we talking about blowing away the original or what?
| 1.0 | /dashboard/system/files/image_uploading formatting - Change bold image resizing to caps blue.
Add help text that actually explains what this does. Don't we always resize images for responsive breakpoints these days? Are we talking about blowing away the original or what?
| non_process | dashboard system files image uploading formatting change bold image resizing to caps blue add help text that actually explains what this does don t we always resize images for responsive breakpoints these days are we talking about blowing away the original or what | 0 |
1,240 | 3,777,802,071 | IssuesEvent | 2016-03-17 21:24:00 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | opened | March release | P1 Process Release / binary | Should be 0.2.1 (0.3 is almost there but we still needs some documentation for it).
We should cut the rc tomorrow, we have too many incompatibilities now.
/cc @kchodorow fyi | 1.0 | March release - Should be 0.2.1 (0.3 is almost there but we still needs some documentation for it).
We should cut the rc tomorrow, we have too many incompatibilities now.
/cc @kchodorow fyi | process | march release should be is almost there but we still needs some documentation for it we should cut the rc tomorrow we have too many incompatibilities now cc kchodorow fyi | 1 |
6,972 | 10,121,362,673 | IssuesEvent | 2019-07-31 15:28:38 | shirou/gopsutil | https://api.github.com/repos/shirou/gopsutil | closed | [process][linux] Reimplement TOP process.CPUPercent() | os:linux package:process | **[Issue]**
Hi, I am learening GO and I was tring to reimplement Top (Linux's instruction).
I found when using process.CPUPercent() to output the process's CPU Usage, there some different compare to Top's CPU Usage.
If I create a dead loop, it take times to increase the value of %CPU, and it seems the %CPU value of TOP is more sensitive to precess.CPUPercent.
How this solve or the answer.
Please help me with my questions.Thanks!
(sorry for the poor english)
**[the value]**

**[open dead loop]**
a.out is a dead loop process, it takes time to increase the value and seems can not get to 100

**[CODE]**
```go
func NewSort() {
ProcessInfo,err := process.Processes()
p = ProcessInfo
if err != nil {
log.Fatal(err)
}
fmt.Printf("PID PPID USER %%CPU %%MEM Status Name\n")
sort.Sort(pInfo(p))
for i := 0; i < number; i++ {
pid := p[i].Pid
ppid,_ := p[i].Ppid()
user,_ := p[i].Username()
cpu,_ := p[i].CPUPercent()
mem,_ := p[i].MemoryPercent()
sta,_ := p[i].Status()
str,_ := p[i].Name()
fmt.Printf("%-6v %-6v %-5v %-5.1f %-5.1f %-5v %-15v\n", pid, ppid, user, cpu, mem, sta, str)
}
fmt.Println()
}
```
| 1.0 | [process][linux] Reimplement TOP process.CPUPercent() - **[Issue]**
Hi, I am learening GO and I was tring to reimplement Top (Linux's instruction).
I found when using process.CPUPercent() to output the process's CPU Usage, there some different compare to Top's CPU Usage.
If I create a dead loop, it take times to increase the value of %CPU, and it seems the %CPU value of TOP is more sensitive to precess.CPUPercent.
How this solve or the answer.
Please help me with my questions.Thanks!
(sorry for the poor english)
**[the value]**

**[open dead loop]**
a.out is a dead loop process, it takes time to increase the value and seems can not get to 100

**[CODE]**
```go
func NewSort() {
ProcessInfo,err := process.Processes()
p = ProcessInfo
if err != nil {
log.Fatal(err)
}
fmt.Printf("PID PPID USER %%CPU %%MEM Status Name\n")
sort.Sort(pInfo(p))
for i := 0; i < number; i++ {
pid := p[i].Pid
ppid,_ := p[i].Ppid()
user,_ := p[i].Username()
cpu,_ := p[i].CPUPercent()
mem,_ := p[i].MemoryPercent()
sta,_ := p[i].Status()
str,_ := p[i].Name()
fmt.Printf("%-6v %-6v %-5v %-5.1f %-5.1f %-5v %-15v\n", pid, ppid, user, cpu, mem, sta, str)
}
fmt.Println()
}
```
| process | reimplement top process cpupercent hi i am learening go and i was tring to reimplement top linux s instruction i found when using process cpupercent to output the process s cpu usage there some different compare to top s cpu usage if i create a dead loop it take times to increase the value of cpu and it seems the cpu value of top is more sensitive to precess cpupercent how this solve or the answer please help me with my questions thanks sorry for the poor english a out is a dead loop process it takes time to increase the value and seems can not get to go func newsort processinfo err process processes p processinfo if err nil log fatal err fmt printf pid ppid user cpu mem status name n sort sort pinfo p for i i number i pid p pid ppid p ppid user p username cpu p cpupercent mem p memorypercent sta p status str p name fmt printf n pid ppid user cpu mem sta str fmt println | 1 |
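The gap the gopsutil record describes is mostly a sampling question: `top` computes CPU usage from the change in process CPU time over its refresh interval, so a fresh busy loop reads near 100% within one refresh, whereas a percentage derived from total CPU time since the process started (which is what the slow ramp in the record suggests) climbs only gradually. The sketch below shows the interval-based measurement with Python's psutil, used here purely to illustrate the sampling idea; it is not the Go gopsutil API from the record.

```python
# top-style measurement: compare process CPU time before and after an interval
# and divide the delta by the elapsed wall-clock time. psutil's
# Process.cpu_percent(interval=...) performs exactly this blocking comparison.
import os
import psutil

def top_like_cpu_percent(pid, interval=1.0):
    proc = psutil.Process(pid)
    return proc.cpu_percent(interval=interval)  # blocks for `interval` seconds

if __name__ == "__main__":
    # Measuring this (mostly idle) process should print a value near 0;
    # a busy loop measured the same way would read close to 100 immediately.
    print(top_like_cpu_percent(os.getpid(), interval=0.5))
```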
764,905 | 26,822,835,843 | IssuesEvent | 2023-02-02 10:40:48 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.youtube.com - see bug description | browser-firefox priority-critical engine-gecko os-win11 | <!-- @browser: Firefox 109.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/109.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/117684 -->
**URL**: https://www.youtube.com/
**Browser / Version**: Firefox 109.0
**Operating System**: Windows 11
**Tested Another Browser**: Yes Firefox
**Problem type**: Something else
**Description**: The site lags so badly, the video is so choppy and the whole browser just stutters so badly!!
**Steps to Reproduce**:
The site lags so badly, the video is so choppy and the whole browser just stutters so badly!!
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | www.youtube.com - see bug description - <!-- @browser: Firefox 109.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/109.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/117684 -->
**URL**: https://www.youtube.com/
**Browser / Version**: Firefox 109.0
**Operating System**: Windows 11
**Tested Another Browser**: Yes Firefox
**Problem type**: Something else
**Description**: The site lags so badly, the video is so choppy and the whole browser just stutters so badly!!
**Steps to Reproduce**:
The site lags so badly, the video is so choppy and the whole browser just stutters so badly!!
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | see bug description url browser version firefox operating system windows tested another browser yes firefox problem type something else description the site lags so bad video is so choppy and the whole browser just stutters so badly steps to reproduce the site lags so badly the video is so choppy and the whole browser just stutters so badly browser configuration none from with ❤️ | 0 |
220,617 | 17,211,599,254 | IssuesEvent | 2021-07-19 05:46:50 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | opened | Improved OpInfo dtype testing | module: tests triaged | Currently OpInfos must specify which dtypes they support and have a test validating they do so correctly:
https://github.com/pytorch/pytorch/blob/593e8f41cae82a279a323510dfcf1da6466ad5c9/test/test_ops.py#L60
There are a couple improvements we could make to this test:
- It should report all changes for both forward and backward
- It should test all sample inputs and show how many failed, if any, so engineers can understand if an operator has partial support for a dtype | 1.0 | Improved OpInfo dtype testing - Currently OpInfos must specify which dtypes they support and have a test validating they do so correctly:
https://github.com/pytorch/pytorch/blob/593e8f41cae82a279a323510dfcf1da6466ad5c9/test/test_ops.py#L60
There are a couple improvements we could make to this test:
- It should report all changes for both forward and backward
- It should test all sample inputs and show how many failed, if any, so engineers can understand if an operator has partial support for a dtype | non_process | improved opinfo dtype testing currently opinfos must specify which dtypes they support and have a test validating they do so correctly there are a couple improvements we could make to this test it should report all changes for both forward and backward it should test all sample inputs and show how many failed if any so engineers can understand if an operator has partial support for a dtype | 0 |
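A rough sketch of the second bullet above — exercising every sample input per dtype and reporting how many failed rather than stopping at the first failure. The names `op.sample_inputs`, `op.fn` and `sample.input/args/kwargs` are placeholders modelled loosely on the issue's wording, not PyTorch's actual OpInfo API:
```python
def count_dtype_failures(op, device, dtypes):
    """Run every sample input for each dtype and tally failures.

    Placeholder interface: `op.sample_inputs(device, dtype)` yields objects
    with .input, .args and .kwargs, and `op.fn` is the callable under test.
    Returns {dtype: (num_failed, num_total)}.
    """
    report = {}
    for dtype in dtypes:
        failed = total = 0
        for sample in op.sample_inputs(device, dtype):
            total += 1
            try:
                op.fn(sample.input, *sample.args, **sample.kwargs)
            except Exception:
                failed += 1
        report[dtype] = (failed, total)
    return report
```
A report such as `{float16: (3, 10)}` tells an engineer that an operator has partial rather than missing support for a dtype, which is exactly the distinction the issue asks the test to surface.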
494,802 | 14,265,966,519 | IssuesEvent | 2020-11-20 17:56:26 | open-telemetry/opentelemetry-java-instrumentation | https://api.github.com/repos/open-telemetry/opentelemetry-java-instrumentation | closed | Don't support case-insensitivity in propagator names | priority:p3 release:required-for-ga | We don't support case-insensitivity in exporter names, or other configuration property values, so don't think we should support for propagator names either.
Initially noticed this in https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/1545#discussion_r522603943 | 1.0 | Don't support case-insensitivity in propagator names - We don't support case-insensitivity in exporter names, or other configuration property values, so don't think we should support for propagator names either.
Initially noticed this in https://github.com/open-telemetry/opentelemetry-java-instrumentation/pull/1545#discussion_r522603943 | non_process | don t support case insensitivity in propagator names we don t support case insensitivity in exporter names or other configuration property values so don t think we should support for propagator names either initially noticed this in | 0 |
76,252 | 9,414,823,084 | IssuesEvent | 2019-04-10 11:06:10 | algolia/instantsearch.js | https://api.github.com/repos/algolia/instantsearch.js | closed | RFC: InfiniteHits should load previous pages | Design: API Design: feature Feedback ⚠️ Not ready | When reloading a page with urlSync and page > 0, the infinite hits doesn't load the previous hence it doesn't reproduce the exact same state. We decided to close https://github.com/algolia/instantsearch.js/issues/2750 but we've had other users asking for it.
### Proposed API
```diff
const widget = instantsearch.widgets.infiniteHits({
container: string|HTMLElement,
templates: [InfiniteHitsTemplates],
showMoreLabel: [string],
transformData: [InfiniteHitsTransforms],
cssClasses: [InfiniteHitsCSSClasses],
escapeHits: [boolean],
+ loadPreviousPagesOnReload: [boolean=false]
}: InfiniteHitsWidgetOptions);
```
### Behaviour
The widget is not aware of the nature of the rendering. Therefore it should check at each rendering whether the current page matches the cache of results; if it doesn't, it should reload the missing pages.
A cool side effect is that it could also handle changes to the number of hits per page.
### Raise your voice ;)
Because we might have a lot of pages, we should probably add anchors, and go directly to the current page?
The name is long but explicit 🤔
Please comment if you see a better name or you see any other issue. | 2.0 | RFC: InfiniteHits should load previous pages - When reloading a page with urlSync and page > 0, the infinite hits doesn't load the previous hence it doesn't reproduce the exact same state. We decided to close https://github.com/algolia/instantsearch.js/issues/2750 but we've had other users asking for it.
### Proposed API
```diff
const widget = instantsearch.widgets.infiniteHits({
container: string|HTMLElement,
templates: [InfiniteHitsTemplates],
showMoreLabel: [string],
transformData: [InfiniteHitsTransforms],
cssClasses: [InfiniteHitsCSSClasses],
escapeHits: [boolean],
+ loadPreviousPagesOnReload: [boolean=false]
}: InfiniteHitsWidgetOptions);
```
### Behaviour
The widget is not aware of the nature of the rendering. Therefore it should check at each rendering whether the current page matches the cache of results; if it doesn't, it should reload the missing pages.
A cool side effect is that it could also handle changes to the number of hits per page.
### Raise your voice ;)
Because we might have a lot of pages, we should probably add anchors, and go directly to the current page?
The name is long but explicit 🤔
Please comment if you see a better name or you see any other issue. | non_process | rfc infinitehits should load previous pages when reloading a page with urlsync and page the infinite hits doesn t load the previous hence it doesn t reproduce the exact same state we decided to close but we ve had other users asking for it proposed api diff const widget instantsearch widgets infinitehits container string htmlelement templates showmorelabel transformdata cssclasses escapehits loadpreviouspagesonreload infinitehitswidgetoptions behaviour the widget is not aware of the nature of the rendering therefore it should check at each rendering if the current page matches the cache of results if it doesn t it should reload the missing pages cool side effect of that is that it could handle the changes of hits per page raise your voice because we might have a lot of pages we should probably add anchors and go directly to the current page the name is long but explicit 🤔 please comment if you see a better name or you see any other issue | 0 |
8,168 | 11,386,130,679 | IssuesEvent | 2020-01-29 12:38:31 | Open-EO/openeo-processes | https://api.github.com/repos/Open-EO/openeo-processes | closed | array_count? | new process | While working on #137, I found that it could be useful for several use cases to have an operation that returns the number of elements in a list, similarly to the existing count process for data cubes.
Some use cases:
- Count the number of elements in a time series when no data is ignored
- Compute mean, sd, variance etc. Every formula that computes something like `1/n * func(...)` (n = number of elements) or so.
Side note: Having that said, we could "save" (i.e. remove) quite a lot of processes if we change arrays to be 1D datacubes. | 1.0 | array_count? - While working on #137, I found that it could be useful for several use cases to have an operation that returns the number of elements in a list, similarly to the existing count process for data cubes.
Some use cases:
- Count the number of elements in a time series when no data is ignored
- Compute mean, sd, variance etc. Every formula that computes something like `1/n * func(...)` (n = number of elements) or so.
Side note: Having that said, we could "save" (i.e. remove) quite a lot of processes if we change arrays to be 1D datacubes. | process | array count while working on i found that it could be useful for several use cases to have an operation that returns the number of elements in a list similarly to the existing count process for data cubes some use cases count the number of elements in a time series when no data is ignored compute mean sd variance etc every formula that computes something like n func n number of elements or so side note having that said we could save i e remove quite a lot of processes if we change arrays to be datacubes | 1 |
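A tiny illustration in plain Python (not an openEO process graph) of the second use case above: once no-data values are ignored, the `n` in formulas like `1/n * func(...)` is the count of valid elements rather than the length of the array, which is exactly what an `array_count`-style process would provide:
```python
values = [2.0, None, 4.0, None, 9.0]          # None marks "no data"

valid = [v for v in values if v is not None]  # drop the no-data entries
count = len(valid)                            # what array_count would return
mean = sum(valid) / count                     # 5.0, not sum(valid) / len(values)

print(count, mean)
```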
18,598 | 24,573,188,846 | IssuesEvent | 2022-10-13 10:12:57 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Android][iOS] Inconsistency between Android and iOS for the sorting order for study resources | Bug P1 iOS Android Process: Fixed Process: Tested dev | Steps:-
1. Add a few resources for the study and publish from SB
2. Navigate to the resources section in the mobile app and observe the available resources
A/R:- Available resources are displayed in a different sorting order in the Android and iOS apps
E/R:- Available resources should be displayed in the same order and should be consistent between both platforms
**Android** *(screenshot omitted)*
**iOS** *(screenshot omitted)*
| 2.0 | [Android][iOS] Inconsistency between Android and iOS for the sorting order for study resources - Steps:-
1. Add a few resources for the study and publish from SB
2. Navigate to the resources section in the mobile app and observe the available resources
A/R:- Available resources are displayed in a different sorting order in the Android and iOS apps
E/R:- Available resources should be displayed in the same order and should be consistent between both platforms
**Android** *(screenshot omitted)*
**iOS** *(screenshot omitted)*
| process | inconsistency between android and ios for the sorting order for study resources steps add few resources for the study and publish from sb navigate to resources section in mobile and observe the available resources a r available resources are displaying in different sorting order in android and ios apps e r available resources should be displayed in same order and should be consistent between both platforms android ios | 1 |
452,654 | 13,057,570,241 | IssuesEvent | 2020-07-30 07:35:14 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | my.account.sony.com - see bug description | browser-firefox-mobile engine-gecko priority-normal type-fastclick | <!-- @browser: Firefox Mobile 80.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:80.0) Gecko/80.0 Firefox/80.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/55999 -->
<!-- @extra_labels: type-fastclick -->
**URL**: https://my.account.sony.com/central/signin/?response_type=code&client_id=78420c74-1fdf-4575-b43f-eb94c7d770bf&redirect_uri=https%3A%2F%2Fwww.bungie.net%2Fen%2FUser%2FSignIn%2FPsnid%2F&scope=psn%3As2s&request_locale=en_US&state=4592617401654830053&service_entity=urn%3Aservice-entity%3Apsn&cid=f7f084fc-5321-4172-aee2-a6a4b186daa5&error=login_required&error_code=4165&no_captcha=true#/signin/ca/password?entry=ca
**Browser / Version**: Firefox Mobile 80.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: can't log in
**Steps to Reproduce**:
Tried to log in to the PlayStation site with my PSN ID
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200727095125</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>hasFastClick: true</li>
</ul>
</details>
Submitted in the name of `@ogslimtony`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | my.account.sony.com - see bug description - <!-- @browser: Firefox Mobile 80.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:80.0) Gecko/80.0 Firefox/80.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/55999 -->
<!-- @extra_labels: type-fastclick -->
**URL**: https://my.account.sony.com/central/signin/?response_type=code&client_id=78420c74-1fdf-4575-b43f-eb94c7d770bf&redirect_uri=https%3A%2F%2Fwww.bungie.net%2Fen%2FUser%2FSignIn%2FPsnid%2F&scope=psn%3As2s&request_locale=en_US&state=4592617401654830053&service_entity=urn%3Aservice-entity%3Apsn&cid=f7f084fc-5321-4172-aee2-a6a4b186daa5&error=login_required&error_code=4165&no_captcha=true#/signin/ca/password?entry=ca
**Browser / Version**: Firefox Mobile 80.0
**Operating System**: Android
**Tested Another Browser**: Yes Other
**Problem type**: Something else
**Description**: can't log in
**Steps to Reproduce**:
Tried to log in to the PlayStation site with my PSN ID
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200727095125</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>hasFastClick: true</li>
</ul>
</details>
Submitted in the name of `@ogslimtony`
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | my account sony com see bug description url browser version firefox mobile operating system android tested another browser yes other problem type something else description cant login steps to reproduce tried to login into playstation site with my psn id browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true hasfastclick true submitted in the name of ogslimtony from with ❤️ | 0 |
165,804 | 14,011,579,102 | IssuesEvent | 2020-10-29 07:38:03 | thoth-station/s2i-tensorflow-notebook | https://api.github.com/repos/thoth-station/s2i-tensorflow-notebook | closed | I don't know how to use this | documentation enhancement good first issue question | Same issue as for the scipy notebook - if I am a random user seeing this, I don't know how to use it - https://github.com/thoth-station/s2i-scipy-notebook/issues/4 | 1.0 | I don't know how to use this - Same issue as for the scipy notebook - if I am a random user seeing this, I don't know how to use it - https://github.com/thoth-station/s2i-scipy-notebook/issues/4 | non_process | i don t know how to use this same issue as for the scipy notebook if i am a random user seeing this i don t know how to use it | 0 |
16,361 | 21,046,334,897 | IssuesEvent | 2022-03-31 16:20:50 | pystatgen/sgkit | https://api.github.com/repos/pystatgen/sgkit | opened | Doc build is failing | process + tools upstream | From https://github.com/pystatgen/sgkit/runs/5763700555?check_suite_focus=true:
```
Running Sphinx v4.2.0
python exec: /opt/hostedtoolcache/Python/3.8.[12](https://github.com/pystatgen/sgkit/runs/5763700555?check_suite_focus=true#step:6:12)/x64/bin/python
sys.path: ['/home/runner/work/sgkit/sgkit', '/opt/hostedtoolcache/Python/3.8.12/x64/bin', '/home/runner/work/sgkit/sgkit', '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python38.zip', '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8', '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/lib-dynload', '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages']
making output directory... done
[autosummary] generating autosummary for: about.rst, api.rst, contributing.rst, examples/gwas_tutorial.ipynb, examples/index.rst, getting_started.rst, how_do_i.rst, index.rst, user_guide.rst, vcf.rst
WARNING: [autosummary] failed to import 'sgkit.io.vcf.concat_zarrs': no module named sgkit.io.vcf.concat_zarrs
WARNING: [autosummary] failed to import 'sgkit.io.vcf.partition_into_regions': no module named sgkit.io.vcf.partition_into_regions
WARNING: [autosummary] failed to import 'sgkit.io.vcf.vcf_to_zarr': no module named sgkit.io.vcf.vcf_to_zarr
WARNING: [autosummary] failed to import 'sgkit.io.vcf.vcf_to_zarrs': no module named sgkit.io.vcf.vcf_to_zarrs
``` | 1.0 | Doc build is failing - From https://github.com/pystatgen/sgkit/runs/5763700555?check_suite_focus=true:
```
Running Sphinx v4.2.0
python exec: /opt/hostedtoolcache/Python/3.8.[12](https://github.com/pystatgen/sgkit/runs/5763700555?check_suite_focus=true#step:6:12)/x64/bin/python
sys.path: ['/home/runner/work/sgkit/sgkit', '/opt/hostedtoolcache/Python/3.8.12/x64/bin', '/home/runner/work/sgkit/sgkit', '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python38.zip', '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8', '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/lib-dynload', '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages']
making output directory... done
[autosummary] generating autosummary for: about.rst, api.rst, contributing.rst, examples/gwas_tutorial.ipynb, examples/index.rst, getting_started.rst, how_do_i.rst, index.rst, user_guide.rst, vcf.rst
WARNING: [autosummary] failed to import 'sgkit.io.vcf.concat_zarrs': no module named sgkit.io.vcf.concat_zarrs
WARNING: [autosummary] failed to import 'sgkit.io.vcf.partition_into_regions': no module named sgkit.io.vcf.partition_into_regions
WARNING: [autosummary] failed to import 'sgkit.io.vcf.vcf_to_zarr': no module named sgkit.io.vcf.vcf_to_zarr
WARNING: [autosummary] failed to import 'sgkit.io.vcf.vcf_to_zarrs': no module named sgkit.io.vcf.vcf_to_zarrs
``` | process | doc build is failing from running sphinx python exec opt hostedtoolcache python sys path making output directory done generating autosummary for about rst api rst contributing rst examples gwas tutorial ipynb examples index rst getting started rst how do i rst index rst user guide rst vcf rst warning failed to import sgkit io vcf concat zarrs no module named sgkit io vcf concat zarrs warning failed to import sgkit io vcf partition into regions no module named sgkit io vcf partition into regions warning failed to import sgkit io vcf vcf to zarr no module named sgkit io vcf vcf to zarr warning failed to import sgkit io vcf vcf to zarrs no module named sgkit io vcf vcf to zarrs | 1 |
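The warnings above come from autosummary failing to import the optional `sgkit.io.vcf` members during the docs build. One common Sphinx-side workaround — whether it is the right fix for sgkit's docs is an assumption — is to install the optional VCF dependencies in the docs environment, or to mock whatever import is actually missing in `conf.py`:
```python
# docs/conf.py (sketch): let autodoc/autosummary import sgkit.io.vcf even when
# its optional backends are not installed in the docs build environment.
autodoc_mock_imports = [
    "cyvcf2",  # assumed optional dependency of sgkit.io.vcf; adjust to what is missing
]
```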
22,233 | 30,784,632,507 | IssuesEvent | 2023-07-31 12:28:27 | keras-team/keras-cv | https://api.github.com/repos/keras-team/keras-cv | closed | Add augment_bounding_boxes support to CenterCrop layer | contribution-welcome preprocessing | The augment_bounding_boxes should be implemented for CenterCrop Layer in keras_cv. The PR should contain implementation, test scripts and a demo script to verify implementation.
Example code for implementing augment_bounding_boxes() can be found here
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_flip.py#:~:text=def%20augment_bounding_boxes(,)%3A
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_rotation.py#:~:text=def%20augment_image(self%2C%20image%2C%20transformation%2C%20**kwargs)%3A
- The implementations can be verified using demo utils in keras_cv.bounding_box - Example of demo script can be found here : https://github.com/keras-team/keras-cv/blob/master/examples/layers/preprocessing/bounding_box/random_rotation_demo.py | 1.0 | Add augment_bounding_boxes support to CenterCrop layer - The augment_bounding_boxes should be implemented for CenterCrop Layer in keras_cv. The PR should contain implementation, test scripts and a demo script to verify implementation.
Example code for implementing augment_bounding_boxes() can be found here
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_flip.py#:~:text=def%20augment_bounding_boxes(,)%3A
- https://github.com/keras-team/keras-cv/blob/master/keras_cv/layers/preprocessing/random_rotation.py#:~:text=def%20augment_image(self%2C%20image%2C%20transformation%2C%20**kwargs)%3A
- The implementations can be verified using demo utils in keras_cv.bounding_box - Example of demo script can be found here : https://github.com/keras-team/keras-cv/blob/master/examples/layers/preprocessing/bounding_box/random_rotation_demo.py | process | add augment bounding boxes support to centercrop layer the augment bounding boxes should be implemented for centercrop layer in keras cv the pr should contain implementation test scripts and a demo script to verify implementation example code for implementing augment bounding boxes can be found here the implementations can be verified using demo utils in keras cv bounding box example of demo script can be found here | 1 |
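A framework-free sketch of the geometry a CenterCrop bounding-box update has to perform: with boxes in absolute `xyxy` pixel coordinates, subtract the crop offsets and clip to the crop size. This only illustrates the math — a real KerasCV implementation would go through the `augment_bounding_boxes()` hook and the bounding-box format utilities referenced in the links above, and the function name here is made up:
```python
def center_crop_boxes(boxes, src_h, src_w, crop_h, crop_w):
    """Shift and clip [x1, y1, x2, y2] pixel boxes for a centered crop.

    boxes: list of [x1, y1, x2, y2] in the original image's coordinates.
    Returns the boxes expressed in the cropped image's coordinates; a box
    that falls outside the crop collapses to zero area.
    """
    off_x = (src_w - crop_w) / 2.0
    off_y = (src_h - crop_h) / 2.0
    out = []
    for x1, y1, x2, y2 in boxes:
        nx1 = min(max(x1 - off_x, 0.0), crop_w)
        ny1 = min(max(y1 - off_y, 0.0), crop_h)
        nx2 = min(max(x2 - off_x, 0.0), crop_w)
        ny2 = min(max(y2 - off_y, 0.0), crop_h)
        out.append([nx1, ny1, nx2, ny2])
    return out

# A box covering the top-left quadrant of a 200x200 image keeps only the part
# overlapping the centered 100x100 crop: [[0.0, 0.0, 50.0, 50.0]].
print(center_crop_boxes([[0, 0, 100, 100]], 200, 200, 100, 100))
```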
18,957 | 24,920,800,077 | IssuesEvent | 2022-10-30 23:11:39 | lynnandtonic/nestflix.fun | https://api.github.com/repos/lynnandtonic/nestflix.fun | closed | Add Garth Marenghi's Darkplace | suggested title in process | "Darkplace is presented as a lost classic: a television series produced in the 1980s, though not broadcast at the time (except in Peru). The presentation features commentary from many of the "original" cast, where characters such as Marenghi and Learner reflect on making the show." (https://en.wikipedia.org/wiki/Garth_Marenghi%27s_Darkplace)
Title:
Garth Marenghi's Darkplace
Type (film/tv show):
TV Show
Film or show in which it appears:
Garth Marenghi's Darkplace
Is the parent film/show streaming anywhere?
Channel 4 : https://www.channel4.com/programmes/garth-marenghis-darkplace
About when in the parent film/show does it appear?
Every episode
Actual footage of the film/show can be seen (yes/no)?
yes
https://www.youtube.com/watch?v=yk0qGq7P3lM
| 1.0 | Add Garth Marenghi's Darkplace - "Darkplace is presented as a lost classic: a television series produced in the 1980s, though not broadcast at the time (except in Peru). The presentation features commentary from many of the "original" cast, where characters such as Marenghi and Learner reflect on making the show." (https://en.wikipedia.org/wiki/Garth_Marenghi%27s_Darkplace)
Title:
Garth Marenghi's Darkplace
Type (film/tv show):
TV Show
Film or show in which it appears:
Garth Marenghi's Darkplace
Is the parent film/show streaming anywhere?
Channel 4 : https://www.channel4.com/programmes/garth-marenghis-darkplace
About when in the parent film/show does it appear?
Every episode
Actual footage of the film/show can be seen (yes/no)?
yes
https://www.youtube.com/watch?v=yk0qGq7P3lM
| process | add garth marenghi s darkplace darkplace is presented as a lost classic a television series produced in the though not broadcast at the time except in peru the presentation features commentary from many of the original cast where characters such as marenghi and learner reflect on making the show title garth marenghi s darkplace type film tv show tv show film or show in which it appears garth marenghi s darkplace is the parent film show streaming anywhere channel about when in the parent film show does it appear every episode actual footage of the film show can be seen yes no yes | 1 |
39,411 | 9,440,286,446 | IssuesEvent | 2019-04-14 16:46:45 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | linprog error: UnboundLocalError: local variable 'nit2' referenced before assignment | defect scipy.optimize | Hello! First time posting an issue, so let me know if you need more/different information.
I am getting the following error when calling `optimize.linprog` in `scipy` v1.2.1.
```---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
<ipython-input-46-27a3a22e62a0> in <module>()
----> 1 results = linprog(c, A_ub, b_ub, A_eq, b_eq)
/usr/local/anaconda/lib/python3.6/site-packages/scipy/optimize/_linprog.py in linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method, callback, options)
464 if meth == 'simplex':
465 x, status, message, iteration = _linprog_simplex(
--> 466 c, c0=c0, A=A, b=b, callback=callback, _T_o=T_o, **solver_options)
467 elif meth == 'interior-point':
468 x, status, message, iteration = _linprog_ip(
/usr/local/anaconda/lib/python3.6/site-packages/scipy/optimize/_linprog_simplex.py in _linprog_simplex(c, c0, A, b, maxiter, disp, callback, tol, bland, _T_o, **unknown_options)
613 x = solution[:m]
614
--> 615 return x, status, messages[status], int(nit2)
616
UnboundLocalError: local variable 'nit2' referenced before assignment
```
From looking at https://github.com/scipy/scipy/blob/v1.2.1/scipy/optimize/_linprog_simplex.py, it appears that `nit2` is always returned, but only assigned if `status==0`, where `status` is returned from the call to `_solve_simplex`.
Unfortunately, I am working with somewhat large matrices, so I don't know how to easily provide the data here to reproduce the error. The structure is as follows.
- The call is `results = linprog(c, A_ub, b_ub, A_eq, b_eq)`
- `c.shape` is `(41,)`
- `A_ub.shape, b_ub.shape` is `(150, 41), (150,)`
- `A_eq.shape, b_eq.shape` is `(1, 41), (1,)`
### Scipy/Numpy/Python version information:
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
1.2.1 1.16.1 sys.version_info(major=3, minor=6, micro=7, releaselevel='final', serial=0)
```
| 1.0 | linprog error: UnboundLocalError: local variable 'nit2' referenced before assignment - Hello! First time posting an issue, so let me know if you need more/different information.
I am getting the following error when calling `optimize.linprog` in `scipy` v1.2.1.
```---------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
<ipython-input-46-27a3a22e62a0> in <module>()
----> 1 results = linprog(c, A_ub, b_ub, A_eq, b_eq)
/usr/local/anaconda/lib/python3.6/site-packages/scipy/optimize/_linprog.py in linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method, callback, options)
464 if meth == 'simplex':
465 x, status, message, iteration = _linprog_simplex(
--> 466 c, c0=c0, A=A, b=b, callback=callback, _T_o=T_o, **solver_options)
467 elif meth == 'interior-point':
468 x, status, message, iteration = _linprog_ip(
/usr/local/anaconda/lib/python3.6/site-packages/scipy/optimize/_linprog_simplex.py in _linprog_simplex(c, c0, A, b, maxiter, disp, callback, tol, bland, _T_o, **unknown_options)
613 x = solution[:m]
614
--> 615 return x, status, messages[status], int(nit2)
616
UnboundLocalError: local variable 'nit2' referenced before assignment
```
From looking at https://github.com/scipy/scipy/blob/v1.2.1/scipy/optimize/_linprog_simplex.py, it appears that `nit2` is always returned, but only assigned if `status==0`, where `status` is returned from the call to `_solve_simplex`.
Unfortunately, I am working with somewhat large matrices, so I don't know how to easily provide the data here to reproduce the error. The structure is as follows.
- The call is `results = linprog(c, A_ub, b_ub, A_eq, b_eq)`
- `c.shape` is `(41,)`
- `A_ub.shape, b_ub.shape` is `(150, 41), (150,)`
- `A_eq.shape, b_eq.shape` is `(1, 41), (1,)`
### Scipy/Numpy/Python version information:
```
import sys, scipy, numpy; print(scipy.__version__, numpy.__version__, sys.version_info)
1.2.1 1.16.1 sys.version_info(major=3, minor=6, micro=7, releaselevel='final', serial=0)
```
| non_process | linprog error unboundlocalerror local variable referenced before assignment hello first time posting an issue so let me know if you need more different information i am getting the following error when calling optimize linprog in scipy unboundlocalerror traceback most recent call last in results linprog c a ub b ub a eq b eq usr local anaconda lib site packages scipy optimize linprog py in linprog c a ub b ub a eq b eq bounds method callback options if meth simplex x status message iteration linprog simplex c a a b b callback callback t o t o solver options elif meth interior point x status message iteration linprog ip usr local anaconda lib site packages scipy optimize linprog simplex py in linprog simplex c a b maxiter disp callback tol bland t o unknown options x solution return x status messages int unboundlocalerror local variable referenced before assignment from looking at it appears that is always returned but only assigned if status where status is returned from the call to solve simplex unfortunately i am working with somewhat large matrices so i don t know how to easily provide the data here to reproduce the error the structure is as follows the call is results linprog c a ub b ub a eq b eq c shape is a ub shape b ub shape is a eq shape b eq shape is scipy numpy python version information import sys scipy numpy print scipy version numpy version sys version info sys version info major minor micro releaselevel final serial | 0 |
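The reported crash is the classic Python pattern of a name that is assigned on only one branch but used unconditionally. A minimal reproduction and the usual fix (initialise the variable before the branch) — this mirrors the structure described in the report, not scipy's actual source:
```python
def solve(status):
    if status == 0:
        nit2 = 10                 # only assigned on the success path
    return status, int(nit2)      # UnboundLocalError whenever status != 0

def solve_fixed(status):
    nit2 = 0                      # defined on every path
    if status == 0:
        nit2 = 10
    return status, int(nit2)

# solve(1) raises UnboundLocalError; solve_fixed(1) returns (1, 0).
```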
173,319 | 27,421,237,337 | IssuesEvent | 2023-03-01 16:51:53 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | closed | EPIC: VBA - Public Contact Team Research | Research Design Epic Facilities Regional office | ### Team
- Aslan French
- Dave Conlon
### Method
- Semi-structured interviews with VBA Public Contact staff
### Timeline / Dependencies
- 🛑 Regional Office point of contacts identifying appropriate contacts is a blocker
### Research Questions
- What are the common things Veterans ask for?
- When do Veterans ask for a benefit by phone vs. in person and typically why?
- What do Veterans typically try to accomplish during a facility visit?
- What are the situations when Veterans don't feel successful getting a benefit at the facility?
- What do Veterans need to know or do to be adequately prepared for a visit?
- What are the unique characteristics of the benefits offered by that facility?
- How are the available services handled by the Public Contact team?
- How are Veteran Service Organizations (VSOs) located at that facility important and why?
- When should some benefits be encouraged or promoted to be done online?
- Does the Public Contact staff handle any aspect of the facility's website?
| 1.0 | EPIC: VBA - Public Contact Team Research - ### Team
- Aslan French
- Dave Conlon
### Method
- Semi-structured interviews with VBA Public Contact staff
### Timeline / Dependencies
- 🛑 Regional Office point of contacts identifying appropriate contacts is a blocker
### Research Questions
- What are the common things Veterans ask for?
- When do Veterans ask for a benefit by phone vs. in person and typically why?
- What do Veterans typically try to accomplish during a facility visit?
- What are the situations when Veterans don't feel successful getting a benefit at the facility?
- What do Veterans need to know or do to be adequately prepared for a visit?
- What are the unique characteristics of the benefits offered by that facility?
- How are the available services handled by the Public Contact team?
- How are Veteran Service Organizations (VSOs) located at that facility important and why?
- When should some benefits be encouraged or promoted to be done online?
- Does the Public Contact staff handle any aspect of the facility's website?
| non_process | epic vba public contact team research team aslan french dave conlon method semi structured interviews with vba public contact staff timeline dependencies 🛑 regional office point of contacts identifying appropriate contacts is a blocker research questions what are the common things veterans ask for when do veterans ask for a benefit by phone vs in person and typically why what do veterans typically try to accomplish during a facility visit what are the situations when veterans don t feel successful getting a benefit at the facility what do veterans need to know or do to be adequately prepared for a visit what are the unique characteristics of the benefits offered by that facility how are the available services handled by the public contact team how are veteran service organizations vsos located at that facility important and why when should some benefits be encouraged or promoted to be done online does the public contact staff handle any aspect of the facility s website | 0 |
22,381 | 31,142,283,513 | IssuesEvent | 2023-08-16 01:44:03 | cypress-io/cypress | https://api.github.com/repos/cypress-io/cypress | closed | Flaky test: net_stubbing relative path | process: flaky test topic: flake ❄️ stage: fire watch priority: low topic: net_stubbing.cy.ts stale | ### Link to dashboard or CircleCI failure
https://dashboard.cypress.io/projects/ypt4pf/runs/38124/overview/7df68fd2-796c-4e1c-a110-b7e170b965c1
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/net_stubbing.cy.ts#L1876
### Analysis
<img width="445" alt="Screen Shot 2022-08-18 at 8 43 56 AM" src="https://user-images.githubusercontent.com/26726429/185437616-09f31879-19d3-4fe9-961b-7a3f656c0e52.png">
### Cypress Version
10.6.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed | 1.0 | Flaky test: net_stubbing relative path - ### Link to dashboard or CircleCI failure
https://dashboard.cypress.io/projects/ypt4pf/runs/38124/overview/7df68fd2-796c-4e1c-a110-b7e170b965c1
### Link to failing test in GitHub
https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/net_stubbing.cy.ts#L1876
### Analysis
<img width="445" alt="Screen Shot 2022-08-18 at 8 43 56 AM" src="https://user-images.githubusercontent.com/26726429/185437616-09f31879-19d3-4fe9-961b-7a3f656c0e52.png">
### Cypress Version
10.6.0
### Other
Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed | process | flaky test net stubbing relative path link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at am src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed | 1 |
71,821 | 9,540,027,953 | IssuesEvent | 2019-04-30 18:27:03 | redfin/react-server | https://api.github.com/repos/redfin/react-server | closed | Upgrade to Webpack 4 + Babel 7 | breaking change cleanup documentation enhancement housekeeping performance security | It's time, folks. This project is stuck on Webpack 1 and Babel 6 which limits a lot of what can be done, in addition to just being old. Webpack 4 and Babel 7 will poise us for the future and allow us to better integrate the Rollup changes proposed in #985 .
- [x] Upgrade packages to use Webpack 4 and Babel 7
- [x] Ensure sane default values for the react-server-cli generated Webpack configs
- [x] Incorporate the use of `webpack-dev-middleware` instead of `webpack-dev-server` to simplify things (addresses proposal #807 and PR #808 (that was rejected)...gonna have to push this through due to unnecessary complexity here). This also removes the need for #932 and #774 | 1.0 | Upgrade to Webpack 4 + Babel 7 - It's time, folks. This project is stuck on Webpack 1 and Babel 6 which limits a lot of what can be done, in addition to just being old. Webpack 4 and Babel 7 will poise us for the future and allow us to better integrate the Rollup changes proposed in #985 .
- [x] Upgrade packages to use Webpack 4 and Babel 7
- [x] Ensure sane default values for the react-server-cli generated Webpack configs
- [x] Incorporate the use of `webpack-dev-middleware` instead of `webpack-dev-server` to simplify things (addresses proposal #807 and PR #808 (that was rejected)...gonna have to push this through due to unnecessary complexity here). This also removes the need for #932 and #774 | non_process | upgrade to webpack babel it s time folks this project is stuck on webpack and babel which limits a lot of what can be done in addition to just being old webpack and babel will poise us for the future and allow us to better integrate the rollup changes proposed in upgrade packages to use webpack and babel ensure sane default values for the react server cli generated webpack configs incorporate the use of webpack dev middleware instead of webpack dev server to simplify things addresses proposal and pr that was rejected gonna have to push this through due to unnecessary complexity here this also removes the need for and | 0 |
17,281 | 23,084,084,090 | IssuesEvent | 2022-07-26 09:48:00 | comment-reboot/blog-comments-list | https://api.github.com/repos/comment-reboot/blog-comments-list | opened | Inter-Process Communication | TechPaper | Gitalk /post/inter-process-communication/ | https://blog.ibyte.me/post/inter-process-communication/
Share computer science and technology,Java,Golang,Rust,Distributed Systems, System Design articles. | 1.0 | Inter-Process Communication | TechPaper - https://blog.ibyte.me/post/inter-process-communication/
Share computer science and technology,Java,Golang,Rust,Distributed Systems, System Design articles. | process | inter process communication techpaper share computer science and technology java golang rust distributed systems system design articles | 1 |
9,276 | 12,302,469,265 | IssuesEvent | 2020-05-11 17:01:45 | AgPipeline/drone-pipeline-environment | https://api.github.com/repos/AgPipeline/drone-pipeline-environment | closed | Process 3 day's of Francelino's data | data processing | **Task to do**
Start processing Francelino's data
**Reason**
Testing and proving our pipeline
**Result**
Result data is available for comparison
| 1.0 | Process 3 day's of Francelino's data - **Task to do**
Start processing Francelino's data
**Reason**
Testing and proving our pipeline
**Result**
Result data is available for comparison
| process | process day s of francelino s data task to do start processing francelino s data reason testing our and proving pipeline result result data is available for comparison | 1 |
2,556 | 5,312,847,357 | IssuesEvent | 2017-02-13 10:19:59 | matz-e/lobster | https://api.github.com/repos/matz-e/lobster | closed | Release creation needs SCRAM_ARCH set right in some setups | bug processing | Action plan:
1. Use full `SCRAM_ARCH` instead of just the SLC release when packing
2. Extract `SCRAM_ARCH` on the worker from sandbox name and use it | 1.0 | Release creation needs SCRAM_ARCH set right in some setups - Action plan:
1. Use full `SCRAM_ARCH` instead of just the SLC release when packing
2. Extract `SCRAM_ARCH` on the worker from sandbox name and use it | process | release creation needs scram arch set right in some setups action plan use full scram arch instead of just the slc release when packing extract scram arch on the worker from sandbox name and use it | 1 |
1,300 | 3,839,417,546 | IssuesEvent | 2016-04-03 02:38:58 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | child_process.execFile returns strings where doc says it should return a Buffer | child_process doc | [Doc says](https://nodejs.org/api/child_process.html#child_process_child_process_execfile_file_args_options_callback):
> callback Function called with the output when process terminates
> - error Error
> - stdout Buffer
> - stderr Buffer
[This was known back in 2013 already](http://stackoverflow.com/questions/18925426/child-process-the-stdout-parameter)
This points out that the options parameter's encoding field can force a Buffer response. However, the default value is supposed to be 'utf8' according to the doc, which strongly implies that it means _character encoding_, not the type of the return value. The name of this option is terribly misleading.
Demo code:
```
"use strict";
var child_process = require('child_process');
console.log('node version=%s',process.version);
for (let options of [{}, {encoding:'buffer'}]) {
child_process.execFile(
'uname', ['-o'], options,
(error, stdout, stderr) => {
console.log(
'options is %s → stdout is a %s',
JSON.stringify(options),
typeof stdout
)
}
);
}
```
Output:
```
node version=v4.2.1
options is {"encoding":"buffer"} → stdout is a object
options is {} → stdout is a string
```
The behavior or the documentation need to be changed. If the current behavior is kept, the semantic of 'encoding' should be clarified, and the name of the property should be changed (to 'return_type' for example). | 1.0 | child_process.execFile returns strings where doc says it should return a Buffer - [Doc says](https://nodejs.org/api/child_process.html#child_process_child_process_execfile_file_args_options_callback):
> callback Function called with the output when process terminates
> - error Error
> - stdout Buffer
> - stderr Buffer
[This was known back in 2013 already](http://stackoverflow.com/questions/18925426/child-process-the-stdout-parameter)
This points out that the options parameter's encoding field can force a Buffer response. However, the default value is supposed to be 'utf8' according to the doc, which strongly implies that it means _character encoding_, not the type of the return value. The name of this option is terribly misleading.
Demo code:
```
"use strict";
var child_process = require('child_process');
console.log('node version=%s',process.version);
for (let options of [{}, {encoding:'buffer'}]) {
child_process.execFile(
'uname', ['-o'], options,
(error, stdout, stderr) => {
console.log(
'options is %s → stdout is a %s',
JSON.stringify(options),
typeof stdout
)
}
);
}
```
Output:
```
node version=v4.2.1
options is {"encoding":"buffer"} → stdout is a object
options is {} → stdout is a string
```
The behavior or the documentation need to be changed. If the current behavior is kept, the semantic of 'encoding' should be clarified, and the name of the property should be changed (to 'return_type' for example). | process | child process execfile returns strings where doc says it should return a buffer callback function called with the output when process terminates error error stdout buffer stderr buffer this points out the options parameter s encoding field can force a buffer response however the default value is supposed to be according to the doc which strongly implies that it means character encoding not the type of the return value the name of this option is terribly misleading demo code use strict var child process require child process console log node version s process version for let options of child process execfile uname options error stdout stderr console log options is s → stdout is a s json stringify options typeof stdout output node version options is encoding buffer → stdout is a object options is → stdout is a string the behavior or the documentation need to be changed if the current behavior is kept the semantic of encoding should be clarified and the name of the property should be changed to return type for example | 1 |
22,340 | 31,018,093,042 | IssuesEvent | 2023-08-10 01:23:32 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | opened | Define guidelines for improved four eyes principle | Process | Background for this was feedback we got from the community in the 'Meet the Maintainers' session during ZDS 2023.
Right now we require at least 2 approvals (4 eyes) for a pull request to be merged. In the Zephyr case, all eyes (submitter, approvers and merger) can be of the same organisation or team. A change that might seem harmless, if merged quickly to address an issue or add a feature without having been reviewed by a larger group of users, might have negative effects and should be avoided.
Ideally we want at least one set of eyes looking at the changes from a different organisation. This could, for example, be the person merging the change; however, having reviews and approvals from other organisations will simplify things further, and the merger + the approval of the assignee removes any ambiguity about the review.
We can further optimize this as we go, but at minimum we shall avoid the following:
- (a) Submitter, Approvers and Merger are from the same organisation
Additionally, the following should be considered:
- (b) Changes to common and shared code shall always have reviews from different organisations (at least one review and approval from a different organisation as the submitter)
- (c) with changes to platform code (driver, soc, boards), rule (a) can be followed.
- ....
Consider and list other possible guidelines below...
| 1.0 | Define guidelines for improved four eyes principle - Background for this was feedback we got from the community in the 'Meet the Maintainers' session during ZDS 2023.
Right now we require at least 2 approvals (4 eyes) for a pull request to be merged. In the Zephyr case, all eyes (submitter, approvers and merger) can be of the same organisation or team. A change that might seem harmless, if merged quickly to address an issue or add a feature without having been reviewed by a larger group of users, might have negative effects and should be avoided.
Ideally we want at least one set of eyes looking at the changes from a different organisation. This could, for example, be the person merging the change; however, having reviews and approvals from other organisations will simplify things further, and the merger + the approval of the assignee removes any ambiguity about the review.
We can further optimize this as we go, but at minimum we shall avoid the following:
- (a) Submitter, Approvers and Merger are from the same organisation
Additionally, the following should be considered:
- (b) Changes to common and shared code shall always have reviews from different organisations (at least one review and approval from a different organisation as the submitter)
- (c) with changes to platform code (driver, soc, boards), rule (a) can be followed.
- ....
Consider and list other possible guidelines below...
| process | define guidelines for improved four eyes principle background for this was feedback we got from the community in the meet the maintainers session during zds right now we require at least approvals eyes for a pull request to be merged in the zephyr case all eyes submitter approvers and merger can be of the same organisation or team a change that might seem harmless and if merged quickly to address an issue or add a feature without having being reviewed by a larger group of users might have negative effects and should be avoided ideally we want at least one set of eyes looking at the changes from a different organisation this for example could be the person merging the change however having reviews and approvals from other organisation will simplify things further and the merger the approval of the assignee removes any ambiguity about the review we can further optimize this as we go but at minimum we shall avoid the following a submitter approvers and merger are from the same organisation additionally the following should be considered b changes to common and shared code shall always have reviews from different organisations at least one review and approval from a different organisation as the submitter c with changes to platform code driver soc boards rule a can be followed consider and list other possible guidelines below | 1 |
11,604 | 14,478,816,923 | IssuesEvent | 2020-12-10 08:57:59 | decidim/decidim | https://api.github.com/repos/decidim/decidim | closed | EPIC: Process Group highlights | contract: process-groups type: EPIC | Ref: PG03
As a visitor, I want to better discover participatory process groups that are
- [x] Highlight processes groups on processes list (ref: PG02-01)
- [x] Highlighted process groups content block on Homepage (ref: PG02-02) | 1.0 | EPIC: Process Group highlights - Ref: PG03
As a visitor, I want to better discover participatory process groups that are
- [x] Highlight processes groups on processes list (ref: PG02-01)
- [x] Highlighted process groups content block on Homepage (ref: PG02-02) | process | epic process group highlights ref as a visitor i want to better discover participatory process groups that are highlight processes groups on processes list ref highlighted process groups content block on homepage ref | 1 |
16,719 | 21,882,037,452 | IssuesEvent | 2022-05-19 15:04:53 | camunda/zeebe | https://api.github.com/repos/camunda/zeebe | closed | Reject ProcessInstanceCreation command targeting root start event | team/process-automation | The root start event should not be allowed as the target element for the ProcessInstanceCreation command with start instructions. This might lead to subscribing to the timer/message events twice (once when creating the instance and once when completing the start event). It would also circumvent the usage of output mappings to change the variables before merging them into the process instance.
The `ProcessInstanceCreation` command should be rejected:
- when one of the target element ids refers directly to a root start event
Blocked by https://github.com/camunda/zeebe/issues/9390 | 1.0 | Reject ProcessInstanceCreation command targeting root start event - The root start event should not be allowed as the target element for the ProcessInstanceCreation command with start instructions. This might lead to subscribing to the timer/message events twice (once when creating the instance and once when completing the start event). It would also circumvent the usage of output mappings to change the variables before merging them into the process instance.
The `ProcessInstanceCreation` command should be rejected:
- when one of the target element ids refers directly to a root start event
Blocked by https://github.com/camunda/zeebe/issues/9390 | process | reject processinstancecreation command targeting root start event the root start event should not be allowed as the target element for the processinstancecreation command with start instructions this might lead to subscribing to the timer message events twice once when creating the instance and once when completing the start event it would also circumvent the usage of output mappings to change the variables before merging them into the process instance the processinstancecreation command should be rejected when one of the target element ids refers directly to a root start event blocked by | 1 |
14,603 | 17,703,626,421 | IssuesEvent | 2021-08-25 03:25:42 | tdwg/dwc | https://api.github.com/repos/tdwg/dwc | closed | Change term - occurrenceStatus (alternative) | Term - change Class - Occurrence non-normative Process - complete | ## Term change
* Submitter: Tim Robertson @timrobertson100
* Efficacy Justification (why is this change necessary?): Clarification
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): The term is currently being used in two distinct ways. It would be helpful to recognize those two ways and provide guidance on each of them to avoid confusion.
* Stability Justification (what concerns are there that this might affect existing implementations?): Making these clarifications will help both existing uses to proceed with clarity without affecting how either of them is currently implemented.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: None
Current Term definition: https://dwc.tdwg.org/list/#dwc_occurrenceStatus
Proposed attributes of the new term (**in bold**):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): occurrenceStatus
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Occurrence
* Definition of the term (normative): A statement about the presence or absence of a Taxon at a Location.
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a controlled vocabulary. **For Occurrences, the default vocabulary is recommended to consist of "present" and "absent", but can be extended by implementers with good justification.**
* Examples (not normative): `present`, `absent`
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): http://rs.tdwg.org/dwc/terms/version/occurrenceStatus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
| 1.0 | Change term - occurrenceStatus (alternative) - ## Term change
* Submitter: Tim Robertson @timrobertson100
* Efficacy Justification (why is this change necessary?): Clarification
* Demand Justification (if the change is semantic in nature, name at least two organizations that independently need this term): The term is currently being used in two distinct ways. It would be helpful to recognize those two ways and provide guidance on each of them to avoid confusion.
* Stability Justification (what concerns are there that this might affect existing implementations?): Making these clarifications will help both existing uses to proceed with clarity without affecting how either of them is currently implemented.
* Implications for dwciri: namespace (does this change affect a dwciri term version)?: None
Current Term definition: https://dwc.tdwg.org/list/#dwc_occurrenceStatus
Proposed attributes of the new term (**in bold**):
* Term name (in lowerCamelCase for properties, UpperCamelCase for classes): occurrenceStatus
* Organized in Class (e.g., Occurrence, Event, Location, Taxon): Occurrence
* Definition of the term (normative): A statement about the presence or absence of a Taxon at a Location.
* Usage comments (recommendations regarding content, etc., not normative): Recommended best practice is to use a controlled vocabulary. **For Occurrences, the default vocabulary is recommended to consist of "present" and "absent", but can be extended by implementers with good justification.**
* Examples (not normative): `present`, `absent`
* Refines (identifier of the broader term this term refines; normative): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term; normative): http://rs.tdwg.org/dwc/terms/version/occurrenceStatus-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG; not normative): not in ABCD
| process | change term occurrencestatus alternative term change submitter tim robertson efficacy justification why is this change necessary clarification demand justification if the change is semantic in nature name at least two organizations that independently need this term the term is currently being used in two distinct ways it would be helpful to recognize those two ways and provide guidance on each of them to avoid confusion stability justification what concerns are there that this might affect existing implementations making these clarifications will help both existing uses to proceed with clarity without affecting how either of them is currently implemented implications for dwciri namespace does this change affect a dwciri term version none current term definition proposed attributes of the new term in bold term name in lowercamelcase for properties uppercamelcase for classes occurrencestatus organized in class e g occurrence event location taxon occurrence definition of the term normative a statement about the presence or absence of a taxon at a location usage comments recommendations regarding content etc not normative recommended best practice is to use a controlled vocabulary for occurrences the default vocabulary is recommended to consist of present and absent but can be extended by implementers with good justification examples not normative present absent refines identifier of the broader term this term refines normative none replaces identifier of the existing term that would be deprecated and replaced by this term normative abcd xpath of the equivalent term in abcd or efg not normative not in abcd | 1 |
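Purely as an illustration of the proposed default vocabulary, a record carrying the term could look like the following — every field value is invented except the recommended `occurrenceStatus` values themselves:
```python
occurrence = {
    "occurrenceID": "urn:example:occ:123",  # invented identifier
    "scientificName": "Puma concolor",      # invented example taxon
    "locality": "Example Valley",           # invented example location
    "occurrenceStatus": "present",          # recommended vocabulary: "present" or "absent"
}
```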
6,465 | 9,546,605,946 | IssuesEvent | 2019-05-01 20:26:26 | openopps/openopps-platform | https://api.github.com/repos/openopps/openopps-platform | closed | Department of State: Experience questions | Apply Process Approved Requirements Ready State Dept. | Who: Student intern applicant
What: experience questions
Why: to gather additional information about the applicant's experience
Acceptance Criteria:
On the Experiences & References page, under the experience section, there will be 3 questions with Yes and No radio buttons
Do you have any overseas experience?
If the user selects "Yes" they will presented with the following:
Please indicate what type of overseas experience you possess.
Student
Dependent
Peace Corps
Military
Government
Other
the user will be able to make multiple selections by checking the boxes
If the user selects "Other" they will be presented with two text boxes
"If you chose "Other", please specify"
Please indicate the total length of your overseas experience(s)
Do you have, or have you had, a Security Clearance?
If the user selects "Yes" they will be presented with two text boxes
Type of security clearance
Please provide who issued the clearance
Have you previously participated in the Virtual Student Federal Service (VSFS) program?
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54 | 1.0 | Department of State: Experience questions - Who: Student intern applicant
What: experience questions
Why: to gather additional information about the applicant's experience
Acceptance Criteria:
On the Experiences & References page, under the experience section, There will be 3 questions with Yes and No radio buttons
Do you have any overseas experience?
If the user selects "Yes" they will presented with the following:
Please indicate what type of overseas experience you possess.
Student
Dependent
Peace Corps
Military
Government
Other
the user will be able to make multiple selections by checking the boxes
If the user selects "Other" they will be presented with two text boxes
"If you chose "Other", please specify"
Please indicate the total length of your overseas experience(s)
Do you have, or have you had, a Security Clearance?
If the user selects "Yes" they will be presented with two text boxes
Type of security clearance
Please provide who issued the clearance
Have you previously participated in the Virtual Student Federal Service (VSFS) program?
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54 | process | department of state experience questions who student intern applicant what experience questions why to gather additional information about the applicant s experience acceptance criteria on the experiences references page under the experience section there will be questions with yes and no radio buttons do you have any overseas experience if the user selects yes they will presented with the following please indicate what type of overseas experience you possess student dependent peace corps military government other the user will be able to make multiple selections by checking the boxes if the user selects other they will be presented with two text boxes if you chose other please specify please indicate the total length of your overseas experience s do you have or have you had a security clearance if the user selects yes they will be presented with two text boxes type of security clearance please provide who issued the clearance have you previously participated in the virtual student federal service vsfs program public link | 1 |
5,415 | 8,248,921,623 | IssuesEvent | 2018-09-11 19:56:38 | w3c/transitions | https://api.github.com/repos/w3c/transitions | closed | [meta] CR Transition request headers are not self-explanatory | Process Issue | I'm filling out a new CR Transition request, and having to ask fantasai what several of the sections are asking for.
She says that this is addressed by <https://www.w3.org/Guide/transitions> (which is linked in the README), but I don't see any immediate mapping; I'd have to fully digest this document and hope that it does explain every heading (and skimming it now, it looks like it *doesn't*).
At minimum, the headings should have something like:
```
# Status
[TODO: whatever Status means]
```
Which we can then replace with whatever we're supposed to. | 1.0 | [meta] CR Transition request headers are not self-explanatory - I'm filling out a new CR Transition request, and having to ask fantasai what several of the sections are asking for.
She says that this is addressed by <https://www.w3.org/Guide/transitions> (which is linked in the README), but I don't see any immediate mapping; I'd have to fully digest this document and hope that it does explain every heading (and skimming it now, it looks like it *doesn't*).
At minimum, the headings should have something like:
```
# Status
[TODO: whatever Status means]
```
Which we can then replace with whatever we're supposed to. | process | cr transition request headers are not self explanatory i m filling out a new cr transition request and having to ask fantasai what several of the sections are asking for she says that this is addressed by which is linked in the readme but i don t see any immediate mapping i d have to fully digest this document and hope that it does explain every heading and skimming it now it looks like it doesn t at minimum the headings should have something like status which we can then replace with whatever we re supposed to | 1 |
9,680 | 12,682,961,179 | IssuesEvent | 2020-06-19 18:35:26 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Chunked stdout/stderr drops writes if terminated early. | confirmed-bug help wanted process | Hello,
I have an app which prints a long json to the output and I need to `| grep` this output in order to parse some data.
It works fine with Node.js but it doesn't with iojs.
It seems the output is chunked in some way and grep stops before receiving all the data.
I came to this conclusion because when I redirect the output in some file and then `cat file | grep` it works, everything is there, but `iojs app.js | grep` won't.
Any ideas on this issue?
Thanks.
| 1.0 | Chunked stdout/stderr drops writes if terminated early. - Hello,
I have an app which prints a long json to the output and I need to `| grep` this output in order to parse some data.
It works fine with Node.js but it doesn't with iojs.
It seems the output is chunked in some ways and grep stops before receiving all the data.
I came to this conclusion because when I redirect the output in some file and then `cat file | grep` it works, everything is there, but `iojs app.js | grep` won't.
Any ideas on this issue ?
Thanks.
| process | chunked stdout stderr drops writes if terminated early hello i have an app which prints a long json to the output and i need to grep this output in order to parse some data it works fine with node js but it doesn t with iojs it seems the output is chunked in some ways and grep stops before receiving all the data i came to this conclusion because when i redirect the output in some file and then cat file grep it works everything is there but iojs app js grep won t any ideas on this issue thanks | 1 |
186,037 | 6,732,975,728 | IssuesEvent | 2017-10-18 13:28:23 | kubernetes/dashboard | https://api.github.com/repos/kubernetes/dashboard | closed | Namespace filtering: not(kube-system) or custom multi-select | help wanted kind/feature priority/P2 | I looked for another request like this and made a cursory search to see if it might be covered in the multi-tenant stuff, but didn't see anything; super sorry if I overlooked!
Currently I run a k8s 1.5 cluster where workloads are allocated:
- default: public-facing production services
- tools: internal-facing services
- kube-system: cluster wide services + k8s internals
As it stands, the Dashboard's left sidebar namespace selector is very useful to look at "just public", "just private", "just cluster-wide" or "all".
The public services serve different independent customers, and ideally we'd treat them as multi-tenant workloads (albeit in a scenario where all cluster admins with dashboard access will have at least ro, and probably rw, on all namespaces; isolation is just for the token-carriers/pods w.r.t. RBAC & potentially NetworkPolicy etc)
This would be a step up in data security, but it would be a loss for admin UX. In the CLI we can at least suffix our `kubectl get`s w/ `--all-namespaces | grep -v kube-system` but in the dashboard, splitting up will mean losing our ability to have a "global public" and "global private" view as we currently do.
Our "tools" workloads are probably gonna find their way to a separate cluster, but we are still gonna end up having to chose to look at either all production workloads mixed together w/ kube-system services or just a single namespace.
It would be ideal to be able to have a custom select for multiple namespaces, but that wouldn't really jive with the API namespacing as I understand it; (though label selectors are supported across namespaces and might be an option?)
But I think it would still be worthwhile to have a boolean option w.r.t `kube-system` in the context of "All Namespaces". Probably the filtering can/should just happen client side. It seems to me rather akin to the concept of "Show System Files" in Windows Explorer or macOS Finder.
| 1.0 | Namespace filtering: not(kube-system) or custom multi-select - I looked for another request like this and made a cursory search to see if it might be covered in the multi-tenant stuff, but didn't see anything; super sorry if I overlooked!
Currently I run a k8s 1.5 cluster where workloads are allocated:
- default: public-facing production services
- tools: internal-facing services
- kube-system: cluster wide services + k8s internals
As it stands, the Dashboard's left sidebar namespace selector is very useful to look at "just public", "just private", "just cluster-wide" or "all".
The public services serve different independent customers, and ideally we'd treat them as multi-tenant workloads (albeit in a scenario where all cluster admins with dashboard access will have at least ro, and probably rw, on all namespaces; isolation is just for the token-carriers/pods w.r.t. RBAC & potentially NetworkPolicy etc)
This would be a step up in data security, but it would be a loss for admin UX. In the cli we can at least suffix our `kubectl get`s w/ `--all-namsepaces | grep -v kube-system` but in the dashboard, splitting up will mean losing our ability to have a "global public" and "global private" view as we currently do.
Our "tools" workloads are probably gonna find their way to a separate cluster, but we are still gonna end up having to chose to look at either all production workloads mixed together w/ kube-system services or just a single namespace.
It would be ideal to be able to have a custom select for multiple namespaces, but that wouldn't really jive with the API namespacing as I understand it; (though label selectors are supported across namespaces and might be an option?)
But I think it would still be worthwhile to a to have a boolean option w.r.t `kube-system` in the context of "All Namespaces". Probably the filtering can/should just happen client side. It seems to me rather akin to the concept of "Show System Files" in Windows Explorer or macOS Finder
| non_process | namespace filtering not kube system or custom multi select i looked for another request like this and made a cursory search to see if it might be covered in the multi tenant stuff but didn t see anything super sorry if i overlooked currently i run a cluster where workloads are allocated default public facing production services tools internal facing services kube system cluster wide services internals as it stands the dashboard s left sidebar namespace selector is very useful to look at just public just private just cluster wide or all the public services serve different independent customers and ideally we d treat them as multi tenant workloads albeit in a scenario where all cluster admins with dashboard access will have at least ro and probably rw on all namespaces isolation is just for the token carriers pods w r t rbac potentially networkpolicy etc this would be a step up in data security but it would be a loss for admin ux in the cli we can at least suffix our kubectl get s w all namsepaces grep v kube system but in the dashboard splitting up will mean losing our ability to have a global public and global private view as we currently do our tools workloads are probably gonna find their way to a separate cluster but we are still gonna end up having to chose to look at either all production workloads mixed together w kube system services or just a single namespace it would be ideal to be able to have a custom select for multiple namespaces but that wouldn t really jive with the api namespacing as i understand it though label selectors are supported across namespaces and might be an option but i think it would still be worthwhile to a to have a boolean option w r t kube system in the context of all namespaces probably the filtering can should just happen client side it seems to me rather akin to the concept of show system files in windows explorer or macos finder | 0 |
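A rough illustration of the "everything except kube-system" view asked for in the record above, done client side with the official Python Kubernetes client; the kubeconfig handling and the printed output are assumptions for the sketch, not part of the original request.

```python
# Sketch: list pods in every namespace except kube-system (client-side filter).
# Assumes a reachable cluster and a usable kubeconfig; not part of the dashboard itself.
from kubernetes import client, config

config.load_kube_config()                       # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

hidden = {"kube-system"}                        # namespaces treated as "system"
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.metadata.namespace not in hidden:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}")
```

The same exclusion could equally be done in the dashboard frontend, which is what the record suggests with "the filtering can/should just happen client side".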
3,853 | 6,808,604,568 | IssuesEvent | 2017-11-04 05:24:46 | shoes/shoes4 | https://api.github.com/repos/shoes/shoes4 | closed | Rename this repository to 'shoes' | process question | As we near a release, I think it's appropriate to rename the repository currently known as 'shoes' to 'shoes3', and to rename this repository 'shoes'. Thoughts?
| 1.0 | Rename this repository to 'shoes' - As we near a release, I think it's appropriate to rename the repository currently know as 'shoes' to 'shoes3', and to rename this repository 'shoes'. Thoughts?
| process | rename this repository to shoes as we near a release i think it s appropriate to rename the repository currently know as shoes to and to rename this repository shoes thoughts | 1 |
18,992 | 24,983,966,610 | IssuesEvent | 2022-11-02 13:50:31 | benthosdev/benthos | https://api.github.com/repos/benthosdev/benthos | closed | How to set encoding in parquet data | enhancement processors | Hello,
An error is reported when I use Spark 3.2 to read the generated parquet file.
```
Caused by: java.lang.UnsupportedOperationException: Unsupported encoding: DELTA_LENGTH_BYTE_ARRAY
at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.getValuesReader(VectorizedColumnReader.java:345)
```
According to the [issue](https://github.com/segmentio/parquet-go/issues/261), I need to set the encoding. How can I set the encoding in Benthos? | 1.0 | How to set encoding in parquet data - Hello,
An error is reported when I use Spark3.2 to read the parquet generated file
```
Caused by: java.lang.UnsupportedOperationException: Unsupported encoding: DELTA_LENGTH_BYTE_ARRAY
at org.apache.spark.sql.execution.datasources.parquet.VectorizedColumnReader.getValuesReader(VectorizedColumnReader.java:345)
```
according to the [issue](https://github.com/segmentio/parquet-go/issues/261), i need to set the encoding. How can I set the encoding in benthos? | process | how to set encoding in parquet data hello an error is reported when i use to read the parquet generated file caused by java lang unsupportedoperationexception unsupported encoding delta length byte array at org apache spark sql execution datasources parquet vectorizedcolumnreader getvaluesreader vectorizedcolumnreader java according to the i need to set the encoding how can i set the encoding in benthos? | 1 |
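The record above is about Spark 3.2 failing on DELTA_LENGTH_BYTE_ARRAY pages written by parquet-go. Whatever writer-side encoding option Benthos ends up exposing should be checked against the Benthos and segmentio/parquet-go docs; independent of that, a common reader-side workaround is to disable Spark's vectorized Parquet reader, which is the component raising the exception in the stack trace. A rough PySpark sketch, with the file path as a placeholder:

```python
# Sketch: fall back to the non-vectorized Parquet reader, which handles
# encodings the vectorized path does not (e.g. DELTA_LENGTH_BYTE_ARRAY).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-parquet-go-output").getOrCreate()
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")

df = spark.read.parquet("/path/to/benthos-output.parquet")  # placeholder path
df.show(5)
```

The trade-off is slower scans, so it is only a stopgap until the files are written with an encoding the vectorized reader supports.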
597 | 2,665,879,601 | IssuesEvent | 2015-03-21 00:27:58 | YorickPeterse/oga | https://api.github.com/repos/YorickPeterse/oga | closed | Remove locking in XML::Text#text and XML::Text#text= | Performance | The use of a Mutex here leads to extra allocations that shouldn't be needed. Instead of coming up with complex systems to make all of this atomic I'm taking the easy way out: concurrent calls to `text` and `text=` of the same `Text` instance are no longer supported. This isn't much different from other structures such as `XML::NodeSet` which also isn't thread-safe.
Note that the above only applies to Text instances shared between threads, separate documents can still be handled in parallel just fine. | True | Remove locking in XML::Text#text and XML::Text#text= - The use of a Mutex here leads to extra allocations that shouldn't be needed. Instead of coming up with complex systems to make all of this atomic I'm taking the easy way out: concurrent calls to `text` and `text=` of the same `Text` instance are no longer supported. This isn't much different from other structures such as `XML::NodeSet` which also isn't thread-safe.
Note that the above only applies to Text instances shared between threads, separate documents can still be handled in parallel just fine. | non_process | remove locking in xml text text and xml text text the use of a mutex here leads to extra allocations that shouldn t be needed instead of coming up with complex systems to make all of this atomic i m taking the easy way out concurrent calls to text and text of the same text instance are no longer supported this isn t much different from other structures such as xml nodeset which also isn t thread safe note that the above only applies to text instances shared between threads separate documents can still be handled in parallel just fine | 0 |
79,602 | 7,720,688,599 | IssuesEvent | 2018-05-24 00:37:12 | Microsoft/vscode-python | https://api.github.com/repos/Microsoft/vscode-python | closed | Errors in running unit tests | feature-testing needs PR type-bug | Originally posted by @ayrtonmassey here https://github.com/Microsoft/vscode-python/issues/78#issuecomment-379507189
I managed to fix this on my box. Turns out vscode configured a `settings.json` in the `./.vscode` folder of my workspace with the following content:
```json
"python.unitTest.pyTestEnabled": false,
"python.unitTest.unittestEnabled": true,
"python.unitTest.nosetestsEnabled": false,
"python.pythonPath": "${workspaceFolder}\\venv\\Scripts\\python.exe",
"python.unitTest.unittestArgs": [
"-v",
"-s",
"./server",
"-p",
"test_*.py"
]
```
when I run unittest from the command line with these arguments:
```bash
python -m unittest -v -s "./server" -p "test_*.py"
```
I get the following error:
```bash
usage: python.exe -m unittest [-h] [-v] [-q] [--locals] [-f] [-c] [-b]
[tests [tests ...]]
python.exe -m unittest: error: unrecognized arguments: -s
```
It looks like **the default command line arguments configured by vscode are invalid**. You should try deleting any values set for `python.unitTest.unittestArgs` in your workspace or global `settings.json`:
```json
"python.unitTest.pyTestEnabled": false,
"python.unitTest.unittestEnabled": true,
"python.unitTest.nosetestsEnabled": false,
"python.pythonPath": "${workspaceFolder}\\venv\\Scripts\\python.exe",
"python.unitTest.unittestArgs": [
]
```
After deleting the command line arguments in `settings.json` I was able to run unit tests via vscode. | 1.0 | Errors in running unit tests - Originally posted by @ayrtonmassey here https://github.com/Microsoft/vscode-python/issues/78#issuecomment-379507189
I managed to fix this on my box. Turns out vscode configured a `settings.json` in the `./.vscode` folder of my workspace with the following content:
```json
"python.unitTest.pyTestEnabled": false,
"python.unitTest.unittestEnabled": true,
"python.unitTest.nosetestsEnabled": false,
"python.pythonPath": "${workspaceFolder}\\venv\\Scripts\\python.exe",
"python.unitTest.unittestArgs": [
"-v",
"-s",
"./server",
"-p",
"test_*.py"
]
```
when I unittest from the command line with these arguments:
```bash
python -m unittest -v -s "./server" -p "test_*.py"
```
I get the following error:
```bash
usage: python.exe -m unittest [-h] [-v] [-q] [--locals] [-f] [-c] [-b]
[tests [tests ...]]
python.exe -m unittest: error: unrecognized arguments: -s
```
It looks like **the default command line arguments configured by vscode are invalid**. You should try deleting any values set for `python.unitTest.unittestArgs` in your workspace or global `settings.json`:
```json
"python.unitTest.pyTestEnabled": false,
"python.unitTest.unittestEnabled": true,
"python.unitTest.nosetestsEnabled": false,
"python.pythonPath": "${workspaceFolder}\\venv\\Scripts\\python.exe",
"python.unitTest.unittestArgs": [
]
```
After deleting the command line arguments in `settings.json` I was able to run unit tests via vscode. | non_process | errors in running unit tests originally posted by ayrtonmassey here i managed to fix this on my box turns out vscode configured a settings json in the vscode folder of my workspace with the following content json python unittest pytestenabled false python unittest unittestenabled true python unittest nosetestsenabled false python pythonpath workspacefolder venv scripts python exe python unittest unittestargs v s server p test py when i unittest from the command line with these arguments bash python m unittest v s server p test py i get the following error bash usage python exe m unittest python exe m unittest error unrecognized arguments s it looks like the default command line arguments configured by vscode are invalid you should try deleting any values set for python unittest unittestargs in your workspace or global settings json json python unittest pytestenabled false python unittest unittestenabled true python unittest nosetestsenabled false python pythonpath workspacefolder venv scripts python exe python unittest unittestargs after deleting the command line arguments in settings json i was able to run unit tests via vscode | 0 |
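For context on the record above: `-v -s ./server -p test_*.py` are arguments for unittest's discover subcommand, which is why passing them straight to `python -m unittest` fails with `unrecognized arguments: -s`. A minimal sketch of the equivalent discovery done directly with the standard library; the `./server` start directory and pattern are taken from the issue, and the equivalent CLI form would be `python -m unittest discover -v -s ./server -p "test_*.py"`.

```python
# Sketch: run the same test discovery the extension's arguments describe.
import unittest

suite = unittest.defaultTestLoader.discover(start_dir="./server", pattern="test_*.py")
unittest.TextTestRunner(verbosity=2).run(suite)
```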
8,862 | 11,957,458,500 | IssuesEvent | 2020-04-04 14:28:00 | prisma/prisma-client-js | https://api.github.com/repos/prisma/prisma-client-js | closed | Error when calling prisma delete many. | bug/0-needs-info kind/bug process/candidate | I got the following error when trying to run prisma.user.deleteMany({}). I think it has to do with having multiple back relations on the user model. Is there a way to bypass this?
I am on prisma-client-js@2.0.0-preview024
Thanks.
<img width="1012" alt="Screen Shot 2020-03-15 at 11 45 02 PM" src="https://user-images.githubusercontent.com/1761197/76729856-52d7bb00-6717-11ea-91ad-a95467e41f99.png">
| 1.0 | Error when calling prisma delete many. - I got the following error when trying to run prisma.user.deleteMany({}). I get the following error. I think it has to do with having multiple back relation on the user model. Is there a way to bypass this?
I am on prisma-client-js@2.0.0-preview024
Thanks.
<img width="1012" alt="Screen Shot 2020-03-15 at 11 45 02 PM" src="https://user-images.githubusercontent.com/1761197/76729856-52d7bb00-6717-11ea-91ad-a95467e41f99.png">
| process | error when calling prisma delete many i got the following error when trying to run prisma user deletemany i get the following error i think it has to do with having multiple back relation on the user model is there a way to bypass this i am on prisma client js thanks img width alt screen shot at pm src | 1 |
762,160 | 26,710,333,131 | IssuesEvent | 2023-01-27 22:49:17 | seryy-coordinator/dictionary | https://api.github.com/repos/seryy-coordinator/dictionary | opened | implement ability to create labels (folders) | low priority | The user has to have the ability to choose or create new labels when he adds a new expression.
He can choose several labels.
The teacher can do it in student's dictionary. | 1.0 | implement ability to create labels (folders) - The user has to have to ability to choose or create new labels when he adds new expression.
He can choose several labels.
The teacher can do it in student's dictionary. | non_process | implement ability to create labels folders the user has to have to ability to choose or create new labels when he adds new expression he can choose several labels the teacher can do it in student s dictionary | 0 |
16,976 | 4,110,728,938 | IssuesEvent | 2016-06-07 00:59:16 | fguillot/kanboard | https://api.github.com/repos/fguillot/kanboard | closed | Documentation - board-show-hide-columns | documentation Fixed in dev improvement | The English documentation - board-show-hide-columns - contains French
| 1.0 | Documentation - board-show-hide-columns - The English documentation - board-show-hide-columns - contains French
| non_process | documentation board show hide columns the english documentation board show hide columns contains french | 0 |
134,162 | 29,866,971,820 | IssuesEvent | 2023-06-20 05:16:39 | zer0Kerbal/SimpleConstruction | https://api.github.com/repos/zer0Kerbal/SimpleConstruction | closed | [Bug 🐞]: Restart build button bug | bug 🐛 issue: code issue: external | ### Brief description of your issue
the "restart build" button fails when a craft has fully completed construction,
a secondary issue is that my 400-ton station suffers a Kraken attack when finalizing construction. I've had success with smaller construction projects, so I'm unsure if it's the mod or the ship at fault; it's just I've never seen a kraken disassemble and throw every single piece at extrasolar velocities in all directions. I thought it might be worth including in the log
log
[KSP.zip](https://github.com/zer0Kerbal/SimpleConstruction/files/9174647/KSP.zip)
I can't find any file ending in .configcache but this file in ksp>logs>modulemanager>modulemanager.txt has a full list of my mods and the patches on startup hopefully that's what you wanted
[ModuleManager.zip](https://github.com/zer0Kerbal/SimpleConstruction/files/9174655/ModuleManager.zip)
the most recent log behavior should be booting up, loading in the craft, pressing the restart button to no avail, then finalizing build and spawning a Kraken before logging off
please get back to me if I can help any further, I'd be happy to help
### Steps to reproduce
build a craft until completion, and try to press restart build
### Expected behavior
not entirely sure, I need to cancel/refund/reverse my build so it can be fixed or restarted,
### Actual behavior
the button has no discernible effect in-game
### Environment
```shell
mod: simpleconstruction 4.0.99.9-prerelease-cf
ksp: 1.12.3
manual installation of simple construction using curseforge, i manual install all my mods from either git, curseforge, or spacedock
```
### How did you download and install this?
CurseForge (download and manual installation) | 1.0 | [Bug 🐞]: Restart build button bug - ### Brief description of your issue
the "restart build" button fails when a craft has fully completed construction,
a secondary issue is my 400ton station suffers a Kraken attack when finalizing construction, I've had success with smaller construction projects so I'm unsure if it's the mod or the ship at fault, its just I've never seen a kraken disassemble and throw every single piece at extrasolar velocities in all directions, I thought it might be worth including in the log
log
[KSP.zip](https://github.com/zer0Kerbal/SimpleConstruction/files/9174647/KSP.zip)
I can't find any file ending in .configcache but this file in ksp>logs>modulemanager>modulemanager.txt has a full list of my mods and the patches on startup hopefully that's what you wanted
[ModuleManager.zip](https://github.com/zer0Kerbal/SimpleConstruction/files/9174655/ModuleManager.zip)
the most recent log behavior should be booting up, loading in the craft, pressing the restart button to no avail, then finalizing build and spawning a Kraken before logging off
please get back to me if I can help any further, id be happy to help
### Steps to reproduce
build a craft until completion, and try to press restart build
### Expected behavior
not entirely sure, I need to cancel/refund/reverse my build so it can be fixed or restarted,
### Actual behavior
the button has no discernable effect ingame
### Environment
```shell
mod: simpleconstruction 4.0.99.9-prerelease-cf
ksp: 1.12.3
manual installation of simple construction using curseforge, i manual install all my mods from either git, curseforge, or spacedock
```
### How did you download and install this?
CurseForge (download and manual installation) | non_process | restart build button bug brief description of your issue the restart build button fails when a craft has fully completed construction a secondary issue is my station suffers a kraken attack when finalizing construction i ve had success with smaller construction projects so i m unsure if it s the mod or the ship at fault its just i ve never seen a kraken disassemble and throw every single piece at extrasolar velocities in all directions i thought it might be worth including in the log log i can t find any file ending in configcache but this file in ksp logs modulemanager modulemanager txt has a full list of my mods and the patches on startup hopefully that s what you wanted the most recent log behavior should be booting up loading in the craft pressing the restart button to no avail then finalizing build and spawning a kraken before logging off please get back to me if i can help any further id be happy to help steps to reproduce build a craft until completion and try to press restart build expected behavior not entirely sure i need to cancel refund reverse my build so it can be fixed or restarted actual behavior the button has no discernable effect ingame environment shell mod simpleconstruction prerelease cf ksp manual installation of simple construction using curseforge i manual install all my mods from either git curseforge or spacedock how did you download and install this curseforge download and manual installation | 0 |
8,943 | 12,057,612,132 | IssuesEvent | 2020-04-15 16:06:10 | threefoldtech/jumpscaleX_core | https://api.github.com/repos/threefoldtech/jumpscaleX_core | closed | Failed to start 3bot server manually | priority_major process_wontfix | After having issue #724 tried to start a new container with the 3sdk app, same issue
```
3sdk> container install name=first_custom identity=weynandkuijpers.3bot email=weynand@threefold.io words='<<tuple of 24 word removed>>' server=True
create the 3bot container and install jumpscale inside
- SSH PORT ON: 9000
- Docker machine gets created:
0b69d7385ba82b8e186adc04db432c6d25d625dd7d588bd94d4ad3f234b93a6b
- Configure / Start SSH server
- make sure jumpscale code is on local filesystem.
- install jumpscale for identity:weynandkuijpers.3bot
Connection to localhost closed.
- Configure secret
Usage: jsx secret [OPTIONS] SECRET
Try "jsx secret --help" for help.
Error: Got unexpected extra arguments (deliver monkey yard magnet inspire deputy winner gravity rough spend better exercise peanut town unhappy glove task cover false police deal near ethics)
Connection to localhost closed.
Could not execute: cd /tmp
#next will start redis and make sure secret is in there
python3 jsx secret <<24 word tuple removed>>
3sdk> container list
list the containers
- first_custom : localhost : threefoldtech/3bot2 (sshport:9000)
```
Then logged into the container via SSH, and manually added the secret:
```
jsx secret '<<24 word tuple>>'
```
And then manually start the 3bot server:
```
weynand@happy:~$ sudo -i
[sudo] password for weynand:
root@happy:~# ssh -p 9000 localhost -A
OK
3BOTDEVEL:first_custom:~: jsx secret '<<removed 24 word tuple>>'
Reading package lists...
Building dependency tree...
Reading state information...
redis-server is already the newest version (5:5.0.5-2build1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
sysctl: setting key "vm.overcommit_memory": Read-only file system
152:C 14 Apr 2020 04:57:04.871 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
152:C 14 Apr 2020 04:57:04.871 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=152, just started
152:C 14 Apr 2020 04:57:04.871 # Configuration loaded
3BOTDEVEL:first_custom:~: 3bot start
no server running need to start
++ '[' start == kill ']'
++ tmux -f /sandbox/cfg/.tmux.conf has-session -t main
error connecting to /tmp/tmux-0/default (No such file or directory)
++ '[' 1 -eq 1 ']'
++ echo 'no server running need to start'
++ tmux -f /sandbox/cfg/.tmux.conf new -s main -d 'bash --rcfile /sandbox/bin/env_tmux_detach.sh'
++ '[' start '!=' start ']'
Tue 14 04:58:42 ot/ThreebotServer.py - 336 - start : EXCEPTION:
Could not start threebot server
--TRACEBACK------------------
/sandbox/bin/3bot in <module>
51 cli()
/sandbox/bin/3bot in start
22 j.servers.threebot.start(background=True)
/sandbox/lib/jumpscale/Jumpscale/servers/threebot/ThreeBotServersFactory.py in start
124 client = self.default.start(background=True, packages=packages)
/sandbox/lib/jumpscale/Jumpscale/servers/threebot/ThreebotServer.py in start
336 raise j.exceptions.Timeout("Could not start threebot server")
-----------------------------
``` | 1.0 | Failed to start 3bot server manually - After having issue #724 tried to start a new container with the 3sdk app, same issue
```
3sdk> container install name=first_custom identity=weynandkuijpers.3bot email=weynand@threefold.io words='<<tuple of 24 word removed>>' server=True
create the 3bot container and install jumpscale inside
- SSH PORT ON: 9000
- Docker machine gets created:
0b69d7385ba82b8e186adc04db432c6d25d625dd7d588bd94d4ad3f234b93a6b
- Configure / Start SSH server
- make sure jumpscale code is on local filesystem.
- install jumpscale for identity:weynandkuijpers.3bot
Connection to localhost closed.
- Configure secret
Usage: jsx secret [OPTIONS] SECRET
Try "jsx secret --help" for help.
Error: Got unexpected extra arguments (deliver monkey yard magnet inspire deputy winner gravity rough spend better exercise peanut town unhappy glove task cover false police deal near ethics)
Connection to localhost closed.
Could not execute: cd /tmp
#next will start redis and make sure secret is in there
python3 jsx secret <<24 word tuple removed>>
3sdk> container list
list the containers
- first_custom : localhost : threefoldtech/3bot2 (sshport:9000)
```
Then logged into the container via SSH, and manually added the secret:
```
jsx secret '<<24 word tuple>>`
```
And then manually start the 3bot server:
```
weynand@happy:~$ sudo -i
[sudo] password for weynand:
root@happy:~# ssh -p 9000 localhost -A
OK
3BOTDEVEL:first_custom:~: jsx secret '<<removed 24 word tuple>>'
Reading package lists...
Building dependency tree...
Reading state information...
redis-server is already the newest version (5:5.0.5-2build1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
sysctl: setting key "vm.overcommit_memory": Read-only file system
152:C 14 Apr 2020 04:57:04.871 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
152:C 14 Apr 2020 04:57:04.871 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=152, just started
152:C 14 Apr 2020 04:57:04.871 # Configuration loaded
3BOTDEVEL:first_custom:~: 3bot start
no server running need to start
++ '[' start == kill ']'
++ tmux -f /sandbox/cfg/.tmux.conf has-session -t main
error connecting to /tmp/tmux-0/default (No such file or directory)
++ '[' 1 -eq 1 ']'
++ echo 'no server running need to start'
++ tmux -f /sandbox/cfg/.tmux.conf new -s main -d 'bash --rcfile /sandbox/bin/env_tmux_detach.sh'
++ '[' start '!=' start ']'
Tue 14 04:58:42 ot/ThreebotServer.py - 336 - start : EXCEPTION:
Could not start threebot server
--TRACEBACK------------------
/sandbox/bin/3bot in <module>
51 cli()
/sandbox/bin/3bot in start
22 j.servers.threebot.start(background=True)
/sandbox/lib/jumpscale/Jumpscale/servers/threebot/ThreeBotServersFactory.py in start
124 client = self.default.start(background=True, packages=packages)
/sandbox/lib/jumpscale/Jumpscale/servers/threebot/ThreebotServer.py in start
336 raise j.exceptions.Timeout("Could not start threebot server")
-----------------------------
``` | process | failed to start server manually after having issue tried to start a new container with the app same issue container install name first custom identity weynandkuijpers email weynand threefold io words server true create the container and install jumpscale inside ssh port on docker machine gets created configure start ssh server make sure jumpscale code is on local filesystem install jumpscale for identity weynandkuijpers connection to localhost closed configure secret usage jsx secret secret try jsx secret help for help error got unexpected extra arguments deliver monkey yard magnet inspire deputy winner gravity rough spend better exercise peanut town unhappy glove task cover false police deal near ethics connection to localhost closed could not execute cd tmp next will start redis and make sure secret is in there jsx secret container list list the containers first custom localhost threefoldtech sshport then logged into the container via ssh and manually added the secret jsx secret and then manually start the server weynand happy sudo i password for weynand root happy ssh p localhost a ok first custom jsx secret reading package lists building dependency tree reading state information redis server is already the newest version upgraded newly installed to remove and not upgraded sysctl setting key vm overcommit memory read only file system c apr redis is starting c apr redis version bits commit modified pid just started c apr configuration loaded first custom start no server running need to start tmux f sandbox cfg tmux conf has session t main error connecting to tmp tmux default no such file or directory echo no server running need to start tmux f sandbox cfg tmux conf new s main d bash rcfile sandbox bin env tmux detach sh tue ot threebotserver py start exception could not start threebot server traceback sandbox bin in cli sandbox bin in start j servers threebot start background true sandbox lib jumpscale jumpscale servers threebot threebotserversfactory py in start client self default start background true packages packages sandbox lib jumpscale jumpscale servers threebot threebotserver py in start raise j exceptions timeout could not start threebot server | 1 |
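A side note on the transcript above: the `Error: Got unexpected extra arguments (...)` appears because the 24 words reach `jsx secret` as 24 separate arguments. A minimal sketch of how a wrapper could hand them over as a single argument; the command name and the word string are placeholders taken from the log, not a verified jumpscale API.

```python
# Sketch: pass the mnemonic to `jsx secret` as one argument instead of 24.
import shlex
import subprocess

words = "deliver monkey yard magnet ..."  # placeholder for the 24-word secret

# List form avoids shell word-splitting entirely:
subprocess.run(["python3", "jsx", "secret", words], check=True)

# If a shell command string is required, quote the argument explicitly:
subprocess.run(f"python3 jsx secret {shlex.quote(words)}", shell=True, check=True)
```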
139,290 | 20,822,436,103 | IssuesEvent | 2022-03-18 16:42:31 | bcgov/cas-cif | https://api.github.com/repos/bcgov/cas-cif | closed | Craft usability testing calendar invites | Service Design | One area of feedback we've received from the CIF team is to add more clarity to calendar invites about who should be in attendance and what topics will be covered.
Let's take the opportunity to perfect our invites for usability testing and continue to use the format we develop & iterate on for future meetings.
Acceptance criteria:
- [x] Review Diana's guidance on email updates
- [x] Create draft invite structure
- [x] Create calendar invites for usability testing with the Operations team | 1.0 | Craft usability testing calendar invites - One area of feedback we've received from the CIF team is to add more clarity to calendar invites about who should be in attendance and what topics will be covered.
Let's take the opportunity to perfect our invites for usability testing and continue to use the format we develop & iterate on for future meetings.
Acceptance criteria:
- [x] Review Diana's guidance on email updates
- [x] Create draft invite structure
- [x] Create calendar invites for usability testing with the Operations team | non_process | craft usability testing calendar invites one area of feedback we ve received from the cif team is to add more clarity to calendar invites about who should be in attendance and what topics will be covered let s take the opportunity to perfect our invites for usability testing and continue to use the format we develop iterate on for future meetings acceptance criteria review diana s guidance on email updates create draft invite structure create calendar invites for usability testing with the operations team | 0 |
130,332 | 5,114,318,555 | IssuesEvent | 2017-01-06 18:05:17 | Esri/distance-direction-addin-dotnet | https://api.github.com/repos/Esri/distance-direction-addin-dotnet | opened | Labels not drawn for manually typed input coordinates in ArcMap | B - Bug priority - normal V - 10.4.1 | Drawing Lines by typing coordinates in for **Starting Point** and **Ending Point** produces a line without labels, whereas using the *Map Point* tool's line produces labels.
Steps:
1) Select *Lines* tab
2) Type '0.0 0.0' for the **Starting Point** (without the single quotes)
3) Type '5.0 5.0' for the **Ending Point** (without the single quotes)
4) Select "Enter"
A line is drawn in the Gulf of Guinea pointing to the northwest, which is correct, but there are no labels
5) Select **Map Point** tool and draw a line roughly parallel.
A second line is drawn WITH LABELS.

| 1.0 | Labels not drawn for manually typed input coordinates in ArcMap - Drawing Lines by typing coordinates in for **Starting Point** and **Ending Point** produces a line without labels, whereas using the *Map Point* tool's line produces labels.
Steps:
1) Select *Lines* tab
2) Type '0.0 .0.0' for the **Starting Point** (without the single quotes)
3) Type '5.0 5.0' for the **Ending Point** (without the single quotes)
4) Select "Enter"
A line is drawn in Gulf of Guinea pointing to the northwest, which is correct, but there is no labels
5) Select **Map Point** tool and draw a line roughly parallel.
A second line is drawn WITH LABELS.

| non_process | labels not drawn for manually typed input coordinates in arcmap drawing lines by typing coordinates in for starting point and ending point produces a line without labels whereas using the map point tool s line produces labels steps select lines tab type for the starting point without the single quotes type for the ending point without the single quotes select enter a line is drawn in gulf of guinea pointing to the northwest which is correct but there is no labels select map point tool and draw a line roughly parallel a second line is drawn with labels | 0 |
2,115 | 4,955,101,263 | IssuesEvent | 2016-12-01 19:29:37 | Sage-Bionetworks/Genie | https://api.github.com/repos/Sage-Bionetworks/Genie | opened | add higher-level oncotree code to processed clinical file | clinical data processing | Can we add "main type" as a column to the processed clinical file? If not, might try using root node. | 1.0 | add higher-level oncotree code to processed clinical file - Can we add "main type" as a column to the processed clinical file? If not, might try using root node. | process | add higher level oncotree code to processed clinical file can we add main type as a column to the processed clinical file if not might try using root node | 1 |
5,746 | 8,585,174,813 | IssuesEvent | 2018-11-14 02:03:56 | census-instrumentation/opencensus-service | https://api.github.com/repos/census-instrumentation/opencensus-service | closed | receiver/trace: add Jaeger trace interceptor | process | We've finished adding the Zipkin v2 HTTP interceptor. Perhaps let's look at adding the Jaeger interceptor too.
An advantage of having both Jaeger and Zipkin interceptors is that then we can trivially intercept traffic from Istio Mixer whose distributed tracing integration https://istio.io/docs/tasks/telemetry/distributed-tracing/ talks about sending to either Jaeger or Zipkin. This also allows us to increase the reach of the service/agent. | 1.0 | receiver/trace: add Jaeger trace interceptor - We've finished adding the Zipkin v2 HTTP interceptor. Perhaps let's look at adding the Jaeger interceptor too.
An advantage of having both Jaeger and Zipkin interceptors is that then we can trivially intercept traffic from Istio Mixer whose distributed tracing integration https://istio.io/docs/tasks/telemetry/distributed-tracing/ talks about sending to either Jaeger or Zipkin. This also allows us to increase the reach of the service/agent. | process | receiver trace add jaeger trace interceptor we ve finished adding the zipkin http interceptor perhaps let s look at adding the jaeger interceptor too an advantage of having both jaeger and zipkin interceptors is that then we can trivially intercept traffic from istio mixer whose distributed tracing integration talks about sending to either jaeger or zipkin this also allows us to increase the reach of the service agent | 1 |
20,822 | 27,579,369,283 | IssuesEvent | 2023-03-08 15:12:47 | ukri-excalibur/excalibur-tests | https://api.github.com/repos/ukri-excalibur/excalibur-tests | opened | Save spack spec for every benchmark run | UCL postprocessing | There's a hash for every spec, so we can
- [ ] save spec hash in the perflog as a field
- [ ] save the spec itself in a separate file, named with the hash, to be able to connect them. Make a new spec file be created only when something changes in the spec. | 1.0 | Save spack spec for every benchmark run - There's a hash for every spec, so we can
- [ ] save spec hash in the perflog as a field
- [ ] save the spec itself in a separate file, named with the hash, to be able to connect them. Make a new spec file be created only when something changes in the spec. | process | save spack spec for every benchmark run there s a hash for every spec so we can save spec hash in the perflog as a field save the spec itself in a separate file named with the hash to be able to connect them make a new spec file be created only when something changes in the spec | 1 |
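A minimal sketch of the two checklist items in the record above, assuming the spec is available as JSON; the perflog field name, the directory layout and the use of a SHA-256 digest are illustrative stand-ins (Spack's own DAG hash could be used instead).

```python
# Sketch: record the spec hash as a perflog field and write the full spec to a
# file named after that hash, creating a new file only when the spec changes.
import hashlib
import json
from pathlib import Path

def record_spec(spec: dict, perflog_record: dict, spec_dir: Path) -> str:
    blob = json.dumps(spec, sort_keys=True).encode()
    spec_hash = hashlib.sha256(blob).hexdigest()
    perflog_record["spack_spec_hash"] = spec_hash          # extra perflog field
    spec_file = spec_dir / f"{spec_hash}.json"
    if not spec_file.exists():                              # unchanged spec -> no new file
        spec_dir.mkdir(parents=True, exist_ok=True)
        spec_file.write_text(json.dumps(spec, indent=2))
    return spec_hash
```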
21,510 | 29,799,200,732 | IssuesEvent | 2023-06-16 06:40:45 | parca-dev/parca-agent | https://api.github.com/repos/parca-dev/parca-agent | closed | Normalization out of range errors | bug P0 area/process-mapping | ```
level=debug name=parca-agent ts=2023-06-07T23:09:09.909098662Z caller=pprof.go:282 component=converter_manager pid=8389 msg="failed to normalize address" address=1abd48c err="failed to get normalized address from object file: specified address 1abd48c is outside the mapping range [400000, 401000] for ObjectFile \"/proc/8389/root/bin/parca-agent\""
```
ref #1613 #1615 | 1.0 | Normalization out of range errors - ```
level=debug name=parca-agent ts=2023-06-07T23:09:09.909098662Z caller=pprof.go:282 component=converter_manager pid=8389 msg="failed to normalize address" address=1abd48c err="failed to get normalized address from object file: specified address 1abd48c is outside the mapping range [400000, 401000] for ObjectFile \"/proc/8389/root/bin/parca-agent\""
```
ref #1613 #1615 | process | normalization out of range errors level debug name parca agent ts caller pprof go component converter manager pid msg failed to normalize address address err failed to get normalized address from object file specified address is outside the mapping range for objectfile proc root bin parca agent ref | 1 |
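For readers of the log line above: normalization translates a sampled runtime address into a file-relative address using the process mapping, and the error fires when the address (0x1abd48c) does not fall inside the mapping's range ([0x400000, 0x401000]). A toy sketch of that bounds check; the real agent logic in pprof.go is more involved.

```python
# Sketch: the bounds check behind "address ... is outside the mapping range".
def normalize_address(addr: int, map_start: int, map_end: int, file_offset: int) -> int:
    if not (map_start <= addr < map_end):
        raise ValueError(
            f"address {addr:#x} is outside the mapping range [{map_start:#x}, {map_end:#x}]"
        )
    return addr - map_start + file_offset

# The failing case from the log:
# normalize_address(0x1ABD48C, 0x400000, 0x401000, 0) -> ValueError
```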
9,418 | 12,416,143,703 | IssuesEvent | 2020-05-22 17:33:34 | fluent/fluent-bit | https://api.github.com/repos/fluent/fluent-bit | closed | `rewrite_tag` filter breaks if Rule key contains dots | work-in-process | ## Bug Report
**Describe the bug**
<!--- A clear and concise description of what the bug is. -->
rewrite_tag filter breaks if Rule key contains dots
**To Reproduce**
- Consider the following config:
```
[FILTER]
Name rewrite_tag
Match kube.var.log.containers.*
Rule $kubernetes['annotations']['fluentbit.io/tag'] ^([a-zA-Z0-9]+)$ kube.tag.$1.$TAG[3].$TAG[4] false
```
**Expected behavior**
<!--- A clear and concise description of what you expected to happen. -->
Rule applies for `$kubernetes['annotations']['fluentbit.io/tag']` key.
**Screenshots**
<!--- If applicable, add screenshots to help explain your problem. -->

**Your Environment**
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: 1.4.3
* Configuration: -
* Environment name and version (e.g. Kubernetes? What version?): Kubernetes 1.15
* Server type and version:
* Operating System and version:
* Filters and plugins: -
**Additional context**
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
I've tried escaping and other stuff - nothing worked for me. The `record accessor` definitely breaks on the dot symbol.
| 1.0 | `rewrite_tag` filter breaks if Rule key contains dots - ## Bug Report
**Describe the bug**
<!--- A clear and concise description of what the bug is. -->
rewrite_tag filter breaks if Rule key contains dots
**To Reproduce**
- Consider the following config:
```
[FILTER]
Name rewrite_tag
Match kube.var.log.containers.*
Rule $kubernetes['annotations']['fluentbit.io/tag'] ^([a-zA-Z0-9]+)$ kube.tag.$1.$TAG[3].$TAG[4] false
```
**Expected behavior**
<!--- A clear and concise description of what you expected to happen. -->
Rule applies for `$kubernetes['annotations']['fluentbit.io/tag']` key.
**Screenshots**
<!--- If applicable, add screenshots to help explain your problem. -->

**Your Environment**
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used: 1.4.3
* Configuration: -
* Environment name and version (e.g. Kubernetes? What version?): Kubernetes 1.15
* Server type and version:
* Operating System and version:
* Filters and plugins: -
**Additional context**
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
I've tried escaping and other stuff - nothing worked for me. The `record accessor` definitely breaks on dot symbol.
| process | rewrite tag filter breaks if rule key contains dots bug report describe the bug rewrite tag filter breaks if rule key contains dots to reproduce consider the following config name rewrite tag match kube var log containers rule kubernetes kube tag tag tag false expected behavior rule applies for kubernetes key screenshots your environment version used configuration environment name and version e g kubernetes what version kubernetes server type and version operating system and version filters and plugins additional context i ve tried escaping and other stuff nothing worked for me the record accessor definitely breaks on dot symbol | 1 |
672,626 | 22,833,922,135 | IssuesEvent | 2022-07-12 15:02:51 | trufflesuite/truffle | https://api.github.com/repos/trufflesuite/truffle | closed | unhandled exception because of blocked local storage in truffle-contract.js v4.2.3 | needs reproduced needs investigated priority2 ⚠️ |
- [ ] I've asked for help in the [Truffle Gitter](http://gitter.im/Consensys/truffle) before filing this issue.
---------------------------
## Issue
The file `node_modules\@truffle\contract\dist\truffle-contract.js` contains plenty of duplicate code because some libraries seem to be included twice. Unfortunately those duplicates are not identical, and one of those differences causes an unhandled exception when accessing local storage in case access to local storage is blocked by the browser.
## Steps to Reproduce
* Use Chrome to visit a website which is including `truffle-contract.js`.
* In Chrome block cookies for that domain. Doing so will even block access to local storage.
* Reload website.
* Chrome console will show an unhandled exception: `truffle-contract.js:147920 Uncaught DOMException: Failed to read the 'localStorage' property from 'Window': Access is denied for this document.`
This might even happen in other browsers but I can only test in Chrome on Windows right now.
## Expected Behavior
No unhandled exception. And in ideal circumstances no duplicate code in `truffle-contract.js`.
## Actual Results
Unhandled exception. And plenty of duplicate-but-not-identical code in `truffle-contract.js`.
## Environment
* Operating System: Windows 10 64 Bit German
* Browser: Chrome, Version 81.0.4044.138 (Official Build) (64-bit)
* Ethereum client: MetaMask 7.7.9
* Truffle Contract version (from package.json): @truffle/contract: ^4.2.3
## Further Information
The file `accounts.js` seems to be included twice. This can be verified by searching for the string `@file accounts.js`.
In line 147920 is the following code which is "the bad one" in my opinion and which causes the unhandled exception:
```
if (typeof localStorage === 'undefined') {
delete Wallet.prototype.save;
delete Wallet.prototype.load;
}
```
In line 51339 is the following code which is "the good one" in my opinion:
```
if (!storageAvailable('localStorage')) {
delete Wallet.prototype.save;
delete Wallet.prototype.load;
}
```
In line 51153 is the definition of `function storageAvailable(type)`, which is used by "the good one" and which seems to come straight from the Mozilla website mentioned in its code comment.
In line 34499 is the definition of `function localstorage()` (with a small 's') which seems to do something similar but in yet another way.
| 1.0 | unhandled exception because of blocked local storage in truffle-contract.js v4.2.3 -
- [ ] I've asked for help in the [Truffle Gitter](http://gitter.im/Consensys/truffle) before filing this issue.
---------------------------
## Issue
The file `node_modules\@truffle\contract\dist\truffle-contract.js` contains plenty duplicate code because some libraries seem to be included twice. Unfortunately those duplicates are not identical and one of those differences causes an unhandled exception when accessing local storage in case access to local storage is blocked by the browser.
## Steps to Reproduce
* Use Chrome to visit a website which is including `truffle-contract.js`.
* In Chrome block cookies for that domain. Doing so will even block access to local storage.
* Reload website.
* Chrome console will show an unhandled exception: `truffle-contract.js:147920 Uncaught DOMException: Failed to read the 'localStorage' property from 'Window': Access is denied for this document.`
This might even happen in other browsers but I can only test in Chrome on Windows right now.
## Expected Behavior
No unhandled exception. And in ideal circumstances no duplicate code in `truffle-contract.js`.
## Actual Results
Unhandled exception. And plenty duplicate-but-not-identical code in `truffle-contract.js`.
## Environment
* Operating System: Windows 10 64 Bit German
* Browser: Chrome, Version 81.0.4044.138 (Official Build) (64-bit)
* Ethereum client: MetaMask 7.7.9
* Truffle Contract version (from package.json): @truffle/contract: ^4.2.3
## Further Information
The file `accounts.js` seems to be included twice. This can be verified by searching for the string `@file accounts.js`.
In line 147920 is the following code which is "the bad one" in my opinion and which causes the unhandled exception:
```
if (typeof localStorage === 'undefined') {
delete Wallet.prototype.save;
delete Wallet.prototype.load;
}
```
In line 51339 is the following code which is "the good one" in my opinion:
```
if (!storageAvailable('localStorage')) {
delete Wallet.prototype.save;
delete Wallet.prototype.load;
}
```
In line 51153 is the definition of `function storageAvailable(type)` which is used by "the good one" and which seems to come straight from the Mozilla website mentioned its code comment.
In line 34499 is the definition of `function localstorage()` (with a small 's') which seems to do something similar but in yet another way.
| non_process | unhandled exception because of blocked local storage in truffle contract js i ve asked for help in the before filing this issue issue the file node modules truffle contract dist truffle contract js contains plenty duplicate code because some libraries seem to be included twice unfortunately those duplicates are not identical and one of those differences causes an unhandled exception when accessing local storage in case access to local storage is blocked by the browser steps to reproduce use chrome to visit a website which is including truffle contract js in chrome block cookies for that domain doing so will even block access to local storage reload website chrome console will show an unhandled exception truffle contract js uncaught domexception failed to read the localstorage property from window access is denied for this document this might even happen in other browsers but i can only test in chrome on windows right now expected behavior no unhandled exception and in ideal circumstances no duplicate code in truffle contract js actual results unhandled exception and plenty duplicate but not identical code in truffle contract js environment operating system windows bit german browser chrome version official build bit ethereum client metamask truffle contract version from package json truffle contract further information the file accounts js seems to be included twice this can be verified by searching for the string file accounts js in line is the following code which is the bad one in my opinion and which causes the unhandled exception if typeof localstorage undefined delete wallet prototype save delete wallet prototype load in line is the following code which is the good one in my opinion if storageavailable localstorage delete wallet prototype save delete wallet prototype load in line is the definition of function storageavailable type which is used by the good one and which seems to come straight from the mozilla website mentioned its code comment in line is the definition of function localstorage with a small s which seems to do something similar but in yet another way | 0 |
14,140 | 17,031,722,544 | IssuesEvent | 2021-07-04 17:56:51 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | Black squares, crash with color calibration and color assessment conditions mac 3.6.0.3 | scope: image processing | **Describe the bug/issue**
This perhaps follows on from #9385 where it was determined the black image I was seeing in both darkroom and lighttable was due to "clip negative rgb from gamut" being unticked in color calibration. However with more use, I've discovered the issue doesn't end there.
Here is the screenshot showing black squares appearing in lighttable. Those black squares are not present when viewed in darkroom. xmp: [_DSC0371_01.NEF.xmp.txt](https://github.com/darktable-org/darktable/files/6759407/_DSC0371_01.NEF.xmp.txt)

The program also crashed while playing around with Color Calibration 1 (2nd instance) I think I was in either colorfulness or brightness tab.
As you can see, both instances of color calibration have "clip negative rgb from gamut" ticked.
Here is another screenshot:

And xmp:
[_DSC0307_06.NEF.xmp.txt](https://github.com/darktable-org/darktable/files/6759420/_DSC0307_06.NEF.xmp.txt)
What is happening here? Well, I have multiple instances of color calibration. I turn on color assessment conditions. I turn off color calibration 2, and the image goes black. Noticeably, this only happens when the color assessment conditions are turned on. However sometimes it goes black when the module is turned off. And sometimes it goes black when the module is turned off then on again.
Here is the same image, but a different way of turning the image black:

To reproduce this one, in the same xmp as above, move to colorfulness tab of color calibration 6 and change saturation algorithm from version 1 to version 3, with color assessment conditions on.
All of the above occurs on old edits - that is, edits that were made on 3.4.1. The issues did not occur in 3.4.1, they have only occurred since upgrading to 3.6.0+3~g4287791fe
I have tried to recreate on fresh edits made in 3.6.... by resetting history, adding multiple instances of color calibration, and applying all sorts of weird and wonderful adjustments - but I can't recreate. It only seems to happen on edits that were first made in 3.4.1
I am wondering if it is something to do with the different saturation algorithm in colorfulness tab. Check this.
Opacity 100%:

Opacity 65% (despite less opacity, it somehow got MORE saturated):

But wait... opacity 65% (now less saturated. The only difference is whether I came down from 100% to 65% opacity, or up from a lesser value, say from 50% to 65% opacity):

[_DSC0345.NEF.xmp.txt](https://github.com/darktable-org/darktable/files/6759437/_DSC0345.NEF.xmp.txt)
That xmp has Version 1 selected for saturation algorithm as per screenshots. If I instead select version3, it behaves as expected.
However, as per the xmp provided for _DSC0307_06, I am able to reproduce the black image even when the color calibration module does not display the saturation algorithm version.
**Which commit introduced the error**
_3.6.0+3~g4287791fe
Was working fine on 3.4.1
**Platform**
* darktable version : e.g. _3.6.0+3~g4287791fe
* OS : Mac OS Sierra 10.12.6
* Memory : 4 GB 1600 MHz DDR3
* Graphics card : Intel HD Graphics 4000 and NVIDIA GeForce GT 650M
* OpenCL installed : Y
* OpenCL activated : Y I think. It was on 3.4.1
| 1.0 | Black squares, crash with color calibration and color assessment conditions mac 3.6.0.3 - **Describe the bug/issue**
This perhaps follows on from #9385 where it was determined the black image I was seeing in both darkroom and lighttable was due to "clip negative rgb from gamut" being unticked in color calibration. However with more use, I've discovered the issue doesn't end there.
Here is the screenshot showing black squares appearing in lighttable. Those black squares are not present when viewed in darkroom. xmp: [_DSC0371_01.NEF.xmp.txt](https://github.com/darktable-org/darktable/files/6759407/_DSC0371_01.NEF.xmp.txt)

The program also crashed while playing around with Color Calibration 1 (2nd instance) I think I was in either colorfulness or brightness tab.
As you can see, both instances of color calibration have "clip negative rgb from gamut" ticked.
Here is another screenshot:

And xmp:
[_DSC0307_06.NEF.xmp.txt](https://github.com/darktable-org/darktable/files/6759420/_DSC0307_06.NEF.xmp.txt)
What is happening here? Well, I have multiple instances of color calibration. I turn on color assessment conditions. I turn off color calibration 2, and the image goes black. Noticeably, this only happens when the color assessment conditions are turned on. However sometimes it goes black when the module is turned off. And sometimes it goes black when the module is turned off then on again.
Here is the same image, but a different way of turning the image black:

To reproduce this one, in the same xmp as above, move to colorfulness tab of color calibration 6 and change saturation algorithm from version 1 to version 3, with color assessment conditions on.
All of the above occurs on old edits - that is, edits that were made on 3.4.1. The issues did not occur in 3.4.1, they have only occurred since upgrading to 3.6.0+3~g4287791fe
I have tried to recreate on fresh edits made in 3.6.... by resetting history, adding multiple instances of color calibration, and applying all sorts of weird and wonderful adjustments - but I can't recreate. It only seems to happen on edits that were first made in 3.4.1
I am wondering if it is something to do with the different saturation algorithm in colorfulness tab. Check this.
Opacity 100%:

Opacity 65% (despite less opacity, it somehow got MORE saturated):

But wait... opacity 65% (now less saturated. The only difference is whether I came down from 100% to 65% opacity, or up from a lesser value, say from 50% to 65% opacity):

[_DSC0345.NEF.xmp.txt](https://github.com/darktable-org/darktable/files/6759437/_DSC0345.NEF.xmp.txt)
That xmp has Version 1 selected for saturation algorithm as per screenshots. If I instead select version3, it behaves as expected.
However, as per the xmp provided for _DSC0307_06, I am able to reproduce the black image even when the color calibration module does not display the saturation algorithm version.
**Which commit introduced the error**
_3.6.0+3~g4287791fe
Was working fine on 3.4.1
**Platform**
* darktable version : e.g. _3.6.0+3~g4287791fe
* OS : Mac OS Sierra 10.12.6
* Memory : 4 GB 1600 MHz DDR3
* Graphics card : Intel HD Graphics 4000 and NVIDIA GeForce GT 650M
* OpenCL installed : Y
* OpenCL activated : Y I think. It was on 3.4.1
| process | black squares crash with color calibration and color assessment conditions mac describe the bug issue this perhaps follows on from where it was determined the black image i was seeing in both darkroom and lighttable was due to clip negative rgb from gamut being unticked in color calibration however with more use i ve discovered the issue doesn t end there here is the screenshot showing black squares appearing in lighttable those black squares are not present when viewed in darkroom xmp the program also crashed while playing around with color calibration instance i think i was in either colorfulness or brightness tab as you can see both instances of color calibration have clip negative rgb from gamut ticked here is another screenshot and xmp what is happening here well i have multiple instances of color calibration i turn on color assessment conditions i turn off color calibration and the image goes black noticeably this only happens when the color assessment conditions are turned on however sometimes it goes black when the module is turned off and sometimes it goes black when the module is turned off then on again here is the same image but a different way of turning the image black to reproduce this one in the same xmp as above move to colorfulness tab of color calibration and change saturation algorithm from version to version with color assessment conditions on all of the above occurs on old edits that is edits that were made on the issues did not occur in they have only occurred since upgrading to i have tried to recreate on fresh edits made in by resetting history adding multiple instances of color calibration and applying all sorts of weird and wonderful adjustments but i can t recreate it only seems to happen on edits that were first made in i am wondering if it is something to do with the different saturation algorithm in colorfulness tab check this opacity opacity despite less opacity it somehow got more saturated but wait opacity now less saturated only difference being whether i went down from opacity that xmp has version selected for saturation algorithm as per screenshots if i instead select it behaves as expected however as per the xmp for provided i am able to reproduce the black image even when the color calibration module does not display saturation algorithm version which commit introduced the error was working fine on platform darktable version e g os mac os sierra memory gb mhz graphics card intel hd graphics and nvidia geforce gt opencl installed y opencl activated y i think it was on | 1 |
12,211 | 19,322,464,868 | IssuesEvent | 2021-12-14 07:47:36 | beeldengeluid/beng-lod-server | https://api.github.com/repos/beeldengeluid/beng-lod-server | closed | As a publisher of NISV RDF data for external parties, I want it to use unique, persistent identifiers that use a namespace that reflects the identity of NISV | Theme: linked data Help: requirements needed | Internally for test purposes, we can use identifiers that are simply a namespace of our choice plus the program identifier. Once we publish the data, those identifiers should be globally unique, persistent, and use a namespace that makes it clear that the data is from NISV, preferably also that is consistent with how URLs are constructed for other uses by NISV. We have also previously discussed that we want the namespace to be understandable for a user and not too specifically connected to a certain technology/use of the data (for this reason we opted not to choose lod.beeldengeluid.nl) | 1.0 | As a publisher of NISV RDF data for external parties, I want it to use unique, persistent identifiers that use a namespace that reflects the identity of NISV - Internally for test purposes, we can use identifiers that are simply a namespace of our choice plus the program identifier. Once we publish the data, those identifiers should be globally unique, persistent, and use a namespace that makes it clear that the data is from NISV, preferably also that is consistent with how URLs are constructed for other uses by NISV. We have also previously discussed that we want the namespace to be understandable for a user and not too specifically connected to a certain technology/use of the data (for this reason we opted not to choose lod.beeldengeluid.nl) | non_process | as a publisher of nisv rdf data for external parties i want it to use unique persistent identifiers that use a namespace that reflects the identity of nisv internally for test purposes we can use identifiers that are simply a namespace of our choice plus the program identifier once we publish the data those identifiers should be globally unique persistent and use a namespace that makes it clear that the data is from nisv preferably also that is consistent with how urls are constructed for other uses by nisv we have also previously discussed that we want the namespace to be understandable for a user and not too specifically connected to a certain technology use of the data for this reason we opted not to choose lod beeldengeluid nl | 0 |
17,887 | 23,859,348,920 | IssuesEvent | 2022-09-07 05:03:50 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | [processor/transform] Add ability to specify attributes that aren't allowed to be dropped during limiting | good first issue priority:p3 processor/transform | **Is your feature request related to a problem? Please describe.**
As part of the `limit` function being added in #9552, which attributes are dropped as part of limiting is random. This could result in priority attributes being dropped.
**Describe the solution you'd like**
The `limit` function should be updated so that users can specify a list of attribute names that should never be dropped during limiting. The number of "priority attributes" should not be allowed to be more than the limit supplied.
**Additional context**
Issue originated from feedback in #9552
| 1.0 | [processor/transform] Add ability to specify attributes that aren't allowed to be dropped during limiting - **Is your feature request related to a problem? Please describe.**
As part of the `limit` function being added in #9552, which attributes are dropped as part of limiting is random. This could result in priority attributes being dropped.
**Describe the solution you'd like**
The `limit` function should be updated so that users can specify a list of attribute names that should never be dropped during limiting. The number of "priority attributes" should not be allowed to be more than the limit supplied.
**Additional context**
Issue originated from feedback in #9552
| process | add ability to specify attributes that aren t allowed to be dropped during limiting is your feature request related to a problem please describe as part of the limit function being added in which attributes are dropped as part of limiting is random this could result in priority attributes being dropped describe the solution you d like the limit function should be updated so that users can specify a list of attribute names that should never be dropped during limiting the number of priority attributes should not be allowed to be more than the limit supplied additional context issue originated from feedback in | 1 |
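The request above is that the transform processor's `limit` step never discard certain named attributes, and that the list of protected names may not exceed the limit itself. The collector is written in Go; the TypeScript sketch below only illustrates the described behaviour with invented names (`limitAttributes`, `priorityKeys`) and is not the processor's actual implementation.
```
// Hypothetical sketch of "limit, but never drop priority keys".
function limitAttributes(
  attrs: Map<string, unknown>,
  limit: number,
  priorityKeys: string[],
): Map<string, unknown> {
  if (priorityKeys.length > limit) {
    throw new Error('priority key list must not exceed the limit');
  }
  const kept = new Map<string, unknown>();
  // Keep priority attributes first, if present.
  for (const key of priorityKeys) {
    if (attrs.has(key)) kept.set(key, attrs.get(key));
  }
  // Fill the remaining budget with whatever else is there.
  for (const [key, value] of attrs) {
    if (kept.size >= limit) break;
    if (!kept.has(key)) kept.set(key, value);
  }
  return kept;
}
```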
586,220 | 17,572,981,712 | IssuesEvent | 2021-08-15 04:01:44 | KingSupernova31/RulesGuru | https://api.github.com/repos/KingSupernova31/RulesGuru | opened | SeachLinks display too many questions in the questions list | bug medium priority | If you navigate to a searchLink like [this](https://rulesguru.net/?RG1g7xI94IIGG) one and open the questions list, it will display too many questions. (621 in my case.) If you change any option and then change it back, it recalculates to the correct number. | 1.0 | SeachLinks display too many questions in the questions list - If you navigate to a searchLink like [this](https://rulesguru.net/?RG1g7xI94IIGG) one and open the questions list, it will display too many questions. (621 in my case.) If you change any option and then change it back, it recalculates to the correct number. | non_process | seachlinks display too many questions in the questions list if you navigate to a searchlink like one and open the questions list it will display too many questions in my case if you change any option and then change it back it recalculates to the correct number | 0 |
135,534 | 5,254,030,151 | IssuesEvent | 2017-02-02 11:31:27 | magicDGS/ReadTools | https://api.github.com/repos/magicDGS/ReadTools | opened | Include more information in read groups from barcode file | enhancement LOW_PRIORITY new feature/tool | Because of the new barcode file format (#75), we can include new optional columns to set up the read group information apart of the library and sample name. | 1.0 | Include more information in read groups from barcode file - Because of the new barcode file format (#75), we can include new optional columns to set up the read group information apart of the library and sample name. | non_process | include more information in read groups from barcode file because of the new barcode file format we can include new optional columns to set up the read group information apart of the library and sample name | 0 |
20,897 | 27,727,105,082 | IssuesEvent | 2023-03-15 03:47:13 | NCAR/ucomp-pipeline | https://api.github.com/repos/NCAR/ucomp-pipeline | opened | Check centering against generated images | process validation | From Steve:
> I have generated some artificial occulter images. They are located in svn at: D:\HAO-IG\UCOMP\Integration and Testing\Centroid_Test in a sub folder named Images. The 300 images are stored as idl .sav files and each contain the image, the image offsets, dx and dy and the radius used to compute them. My software to create the images and analyze them is located there as well. I attach a plot of my results. Some takeaways:
> 1) I have modified my centroiding routine to use nx-1/2 as the center of the array
> 2) The 0.5 pixel difference in our derived radius was due to a bug in my centroiding which is now fixed
> Let me know if you have questions.
[Plot of results](https://github.com/NCAR/ucomp-pipeline/files/10975798/idl.pdf)
| 1.0 | Check centering against generated images - From Steve:
> I have generated some artificial occulter images. They are located in svn at: D:\HAO-IG\UCOMP\Integration and Testing\Centroid_Test in a sub folder named Images. The 300 images are stored as idl .sav files and each contain the image, the image offsets, dx and dy and the radius used to compute them. My software to create the images and analyze them is located there as well. I attach a plot of my results. Some takeaways:
> 1) I have modified my centroiding routine to use nx-1/2 as the center of the array
> 2) The 0.5 pixel difference in our derived radius was due to a bug in my centroiding which is now fixed
> Let me know if you have questions.
[Plot of results](https://github.com/NCAR/ucomp-pipeline/files/10975798/idl.pdf)
| process | check centering against generated images from steve i have generated some artificial occulter images they are located in svn at d hao ig ucomp integration and testing centroid test in a sub folder named images the images are stored as idl sav files and each contain the image the image offsets dx and dy and the radius used to compute them my software to create the images and analyze them is located there as well i attach a plot of my results some takeaways i have modified my centroiding routine to use nx as the center of the array the pixel difference in our derived radius was due to a bug in my centroiding which is now fixed let me know if you have questions | 1 |
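The record above validates centring against synthetic occulter images whose offsets (dx, dy) and radius are known, with (nx-1)/2 taken as the array centre. The real analysis lives in IDL and .sav files; the TypeScript fragment below is only a rough sketch of that kind of check — an intensity-weighted centroid measured from the array centre and compared against the known offsets — with made-up names and no .sav handling.
```
// Sketch: intensity-weighted centroid of a 2D image, measured relative to
// the array centre (nx-1)/2, (ny-1)/2, then compared to the known offsets.
function centroidOffset(image: number[][]): { dx: number; dy: number } {
  const ny = image.length;
  const nx = image[0].length;
  let sum = 0, sx = 0, sy = 0;
  for (let y = 0; y < ny; y++) {
    for (let x = 0; x < nx; x++) {
      const v = image[y][x];
      sum += v;
      sx += v * x;
      sy += v * y;
    }
  }
  // Offsets from the geometric centre of the array.
  return { dx: sx / sum - (nx - 1) / 2, dy: sy / sum - (ny - 1) / 2 };
}

// Hypothetical test against one synthetic image with known dx/dy:
// const { dx, dy } = centroidOffset(img);
// assert(Math.abs(dx - knownDx) < 0.05 && Math.abs(dy - knownDy) < 0.05);
```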
553,437 | 16,372,001,427 | IssuesEvent | 2021-05-15 10:15:18 | unitystation/unitystation | https://api.github.com/repos/unitystation/unitystation | closed | Playing the game on fullscreen with AMD's "Virtual Super resolution" causes the game to have low FPS | Priority: Questionable Type: Performance | VSR allows you to select higher resolutions which will then be downscaled to fit the monitor, I believe Unitystation is selecting the highest possible when going fullscreen causing low FPS. Disabling it removes any trace of the problem.
Steps to Reproduce
Just turn on VSR while playing in fullscreen; going to windowed mode fixes the problem.
| 1.0 | Playing the game on fullscreen with AMD's "Virtual Super resolution" causes the game to have low FPS - VSR allows you to select higher resolutions which will then be downscaled to fit the monitor, I believe Unitystation is selecting the highest possible when going fullscreen causing low FPS. Disabling it removes any trace of the problem.
Steps to Reproduce
Just turn on VSR while playing in fullscreen; going to windowed mode fixes the problem.
| non_process | playing the game on fullscreen with amd s virtual super resolution causes the game to have low fps vsr allows you to select higher resolutions which will then be downscaled to fit the monitor i believe unitystation is selecting the highest possible when going fullscreen causing low fps disabling it removes any trace of the problem steps to reproduce just turn on vsr while playing on fullscreen going to window mode fixes the problem | 0 |
388,555 | 11,489,077,258 | IssuesEvent | 2020-02-11 14:57:09 | fossasia/open-event-frontend | https://api.github.com/repos/fossasia/open-event-frontend | closed | Deleting/Restoring an event hangs the frontend | Priority: High bug | Deleting/Restoring event makes the frontend hang and the request is not processed by the server. Ultimately, the Server needs to be restarted to fix the issue.
<img width="1119" alt="Screenshot 2020-02-04 at 12 09 20 PM" src="https://user-images.githubusercontent.com/44091822/73720141-48ed9000-4747-11ea-8649-e9a937d9d702.png">
| 1.0 | Deleting/Restoring an event hangs the frontend - Deleting/Restoring event makes the frontend hang and the request is not processed by the server. Ultimately, the Server needs to be restarted to fix the issue.
<img width="1119" alt="Screenshot 2020-02-04 at 12 09 20 PM" src="https://user-images.githubusercontent.com/44091822/73720141-48ed9000-4747-11ea-8649-e9a937d9d702.png">
| non_process | deleting restoring an event hangs the frontend deleting restoring event makes the frontend hang and the request is not processed by the server ultimately the server needs to be restarted to fix the issue img width alt screenshot at pm src | 0 |
20,888 | 27,714,305,239 | IssuesEvent | 2023-03-14 16:00:12 | dDevTech/tapas-top-frontend | https://api.github.com/repos/dDevTech/tapas-top-frontend | closed | Verificación Edad | in process require testing | Create a package in webapp.app.modules.account named age-verification
This package will contain the files needed for:
A new age-verify page with a field to select the date of birth. It will verify the age (>18)
If the age is valid, there will be a button to continue to the registration fields located at /account/register
OPTIONAL: Also add the option to select the date of birth with a calendar
| 1.0 | Verificación Edad - Create a package in webapp.app.modules.account named age-verification
This package will contain the files needed for:
A new age-verify page with a field to select the date of birth. It will verify the age (>18)
If the age is valid, there will be a button to continue to the registration fields located at /account/register
OPTIONAL: Also add the option to select the date of birth with a calendar
| process | verificación edad crear package en webapp app modules account con el nombre age verification en este paquete se incluirán los archivos necesarios para una página nueva age verify con un campo para seleccionar la fecha de nacimiento verificará la edad si la edad es correcta existirá un botón para continuar a rellenar los campos de registro situados en account register opcional añadir la opción de seleccionar también con calendario la fecha de nacimiento | 1 |
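The record above specifies an age-verification page: a date-of-birth field, an over-18 check, and a button that continues to the registration form at /account/register. A small TypeScript sketch of the over-18 check is shown below; the function name and the exact cutoff handling are illustrative assumptions rather than the project's code.
```
// Sketch of the ">= 18 years old" check for a date-of-birth input.
function isAdult(dateOfBirth: Date, now: Date = new Date()): boolean {
  const cutoff = new Date(now);
  cutoff.setFullYear(cutoff.getFullYear() - 18); // 18 years before "now"
  return dateOfBirth <= cutoff;
}

// Example: someone born exactly 18 years ago today passes the check.
// isAdult(new Date('2005-03-14'), new Date('2023-03-14')) === true
```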
9,115 | 7,570,934,161 | IssuesEvent | 2018-04-23 10:33:09 | moby/moby | https://api.github.com/repos/moby/moby | closed | SELinux alert while building image | area/security/selinux status/more-info-needed version/17.04 | <!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support can be found at the following locations:
- Docker Support Forums - https://forums.docker.com
- IRC - irc.freenode.net #docker channel
- Post a question on StackOverflow, using the Docker tag
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
While building an image, in a step that runs a lot of scripts (and I don't really know which one triggers this), SELinux alerts, although image builds fine.
**Steps to reproduce the issue:**
1. Install docker from official repos.
1. `docker-compose build --pull`
**Describe the results you received:**
```
SELinux is preventing build.sh from write access on the directory fd.
***** Plugin catchall (100. confidence) suggests **************************
If cree que de manera predeterminada, build.sh debería permitir acceso write sobre fd directory.
Then debería reportar esto como un error.
Puede generar un módulo de política local para permitir este acceso.
Do
allow this access for now by executing:
# ausearch -c 'build.sh' --raw | audit2allow -M my-buildsh
# semodule -X 300 -i my-buildsh.pp
Additional Information:
Source Context system_u:system_r:container_t:s0:c757,c951
Target Context system_u:system_r:container_t:s0:c757,c951
Target Objects fd [ dir ]
Source build.sh
Source Path build.sh
Port <Unknown>
Host yajolap.yajodomain
Source RPM Packages
Target RPM Packages
Policy RPM selinux-policy-3.13.1-225.11.fc25.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name yajolap.yajodomain
Platform Linux yajolap.yajodomain 4.10.8-200.fc25.x86_64 #1
SMP Fri Mar 31 13:20:22 UTC 2017 x86_64 x86_64
Alert Count 2
First Seen 2017-04-24 09:59:58 CEST
Last Seen 2017-04-24 10:00:13 CEST
Local ID 9c27ff97-f7f8-491c-a50a-9c70f9752281
Raw Audit Messages
type=AVC msg=audit(1493020813.393:2623): avc: denied { write } for pid=10101 comm="build.sh" name="fd" dev="proc" ino=1930597 scontext=system_u:system_r:container_t:s0:c757,c951 tcontext=system_u:system_r:container_t:s0:c757,c951 tclass=dir permissive=0
Hash: build.sh,container_t,container_t,dir,write
```
**Describe the results you expected:**
No SELinux problems.
**Additional information you deem important (e.g. issue happens only occasionally):**
It says problems with `fd` directory, but I can tell you I don't use a directory with that name in none of the scripts. I suspect this happens while building something that installs something with apt.
**Output of `docker version`:**
```
Client:
Version: 17.04.0-ce
API version: 1.28
Go version: go1.7.5
Git commit: 4845c56
Built: Wed Apr 5 19:14:52 2017
OS/Arch: linux/amd64
Server:
Version: 17.04.0-ce
API version: 1.28 (minimum version 1.12)
Go version: go1.7.5
Git commit: 4845c56
Built: Wed Apr 5 19:14:52 2017
OS/Arch: linux/amd64
Experimental: false
```
**Output of `docker info`:**
```
Containers: 80
Running: 0
Paused: 0
Stopped: 80
Images: 713
Server Version: 17.04.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary:
containerd version: 422e31ce907fd9c3833a38d7b8fdd023e5a76e73
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
seccomp
Profile: default
selinux
Kernel Version: 4.10.8-200.fc25.x86_64
Operating System: Fedora 25 (Workstation Edition)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 6.76GiB
Name: yajolap.yajodomain
ID: KUBN:F7JL:URX6:HO55:R3L2:SCUU:IWVY:EZ2O:F53G:WHTO:3G4D:R4YU
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**
Fedora 25, physical, SELinux enforcing, official packages. | True | SELinux alert while building image - <!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support can be found at the following locations:
- Docker Support Forums - https://forums.docker.com
- IRC - irc.freenode.net #docker channel
- Post a question on StackOverflow, using the Docker tag
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
While building an image, in a step that runs a lot of scripts (and I don't really know which one triggers this), SELinux alerts, although image builds fine.
**Steps to reproduce the issue:**
1. Install docker from official repos.
1. `docker-compose build --pull`
**Describe the results you received:**
```
SELinux is preventing build.sh from write access on the directory fd.
***** Plugin catchall (100. confidence) suggests **************************
If cree que de manera predeterminada, build.sh debería permitir acceso write sobre fd directory.
Then debería reportar esto como un error.
Puede generar un módulo de política local para permitir este acceso.
Do
allow this access for now by executing:
# ausearch -c 'build.sh' --raw | audit2allow -M my-buildsh
# semodule -X 300 -i my-buildsh.pp
Additional Information:
Source Context system_u:system_r:container_t:s0:c757,c951
Target Context system_u:system_r:container_t:s0:c757,c951
Target Objects fd [ dir ]
Source build.sh
Source Path build.sh
Port <Unknown>
Host yajolap.yajodomain
Source RPM Packages
Target RPM Packages
Policy RPM selinux-policy-3.13.1-225.11.fc25.noarch
Selinux Enabled True
Policy Type targeted
Enforcing Mode Enforcing
Host Name yajolap.yajodomain
Platform Linux yajolap.yajodomain 4.10.8-200.fc25.x86_64 #1
SMP Fri Mar 31 13:20:22 UTC 2017 x86_64 x86_64
Alert Count 2
First Seen 2017-04-24 09:59:58 CEST
Last Seen 2017-04-24 10:00:13 CEST
Local ID 9c27ff97-f7f8-491c-a50a-9c70f9752281
Raw Audit Messages
type=AVC msg=audit(1493020813.393:2623): avc: denied { write } for pid=10101 comm="build.sh" name="fd" dev="proc" ino=1930597 scontext=system_u:system_r:container_t:s0:c757,c951 tcontext=system_u:system_r:container_t:s0:c757,c951 tclass=dir permissive=0
Hash: build.sh,container_t,container_t,dir,write
```
**Describe the results you expected:**
No SELinux problems.
**Additional information you deem important (e.g. issue happens only occasionally):**
It says problems with `fd` directory, but I can tell you I don't use a directory with that name in none of the scripts. I suspect this happens while building something that installs something with apt.
**Output of `docker version`:**
```
Client:
Version: 17.04.0-ce
API version: 1.28
Go version: go1.7.5
Git commit: 4845c56
Built: Wed Apr 5 19:14:52 2017
OS/Arch: linux/amd64
Server:
Version: 17.04.0-ce
API version: 1.28 (minimum version 1.12)
Go version: go1.7.5
Git commit: 4845c56
Built: Wed Apr 5 19:14:52 2017
OS/Arch: linux/amd64
Experimental: false
```
**Output of `docker info`:**
```
Containers: 80
Running: 0
Paused: 0
Stopped: 80
Images: 713
Server Version: 17.04.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary:
containerd version: 422e31ce907fd9c3833a38d7b8fdd023e5a76e73
runc version: 9c2d8d184e5da67c95d601382adf14862e4f2228
init version: 949e6fa
Security Options:
seccomp
Profile: default
selinux
Kernel Version: 4.10.8-200.fc25.x86_64
Operating System: Fedora 25 (Workstation Edition)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 6.76GiB
Name: yajolap.yajodomain
ID: KUBN:F7JL:URX6:HO55:R3L2:SCUU:IWVY:EZ2O:F53G:WHTO:3G4D:R4YU
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**
Fedora 25, physical, SELinux enforcing, official packages. | non_process | selinux alert while building image if you are reporting a new issue make sure that we do not have any duplicates already open you can ensure this by searching the issue list for this repository if there is a duplicate please close your issue and add a comment to the existing issue instead if you suspect your issue is a bug please edit your issue description to include the bug report information shown below if you fail to provide this information within days we cannot debug your issue and will close it we will however reopen it if you later provide the information for more information about reporting issues see general support information the github issue tracker is for bug reports and feature requests general support can be found at the following locations docker support forums irc irc freenode net docker channel post a question on stackoverflow using the docker tag bug report information use the commands below to provide key information from your environment you do not have to include this information if this is a feature request description briefly describe the problem you are having in a few paragraphs while building an image in a step that runs a lot of scripts and i don t really know which one triggers this selinux alerts although image builds fine steps to reproduce the issue install docker from official repos docker compose build pull describe the results you received selinux is preventing build sh from write access on the directory fd plugin catchall confidence suggests if cree que de manera predeterminada build sh debería permitir acceso write sobre fd directory then debería reportar esto como un error puede generar un módulo de política local para permitir este acceso do allow this access for now by executing ausearch c build sh raw m my buildsh semodule x i my buildsh pp additional information source context system u system r container t target context system u system r container t target objects fd source build sh source path build sh port host yajolap yajodomain source rpm packages target rpm packages policy rpm selinux policy noarch selinux enabled true policy type targeted enforcing mode enforcing host name yajolap yajodomain platform linux yajolap yajodomain smp fri mar utc alert count first seen cest last seen cest local id raw audit messages type avc msg audit avc denied write for pid comm build sh name fd dev proc ino scontext system u system r container t tcontext system u system r container t tclass dir permissive hash build sh container t container t dir write describe the results you expected no selinux problems additional information you deem important e g issue happens only occasionally it says problems with fd directory but i can tell you i don t use a directory with that name in none of the scripts i suspect this happens while building something that installs something with apt output of docker version client version ce api version go version git commit built wed apr os arch linux server version ce api version minimum version go version git commit built wed apr os arch linux experimental false output of docker info containers running paused stopped images server version ce storage driver backing filesystem extfs supports d type true native overlay diff true logging driver journald cgroup driver cgroupfs plugins volume local network bridge host macvlan null overlay swarm inactive runtimes runc default runtime runc init binary containerd version runc version init version security options 
seccomp profile default selinux kernel version operating system fedora workstation edition ostype linux architecture cpus total memory name yajolap yajodomain id kubn scuu iwvy whto docker root dir var lib docker debug mode client false debug mode server false registry experimental false insecure registries live restore enabled false additional environment details aws virtualbox physical etc fedora physical selinux enforcing official packages | 0 |
8,427 | 11,594,435,355 | IssuesEvent | 2020-02-24 15:19:52 | ORNL-AMO/AMO-Tools-Desktop | https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop | closed | Cost Decimal places | Fans Process Heating Pumps Steam Treasure Hunt | ALL cost results - no decimal places
Pumps - Assessment - good
Pumps - Results - good
**Pumps - Results Sankey - need fix**
PH - Results - good
**PH - Results Sankey - need fix**
Fans- Assessment - good
Fans- Results - good
**Fans - Results Sankey - need fix**
**Steam - Assessment - need fix - lots of rows**
**Steam - Results - need fix - lots of rows**
KEEP DECIMALS for Marginal Steam costs
**Steam - Diagram - need fix - lots of rows + "more cost details"**
KEEP DECIMALS for Marginal Steam costs
**TH - Treasure Chest - table & side results - need fix**
**TH - Results - Opportunity Summary & Opportunity Payback Details - need fix**
| 1.0 | Cost Decimal places - ALL cost results - no decimal places
Pumps - Assessment - good
Pumps - Results - good
**Pumps - Results Sankey - need fix**
PH - Results - good
**PH - Results Sankey - need fix**
Fans- Assessment - good
Fans- Results - good
**Fans - Results Sankey - need fix**
**Steam - Assessment - need fix - lots of rows**
**Steam - Results - need fix - lots of rows**
KEEP DECIMALS for Marginal Steam costs
**Steam - Diagram - need fix - lots of rows + "more cost details"**
KEEP DECIMALS for Marginal Steam costs
**TH - Treasure Chest - table & side results - need fix**
**TH - Results - Opportunity Summary & Opportunity Payback Details - need fix**
| process | cost decimal places all cost results no decimal places pumps assessment good pumps results good pumps results sankey need fix ph results good ph results sankey need fix fans assessment good fans results good fans results sankey need fix steam assessment need fix lots of rows steam results need fix lots of rows keep decimals for marginal steam costs steam diagram need fix lots of rows more cost details keep decimals for marginal steam costs th treasure chest table side results need fix th results opportunity summary opportunity payback details need fix | 1 |
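The record above asks for cost results to be shown with no decimal places everywhere except the marginal steam costs, which keep their decimals. As a rough TypeScript illustration of that rule (not the AMO-Tools-Desktop code), a formatter could take a flag for the fields that keep decimals:
```
// Illustrative only: format costs with 0 decimals by default, but keep
// decimals for fields flagged as marginal steam costs.
function formatCost(value: number, keepDecimals = false): string {
  return value.toLocaleString('en-US', {
    minimumFractionDigits: keepDecimals ? 2 : 0,
    maximumFractionDigits: keepDecimals ? 2 : 0,
  });
}

// formatCost(12345.678)      -> "12,346"
// formatCost(0.0421, true)   -> "0.04"
```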
372,319 | 25,995,450,672 | IssuesEvent | 2022-12-20 11:09:58 | exasol/transformers-extension | https://api.github.com/repos/exasol/transformers-extension | closed | Add manual setup description to User Guide | documentation | - User guide explains how to install the extension from the released artifacts
- We need to add manual setup description
- clone repo
- create virtual env
- install via `pip install`
- .. | 1.0 | Add manual setup description to User Guide - - User guide explains how to install the extension from the released artifacts
- We need to add manual setup description
- clone repo
- create virtual env
- install via `pip install`
- .. | non_process | add manual setup description to user guide user guide explains how to install the extension from the released artifacts we need to add manual setup description clone repo create virtual env install via pip install | 0 |
381,323 | 26,446,557,090 | IssuesEvent | 2023-01-16 07:55:00 | Stanford-Health/wearipedia | https://api.github.com/repos/Stanford-Health/wearipedia | opened | Improve `authenticate()` documentation | documentation | Sibling issue to #113 . Instead of using an authentication dict, just pass the values in as kwargs to improve documentation cleanliness. | 1.0 | Improve `authenticate()` documentation - Sibling issue to #113 . Instead of using an authentication dict, just pass the values in as kwargs to improve documentation cleanliness. | non_process | improve authenticate documentation sibling issue to instead of using an authentication dict just pass the values in as kwargs to improve documentation cleanliness | 0 |
48 | 2,513,878,254 | IssuesEvent | 2015-01-15 04:33:35 | GsDevKit/zinc | https://api.github.com/repos/GsDevKit/zinc | closed | Missing GsSocket exceptions | inprocess | Marten reports:
Zinc under Gemstone does not consider some GsSocket exceptions, which
under some circumstances crashes the server ... | 1.0 | Missing GsSocket exceptions - Marten reports:
Zinc under Gemstone does not consider some GsSocket exceptions, which
under some circumstances crashes the server ... | process | missing gssocket exceptions marten reports zinc under gemstone does not consider some gssocket exceptions which under some circumstances crashes the server | 1 |
49,767 | 7,539,546,452 | IssuesEvent | 2018-04-17 01:10:22 | CpuKnows/NarrativeQA-Project | https://api.github.com/repos/CpuKnows/NarrativeQA-Project | opened | Setup on google cloud | documentation enhancement | Research and document necessary elements for running in the cloud
- [ ] Run some example code on the cloud
- [ ] How to load dataset onto cloud storage or bigtable
- [ ] How to install necessary packages and run python code
- [ ] How to start / stop / set limits on instances
- [ ] Logging
- [ ] Save models
Will be responsible for educating the group.
Tutorials:
https://medium.com/google-cloud/set-up-google-cloud-gpu-for-fast-ai-45a77fa0cb48
https://medium.com/google-cloud/using-a-gpu-tensorflow-on-google-cloud-platform-1a2458f42b0
http://cs231n.github.io/gce-tutorial/
https://cloud.google.com/ml-engine/docs/tensorflow/how-tos | 1.0 | Setup on google cloud - Research and document necessary elements for running in the cloud
- [ ] Run some example code on the cloud
- [ ] How to load dataset onto cloud storage or bigtable
- [ ] How to install necessary packages and run python code
- [ ] How to start / stop / set limits on instances
- [ ] Logging
- [ ] Save models
Will be responsible for educating the group.
Tutorials:
https://medium.com/google-cloud/set-up-google-cloud-gpu-for-fast-ai-45a77fa0cb48
https://medium.com/google-cloud/using-a-gpu-tensorflow-on-google-cloud-platform-1a2458f42b0
http://cs231n.github.io/gce-tutorial/
https://cloud.google.com/ml-engine/docs/tensorflow/how-tos | non_process | setup on google cloud research and document necessary elements for running in the cloud run some example code on the cloud how to load dataset onto cloud storage or bigtable how to install necessary packages and run python code how to start stop set limits on instances logging save models will be responsible for educating the group tutorials | 0 |
171,952 | 21,007,680,959 | IssuesEvent | 2022-03-30 01:19:07 | Satheesh575555/kernel-mm-huge_memory | https://api.github.com/repos/Satheesh575555/kernel-mm-huge_memory | opened | WS-2021-0462 (Medium) detected in linuxlinux-4.19.236 | security vulnerability | ## WS-2021-0462 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.236</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/isdn/capi/kcapi.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/isdn/capi/kcapi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Linux/Kernel is vulnerable to check ctr->cnr to avoid array index out of bound in drivers/isdn/capi/kcapi.c
<p>Publish Date: 2021-11-29
<p>URL: <a href=https://github.com/gregkh/linux/commit/1f3e2e97c003f80c4b087092b225c8787ff91e4d>WS-2021-0462</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1002166">https://osv.dev/vulnerability/UVI-2021-1002166</a></p>
<p>Release Date: 2021-11-29</p>
<p>Fix Resolution: Linux/Kernel - v4.4.290, v4.9.288, v4.14.253, v4.19.214, v5.4.156, v5.10.76, v5.14.15, v5.15-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | WS-2021-0462 (Medium) detected in linuxlinux-4.19.236 - ## WS-2021-0462 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.236</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/isdn/capi/kcapi.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/isdn/capi/kcapi.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Linux/Kernel is vulnerable to check ctr->cnr to avoid array index out of bound in drivers/isdn/capi/kcapi.c
<p>Publish Date: 2021-11-29
<p>URL: <a href=https://github.com/gregkh/linux/commit/1f3e2e97c003f80c4b087092b225c8787ff91e4d>WS-2021-0462</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/UVI-2021-1002166">https://osv.dev/vulnerability/UVI-2021-1002166</a></p>
<p>Release Date: 2021-11-29</p>
<p>Fix Resolution: Linux/Kernel - v4.4.290, v4.9.288, v4.14.253, v4.19.214, v5.4.156, v5.10.76, v5.14.15, v5.15-rc6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | ws medium detected in linuxlinux ws medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers isdn capi kcapi c drivers isdn capi kcapi c vulnerability details in linux kernel is vulnerable to check ctr cnr to avoid array index out of bound in drivers isdn capi kcapi c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux kernel step up your open source security game with whitesource | 0 |
84,428 | 24,306,848,567 | IssuesEvent | 2022-09-29 18:13:24 | nextcloud/nextcloudpi | https://api.github.com/repos/nextcloud/nextcloudpi | closed | Provide docker image for 1.50 | bug docker build has-updates | It would be nice if there was a docker image for v1.50. Currently there is only 1.49.1 | 1.0 | Provide docker image for 1.50 - It would be nice if there was a docker image for v1.50. Currently there is only 1.49.1 | non_process | provide docker image for it would be nice if there was a docker image for currently there is only | 0 |
18,890 | 24,833,411,350 | IssuesEvent | 2022-10-26 06:45:57 | didi/mpx | https://api.github.com/repos/didi/mpx | closed | [Bug report] scroll-view: DOM and touch movement issues happened when lacking of width and height | processing | **Problem description**
This isn't strictly a bug, but the experience really isn't great.
When I used scroll-view I didn't notice the width and height props; using it as-is and then scrolling the page produces these errors, sometimes thrown by mpx-scroll-view and sometimes by better-scroll. In practice width and height must be specified for scroll-view in CSS, because I noticed in the source code (line 263) that the container and content dimensions are obtained through refs.
If these styles are ultimately handled by JS, could width and height be passed in as two rpx props? Or could scroll-view, on initialization, emit a hint about reading the dimensions and fall back to a safe value? Requiring an explicit width and height is understandable, but the way errors are thrown now makes it look like something is wrong inside better-scroll rather than at the call site.
**Environment information**
At minimum it includes the following:
1. OS type (Mac or Windows): Mac
2. Mpx dependency versions (the exact versions of @mpxjs/core, @mpxjs/webpack-plugin and @mpxjs/api-proxy, which can be checked via package-lock.json or directly in node_modules): 2.6.110, but I searched the release notes and this problem should still exist in newer versions
4. Mini-program developer tools information (mini-program platform, devtools version, base library version): H5 environment only
**Minimal reproduction demo**
Generally it is hard for us to pinpoint a problem from text and screenshots alone; to help us locate and fix it quickly, please follow the guide below to write and upload a minimal reproduction demo:
` <scroll-view
scroll-with-animation
scroll-y
scroll-into-view="1"
>
<view
wx:for="{{30}}"
wx:key="index"
id="{{index}}"
style="
flex: 0 0 100%;
height: 10vw;
background-color: #00bbff;
margin-top: 1vw;
display: flex;
"
>
</view>
</scroll-view>`
| 1.0 | [Bug report] scroll-view: DOM and touch movement issues happened when lacking of width and height - **Problem description**
This isn't strictly a bug, but the experience really isn't great.
When I used scroll-view I didn't notice the width and height props; using it as-is and then scrolling the page produces these errors, sometimes thrown by mpx-scroll-view and sometimes by better-scroll. In practice width and height must be specified for scroll-view in CSS, because I noticed in the source code (line 263) that the container and content dimensions are obtained through refs.
If these styles are ultimately handled by JS, could width and height be passed in as two rpx props? Or could scroll-view, on initialization, emit a hint about reading the dimensions and fall back to a safe value? Requiring an explicit width and height is understandable, but the way errors are thrown now makes it look like something is wrong inside better-scroll rather than at the call site.
**Environment information**
At minimum it includes the following:
1. OS type (Mac or Windows): Mac
2. Mpx dependency versions (the exact versions of @mpxjs/core, @mpxjs/webpack-plugin and @mpxjs/api-proxy, which can be checked via package-lock.json or directly in node_modules): 2.6.110, but I searched the release notes and this problem should still exist in newer versions
4. Mini-program developer tools information (mini-program platform, devtools version, base library version): H5 environment only
**Minimal reproduction demo**
Generally it is hard for us to pinpoint a problem from text and screenshots alone; to help us locate and fix it quickly, please follow the guide below to write and upload a minimal reproduction demo:
` <scroll-view
scroll-with-animation
scroll-y
scroll-into-view="1"
>
<view
wx:for="{{30}}"
wx:key="index"
id="{{index}}"
style="
flex: 0 0 100%;
height: 10vw;
background-color: #00bbff;
margin-top: 1vw;
display: flex;
"
>
</view>
</scroll-view>`
| process | scroll view dom and touch movement issues happened when lacking of width and height 问题描述 这不能完全算是一个bug,但是的确体验不太好。 我在使用scroll view时,没有注意到width和height props,直接使用后滑动页面时就会产生这些报错,有些情况下是由mpx scroll view抛出的,但有的时候是better scroll抛出的。实际上width和height必须在css中指定给scroll view,因为我在源码中注意到(line 通过refs获取到了容器和内容的宽高等样式。 如果这些样式最终是交由js处理的,有没有可能通过props传入width和height两个rpx props?或者是否可以在scroll view初始化时抛出一个宽高获取的提示和安全取值呢?依赖指定宽高是应该也是可以被理解的,但是现在的错误抛出方式容易被理解为是better scroll出现了什么问题,而不是调用层面。 环境信息描述 至少包含以下部分: 系统类型 mac或者windows :mac mpx依赖版本 mpxjs core、 mpxjs webpack plugin和 mpxjs api proxy的具体版本,可以通过package lock json或者实际去node modules当中查看 : ,但我搜了release记录,这个问题在更新版本中应该依然存在 小程序开发者工具信息 小程序平台、开发者工具版本、基础库版本): 最简复现demo 一般来说通过文字和截图的描述我们很难定位到问题,为了帮助我们快速定位问题并修复,请按照以下指南编写并上传最简复现demo: scroll view scroll with animation scroll y scroll into view view wx for wx key index id index style flex height background color margin top display flex | 1 |
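The record above reports that scroll-view reads the container and content dimensions through refs, so leaving out an explicit width/height in CSS surfaces as confusing errors from better-scroll, and it suggests either width/height props or a friendlier warning with a safe fallback at initialisation. The snippet below is only a TypeScript sketch of that suggested guard; the function name and the fallback values are assumptions, not mpx's implementation.
```
// Hypothetical guard at scroll-view init: warn and fall back instead of
// letting the scrolling library fail on a 0x0 container.
function measureOrWarn(el: HTMLElement): { width: number; height: number } {
  const rect = el.getBoundingClientRect();
  if (rect.width === 0 || rect.height === 0) {
    console.warn(
      '[scroll-view] container has no explicit width/height; ' +
      'set them in CSS (or via props) or scrolling will misbehave.',
    );
    // Safe fallback so initialisation does not throw downstream.
    return { width: rect.width || 1, height: rect.height || 1 };
  }
  return { width: rect.width, height: rect.height };
}
```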
9,147 | 12,203,198,130 | IssuesEvent | 2020-04-30 10:11:25 | MHRA/products | https://api.github.com/repos/MHRA/products | closed | AUTOMATIC BATCH PROCESS - Create service informs State Manager on completion / error | EPIC - Auto Batch Process :oncoming_automobile: HIGH PRIORITY :arrow_double_up: TASK :rescue_worker_helmet: | ### User want
As a user
I want to see up to date documents on the products website
So I can make informed decisions
**Customer acceptance criteria**
**Technical acceptance criteria**
Create service calls the state Manager with the status of the upload - either failed or successful.
**Data acceptance criteria**
**Testing acceptance criteria**
**Size**
S
**Value**
**Effort**
### Exit Criteria met
- [x] Backlog
- [x] Discovery
- [x] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate | 1.0 | AUTOMATIC BATCH PROCESS - Create service informs State Manager on completion / error - ### User want
As a user
I want to see up to date documents on the products website
So I can make informed decisions
**Customer acceptance criteria**
**Technical acceptance criteria**
Create service calls the state Manager with the status of the upload - either failed or successful.
**Data acceptance criteria**
**Testing acceptance criteria**
**Size**
S
**Value**
**Effort**
### Exit Criteria met
- [x] Backlog
- [x] Discovery
- [x] DUXD
- [ ] Development
- [ ] Quality Assurance
- [ ] Release and Validate | process | automatic batch process create service informs state manager on completion error user want as a user i want to see up to date documents on the products website so i can make informed decisions customer acceptance criteria technical acceptance criteria create service calls the state manager with the status of the upload either failed or successful data acceptance criteria testing acceptance criteria size s value effort exit criteria met backlog discovery duxd development quality assurance release and validate | 1 |
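The technical acceptance criterion above is that the create service reports the outcome of an upload — failed or successful — to the state manager. The record does not spell out the interface, so the TypeScript fragment below is purely a hypothetical sketch: the endpoint path, payload fields and function name are all invented for illustration.
```
// Hypothetical notification from the create service to the state manager.
type UploadStatus = 'success' | 'failure';

async function notifyStateManager(
  stateManagerUrl: string,
  documentId: string,
  status: UploadStatus,
  errorMessage?: string,
): Promise<void> {
  const res = await fetch(`${stateManagerUrl}/status`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ documentId, status, errorMessage }),
  });
  if (!res.ok) {
    throw new Error(`state manager rejected status update: ${res.status}`);
  }
}
```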
86,473 | 10,755,594,783 | IssuesEvent | 2019-10-31 09:28:45 | wmde/mitmachen | https://api.github.com/repos/wmde/mitmachen | closed | DESIGN: display full category name in suggested topic areas - 1.2 | Design | Currently the suggested categories below the search field (green buttons) are shortened automatically. This irritates users. Find a solution to display full category names or to automatically exclude categories that are longer than a certain number of characters.
Solution/Comments: Switch limitation off. Limit length to one line max, afterwards shorten.
| 1.0 | DESIGN: display full category name in suggested topic areas - 1.2 - Currently the suggested categories below the search field (green buttons) are shortened automatically. This irritates users. Find a solution to display full category names or to automatically exclude categories that are longer than a certain number of characters.
Solution/Comments: Switch limitation off. Limit length to one line max, afterwards shorten.
| non_process | design display full category name in suggested topic areas currently the suggested categories below the search field green buttons are shortened automatically this irritates users find a solution to display full category names or to automatically exclude categories that are longer than a certain number of characters solution comments switch limitation off limit length to one line max afterwards shorten | 0 |
6,575 | 9,659,816,299 | IssuesEvent | 2019-05-20 14:14:40 | openopps/openopps-platform | https://api.github.com/repos/openopps/openopps-platform | closed | Department of State: Review Application | Apply Process Approved Requirements Ready State Dept. | Who: Student Applicant
What: A Review Application page
Why: As a student I would like to see what I am submitting before it is sent to DoS
A/C
- There will be a header "Review Application" (Bold)
- The following sections will be presented
- "Applying to these internship opportunities"
- There will be a card for each choice (1st choice, 2nd choice, 3rd choice) with the following information
- The title of the internship
- The Bureau/Office
- "Experience"
- "References"
- "Education & Transcripts"
- "Languages"
- "Skills"
- Statement of Interest"
- There will be a "Consent to share information" box
- There will be a Yes No radio button
- The "Submit application" button will be disabled until the user answers the "consent to share information" question
- The user will be able to edit each section on this page by clicking the "edit" button in the section they want to edit. (this will be done in the same window and will only update information in Open Opps)
- When the user clicks the "Submit application" button they will be presented the "submission confirmation" modal #2927
https://opm.invisionapp.com/d/main/#/console/15360465/319289355/preview
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54 | 1.0 | Department of State: Review Application - Who: Student Applicant
What: A Review Application page
Why: As a student I would like to see what I am submitting before it is sent to DoS
A/C
- There will be a header "Review Application" (Bold)
- The following sections will be presented
- "Applying to these internship opportunities"
- There will be a card for each choice (1st choice, 2nd choice, 3rd choice) with the following information
- The title of the internship
- The Bureau/Office
- "Experience"
- "References"
- "Education & Transcripts"
- "Languages"
- "Skills"
- Statement of Interest"
- There will be a "Consent to share information" box
- There will be a Yes No radio button
- The "Submit application" button will be disabled until the user answers the "consent to share information" question
- The user will be able to edit each section on this page by clicking the "edit" button in the section they want to edit. (this will be done in the same window and will only update information in Open Opps)
- When the user clicks the "Submit application" button they will be presented the "submission confirmation" modal #2927
https://opm.invisionapp.com/d/main/#/console/15360465/319289355/preview
Public Link: https://opm.invisionapp.com/share/ZEPNZR09Q54 | process | department of state review application who student applicant what a review application page why as a student i would like to see what i am submitting before it is sent to dos a c there will be a header review application bold the following sections will be presented applying to these internship opportunities there will be a card for each choice choice choice choice with the following information the title of the internship the bureau office experience references education transcripts languages skills statement of interest there will be a consent to share information box there will be a yes no radio button the submit application button will be disabled until the user answers the consent to share information question the user will be able to edit each section on this page by clicking the edit button in the section they want to edit this will be done in the same window and will only update information in open opps when the user clicks the submit application button they will be presented the submission confirmation modal public link | 1 |
396,973 | 27,144,004,005 | IssuesEvent | 2023-02-16 18:25:15 | miaamitchell/SFSC | https://api.github.com/repos/miaamitchell/SFSC | closed | System Request | documentation | The System Request part of the Documentation. This is included in the System Request portion of the Project Documentation Report. | 1.0 | System Request - The System Request part of the Documentation. This is included in the System Request portion of the Project Documentation Report. | non_process | system request the system request part of the documentation this is included in the system request portion of the project documentation report | 0 |
5,994 | 8,805,375,180 | IssuesEvent | 2018-12-26 19:14:12 | dita-ot/dita-ot | https://api.github.com/repos/dita-ot/dita-ot | closed | Keyref processing overwrites user-authored file with clone | bug preprocess/keyref priority/medium stale | [Fixtures](https://github.com/eerohele/dita-ot-issues/tree/master/fixtures/2242).
Related to #2134 insofar as it's related to keyscopes and generating topic clones. Might even be the same root cause.
Given:
``` xml
<!-- root.ditamap -->
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map id="map" title="DITA Map">
<topicref href="topic1.dita"/>
<topicref href="topic1.dita">
<topicref href="topic1-1.dita"/>
</topicref>
</map>
<!-- topic1.dita -->
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="topic1" xml:lang="en-us">
<title>Topic 1</title>
<body>
<p><keyword keyref="it-does-not-matter-whether-i-have-a-valid-definition"/></p>
</body>
</topic>
<!-- topic1-1.dita -->
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="topic1-1" xml:lang="en-us">
<title>Topic 1-1</title>
<body>
<p>Hello, world 1-1!</p>
</body>
</topic>
```
`topic1-1.dita` gets overwritten in preprocessing when topic clones are generated for key resolution purposes (see #2134). The effect is clearly visible in the PDF:
<img width="547" alt="screen shot 2016-02-25 at 17 11 02" src="https://cloud.githubusercontent.com/assets/31859/13323590/d590bd68-dbe2-11e5-926f-5ec22ba0012a.png">
The last topic should be "Topic 1-1", not "Topic 1".
| 1.0 | Keyref processing overwrites user-authored file with clone - [Fixtures](https://github.com/eerohele/dita-ot-issues/tree/master/fixtures/2242).
Related to #2134 insofar as it's related to keyscopes and generating topic clones. Might even be the same root cause.
Given:
``` xml
<!-- root.ditamap -->
<!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
<map id="map" title="DITA Map">
<topicref href="topic1.dita"/>
<topicref href="topic1.dita">
<topicref href="topic1-1.dita"/>
</topicref>
</map>
<!-- topic1.dita -->
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="topic1" xml:lang="en-us">
<title>Topic 1</title>
<body>
<p><keyword keyref="it-does-not-matter-whether-i-have-a-valid-definition"/></p>
</body>
</topic>
<!-- topic1-1.dita -->
<!DOCTYPE topic PUBLIC "-//OASIS//DTD DITA Topic//EN" "topic.dtd">
<topic id="topic1-1" xml:lang="en-us">
<title>Topic 1-1</title>
<body>
<p>Hello, world 1-1!</p>
</body>
</topic>
```
`topic1-1.dita` gets overwritten in preprocessing when topic clones are generated for key resolution purposes (see #2134). The effect is clearly visible in the PDF:
<img width="547" alt="screen shot 2016-02-25 at 17 11 02" src="https://cloud.githubusercontent.com/assets/31859/13323590/d590bd68-dbe2-11e5-926f-5ec22ba0012a.png">
The last topic should be "Topic 1-1", not "Topic 1".
| process | keyref processing overwrites user authored file with clone related to insofar as it s related to keyscopes and generating topic clones might even be the same root cause given xml topic topic hello world dita gets overwritten in preprocessing when topic clones are generated for key resolution purposes see the effect is clearly visible in the pdf img width alt screen shot at src the last topic should be topic not topic | 1 |
234,617 | 25,879,849,380 | IssuesEvent | 2022-12-14 10:29:59 | rtdip/core | https://api.github.com/repos/rtdip/core | opened | numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl: 1 vulnerabilities (highest severity is: 5.3) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</b></p></summary>
<p>NumPy is the fundamental package for array computing with Python.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f9/d5/18336e9828d2f07beb0bcd3849c660001bedea50e6219627315968900ad6/numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/f9/d5/18336e9828d2f07beb0bcd3849c660001bedea50e6219627315968900ad6/numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /.ws-temp-CRDBLN-requirements.txt</p>
<p>Path to vulnerable library: /.ws-temp-CRDBLN-requirements.txt,/.ws-temp-CRDBLN-requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/rtdip/core/commit/fd25f8b654a14d4f2bf79da5b0c001061f2ab6c1">fd25f8b654a14d4f2bf79da5b0c001061f2ab6c1</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (numpy version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-34141](https://www.mend.io/vulnerability-database/CVE-2021-34141) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl | Direct | numpy - 1.22.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-34141</summary>
### Vulnerable Library - <b>numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</b></p>
<p>NumPy is the fundamental package for array computing with Python.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f9/d5/18336e9828d2f07beb0bcd3849c660001bedea50e6219627315968900ad6/numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/f9/d5/18336e9828d2f07beb0bcd3849c660001bedea50e6219627315968900ad6/numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /.ws-temp-CRDBLN-requirements.txt</p>
<p>Path to vulnerable library: /.ws-temp-CRDBLN-requirements.txt,/.ws-temp-CRDBLN-requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rtdip/core/commit/fd25f8b654a14d4f2bf79da5b0c001061f2ab6c1">fd25f8b654a14d4f2bf79da5b0c001061f2ab6c1</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An incomplete string comparison in the numpy.core component in NumPy before 1.22.0 allows attackers to trigger slightly incorrect copying by constructing specific string objects. NOTE: the vendor states that this reported code behavior is "completely harmless."
Mend Note: After conducting further research, Mend has determined that versions 1.12.0 through 1.21.6 of numpy are vulnerable to CVE-2021-34141
<p>Publish Date: 2021-12-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-34141>CVE-2021-34141</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-34141">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-34141</a></p>
<p>Release Date: 2021-12-17</p>
<p>Fix Resolution: numpy - 1.22.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | True | numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl: 1 vulnerabilities (highest severity is: 5.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</b></p></summary>
<p>NumPy is the fundamental package for array computing with Python.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f9/d5/18336e9828d2f07beb0bcd3849c660001bedea50e6219627315968900ad6/numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/f9/d5/18336e9828d2f07beb0bcd3849c660001bedea50e6219627315968900ad6/numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /.ws-temp-CRDBLN-requirements.txt</p>
<p>Path to vulnerable library: /.ws-temp-CRDBLN-requirements.txt,/.ws-temp-CRDBLN-requirements.txt</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/rtdip/core/commit/fd25f8b654a14d4f2bf79da5b0c001061f2ab6c1">fd25f8b654a14d4f2bf79da5b0c001061f2ab6c1</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (numpy version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-34141](https://www.mend.io/vulnerability-database/CVE-2021-34141) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.3 | numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl | Direct | numpy - 1.22.0 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2021-34141</summary>
### Vulnerable Library - <b>numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</b></p>
<p>NumPy is the fundamental package for array computing with Python.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/f9/d5/18336e9828d2f07beb0bcd3849c660001bedea50e6219627315968900ad6/numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/f9/d5/18336e9828d2f07beb0bcd3849c660001bedea50e6219627315968900ad6/numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /.ws-temp-CRDBLN-requirements.txt</p>
<p>Path to vulnerable library: /.ws-temp-CRDBLN-requirements.txt,/.ws-temp-CRDBLN-requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **numpy-1.21.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rtdip/core/commit/fd25f8b654a14d4f2bf79da5b0c001061f2ab6c1">fd25f8b654a14d4f2bf79da5b0c001061f2ab6c1</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
An incomplete string comparison in the numpy.core component in NumPy before 1.22.0 allows attackers to trigger slightly incorrect copying by constructing specific string objects. NOTE: the vendor states that this reported code behavior is "completely harmless."
Mend Note: After conducting further research, Mend has determined that versions 1.12.0 through 1.21.6 of numpy are vulnerable to CVE-2021-34141
<p>Publish Date: 2021-12-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-34141>CVE-2021-34141</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-34141">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-34141</a></p>
<p>Release Date: 2021-12-17</p>
<p>Fix Resolution: numpy - 1.22.0</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | non_process | numpy manylinux whl vulnerabilities highest severity is vulnerable library numpy manylinux whl numpy is the fundamental package for array computing with python library home page a href path to dependency file ws temp crdbln requirements txt path to vulnerable library ws temp crdbln requirements txt ws temp crdbln requirements txt found in head commit a href vulnerabilities cve severity cvss dependency type fixed in numpy version remediation available medium numpy manylinux whl direct numpy details cve vulnerable library numpy manylinux whl numpy is the fundamental package for array computing with python library home page a href path to dependency file ws temp crdbln requirements txt path to vulnerable library ws temp crdbln requirements txt ws temp crdbln requirements txt dependency hierarchy x numpy manylinux whl vulnerable library found in head commit a href found in base branch develop vulnerability details an incomplete string comparison in the numpy core component in numpy before allows attackers to trigger slightly incorrect copying by constructing specific string objects note the vendor states that this reported code behavior is completely harmless mend note after conducting further research mend has determined that versions through of numpy are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution numpy step up your open source security game with mend | 0 |
114,667 | 17,258,862,950 | IssuesEvent | 2021-07-22 02:51:17 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | sql: GRANT gets confused when username argument is not normalized | A-security A-sql-privileges C-bug T-server-and-security T-sql-experience | **Describe the problem**
usernames are supposed to be case insensitive. Grant seems to be case sensitive.
**To Reproduce**
What did you do? Describe in your own words.
`CREATE USER "v-root-keycloak-ruKOXLtxBFOd6iO473d7-1621531073" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';`
`GRANT ALL ON DATABASE keycloak TO "v-root-keycloak-ruKOXLtxBFOd6iO473d7-1621531073";`
```
* 1 error occurred:
* pq: user or role "v-root-keycloak-ruKOXLtxBFOd6iO473d7-1621531073" does not exist
```
```
show users
username options member_of
admin {}
dba {admin}
root {admin}
v-root-keycloak-rukoxltxbfod6io473d7-1621531073 VALID UNTIL=2022-01-25 10:10:10.555555+00:00 {}
```
**Expected behavior**
GRANT should succeed because usernames are case-insensitive.
**Environment:**
- CockroachDB v21.1.0
**Additional context**
What was the impact?
A partner at Red Hat is working on Vault/Keycloak integration for customers to have a solution to manage users and permissions.
Epic CRDB-7217 | True | sql: GRANT gets confused when username argument is not normalized - **Describe the problem**
usernames are supposed to be case insensitive. Grant seems to be case sensitive.
**To Reproduce**
What did you do? Describe in your own words.
`CREATE USER "v-root-keycloak-ruKOXLtxBFOd6iO473d7-1621531073" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';`
`GRANT ALL ON DATABASE keycloak TO "v-root-keycloak-ruKOXLtxBFOd6iO473d7-1621531073";`
```
* 1 error occurred:
* pq: user or role "v-root-keycloak-ruKOXLtxBFOd6iO473d7-1621531073" does not exist
```
```
show users
username options member_of
admin {}
dba {admin}
root {admin}
v-root-keycloak-rukoxltxbfod6io473d7-1621531073 VALID UNTIL=2022-01-25 10:10:10.555555+00:00 {}
```
**Expected behavior**
GRANT should succeed because usernames are case-insensitive.
**Environment:**
- CockroachDB v21.1.0
**Additional context**
What was the impact?
A partner at Red Hat is working on Vault/Keycloak integration for customers to have a solution to manage users and permissions.
Epic CRDB-7217 | non_process | sql grant gets confused when username argument is not normalized describe the problem usernames are supposed to be case insensitive grant seems to be case sensitive to reproduce what did you do describe in your own words create user v root keycloak with login password password valid until expiration grant all on database keycloak to v root keycloak error occurred pq user or role v root keycloak does not exist show users username options member of admin dba admin root admin v root keycloak valid until expected behavior grant should succeed because usernames are case insensitive environment cockroachdb additional context what was the impact a partner at red hat is working on vault keycloak integration for customers to have a solution to manage users and permissions epic crdb | 0 |
61,503 | 25,545,531,229 | IssuesEvent | 2022-11-29 18:27:40 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | Neptune Global Cluster Support | enhancement new-resource service/neptune | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Add support for newly released Neptune Global Databases
<!--- Please leave a helpful description of the feature request here. --->
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* aws_neptune_global_cluster
* aws_neptune_cluster
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_neptune_global_cluster" "example" {
global_cluster_identifier = "global-test"
engine = "neptune"
engine_version = "1.2.0.0"
}
resource "aws_neptune_cluster" "example" {
...
global_cluster_identifier = aws_neptune_global_cluster.example.id
...
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
--->
* https://aws.amazon.com/about-aws/whats-new/2022/07/amazon-neptune-global-database/
| 1.0 | Neptune Global Cluster Support - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Add support for newly released Neptune Global Databases
<!--- Please leave a helpful description of the feature request here. --->
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* aws_neptune_global_cluster
* aws_neptune_cluster
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_neptune_global_cluster" "example" {
global_cluster_identifier = "global-test"
engine = "neptune"
engine_version = "1.2.0.0"
}
resource "aws_neptune_cluster" "example" {
...
global_cluster_identifier = aws_neptune_global_cluster.example.id
...
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
--->
* https://aws.amazon.com/about-aws/whats-new/2022/07/amazon-neptune-global-database/
| non_process | neptune global cluster support community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description add support for newly released neptune global databases new or affected resource s aws neptune global cluster aws neptune cluster potential terraform configuration hcl resource aws neptune global cluster example global cluster identifier global test engine neptune engine version resource aws neptune cluster example global cluster identifier aws neptune global cluster example id references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example | 0 |
11,463 | 14,287,395,304 | IssuesEvent | 2020-11-23 16:18:38 | GSA/CIW | https://api.github.com/repos/GSA/CIW | opened | Contract Match Update: Search Task Order Numbers | Topic: Upload/Processing Type: Enhancement | At present, task order numbers are not looked at when the CIW import process is searching for an existing contract. When a CIW's contract number is specified as a task order number, the process must search the DB task order number field. | 1.0 | Contract Match Update: Search Task Order Numbers - At present, task order numbers are not looked at when the CIW import process is searching for an existing contract. When a CIW's contract number is specified as a task order number, the process must search the DB task order number field. | process | contract match update search task order numbers at present task order numbers are not looked at when the ciw import process is searching for an existing contract when a ciw s contract number is specified as a task order number the process must search the db task order number field | 1
8,587 | 11,757,480,932 | IssuesEvent | 2020-03-13 13:46:06 | NationalSecurityAgency/ghidra | https://api.github.com/repos/NationalSecurityAgency/ghidra | closed | [M68000] decompiler cannot create correct array assignments | Feature: Processor/68000 Type: Bug | The M68000 decompiler incorrectly translates the following code fragment:
```
0402cf06 30 2d ff move.w (i,A5),D0w
fe
0402cf0a 48 c0 ext.l D0
0402cf0c e3 80 asl.l #0x1,D0
0402cf0e 41 f9 04 lea (SHORT_ARRAY_0403f996).l,A0 =
03 f9 96
0402cf14 32 2d ff move.w (i,A5),D1w
fe
0402cf18 48 c1 ext.l D1
0402cf1a e3 81 asl.l #0x1,D1
0402cf1c 43 f9 04 lea (SHORT_ARRAY_04042364).l,A1 =
04 23 64
0402cf22 33 b0 08 move.w (0x0,A0,D0*0x1),(0x0,A1,D1*0x1)=>SHORT_AR =
00 18 00
```
It generates a "self" assignment, ignoring the source:
```
SHORT_ARRAY_04042364[i] = SHORT_ARRAY_04042364[i];
```
This would be correct:
```
SHORT_ARRAY_04042364[i] = SHORT_ARRAY_0403f996[i]
```
Somehow it "forgets" the A0 address and uses the A1 address twice.
**Environment (please complete the following information):**
- OS: macOS 10.15.2
- Java Version: 13.0
- Ghidra Version: 9.1.1
| 1.0 | [M68000] decompiler cannot create correct array assignments - The M68000 decompiler incorrectly translates the following code fragment:
```
0402cf06 30 2d ff move.w (i,A5),D0w
fe
0402cf0a 48 c0 ext.l D0
0402cf0c e3 80 asl.l #0x1,D0
0402cf0e 41 f9 04 lea (SHORT_ARRAY_0403f996).l,A0 =
03 f9 96
0402cf14 32 2d ff move.w (i,A5),D1w
fe
0402cf18 48 c1 ext.l D1
0402cf1a e3 81 asl.l #0x1,D1
0402cf1c 43 f9 04 lea (SHORT_ARRAY_04042364).l,A1 =
04 23 64
0402cf22 33 b0 08 move.w (0x0,A0,D0*0x1),(0x0,A1,D1*0x1)=>SHORT_AR =
00 18 00
```
It generates a "self" assignment, ignoring the source:
```
SHORT_ARRAY_04042364[i] = SHORT_ARRAY_04042364[i];
```
This would be correct:
```
SHORT_ARRAY_04042364[i] = SHORT_ARRAY_0403f996[i]
```
Somehow it "forgets" the A0 address and uses the A1 address twice.
**Environment (please complete the following information):**
- OS: macOS 10.15.2
- Java Version: 13.0
- Ghidra Version: 9.1.1
| process | decompiler cannot create correct array assignments the decompiler incorrectly translates the following code fragment ff move w i fe ext l asl l lea short array l ff move w i fe ext l asl l lea short array l move w short ar it generates a self assignment ignoring the source short array short array this would be correct short array short array somehow it forgets the address and uses the address twice environment please complete the following information os macos java version ghidra version | 1 |
6,331 | 9,369,969,451 | IssuesEvent | 2019-04-03 12:28:17 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | opened | GH labels reorganisation | area: Process enhancement | Following https://github.com/zephyrproject-rtos/zephyr/pull/15054 (Assuming it is merged),
this issue aims at settling other fixes for the labels which were not trivially agreed yet. | 1.0 | GH labels reorganisation - Following https://github.com/zephyrproject-rtos/zephyr/pull/15054 (Assuming it is merged),
this issue aims at settling other fixes for the labels which were not trivially agreed yet. | process | gh labels reorganisation following assuming it is merged this issue aims at settling other fixes for the labels which were not trivially agreed yet | 1
10,845 | 13,625,637,723 | IssuesEvent | 2020-09-24 09:47:29 | tdwg/dwc | https://api.github.com/repos/tdwg/dwc | reopened | nothoTaxon | Class - Taxon Process - need evidence for demand Term - add | Was https://code.google.com/p/darwincore/issues/detail?id=152
Reported by wixner, Mar 22, 2012
==New Term Recommendation==
Submitter: Markus Döring
Justification: To complete the capability of darwin core to share atomised names a new term for named hybrids is needed to denote the part of the name that is considered to be a notho taxon. The multiplication symbol used to mark hybrids is not part of the name and therefore should not be used inside the other name terms like specificEpithet. See ICBN H.3A.1. "The multiplication sign ×, indicating the hybrid nature of a taxon, should be placed so as to express that it belongs with the name or epithet but is not actually part of it."
Definition: The part of a name of a notho taxon which is considered to be of hybrid nature. Values allowed are generic, infrageneric, specific or infraspecific only.
Comment: This term is only to be used for named hybrids, not full hybrid formulas. Examples of named hybrids are "generic" for ×Agropogon P. Fourn. (1934); "generic" for ×Agropogon littoralis (Sm.) C. E. Hubb. (1946); "specific" for Salix ×capreola Andersson (1867); "infraspecific" for Polypodium vulgare nothosubsp. mantoniae (Rothm.) Schidlay
Refines:
Has Domain:
Has Range:
Replaces:
ABCD 2.06:
Mar 23, 2012 comment #1 peter.desmet.cubc
Markus, could you clarify your examples?
scientificName=×Agropogon littoralis (Sm.) C. E. Hubb. (1946)
genus=
species=
infraspecificEpithet=
rank=
nothoTaxon=
Wouldn't a taxonRank=nothospecies be sufficient?
Mar 23, 2012 comment #2 wixner
Peter, it goes like this:
scientificName=×Agropogon littoralis (Sm.) C. E. Hubb. (1946)
genus=Agropogon
species=littoralis
infraspecificEpithet=
scientificNameAuthorship=(Sm.) C. E. Hubb. (1946)
taxonRank=species
nothoTaxon=generic
The trouble here is that the genus is considered to be a hybrid already, but the rank of the taxon still is a species. If it would be a nothospecies it would be Agropogon ×littorals. Im not 100% sure if I interpreted the ICBN correctly, but it seemed to be the use of notho as a prefix to rank terms is limited to infraspecific ranks, e.g. nothosubsp.
Oct 3, 2013 comment #6 gtuco.btuco
I would like to promote the adoption of the concept mentioned in this issue. To do so, I will need a stronger proposal demonstrating the need to share this information - that is, that independent groups, organizations, projects have the same need and can reach a consensus proposal about how the term should be used. It might be a good idea to circulate the proposal on tdwg-content and see if a community can be built around and support the addition.
| 1.0 | nothoTaxon - Was https://code.google.com/p/darwincore/issues/detail?id=152
Reported by wixner, Mar 22, 2012
==New Term Recommendation==
Submitter: Markus Döring
Justification: To complete the capability of darwin core to share atomised names a new term for named hybrids is needed to denote the part of the name that is considered to be a notho taxon. The multiplication symbol used to mark hybrids is not part of the name and therefore should not be used inside the other name terms like specificEpithet. See ICBN H.3A.1. "The multiplication sign ×, indicating the hybrid nature of a taxon, should be placed so as to express that it belongs with the name or epithet but is not actually part of it."
Definition: The part of a name of a notho taxon which is considered to be of hybrid nature. Values allowed are generic, infrageneric, specific or infraspecific only.
Comment: This term is only to be used for named hybrids, not full hybrid formulas. Examples of named hybrids are "generic" for ×Agropogon P. Fourn. (1934); "generic" for ×Agropogon littoralis (Sm.) C. E. Hubb. (1946); "specific" for Salix ×capreola Andersson (1867); "infraspecific" for Polypodium vulgare nothosubsp. mantoniae (Rothm.) Schidlay
Refines:
Has Domain:
Has Range:
Replaces:
ABCD 2.06:
Mar 23, 2012 comment #1 peter.desmet.cubc
Markus, could you clarify your examples?
scientificName=×Agropogon littoralis (Sm.) C. E. Hubb. (1946)
genus=
species=
infraspecificEpithet=
rank=
nothoTaxon=
Wouldn't a taxonRank=nothospecies be sufficient?
Mar 23, 2012 comment #2 wixner
Peter, it goes like this:
scientificName=×Agropogon littoralis (Sm.) C. E. Hubb. (1946)
genus=Agropogon
species=littoralis
infraspecificEpithet=
scientificNameAuthorship=(Sm.) C. E. Hubb. (1946)
taxonRank=species
nothoTaxon=generic
The trouble here is that the genus is considered to be a hybrid already, but the rank of the taxon still is a species. If it would be a nothospecies it would be Agropogon ×littorals. Im not 100% sure if I interpreted the ICBN correctly, but it seemed to be the use of notho as a prefix to rank terms is limited to infraspecific ranks, e.g. nothosubsp.
Oct 3, 2013 comment #6 gtuco.btuco
I would like to promote the adoption of the concept mentioned in this issue. To do so, I will need a stronger proposal demonstrating the need to share this information - that is, that independent groups, organizations, projects have the same need and can reach a consensus proposal about how the term should be used. It might be a good idea to circulate the proposal on tdwg-content and see if a community can be built around and support the addition.
| process | nothotaxon was reported by wixner mar new term recommendation submitter markus döring justification to complete the capability of darwin core to share atomised names a new term for named hybrids is needed to denote the part of the name that is considered to be a notho taxon the multiplication symbol used to mark hybrids is not part of the name and therefore should not be used inside the other name terms like specificepithet see icbn h the multiplication sign × indicating the hybrid nature of a taxon should be placed so as to express that it belongs with the name or epithet but is not actually part of it definition the part of a name of a notho taxon which is considered to be of hybrid nature values allowed are generic infrageneric specific or infraspecific only comment this term is only to be used for named hybrids not full hybrid formulas examples of named hybrids are generic for ×agropogon p fourn generic for ×agropogon littoralis sm c e hubb specific for salix ×capreola andersson infraspecific for polypodium vulgare nothosubsp mantoniae rothm schidlay refines has domain has range replaces abcd mar comment peter desmet cubc markus could you clarify your examples scientificname ×agropogon littoralis sm c e hubb genus species infraspecificepithet rank nothotaxon wouldn t a taxonrank nothospecies be sufficient mar comment wixner peter it goes like this scientificname ×agropogon littoralis sm c e hubb genus agropogon species littoralis infraspecificepithet scientificnameauthorship sm c e hubb taxonrank species nothotaxon generic the trouble here is that the genus is considered to be a hybrid already but the rank of the taxon still is a species if it would be a nothospecies it would be agropogon ×littorals im not sure if i interpreted the icbn correctly but it seemed to be the use of notho as a prefix to rank terms is limited to infraspecific ranks e g nothosubsp oct comment gtuco btuco i would like to promote the adoption of the concept mentioned in this issue to do so i will need a stronger proposal demonstrating the need to share this information that is that independent groups organizations projects have the same need and can reach a consensus proposal about how the term should be used it might be a good idea to circulate the proposal on tdwg content and see if a community can be built around and support the addition | 1 |
84,347 | 10,370,674,597 | IssuesEvent | 2019-09-08 14:35:13 | nullserve/node-packages | https://api.github.com/repos/nullserve/node-packages | opened | Create keywords for each individual package | documentation good first issue help wanted | Packages should have keywords specified in their package.json to aid in finding the packages on npm.
Research which keywords are best/recommended and add them to each package. | 1.0 | Create keywords for each individual package - Packages should have keywords specified in their package.json to aid in finding the packages on npm.
Research which keywords are best/recommended and add them to each package. | non_process | create keywords for each individual package packages should have keywords specified in their package json to aid in finding the packages on npm research which keywords are best recommended and add them to each package | 0 |
35,598 | 12,365,373,803 | IssuesEvent | 2020-05-18 08:44:05 | NatalyaDalid/NatRepository | https://api.github.com/repos/NatalyaDalid/NatRepository | closed | CVE-2019-6284 (Medium) detected in node-sass-4.11.0.tgz, node-sass-0bd48bbad6fccb0da16d3bdf76ad541f5f45ec70 | security vulnerability | ## CVE-2019-6284 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.11.0.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.11.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/NatRepository/docs/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/NatRepository/docs/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.2.tgz (Root Library)
- :x: **node-sass-4.11.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/NatalyaDalid/NatRepository/commit/d5855b917e28b880e479d9131093e8937cf1b61c">d5855b917e28b880e479d9131093e8937cf1b61c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6284>CVE-2019-6284</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.11.0","isTransitiveDependency":true,"dependencyTree":"gulp-sass:4.0.2;node-sass:4.11.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"LibSass - 3.6.0"}],"vulnerabilityIdentifier":"CVE-2019-6284","vulnerabilityDetails":"In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6284","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-6284 (Medium) detected in node-sass-4.11.0.tgz, node-sass-0bd48bbad6fccb0da16d3bdf76ad541f5f45ec70 - ## CVE-2019-6284 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.11.0.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.11.0.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.11.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/NatRepository/docs/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/NatRepository/docs/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- gulp-sass-4.0.2.tgz (Root Library)
- :x: **node-sass-4.11.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/NatalyaDalid/NatRepository/commit/d5855b917e28b880e479d9131093e8937cf1b61c">d5855b917e28b880e479d9131093e8937cf1b61c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.
<p>Publish Date: 2019-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6284>CVE-2019-6284</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-6284</a></p>
<p>Release Date: 2019-08-06</p>
<p>Fix Resolution: LibSass - 3.6.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"node-sass","packageVersion":"4.11.0","isTransitiveDependency":true,"dependencyTree":"gulp-sass:4.0.2;node-sass:4.11.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"LibSass - 3.6.0"}],"vulnerabilityIdentifier":"CVE-2019-6284","vulnerabilityDetails":"In LibSass 3.5.5, a heap-based buffer over-read exists in Sass::Prelexer::alternatives in prelexer.hpp.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-6284","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_process | cve medium detected in node sass tgz node sass cve medium severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm natrepository docs package json path to vulnerable library tmp ws scm natrepository docs node modules node sass package json dependency hierarchy gulp sass tgz root library x node sass tgz vulnerable library found in head commit a href vulnerability details in libsass a heap based buffer over read exists in sass prelexer alternatives in prelexer hpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails in libsass a heap based buffer over read exists in sass prelexer alternatives in prelexer hpp vulnerabilityurl | 0 |
9,555 | 12,517,084,497 | IssuesEvent | 2020-06-03 10:29:52 | varys-main/ps-tools | https://api.github.com/repos/varys-main/ps-tools | closed | Module extraction | processing | # User Story
- The custom functions should be extracted as a module/function etc. The options for doing this should be evaluated.
# Tasks
- [x] Evaluate options
- [x] Extract DockerCreate
- [x] Extract DockerRemove
- [x] Extract System/StartUp
- [x] Adapt System/WarmUp
# Implementation
- Extracted the guA modules and custom functions into the DockerHelper module
- Extracted the StartUp script into the StartUp module
- Converted the WarmUp script
# Known Problems
- Wrong default selection for includeCSide | 1.0 | Module extraction - # User Story
- The custom functions should be extracted as a module/function etc. The options for doing this should be evaluated.
# Tasks
- [x] Evaluate options
- [x] Extract DockerCreate
- [x] Extract DockerRemove
- [x] Extract System/StartUp
- [x] Adapt System/WarmUp
# Implementation
- Extracted the guA modules and custom functions into the DockerHelper module
- Extracted the StartUp script into the StartUp module
- Converted the WarmUp script
# Known Problems
- Wrong default selection for includeCSide | process | module extraction user story the custom functions should be extracted as a module function etc the options for doing this should be evaluated tasks evaluate options extract dockercreate extract dockerremove extract system startup adapt system warmup implementation extracted the gua modules and custom functions into the dockerhelper module extracted the startup script into the startup module converted the warmup script known problems wrong default selection for includecside | 1