Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
144,148 | 11,596,224,834 | IssuesEvent | 2020-02-24 18:31:50 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Not able to add group at project level | [zube]: To Test kind/bug-qa | **What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
- Install Rancher 2.4 `master-head (02/21/2020)` commit id: 264d9235b
- Create a cluster
- Add a new user as Cluster Member for this cluster. (A user with a group)
- Login a the Cluster Member created
- Create a Project
- Edit the Project and add Member, click in the dropdown to list the group(s) the user belongs to.
- Select the Role
- Save
**Result:**
```
Validation failed in API: must target a user [userId]/[userPrincipalId] OR a group [groupId]/[groupPrincipalId]
```
<img width="1710" alt="Screen Shot 2020-02-21 at 6 03 15 PM" src="https://user-images.githubusercontent.com/27659/75093569-0a761480-5540-11ea-9c5c-0fc0e2797898.png">
Happens with Shibboleth without LDAP where I pick the group from the arrow button.
Also this happens when selecting a group from the new Shibboleth OpenLDAP search dropdown.
**Other details that may be helpful:**
Adding group to cluster level works.
Headers
```
POST /v3/projectroletemplatebinding HTTP/1.1
Host: x.x.x.x
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:73.0) Gecko/20100101 Firefox/73.0
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
content-type: application/json
x-api-action-links: actionLinks
x-api-no-challenge: true
x-api-csrf: a19f3bbbc0
Content-Length: 244
Origin: https://x.x.x.x
DNT: 1
Connection: keep-alive
Referer: https://96.126.101.112/c/c-sxcml/projects-namespaces/project/c-sxcml:p-4ffj4
Cookie: CSRF=a19f3bbbc0; R_SESS=token-7w7sm:wltsfm7pgnrnsghx449k8sz6txxftpcm58qh5fjw5hvshq85xm4rx2
```
POST
```json
{"type":"projectRoleTemplateBinding","subjectKind":"Group","userId":"","projectRoleTemplateId":"","projectId":"c-sxcml:p-4ffj4","groupPrincipalId":"shibboleth_group://cn=testgroup2,ou=groups,dc=example,dc=org","roleTemplateId":"project-member"}
```
Response:
```json
{"baseType":"error","code":"InvalidBodyContent","message":"must target a user [userId]/[userPrincipalId] OR a group [groupId]/[groupPrincipalId]","status":422,"type":"error"}
```
**Environment information**
- Rancher version: `2.4 master-head (02/21/2020)` commit id: 264d9235b
- Installation option (single install/HA): single
| 1.0 | Not able to add group at project level - **What kind of request is this (question/bug/enhancement/feature request):**
Bug
**Steps to reproduce (least amount of steps as possible):**
- Install Rancher 2.4 `master-head (02/21/2020)` commit id: 264d9235b
- Create a cluster
- Add a new user as Cluster Member for this cluster. (A user with a group)
- Login a the Cluster Member created
- Create a Project
- Edit the Project and add Member, click in the dropdown to list the group(s) the user belongs to.
- Select the Role
- Save
**Result:**
```
Validation failed in API: must target a user [userId]/[userPrincipalId] OR a group [groupId]/[groupPrincipalId]
```
<img width="1710" alt="Screen Shot 2020-02-21 at 6 03 15 PM" src="https://user-images.githubusercontent.com/27659/75093569-0a761480-5540-11ea-9c5c-0fc0e2797898.png">
Happens with Shibboleth without LDAP where I pick the group from the arrow button.
Also this happens when selecting a group from the new Shibboleth OpenLDAP search dropdown.
**Other details that may be helpful:**
Adding group to cluster level works.
Headers
```
POST /v3/projectroletemplatebinding HTTP/1.1
Host: x.x.x.x
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:73.0) Gecko/20100101 Firefox/73.0
Accept: application/json
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
content-type: application/json
x-api-action-links: actionLinks
x-api-no-challenge: true
x-api-csrf: a19f3bbbc0
Content-Length: 244
Origin: https://x.x.x.x
DNT: 1
Connection: keep-alive
Referer: https://96.126.101.112/c/c-sxcml/projects-namespaces/project/c-sxcml:p-4ffj4
Cookie: CSRF=a19f3bbbc0; R_SESS=token-7w7sm:wltsfm7pgnrnsghx449k8sz6txxftpcm58qh5fjw5hvshq85xm4rx2
```
POST
```json
{"type":"projectRoleTemplateBinding","subjectKind":"Group","userId":"","projectRoleTemplateId":"","projectId":"c-sxcml:p-4ffj4","groupPrincipalId":"shibboleth_group://cn=testgroup2,ou=groups,dc=example,dc=org","roleTemplateId":"project-member"}
```
Response:
```json
{"baseType":"error","code":"InvalidBodyContent","message":"must target a user [userId]/[userPrincipalId] OR a group [groupId]/[groupPrincipalId]","status":422,"type":"error"}
```
**Environment information**
- Rancher version: `2.4 master-head (02/21/2020)` commit id: 264d9235b
- Installation option (single install/HA): single
| non_process | not able to add group at project level what kind of request is this question bug enhancement feature request bug steps to reproduce least amount of steps as possible install rancher master head commit id create a cluster add a new user as cluster member for this cluster a user with a group login a the cluster member created create a project edit the project and add member click in the dropdown to list the group s the user belongs to select the role save result validation failed in api must target a user or a group img width alt screen shot at pm src happens with shibboleth without ldap where i pick the group from the arrow button also this happens when selecting a group from the new shibboleth openldap search dropdown other details that may be helpful adding group to cluster level works headers post projectroletemplatebinding http host x x x x user agent mozilla macintosh intel mac os x rv gecko firefox accept application json accept language en us en q accept encoding gzip deflate br content type application json x api action links actionlinks x api no challenge true x api csrf content length origin dnt connection keep alive referer cookie csrf r sess token post json type projectroletemplatebinding subjectkind group userid projectroletemplateid projectid c sxcml p groupprincipalid shibboleth group cn ou groups dc example dc org roletemplateid project member response json basetype error code invalidbodycontent message must target a user or a group status type error environment information rancher version master head commit id installation option single install ha single | 0 |
12,936 | 15,301,995,031 | IssuesEvent | 2021-02-24 14:16:01 | bazelbuild/bazel | https://api.github.com/repos/bazelbuild/bazel | opened | Release 4.1 - March 2021 | P1 release team-XProduct type: process | # Status of Bazel 4.1
This release will use Bazel 4.0.0 as its baseline and we will apply selected cherry-picks and backports on top of it. Please request cherry-picks that you'd like to get into Bazel 4.1.0 here via a comment.
- Expected release date: March 2021
- [List of release blockers](https://github.com/bazelbuild/bazel/labels/Release%20blocker)
To report a release-blocking bug, please file a bug using the `Release blocker` label, and cc me.
Task list:
- [ ] Pick release baseline: https://github.com/bazelbuild/bazel/commit/6b33bdb1e22514304c0e35ce8e067f2175685245
- [ ] Create release candidate: https://releases.bazel.build/4.1.0/rc1/
- [ ] Check downstream projects:
- [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [ ] Send for review the release announcement PR:
- [ ] Push the release, notify package maintainers:
- [ ] Update the documentation
- [ ] Push the blog post
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
| 1.0 | Release 4.1 - March 2021 - # Status of Bazel 4.1
This release will use Bazel 4.0.0 as its baseline and we will apply selected cherry-picks and backports on top of it. Please request cherry-picks that you'd like to get into Bazel 4.1.0 here via a comment.
- Expected release date: March 2021
- [List of release blockers](https://github.com/bazelbuild/bazel/labels/Release%20blocker)
To report a release-blocking bug, please file a bug using the `Release blocker` label, and cc me.
Task list:
- [ ] Pick release baseline: https://github.com/bazelbuild/bazel/commit/6b33bdb1e22514304c0e35ce8e067f2175685245
- [ ] Create release candidate: https://releases.bazel.build/4.1.0/rc1/
- [ ] Check downstream projects:
- [ ] [Create draft release announcement](https://docs.google.com/document/d/1wDvulLlj4NAlPZamdlEVFORks3YXJonCjyuQMUQEmB0/edit)
- [ ] Send for review the release announcement PR:
- [ ] Push the release, notify package maintainers:
- [ ] Update the documentation
- [ ] Push the blog post
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
| process | release march status of bazel this release will use bazel as its baseline and we will apply selected cherry picks and backports on top of it please request cherry picks that you d like to get into bazel here via a comment expected release date march to report a release blocking bug please file a bug using the release blocker label and cc me task list pick release baseline create release candidate check downstream projects send for review the release announcement pr push the release notify package maintainers update the documentation push the blog post update the | 1 |
81,686 | 15,630,110,597 | IssuesEvent | 2021-03-22 01:23:24 | johnnymythology/material-blog-jp | https://api.github.com/repos/johnnymythology/material-blog-jp | closed | CVE-2018-19826 (Medium) detected in node-sass-v4.11.0 - autoclosed | security vulnerability | ## CVE-2018-19826 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.11.0</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/johnnymythology/material-blog-jp/commit/b32aa2b2c0c66e5829b13877b5425dd83501c412">b32aa2b2c0c66e5829b13877b5425dd83501c412</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (4)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /material-blog-jp/_layouts/material_lite/_site/node_modules/node-sass/src/binding.cpp
- /material-blog-jp/_layouts/material_lite/node_modules/node-sass/src/libsass/src/inspect.cpp
- /material-blog-jp/_layouts/material_lite/_site/node_modules/node-sass/src/libsass/src/operators.cpp
- /material-blog-jp/_layouts/material_lite/_site/node_modules/node-sass/src/libsass/src/parser.cpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** In inspect.cpp in LibSass 3.5.5, a high memory footprint caused by an endless loop (containing a Sass::Inspect::operator()(Sass::String_Quoted*) stack frame) may cause a Denial of Service via crafted sass input files with stray '&' or '/' characters. NOTE: Upstream comments indicate this issue is closed as "won't fix" and "works as intended" by design.
<p>Publish Date: 2018-12-03
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19826>CVE-2018-19826</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2018-19826 (Medium) detected in node-sass-v4.11.0 - autoclosed - ## CVE-2018-19826 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.11.0</b></p></summary>
<p>
<p>:rainbow: Node.js bindings to libsass</p>
<p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/johnnymythology/material-blog-jp/commit/b32aa2b2c0c66e5829b13877b5425dd83501c412">b32aa2b2c0c66e5829b13877b5425dd83501c412</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Library Source Files (4)</summary>
<p></p>
<p> * The source files were matched to this source library based on a best effort match. Source libraries are selected from a list of probable public libraries.</p>
<p>
- /material-blog-jp/_layouts/material_lite/_site/node_modules/node-sass/src/binding.cpp
- /material-blog-jp/_layouts/material_lite/node_modules/node-sass/src/libsass/src/inspect.cpp
- /material-blog-jp/_layouts/material_lite/_site/node_modules/node-sass/src/libsass/src/operators.cpp
- /material-blog-jp/_layouts/material_lite/_site/node_modules/node-sass/src/libsass/src/parser.cpp
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** In inspect.cpp in LibSass 3.5.5, a high memory footprint caused by an endless loop (containing a Sass::Inspect::operator()(Sass::String_Quoted*) stack frame) may cause a Denial of Service via crafted sass input files with stray '&' or '/' characters. NOTE: Upstream comments indicate this issue is closed as "won't fix" and "works as intended" by design.
<p>Publish Date: 2018-12-03
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19826>CVE-2018-19826</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in node sass autoclosed cve medium severity vulnerability vulnerable library node rainbow node js bindings to libsass library home page a href found in head commit a href library source files the source files were matched to this source library based on a best effort match source libraries are selected from a list of probable public libraries material blog jp layouts material lite site node modules node sass src binding cpp material blog jp layouts material lite node modules node sass src libsass src inspect cpp material blog jp layouts material lite site node modules node sass src libsass src operators cpp material blog jp layouts material lite site node modules node sass src libsass src parser cpp vulnerability details disputed in inspect cpp in libsass a high memory footprint caused by an endless loop containing a sass inspect operator sass string quoted stack frame may cause a denial of service via crafted sass input files with stray or characters note upstream comments indicate this issue is closed as won t fix and works as intended by design publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
4,718 | 7,552,661,150 | IssuesEvent | 2018-04-19 01:38:48 | UnbFeelings/unb-feelings-docs | https://api.github.com/repos/UnbFeelings/unb-feelings-docs | opened | [Não Conformidade] Backlog | Desenvolvimento Processo | @UnbFeelings/devel
A auditoria do [Backlog](https://github.com/UnbFeelings/unb-feelings-docs/wiki/Processo#2011-requisitos-da-release), que é um produto gerado do sub processo **requisitos**, foi feita com base nos [critérios](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Crit%C3%A9rios-de-Avalia%C3%A7%C3%A3o-e-T%C3%A9cnicas-de-Auditoria#backlog) pré-estabelecidos para poder verificar se o Processo foi utilizado. O resultado da auditoria pode ser acessado através da seguinte página: [Auditoria Backlog](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Auditoria-Backlog-Ciclo-1).
### Descrição
Foi identificado o principal motivo da não-adesão ao processo:
* Problema de comunicação (externa/interna);
#### Recomendações
Conforme o processo, é necessário que seja documentado o Backlog, os épicos e a features definidas com o cliente. Por a wiki ser a ferramenta para documentar artefatos, é importante que esteja na mesma.
Além disso, deve-se estabelecer com urgência uma nova reunião entre equipe de processos e desenvolvimento para identificar atividades desnecessárias no processo e principalmente artefatos que não agregam valor para o cliente. Ressalta-se ainda levar em conta a carga horária de trabalho efetivo de cada membro do desenvolvimento para propor melhorias ao processo que se adequem a carga horária dos desenvolvedores para que o processo não torne suas atividades tão custoso.
E que engaje de alguma forma a equipe de processo e de desenvolvimento, porque é bem visível a falta de comunicação e alinhamento de todos.
### Detalhes
**Autor:** Naiara Andrade
**Tipo:** Processo
**Prazo:** Seria de 3 dia para a correção, pela Matriz GUT. Mas como depende dos horários das aulas, seria 6 dias. | 1.0 | [Não Conformidade] Backlog - @UnbFeelings/devel
A auditoria do [Backlog](https://github.com/UnbFeelings/unb-feelings-docs/wiki/Processo#2011-requisitos-da-release), que é um produto gerado do sub processo **requisitos**, foi feita com base nos [critérios](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Crit%C3%A9rios-de-Avalia%C3%A7%C3%A3o-e-T%C3%A9cnicas-de-Auditoria#backlog) pré-estabelecidos para poder verificar se o Processo foi utilizado. O resultado da auditoria pode ser acessado através da seguinte página: [Auditoria Backlog](https://github.com/UnbFeelings/unb-feelings-GQA/wiki/Auditoria-Backlog-Ciclo-1).
### Descrição
Foi identificado o principal motivo da não-adesão ao processo:
* Problema de comunicação (externa/interna);
#### Recomendações
Conforme o processo, é necessário que seja documentado o Backlog, os épicos e a features definidas com o cliente. Por a wiki ser a ferramenta para documentar artefatos, é importante que esteja na mesma.
Além disso, deve-se estabelecer com urgência uma nova reunião entre equipe de processos e desenvolvimento para identificar atividades desnecessárias no processo e principalmente artefatos que não agregam valor para o cliente. Ressalta-se ainda levar em conta a carga horária de trabalho efetivo de cada membro do desenvolvimento para propor melhorias ao processo que se adequem a carga horária dos desenvolvedores para que o processo não torne suas atividades tão custoso.
E que engaje de alguma forma a equipe de processo e de desenvolvimento, porque é bem visível a falta de comunicação e alinhamento de todos.
### Detalhes
**Autor:** Naiara Andrade
**Tipo:** Processo
**Prazo:** Seria de 3 dia para a correção, pela Matriz GUT. Mas como depende dos horários das aulas, seria 6 dias. | process | backlog unbfeelings devel a auditoria do que é um produto gerado do sub processo requisitos foi feita com base nos pré estabelecidos para poder verificar se o processo foi utilizado o resultado da auditoria pode ser acessado através da seguinte página descrição foi identificado o principal motivo da não adesão ao processo problema de comunicação externa interna recomendações conforme o processo é necessário que seja documentado o backlog os épicos e a features definidas com o cliente por a wiki ser a ferramenta para documentar artefatos é importante que esteja na mesma além disso deve se estabelecer com urgência uma nova reunião entre equipe de processos e desenvolvimento para identificar atividades desnecessárias no processo e principalmente artefatos que não agregam valor para o cliente ressalta se ainda levar em conta a carga horária de trabalho efetivo de cada membro do desenvolvimento para propor melhorias ao processo que se adequem a carga horária dos desenvolvedores para que o processo não torne suas atividades tão custoso e que engaje de alguma forma a equipe de processo e de desenvolvimento porque é bem visível a falta de comunicação e alinhamento de todos detalhes autor naiara andrade tipo processo prazo seria de dia para a correção pela matriz gut mas como depende dos horários das aulas seria dias | 1 |
19,095 | 25,147,998,811 | IssuesEvent | 2022-11-10 07:41:05 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Processing: Cannot choose attribute for "Class Identifier" in SAGA Supervised Classification for Grids | Processing Bug | ### What is the bug or the crash?
I am using SAGA-GIS tool _Supervised Classification for Grids_. One of the options of this tool is _Class Identifier_. It serves the purpose of indicating which of the fields of _Training Areas_, an optional input shape, contains the definitions of the classes. The SAGA-GIS version 7.3.0 does not enable the selection of that field. Instead, the dropdown list shows all available shape layers. As a consequence, the classification algorithm does not run.
### Steps to reproduce the issue
The figure below shows the SAGA-GIS tool _Supervised Classification for Grids_. The options _Class Identifier_ does not show the fields contained in the attribute table of the shape _treinamento_.

This is different from SAGA-GIS itself as shown in the figure below:

The shape used is available at <https://cloud.utfpr.edu.br/index.php/s/OoMQk3wRcMJEazJ>
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.16.4-Hannover | QGIS code branch | Release 3.16
-- | -- | -- | --
Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8
Compiled against GDAL/OGR | 3.2.1 | Running against GDAL/OGR | 3.2.1
Compiled against GEOS | 3.9.0-CAPI-1.16.2 | Running against GEOS | 3.9.0-CAPI-1.16.2
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
PostgreSQL Client Version | 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1) | SpatiaLite Version | 5.0.0
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 7.2.1 | Running against PROJ | Rel. 7.2.1, January 1st, 2021
OS Version | Ubuntu 20.04.2 LTS
Active python plugins | togglegroupvisibility; quick_map_services; processing_r; pointsamplingtool; mapbiomascollection; qfieldsync; QuickOSM; processing; db_manager; MetaSearch
</body></html>
### Additional context
I am happy to provide more info if needed. | 1.0 | Processing: Cannot choose attribute for "Class Identifier" in SAGA Supervised Classification for Grids - ### What is the bug or the crash?
I am using SAGA-GIS tool _Supervised Classification for Grids_. One of the options of this tool is _Class Identifier_. It serves the purpose of indicating which of the fields of _Training Areas_, an optional input shape, contains the definitions of the classes. The SAGA-GIS version 7.3.0 does not enable the selection of that field. Instead, the dropdown list shows all available shape layers. As a consequence, the classification algorithm does not run.
### Steps to reproduce the issue
The figure below shows the SAGA-GIS tool _Supervised Classification for Grids_. The options _Class Identifier_ does not show the fields contained in the attribute table of the shape _treinamento_.

This is different from SAGA-GIS itself as shown in the figure below:

The shape used is available at <https://cloud.utfpr.edu.br/index.php/s/OoMQk3wRcMJEazJ>
### Versions
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.16.4-Hannover | QGIS code branch | Release 3.16
-- | -- | -- | --
Compiled against Qt | 5.12.8 | Running against Qt | 5.12.8
Compiled against GDAL/OGR | 3.2.1 | Running against GDAL/OGR | 3.2.1
Compiled against GEOS | 3.9.0-CAPI-1.16.2 | Running against GEOS | 3.9.0-CAPI-1.16.2
Compiled against SQLite | 3.31.1 | Running against SQLite | 3.31.1
PostgreSQL Client Version | 12.6 (Ubuntu 12.6-0ubuntu0.20.04.1) | SpatiaLite Version | 5.0.0
QWT Version | 6.1.4 | QScintilla2 Version | 2.11.2
Compiled against PROJ | 7.2.1 | Running against PROJ | Rel. 7.2.1, January 1st, 2021
OS Version | Ubuntu 20.04.2 LTS
Active python plugins | togglegroupvisibility; quick_map_services; processing_r; pointsamplingtool; mapbiomascollection; qfieldsync; QuickOSM; processing; db_manager; MetaSearch
</body></html>
### Additional context
I am happy to provide more info if needed. | process | processing cannot choose attribute for class identifier in saga supervised classification for grids what is the bug or the crash i am using saga gis tool supervised classification for grids one of the options of this tool is class identifier it serves the purpose of indicating which of the fields of training areas an optional input shape contains the definitions of the classes the saga gis version does not enable the selection of that field instead the dropdown list shows all available shape layers as a consequence the classification algorithm does not run steps to reproduce the issue the figure below shows the saga gis tool supervised classification for grids the options class identifier does not show the fields contained in the attribute table of the shape treinamento this is different from saga gis itself as shown in the figure below the shape used is available at versions doctype html public dtd html en p li white space pre wrap qgis version hannover qgis code branch release compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version ubuntu spatialite version qwt version version compiled against proj running against proj rel january os version ubuntu lts active python plugins togglegroupvisibility quick map services processing r pointsamplingtool mapbiomascollection qfieldsync quickosm processing db manager metasearch additional context i am happy to provide more info if needed | 1 |
67,171 | 16,827,450,784 | IssuesEvent | 2021-06-17 20:41:51 | coq/coq | https://api.github.com/repos/coq/coq | closed | Neither "make -j 12" nor "make -j 12 coqide" builds coqide | part: build | ```
$ dune exec -- dev/shim/coqide-prelude
File "ide/coqide/coqOps.ml", line 701, characters 31-44:
Error: This expression has type
(Doc.id * (unit, Doc.id) Util.union) Interface.value ->
unit Coq.task
but an expression was expected of type
Interface.add_rty Interface.value -> 'a Coq.task
Type Doc.id * (unit, Doc.id) Util.union is not compatible with type
Interface.add_rty =
Interface.state_id *
((unit, Interface.state_id) Interface.union * string)
Type (unit, Doc.id) Util.union = (unit, Doc.id) CSig.union
is not compatible with type
(unit, Interface.state_id) Interface.union * string
File "ide/coqide/fake_ide.ml", line 243, characters 16-72:
Error: This expression has type Interface.add_rty Interface.value
but an expression was expected of type
(Document.id * (unit, Document.id) Util.union) Interface.value
Type
Interface.add_rty =
Interface.state_id *
((unit, Interface.state_id) Interface.union * string)
is not compatible with type
Document.id * (unit, Document.id) Util.union
Type (unit, Interface.state_id) Interface.union * string
is not compatible with type
(unit, Document.id) Util.union = (unit, Document.id) CSig.union
$ make -j 12
make --warn-undefined-variable --no-builtin-rules -f Makefile.build
make[1]: Entering directory '/home/proj/coq'
DUNE sources
DUNE _build/default/user-contrib/Ltac2/ltac2_plugin.cmxs
DUNE _build/default/doc/tools/docgram/doc_grammar.exe
DUNE revision
make[1]: Leaving directory '/home/proj/coq'
$ make -j 12 coqide
make --warn-undefined-variable --no-builtin-rules -f Makefile.build coqide
make[1]: Entering directory '/home/proj/coq'
DUNE sources
make[1]: Nothing to be done for 'coqide'.
make[1]: Leaving directory '/home/proj/coq'
``` | 1.0 | Neither "make -j 12" nor "make -j 12 coqide" builds coqide - ```
$ dune exec -- dev/shim/coqide-prelude
File "ide/coqide/coqOps.ml", line 701, characters 31-44:
Error: This expression has type
(Doc.id * (unit, Doc.id) Util.union) Interface.value ->
unit Coq.task
but an expression was expected of type
Interface.add_rty Interface.value -> 'a Coq.task
Type Doc.id * (unit, Doc.id) Util.union is not compatible with type
Interface.add_rty =
Interface.state_id *
((unit, Interface.state_id) Interface.union * string)
Type (unit, Doc.id) Util.union = (unit, Doc.id) CSig.union
is not compatible with type
(unit, Interface.state_id) Interface.union * string
File "ide/coqide/fake_ide.ml", line 243, characters 16-72:
Error: This expression has type Interface.add_rty Interface.value
but an expression was expected of type
(Document.id * (unit, Document.id) Util.union) Interface.value
Type
Interface.add_rty =
Interface.state_id *
((unit, Interface.state_id) Interface.union * string)
is not compatible with type
Document.id * (unit, Document.id) Util.union
Type (unit, Interface.state_id) Interface.union * string
is not compatible with type
(unit, Document.id) Util.union = (unit, Document.id) CSig.union
$ make -j 12
make --warn-undefined-variable --no-builtin-rules -f Makefile.build
make[1]: Entering directory '/home/proj/coq'
DUNE sources
DUNE _build/default/user-contrib/Ltac2/ltac2_plugin.cmxs
DUNE _build/default/doc/tools/docgram/doc_grammar.exe
DUNE revision
make[1]: Leaving directory '/home/proj/coq'
$ make -j 12 coqide
make --warn-undefined-variable --no-builtin-rules -f Makefile.build coqide
make[1]: Entering directory '/home/proj/coq'
DUNE sources
make[1]: Nothing to be done for 'coqide'.
make[1]: Leaving directory '/home/proj/coq'
``` | non_process | neither make j nor make j coqide builds coqide dune exec dev shim coqide prelude file ide coqide coqops ml line characters error this expression has type doc id unit doc id util union interface value unit coq task but an expression was expected of type interface add rty interface value a coq task type doc id unit doc id util union is not compatible with type interface add rty interface state id unit interface state id interface union string type unit doc id util union unit doc id csig union is not compatible with type unit interface state id interface union string file ide coqide fake ide ml line characters error this expression has type interface add rty interface value but an expression was expected of type document id unit document id util union interface value type interface add rty interface state id unit interface state id interface union string is not compatible with type document id unit document id util union type unit interface state id interface union string is not compatible with type unit document id util union unit document id csig union make j make warn undefined variable no builtin rules f makefile build make entering directory home proj coq dune sources dune build default user contrib plugin cmxs dune build default doc tools docgram doc grammar exe dune revision make leaving directory home proj coq make j coqide make warn undefined variable no builtin rules f makefile build coqide make entering directory home proj coq dune sources make nothing to be done for coqide make leaving directory home proj coq | 0 |
7,898 | 11,084,025,656 | IssuesEvent | 2019-12-13 15:40:38 | NationalSecurityAgency/ghidra | https://api.github.com/repos/NationalSecurityAgency/ghidra | closed | Sleigh: Motorola 6805: typo in BSET instruction | Feature: Processor/6805 Type: Bug | **Describe the bug**
`BSET` instruction looks like erroneous copy of `BCLR` one.
https://github.com/NationalSecurityAgency/ghidra/blob/7e4dd35859e40233cd1b71248e0da411f0ac0812/Ghidra/Processors/6805/data/languages/6805.slaspec#L243
https://github.com/NationalSecurityAgency/ghidra/blob/7e4dd35859e40233cd1b71248e0da411f0ac0812/Ghidra/Processors/6805/data/languages/6805.slaspec#L150
Please fix it with
```
:BSET n,DIRECT is op4_7=1 & bit_0=0 & n; DIRECT {
local mask = (1 << n);
DIRECT = DIRECT | mask;
}
``` | 1.0 | Sleigh: Motorola 6805: typo in BSET instruction - **Describe the bug**
`BSET` instruction looks like erroneous copy of `BCLR` one.
https://github.com/NationalSecurityAgency/ghidra/blob/7e4dd35859e40233cd1b71248e0da411f0ac0812/Ghidra/Processors/6805/data/languages/6805.slaspec#L243
https://github.com/NationalSecurityAgency/ghidra/blob/7e4dd35859e40233cd1b71248e0da411f0ac0812/Ghidra/Processors/6805/data/languages/6805.slaspec#L150
Please fix it with
```
:BSET n,DIRECT is op4_7=1 & bit_0=0 & n; DIRECT {
local mask = (1 << n);
DIRECT = DIRECT | mask;
}
``` | process | sleigh motorola typo in bset instruction describe the bug bset instruction looks like erroneous copy of bclr one please fix it with bset n direct is bit n direct local mask n direct direct mask | 1 |
1,749 | 4,442,357,847 | IssuesEvent | 2016-08-19 13:12:35 | neuropoly/spinalcordtoolbox | https://api.github.com/repos/neuropoly/spinalcordtoolbox | closed | enables creation of centerline from a series of points | enhancement priority: medium sct_process_segmentation | ~~~
xuanwu_20160624_yaou_MS013/t1 $ sct_process_segmentation -i t1_mask_viewer.nii.gz -p centerline
----------------------------------------------------------------------------------------------------
Spinal Cord Toolbox (version dev-3e107b779165b4feb9871fa193dec8c660108832)
Running /Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py -i t1_mask_viewer.nii.gz -p centerline
Check parameters:
.. segmentation file: t1_mask_viewer.nii.gz
Create temporary folder...
mkdir tmp.160819081840_272135/
Copying data to tmp folder...
sct_convert -i /Users/julien/data/test/xuanwu_20160624_yaou_MS013/t1/t1_mask_viewer.nii.gz -o tmp.160819081840_272135/segmentation.nii.gz
Orient centerline to RPI orientation...
sct_image -i segmentation.nii.gz -setorient RPI -o segmentation_RPI.nii.gz
Open segmentation volume...
Get data dimensions...
.. matrix size: 96 x 260 x 320
.. voxel size: 1.0mm x 0.98125mm x 0.98125mm
/Users/julien/code/spinalcordtoolbox/python/lib/python2.7/site-packages/numpy/core/_methods.py:59: RuntimeWarning: Mean of empty slice.
warnings.warn("Mean of empty slice.", RuntimeWarning)
/Users/julien/code/spinalcordtoolbox/python/lib/python2.7/site-packages/numpy/core/_methods.py:70: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
Smooth centerline/segmentation...
.. Get center of mass of the centerline/segmentation...
.. Smoothing algo = hanning
.. Windows length = 80
WARNING: The smoothing window is larger than the number of points. New value: 11
WARNING: The smoothing window is larger than the number of points. New value: 11
/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py:411: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
data[round(x_centerline_fit[iz-min_z_index]), round(y_centerline_fit[iz-min_z_index]), iz] = 1 # if index is out of bounds here for hanning: either the segmentation has holes or labels have been added to the file
Traceback (most recent call last):
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 997, in <module>
main(sys.argv[1:])
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 249, in main
fname_output = extract_centerline(fname_segmentation, remove_temp_files, verbose=param.verbose, algo_fitting=param.algo_fitting)
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 411, in extract_centerline
data[round(x_centerline_fit[iz-min_z_index]), round(y_centerline_fit[iz-min_z_index]), iz] = 1 # if index is out of bounds here for hanning: either the segmentation has holes or labels have been added to the file
IndexError: index 15 is out of bounds for axis 0 with size 15
~~~ | 1.0 | enables creation of centerline from a series of points - ~~~
xuanwu_20160624_yaou_MS013/t1 $ sct_process_segmentation -i t1_mask_viewer.nii.gz -p centerline
----------------------------------------------------------------------------------------------------
Spinal Cord Toolbox (version dev-3e107b779165b4feb9871fa193dec8c660108832)
Running /Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py -i t1_mask_viewer.nii.gz -p centerline
Check parameters:
.. segmentation file: t1_mask_viewer.nii.gz
Create temporary folder...
mkdir tmp.160819081840_272135/
Copying data to tmp folder...
sct_convert -i /Users/julien/data/test/xuanwu_20160624_yaou_MS013/t1/t1_mask_viewer.nii.gz -o tmp.160819081840_272135/segmentation.nii.gz
Orient centerline to RPI orientation...
sct_image -i segmentation.nii.gz -setorient RPI -o segmentation_RPI.nii.gz
Open segmentation volume...
Get data dimensions...
.. matrix size: 96 x 260 x 320
.. voxel size: 1.0mm x 0.98125mm x 0.98125mm
/Users/julien/code/spinalcordtoolbox/python/lib/python2.7/site-packages/numpy/core/_methods.py:59: RuntimeWarning: Mean of empty slice.
warnings.warn("Mean of empty slice.", RuntimeWarning)
/Users/julien/code/spinalcordtoolbox/python/lib/python2.7/site-packages/numpy/core/_methods.py:70: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
Smooth centerline/segmentation...
.. Get center of mass of the centerline/segmentation...
.. Smoothing algo = hanning
.. Windows length = 80
WARNING: The smoothing window is larger than the number of points. New value: 11
WARNING: The smoothing window is larger than the number of points. New value: 11
/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py:411: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
data[round(x_centerline_fit[iz-min_z_index]), round(y_centerline_fit[iz-min_z_index]), iz] = 1 # if index is out of bounds here for hanning: either the segmentation has holes or labels have been added to the file
Traceback (most recent call last):
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 997, in <module>
main(sys.argv[1:])
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 249, in main
fname_output = extract_centerline(fname_segmentation, remove_temp_files, verbose=param.verbose, algo_fitting=param.algo_fitting)
File "/Users/julien/code/spinalcordtoolbox/scripts/sct_process_segmentation.py", line 411, in extract_centerline
data[round(x_centerline_fit[iz-min_z_index]), round(y_centerline_fit[iz-min_z_index]), iz] = 1 # if index is out of bounds here for hanning: either the segmentation has holes or labels have been added to the file
IndexError: index 15 is out of bounds for axis 0 with size 15
~~~ | process | enables creation of centerline from a series of points xuanwu yaou sct process segmentation i mask viewer nii gz p centerline spinal cord toolbox version dev running users julien code spinalcordtoolbox scripts sct process segmentation py i mask viewer nii gz p centerline check parameters segmentation file mask viewer nii gz create temporary folder mkdir tmp copying data to tmp folder sct convert i users julien data test xuanwu yaou mask viewer nii gz o tmp segmentation nii gz orient centerline to rpi orientation sct image i segmentation nii gz setorient rpi o segmentation rpi nii gz open segmentation volume get data dimensions matrix size x x voxel size x x users julien code spinalcordtoolbox python lib site packages numpy core methods py runtimewarning mean of empty slice warnings warn mean of empty slice runtimewarning users julien code spinalcordtoolbox python lib site packages numpy core methods py runtimewarning invalid value encountered in double scalars ret ret dtype type ret rcount smooth centerline segmentation get center of mass of the centerline segmentation smoothing algo hanning windows length warning the smoothing window is larger than the number of points new value warning the smoothing window is larger than the number of points new value users julien code spinalcordtoolbox scripts sct process segmentation py visibledeprecationwarning using a non integer number instead of an integer will result in an error in the future data round y centerline fit iz if index is out of bounds here for hanning either the segmentation has holes or labels have been added to the file traceback most recent call last file users julien code spinalcordtoolbox scripts sct process segmentation py line in main sys argv file users julien code spinalcordtoolbox scripts sct process segmentation py line in main fname output extract centerline fname segmentation remove temp files verbose param verbose algo fitting param algo fitting file users julien code spinalcordtoolbox scripts sct process segmentation py line in extract centerline data round y centerline fit iz if index is out of bounds here for hanning either the segmentation has holes or labels have been added to the file indexerror index is out of bounds for axis with size | 1 |
37,411 | 8,390,634,001 | IssuesEvent | 2018-10-09 13:10:11 | ShaikASK/Testing | https://api.github.com/repos/ShaikASK/Testing | opened | Tasks : Self Assigned "Tasks" are getting" displayed under HR admin "Dashboard" screen | Dashboard Defect HR Admin Module HR User Module P3 Tasks | Steps To Replicate :
1.Launch the URL
2.Sign in as HR user
3.Create a Task (Self Assigned) and save it
4.Sign in as HR admin user
5.Check the "Dashboard" screen
Experienced Behavior : Observed that Self Assigned "Tasks" are getting" displayed under HR admin "Dashboard" screen (Refer Screen Shot)
Expected behavior : Ensure that Self Assigned "Tasks" should not display under HR admin "Dashboard" screen

| 1.0 | Tasks : Self Assigned "Tasks" are getting" displayed under HR admin "Dashboard" screen - Steps To Replicate :
1.Launch the URL
2.Sign in as HR user
3.Create a Task (Self Assigned) and save it
4.Sign in as HR admin user
5.Check the "Dashboard" screen
Experienced Behavior : Observed that Self Assigned "Tasks" are getting" displayed under HR admin "Dashboard" screen (Refer Screen Shot)
Expected behavior : Ensure that Self Assigned "Tasks" should not display under HR admin "Dashboard" screen

| non_process | tasks self assigned tasks are getting displayed under hr admin dashboard screen steps to replicate launch the url sign in as hr user create a task self assigned and save it sign in as hr admin user check the dashboard screen experienced behavior observed that self assigned tasks are getting displayed under hr admin dashboard screen refer screen shot expected behavior ensure that self assigned tasks should not display under hr admin dashboard screen | 0 |
13,105 | 15,496,597,201 | IssuesEvent | 2021-03-11 02:58:51 | dluiscosta/weather_api | https://api.github.com/repos/dluiscosta/weather_api | opened | Automatically run tests on MR | development process enhancement | Automatically run tests on MR, displaying their results and preventing the merging to ```master``` if any tests fail.
This requires #10. | 1.0 | Automatically run tests on MR - Automatically run tests on MR, displaying their results and preventing the merging to ```master``` if any tests fail.
This requires #10. | process | automatically run tests on mr automatically run tests on mr displaying their results and preventing the merging to master if any tests fail this requires | 1 |
7,509 | 10,588,520,117 | IssuesEvent | 2019-10-09 02:21:52 | kubeflow/testing | https://api.github.com/repos/kubeflow/testing | closed | Create a Google group with viewer only access to the CI infrastructure | kind/process priority/p1 | We should create a suitable google group to grant folks viewer only access to ci infrastructure to support debugging tests.
/cc @jinchihe | 1.0 | Create a Google group with viewer only access to the CI infrastructure - We should create a suitable google group to grant folks viewer only access to ci infrastructure to support debugging tests.
/cc @jinchihe | process | create a google group with viewer only access to the ci infrastructure we should create a suitable google group to grant folks viewer only access to ci infrastructure to support debugging tests cc jinchihe | 1 |
350,846 | 10,509,605,338 | IssuesEvent | 2019-09-27 11:26:36 | getkirby/kirby | https://api.github.com/repos/getkirby/kirby | closed | [Panel] e.nextSibling.focus is not a function on multiselect field | priority: high 🔥 type: bug 🐛 | **Describe the bug**
Error occured when navigate right (no next siblings) selected options on multiselect field
**To Reproduce**
Steps to reproduce the behavior:
1. Go to panel
2. Add multiselect field with options to any page
3. Edit page and select some options
4. Navigate selected options with `right` arrow
4. See error
**Screenshots**

Screenshot actions: TAB + RIGHT + LEFT + RIGHT + RIGHT
**Kirby Version**
3.2.5 Stable
**Console output**
```
app.js:9365 TypeError: e.nextSibling.focus is not a function
at a.navigate (app.js:4900)
at n.nativeOn.keydown (app.js:4698)
at oe (vendor.js:41)
at HTMLSpanElement.n (vendor.js:41)
at HTMLSpanElement.Ni.i._wrapper (vendor.js:41)
```
**Desktop**
- Windows 10
- Chrome 77
**Additional context**
- Left arrow key working properly when no prev siblings
- Left and right arrow actions working properly on tags fields | 1.0 | [Panel] e.nextSibling.focus is not a function on multiselect field - **Describe the bug**
Error occured when navigate right (no next siblings) selected options on multiselect field
**To Reproduce**
Steps to reproduce the behavior:
1. Go to panel
2. Add multiselect field with options to any page
3. Edit page and select some options
4. Navigate selected options with `right` arrow
4. See error
**Screenshots**

Screenshot actions: TAB + RIGHT + LEFT + RIGHT + RIGHT
**Kirby Version**
3.2.5 Stable
**Console output**
```
app.js:9365 TypeError: e.nextSibling.focus is not a function
at a.navigate (app.js:4900)
at n.nativeOn.keydown (app.js:4698)
at oe (vendor.js:41)
at HTMLSpanElement.n (vendor.js:41)
at HTMLSpanElement.Ni.i._wrapper (vendor.js:41)
```
**Desktop**
- Windows 10
- Chrome 77
**Additional context**
- Left arrow key working properly when no prev siblings
- Left and right arrow actions working properly on tags fields | non_process | e nextsibling focus is not a function on multiselect field describe the bug error occured when navigate right no next siblings selected options on multiselect field to reproduce steps to reproduce the behavior go to panel add multiselect field with options to any page edit page and select some options navigate selected options with right arrow see error screenshots screenshot actions tab right left right right kirby version stable console output app js typeerror e nextsibling focus is not a function at a navigate app js at n nativeon keydown app js at oe vendor js at htmlspanelement n vendor js at htmlspanelement ni i wrapper vendor js desktop windows chrome additional context left arrow key working properly when no prev siblings left and right arrow actions working properly on tags fields | 0 |
22,245 | 30,799,251,119 | IssuesEvent | 2023-07-31 23:01:52 | h4sh5/npm-auto-scanner | https://api.github.com/repos/h4sh5/npm-auto-scanner | opened | developer_backup_test529 1.999.0 has 2 guarddog issues | npm-install-script npm-silent-process-execution | ```{"npm-install-script":[{"code":" \"preinstall\": \"node preinstall.js\",","location":"package/package.json:7","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"const child = spawn('node', ['index.js'], {\n detached: true,\n stdio: 'ignore'\n});","location":"package/preinstall.js:3","message":"This package is silently executing another executable"}]}``` | 1.0 | developer_backup_test529 1.999.0 has 2 guarddog issues - ```{"npm-install-script":[{"code":" \"preinstall\": \"node preinstall.js\",","location":"package/package.json:7","message":"The package.json has a script automatically running when the package is installed"}],"npm-silent-process-execution":[{"code":"const child = spawn('node', ['index.js'], {\n detached: true,\n stdio: 'ignore'\n});","location":"package/preinstall.js:3","message":"This package is silently executing another executable"}]}``` | process | developer backup has guarddog issues npm install script npm silent process execution n detached true n stdio ignore n location package preinstall js message this package is silently executing another executable | 1 |
10,140 | 7,094,279,712 | IssuesEvent | 2018-01-13 01:13:44 | owncloud/core | https://api.github.com/repos/owncloud/core | closed | Optimize reading quota/free space on PROPFIND | bug performance status/STALE | @icewind1991 I did some debugging and I noticed that when you do a PROPFIND it will return the quota/free space for every subfolder too.
The problem is that getFileInfo() is likely to be called multiple times on the same node and so far we don't cache it. This means that this will cause additional DB queries, one for every file for which we want to get the quota/size in `OC_Helper::getStorageInfo()`:
```
0 OC\Files\View->getFileInfo() /srv/www/htdocs/owncloud/lib/private/files/view.php:936
1 OCA\Files_Encryption\Proxy->postFileSize() /srv/www/htdocs/owncloud/apps/files_encryption/lib/proxy.php:341
2 OCA\Files_Encryption\Proxy->postGetFileInfo() /srv/www/htdocs/owncloud/apps/files_encryption/lib/proxy.php:309
3 OC_FileProxy::runPostProxies() /srv/www/htdocs/owncloud/lib/private/fileproxy.php:124
4 OC\Files\View->getFileInfo() /srv/www/htdocs/owncloud/lib/private/files/view.php:984
5 OC\Files\Filesystem::getFileInfo() /srv/www/htdocs/owncloud/lib/private/files/filesystem.php:793
6 OC_Helper::getStorageInfo() /srv/www/htdocs/owncloud/lib/private/helper.php:909
7 OC\Connector\Sabre\Directory->getQuotaInfo() /srv/www/htdocs/owncloud/lib/private/connector/sabre/directory.php:236
8 Sabre\DAV\CorePlugin->Sabre\DAV\{closure}() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/CorePlugin.php:792
9 Sabre\DAV\PropFind->handle() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/PropFind.php:98
10 Sabre\DAV\CorePlugin->propFind() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/CorePlugin.php:794
11 call_user_func_array:{/srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105}() /srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105
12 Sabre\Event\EventEmitter->emit() /srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105
13 Sabre\DAV\Server->getPropertiesByNode() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php:1016
14 Sabre\DAV\Server->getPropertiesForPath() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php:936
15 Sabre\DAV\CorePlugin->httpPropfind() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/CorePlugin.php:327
16 call_user_func_array:{/srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105}() /srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105
17 Sabre\Event\EventEmitter->emit() /srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105
18 Sabre\DAV\Server->invokeMethod() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php:469
19 Sabre\DAV\Server->exec() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php:254
20 require_once() /srv/www/htdocs/owncloud/apps/files/appinfo/remote.php:62
21 {main} /srv/www/htdocs/owncloud/remote.php:54
```
There must be a way to either cache the file info or simply shortcut the free-space querying when coming from a PROPFIND operation.
What do you think ?
| True | Optimize reading quota/free space on PROPFIND - @icewind1991 I did some debugging and I noticed that when you do a PROPFIND it will return the quota/free space for every subfolder too.
The problem is that getFileInfo() is likely to be called multiple times on the same node and so far we don't cache it. This means that this will cause additional DB queries, one for every file for which we want to get the quota/size in `OC_Helper::getStorageInfo()`:
```
0 OC\Files\View->getFileInfo() /srv/www/htdocs/owncloud/lib/private/files/view.php:936
1 OCA\Files_Encryption\Proxy->postFileSize() /srv/www/htdocs/owncloud/apps/files_encryption/lib/proxy.php:341
2 OCA\Files_Encryption\Proxy->postGetFileInfo() /srv/www/htdocs/owncloud/apps/files_encryption/lib/proxy.php:309
3 OC_FileProxy::runPostProxies() /srv/www/htdocs/owncloud/lib/private/fileproxy.php:124
4 OC\Files\View->getFileInfo() /srv/www/htdocs/owncloud/lib/private/files/view.php:984
5 OC\Files\Filesystem::getFileInfo() /srv/www/htdocs/owncloud/lib/private/files/filesystem.php:793
6 OC_Helper::getStorageInfo() /srv/www/htdocs/owncloud/lib/private/helper.php:909
7 OC\Connector\Sabre\Directory->getQuotaInfo() /srv/www/htdocs/owncloud/lib/private/connector/sabre/directory.php:236
8 Sabre\DAV\CorePlugin->Sabre\DAV\{closure}() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/CorePlugin.php:792
9 Sabre\DAV\PropFind->handle() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/PropFind.php:98
10 Sabre\DAV\CorePlugin->propFind() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/CorePlugin.php:794
11 call_user_func_array:{/srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105}() /srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105
12 Sabre\Event\EventEmitter->emit() /srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105
13 Sabre\DAV\Server->getPropertiesByNode() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php:1016
14 Sabre\DAV\Server->getPropertiesForPath() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php:936
15 Sabre\DAV\CorePlugin->httpPropfind() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/CorePlugin.php:327
16 call_user_func_array:{/srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105}() /srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105
17 Sabre\Event\EventEmitter->emit() /srv/www/htdocs/owncloud/3rdparty/sabre/event/lib/EventEmitterTrait.php:105
18 Sabre\DAV\Server->invokeMethod() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php:469
19 Sabre\DAV\Server->exec() /srv/www/htdocs/owncloud/3rdparty/sabre/dav/lib/DAV/Server.php:254
20 require_once() /srv/www/htdocs/owncloud/apps/files/appinfo/remote.php:62
21 {main} /srv/www/htdocs/owncloud/remote.php:54
```
There must be a way to either cache the file info or simply shortcut the free-space querying when coming from a PROPFIND operation.
What do you think ?
| non_process | optimize reading quota free space on propfind i did some debugging and i noticed that when you do a propfind it will return the quota free space for every subfolder too the problem is that getfileinfo is likely to be called multiple times on the same node and so far we don t cache it this means that this will cause additional db queries one for every file for which we want to get the quota size in oc helper getstorageinfo oc files view getfileinfo srv www htdocs owncloud lib private files view php oca files encryption proxy postfilesize srv www htdocs owncloud apps files encryption lib proxy php oca files encryption proxy postgetfileinfo srv www htdocs owncloud apps files encryption lib proxy php oc fileproxy runpostproxies srv www htdocs owncloud lib private fileproxy php oc files view getfileinfo srv www htdocs owncloud lib private files view php oc files filesystem getfileinfo srv www htdocs owncloud lib private files filesystem php oc helper getstorageinfo srv www htdocs owncloud lib private helper php oc connector sabre directory getquotainfo srv www htdocs owncloud lib private connector sabre directory php sabre dav coreplugin sabre dav closure srv www htdocs owncloud sabre dav lib dav coreplugin php sabre dav propfind handle srv www htdocs owncloud sabre dav lib dav propfind php sabre dav coreplugin propfind srv www htdocs owncloud sabre dav lib dav coreplugin php call user func array srv www htdocs owncloud sabre event lib eventemittertrait php srv www htdocs owncloud sabre event lib eventemittertrait php sabre event eventemitter emit srv www htdocs owncloud sabre event lib eventemittertrait php sabre dav server getpropertiesbynode srv www htdocs owncloud sabre dav lib dav server php sabre dav server getpropertiesforpath srv www htdocs owncloud sabre dav lib dav server php sabre dav coreplugin httppropfind srv www htdocs owncloud sabre dav lib dav coreplugin php call user func array srv www htdocs owncloud sabre event lib eventemittertrait php srv www htdocs owncloud sabre event lib eventemittertrait php sabre event eventemitter emit srv www htdocs owncloud sabre event lib eventemittertrait php sabre dav server invokemethod srv www htdocs owncloud sabre dav lib dav server php sabre dav server exec srv www htdocs owncloud sabre dav lib dav server php require once srv www htdocs owncloud apps files appinfo remote php main srv www htdocs owncloud remote php there must be a way to either cache the file info or simply shortcut the free space querying when coming from a propfind operation what do you think | 0 |
10,454 | 13,234,960,609 | IssuesEvent | 2020-08-18 17:11:26 | googleapis/repo-automation-bots | https://api.github.com/repos/googleapis/repo-automation-bots | closed | Support @octokit/webhooks versions > 7.1.1 by changing import strategy | priority: p1 type: process | [@octokit/webhooks.js](https://www.npmjs.com/package/@octokit/webhooks) changed the way they export the `Webhooks` type i[n a recent commit](https://github.com/octokit/webhooks.js/commit/12a266a34ad4781975038a8a78908481460f5fce#diff-41df78d86ed4de80aae9b0b9560feb2b) and this [breaks the release-please tests](https://github.com/googleapis/repo-automation-bots/pull/791/checks?check_run_id=939823269#step:8:1) for versions above 7.1.1. The fix should be fairly straightforward but it will require some testing to ensure nothing else is broken.
https://github.com/octokit/webhooks.js/pull/113 | 1.0 | Support @octokit/webhooks versions > 7.1.1 by changing import strategy - [@octokit/webhooks.js](https://www.npmjs.com/package/@octokit/webhooks) changed the way they export the `Webhooks` type i[n a recent commit](https://github.com/octokit/webhooks.js/commit/12a266a34ad4781975038a8a78908481460f5fce#diff-41df78d86ed4de80aae9b0b9560feb2b) and this [breaks the release-please tests](https://github.com/googleapis/repo-automation-bots/pull/791/checks?check_run_id=939823269#step:8:1) for versions above 7.1.1. The fix should be fairly straightforward but it will require some testing to ensure nothing else is broken.
https://github.com/octokit/webhooks.js/pull/113 | process | support octokit webhooks versions by changing import strategy changed the way they export the webhooks type i and this for versions above the fix should be fairly straightforward but it will require some testing to ensure nothing else is broken | 1 |
15,421 | 2,852,680,702 | IssuesEvent | 2015-06-01 14:49:32 | tokland/pysheng | https://api.github.com/repos/tokland/pysheng | closed | error from GUI | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. follow installation instructions
2. run pysheng-gui
What is the expected output? What do you see instead?
GUI does not open, following error is printed:
Traceback (most recent call last):
File "pysheng-gui", line 14, in <module>
sys.exit(main(sys.argv[1:]))
File "pysheng-gui", line 9, in main
widgets, state = gui.run(book_url)
File "/usr/local/lib/python2.7/dist-packages/pysheng/gui.py", line 362, in run
raise ValueError, "cannot find glade file: main.glade"
ValueError: cannot find glade file: main.glade
What version of the product are you using? On what operating system?
pysheng 0.1 on Linux Mint 15
NOTE: Pysheng is working fine for me without the GUI
```
Original issue reported on code.google.com by `brian....@gmail.com` on 24 Sep 2013 at 3:26 | 1.0 | error from GUI - ```
What steps will reproduce the problem?
1. follow installation instructions
2. run pysheng-gui
What is the expected output? What do you see instead?
GUI does not open, following error is printed:
Traceback (most recent call last):
File "pysheng-gui", line 14, in <module>
sys.exit(main(sys.argv[1:]))
File "pysheng-gui", line 9, in main
widgets, state = gui.run(book_url)
File "/usr/local/lib/python2.7/dist-packages/pysheng/gui.py", line 362, in run
raise ValueError, "cannot find glade file: main.glade"
ValueError: cannot find glade file: main.glade
What version of the product are you using? On what operating system?
pysheng 0.1 on Linux Mint 15
NOTE: Pysheng is working fine for me without the GUI
```
Original issue reported on code.google.com by `brian....@gmail.com` on 24 Sep 2013 at 3:26 | non_process | error from gui what steps will reproduce the problem follow installation instructions run pysheng gui what is the expected output what do you see instead gui does not open following error is printed traceback most recent call last file pysheng gui line in sys exit main sys argv file pysheng gui line in main widgets state gui run book url file usr local lib dist packages pysheng gui py line in run raise valueerror cannot find glade file main glade valueerror cannot find glade file main glade what version of the product are you using on what operating system pysheng on linux mint note pysheng is working fine for me without the gui original issue reported on code google com by brian gmail com on sep at | 0 |
20,557 | 27,217,797,394 | IssuesEvent | 2023-02-21 00:35:50 | cse442-at-ub/project_s23-iweatherify | https://api.github.com/repos/cse442-at-ub/project_s23-iweatherify | opened | As a beginner participant, I want to read possible semester long projects so that I can see the thought process behind what idea got picked | Processing Task Sprint 1 | **Acceptance tests**
*Test 1*
1) Refer to https://docs.google.com/document/d/1fQDM2_rvD49LgCHpRX-fh-yX0UqBG1fgJzAu0D9KsRQ/edit?usp=sharing | 1.0 | As a beginner participant, I want to read possible semester long projects so that I can see the thought process behind what idea got picked - **Acceptance tests**
*Test 1*
1) Refer to https://docs.google.com/document/d/1fQDM2_rvD49LgCHpRX-fh-yX0UqBG1fgJzAu0D9KsRQ/edit?usp=sharing | process | as a beginner participant i want to read possible semester long projects so that i can see the thought process behind what idea got picked acceptance tests test refer to | 1 |
21,317 | 28,565,849,385 | IssuesEvent | 2023-04-21 02:00:09 | lizhihao6/get-daily-arxiv-noti | https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti | opened | New submissions for Fri, 21 Apr 23 | event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB | ## Keyword: events
### Introducing Construct Theory as a Standard Methodology for Inclusive AI Models
- **Authors:** Susanna Raj, Sudha Jamthe, Yashaswini Viswanath, Suresh Lokiah
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.09867
- **Pdf link:** https://arxiv.org/pdf/2304.09867
- **Abstract**
Construct theory in social psychology, developed by George Kelly are mental constructs to predict and anticipate events. Constructs are how humans interpret, curate, predict and validate data; information. AI today is biased because it is trained with a narrow construct as defined by the training data labels. Machine Learning algorithms for facial recognition discriminate against darker skin colors and in the ground breaking research papers (Buolamwini, Joy and Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. FAT (2018), the inclusion of phenotypic labeling is proposed as a viable solution. In Construct theory, phenotype is just one of the many subelements that make up the construct of a face. In this paper, we present 15 main elements of the construct of face, with 50 subelements and tested Google Cloud Vision API and Microsoft Cognitive Services API using FairFace dataset that currently has data for 7 races, genders and ages, and we retested against FairFace Plus dataset curated by us. Our results show exactly where they have gaps for inclusivity. Based on our experiment results, we propose that validated, inclusive constructs become industry standards for AI ML models going forward.
### Spiking-Fer: Spiking Neural Network for Facial Expression Recognition With Event Cameras
- **Authors:** Sami Barchid, Benjamin Allaert, Amel Aissaoui, José Mennesson, Chaabane Djéraba
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.10211
- **Pdf link:** https://arxiv.org/pdf/2304.10211
- **Abstract**
Facial Expression Recognition (FER) is an active research domain that has shown great progress recently, notably thanks to the use of large deep learning models. However, such approaches are particularly energy intensive, which makes their deployment difficult for edge devices. To address this issue, Spiking Neural Networks (SNNs) coupled with event cameras are a promising alternative, capable of processing sparse and asynchronous events with lower energy consumption. In this paper, we establish the first use of event cameras for FER, named "Event-based FER", and propose the first related benchmarks by converting popular video FER datasets to event streams. To deal with this new task, we propose "Spiking-FER", a deep convolutional SNN model, and compare it against a similar Artificial Neural Network (ANN). Experiments show that the proposed approach achieves comparable performance to the ANN architecture, while consuming less energy by orders of magnitude (up to 65.39x). In addition, an experimental study of various event-based data augmentation techniques is performed to provide insights into the efficient transformations specific to event-based FER.
## Keyword: event camera
### Spiking-Fer: Spiking Neural Network for Facial Expression Recognition With Event Cameras
- **Authors:** Sami Barchid, Benjamin Allaert, Amel Aissaoui, José Mennesson, Chaabane Djéraba
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.10211
- **Pdf link:** https://arxiv.org/pdf/2304.10211
- **Abstract**
Facial Expression Recognition (FER) is an active research domain that has shown great progress recently, notably thanks to the use of large deep learning models. However, such approaches are particularly energy intensive, which makes their deployment difficult for edge devices. To address this issue, Spiking Neural Networks (SNNs) coupled with event cameras are a promising alternative, capable of processing sparse and asynchronous events with lower energy consumption. In this paper, we establish the first use of event cameras for FER, named "Event-based FER", and propose the first related benchmarks by converting popular video FER datasets to event streams. To deal with this new task, we propose "Spiking-FER", a deep convolutional SNN model, and compare it against a similar Artificial Neural Network (ANN). Experiments show that the proposed approach achieves comparable performance to the ANN architecture, while consuming less energy by orders of magnitude (up to 65.39x). In addition, an experimental study of various event-based data augmentation techniques is performed to provide insights into the efficient transformations specific to event-based FER.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Omni Aggregation Networks for Lightweight Image Super-Resolution
- **Authors:** Hang Wang, Xuanhong Chen, Bingbing Ni, Yutian Liu, Jinfan Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10244
- **Pdf link:** https://arxiv.org/pdf/2304.10244
- **Abstract**
While lightweight ViT framework has made tremendous progress in image super-resolution, its uni-dimensional self-attention modeling, as well as homogeneous aggregation scheme, limit its effective receptive field (ERF) to include more comprehensive interactions from both spatial and channel dimensions. To tackle these drawbacks, this work proposes two enhanced components under a new Omni-SR architecture. First, an Omni Self-Attention (OSA) block is proposed based on dense interaction principle, which can simultaneously model pixel-interaction from both spatial and channel dimensions, mining the potential correlations across omni-axis (i.e., spatial and channel). Coupling with mainstream window partitioning strategies, OSA can achieve superior performance with compelling computational budgets. Second, a multi-scale interaction scheme is proposed to mitigate sub-optimal ERF (i.e., premature saturation) in shallow models, which facilitates local propagation and meso-/global-scale interactions, rendering an omni-scale aggregation building block. Extensive experiments demonstrate that Omni-SR achieves record-high performance on lightweight super-resolution benchmarks (e.g., 26.95 dB@Urban100 $\times 4$ with only 792K parameters). Our code is available at \url{https://github.com/Francis0625/Omni-SR}.
## Keyword: ISP
### Introducing Construct Theory as a Standard Methodology for Inclusive AI Models
- **Authors:** Susanna Raj, Sudha Jamthe, Yashaswini Viswanath, Suresh Lokiah
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.09867
- **Pdf link:** https://arxiv.org/pdf/2304.09867
- **Abstract**
Construct theory in social psychology, developed by George Kelly are mental constructs to predict and anticipate events. Constructs are how humans interpret, curate, predict and validate data; information. AI today is biased because it is trained with a narrow construct as defined by the training data labels. Machine Learning algorithms for facial recognition discriminate against darker skin colors and in the ground breaking research papers (Buolamwini, Joy and Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. FAT (2018), the inclusion of phenotypic labeling is proposed as a viable solution. In Construct theory, phenotype is just one of the many subelements that make up the construct of a face. In this paper, we present 15 main elements of the construct of face, with 50 subelements and tested Google Cloud Vision API and Microsoft Cognitive Services API using FairFace dataset that currently has data for 7 races, genders and ages, and we retested against FairFace Plus dataset curated by us. Our results show exactly where they have gaps for inclusivity. Based on our experiment results, we propose that validated, inclusive constructs become industry standards for AI ML models going forward.
### A robust and interpretable deep learning framework for multi-modal registration via keypoints
- **Authors:** Alan Q. Wang, Evan M. Yu, Adrian V. Dalca, Mert R. Sabuncu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.09941
- **Pdf link:** https://arxiv.org/pdf/2304.09941
- **Abstract**
We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration often are not robust to large misalignments, are not interpretable, and do not incorporate the symmetries of the problem. In addition, most models produce only a single prediction at test-time. Our core insight which addresses these shortcomings is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed-form expression. We use this observation to drive the end-to-end learning of keypoints tailored for the registration task, and without knowledge of ground-truth keypoints. This framework not only leads to substantially more robust registration but also yields better interpretability, since the keypoints reveal which parts of the image are driving the final alignment. Moreover, KeyMorph can be designed to be equivariant under image translations and/or symmetric with respect to the input image ordering. Finally, we show how multiple deformation fields can be computed efficiently and in closed-form at test time corresponding to different transformation variants. We demonstrate the proposed framework in solving 3D affine and spline-based registration of multi-modal brain MRI scans. In particular, we show registration accuracy that surpasses current state-of-the-art methods, especially in the context of large displacements. Our code is available at https://github.com/evanmy/keymorph.
### Social Distance Detection Using Deep Learning And Risk Management System
- **Authors:** Dr. Sangeetha R.G, Jaya Aravindh V. V
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10259
- **Pdf link:** https://arxiv.org/pdf/2304.10259
- **Abstract**
An outbreak of the coronavirus disease which occurred three years later and it has hit the world again with many evolutions. The effects on the human race have already been profound. We can only safeguard ourselves against this pandemic by mandating a "Face Mask" also maintaining the "Social Distancing." The necessity of protective face masks in all gatherings is required by many civil institutions in India. As a result of the substantial human resource utilization, personally examining the whole country with a huge population like India, to determine whether the execution of mask wearing and social distance maintained is unfeasible. The COVID-19 Social Distancing Detector System is a single-stage detector that employs deep learning to integrate high-end semantic data to a CNN module in order to maintain social distances and simultaneously monitor violations within a specified region. By deploying current Security footages, CCTV cameras, and computer vision (CV), it will also be able to identify those who are experiencing the calamity of social separation. Providing tools for safety and security, this technology disposes the need for a labor-force based surveillance system, yet a manual governing body is still required to monitor, track, and inform on the violations that are committed. Any sort of infrastructure, including universities, hospitals, offices of the government, schools, and building sites, can employ the technology. Therefore, the risk management system created to report and analyze video streams along with the social distance detector system might help to ensure our protection and security as well as the security of our loved ones. Furthermore, we will discuss about deployment and improvement of the project overall.
### NTIRE 2023 Challenge on Light Field Image Super-Resolution: Dataset, Methods and Results
- **Authors:** Yingqian Wang, Longguang Wang, Zhengyu Liang, Jungang Yang, Radu Timofte, Yulan Guo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10415
- **Pdf link:** https://arxiv.org/pdf/2304.10415
- **Abstract**
In this report, we summarize the first NTIRE challenge on light field (LF) image super-resolution (SR), which aims at super-resolving LF images under the standard bicubic degradation with a magnification factor of 4. This challenge develops a new LF dataset called NTIRE-2023 for validation and test, and provides a toolbox called BasicLFSR to facilitate model development. Compared with single image SR, the major challenge of LF image SR lies in how to exploit complementary angular information from plenty of views with varying disparities. In total, 148 participants have registered the challenge, and 11 teams have successfully submitted results with PSNR scores higher than the baseline method LF-InterNet \cite{LF-InterNet}. These newly developed methods have set new state-of-the-art in LF image SR, e.g., the winning method achieves around 1 dB PSNR improvement over the existing state-of-the-art method DistgSSR \cite{DistgLF}. We report the solutions proposed by the participants, and summarize their common trends and useful tricks. We hope this challenge can stimulate future research and inspire new ideas in LF image SR.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Complex Mixer for MedMNIST Classification Decathlon
- **Authors:** Zhuoran Zheng, Xiuyi Jia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10054
- **Pdf link:** https://arxiv.org/pdf/2304.10054
- **Abstract**
With the development of the medical image field, researchers seek to develop a class of datasets to block the need for medical knowledge, such as \text{MedMNIST} (v2). MedMNIST (v2) includes a large number of small-sized (28 $\times$ 28 or 28 $\times$ 28 $\times$ 28) medical samples and the corresponding expert annotations (class label). The existing baseline model (Google AutoML Vision, ResNet-50+3D) can reach an average accuracy of over 70\% on MedMNIST (v2) datasets, which is comparable to the performance of expert decision-making. Nevertheless, we note that there are two insurmountable obstacles to modeling on MedMNIST (v2): 1) the raw images are cropped to low scales may cause effective recognition information to be dropped and the classifier to have difficulty in tracing accurate decision boundaries; 2) the labelers' subjective insight may cause many uncertainties in the label space. To address these issues, we develop a Complex Mixer (C-Mixer) with a pre-training framework to alleviate the problem of insufficient information and uncertainty in the label space by introducing an incentive imaginary matrix and a self-supervised scheme with random masking. Our method (incentive learning and self-supervised learning with masking) shows surprising potential on both the standard MedMNIST (v2) dataset, the customized weakly supervised datasets, and other image enhancement tasks.
### Omni Aggregation Networks for Lightweight Image Super-Resolution
- **Authors:** Hang Wang, Xuanhong Chen, Bingbing Ni, Yutian Liu, Jinfan Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10244
- **Pdf link:** https://arxiv.org/pdf/2304.10244
- **Abstract**
While lightweight ViT framework has made tremendous progress in image super-resolution, its uni-dimensional self-attention modeling, as well as homogeneous aggregation scheme, limit its effective receptive field (ERF) to include more comprehensive interactions from both spatial and channel dimensions. To tackle these drawbacks, this work proposes two enhanced components under a new Omni-SR architecture. First, an Omni Self-Attention (OSA) block is proposed based on dense interaction principle, which can simultaneously model pixel-interaction from both spatial and channel dimensions, mining the potential correlations across omni-axis (i.e., spatial and channel). Coupling with mainstream window partitioning strategies, OSA can achieve superior performance with compelling computational budgets. Second, a multi-scale interaction scheme is proposed to mitigate sub-optimal ERF (i.e., premature saturation) in shallow models, which facilitates local propagation and meso-/global-scale interactions, rendering an omni-scale aggregation building block. Extensive experiments demonstrate that Omni-SR achieves record-high performance on lightweight super-resolution benchmarks (e.g., 26.95 dB@Urban100 $\times 4$ with only 792K parameters). Our code is available at \url{https://github.com/Francis0625/Omni-SR}.
### Collaborative Diffusion for Multi-Modal Face Generation and Editing
- **Authors:** Ziqi Huang, Kelvin C.K. Chan, Yuming Jiang, Ziwei Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10530
- **Pdf link:** https://arxiv.org/pdf/2304.10530
- **Abstract**
Diffusion models arise as a powerful generative tool recently. Despite the great progress, existing diffusion models mainly focus on uni-modal control, i.e., the diffusion process is driven by only one modality of condition. To further unleash the users' creativity, it is desirable for the model to be controllable by multiple modalities simultaneously, e.g., generating and editing faces by describing the age (text-driven) while drawing the face shape (mask-driven). In this work, we present Collaborative Diffusion, where pre-trained uni-modal diffusion models collaborate to achieve multi-modal face generation and editing without re-training. Our key insight is that diffusion models driven by different modalities are inherently complementary regarding the latent denoising steps, where bilateral connections can be established upon. Specifically, we propose dynamic diffuser, a meta-network that adaptively hallucinates multi-modal denoising steps by predicting the spatial-temporal influence functions for each pre-trained uni-modal model. Collaborative Diffusion not only collaborates generation capabilities from uni-modal diffusion models, but also integrates multiple uni-modal manipulations to perform multi-modal editing. Extensive qualitative and quantitative experiments demonstrate the superiority of our framework in both image quality and condition consistency.
## Keyword: raw image
### Complex Mixer for MedMNIST Classification Decathlon
- **Authors:** Zhuoran Zheng, Xiuyi Jia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10054
- **Pdf link:** https://arxiv.org/pdf/2304.10054
- **Abstract**
With the development of the medical image field, researchers seek to develop a class of datasets to block the need for medical knowledge, such as \text{MedMNIST} (v2). MedMNIST (v2) includes a large number of small-sized (28 $\times$ 28 or 28 $\times$ 28 $\times$ 28) medical samples and the corresponding expert annotations (class label). The existing baseline model (Google AutoML Vision, ResNet-50+3D) can reach an average accuracy of over 70\% on MedMNIST (v2) datasets, which is comparable to the performance of expert decision-making. Nevertheless, we note that there are two insurmountable obstacles to modeling on MedMNIST (v2): 1) the raw images are cropped to low scales may cause effective recognition information to be dropped and the classifier to have difficulty in tracing accurate decision boundaries; 2) the labelers' subjective insight may cause many uncertainties in the label space. To address these issues, we develop a Complex Mixer (C-Mixer) with a pre-training framework to alleviate the problem of insufficient information and uncertainty in the label space by introducing an incentive imaginary matrix and a self-supervised scheme with random masking. Our method (incentive learning and self-supervised learning with masking) shows surprising potential on both the standard MedMNIST (v2) dataset, the customized weakly supervised datasets, and other image enhancement tasks.
| 2.0 | New submissions for Fri, 21 Apr 23 - ## Keyword: events
### Introducing Construct Theory as a Standard Methodology for Inclusive AI Models
- **Authors:** Susanna Raj, Sudha Jamthe, Yashaswini Viswanath, Suresh Lokiah
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.09867
- **Pdf link:** https://arxiv.org/pdf/2304.09867
- **Abstract**
Construct theory in social psychology, developed by George Kelly are mental constructs to predict and anticipate events. Constructs are how humans interpret, curate, predict and validate data; information. AI today is biased because it is trained with a narrow construct as defined by the training data labels. Machine Learning algorithms for facial recognition discriminate against darker skin colors and in the ground breaking research papers (Buolamwini, Joy and Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. FAT (2018), the inclusion of phenotypic labeling is proposed as a viable solution. In Construct theory, phenotype is just one of the many subelements that make up the construct of a face. In this paper, we present 15 main elements of the construct of face, with 50 subelements and tested Google Cloud Vision API and Microsoft Cognitive Services API using FairFace dataset that currently has data for 7 races, genders and ages, and we retested against FairFace Plus dataset curated by us. Our results show exactly where they have gaps for inclusivity. Based on our experiment results, we propose that validated, inclusive constructs become industry standards for AI ML models going forward.
### Spiking-Fer: Spiking Neural Network for Facial Expression Recognition With Event Cameras
- **Authors:** Sami Barchid, Benjamin Allaert, Amel Aissaoui, José Mennesson, Chaabane Djéraba
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.10211
- **Pdf link:** https://arxiv.org/pdf/2304.10211
- **Abstract**
Facial Expression Recognition (FER) is an active research domain that has shown great progress recently, notably thanks to the use of large deep learning models. However, such approaches are particularly energy intensive, which makes their deployment difficult for edge devices. To address this issue, Spiking Neural Networks (SNNs) coupled with event cameras are a promising alternative, capable of processing sparse and asynchronous events with lower energy consumption. In this paper, we establish the first use of event cameras for FER, named "Event-based FER", and propose the first related benchmarks by converting popular video FER datasets to event streams. To deal with this new task, we propose "Spiking-FER", a deep convolutional SNN model, and compare it against a similar Artificial Neural Network (ANN). Experiments show that the proposed approach achieves comparable performance to the ANN architecture, while consuming less energy by orders of magnitude (up to 65.39x). In addition, an experimental study of various event-based data augmentation techniques is performed to provide insights into the efficient transformations specific to event-based FER.
## Keyword: event camera
### Spiking-Fer: Spiking Neural Network for Facial Expression Recognition With Event Cameras
- **Authors:** Sami Barchid, Benjamin Allaert, Amel Aissaoui, José Mennesson, Chaabane Djéraba
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.10211
- **Pdf link:** https://arxiv.org/pdf/2304.10211
- **Abstract**
Facial Expression Recognition (FER) is an active research domain that has shown great progress recently, notably thanks to the use of large deep learning models. However, such approaches are particularly energy intensive, which makes their deployment difficult for edge devices. To address this issue, Spiking Neural Networks (SNNs) coupled with event cameras are a promising alternative, capable of processing sparse and asynchronous events with lower energy consumption. In this paper, we establish the first use of event cameras for FER, named "Event-based FER", and propose the first related benchmarks by converting popular video FER datasets to event streams. To deal with this new task, we propose "Spiking-FER", a deep convolutional SNN model, and compare it against a similar Artificial Neural Network (ANN). Experiments show that the proposed approach achieves comparable performance to the ANN architecture, while consuming less energy by orders of magnitude (up to 65.39x). In addition, an experimental study of various event-based data augmentation techniques is performed to provide insights into the efficient transformations specific to event-based FER.
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
### Omni Aggregation Networks for Lightweight Image Super-Resolution
- **Authors:** Hang Wang, Xuanhong Chen, Bingbing Ni, Yutian Liu, Jinfan Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10244
- **Pdf link:** https://arxiv.org/pdf/2304.10244
- **Abstract**
While lightweight ViT framework has made tremendous progress in image super-resolution, its uni-dimensional self-attention modeling, as well as homogeneous aggregation scheme, limit its effective receptive field (ERF) to include more comprehensive interactions from both spatial and channel dimensions. To tackle these drawbacks, this work proposes two enhanced components under a new Omni-SR architecture. First, an Omni Self-Attention (OSA) block is proposed based on dense interaction principle, which can simultaneously model pixel-interaction from both spatial and channel dimensions, mining the potential correlations across omni-axis (i.e., spatial and channel). Coupling with mainstream window partitioning strategies, OSA can achieve superior performance with compelling computational budgets. Second, a multi-scale interaction scheme is proposed to mitigate sub-optimal ERF (i.e., premature saturation) in shallow models, which facilitates local propagation and meso-/global-scale interactions, rendering an omni-scale aggregation building block. Extensive experiments demonstrate that Omni-SR achieves record-high performance on lightweight super-resolution benchmarks (e.g., 26.95 dB@Urban100 $\times 4$ with only 792K parameters). Our code is available at \url{https://github.com/Francis0625/Omni-SR}.
## Keyword: ISP
### Introducing Construct Theory as a Standard Methodology for Inclusive AI Models
- **Authors:** Susanna Raj, Sudha Jamthe, Yashaswini Viswanath, Suresh Lokiah
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2304.09867
- **Pdf link:** https://arxiv.org/pdf/2304.09867
- **Abstract**
Construct theory in social psychology, developed by George Kelly are mental constructs to predict and anticipate events. Constructs are how humans interpret, curate, predict and validate data; information. AI today is biased because it is trained with a narrow construct as defined by the training data labels. Machine Learning algorithms for facial recognition discriminate against darker skin colors and in the ground breaking research papers (Buolamwini, Joy and Timnit Gebru. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. FAT (2018), the inclusion of phenotypic labeling is proposed as a viable solution. In Construct theory, phenotype is just one of the many subelements that make up the construct of a face. In this paper, we present 15 main elements of the construct of face, with 50 subelements and tested Google Cloud Vision API and Microsoft Cognitive Services API using FairFace dataset that currently has data for 7 races, genders and ages, and we retested against FairFace Plus dataset curated by us. Our results show exactly where they have gaps for inclusivity. Based on our experiment results, we propose that validated, inclusive constructs become industry standards for AI ML models going forward.
### A robust and interpretable deep learning framework for multi-modal registration via keypoints
- **Authors:** Alan Q. Wang, Evan M. Yu, Adrian V. Dalca, Mert R. Sabuncu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.09941
- **Pdf link:** https://arxiv.org/pdf/2304.09941
- **Abstract**
We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration often are not robust to large misalignments, are not interpretable, and do not incorporate the symmetries of the problem. In addition, most models produce only a single prediction at test-time. Our core insight which addresses these shortcomings is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed-form expression. We use this observation to drive the end-to-end learning of keypoints tailored for the registration task, and without knowledge of ground-truth keypoints. This framework not only leads to substantially more robust registration but also yields better interpretability, since the keypoints reveal which parts of the image are driving the final alignment. Moreover, KeyMorph can be designed to be equivariant under image translations and/or symmetric with respect to the input image ordering. Finally, we show how multiple deformation fields can be computed efficiently and in closed-form at test time corresponding to different transformation variants. We demonstrate the proposed framework in solving 3D affine and spline-based registration of multi-modal brain MRI scans. In particular, we show registration accuracy that surpasses current state-of-the-art methods, especially in the context of large displacements. Our code is available at https://github.com/evanmy/keymorph.
### Social Distance Detection Using Deep Learning And Risk Management System
- **Authors:** Dr. Sangeetha R.G, Jaya Aravindh V. V
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10259
- **Pdf link:** https://arxiv.org/pdf/2304.10259
- **Abstract**
An outbreak of the coronavirus disease which occurred three years later and it has hit the world again with many evolutions. The effects on the human race have already been profound. We can only safeguard ourselves against this pandemic by mandating a "Face Mask" also maintaining the "Social Distancing." The necessity of protective face masks in all gatherings is required by many civil institutions in India. As a result of the substantial human resource utilization, personally examining the whole country with a huge population like India, to determine whether the execution of mask wearing and social distance maintained is unfeasible. The COVID-19 Social Distancing Detector System is a single-stage detector that employs deep learning to integrate high-end semantic data to a CNN module in order to maintain social distances and simultaneously monitor violations within a specified region. By deploying current Security footages, CCTV cameras, and computer vision (CV), it will also be able to identify those who are experiencing the calamity of social separation. Providing tools for safety and security, this technology disposes the need for a labor-force based surveillance system, yet a manual governing body is still required to monitor, track, and inform on the violations that are committed. Any sort of infrastructure, including universities, hospitals, offices of the government, schools, and building sites, can employ the technology. Therefore, the risk management system created to report and analyze video streams along with the social distance detector system might help to ensure our protection and security as well as the security of our loved ones. Furthermore, we will discuss about deployment and improvement of the project overall.
### NTIRE 2023 Challenge on Light Field Image Super-Resolution: Dataset, Methods and Results
- **Authors:** Yingqian Wang, Longguang Wang, Zhengyu Liang, Jungang Yang, Radu Timofte, Yulan Guo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10415
- **Pdf link:** https://arxiv.org/pdf/2304.10415
- **Abstract**
In this report, we summarize the first NTIRE challenge on light field (LF) image super-resolution (SR), which aims at super-resolving LF images under the standard bicubic degradation with a magnification factor of 4. This challenge develops a new LF dataset called NTIRE-2023 for validation and test, and provides a toolbox called BasicLFSR to facilitate model development. Compared with single image SR, the major challenge of LF image SR lies in how to exploit complementary angular information from plenty of views with varying disparities. In total, 148 participants have registered the challenge, and 11 teams have successfully submitted results with PSNR scores higher than the baseline method LF-InterNet \cite{LF-InterNet}. These newly developed methods have set new state-of-the-art in LF image SR, e.g., the winning method achieves around 1 dB PSNR improvement over the existing state-of-the-art method DistgSSR \cite{DistgLF}. We report the solutions proposed by the participants, and summarize their common trends and useful tricks. We hope this challenge can stimulate future research and inspire new ideas in LF image SR.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
There is no result
## Keyword: RAW
### Complex Mixer for MedMNIST Classification Decathlon
- **Authors:** Zhuoran Zheng, Xiuyi Jia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10054
- **Pdf link:** https://arxiv.org/pdf/2304.10054
- **Abstract**
With the development of the medical image field, researchers seek to develop a class of datasets to block the need for medical knowledge, such as \text{MedMNIST} (v2). MedMNIST (v2) includes a large number of small-sized (28 $\times$ 28 or 28 $\times$ 28 $\times$ 28) medical samples and the corresponding expert annotations (class label). The existing baseline model (Google AutoML Vision, ResNet-50+3D) can reach an average accuracy of over 70\% on MedMNIST (v2) datasets, which is comparable to the performance of expert decision-making. Nevertheless, we note that there are two insurmountable obstacles to modeling on MedMNIST (v2): 1) the raw images are cropped to low scales may cause effective recognition information to be dropped and the classifier to have difficulty in tracing accurate decision boundaries; 2) the labelers' subjective insight may cause many uncertainties in the label space. To address these issues, we develop a Complex Mixer (C-Mixer) with a pre-training framework to alleviate the problem of insufficient information and uncertainty in the label space by introducing an incentive imaginary matrix and a self-supervised scheme with random masking. Our method (incentive learning and self-supervised learning with masking) shows surprising potential on both the standard MedMNIST (v2) dataset, the customized weakly supervised datasets, and other image enhancement tasks.
### Omni Aggregation Networks for Lightweight Image Super-Resolution
- **Authors:** Hang Wang, Xuanhong Chen, Bingbing Ni, Yutian Liu, Jinfan Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10244
- **Pdf link:** https://arxiv.org/pdf/2304.10244
- **Abstract**
While lightweight ViT framework has made tremendous progress in image super-resolution, its uni-dimensional self-attention modeling, as well as homogeneous aggregation scheme, limit its effective receptive field (ERF) to include more comprehensive interactions from both spatial and channel dimensions. To tackle these drawbacks, this work proposes two enhanced components under a new Omni-SR architecture. First, an Omni Self-Attention (OSA) block is proposed based on dense interaction principle, which can simultaneously model pixel-interaction from both spatial and channel dimensions, mining the potential correlations across omni-axis (i.e., spatial and channel). Coupling with mainstream window partitioning strategies, OSA can achieve superior performance with compelling computational budgets. Second, a multi-scale interaction scheme is proposed to mitigate sub-optimal ERF (i.e., premature saturation) in shallow models, which facilitates local propagation and meso-/global-scale interactions, rendering an omni-scale aggregation building block. Extensive experiments demonstrate that Omni-SR achieves record-high performance on lightweight super-resolution benchmarks (e.g., 26.95 dB@Urban100 $\times 4$ with only 792K parameters). Our code is available at \url{https://github.com/Francis0625/Omni-SR}.
### Collaborative Diffusion for Multi-Modal Face Generation and Editing
- **Authors:** Ziqi Huang, Kelvin C.K. Chan, Yuming Jiang, Ziwei Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10530
- **Pdf link:** https://arxiv.org/pdf/2304.10530
- **Abstract**
Diffusion models arise as a powerful generative tool recently. Despite the great progress, existing diffusion models mainly focus on uni-modal control, i.e., the diffusion process is driven by only one modality of condition. To further unleash the users' creativity, it is desirable for the model to be controllable by multiple modalities simultaneously, e.g., generating and editing faces by describing the age (text-driven) while drawing the face shape (mask-driven). In this work, we present Collaborative Diffusion, where pre-trained uni-modal diffusion models collaborate to achieve multi-modal face generation and editing without re-training. Our key insight is that diffusion models driven by different modalities are inherently complementary regarding the latent denoising steps, where bilateral connections can be established upon. Specifically, we propose dynamic diffuser, a meta-network that adaptively hallucinates multi-modal denoising steps by predicting the spatial-temporal influence functions for each pre-trained uni-modal model. Collaborative Diffusion not only collaborates generation capabilities from uni-modal diffusion models, but also integrates multiple uni-modal manipulations to perform multi-modal editing. Extensive qualitative and quantitative experiments demonstrate the superiority of our framework in both image quality and condition consistency.
## Keyword: raw image
### Complex Mixer for MedMNIST Classification Decathlon
- **Authors:** Zhuoran Zheng, Xiuyi Jia
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2304.10054
- **Pdf link:** https://arxiv.org/pdf/2304.10054
- **Abstract**
With the development of the medical image field, researchers seek to develop a class of datasets to block the need for medical knowledge, such as \text{MedMNIST} (v2). MedMNIST (v2) includes a large number of small-sized (28 $\times$ 28 or 28 $\times$ 28 $\times$ 28) medical samples and the corresponding expert annotations (class label). The existing baseline model (Google AutoML Vision, ResNet-50+3D) can reach an average accuracy of over 70\% on MedMNIST (v2) datasets, which is comparable to the performance of expert decision-making. Nevertheless, we note that there are two insurmountable obstacles to modeling on MedMNIST (v2): 1) the raw images are cropped to low scales may cause effective recognition information to be dropped and the classifier to have difficulty in tracing accurate decision boundaries; 2) the labelers' subjective insight may cause many uncertainties in the label space. To address these issues, we develop a Complex Mixer (C-Mixer) with a pre-training framework to alleviate the problem of insufficient information and uncertainty in the label space by introducing an incentive imaginary matrix and a self-supervised scheme with random masking. Our method (incentive learning and self-supervised learning with masking) shows surprising potential on both the standard MedMNIST (v2) dataset, the customized weakly supervised datasets, and other image enhancement tasks.
| process | new submissions for fri apr keyword events introducing construct theory as a standard methodology for inclusive ai models authors susanna raj sudha jamthe yashaswini viswanath suresh lokiah subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract construct theory in social psychology developed by george kelly are mental constructs to predict and anticipate events constructs are how humans interpret curate predict and validate data information ai today is biased because it is trained with a narrow construct as defined by the training data labels machine learning algorithms for facial recognition discriminate against darker skin colors and in the ground breaking research papers buolamwini joy and timnit gebru gender shades intersectional accuracy disparities in commercial gender classification fat the inclusion of phenotypic labeling is proposed as a viable solution in construct theory phenotype is just one of the many subelements that make up the construct of a face in this paper we present main elements of the construct of face with subelements and tested google cloud vision api and microsoft cognitive services api using fairface dataset that currently has data for races genders and ages and we retested against fairface plus dataset curated by us our results show exactly where they have gaps for inclusivity based on our experiment results we propose that validated inclusive constructs become industry standards for ai ml models going forward spiking fer spiking neural network for facial expression recognition with event cameras authors sami barchid benjamin allaert amel aissaoui josé mennesson chaabane djéraba subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract facial expression recognition fer is an active research domain that has shown great progress recently notably thanks to the use of large deep learning models however such approaches are particularly energy intensive which makes their deployment difficult for edge devices to address this issue spiking neural networks snns coupled with event cameras are a promising alternative capable of processing sparse and asynchronous events with lower energy consumption in this paper we establish the first use of event cameras for fer named event based fer and propose the first related benchmarks by converting popular video fer datasets to event streams to deal with this new task we propose spiking fer a deep convolutional snn model and compare it against a similar artificial neural network ann experiments show that the proposed approach achieves comparable performance to the ann architecture while consuming less energy by orders of magnitude up to in addition an experimental study of various event based data augmentation techniques is performed to provide insights into the efficient transformations specific to event based fer keyword event camera spiking fer spiking neural network for facial expression recognition with event cameras authors sami barchid benjamin allaert amel aissaoui josé mennesson chaabane djéraba subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract facial expression recognition fer is an active research domain that has shown great progress recently notably thanks to the use of large deep learning models however such approaches are particularly energy intensive which makes their deployment difficult for edge devices to address this issue spiking neural networks snns coupled with event cameras are a promising alternative capable of processing sparse and asynchronous events with lower energy consumption in this paper we establish the first use of event cameras for fer named event based fer and propose the first related benchmarks by converting popular video fer datasets to event streams to deal with this new task we propose spiking fer a deep convolutional snn model and compare it against a similar artificial neural network ann experiments show that the proposed approach achieves comparable performance to the ann architecture while consuming less energy by orders of magnitude up to in addition an experimental study of various event based data augmentation techniques is performed to provide insights into the efficient transformations specific to event based fer keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb omni aggregation networks for lightweight image super resolution authors hang wang xuanhong chen bingbing ni yutian liu jinfan liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract while lightweight vit framework has made tremendous progress in image super resolution its uni dimensional self attention modeling as well as homogeneous aggregation scheme limit its effective receptive field erf to include more comprehensive interactions from both spatial and channel dimensions to tackle these drawbacks this work proposes two enhanced components under a new omni sr architecture first an omni self attention osa block is proposed based on dense interaction principle which can simultaneously model pixel interaction from both spatial and channel dimensions mining the potential correlations across omni axis i e spatial and channel coupling with mainstream window partitioning strategies osa can achieve superior performance with compelling computational budgets second a multi scale interaction scheme is proposed to mitigate sub optimal erf i e premature saturation in shallow models which facilitates local propagation and meso global scale interactions rendering an omni scale aggregation building block extensive experiments demonstrate that omni sr achieves record high performance on lightweight super resolution benchmarks e g db times with only parameters our code is available at url keyword isp introducing construct theory as a standard methodology for inclusive ai models authors susanna raj sudha jamthe yashaswini viswanath suresh lokiah subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract construct theory in social psychology developed by george kelly are mental constructs to predict and anticipate events constructs are how humans interpret curate predict and validate data information ai today is biased because it is trained with a narrow construct as defined by the training data labels machine learning algorithms for facial recognition discriminate against darker skin colors and in the ground breaking research papers buolamwini joy and timnit gebru gender shades intersectional accuracy disparities in commercial gender classification fat the inclusion of phenotypic labeling is proposed as a viable solution in construct theory phenotype is just one of the many subelements that make up the construct of a face in this paper we present main elements of the construct of face with subelements and tested google cloud vision api and microsoft cognitive services api using fairface dataset that currently has data for races genders and ages and we retested against fairface plus dataset curated by us our results show exactly where they have gaps for inclusivity based on our experiment results we propose that validated inclusive constructs become industry standards for ai ml models going forward a robust and interpretable deep learning framework for multi modal registration via keypoints authors alan q wang evan m yu adrian v dalca mert r sabuncu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we present keymorph a deep learning based image registration framework that relies on automatically detecting corresponding keypoints state of the art deep learning methods for registration often are not robust to large misalignments are not interpretable and do not incorporate the symmetries of the problem in addition most models produce only a single prediction at test time our core insight which addresses these shortcomings is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed form expression we use this observation to drive the end to end learning of keypoints tailored for the registration task and without knowledge of ground truth keypoints this framework not only leads to substantially more robust registration but also yields better interpretability since the keypoints reveal which parts of the image are driving the final alignment moreover keymorph can be designed to be equivariant under image translations and or symmetric with respect to the input image ordering finally we show how multiple deformation fields can be computed efficiently and in closed form at test time corresponding to different transformation variants we demonstrate the proposed framework in solving affine and spline based registration of multi modal brain mri scans in particular we show registration accuracy that surpasses current state of the art methods especially in the context of large displacements our code is available at social distance detection using deep learning and risk management system authors dr sangeetha r g jaya aravindh v v subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract an outbreak of the coronavirus disease which occurred three years later and it has hit the world again with many evolutions the effects on the human race have already been profound we can only safeguard ourselves against this pandemic by mandating a face mask also maintaining the social distancing the necessity of protective face masks in all gatherings is required by many civil institutions in india as a result of the substantial human resource utilization personally examining the whole country with a huge population like india to determine whether the execution of mask wearing and social distance maintained is unfeasible the covid social distancing detector system is a single stage detector that employs deep learning to integrate high end semantic data to a cnn module in order to maintain social distances and simultaneously monitor violations within a specified region by deploying current security footages cctv cameras and computer vision cv it will also be able to identify those who are experiencing the calamity of social separation providing tools for safety and security this technology disposes the need for a labor force based surveillance system yet a manual governing body is still required to monitor track and inform on the violations that are committed any sort of infrastructure including universities hospitals offices of the government schools and building sites can employ the technology therefore the risk management system created to report and analyze video streams along with the social distance detector system might help to ensure our protection and security as well as the security of our loved ones furthermore we will discuss about deployment and improvement of the project overall ntire challenge on light field image super resolution dataset methods and results authors yingqian wang longguang wang zhengyu liang jungang yang radu timofte yulan guo subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in this report we summarize the first ntire challenge on light field lf image super resolution sr which aims at super resolving lf images under the standard bicubic degradation with a magnification factor of this challenge develops a new lf dataset called ntire for validation and test and provides a toolbox called basiclfsr to facilitate model development compared with single image sr the major challenge of lf image sr lies in how to exploit complementary angular information from plenty of views with varying disparities in total participants have registered the challenge and teams have successfully submitted results with psnr scores higher than the baseline method lf internet cite lf internet these newly developed methods have set new state of the art in lf image sr e g the winning method achieves around db psnr improvement over the existing state of the art method distgssr cite distglf we report the solutions proposed by the participants and summarize their common trends and useful tricks we hope this challenge can stimulate future research and inspire new ideas in lf image sr keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw complex mixer for medmnist classification decathlon authors zhuoran zheng xiuyi jia subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract with the development of the medical image field researchers seek to develop a class of datasets to block the need for medical knowledge such as text medmnist medmnist includes a large number of small sized times or times times medical samples and the corresponding expert annotations class label the existing baseline model google automl vision resnet can reach an average accuracy of over on medmnist datasets which is comparable to the performance of expert decision making nevertheless we note that there are two insurmountable obstacles to modeling on medmnist the raw images are cropped to low scales may cause effective recognition information to be dropped and the classifier to have difficulty in tracing accurate decision boundaries the labelers subjective insight may cause many uncertainties in the label space to address these issues we develop a complex mixer c mixer with a pre training framework to alleviate the problem of insufficient information and uncertainty in the label space by introducing an incentive imaginary matrix and a self supervised scheme with random masking our method incentive learning and self supervised learning with masking shows surprising potential on both the standard medmnist dataset the customized weakly supervised datasets and other image enhancement tasks omni aggregation networks for lightweight image super resolution authors hang wang xuanhong chen bingbing ni yutian liu jinfan liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract while lightweight vit framework has made tremendous progress in image super resolution its uni dimensional self attention modeling as well as homogeneous aggregation scheme limit its effective receptive field erf to include more comprehensive interactions from both spatial and channel dimensions to tackle these drawbacks this work proposes two enhanced components under a new omni sr architecture first an omni self attention osa block is proposed based on dense interaction principle which can simultaneously model pixel interaction from both spatial and channel dimensions mining the potential correlations across omni axis i e spatial and channel coupling with mainstream window partitioning strategies osa can achieve superior performance with compelling computational budgets second a multi scale interaction scheme is proposed to mitigate sub optimal erf i e premature saturation in shallow models which facilitates local propagation and meso global scale interactions rendering an omni scale aggregation building block extensive experiments demonstrate that omni sr achieves record high performance on lightweight super resolution benchmarks e g db times with only parameters our code is available at url collaborative diffusion for multi modal face generation and editing authors ziqi huang kelvin c k chan yuming jiang ziwei liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract diffusion models arise as a powerful generative tool recently despite the great progress existing diffusion models mainly focus on uni modal control i e the diffusion process is driven by only one modality of condition to further unleash the users creativity it is desirable for the model to be controllable by multiple modalities simultaneously e g generating and editing faces by describing the age text driven while drawing the face shape mask driven in this work we present collaborative diffusion where pre trained uni modal diffusion models collaborate to achieve multi modal face generation and editing without re training our key insight is that diffusion models driven by different modalities are inherently complementary regarding the latent denoising steps where bilateral connections can be established upon specifically we propose dynamic diffuser a meta network that adaptively hallucinates multi modal denoising steps by predicting the spatial temporal influence functions for each pre trained uni modal model collaborative diffusion not only collaborates generation capabilities from uni modal diffusion models but also integrates multiple uni modal manipulations to perform multi modal editing extensive qualitative and quantitative experiments demonstrate the superiority of our framework in both image quality and condition consistency keyword raw image complex mixer for medmnist classification decathlon authors zhuoran zheng xiuyi jia subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract with the development of the medical image field researchers seek to develop a class of datasets to block the need for medical knowledge such as text medmnist medmnist includes a large number of small sized times or times times medical samples and the corresponding expert annotations class label the existing baseline model google automl vision resnet can reach an average accuracy of over on medmnist datasets which is comparable to the performance of expert decision making nevertheless we note that there are two insurmountable obstacles to modeling on medmnist the raw images are cropped to low scales may cause effective recognition information to be dropped and the classifier to have difficulty in tracing accurate decision boundaries the labelers subjective insight may cause many uncertainties in the label space to address these issues we develop a complex mixer c mixer with a pre training framework to alleviate the problem of insufficient information and uncertainty in the label space by introducing an incentive imaginary matrix and a self supervised scheme with random masking our method incentive learning and self supervised learning with masking shows surprising potential on both the standard medmnist dataset the customized weakly supervised datasets and other image enhancement tasks | 1 |
10,605 | 13,431,565,123 | IssuesEvent | 2020-09-07 07:12:12 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | Missing parent "GO:0140423 JSON effector-mediated suppression of pattern-triggered immunity' | high priority missing parentage multi-species process |
I'm back on PHI base today.
This term:
GO:0140423 effector-mediated suppression of pattern-triggered immunity
still doesn't seem to be a descendant of
GO:0140404 effector-mediated modulation of host innate immune response by symbiont
I'm sure we keep adding this?
| 1.0 | Missing parent "GO:0140423 JSON effector-mediated suppression of pattern-triggered immunity' -
I'm back on PHI base today.
This term:
GO:0140423 effector-mediated suppression of pattern-triggered immunity
still doesn't seem to be a descendant of
GO:0140404 effector-mediated modulation of host innate immune response by symbiont
I'm sure we keep adding this?
| process | missing parent go json effector mediated suppression of pattern triggered immunity i m back on phi base today this term go effector mediated suppression of pattern triggered immunity still doesn t seem to be a descendant of go effector mediated modulation of host innate immune response by symbiont i m sure we keep adding this | 1 |
18,974 | 24,962,395,936 | IssuesEvent | 2022-11-01 16:31:43 | OpenDataScotland/the_od_bods | https://api.github.com/repos/OpenDataScotland/the_od_bods | closed | Add name conversion for publisher "Na h-Eileanan an Iar" to "Comhairle nan Eilean Siar" | bug good first issue data processing | **Describe the bug**
For some reason, datasets by the Local Authority "Comhairle nan Eilean Siar" are being published under its constituency name "Na h-Eileanan an Iar" instead. The LA name on JKAN orgs is correct, so not going to change that, but we need to treat the incoming feed so it matches the org name.
Sources:
https://archive2021.parliament.scot/visitandlearn/94334.aspx and
https://en.wikipedia.org/wiki/Comhairle_nan_Eilean_Siar
There is already an existing function in merge_data.py for renaming publishers, this should just be an additional instance to it.
**To Reproduce**
N/A
**Expected behavior**
Datasets currently listed under publisher "Na h-Eileanan an Iar" should display "Comhairle nan Eilean Siar" instead.
These same datasets should appear listed under the Comhairle nan Eilean Siar [organisation page](https://opendata.scot/organizations/comhairle_nan_eilean_siar/)
The [Local Authority Coverage page](https://opendata.scot/analytics/local-authority-coverage/) should show at least 1 count for Comhairle nan Eilean Siar and the total council count at the top should read 31 instead of 30.
**Screenshots**
[Example] (https://opendata.scot/datasets/na+h-eileanan+an+iar-community+council+boundaries+-+na+h-eileanan+an+iar/)
**Hardware and software used**
N/A
**Additional context**
None
| 1.0 | Add name conversion for publisher "Na h-Eileanan an Iar" to "Comhairle nan Eilean Siar" - **Describe the bug**
For some reason, datasets by the Local Authority "Comhairle nan Eilean Siar" are being published under its constituency name "Na h-Eileanan an Iar" instead. The LA name on JKAN orgs is correct, so not going to change that, but we need to treat the incoming feed so it matches the org name.
Sources:
https://archive2021.parliament.scot/visitandlearn/94334.aspx and
https://en.wikipedia.org/wiki/Comhairle_nan_Eilean_Siar
There is already an existing function in merge_data.py for renaming publishers, this should just be an additional instance to it.
**To Reproduce**
N/A
**Expected behavior**
Datasets currently listed under publisher "Na h-Eileanan an Iar" should display "Comhairle nan Eilean Siar" instead.
These same datasets should appear listed under the Comhairle nan Eilean Siar [organisation page](https://opendata.scot/organizations/comhairle_nan_eilean_siar/)
The [Local Authority Coverage page](https://opendata.scot/analytics/local-authority-coverage/) should show at least 1 count for Comhairle nan Eilean Siar and the total council count at the top should read 31 instead of 30.
**Screenshots**
[Example] (https://opendata.scot/datasets/na+h-eileanan+an+iar-community+council+boundaries+-+na+h-eileanan+an+iar/)
**Hardware and software used**
N/A
**Additional context**
None
| process | add name conversion for publisher na h eileanan an iar to comhairle nan eilean siar describe the bug for some reason datasets by the local authority comhairle nan eilean siar are being published under its constituency name na h eileanan an iar instead the la name on jkan orgs is correct so not going to change that but we need to treat the incoming feed so it matches the org name sources and there is already an existing function in merge data py for renaming publishers this should just be an additional instance to it to reproduce n a expected behavior datasets currently listed under publisher na h eileanan an iar should display comhairle nan eilean siar instead these same datasets should appear listed under the comhairle nan eilean siar the should show at least count for comhairle nan eilean siar and the total council count at the top should read instead of screenshots hardware and software used n a additional context none | 1 |
677 | 3,147,522,960 | IssuesEvent | 2015-09-15 08:39:52 | e-government-ua/i | https://api.github.com/repos/e-government-ua/i | closed | На дашборде в разделе "Звіни" блоке "Експорт" добавить такой-же комбобокс с доступными БП, как и в блоке с "Статистика" и для каждого БП брать свой набор параметров, беря их из патернов /pattern/export | active bug hi priority In process of testing test | 1) сейчас набор параметров захардкоден жестко:
exportLink: function (exportParams) {
var data = {
'sID_BP': 'dnepr_spravka_o_doxodax',
'sID_State_BP': 'usertask1',
'sDateAt': exportParams.from,
'sDateTo': exportParams.to,
'saFields': '${nID_Task};${sDateCreate};${area};${bankIdinn};;;${bankIdlastName} ${bankIdfirstName} ${bankIdmiddleName};4;${aim};${date_start1};${date_stop1};${place_living};${bankIdPassport};1;${phone};${email}',
'sID_Codepage': 'win1251',
'nASCI_Spliter': '18',
'sDateCreateFormat': 'dd.MM.yyyy HH:mm:ss',
'sFileName': 'dohody.dat'
};
, а нужно чтоб он брался из файла по пути
/pattern/export/*
где название файла = id бизнеспроцесса
за исключением параметров:
'sID_BP': 'dnepr_spravka_o_doxodax',
'sDateAt': exportParams.from,
'sDateTo': exportParams.to,
2)
контент паттерна должен быть в виде обычных пропертей:
sID_BP = dnepr_spravka_o_doxodax
sID_State_BP = usertask1
saFields = ${nID_Task};${sDateCreate};${area};${bankIdinn};;;${bankIdlastName} ${bankIdfirstName} ${bankIdmiddleName};4;${aim};${date_start1};${date_stop1};${place_living};${bankIdPassport};1;${phone};${email}
sID_Codepage = win1251
nASCI_Spliter = 18
sDateCreateFormat = dd.MM.yyyy HH:mm:ss
sFileName = dohody.dat
3) Подгружать этот контент файлов с пропертями можно через сервис:
https://test.region.igov.org.ua/wf-region/service/rest/getPatternFile?sPathFile=print/export/dnepr_spravka_o_doxodax.properties
https://github.com/e-government-ua/i/blob/test/docs/specification.md#30-Работа-с-файлами-шаблонами
т.е. просто "пробросить" этот сервис через ноду и в онлайне, после выбора пункта с БП - подтягивать проперти, которые и должны стать в основу, при использовании экспорта...

| 1.0 | На дашборде в разделе "Звіни" блоке "Експорт" добавить такой-же комбобокс с доступными БП, как и в блоке с "Статистика" и для каждого БП брать свой набор параметров, беря их из патернов /pattern/export - 1) сейчас набор параметров захардкоден жестко:
exportLink: function (exportParams) {
var data = {
'sID_BP': 'dnepr_spravka_o_doxodax',
'sID_State_BP': 'usertask1',
'sDateAt': exportParams.from,
'sDateTo': exportParams.to,
'saFields': '${nID_Task};${sDateCreate};${area};${bankIdinn};;;${bankIdlastName} ${bankIdfirstName} ${bankIdmiddleName};4;${aim};${date_start1};${date_stop1};${place_living};${bankIdPassport};1;${phone};${email}',
'sID_Codepage': 'win1251',
'nASCI_Spliter': '18',
'sDateCreateFormat': 'dd.MM.yyyy HH:mm:ss',
'sFileName': 'dohody.dat'
};
, а нужно чтоб он брался из файла по пути
/pattern/export/*
где название файла = id бизнеспроцесса
за исключением параметров:
'sID_BP': 'dnepr_spravka_o_doxodax',
'sDateAt': exportParams.from,
'sDateTo': exportParams.to,
2)
контент паттерна должен быть в виде обычных пропертей:
sID_BP = dnepr_spravka_o_doxodax
sID_State_BP = usertask1
saFields = ${nID_Task};${sDateCreate};${area};${bankIdinn};;;${bankIdlastName} ${bankIdfirstName} ${bankIdmiddleName};4;${aim};${date_start1};${date_stop1};${place_living};${bankIdPassport};1;${phone};${email}
sID_Codepage = win1251
nASCI_Spliter = 18
sDateCreateFormat = dd.MM.yyyy HH:mm:ss
sFileName = dohody.dat
3) Подгружать этот контент файлов с пропертями можно через сервис:
https://test.region.igov.org.ua/wf-region/service/rest/getPatternFile?sPathFile=print/export/dnepr_spravka_o_doxodax.properties
https://github.com/e-government-ua/i/blob/test/docs/specification.md#30-Работа-с-файлами-шаблонами
т.е. просто "пробросить" этот сервис через ноду и в онлайне, после выбора пункта с БП - подтягивать проперти, которые и должны стать в основу, при использовании экспорта...

| process | на дашборде в разделе звіни блоке експорт добавить такой же комбобокс с доступными бп как и в блоке с статистика и для каждого бп брать свой набор параметров беря их из патернов pattern export сейчас набор параметров захардкоден жестко exportlink function exportparams var data sid bp dnepr spravka o doxodax sid state bp sdateat exportparams from sdateto exportparams to safields nid task sdatecreate area bankidinn bankidlastname bankidfirstname bankidmiddlename aim date date place living bankidpassport phone email sid codepage nasci spliter sdatecreateformat dd mm yyyy hh mm ss sfilename dohody dat а нужно чтоб он брался из файла по пути pattern export где название файла id бизнеспроцесса за исключением параметров sid bp dnepr spravka o doxodax sdateat exportparams from sdateto exportparams to контент паттерна должен быть в виде обычных пропертей sid bp dnepr spravka o doxodax sid state bp safields nid task sdatecreate area bankidinn bankidlastname bankidfirstname bankidmiddlename aim date date place living bankidpassport phone email sid codepage nasci spliter sdatecreateformat dd mm yyyy hh mm ss sfilename dohody dat подгружать этот контент файлов с пропертями можно через сервис т е просто пробросить этот сервис через ноду и в онлайне после выбора пункта с бп подтягивать проперти которые и должны стать в основу при использовании экспорта | 1 |
9,231 | 12,260,985,416 | IssuesEvent | 2020-05-06 19:15:39 | cranec-project/Covid-19 | https://api.github.com/repos/cranec-project/Covid-19 | opened | moving prone patients | At overwhelm stage Critical ICU process Specific need Tech:Mechanics Ventilation | Patients with ARDS often benefit from being in a prone position for extended periods of time. the problem si moving them in and out of the prone position, especially if they are heavy, under the twin constraints of:
1. not disconnecting any tubes, etc.
2. using PPE | 1.0 | moving prone patients - Patients with ARDS often benefit from being in a prone position for extended periods of time. the problem si moving them in and out of the prone position, especially if they are heavy, under the twin constraints of:
1. not disconnecting any tubes, etc.
2. using PPE | process | moving prone patients patients with ards often benefit from being in a prone position for extended periods of time the problem si moving them in and out of the prone position especially if they are heavy under the twin constraints of not disconnecting any tubes etc using ppe | 1 |
818,270 | 30,681,463,874 | IssuesEvent | 2023-07-26 09:26:08 | o3de/o3de | https://api.github.com/repos/o3de/o3de | closed | Windows: Mesh entities are not rendered in viewport | kind/bug needs-triage sig/content priority/critical | **Describe the bug**
Ground and shaderball entities in newly created levels are not rendered. Additionally, newly created entities with Mesh component added and asset assigned to it are also not rendering properly. All assets were processed in Asset Processor.
The issue does not occur on Linux.
**Steps to reproduce**
Steps to reproduce the behavior:
1. Open the Editor.
2. Open or create a new level.
3. (Optional) Create a new Entity with a Mesh component with an asset assigned.
**Expected behavior**
The ground and shaderball entities are rendered properly.
**Actual behavior**
The ground and shaderball entities are not rendered.
**Screenshots/Video**
https://github.com/o3de/o3de/assets/87059746/c818c332-14a7-4048-9e05-edc0dbf1e991
**Found in Branch**
Development
**Commit ID from [o3de/o3de](https://github.com/o3de/o3de) Repository**
[0eaa719](https://github.com/o3de/o3de/commit/0eaa71988c9d69d053cecbb2a0118547c7cbf2d7)
**Desktop/Device:**
- Device: PC
- OS: Windows
- Version 11
- CPU Ryzen 5600H
- GPU NVidia RTX 3070
- Memory 16GB | 1.0 | Windows: Mesh entities are not rendered in viewport - **Describe the bug**
Ground and shaderball entities in newly created levels are not rendered. Additionally, newly created entities with Mesh component added and asset assigned to it are also not rendering properly. All assets were processed in Asset Processor.
The issue does not occur on Linux.
**Steps to reproduce**
Steps to reproduce the behavior:
1. Open the Editor.
2. Open or create a new level.
3. (Optional) Create a new Entity with a Mesh component with an asset assigned.
**Expected behavior**
The ground and shaderball entities are rendered properly.
**Actual behavior**
The ground and shaderball entities are not rendered.
**Screenshots/Video**
https://github.com/o3de/o3de/assets/87059746/c818c332-14a7-4048-9e05-edc0dbf1e991
**Found in Branch**
Development
**Commit ID from [o3de/o3de](https://github.com/o3de/o3de) Repository**
[0eaa719](https://github.com/o3de/o3de/commit/0eaa71988c9d69d053cecbb2a0118547c7cbf2d7)
**Desktop/Device:**
- Device: PC
- OS: Windows
- Version 11
- CPU Ryzen 5600H
- GPU NVidia RTX 3070
- Memory 16GB | non_process | windows mesh entities are not rendered in viewport describe the bug ground and shaderball entities in newly created levels are not rendered additionally newly created entities with mesh component added and asset assigned to it are also not rendering properly all assets were processed in asset processor the issue does not occur on linux steps to reproduce steps to reproduce the behavior open the editor open or create a new level optional create a new entity with a mesh component with an asset assigned expected behavior the ground and shaderball entities are rendered properly actual behavior the ground and shaderball entities are not rendered screenshots video found in branch development commit id from repository desktop device device pc os windows version cpu ryzen gpu nvidia rtx memory | 0 |
17,760 | 23,677,533,437 | IssuesEvent | 2022-08-28 10:08:30 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Show feature count of Processing outputs | Processing Feature Request | **Feature description.**
The insights from outputs generated with **_Processing_** tools, especially those related with overlays or aggregations, would be more immediate if the 'Show feature count' was toggled on automatically when these layers are added to the `Layers panel`.
This could also be a check box showing in the context of any tool that outputs vector or tabular data, or a general **_Processing_** option the user could activate if he wishes.
Another option maybe, would be include it as part of a style defintion?
**Additional context**
In the example below, the overlapping_training the result of a model where the most important information is the number of features:

If this tool is a recurrent task, or if it runs in batch mode it is a bit cumbersome to do the right-click show feature count for all the layers.
| 1.0 | Show feature count of Processing outputs - **Feature description.**
The insights from outputs generated with **_Processing_** tools, especially those related with overlays or aggregations, would be more immediate if the 'Show feature count' was toggled on automatically when these layers are added to the `Layers panel`.
This could also be a check box showing in the context of any tool that outputs vector or tabular data, or a general **_Processing_** option the user could activate if he wishes.
Another option maybe, would be include it as part of a style defintion?
**Additional context**
In the example below, the overlapping_training the result of a model where the most important information is the number of features:

If this tool is a recurrent task, or if it runs in batch mode it is a bit cumbersome to do the right-click show feature count for all the layers.
| process | show feature count of processing outputs feature description the insights from outputs generated with processing tools especially those related with overlays or aggregations would be more immediate if the show feature count was toggled on automatically when these layers are added to the layers panel this could also be a check box showing in the context of any tool that outputs vector or tabular data or a general processing option the user could activate if he wishes another option maybe would be include it as part of a style defintion additional context in the example below the overlapping training the result of a model where the most important information is the number of features if this tool is a recurrent task or if it runs in batch mode it is a bit cumbersome to do the right click show feature count for all the layers | 1 |
3,689 | 6,717,068,183 | IssuesEvent | 2017-10-14 16:43:52 | TraningManagementSystem/tms | https://api.github.com/repos/TraningManagementSystem/tms | closed | サービス(ビジネスロジック)実装の手前まで案内する | dev process | ### Description
サービス(ビジネスロジック)実装の手前まで案内する
----
### Details
必要なのは、主に自動生成とスクラッチのつなぎの部分のみだと考える。
----
### Relation Issue
なし
---- | 1.0 | サービス(ビジネスロジック)実装の手前まで案内する - ### Description
サービス(ビジネスロジック)実装の手前まで案内する
----
### Details
必要なのは、主に自動生成とスクラッチのつなぎの部分のみだと考える。
----
### Relation Issue
なし
---- | process | サービス(ビジネスロジック)実装の手前まで案内する description サービス(ビジネスロジック)実装の手前まで案内する details 必要なのは、主に自動生成とスクラッチのつなぎの部分のみだと考える。 relation issue なし | 1 |
821,213 | 30,811,038,652 | IssuesEvent | 2023-08-01 10:23:34 | telerik/kendo-react | https://api.github.com/repos/telerik/kendo-react | reopened | Switching between views in the Scheduler causes misalignment | bug pkg:scheduler Priority 1 SEV: High | When switching between the timeline view and other views, items are misaligned.
Steps to reproduce:
1. Choose vertical orientation
2. Select Timeline, observe the alignment of the items
3. Select any other view and then Timeline again
https://stackblitz.com/edit/react-6n6zxv-klygr2?file=app/main.tsx

Ticket ID: 1595695
| 1.0 | Switching between views in the Scheduler causes misalignment - When switching between the timeline view and other views, items are misaligned.
Steps to reproduce:
1. Choose vertical orientation
2. Select Timeline, observe the alignment of the items
3. Select any other view and then Timeline again
https://stackblitz.com/edit/react-6n6zxv-klygr2?file=app/main.tsx

Ticket ID: 1595695
| non_process | switching between views in the scheduler causes misalignment when switching between the timeline view and other views items are misaligned steps to reproduce choose vertical orientation select timeline observe the alignment of the items select any other view and then timeline again ticket id | 0 |
181 | 2,588,005,216 | IssuesEvent | 2015-02-17 22:01:21 | GsDevKit/gsDevKitHome | https://api.github.com/repos/GsDevKit/gsDevKitHome | closed | More definitive `script is finished` needed for installServer and friends | in process | Here's the current end of processing:
```
Bulk migrate of 0 candidate classes
No instance migrations performed.[12/12/2014 19:09:10.134 UTC]
gci login: currSession 1 rpc gem processId 2424 OOB keep-alive interval 0
---Starting backup to '/home/notroot/gsDevKitHome/gemstone/stones/Dario/backups//tode.dbf' (12/12/2014 11:09:10)
---Finished backup to 12/12/2014 11:09:11 -- tode.dbf
---Starting backup to '/home/notroot/gsDevKitHome/gemstone/stones/Dario/backups//home.dbf' (12/12/2014 11:09:11)
---Finished backup to 12/12/2014 11:09:18 -- home.dbf
a Text for '[176463617 sz:9 TDObjectGatewayNode] /home/'
```
no newline. A pretty wimpy way to end a script... | 1.0 | More definitive `script is finished` needed for installServer and friends - Here's the current end of processing:
```
Bulk migrate of 0 candidate classes
No instance migrations performed.[12/12/2014 19:09:10.134 UTC]
gci login: currSession 1 rpc gem processId 2424 OOB keep-alive interval 0
---Starting backup to '/home/notroot/gsDevKitHome/gemstone/stones/Dario/backups//tode.dbf' (12/12/2014 11:09:10)
---Finished backup to 12/12/2014 11:09:11 -- tode.dbf
---Starting backup to '/home/notroot/gsDevKitHome/gemstone/stones/Dario/backups//home.dbf' (12/12/2014 11:09:11)
---Finished backup to 12/12/2014 11:09:18 -- home.dbf
a Text for '[176463617 sz:9 TDObjectGatewayNode] /home/'
```
no newline. A pretty wimpy way to end a script... | process | more definitive script is finished needed for installserver and friends here s the current end of processing bulk migrate of candidate classes no instance migrations performed gci login currsession rpc gem processid oob keep alive interval starting backup to home notroot gsdevkithome gemstone stones dario backups tode dbf finished backup to tode dbf starting backup to home notroot gsdevkithome gemstone stones dario backups home dbf finished backup to home dbf a text for home no newline a pretty wimpy way to end a script | 1 |
421,211 | 28,311,103,395 | IssuesEvent | 2023-04-10 15:28:00 | LuongXuanThang20110724/Web_TalkDesk | https://api.github.com/repos/LuongXuanThang20110724/Web_TalkDesk | closed | Code UI "ServiceLevel" | documentation | - [x] 1. Sử dụng và tinh chỉnh lại component Feature2
- [x] 2. Sử dụng và tinh chỉnh lại select "filter"
- [x] 3. Sử dụng và tinh chỉnh lại "Times"
- [x] 4. Sử dụng và tinh chỉnh lại "Stacked Bar Chart"
- [x] 5. Code "Pie Chart With Needle"
- [x] 6. Review hoàn chỉnh | 1.0 | Code UI "ServiceLevel" - - [x] 1. Sử dụng và tinh chỉnh lại component Feature2
- [x] 2. Sử dụng và tinh chỉnh lại select "filter"
- [x] 3. Sử dụng và tinh chỉnh lại "Times"
- [x] 4. Sử dụng và tinh chỉnh lại "Stacked Bar Chart"
- [x] 5. Code "Pie Chart With Needle"
- [x] 6. Review hoàn chỉnh | non_process | code ui servicelevel sử dụng và tinh chỉnh lại component sử dụng và tinh chỉnh lại select filter sử dụng và tinh chỉnh lại times sử dụng và tinh chỉnh lại stacked bar chart code pie chart with needle review hoàn chỉnh | 0 |
60,577 | 7,359,877,476 | IssuesEvent | 2018-03-10 12:31:08 | w3c/EasyChecks | https://api.github.com/repos/w3c/EasyChecks | closed | URI | wai-redesign-before | current URI is https://www.w3.org/WAI/eval/preliminary
(because once upon a time, this page was called "Preliminary Review of Web Sites for Accessibility")
In-progress prototype has http://w3c.github.io/wai-website/test-evaluate/easychecks/
Given that we expect to change the title when we revise resource, and it might not end up being "Easy Checks", I mildly propose that we do not use "/easychecks/" and probably continue to use "/preliminary/" | 1.0 | URI - current URI is https://www.w3.org/WAI/eval/preliminary
(because once upon a time, this page was called "Preliminary Review of Web Sites for Accessibility")
In-progress prototype has http://w3c.github.io/wai-website/test-evaluate/easychecks/
Given that we expect to change the title when we revise resource, and it might not end up being "Easy Checks", I mildly propose that we do not use "/easychecks/" and probably continue to use "/preliminary/" | non_process | uri current uri is because once upon a time this page was called preliminary review of web sites for accessibility in progress prototype has given that we expect to change the title when we revise resource and it might not end up being easy checks i mildly propose that we do not use easychecks and probably continue to use preliminary | 0 |
10,829 | 13,610,191,308 | IssuesEvent | 2020-09-23 06:56:36 | peopledoc/procrastinate | https://api.github.com/repos/peopledoc/procrastinate | closed | Use setuptools_scm in setup.py | Contains: Only Python Good for: newcomers Type: Process | Our setup.py makes exactly what setuptools_scm provides, so we could get rid of some code for a part that is completely process boilerplate.
I think it could be nice to switch to it.
https://pypi.org/project/setuptools-scm/ | 1.0 | Use setuptools_scm in setup.py - Our setup.py makes exactly what setuptools_scm provides, so we could get rid of some code for a part that is completely process boilerplate.
I think it could be nice to switch to it.
https://pypi.org/project/setuptools-scm/ | process | use setuptools scm in setup py our setup py makes exactly what setuptools scm provides so we could get rid of some code for a part that is completely process boilerplate i think it could be nice to switch to it | 1 |
112 | 2,546,288,699 | IssuesEvent | 2015-01-29 22:47:17 | tinkerpop/tinkerpop3 | https://api.github.com/repos/tinkerpop/tinkerpop3 | opened | There are no lambdas. [proposal] | enhancement process | What if there was only `Traversal`. This would greatly reduce the complexity of many steps. No such think as functions vs. traversals. What is `choose{it.name[0] == 'm'}` then? I'm glad you asked -- `choose(__.filter{it.name[0] == 'm'})`. People could write the lamda form (the first expression), but it is encoded as a traversal. The ONLY steps that actually take lambdas are the core steps -- filter(),map(),flatMap(),sideEffect(), and branch(). Every other place there is a lambda, its actually a traversal rewritten in terms of filter(),map(), etc.
This would mean that there would be no concept of `FunctionHolder`. Just `TraversalHolder`s. No concept of if/else "is this by() a true function or a traversal function?".
The theory then is Gremlin is an arbitrary nesting of `Traversals`. That is it. | 1.0 | There are no lambdas. [proposal] - What if there was only `Traversal`. This would greatly reduce the complexity of many steps. No such think as functions vs. traversals. What is `choose{it.name[0] == 'm'}` then? I'm glad you asked -- `choose(__.filter{it.name[0] == 'm'})`. People could write the lamda form (the first expression), but it is encoded as a traversal. The ONLY steps that actually take lambdas are the core steps -- filter(),map(),flatMap(),sideEffect(), and branch(). Every other place there is a lambda, its actually a traversal rewritten in terms of filter(),map(), etc.
This would mean that there would be no concept of `FunctionHolder`. Just `TraversalHolder`s. No concept of if/else "is this by() a true function or a traversal function?".
The theory then is Gremlin is an arbitrary nesting of `Traversals`. That is it. | process | there are no lambdas what if there was only traversal this would greatly reduce the complexity of many steps no such think as functions vs traversals what is choose it name m then i m glad you asked choose filter it name m people could write the lamda form the first expression but it is encoded as a traversal the only steps that actually take lambdas are the core steps filter map flatmap sideeffect and branch every other place there is a lambda its actually a traversal rewritten in terms of filter map etc this would mean that there would be no concept of functionholder just traversalholder s no concept of if else is this by a true function or a traversal function the theory then is gremlin is an arbitrary nesting of traversals that is it | 1 |
88,608 | 3,779,723,904 | IssuesEvent | 2016-03-18 09:47:23 | radar2go/radar-beta | https://api.github.com/repos/radar2go/radar-beta | closed | Oznaka notifikacije | enhancement priority | Fajn bi bilo da je v appu oznaceno ce imas kako notifikacijo. Neko oznako na fotki in potem v posamezni kategoriji. Ce ti je negdo poslal zahtev za prijateljstvo oznaka na people
Ce mas notifikacijo oznako na notifikacijo.
Enima ni najbolj jasna oznaka za dodajanje prijateljev. Meni pa je.
Moznost je da ko si z nekom prijatelj pise friend ce nisi pa + ko hoces nekoga odstranit kliknes na friend pa popap prasa ce ga hoces zbrisat kot pri brisanju objav
Bom poslusal se vec ljudi | 1.0 | Oznaka notifikacije - Fajn bi bilo da je v appu oznaceno ce imas kako notifikacijo. Neko oznako na fotki in potem v posamezni kategoriji. Ce ti je negdo poslal zahtev za prijateljstvo oznaka na people
Ce mas notifikacijo oznako na notifikacijo.
Enima ni najbolj jasna oznaka za dodajanje prijateljev. Meni pa je.
Moznost je da ko si z nekom prijatelj pise friend ce nisi pa + ko hoces nekoga odstranit kliknes na friend pa popap prasa ce ga hoces zbrisat kot pri brisanju objav
Bom poslusal se vec ljudi | non_process | oznaka notifikacije fajn bi bilo da je v appu oznaceno ce imas kako notifikacijo neko oznako na fotki in potem v posamezni kategoriji ce ti je negdo poslal zahtev za prijateljstvo oznaka na people ce mas notifikacijo oznako na notifikacijo enima ni najbolj jasna oznaka za dodajanje prijateljev meni pa je moznost je da ko si z nekom prijatelj pise friend ce nisi pa ko hoces nekoga odstranit kliknes na friend pa popap prasa ce ga hoces zbrisat kot pri brisanju objav bom poslusal se vec ljudi | 0 |
196,861 | 15,611,918,162 | IssuesEvent | 2021-03-19 14:49:45 | k-roffle/knitting-frontend | https://api.github.com/repos/k-roffle/knitting-frontend | closed | 이슈 템플릿 나오도록 수정 | Documentation Low Other bug | ## Bug 제안 사유
이슈 템플릿이 이슈 등록 시 나오지 않습니다
<!-- 왜 이 Bug를 제안하게 되었는지 간략하게 적어주세요. -->
## 재현 상황
이슈 추가 버튼을 누릅니다
<!-- Bug를 재현할 수 있는 순서를 적어주세요. -->
## 예상 동작
이슈 추가 버튼을 누르면 템플릿이 나와야 합니다
<!-- 예상 동작에 대한 명확하고 간결한 설명을 적어주세요. -->
## 스크린샷
<!-- 문제 해결에 도움이 되는 스크린샷이 있다면 추가해주세요. -->
## Desktop 스펙
- OS:
- Browser:
- Version:
## Mobile 스펙
- Device:
- OS:
- Browser:
- Version:
## 부가 설명
<!-- 생략 가능합니다. --> | 1.0 | 이슈 템플릿 나오도록 수정 - ## Bug 제안 사유
이슈 템플릿이 이슈 등록 시 나오지 않습니다
<!-- 왜 이 Bug를 제안하게 되었는지 간략하게 적어주세요. -->
## 재현 상황
이슈 추가 버튼을 누릅니다
<!-- Bug를 재현할 수 있는 순서를 적어주세요. -->
## 예상 동작
이슈 추가 버튼을 누르면 템플릿이 나와야 합니다
<!-- 예상 동작에 대한 명확하고 간결한 설명을 적어주세요. -->
## 스크린샷
<!-- 문제 해결에 도움이 되는 스크린샷이 있다면 추가해주세요. -->
## Desktop 스펙
- OS:
- Browser:
- Version:
## Mobile 스펙
- Device:
- OS:
- Browser:
- Version:
## 부가 설명
<!-- 생략 가능합니다. --> | non_process | 이슈 템플릿 나오도록 수정 bug 제안 사유 이슈 템플릿이 이슈 등록 시 나오지 않습니다 재현 상황 이슈 추가 버튼을 누릅니다 예상 동작 이슈 추가 버튼을 누르면 템플릿이 나와야 합니다 스크린샷 desktop 스펙 os browser version mobile 스펙 device os browser version 부가 설명 | 0 |
1,186 | 3,687,740,468 | IssuesEvent | 2016-02-25 09:49:19 | dita-ot/dita-ot | https://api.github.com/repos/dita-ot/dita-ot | closed | Resolve coderef before conref | bug preprocess | Currently, the preprocessing order of operations resolves coderefs late — after conref resolution is complete. This can result in coderefs not being resolved. For example:
* Topic A contains an example with a coderef in it.
* Topic B conrefs the example from topic A.
At the time the file lists are generated, Topic B doesn't have a coderef in it, so the coderef never gets resolved in topic B, only in topic A.
This can easily be fixed by moving coderef resolution before conref resolution. A coderef will never require conref resolution of its contents, so changing the order won't result in conrefs not being resolved. | 1.0 | Resolve coderef before conref - Currently, the preprocessing order of operations resolves coderefs late — after conref resolution is complete. This can result in coderefs not being resolved. For example:
* Topic A contains an example with a coderef in it.
* Topic B conrefs the example from topic A.
At the time the file lists are generated, Topic B doesn't have a coderef in it, so the coderef never gets resolved in topic B, only in topic A.
This can easily be fixed by moving coderef resolution before conref resolution. A coderef will never require conref resolution of its contents, so changing the order won't result in conrefs not being resolved. | process | resolve coderef before conref currently the preprocessing order of operations resolves coderefs late — after conref resolution is complete this can result in coderefs not being resolved for example topic a contains an example with a coderef in it topic b conrefs the example from topic a at the time the file lists are generated topic b doesn t have a coderef in it so the coderef never gets resolved in topic b only in topic a this can easily be fixed by moving coderef resolution before conref resolution a coderef will never require conref resolution of its contents so changing the order won t result in conrefs not being resolved | 1 |
18,199 | 24,254,773,124 | IssuesEvent | 2022-09-27 16:48:13 | AltTesterBot/test | https://api.github.com/repos/AltTesterBot/test | closed | setup a pipeline process | doing process gitlab | We need to figure out a way to:
* build automatically
* run the python unit tests
* run some of the Appium tests?
* dist to pypi (pip) when merged to master
---
<sub>You can find the original issue from GitLab [here](https://gitlab.com/altom/altunity/altunitytester/-/issues/5).</sub>
| 1.0 | setup a pipeline process - We need to figure out a way to:
* build automatically
* run the python unit tests
* run some of the Appium tests?
* dist to pypi (pip) when merged to master
---
<sub>You can find the original issue from GitLab [here](https://gitlab.com/altom/altunity/altunitytester/-/issues/5).</sub>
| process | setup a pipeline process we need to figure out a way to build automatically run the python unit tests run some of the appium tests dist to pypi pip when merged to master you can find the original issue from gitlab | 1 |
30,806 | 14,674,622,204 | IssuesEvent | 2020-12-30 15:43:45 | sergiorribeiro/webmetry | https://api.github.com/repos/sergiorribeiro/webmetry | opened | [Controller] Controller/api/v1/hedge_fund_accounting_report_records/ending_balance_share_class_records | controller needs squad prodops transaction-performance | The transaction **`Controller/api/v1/hedge_fund_accounting_report_records/ending_balance_share_class_records` (Controller)** violated a performance threshold.
## Violations:
- [2020-12-30] Maximum execution duration during the current week was exceeded. Duration: **`1.4 min`**. Limit: **`30 s`**. <!-- /// -->
## Weekly transaction performance:
### Evolution graph (percentile 95):
```
[2020-12-21] ~ [2020-12-28] 🟦🟦🟦🟦🟦🟦🟦🟦🟦⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️ 8.9 s
[2020-12-14] ~ [2020-12-21] 🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦 19.5 s
[2020-12-07] ~ [2020-12-14] 🟦🟦🟦🟦🟦🟦🟦🟦⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️ 8.3 s
[2020-11-30] ~ [2020-12-07] 🟦🟦🟦🟦🟦⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️ 5.7 s
```
### Weekly indicators:
| Indicator | Week -3 | Week -2 | Week -1 | Week 0 |
|-|-|-|-|-|
| Above average hits | 0 | 0 | 4 | 1 |
| Max | 12.4 s | 19.2 s | 3.2 min | 1.4 min |
| Average | 1.8 s | 2.1 s | 4.7 s | 3.0 s |
| Percentile 95 | 5.7 s | 8.3 s | 19.5 s | 8.9 s |
### Month totals:
| Indicator | Value |
|-|-|
| Max | 3.2 min |
| Average | 3.0 s |
| Percentile 95 | 9.3 s |
<!-- [EPID:7f924338d8cc7269a95ab671caab9f6aee5c53bf] --> | True | [Controller] Controller/api/v1/hedge_fund_accounting_report_records/ending_balance_share_class_records - The transaction **`Controller/api/v1/hedge_fund_accounting_report_records/ending_balance_share_class_records` (Controller)** violated a performance threshold.
## Violations:
- [2020-12-30] Maximum execution duration during the current week was exceeded. Duration: **`1.4 min`**. Limit: **`30 s`**. <!-- /// -->
## Weekly transaction performance:
### Evolution graph (percentile 95):
```
[2020-12-21] ~ [2020-12-28] 🟦🟦🟦🟦🟦🟦🟦🟦🟦⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️ 8.9 s
[2020-12-14] ~ [2020-12-21] 🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦 19.5 s
[2020-12-07] ~ [2020-12-14] 🟦🟦🟦🟦🟦🟦🟦🟦⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️ 8.3 s
[2020-11-30] ~ [2020-12-07] 🟦🟦🟦🟦🟦⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️ 5.7 s
```
### Weekly indicators:
| Indicator | Week -3 | Week -2 | Week -1 | Week 0 |
|-|-|-|-|-|
| Above average hits | 0 | 0 | 4 | 1 |
| Max | 12.4 s | 19.2 s | 3.2 min | 1.4 min |
| Average | 1.8 s | 2.1 s | 4.7 s | 3.0 s |
| Percentile 95 | 5.7 s | 8.3 s | 19.5 s | 8.9 s |
### Month totals:
| Indicator | Value |
|-|-|
| Max | 3.2 min |
| Average | 3.0 s |
| Percentile 95 | 9.3 s |
<!-- [EPID:7f924338d8cc7269a95ab671caab9f6aee5c53bf] --> | non_process | controller api hedge fund accounting report records ending balance share class records the transaction controller api hedge fund accounting report records ending balance share class records controller violated a performance threshold violations maximum execution duration during the current week was exceeded duration min limit s weekly transaction performance evolution graph percentile 🟦🟦🟦🟦🟦🟦🟦🟦🟦⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️ s 🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦🟦 s 🟦🟦🟦🟦🟦🟦🟦🟦⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️ s 🟦🟦🟦🟦🟦⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️⬜️ s weekly indicators indicator week week week week above average hits max s s min min average s s s s percentile s s s s month totals indicator value max min average s percentile s | 0 |
190,986 | 22,192,267,718 | IssuesEvent | 2022-06-07 01:09:09 | opentok/opentok-react-native-samples | https://api.github.com/repos/opentok/opentok-react-native-samples | opened | opentok-react-native-0.20.1.tgz: 3 vulnerabilities (highest severity is: 7.5) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opentok-react-native-0.20.1.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /BasicVideoChat/package.json</p>
<p>Path to vulnerable library: /Signaling/node_modules/follow-redirects/package.json,/BasicVideoChat/node_modules/follow-redirects/package.json</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-3749](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | axios-0.21.1.tgz | Transitive | 0.20.2 | ✅ |
| [CVE-2022-0155](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | follow-redirects-1.14.1.tgz | Transitive | 0.20.2 | ✅ |
| [CVE-2022-0536](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | follow-redirects-1.14.1.tgz | Transitive | 0.20.2 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-3749</summary>
### Vulnerable Library - <b>axios-0.21.1.tgz</b></p>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.21.1.tgz">https://registry.npmjs.org/axios/-/axios-0.21.1.tgz</a></p>
<p>Path to dependency file: /Signaling/package.json</p>
<p>Path to vulnerable library: /Signaling/node_modules/axios/package.json,/BasicVideoChat/node_modules/axios/package.json</p>
<p>
Dependency Hierarchy:
- opentok-react-native-0.20.1.tgz (Root Library)
- :x: **axios-0.21.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
axios is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749>CVE-2021-3749</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/">https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (axios): 0.21.2</p>
<p>Direct dependency fix Resolution (opentok-react-native): 0.20.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0155</summary>
### Vulnerable Library - <b>follow-redirects-1.14.1.tgz</b></p>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz</a></p>
<p>Path to dependency file: /Signaling/package.json</p>
<p>Path to vulnerable library: /Signaling/node_modules/follow-redirects/package.json,/BasicVideoChat/node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- opentok-react-native-0.20.1.tgz (Root Library)
- axios-0.21.1.tgz
- :x: **follow-redirects-1.14.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution (follow-redirects): 1.14.7</p>
<p>Direct dependency fix Resolution (opentok-react-native): 0.20.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0536</summary>
### Vulnerable Library - <b>follow-redirects-1.14.1.tgz</b></p>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz</a></p>
<p>Path to dependency file: /Signaling/package.json</p>
<p>Path to vulnerable library: /Signaling/node_modules/follow-redirects/package.json,/BasicVideoChat/node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- opentok-react-native-0.20.1.tgz (Root Library)
- axios-0.21.1.tgz
- :x: **follow-redirects-1.14.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.
<p>Publish Date: 2022-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536>CVE-2022-0536</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536</a></p>
<p>Release Date: 2022-02-09</p>
<p>Fix Resolution (follow-redirects): 1.14.8</p>
<p>Direct dependency fix Resolution (opentok-react-native): 0.20.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"opentok-react-native","packageVersion":"0.20.1","packageFilePaths":["/Signaling/package.json"],"isTransitiveDependency":false,"dependencyTree":"opentok-react-native:0.20.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.20.2","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-3749","vulnerabilityDetails":"axios is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"opentok-react-native","packageVersion":"0.20.1","packageFilePaths":["/Signaling/package.json"],"isTransitiveDependency":false,"dependencyTree":"opentok-react-native:0.20.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.20.2","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-0155","vulnerabilityDetails":"follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"opentok-react-native","packageVersion":"0.20.1","packageFilePaths":["/Signaling/package.json"],"isTransitiveDependency":false,"dependencyTree":"opentok-react-native:0.20.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.20.2","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-0536","vulnerabilityDetails":"Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> --> | True | opentok-react-native-0.20.1.tgz: 3 vulnerabilities (highest severity is: 7.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opentok-react-native-0.20.1.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /BasicVideoChat/package.json</p>
<p>Path to vulnerable library: /Signaling/node_modules/follow-redirects/package.json,/BasicVideoChat/node_modules/follow-redirects/package.json</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2021-3749](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | axios-0.21.1.tgz | Transitive | 0.20.2 | ✅ |
| [CVE-2022-0155](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.5 | follow-redirects-1.14.1.tgz | Transitive | 0.20.2 | ✅ |
| [CVE-2022-0536](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.9 | follow-redirects-1.14.1.tgz | Transitive | 0.20.2 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-3749</summary>
### Vulnerable Library - <b>axios-0.21.1.tgz</b></p>
<p>Promise based HTTP client for the browser and node.js</p>
<p>Library home page: <a href="https://registry.npmjs.org/axios/-/axios-0.21.1.tgz">https://registry.npmjs.org/axios/-/axios-0.21.1.tgz</a></p>
<p>Path to dependency file: /Signaling/package.json</p>
<p>Path to vulnerable library: /Signaling/node_modules/axios/package.json,/BasicVideoChat/node_modules/axios/package.json</p>
<p>
Dependency Hierarchy:
- opentok-react-native-0.20.1.tgz (Root Library)
- :x: **axios-0.21.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
axios is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-08-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749>CVE-2021-3749</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/">https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (axios): 0.21.2</p>
<p>Direct dependency fix Resolution (opentok-react-native): 0.20.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0155</summary>
### Vulnerable Library - <b>follow-redirects-1.14.1.tgz</b></p>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz</a></p>
<p>Path to dependency file: /Signaling/package.json</p>
<p>Path to vulnerable library: /Signaling/node_modules/follow-redirects/package.json,/BasicVideoChat/node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- opentok-react-native-0.20.1.tgz (Root Library)
- axios-0.21.1.tgz
- :x: **follow-redirects-1.14.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor
<p>Publish Date: 2022-01-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p>
<p>Release Date: 2022-01-10</p>
<p>Fix Resolution (follow-redirects): 1.14.7</p>
<p>Direct dependency fix Resolution (opentok-react-native): 0.20.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-0536</summary>
### Vulnerable Library - <b>follow-redirects-1.14.1.tgz</b></p>
<p>HTTP and HTTPS modules that follow redirects.</p>
<p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.1.tgz</a></p>
<p>Path to dependency file: /Signaling/package.json</p>
<p>Path to vulnerable library: /Signaling/node_modules/follow-redirects/package.json,/BasicVideoChat/node_modules/follow-redirects/package.json</p>
<p>
Dependency Hierarchy:
- opentok-react-native-0.20.1.tgz (Root Library)
- axios-0.21.1.tgz
- :x: **follow-redirects-1.14.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.
<p>Publish Date: 2022-02-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536>CVE-2022-0536</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.9</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0536</a></p>
<p>Release Date: 2022-02-09</p>
<p>Fix Resolution (follow-redirects): 1.14.8</p>
<p>Direct dependency fix Resolution (opentok-react-native): 0.20.2</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"opentok-react-native","packageVersion":"0.20.1","packageFilePaths":["/Signaling/package.json"],"isTransitiveDependency":false,"dependencyTree":"opentok-react-native:0.20.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.20.2","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2021-3749","vulnerabilityDetails":"axios is vulnerable to Inefficient Regular Expression Complexity","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3749","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"opentok-react-native","packageVersion":"0.20.1","packageFilePaths":["/Signaling/package.json"],"isTransitiveDependency":false,"dependencyTree":"opentok-react-native:0.20.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.20.2","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-0155","vulnerabilityDetails":"follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"Required","AV":"Network","I":"None"},"extraData":{}},{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"opentok-react-native","packageVersion":"0.20.1","packageFilePaths":["/Signaling/package.json"],"isTransitiveDependency":false,"dependencyTree":"opentok-react-native:0.20.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"0.20.2","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2022-0536","vulnerabilityDetails":"Exposure of Sensitive Information to an Unauthorized Actor in NPM follow-redirects prior to 1.14.8.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0536","cvss3Severity":"medium","cvss3Score":"5.9","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}]</REMEDIATE> --> | non_process | opentok react native tgz vulnerabilities highest severity is vulnerable library opentok react native tgz path to dependency file basicvideochat package json path to vulnerable library signaling node modules follow redirects package json basicvideochat node modules follow redirects package json vulnerabilities cve severity cvss dependency type fixed in remediation available high axios tgz transitive medium follow redirects tgz transitive medium follow redirects tgz transitive details cve vulnerable library axios tgz promise based http client for the browser and node js library home page a href path to dependency file signaling package json path to vulnerable library signaling node modules axios package json basicvideochat node modules axios package json dependency hierarchy opentok react native tgz root library x axios tgz vulnerable library found in base branch main vulnerability details axios is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution axios direct dependency fix resolution opentok react native rescue worker helmet automatic remediation is available for this issue cve vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file signaling package json path to vulnerable library signaling node modules follow redirects package json basicvideochat node modules follow redirects package json dependency hierarchy opentok react native tgz root library axios tgz x follow redirects tgz vulnerable library found in base branch main vulnerability details follow redirects is vulnerable to exposure of private personal information to an unauthorized actor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects direct dependency fix resolution opentok react native rescue worker helmet automatic remediation is available for this issue cve vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file signaling package json path to vulnerable library signaling node modules follow redirects package json basicvideochat node modules follow redirects package json dependency hierarchy opentok react native tgz root library axios tgz x follow redirects tgz vulnerable library found in base branch main vulnerability details exposure of sensitive information to an unauthorized actor in npm follow redirects prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects direct dependency fix resolution opentok react native rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue istransitivedependency false dependencytree opentok react native isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails axios is vulnerable to inefficient regular expression complexity vulnerabilityurl istransitivedependency false dependencytree opentok react native isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails follow redirects is vulnerable to exposure of private personal information to an unauthorized actor vulnerabilityurl istransitivedependency false dependencytree opentok react native isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails exposure of sensitive information to an unauthorized actor in npm follow redirects prior to vulnerabilityurl | 0 |
438,915 | 30,668,851,842 | IssuesEvent | 2023-07-25 20:32:34 | jetstream-cloud/js2docs | https://api.github.com/repos/jetstream-cloud/js2docs | opened | [documentation] Add article for extending volume | documentation | ## Opportunity
This seems like a fairly common things users will want to do, and we get a good number of tickets asking how to do this. Here is an example of a ticket, including my reply:
https://access-ci.atlassian.net/browse/ATS-1987
## Resolution
We should add a page on the public docs for how to do this. | 1.0 | [documentation] Add article for extending volume - ## Opportunity
This seems like a fairly common things users will want to do, and we get a good number of tickets asking how to do this. Here is an example of a ticket, including my reply:
https://access-ci.atlassian.net/browse/ATS-1987
## Resolution
We should add a page on the public docs for how to do this. | non_process | add article for extending volume opportunity this seems like a fairly common things users will want to do and we get a good number of tickets asking how to do this here is an example of a ticket including my reply resolution we should add a page on the public docs for how to do this | 0 |
94,177 | 8,475,697,565 | IssuesEvent | 2018-10-24 19:41:00 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: hotspotsplits/nodes=4 failed | C-test-failure O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/0dba537ae88e495ddf29b4c347b4c30ee99bd046
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stressrace TESTS=hotspotsplits/nodes=4 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=983962&tab=buildLog
```
The test failed on release-2.1:
asm_amd64.s:574,panic.go:502,panic.go:63,signal_unix.go:388,log.go:172,log.go:216,cluster.go:221,cluster.go:695: runtime error: invalid memory address or nil pointer dereference
``` | 1.0 | roachtest: hotspotsplits/nodes=4 failed - SHA: https://github.com/cockroachdb/cockroach/commits/0dba537ae88e495ddf29b4c347b4c30ee99bd046
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stressrace TESTS=hotspotsplits/nodes=4 PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=983962&tab=buildLog
```
The test failed on release-2.1:
asm_amd64.s:574,panic.go:502,panic.go:63,signal_unix.go:388,log.go:172,log.go:216,cluster.go:221,cluster.go:695: runtime error: invalid memory address or nil pointer dereference
``` | non_process | roachtest hotspotsplits nodes failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach make stressrace tests hotspotsplits nodes pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on release asm s panic go panic go signal unix go log go log go cluster go cluster go runtime error invalid memory address or nil pointer dereference | 0 |
21,710 | 30,210,241,146 | IssuesEvent | 2023-07-05 12:19:54 | GispoCoding/eis_toolkit | https://api.github.com/repos/GispoCoding/eis_toolkit | closed | Add Cell-Based Association | enhancement Preprocessing | **Cell-Based Association** (CBA) is used as a pre-processing method for geological data, such as geological maps and structural data. This method allows identifying specific associations of geological features across a given area (i.e. geological environments) using a regular square grid. Associations of geological features are identified and synthetized into unique binary codes, representing the absence or presence of each variable inside the environments.
This method allows the analysis of point-environments relationship. Known occurences define mineralized environments, which are used to compute favorability scores, using various methods. Such methods include Agglomerative Hierarchical Clustering (AHC), Ranking (lithology/mineralization ratios) or Random Forest (RF) for instance.
For more details about the CBA:
**Tourlière, B., Pakyuz-Charrier, E., Cassard, D., Barbanson, L., & Gumiaux, C. (2015)**. *Cell Based Associations: A procedure for considering scarce and mixed mineral occurrences in predictive mapping*. Computers & geosciences, 78, 53-62.
**A. Vella (2022)**. *Highlighting mineralized geological environments through a new Data-driven predictive mapping approach*. PhD Thesis, en, University of Orléans, France.
| 1.0 | Add Cell-Based Association - **Cell-Based Association** (CBA) is used as a pre-processing method for geological data, such as geological maps and structural data. This method allows identifying specific associations of geological features across a given area (i.e. geological environments) using a regular square grid. Associations of geological features are identified and synthetized into unique binary codes, representing the absence or presence of each variable inside the environments.
This method allows the analysis of point-environments relationship. Known occurences define mineralized environments, which are used to compute favorability scores, using various methods. Such methods include Agglomerative Hierarchical Clustering (AHC), Ranking (lithology/mineralization ratios) or Random Forest (RF) for instance.
For more details about the CBA:
**Tourlière, B., Pakyuz-Charrier, E., Cassard, D., Barbanson, L., & Gumiaux, C. (2015)**. *Cell Based Associations: A procedure for considering scarce and mixed mineral occurrences in predictive mapping*. Computers & geosciences, 78, 53-62.
**A. Vella (2022)**. *Highlighting mineralized geological environments through a new Data-driven predictive mapping approach*. PhD Thesis, en, University of Orléans, France.
| process | add cell based association cell based association cba is used as a pre processing method for geological data such as geological maps and structural data this method allows identifying specific associations of geological features across a given area i e geological environments using a regular square grid associations of geological features are identified and synthetized into unique binary codes representing the absence or presence of each variable inside the environments this method allows the analysis of point environments relationship known occurences define mineralized environments which are used to compute favorability scores using various methods such methods include agglomerative hierarchical clustering ahc ranking lithology mineralization ratios or random forest rf for instance for more details about the cba tourlière b pakyuz charrier e cassard d barbanson l gumiaux c cell based associations a procedure for considering scarce and mixed mineral occurrences in predictive mapping computers geosciences a vella highlighting mineralized geological environments through a new data driven predictive mapping approach phd thesis en university of orléans france | 1 |
99,677 | 8,708,076,053 | IssuesEvent | 2018-12-06 09:52:56 | club-soda/club-soda-guide | https://api.github.com/repos/club-soda/club-soda-guide | closed | Filter Beer, Wine, Cider and Spirits by ABV | Nisha - Consumer please-test priority-3 question | As a customer viewing all drinks,
I'd like to filter Beer, Wine, Cider and Spirits by ABV levels: 0.05, 0.5, 1-2.5 and 2.5 - 8%,
so I only see drinks I am comfortable drinking | 1.0 | Filter Beer, Wine, Cider and Spirits by ABV - As a customer viewing all drinks,
I'd like to filter Beer, Wine, Cider and Spirits by ABV levels: 0.05, 0.5, 1-2.5 and 2.5 - 8%,
so I only see drinks I am comfortable drinking | non_process | filter beer wine cider and spirits by abv as a customer viewing all drinks i d like to filter beer wine cider and spirits by abv levels and so i only see drinks i am comfortable drinking | 0 |
389,165 | 26,801,396,440 | IssuesEvent | 2023-02-01 15:20:00 | GillianPlatform/Gillian | https://api.github.com/repos/GillianPlatform/Gillian | closed | Deploy docs | documentation admin | We now have sphinx and odoc documentation ready, we just need to actually deploy it somewhere.
Related to #142 | 1.0 | Deploy docs - We now have sphinx and odoc documentation ready, we just need to actually deploy it somewhere.
Related to #142 | non_process | deploy docs we now have sphinx and odoc documentation ready we just need to actually deploy it somewhere related to | 0 |
17,683 | 23,520,366,515 | IssuesEvent | 2022-08-19 04:48:50 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Is it possible to add a method that kills the process tree? | area-System.Diagnostics.Process untriaged | .Net standard does not provide any method to kill a process tree. | 1.0 | Is it possible to add a method that kills the process tree? - .Net standard does not provide any method to kill a process tree. | process | is it possible to add a method that kills the process tree net standard does not provide any method to kill a process tree | 1 |
95,026 | 27,362,103,365 | IssuesEvent | 2023-02-27 16:30:10 | opensafely-core/databuilder | https://api.github.com/repos/opensafely-core/databuilder | closed | Add sequence operations to generative tests | databuilder-1 | See `tests.generative.variable_strategies.known_missing_operations` for the missing operations.
At the time of writing they are:
```
AggregateByPatient.CombineAsSet,
Function.In,
Function.StringContains,
```
(Note that there may be other non-sequence-related operations mentioned there which are not covered by this issue.)
The operations need adding to `tests.generative.variable_strategies`. If you're not familiar with the generative tests then you should get an intro from (probably) Ben before starting this work.
There is a verification step to ensure that we're including all the operations:
```
GENTEST_COMPREHENSIVE=t GENTEST_EXAMPLES=5000 just test-generative
```
Once you've added the operations you'll find that this fails. You will then need to remove the newly added operations from tests.generative.test_query_model.recorder and it should pass. If you find you've removed the last of the known_missing operations then please remove this exception mechanism altogether.this exception mechanism altogether. | 1.0 | Add sequence operations to generative tests - See `tests.generative.variable_strategies.known_missing_operations` for the missing operations.
At the time of writing they are:
```
AggregateByPatient.CombineAsSet,
Function.In,
Function.StringContains,
```
(Note that there may be other non-sequence-related operations mentioned there which are not covered by this issue.)
The operations need adding to `tests.generative.variable_strategies`. If you're not familiar with the generative tests then you should get an intro from (probably) Ben before starting this work.
There is a verification step to ensure that we're including all the operations:
```
GENTEST_COMPREHENSIVE=t GENTEST_EXAMPLES=5000 just test-generative
```
Once you've added the operations you'll find that this fails. You will then need to remove the newly added operations from tests.generative.test_query_model.recorder and it should pass. If you find you've removed the last of the known_missing operations then please remove this exception mechanism altogether.this exception mechanism altogether. | non_process | add sequence operations to generative tests see tests generative variable strategies known missing operations for the missing operations at the time of writing they are aggregatebypatient combineasset function in function stringcontains note that there may be other non sequence related operations mentioned there which are not covered by this issue the operations need adding to tests generative variable strategies if you re not familiar with the generative tests then you should get an intro from probably ben before starting this work there is a verification step to ensure that we re including all the operations gentest comprehensive t gentest examples just test generative once you ve added the operations you ll find that this fails you will then need to remove the newly added operations from tests generative test query model recorder and it should pass if you find you ve removed the last of the known missing operations then please remove this exception mechanism altogether this exception mechanism altogether | 0 |
6,367 | 9,417,975,792 | IssuesEvent | 2019-04-10 18:03:22 | material-components/material-components-ios | https://api.github.com/repos/material-components/material-components-ios | closed | [Cards] Internal issue: b/129758049 | [Cards] type:Process | This was filed as an internal issue. If you are a Googler, please visit [b/129758049](http://b/129758049) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/129758049](http://b/129758049)
- Blocked by: https://github.com/material-components/material-components-ios/issues/7108
- Blocked by: https://github.com/material-components/material-components-ios/issues/7074
- Blocked by: https://github.com/material-components/material-components-ios/issues/7068
- Blocked by: https://github.com/material-components/material-components-ios/issues/6701
- Blocked by: https://github.com/material-components/material-components-ios/issues/6592
- Blocked by: https://github.com/material-components/material-components-ios/issues/5914
- Blocked by: https://github.com/material-components/material-components-ios/issues/3788 | 1.0 | [Cards] Internal issue: b/129758049 - This was filed as an internal issue. If you are a Googler, please visit [b/129758049](http://b/129758049) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/129758049](http://b/129758049)
- Blocked by: https://github.com/material-components/material-components-ios/issues/7108
- Blocked by: https://github.com/material-components/material-components-ios/issues/7074
- Blocked by: https://github.com/material-components/material-components-ios/issues/7068
- Blocked by: https://github.com/material-components/material-components-ios/issues/6701
- Blocked by: https://github.com/material-components/material-components-ios/issues/6592
- Blocked by: https://github.com/material-components/material-components-ios/issues/5914
- Blocked by: https://github.com/material-components/material-components-ios/issues/3788 | process | internal issue b this was filed as an internal issue if you are a googler please visit for more details internal data associated internal bug blocked by blocked by blocked by blocked by blocked by blocked by blocked by | 1 |
13,181 | 15,609,758,979 | IssuesEvent | 2021-03-19 12:20:09 | hashicorp/packer | https://api.github.com/repos/hashicorp/packer | closed | Post-Processor: Delete docker image in local build machine after pushing to remote registry | enhancement post-processor/docker remote-plugin/docker | I do not want docker images piling up in my jenkins-slave. Is there any way to not commit images to local build machine and able to push docker image to remote registry?
I tried with "export" in builder and "docker-import" in post-processor but its still keeps a copy.
Thanks,
Karthik | 1.0 | Post-Processor: Delete docker image in local build machine after pushing to remote registry - I do not want docker images piling up in my jenkins-slave. Is there any way to not commit images to local build machine and able to push docker image to remote registry?
I tried with "export" in builder and "docker-import" in post-processor but its still keeps a copy.
Thanks,
Karthik | process | post processor delete docker image in local build machine after pushing to remote registry i do not want docker images piling up in my jenkins slave is there any way to not commit images to local build machine and able to push docker image to remote registry i tried with export in builder and docker import in post processor but its still keeps a copy thanks karthik | 1 |
242,170 | 26,257,124,273 | IssuesEvent | 2023-01-06 02:25:48 | Hans-Zamorano-Matamala/mean_entrenamiento | https://api.github.com/repos/Hans-Zamorano-Matamala/mean_entrenamiento | opened | CVE-2022-21704 (Medium) detected in log4js-0.6.38.tgz | security vulnerability | ## CVE-2022-21704 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4js-0.6.38.tgz</b></p></summary>
<p>Port of Log4js to work with node.</p>
<p>Library home page: <a href="https://registry.npmjs.org/log4js/-/log4js-0.6.38.tgz">https://registry.npmjs.org/log4js/-/log4js-0.6.38.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/log4js/package.json</p>
<p>
Dependency Hierarchy:
- karma-1.7.1.tgz (Root Library)
- :x: **log4js-0.6.38.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Hans-Zamorano-Matamala/mean_entrenamiento/commit/0f094ecc422f26d3138f57e9bfc643b6c44307ca">0f094ecc422f26d3138f57e9bfc643b6c44307ca</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
log4js-node is a port of log4js to node.js. In affected versions default file permissions for log files created by the file, fileSync and dateFile appenders are world-readable (in unix). This could cause problems if log files contain sensitive information. This would affect any users that have not supplied their own permissions for the files via the mode parameter in the config. Users are advised to update.
<p>Publish Date: 2022-01-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21704>CVE-2022-21704</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/log4js-node/log4js-node/security/advisories/GHSA-82v2-mx6x-wq7q">https://github.com/log4js-node/log4js-node/security/advisories/GHSA-82v2-mx6x-wq7q</a></p>
<p>Release Date: 2022-01-19</p>
<p>Fix Resolution (log4js): 6.4.0</p>
<p>Direct dependency fix Resolution (karma): 5.0.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-21704 (Medium) detected in log4js-0.6.38.tgz - ## CVE-2022-21704 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4js-0.6.38.tgz</b></p></summary>
<p>Port of Log4js to work with node.</p>
<p>Library home page: <a href="https://registry.npmjs.org/log4js/-/log4js-0.6.38.tgz">https://registry.npmjs.org/log4js/-/log4js-0.6.38.tgz</a></p>
<p>Path to dependency file: /client/package.json</p>
<p>Path to vulnerable library: /client/node_modules/log4js/package.json</p>
<p>
Dependency Hierarchy:
- karma-1.7.1.tgz (Root Library)
- :x: **log4js-0.6.38.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Hans-Zamorano-Matamala/mean_entrenamiento/commit/0f094ecc422f26d3138f57e9bfc643b6c44307ca">0f094ecc422f26d3138f57e9bfc643b6c44307ca</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
log4js-node is a port of log4js to node.js. In affected versions default file permissions for log files created by the file, fileSync and dateFile appenders are world-readable (in unix). This could cause problems if log files contain sensitive information. This would affect any users that have not supplied their own permissions for the files via the mode parameter in the config. Users are advised to update.
<p>Publish Date: 2022-01-19
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-21704>CVE-2022-21704</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/log4js-node/log4js-node/security/advisories/GHSA-82v2-mx6x-wq7q">https://github.com/log4js-node/log4js-node/security/advisories/GHSA-82v2-mx6x-wq7q</a></p>
<p>Release Date: 2022-01-19</p>
<p>Fix Resolution (log4js): 6.4.0</p>
<p>Direct dependency fix Resolution (karma): 5.0.8</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in tgz cve medium severity vulnerability vulnerable library tgz port of to work with node library home page a href path to dependency file client package json path to vulnerable library client node modules package json dependency hierarchy karma tgz root library x tgz vulnerable library found in head commit a href vulnerability details node is a port of to node js in affected versions default file permissions for log files created by the file filesync and datefile appenders are world readable in unix this could cause problems if log files contain sensitive information this would affect any users that have not supplied their own permissions for the files via the mode parameter in the config users are advised to update publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution direct dependency fix resolution karma step up your open source security game with mend | 0 |
24,045 | 23,264,099,428 | IssuesEvent | 2022-08-04 15:44:40 | Kitware/dive | https://api.github.com/repos/Kitware/dive | opened | Future Attribute Features | Type: Feature Request Affects: Usability | Additional Features to add to Attributes and the Attribute Details panel in the future.
Detection Attribute String Graphing:
- Graphing of attributes using a system to display the text/boolean values instead of the numerical values.
- This would require using sometime similar to the event viewer where the color and label of the bar changes over time.
- Require giving each subset value it's own color representation and a key to make it easier to see the values
Track Attribute Filtering:
- Currently the filtering of items only works on the display of the attributes and the visualization of items themselves.
- Track Attribute Filtering would remove and hide tracks that don't meet the necessary requirements.
- This has a use case in reducing the number of tracks visible based on the filter values.
Attribute Annotation Visualization:
- Visualizing attribute values within the annotation itself.
- Select a subset of keys to show in the annotation
- For numerical values this could include a heatmap range of coloration
- For string values it could be an icon or text indicator.
- Could also possible be a style modifier as well, like giving the bbox some transparency with a color representing the attribute if it's value is there and true.
| True | Future Attribute Features - Additional Features to add to Attributes and the Attribute Details panel in the future.
Detection Attribute String Graphing:
- Graphing of attributes using a system to display the text/boolean values instead of the numerical values.
- This would require using sometime similar to the event viewer where the color and label of the bar changes over time.
- Require giving each subset value it's own color representation and a key to make it easier to see the values
Track Attribute Filtering:
- Currently the filtering of items only works on the display of the attributes and the visualization of items themselves.
- Track Attribute Filtering would remove and hide tracks that don't meet the necessary requirements.
- This has a use case in reducing the number of tracks visible based on the filter values.
Attribute Annotation Visualization:
- Visualizing attribute values within the annotation itself.
- Select a subset of keys to show in the annotation
- For numerical values this could include a heatmap range of coloration
- For string values it could be an icon or text indicator.
- Could also possible be a style modifier as well, like giving the bbox some transparency with a color representing the attribute if it's value is there and true.
| non_process | future attribute features additional features to add to attributes and the attribute details panel in the future detection attribute string graphing graphing of attributes using a system to display the text boolean values instead of the numerical values this would require using sometime similar to the event viewer where the color and label of the bar changes over time require giving each subset value it s own color representation and a key to make it easier to see the values track attribute filtering currently the filtering of items only works on the display of the attributes and the visualization of items themselves track attribute filtering would remove and hide tracks that don t meet the necessary requirements this has a use case in reducing the number of tracks visible based on the filter values attribute annotation visualization visualizing attribute values within the annotation itself select a subset of keys to show in the annotation for numerical values this could include a heatmap range of coloration for string values it could be an icon or text indicator could also possible be a style modifier as well like giving the bbox some transparency with a color representing the attribute if it s value is there and true | 0 |
9,920 | 12,960,217,173 | IssuesEvent | 2020-07-20 14:04:44 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Add a time limit to Buildkite jobs | kind/improvement process/candidate | Noticed that a job I started had a mistake on my end, and so it kept going for a couple of hours: https://buildkite.com/prisma/prisma2-test/builds/1864
We should probably have an upper limit on the job duration (like 30 mins or something) | 1.0 | Add a time limit to Buildkite jobs - Noticed that a job I started had a mistake on my end, and so it kept going for a couple of hours: https://buildkite.com/prisma/prisma2-test/builds/1864
We should probably have an upper limit on the job duration (like 30 mins or something) | process | add a time limit to buildkite jobs noticed that a job i started had a mistake on my end and so it kept going for a couple of hours we should probably have an upper limit on the job duration like mins or something | 1 |
9,733 | 11,783,283,507 | IssuesEvent | 2020-03-17 05:02:16 | hhxsv5/laravel-s | https://api.github.com/repos/hhxsv5/laravel-s | closed | laravelS和laravel-excel3.1冲突,导致报错无法导出数据 | compatibility | [2019-10-21 16:53:27] [TRACE] Swoole is running, press Ctrl+C to quit.
PHP Fatal error: Method Encore\Admin\Grid\Tools\Paginator::__toString() must not throw an exception, caught BadMethodCallException: Method Illuminate\Database\Eloquent\Collection::firstItem does not exist. in /www/wwwroot/XXX/storage/framework/views/7e133bcce95b35842dc3896fa8ba106ca4f271b1.php on line 0
Symfony\Component\Debug\Exception\FatalErrorException : Method Encore\Admin\Grid\Tools\Paginator::__toString() must not throw an exception, caught BadMethodCallException: Method Illuminate\Database\Eloquent\Collection::firstItem does not exist.
在未安装使用LaravelS的项目下,使用laravel-excel没有问题和报错,但是在有的项目下,就会报错!使用框架自带的数据导出报错:swoole exit;查询了swoole文档:
https://wiki.swoole.com/wiki/page/501.html ,修改了相应的方法之后还是不行,报了其他错!
| True | laravelS和laravel-excel3.1冲突,导致报错无法导出数据 - [2019-10-21 16:53:27] [TRACE] Swoole is running, press Ctrl+C to quit.
PHP Fatal error: Method Encore\Admin\Grid\Tools\Paginator::__toString() must not throw an exception, caught BadMethodCallException: Method Illuminate\Database\Eloquent\Collection::firstItem does not exist. in /www/wwwroot/XXX/storage/framework/views/7e133bcce95b35842dc3896fa8ba106ca4f271b1.php on line 0
Symfony\Component\Debug\Exception\FatalErrorException : Method Encore\Admin\Grid\Tools\Paginator::__toString() must not throw an exception, caught BadMethodCallException: Method Illuminate\Database\Eloquent\Collection::firstItem does not exist.
在未安装使用LaravelS的项目下,使用laravel-excel没有问题和报错,但是在有的项目下,就会报错!使用框架自带的数据导出报错:swoole exit;查询了swoole文档:
https://wiki.swoole.com/wiki/page/501.html ,修改了相应的方法之后还是不行,报了其他错!
| non_process | laravels和laravel ,导致报错无法导出数据 swoole is running press ctrl c to quit php fatal error method encore admin grid tools paginator tostring must not throw an exception caught badmethodcallexception method illuminate database eloquent collection firstitem does not exist in www wwwroot xxx storage framework views php on line symfony component debug exception fatalerrorexception method encore admin grid tools paginator tostring must not throw an exception caught badmethodcallexception method illuminate database eloquent collection firstitem does not exist 在未安装使用laravels的项目下,使用laravel excel没有问题和报错,但是在有的项目下,就会报错!使用框架自带的数据导出报错:swoole exit 查询了swoole文档: 修改了相应的方法之后还是不行,报了其他错! | 0 |
5,482 | 8,356,217,461 | IssuesEvent | 2018-10-02 17:50:45 | HumanCellAtlas/dcp-community | https://api.github.com/repos/HumanCellAtlas/dcp-community | closed | Updates to RFC template from reviews | rfc-process | - [x] **User Stories** are *Required* rather than *Optional*
- [x] **Motivation** reference more generic PM plans instead of a _Thematic_ roadmap
- [x] **Prior Art** add _Community Standards_ as another example
- [x] **Acceptance Criteria** add new optional section suggested by @kbergin | 1.0 | Updates to RFC template from reviews - - [x] **User Stories** are *Required* rather than *Optional*
- [x] **Motivation** reference more generic PM plans instead of a _Thematic_ roadmap
- [x] **Prior Art** add _Community Standards_ as another example
- [x] **Acceptance Criteria** add new optional section suggested by @kbergin | process | updates to rfc template from reviews user stories are required rather than optional motivation reference more generic pm plans instead of a thematic roadmap prior art add community standards as another example acceptance criteria add new optional section suggested by kbergin | 1 |
360,107 | 25,273,532,751 | IssuesEvent | 2022-11-16 10:57:57 | fedotkin/dotnet | https://api.github.com/repos/fedotkin/dotnet | closed | Root folder workarounds | documentation master | Root folder looks not good! There is no git ignore settings and readme-file looks ugly.
1. Rename Readme.txt file to [README.md](../trey-nash/README.md). Update content by short repository summary, .NET framework info, links and dotnet logo. Note: Readme-file is added on the first UI steps during a repo creation.
2. Add `.gitignore` file. This is the common approach to support GitHub's repositories. The best practice for code repository: it should not contain compiler's build-files at all, only source code! More info here: [github/gitignore](https://github.com/github/gitignore). I advise to use [VisualStudio.gitignore](https://github.com/github/gitignore/blob/main/VisualStudio.gitignore) because the main coding language of this repo is C#. Just copy this file to the root folder and rename to `.gitignore`.
| 1.0 | Root folder workarounds - Root folder looks not good! There is no git ignore settings and readme-file looks ugly.
1. Rename Readme.txt file to [README.md](../trey-nash/README.md). Update content by short repository summary, .NET framework info, links and dotnet logo. Note: Readme-file is added on the first UI steps during a repo creation.
2. Add `.gitignore` file. This is the common approach to support GitHub's repositories. The best practice for code repository: it should not contain compiler's build-files at all, only source code! More info here: [github/gitignore](https://github.com/github/gitignore). I advise to use [VisualStudio.gitignore](https://github.com/github/gitignore/blob/main/VisualStudio.gitignore) because the main coding language of this repo is C#. Just copy this file to the root folder and rename to `.gitignore`.
| non_process | root folder workarounds root folder looks not good there is no git ignore settings and readme file looks ugly rename readme txt file to trey nash readme md update content by short repository summary net framework info links and dotnet logo note readme file is added on the first ui steps during a repo creation add gitignore file this is the common approach to support github s repositories the best practice for code repository it should not contain compiler s build files at all only source code more info here i advise to use because the main coding language of this repo is c just copy this file to the root folder and rename to gitignore | 0 |
344,849 | 24,831,313,368 | IssuesEvent | 2022-10-26 03:56:18 | casey/just | https://api.github.com/repos/casey/just | closed | Make a video | documentation | I think a quick terminalcast is probably the best way to convey what just does.
- make a new folder
- cd into that folder
- start a new project
- initialize a rust project
- demonstrate every feature
- show error messages | 1.0 | Make a video - I think a quick terminalcast is probably the best way to convey what just does.
- make a new folder
- cd into that folder
- start a new project
- initialize a rust project
- demonstrate every feature
- show error messages | non_process | make a video i think a quick terminalcast is probably the best way to convey what just does make a new folder cd into that folder start a new project initialize a rust project demonstrate every feature show error messages | 0 |
14,361 | 17,382,011,840 | IssuesEvent | 2021-07-31 22:58:03 | AcademySoftwareFoundation/OpenCue | https://api.github.com/repos/AcademySoftwareFoundation/OpenCue | closed | Upgrade PySide to a more recent version. | process | **Describe the process**
Newer versions of Python 3 / pip on Windows no longer provide the version of PySide2 that we're using:
```
> pip3 install PySide2==5.11.2
ERROR: Could not find a version that satisfies the requirement PySide2==5.11.2 (from versions: 5.14.0, 5.14.1, 5.14.2, 5.14.2.1, 5.14.2.2, 5.14.2.3, 5.15.0)
ERROR: No matching distribution found for PySide2==5.11.2
```
We should upgrade to a more recent version to avoid affecting more Windows users. | 1.0 | Upgrade PySide to a more recent version. - **Describe the process**
Newer versions of Python 3 / pip on Windows no longer provide the version of PySide2 that we're using:
```
> pip3 install PySide2==5.11.2
ERROR: Could not find a version that satisfies the requirement PySide2==5.11.2 (from versions: 5.14.0, 5.14.1, 5.14.2, 5.14.2.1, 5.14.2.2, 5.14.2.3, 5.15.0)
ERROR: No matching distribution found for PySide2==5.11.2
```
We should upgrade to a more recent version to avoid affecting more Windows users. | process | upgrade pyside to a more recent version describe the process newer versions of python pip on windows no longer provide the version of that we re using install error could not find a version that satisfies the requirement from versions error no matching distribution found for we should upgrade to a more recent version to avoid affecting more windows users | 1 |
119,785 | 12,042,806,482 | IssuesEvent | 2020-04-14 11:16:06 | poliastro/poliastro | https://api.github.com/repos/poliastro/poliastro | opened | Recommended conda instructions might leave user with older version | bug documentation | After following the official installation instructions for conda, @simium ended up with poliastro 0.9.1. This deserves a bit of investigation, but I suspect that having an old `base` environment and conda dependency solving played a role. We should either:
* Recommend installing exact versions: `conda install -c conda-forge poliastro=0.14"
* Recommend creating a new environment | 1.0 | Recommended conda instructions might leave user with older version - After following the official installation instructions for conda, @simium ended up with poliastro 0.9.1. This deserves a bit of investigation, but I suspect that having an old `base` environment and conda dependency solving played a role. We should either:
* Recommend installing exact versions: `conda install -c conda-forge poliastro=0.14"
* Recommend creating a new environment | non_process | recommended conda instructions might leave user with older version after following the official installation instructions for conda simium ended up with poliastro this deserves a bit of investigation but i suspect that having an old base environment and conda dependency solving played a role we should either recommend installing exact versions conda install c conda forge poliastro recommend creating a new environment | 0 |
15,287 | 19,286,484,705 | IssuesEvent | 2021-12-11 03:04:10 | q191201771/lal | https://api.github.com/repos/q191201771/lal | closed | 建议…可以做一个web后台监管系统 | #Feature *In process *Indefinite delay | 1.实时更新推拉流的客户端数量,ip,带宽流量等
2.分布式集群管理,一键添加或删除流服务器节点
3.实时监控播放正在推流内容,类似于监控室画面
4.根据时间段,播放量等可过滤直播回放视频列表
等等以及其他需要监管的数据信息。 | 1.0 | 建议…可以做一个web后台监管系统 - 1.实时更新推拉流的客户端数量,ip,带宽流量等
2.分布式集群管理,一键添加或删除流服务器节点
3.实时监控播放正在推流内容,类似于监控室画面
4.根据时间段,播放量等可过滤直播回放视频列表
等等以及其他需要监管的数据信息。 | process | 建议…可以做一个web后台监管系统 实时更新推拉流的客户端数量,ip,带宽流量等 分布式集群管理,一键添加或删除流服务器节点 实时监控播放正在推流内容,类似于监控室画面 根据时间段,播放量等可过滤直播回放视频列表 等等以及其他需要监管的数据信息。 | 1 |
271,410 | 23,602,892,163 | IssuesEvent | 2022-08-24 05:02:05 | istio/istio | https://api.github.com/repos/istio/istio | closed | Renaming base chart as istio base | kind/enhancement area/test and release area/environments area/user experience lifecycle/stale | (This is used to request new product features, please visit <https://discuss.istio.io> for questions on using Istio)
**Describe the feature request**
Rename Istio base chart name to istio-base
**Describe alternatives you've considered**
The chart name cannot have "base" name, since there a lot of base projects and the chart needs to be defined as an base with istio context.
ref: https://artifacthub.io/packages/helm/istio-official/base
**Affected product area (please put an X in all that apply)**
[ ] Docs
[X] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[X] User Experience
[X] Developer Infrastructure
**Affected features (please put an X in all that apply)**
[ ] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane
**Additional context**
| 1.0 | Renaming base chart as istio base - (This is used to request new product features, please visit <https://discuss.istio.io> for questions on using Istio)
**Describe the feature request**
Rename Istio base chart name to istio-base
**Describe alternatives you've considered**
The chart name cannot have "base" name, since there a lot of base projects and the chart needs to be defined as an base with istio context.
ref: https://artifacthub.io/packages/helm/istio-official/base
**Affected product area (please put an X in all that apply)**
[ ] Docs
[X] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Extensions and Telemetry
[ ] Security
[ ] Test and Release
[X] User Experience
[X] Developer Infrastructure
**Affected features (please put an X in all that apply)**
[ ] Multi Cluster
[ ] Virtual Machine
[ ] Multi Control Plane
**Additional context**
| non_process | renaming base chart as istio base this is used to request new product features please visit for questions on using istio describe the feature request rename istio base chart name to istio base describe alternatives you ve considered the chart name cannot have base name since there a lot of base projects and the chart needs to be defined as an base with istio context ref affected product area please put an x in all that apply docs installation networking performance and scalability extensions and telemetry security test and release user experience developer infrastructure affected features please put an x in all that apply multi cluster virtual machine multi control plane additional context | 0 |
666,304 | 22,349,638,188 | IssuesEvent | 2022-06-15 10:51:21 | codeklasse/codeklasse.de | https://api.github.com/repos/codeklasse/codeklasse.de | opened | Change regular text font | priority:high | Averta is too pricy for our use case.
We should find another simple font for text usage. | 1.0 | Change regular text font - Averta is too pricy for our use case.
We should find another simple font for text usage. | non_process | change regular text font averta is too pricy for our use case we should find another simple font for text usage | 0 |
789,482 | 27,791,678,161 | IssuesEvent | 2023-03-17 09:24:13 | VeriFIT/mata | https://api.github.com/repos/VeriFIT/mata | opened | Complementation over non-existent states | For:library Module:nfa Type:discussion Priority:low | In classical complement algorithm implemented in `Mata::Nfa::complement_classical`, we call `Mata::Nfa::Nfa::size()` which returns domain size of initial/final state sets. Those might contain “deleted” states (final/initial at some point but not now, yet still allocated with `false` value in `NumberPredicate`), so when we complement such automaton, non-existent states become final.
As of now, we agreed on leaving it up to the user to call `Mata::Nfa::Nfa::trim` before complementing. We might want to reconsider this decision in the future. | 1.0 | Complementation over non-existent states - In classical complement algorithm implemented in `Mata::Nfa::complement_classical`, we call `Mata::Nfa::Nfa::size()` which returns domain size of initial/final state sets. Those might contain “deleted” states (final/initial at some point but not now, yet still allocated with `false` value in `NumberPredicate`), so when we complement such automaton, non-existent states become final.
As of now, we agreed on leaving it up to the user to call `Mata::Nfa::Nfa::trim` before complementing. We might want to reconsider this decision in the future. | non_process | complementation over non existent states in classical complement algorithm implemented in mata nfa complement classical we call mata nfa nfa size which returns domain size of initial final state sets those might contain “deleted” states final initial at some point but not now yet still allocated with false value in numberpredicate so when we complement such automaton non existent states become final as of now we agreed on leaving it up to the user to call mata nfa nfa trim before complementing we might want to reconsider this decision in the future | 0 |
176,120 | 14,564,010,015 | IssuesEvent | 2020-12-17 03:58:26 | MakeContributions/markdown-dungeon | https://api.github.com/repos/MakeContributions/markdown-dungeon | closed | Shouldn't We Add Translations | beginner documentation good first issue help wanted | Recently, there has been many commits to the `english` folder and fresh rooms and dungeons have been added.
But, the new files have not been translated into portuguese (I think chinese is a seperate dungeon).
So, shouldn't we update the translations?
>Inviting some maintainers for their feedback: @ming-tsai, @Arsenic-ATG, @FukurouMakoto. Others are also welcome. | 1.0 | Shouldn't We Add Translations - Recently, there has been many commits to the `english` folder and fresh rooms and dungeons have been added.
But, the new files have not been translated into portuguese (I think chinese is a seperate dungeon).
So, shouldn't we update the translations?
>Inviting some maintainers for their feedback: @ming-tsai, @Arsenic-ATG, @FukurouMakoto. Others are also welcome. | non_process | shouldn t we add translations recently there has been many commits to the english folder and fresh rooms and dungeons have been added but the new files have not been translated into portuguese i think chinese is a seperate dungeon so shouldn t we update the translations inviting some maintainers for their feedback ming tsai arsenic atg fukuroumakoto others are also welcome | 0 |
12,544 | 14,975,634,863 | IssuesEvent | 2021-01-28 06:33:59 | darktable-org/darktable | https://api.github.com/repos/darktable-org/darktable | closed | Support for Dual Demosaicing (e.g. 3-pass+fast and AMaZE+VNG4) | feature: new scope: image processing | Last year RawTherapee added support for "[Dual Demosaicing](https://rawpedia.rawtherapee.com/Demosaicing#Dual_Demosaic)", which uses two different types of demosaicing on the same image, with each type being used on the parts of the image that it would work best with.
This feature has been very well received on RawTherapee, and it would be great to see Darktable add it as well. In particular, the feature has been well received in the 3-pass+fast (Markesteijn) implementation on X-Trans sensors (as it helps with noise performance on flat areas). [AMaZE+VNG4](https://discuss.pixls.us/t/combined-amaze-and-vng4-demosaic/7897), RCD+VNG4, and DCB+VNG4 have been well received on Bayer sensors as well. | 1.0 | Support for Dual Demosaicing (e.g. 3-pass+fast and AMaZE+VNG4) - Last year RawTherapee added support for "[Dual Demosaicing](https://rawpedia.rawtherapee.com/Demosaicing#Dual_Demosaic)", which uses two different types of demosaicing on the same image, with each type being used on the parts of the image that it would work best with.
This feature has been very well received on RawTherapee, and it would be great to see Darktable add it as well. In particular, the feature has been well received in the 3-pass+fast (Markesteijn) implementation on X-Trans sensors (as it helps with noise performance on flat areas). [AMaZE+VNG4](https://discuss.pixls.us/t/combined-amaze-and-vng4-demosaic/7897), RCD+VNG4, and DCB+VNG4 have been well received on Bayer sensors as well. | process | support for dual demosaicing e g pass fast and amaze last year rawtherapee added support for which uses two different types of demosaicing on the same image with each type being used on the parts of the image that it would work best with this feature has been very well received on rawtherapee and it would be great to see darktable add it as well in particular the feature has been well received in the pass fast markesteijn implementation on x trans sensors as it helps with noise performance on flat areas rcd and dcb have been well received on bayer sensors as well | 1 |
6,388 | 9,462,590,267 | IssuesEvent | 2019-04-17 15:44:36 | openopps/openopps-platform | https://api.github.com/repos/openopps/openopps-platform | closed | Change security clearance values to match USAJOBS | Apply Process Requirements Ready State Dept. | Who: Internship applicants
What: Security clearance values
Why: To match USAJOBS
Modify the security clearance dropdown on the internship application to match the new values in USAJOBS.

| 1.0 | Change security clearance values to match USAJOBS - Who: Internship applicants
What: Security clearance values
Why: To match USAJOBS
Modify the security clearance dropdown on the internship application to match the new values in USAJOBS.

| process | change security clearance values to match usajobs who internship applicants what security clearance values why to match usajobs modify the security clearance dropdown on the internship application to match the new values in usajobs | 1 |
22,309 | 30,862,207,782 | IssuesEvent | 2023-08-03 04:45:14 | vnphanquang/svelte-put | https://api.github.com/repos/vnphanquang/svelte-put | closed | [preprocess-auto-slug] Unit Tests | scope:preprocess-auto-slug type:tests | Implement testing for `preprocess-auto-slug`, simple input source code vs processed output? | 1.0 | [preprocess-auto-slug] Unit Tests - Implement testing for `preprocess-auto-slug`, simple input source code vs processed output? | process | unit tests implement testing for preprocess auto slug simple input source code vs processed output | 1 |
83,756 | 3,642,103,851 | IssuesEvent | 2016-02-14 03:22:26 | leo-project/leofs | https://api.github.com/repos/leo-project/leofs | closed | [leo_erasure] double free & memory leak and fragmentation | Bug Priority-HIGH | ## Problem
- double free
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L148-L149
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L263-L264
- `real_decoding_matrix` will leak at the below blocks
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L517
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L536
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L640
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L659
- fragmentation
- Since there are lots of malloc/free calls over multiple threads under heavy load, Heap can be fragmented and that may result in failing malloc ( return NULL ) and may crash due to the above issues.
## Solution
dead simple.
Allocating on stack will solve all problems. | 1.0 | [leo_erasure] double free & memory leak and fragmentation - ## Problem
- double free
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L148-L149
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L263-L264
- `real_decoding_matrix` will leak at the below blocks
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L517
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L536
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L640
- https://github.com/leo-project/leo_erasure/blob/develop/c_src/jerasure_mod.cpp#L659
- fragmentation
- Since there are lots of malloc/free calls over multiple threads under heavy load, Heap can be fragmented and that may result in failing malloc ( return NULL ) and may crash due to the above issues.
## Solution
dead simple.
Allocating on stack will solve all problems. | non_process | double free memory leak and fragmentation problem double free real decoding matrix will leak at the below blocks fragmentation since there are lots of malloc free calls over multiple threads under heavy load heap can be fragmented and that may result in failing malloc return null and may crash due to the above issues solution dead simple allocating on stack will solve all problems | 0 |
212,793 | 23,953,009,398 | IssuesEvent | 2022-09-12 13:03:43 | mendts-workshop/MarcinKuder | https://api.github.com/repos/mendts-workshop/MarcinKuder | opened | mysql-connector-java-5.1.25.jar: 9 vulnerabilities (highest severity is: 8.5) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2017-3523](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.5 | mysql-connector-java-5.1.25.jar | Direct | 5.1.41 | ✅ |
| [CVE-2022-21363](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21363) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.6 | mysql-connector-java-5.1.25.jar | Direct | mysql:mysql-connector-java:8.0.28 | ✅ |
| [CVE-2017-3586](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.4 | mysql-connector-java-5.1.25.jar | Direct | 5.1.42 | ✅ |
| [CVE-2020-2934](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.0 | mysql-connector-java-5.1.25.jar | Direct | 5.1.49 | ✅ |
| [CVE-2020-2875](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2875) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.7 | mysql-connector-java-5.1.25.jar | Direct | 5.1.49 | ✅ |
| [CVE-2019-2692](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-2692) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.5 | mysql-connector-java-5.1.25.jar | Direct | 5.1.48 | ✅ |
| [CVE-2015-2575](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-2575) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.2 | mysql-connector-java-5.1.25.jar | Direct | 5.1.35 | ✅ |
| [CVE-2017-3589](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3589) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.3 | mysql-connector-java-5.1.25.jar | Direct | 5.1.42 | ✅ |
| [CVE-2020-2933](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 2.2 | mysql-connector-java-5.1.25.jar | Direct | 5.1.49 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-3523</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523>CVE-2017-3523</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-2xxh-f8r3-hvvr">https://github.com/advisories/GHSA-2xxh-f8r3-hvvr</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.41</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-21363</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.27 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.1 Base Score 6.6 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H).
<p>Publish Date: 2022-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21363>CVE-2022-21363</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-g76j-4cxx-23h9">https://github.com/advisories/GHSA-g76j-4cxx-23h9</a></p>
<p>Release Date: 2022-01-19</p>
<p>Fix Resolution: mysql:mysql-connector-java:8.0.28</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2017-3586</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 6.4 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586>CVE-2017-3586</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.4</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1444406">https://bugzilla.redhat.com/show_bug.cgi?id=1444406</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.42</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-2934</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934>CVE-2020-2934</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.oracle.com/security-alerts/cpuapr2020.html">https://www.oracle.com/security-alerts/cpuapr2020.html</a></p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: 5.1.49</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-2875</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.14 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker and while the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 4.7 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:C/C:L/I:L/A:N).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2875>CVE-2020-2875</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: 5.1.49</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-2692</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 8.0.15 and prior. Difficult to exploit vulnerability allows high privileged attacker with logon to the infrastructure where MySQL Connectors executes to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 6.3 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:L/AC:H/PR:H/UI:R/S:U/C:H/I:H/A:H).
<p>Publish Date: 2019-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-2692>CVE-2019-2692</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jcq3-cprp-m333">https://github.com/advisories/GHSA-jcq3-cprp-m333</a></p>
<p>Release Date: 2020-08-24</p>
<p>Fix Resolution: 5.1.48</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2015-2575</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Unspecified vulnerability in the MySQL Connectors component in Oracle MySQL 5.1.34 and earlier allows remote authenticated users to affect confidentiality and integrity via unknown vectors related to Connector/J.
<p>Publish Date: 2015-04-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-2575>CVE-2015-2575</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.2</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-gc43-g62c-99g2">https://github.com/advisories/GHSA-gc43-g62c-99g2</a></p>
<p>Release Date: 2015-04-16</p>
<p>Fix Resolution: 5.1.35</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2017-3589</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with logon to the infrastructure where MySQL Connectors executes to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data. CVSS 3.0 Base Score 3.3 (Integrity impacts). CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:N).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3589>CVE-2017-3589</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-3589">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-3589</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.42</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2020-2933</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933>CVE-2020-2933</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>2.2</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING">https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING</a></p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: 5.1.49</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | True | mysql-connector-java-5.1.25.jar: 9 vulnerabilities (highest severity is: 8.5) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2017-3523](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 8.5 | mysql-connector-java-5.1.25.jar | Direct | 5.1.41 | ✅ |
| [CVE-2022-21363](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21363) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.6 | mysql-connector-java-5.1.25.jar | Direct | mysql:mysql-connector-java:8.0.28 | ✅ |
| [CVE-2017-3586](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 6.4 | mysql-connector-java-5.1.25.jar | Direct | 5.1.42 | ✅ |
| [CVE-2020-2934](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 5.0 | mysql-connector-java-5.1.25.jar | Direct | 5.1.49 | ✅ |
| [CVE-2020-2875](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2875) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.7 | mysql-connector-java-5.1.25.jar | Direct | 5.1.49 | ✅ |
| [CVE-2019-2692](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-2692) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.5 | mysql-connector-java-5.1.25.jar | Direct | 5.1.48 | ✅ |
| [CVE-2015-2575](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-2575) | <img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Medium | 4.2 | mysql-connector-java-5.1.25.jar | Direct | 5.1.35 | ✅ |
| [CVE-2017-3589](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3589) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 3.3 | mysql-connector-java-5.1.25.jar | Direct | 5.1.42 | ✅ |
| [CVE-2020-2933](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933) | <img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Low | 2.2 | mysql-connector-java-5.1.25.jar | Direct | 5.1.49 | ✅ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2017-3523</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.40 and earlier. Difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 8.5 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3523>CVE-2017-3523</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>8.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-2xxh-f8r3-hvvr">https://github.com/advisories/GHSA-2xxh-f8r3-hvvr</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.41</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2022-21363</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.27 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.1 Base Score 6.6 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:H/I:H/A:H).
<p>Publish Date: 2022-01-19
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21363>CVE-2022-21363</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.6</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-g76j-4cxx-23h9">https://github.com/advisories/GHSA-g76j-4cxx-23h9</a></p>
<p>Release Date: 2022-01-19</p>
<p>Fix Resolution: mysql:mysql-connector-java:8.0.28</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2017-3586</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with network access via multiple protocols to compromise MySQL Connectors. While the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 6.4 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3586>CVE-2017-3586</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>6.4</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1444406">https://bugzilla.redhat.com/show_bug.cgi?id=1444406</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.42</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-2934</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934>CVE-2020-2934</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>5.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.oracle.com/security-alerts/cpuapr2020.html">https://www.oracle.com/security-alerts/cpuapr2020.html</a></p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: 5.1.49</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2020-2875</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.14 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker and while the vulnerability is in MySQL Connectors, attacks may significantly impact additional products. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data. CVSS 3.0 Base Score 4.7 (Confidentiality and Integrity impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:C/C:L/I:L/A:N).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2875>CVE-2020-2875</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.7</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: 5.1.49</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2019-2692</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 8.0.15 and prior. Difficult to exploit vulnerability allows high privileged attacker with logon to the infrastructure where MySQL Connectors executes to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in takeover of MySQL Connectors. CVSS 3.0 Base Score 6.3 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:L/AC:H/PR:H/UI:R/S:U/C:H/I:H/A:H).
<p>Publish Date: 2019-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-2692>CVE-2019-2692</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-jcq3-cprp-m333">https://github.com/advisories/GHSA-jcq3-cprp-m333</a></p>
<p>Release Date: 2020-08-24</p>
<p>Fix Resolution: 5.1.48</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> CVE-2015-2575</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Unspecified vulnerability in the MySQL Connectors component in Oracle MySQL 5.1.34 and earlier allows remote authenticated users to affect confidentiality and integrity via unknown vectors related to Connector/J.
<p>Publish Date: 2015-04-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-2575>CVE-2015-2575</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>4.2</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-gc43-g62c-99g2">https://github.com/advisories/GHSA-gc43-g62c-99g2</a></p>
<p>Release Date: 2015-04-16</p>
<p>Fix Resolution: 5.1.35</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2017-3589</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors component of Oracle MySQL (subcomponent: Connector/J). Supported versions that are affected are 5.1.41 and earlier. Easily "exploitable" vulnerability allows low privileged attacker with logon to the infrastructure where MySQL Connectors executes to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data. CVSS 3.0 Base Score 3.3 (Integrity impacts). CVSS Vector: (CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:L/A:N).
<p>Publish Date: 2017-04-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-3589>CVE-2017-3589</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>3.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-3589">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-3589</a></p>
<p>Release Date: 2017-04-24</p>
<p>Fix Resolution: 5.1.42</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details><details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> CVE-2020-2933</summary>
### Vulnerable Library - <b>mysql-connector-java-5.1.25.jar</b></p>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /epository/mysql/mysql-connector-java/5.1.25/mysql-connector-java-5.1.25.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.25.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/mendts-workshop/MarcinKuder/commit/316ad4ed4f8aad0802f46fc8f19b1625d6daa719">316ad4ed4f8aad0802f46fc8f19b1625d6daa719</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 5.1.48 and prior. Difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 2.2 (Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:N/A:L).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2933>CVE-2020-2933</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>2.2</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING">https://docs.oracle.com/javase/7/docs/api/javax/xml/XMLConstants.html#FEATURE_SECURE_PROCESSING</a></p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: 5.1.49</p>
</p>
<p></p>
:rescue_worker_helmet: Automatic Remediation is available for this issue
</details>
***
<p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p> | non_process | mysql connector java jar vulnerabilities highest severity is vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high mysql connector java jar direct medium mysql connector java jar direct mysql mysql connector java medium mysql connector java jar direct medium mysql connector java jar direct medium mysql connector java jar direct medium mysql connector java jar direct medium mysql connector java jar direct low mysql connector java jar direct low mysql connector java jar direct details cve vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier difficult to exploit vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr l ui n s c c h i h a h publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr h ui n s u c h i h a h publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mysql mysql connector java rescue worker helmet automatic remediation is available for this issue cve vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier easily exploitable vulnerability allows low privileged attacker with network access via multiple protocols to compromise mysql connectors while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data cvss base score confidentiality and integrity impacts cvss vector cvss av n ac l pr l ui n s c c l i l a n publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior and and prior difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise mysql connectors successful attacks require human interaction from a person other than the attacker successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data and unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr n ui r s u c l i l a l publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior and and prior difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise mysql connectors successful attacks require human interaction from a person other than the attacker and while the vulnerability is in mysql connectors attacks may significantly impact additional products successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data cvss base score confidentiality and integrity impacts cvss vector cvss av n ac h pr n ui r s c c l i l a n publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and prior difficult to exploit vulnerability allows high privileged attacker with logon to the infrastructure where mysql connectors executes to compromise mysql connectors successful attacks require human interaction from a person other than the attacker successful attacks of this vulnerability can result in takeover of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av l ac h pr h ui r s u c h i h a h publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details unspecified vulnerability in the mysql connectors component in oracle mysql and earlier allows remote authenticated users to affect confidentiality and integrity via unknown vectors related to connector j publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors component of oracle mysql subcomponent connector j supported versions that are affected are and earlier easily exploitable vulnerability allows low privileged attacker with logon to the infrastructure where mysql connectors executes to compromise mysql connectors successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data cvss base score integrity impacts cvss vector cvss av l ac l pr l ui n s u c n i l a n publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to dependency file pom xml path to vulnerable library epository mysql mysql connector java mysql connector java jar dependency hierarchy x mysql connector java jar vulnerable library found in head commit a href found in base branch master vulnerability details vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior difficult to exploit vulnerability allows high privileged attacker with network access via multiple protocols to compromise mysql connectors successful attacks of this vulnerability can result in unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score availability impacts cvss vector cvss av n ac h pr h ui n s u c n i n a l publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue | 0 |
2,558 | 5,313,781,930 | IssuesEvent | 2017-02-13 13:18:51 | openvstorage/volumedriver | https://api.github.com/repos/openvstorage/volumedriver | closed | Directories can be renamed | process_wontfix type_feature | While volumes cannot be renamed, directories can. This results in the model being out-of-sync with reality. We would either need an event that informs us about the directory rename, or block directory renames at all.
If we decide that renaming directories (or volumes for that matter) need to be possible and an event is added, it would also be interesting to expose a call on the volumedriver API that can find the path for a given volume_id. | 1.0 | Directories can be renamed - While volumes cannot be renamed, directories can. This results in the model being out-of-sync with reality. We would either need an event that informs us about the directory rename, or block directory renames at all.
If we decide that renaming directories (or volumes for that matter) need to be possible and an event is added, it would also be interesting to expose a call on the volumedriver API that can find the path for a given volume_id. | process | directories can be renamed while volumes cannot be renamed directories can this results in the model being out of sync with reality we would either need an event that informs us about the directory rename or block directory renames at all if we decide that renaming directories or volumes for that matter need to be possible and an event is added it would also be interesting to expose a call on the volumedriver api that can find the path for a given volume id | 1 |
16,003 | 28,435,927,718 | IssuesEvent | 2023-04-15 10:02:43 | makingdevices/TC-Logger-Device | https://api.github.com/repos/makingdevices/TC-Logger-Device | opened | [Requisite][1.2b] Thermocouple Reading | Baseline Requirement | The logger device needs to work with Thermocouples type K. In addition, the speed will be for at least 2Hz or 2 reading every second. The precision will be +-0.1C
Process Design:
- [ ] Find IC chip for thermocouple readings
- [ ] Implement the communication between main IC and ADC
Validation Design:
- [ ] The data is reliable and fast enough | 1.0 | [Requisite][1.2b] Thermocouple Reading - The logger device needs to work with Thermocouples type K. In addition, the speed will be for at least 2Hz or 2 reading every second. The precision will be +-0.1C
Process Design:
- [ ] Find IC chip for thermocouple readings
- [ ] Implement the communication between main IC and ADC
Validation Design:
- [ ] The data is reliable and fast enough | non_process | thermocouple reading the logger device needs to work with thermocouples type k in addition the speed will be for at least or reading every second the precision will be process design find ic chip for thermocouple readings implement the communication between main ic and adc validation design the data is reliable and fast enough | 0 |
11,628 | 14,485,577,197 | IssuesEvent | 2020-12-10 17:45:53 | Feryi/5a | https://api.github.com/repos/Feryi/5a | opened | complete_size_estimating_template | process_dashboard | - completar el formato de estimcion de LOC con los valores obtenidos | 1.0 | complete_size_estimating_template - - completar el formato de estimcion de LOC con los valores obtenidos | process | complete size estimating template completar el formato de estimcion de loc con los valores obtenidos | 1 |
187,837 | 15,109,399,037 | IssuesEvent | 2021-02-08 17:47:57 | commercialhaskell/stackage | https://api.github.com/repos/commercialhaskell/stackage | closed | Docker images not automatically built? | type:documentation | The latest LTS versions seem to not have dockers [HERE](https://hub.docker.com/r/fpco/stack-build-small/).
Has something gone wrong in the automatic builds, or is there a certain release schedule for the newest LTS versions? | 1.0 | Docker images not automatically built? - The latest LTS versions seem to not have dockers [HERE](https://hub.docker.com/r/fpco/stack-build-small/).
Has something gone wrong in the automatic builds, or is there a certain release schedule for the newest LTS versions? | non_process | docker images not automatically built the latest lts versions seem to not have dockers has something gone wrong in the automatic builds or is there a certain release schedule for the newest lts versions | 0 |
36,840 | 8,153,877,234 | IssuesEvent | 2018-08-23 00:01:59 | primefaces/primereact | https://api.github.com/repos/primefaces/primereact | closed | Tooltip does not remove event listeners | defect | ```
[X ] bug report
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57
```
**Current behavior**
Tooltip does not remove event listeners in componentWillUnmount().
After it is displayed once, it remains there forever.
**Expected behavior**
remove event listeners when unmounting
| 1.0 | Tooltip does not remove event listeners - ```
[X ] bug report
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57
```
**Current behavior**
Tooltip does not remove event listeners in componentWillUnmount().
After it is displayed once, it remains there forever.
**Expected behavior**
remove event listeners when unmounting
| non_process | tooltip does not remove event listeners bug report feature request support request please do not submit support request here instead see current behavior tooltip does not remove event listeners in componentwillunmount after it is displayed once it remains there forever expected behavior remove event listeners when unmounting | 0 |
2,160 | 5,006,506,047 | IssuesEvent | 2016-12-12 14:21:47 | openvstorage/framework | https://api.github.com/repos/openvstorage/framework | closed | No such file or directory: '/opt/OpenvStorage/config/framework.json' | process_wontfix | I follow this quick install guide : https://openvstorage.gitbooks.io/administration/content/Installation/quickinstall.html
and cluster install guide : https://openvstorage.gitbooks.io/administration/content/Installation/quickinstall.html
In the first config : ovs setup
ERROR: Failed to setup first node
ERROR: No authentication methods available
+++ Rolling back setup of current node +++
+++ An unexpected error occurred: +++
+++ [Errno 2] No such file or directory: '/opt/OpenvStorage/config/framework.json' +++ | 1.0 | No such file or directory: '/opt/OpenvStorage/config/framework.json' - I follow this quick install guide : https://openvstorage.gitbooks.io/administration/content/Installation/quickinstall.html
and cluster install guide : https://openvstorage.gitbooks.io/administration/content/Installation/quickinstall.html
In the first config : ovs setup
ERROR: Failed to setup first node
ERROR: No authentication methods available
+++ Rolling back setup of current node +++
+++ An unexpected error occurred: +++
+++ [Errno 2] No such file or directory: '/opt/OpenvStorage/config/framework.json' +++ | process | no such file or directory opt openvstorage config framework json i follow this quick install guide and cluster install guide in the first config ovs setup error failed to setup first node error no authentication methods available rolling back setup of current node an unexpected error occurred no such file or directory opt openvstorage config framework json | 1 |
266,874 | 28,480,075,727 | IssuesEvent | 2023-04-18 01:23:01 | tomdgl397/juice-shop | https://api.github.com/repos/tomdgl397/juice-shop | opened | CVE-2023-28484 (Medium) detected in reactos0.4.13-dev | Mend: dependency security vulnerability | ## CVE-2023-28484 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>reactos0.4.13-dev</b></p></summary>
<p>
<p>A free Windows-compatible Operating System</p>
<p>Library home page: <a href=https://github.com/reactos/reactos.git>https://github.com/reactos/reactos.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/tomdgl397/juice-shop/commit/24e92478a2e956132cc96bf9e3bc8ca7fecf375d">24e92478a2e956132cc96bf9e3bc8ca7fecf375d</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/libxmljs2/vendor/libxml/xmlschemas.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In libxml2 prior to 2.10.4, a NULL pointer dereference was found when parsing (invalid) XML schemas.
<p>Publish Date: 2023-03-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-28484>CVE-2023-28484</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security-tracker.debian.org/tracker/CVE-2023-28484">https://security-tracker.debian.org/tracker/CVE-2023-28484</a></p>
<p>Release Date: 2023-03-16</p>
<p>Fix Resolution: v2.10.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2023-28484 (Medium) detected in reactos0.4.13-dev - ## CVE-2023-28484 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>reactos0.4.13-dev</b></p></summary>
<p>
<p>A free Windows-compatible Operating System</p>
<p>Library home page: <a href=https://github.com/reactos/reactos.git>https://github.com/reactos/reactos.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/tomdgl397/juice-shop/commit/24e92478a2e956132cc96bf9e3bc8ca7fecf375d">24e92478a2e956132cc96bf9e3bc8ca7fecf375d</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/libxmljs2/vendor/libxml/xmlschemas.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In libxml2 prior to 2.10.4, a NULL pointer dereference was found when parsing (invalid) XML schemas.
<p>Publish Date: 2023-03-16
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-28484>CVE-2023-28484</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://security-tracker.debian.org/tracker/CVE-2023-28484">https://security-tracker.debian.org/tracker/CVE-2023-28484</a></p>
<p>Release Date: 2023-03-16</p>
<p>Fix Resolution: v2.10.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in dev cve medium severity vulnerability vulnerable library dev a free windows compatible operating system library home page a href found in head commit a href vulnerable source files node modules vendor libxml xmlschemas c vulnerability details in prior to a null pointer dereference was found when parsing invalid xml schemas publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
253,456 | 27,300,617,836 | IssuesEvent | 2023-02-24 01:23:38 | panasalap/linux-4.19.72_1 | https://api.github.com/repos/panasalap/linux-4.19.72_1 | closed | CVE-2020-27152 (Medium) detected in linux-yoctov5.4.51 - autoclosed | security vulnerability | ## CVE-2020-27152 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.19.72/commit/c5a08fe8179013aad614165d792bc5b436591df6">c5a08fe8179013aad614165d792bc5b436591df6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/ioapic.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/ioapic.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ioapic_lazy_update_eoi in arch/x86/kvm/ioapic.c in the Linux kernel before 5.9.2. It has an infinite loop related to improper interaction between a resampler and edge triggering, aka CID-77377064c3a9.
<p>Publish Date: 2020-11-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-27152>CVE-2020-27152</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-11-06</p>
<p>Fix Resolution: v5.9.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-27152 (Medium) detected in linux-yoctov5.4.51 - autoclosed - ## CVE-2020-27152 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in HEAD commit: <a href="https://github.com/panasalap/linux-4.19.72/commit/c5a08fe8179013aad614165d792bc5b436591df6">c5a08fe8179013aad614165d792bc5b436591df6</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/ioapic.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/arch/x86/kvm/ioapic.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in ioapic_lazy_update_eoi in arch/x86/kvm/ioapic.c in the Linux kernel before 5.9.2. It has an infinite loop related to improper interaction between a resampler and edge triggering, aka CID-77377064c3a9.
<p>Publish Date: 2020-11-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-27152>CVE-2020-27152</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-11-06</p>
<p>Fix Resolution: v5.9.2</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in linux autoclosed cve medium severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files arch kvm ioapic c arch kvm ioapic c vulnerability details an issue was discovered in ioapic lazy update eoi in arch kvm ioapic c in the linux kernel before it has an infinite loop related to improper interaction between a resampler and edge triggering aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend | 0 |
31,971 | 4,309,303,608 | IssuesEvent | 2016-07-21 15:36:02 | vincenthz/hs-foundation | https://api.github.com/repos/vincenthz/hs-foundation | opened | FilePath API | design enhancement | * [ ] extension manipulation
* [ ] path dropping / append filename
* [ ] parent
* [ ] windows support
* [ ] OS support (`readdir`)
| 1.0 | FilePath API - * [ ] extension manipulation
* [ ] path dropping / append filename
* [ ] parent
* [ ] windows support
* [ ] OS support (`readdir`)
| non_process | filepath api extension manipulation path dropping append filename parent windows support os support readdir | 0 |
764,986 | 26,827,297,475 | IssuesEvent | 2023-02-02 13:46:51 | GenomicMedLab/cool-seq-tool | https://api.github.com/repos/GenomicMedLab/cool-seq-tool | opened | `get_mane_data` raises `IndexError` | bug priority:medium | ```
File "cool_seq_tool/data_sources/mane_transcript.py", line 428, in <lambda>
copy_df["ac_no_version_as_int"] = copy_df["tx_ac"].apply(lambda x: int(x.split(".")[0].split("NM_00")[1])) # noqa: E501
IndexError: list index out of range
```
I'm not sure why I split on `NM_00`, it should just be `NM_` | 1.0 | `get_mane_data` raises `IndexError` - ```
File "cool_seq_tool/data_sources/mane_transcript.py", line 428, in <lambda>
copy_df["ac_no_version_as_int"] = copy_df["tx_ac"].apply(lambda x: int(x.split(".")[0].split("NM_00")[1])) # noqa: E501
IndexError: list index out of range
```
I'm not sure why I split on `NM_00`, it should just be `NM_` | non_process | get mane data raises indexerror file cool seq tool data sources mane transcript py line in copy df copy df apply lambda x int x split split nm noqa indexerror list index out of range i m not sure why i split on nm it should just be nm | 0 |
171,202 | 6,481,225,039 | IssuesEvent | 2017-08-18 15:07:14 | knowmetools/km-api | https://api.github.com/repos/knowmetools/km-api | closed | Site is inaccessible | Priority: Critical Status: In Progress Type: Bug | ### Bug Report
#### Expected Behavior
We expect the site to be accessible after a deployment.
#### Actual Behavior
Any connections to the site time out. This is caused by a few different issues.
1. We have `SECURE_SSl_REDIRECT` set to `True`, but our load balancer is not listening on port 443. This is the cause of the timeout issues.
2. We terminate SSL at the load balancer and use the `X-Forwarded-Proto` header to determine if the request was sent over HTTPS. The problem is NGINX isn't forwarding this header to Gunicorn, which causes an infinite redirect.
3. The webserver doesn't have permission to send outbound traffic to anywhere. This is particularly problematic when attempting to connect to the database because Gunicorn times out waiting for a database connection that can't occur and we recieve a `502 Bad Gateway` response.
| 1.0 | Site is inaccessible - ### Bug Report
#### Expected Behavior
We expect the site to be accessible after a deployment.
#### Actual Behavior
Any connections to the site time out. This is caused by a few different issues.
1. We have `SECURE_SSl_REDIRECT` set to `True`, but our load balancer is not listening on port 443. This is the cause of the timeout issues.
2. We terminate SSL at the load balancer and use the `X-Forwarded-Proto` header to determine if the request was sent over HTTPS. The problem is NGINX isn't forwarding this header to Gunicorn, which causes an infinite redirect.
3. The webserver doesn't have permission to send outbound traffic to anywhere. This is particularly problematic when attempting to connect to the database because Gunicorn times out waiting for a database connection that can't occur and we recieve a `502 Bad Gateway` response.
| non_process | site is inaccessible bug report expected behavior we expect the site to be accessible after a deployment actual behavior any connections to the site time out this is caused by a few different issues we have secure ssl redirect set to true but our load balancer is not listening on port this is the cause of the timeout issues we terminate ssl at the load balancer and use the x forwarded proto header to determine if the request was sent over https the problem is nginx isn t forwarding this header to gunicorn which causes an infinite redirect the webserver doesn t have permission to send outbound traffic to anywhere this is particularly problematic when attempting to connect to the database because gunicorn times out waiting for a database connection that can t occur and we recieve a bad gateway response | 0 |
564,656 | 16,738,151,160 | IssuesEvent | 2021-06-11 06:18:32 | mchen162/it115-explore-california | https://api.github.com/repos/mchen162/it115-explore-california | opened | Tour Information Link Error | bug priority 4 severity 4 | **Describe the bug**
When clicking the Tour Information Link the user is given a 404 File not found error.
**To Reproduce**
Steps to reproduce the behavior:
1. Hover over Resources item at the navigation bar located at the top.
2. Click on "Tour Information"
4. See error
**Expected behavior**
A working Tour Information link should be provided for the user to be able to access additional resources on tours around California.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Windows 10 Home 64-bit
- Browser: Chrome
**Additional context**
Add any other context about the problem here.
| 1.0 | Tour Information Link Error - **Describe the bug**
When clicking the Tour Information Link the user is given a 404 File not found error.
**To Reproduce**
Steps to reproduce the behavior:
1. Hover over Resources item at the navigation bar located at the top.
2. Click on "Tour Information"
4. See error
**Expected behavior**
A working Tour Information link should be provided for the user to be able to access additional resources on tours around California.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Windows 10 Home 64-bit
- Browser: Chrome
**Additional context**
Add any other context about the problem here.
| non_process | tour information link error describe the bug when clicking the tour information link the user is given a file not found error to reproduce steps to reproduce the behavior hover over resources item at the navigation bar located at the top click on tour information see error expected behavior a working tour information link should be provided for the user to be able to access additional resources on tours around california screenshots desktop please complete the following information os windows home bit browser chrome additional context add any other context about the problem here | 0 |
11,988 | 14,737,168,371 | IssuesEvent | 2021-01-07 01:03:53 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | 001- NCSM - Unable to upload | anc-ops anc-process anp-important anp-urgent ant-support has attachment | In GitLab by @kdjstudios on Apr 23, 2018, 09:14
**Submitted by:** "Jesus Corchado" <jesus.corchado@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-20-73483/conversation
**Server:** Internal
**Client/Site:** Multiple
**Account:** NA
**Issue:**
I’m trying to process the billing for the NCSM 15th Cycles on the sites but I keep getting an error. I am using the blank usage file I use for Webster City and it works perfectly on there. I attached it just in case you wanted to view the usage file.
 | 1.0 | 001- NCSM - Unable to upload - In GitLab by @kdjstudios on Apr 23, 2018, 09:14
**Submitted by:** "Jesus Corchado" <jesus.corchado@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-20-73483/conversation
**Server:** Internal
**Client/Site:** Multiple
**Account:** NA
**Issue:**
I’m trying to process the billing for the NCSM 15th Cycles on the sites but I keep getting an error. I am using the blank usage file I use for Webster City and it works perfectly on there. I attached it just in case you wanted to view the usage file.
 | process | ncsm unable to upload in gitlab by kdjstudios on apr submitted by jesus corchado helpdesk server internal client site multiple account na issue i’m trying to process the billing for the ncsm cycles on the sites but i keep getting an error i am using the blank usage file i use for webster city and it works perfectly on there i attached it just in case you wanted to view the usage file uploads jpg | 1 |
54,965 | 11,355,192,556 | IssuesEvent | 2020-01-24 19:26:42 | stan-dev/math | https://api.github.com/repos/stan-dev/math | closed | There are some error-checking functions we can lift out of loops | code cleanup | ## Description
This:
```
for (i=...)
check_foo(...x[i]...);
```
can sometimes be replaced by:
```
check_foo(x)
```
and should be.
## Example
```
- for (size_t i = 0; i < ns.size(); ++i) {
- check_bounded(function, "element of outcome array", ns[i], lb,
- theta.size());
- }
+ check_bounded(function, "element of outcome array", ns, lb, theta.size());
```
## Expected Output
Same, but error messages are a bit more detailed, because the check_foo function can report which index in the argument contains a non-conforming value.
#### Current Version:
v3.0.0
| 1.0 | There are some error-checking functions we can lift out of loops - ## Description
This:
```
for (i=...)
check_foo(...x[i]...);
```
can sometimes be replaced by:
```
check_foo(x)
```
and should be.
## Example
```
- for (size_t i = 0; i < ns.size(); ++i) {
- check_bounded(function, "element of outcome array", ns[i], lb,
- theta.size());
- }
+ check_bounded(function, "element of outcome array", ns, lb, theta.size());
```
## Expected Output
Same, but error messages are a bit more detailed, because the check_foo function can report which index in the argument contains a non-conforming value.
#### Current Version:
v3.0.0
| non_process | there are some error checking functions we can lift out of loops description this for i check foo x can sometimes be replaced by check foo x and should be example for size t i i ns size i check bounded function element of outcome array ns lb theta size check bounded function element of outcome array ns lb theta size expected output same but error messages are a bit more detailed because the check foo function can report which index in the argument contains a non conforming value current version | 0 |
21,074 | 28,017,797,697 | IssuesEvent | 2023-03-28 01:08:23 | HaoNguyenNhat/CNPMNC | https://api.github.com/repos/HaoNguyenNhat/CNPMNC | opened | Đánh giá và phản hồi chất lượng | DG HUY NN HÀO 5 point IN PROCESS | Là khách hàng chính thức tôi muốn đánh giá và phản hồi về chất lượng dịch vụ trên website để giúp cửa hàng cải thiện | 1.0 | Đánh giá và phản hồi chất lượng - Là khách hàng chính thức tôi muốn đánh giá và phản hồi về chất lượng dịch vụ trên website để giúp cửa hàng cải thiện | process | đánh giá và phản hồi chất lượng là khách hàng chính thức tôi muốn đánh giá và phản hồi về chất lượng dịch vụ trên website để giúp cửa hàng cải thiện | 1 |
20,544 | 27,194,085,199 | IssuesEvent | 2023-02-20 02:36:57 | cse442-at-ub/project_s23-team-infinity | https://api.github.com/repos/cse442-at-ub/project_s23-team-infinity | opened | Settle the basic environment for React.js | Processing Task | 1. Install Node.js and NPM (Node Package Manager)
2. Create a basic app name 'my-app' by running 'npx create-react-app my-app' command in the terminal
3. Open the directory by running 'cd my-app' command
4. start the app by running 'npm start' command, default hosted at localhost:3000 | 1.0 | Settle the basic environment for React.js - 1. Install Node.js and NPM (Node Package Manager)
2. Create a basic app name 'my-app' by running 'npx create-react-app my-app' command in the terminal
3. Open the directory by running 'cd my-app' command
4. start the app by running 'npm start' command, default hosted at localhost:3000 | process | settle the basic environment for react js install node js and npm node package manager create a basic app name my app by running npx create react app my app command in the terminal open the directory by running cd my app command start the app by running npm start command default hosted at localhost | 1 |
610,200 | 18,900,547,880 | IssuesEvent | 2021-11-16 00:04:43 | googleapis/google-cloud-go | https://api.github.com/repos/googleapis/google-cloud-go | reopened | storage: TestIntegration_MultiChunkWriteGRPC failed | type: bug api: storage priority: p1 flakybot: issue flakybot: flaky | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: e33350cfcabcddcda1a90069383d39c68deb977a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d911046b-1b89-4699-ab5c-8742f910d2d5), [Sponge](http://sponge2/d911046b-1b89-4699-ab5c-8742f910d2d5)
status: failed
<details><summary>Test output</summary><br><pre> integration_test.go:1258: rpc error: code = PermissionDenied desc = Requested bucket, 'projects/_/buckets/golang-grpc-test--20211112-63274549042293-0001', is not allowed to access the GCS gRPC API. Note: this API is currently in testing, and is not yet available for general use.
2021/11/12 17:34:53 failed to delete test object: storage: object doesn't exist</pre></details> | 1.0 | storage: TestIntegration_MultiChunkWriteGRPC failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: e33350cfcabcddcda1a90069383d39c68deb977a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/d911046b-1b89-4699-ab5c-8742f910d2d5), [Sponge](http://sponge2/d911046b-1b89-4699-ab5c-8742f910d2d5)
status: failed
<details><summary>Test output</summary><br><pre> integration_test.go:1258: rpc error: code = PermissionDenied desc = Requested bucket, 'projects/_/buckets/golang-grpc-test--20211112-63274549042293-0001', is not allowed to access the GCS gRPC API. Note: this API is currently in testing, and is not yet available for general use.
2021/11/12 17:34:53 failed to delete test object: storage: object doesn't exist</pre></details> | non_process | storage testintegration multichunkwritegrpc failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output integration test go rpc error code permissiondenied desc requested bucket projects buckets golang grpc test is not allowed to access the gcs grpc api note this api is currently in testing and is not yet available for general use failed to delete test object storage object doesn t exist | 0 |
453,595 | 13,085,208,603 | IssuesEvent | 2020-08-02 00:48:09 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | drivers: i2c_nios2: device config_info content mutated | Stale area: NIOS2 bug priority: low | The device config_info structure is intended to be read-only, and so
the generic pointer stored in struct device is a pointer-to-const.
The i2c_nios2 driver removes this qualifier when casting the pointer to
the device-specific type. This removal is a violation of MISRA 11.8
as it evokes undefined behavior.
When the qualifier is preserved the driver does not build, because it
modifies the object content at runtime. Consequently the fix for this
driver has been removed from #25248.
https://github.com/pabigot/zephyr/commits/nordic/issue/24745d provides
the changes necessary to preserve const correctness. When applied it
produces the following diagnostic, among others:
```
home/buildslave/src/github.com/zephyrproject-rtos/zephyr/drivers/i2c/i2c_nios2.c: In function 'i2c_nios2_configure':
/home/buildslave/src/github.com/zephyrproject-rtos/zephyr/drivers/i2c/i2c_nios2.c:36:13: error: passing argument 1 of 'k_sem_take' discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers]
36 | k_sem_take(&config->sem_lock, K_FOREVER);
| ^~~~~~~~~~~~~~~~~
In file included from ../../../../../../../include/kernel.h:5421,
from ../../../../../../../include/init.h:11,
from ../../../../../../../include/device.h:22,
from ../../../../../../../include/drivers/i2c.h:23,
from /home/buildslave/src/github.com/zephyrproject-rtos/zephyr/drivers/i2c/i2c_nios2.c:10:
zephyr/include/generated/syscalls/kernel.h:746:45: note: expected 'struct k_sem *' but argument is of type 'const struct k_sem *'
746 | static inline int k_sem_take(struct k_sem * sem, k_timeout_t timeout)
| ~~~~~~~~~~~~~~~^~~
/home/buildslave/src/github.com/zephyrproject-rtos/zephyr/drivers/i2c/i2c_nios2.c:55:22: error: passing argument 1 of 'alt_avalon_i2c_init' discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers]
55 | alt_avalon_i2c_init(&config->i2c_dev);
| ^~~~~~~~~~~~~~~~
```
The driver must be updated to resolve this and incorporate the change
from the referenced branch.
| 1.0 | drivers: i2c_nios2: device config_info content mutated - The device config_info structure is intended to be read-only, and so
the generic pointer stored in struct device is a pointer-to-const.
The i2c_nios2 driver removes this qualifier when casting the pointer to
the device-specific type. This removal is a violation of MISRA 11.8
as it evokes undefined behavior.
When the qualifier is preserved the driver does not build, because it
modifies the object content at runtime. Consequently the fix for this
driver has been removed from #25248.
https://github.com/pabigot/zephyr/commits/nordic/issue/24745d provides
the changes necessary to preserve const correctness. When applied it
produces the following diagnostic, among others:
```
home/buildslave/src/github.com/zephyrproject-rtos/zephyr/drivers/i2c/i2c_nios2.c: In function 'i2c_nios2_configure':
/home/buildslave/src/github.com/zephyrproject-rtos/zephyr/drivers/i2c/i2c_nios2.c:36:13: error: passing argument 1 of 'k_sem_take' discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers]
36 | k_sem_take(&config->sem_lock, K_FOREVER);
| ^~~~~~~~~~~~~~~~~
In file included from ../../../../../../../include/kernel.h:5421,
from ../../../../../../../include/init.h:11,
from ../../../../../../../include/device.h:22,
from ../../../../../../../include/drivers/i2c.h:23,
from /home/buildslave/src/github.com/zephyrproject-rtos/zephyr/drivers/i2c/i2c_nios2.c:10:
zephyr/include/generated/syscalls/kernel.h:746:45: note: expected 'struct k_sem *' but argument is of type 'const struct k_sem *'
746 | static inline int k_sem_take(struct k_sem * sem, k_timeout_t timeout)
| ~~~~~~~~~~~~~~~^~~
/home/buildslave/src/github.com/zephyrproject-rtos/zephyr/drivers/i2c/i2c_nios2.c:55:22: error: passing argument 1 of 'alt_avalon_i2c_init' discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers]
55 | alt_avalon_i2c_init(&config->i2c_dev);
| ^~~~~~~~~~~~~~~~
```
The driver must be updated to resolve this and incorporate the change
from the referenced branch.
| non_process | drivers device config info content mutated the device config info structure is intended to be read only and so the generic pointer stored in struct device is a pointer to const the driver removes this qualifier when casting the pointer to the device specific type this removal is a violation of misra as it evokes undefined behavior when the qualifier is preserved the driver does not build because it modifies the object content at runtime consequently the fix for this driver has been removed from provides the changes necessary to preserve const correctness when applied it produces the following diagnostic among others home buildslave src github com zephyrproject rtos zephyr drivers c in function configure home buildslave src github com zephyrproject rtos zephyr drivers c error passing argument of k sem take discards const qualifier from pointer target type k sem take config sem lock k forever in file included from include kernel h from include init h from include device h from include drivers h from home buildslave src github com zephyrproject rtos zephyr drivers c zephyr include generated syscalls kernel h note expected struct k sem but argument is of type const struct k sem static inline int k sem take struct k sem sem k timeout t timeout home buildslave src github com zephyrproject rtos zephyr drivers c error passing argument of alt avalon init discards const qualifier from pointer target type alt avalon init config dev the driver must be updated to resolve this and incorporate the change from the referenced branch | 0 |
12,797 | 15,180,672,570 | IssuesEvent | 2021-02-15 00:49:16 | Geonovum/disgeo-arch | https://api.github.com/repos/Geonovum/disgeo-arch | closed | 5.2.3.1 Afgeleide opslag | Flexibiliteit In Behandeling In behandeling - voorstel processen e.d. Processen Functies Componenten | Dit hoort niet bij opslag. Bij opslag moet benoemd worden, dat in verband met performance en/of beter nog disaster recovery het is toegestaan om een synchrone kopie van de data te hebben. Daarnaast kan het mogelijk zijn om aggregaties van informatie te maken tbv business intelligence oid.
We moeten naar meer data bij de bron.
Er mist een principe of uitgangspunt. Er moet geen monolithische uitwerking van de SOR worden bedacht. De redenen hiervoor:
· Enorme kapitaalvernietiging van wat er door de jaren heen gemaakt is.
· De trend tegenwoordig en een oplossing die wendbaarheid biedt, is toch echt: maak kleine losely coupled componenten die apart van elkaar aangepast kunnen worden. Dus eerder meer kleinere basisregistraties dan minder, eerder meer informatiemodellen (of losjes gekoppelde onderdelen in een groter ding die de samenhang bewaakt) dan minder enz.
· Dat gegevensmodellen makkelijk aan te passen zijn en dat informatie meegroeit met de behoefte en in samenhang bruikbaar is, dan juist is het nodig om met dynamiek om te kunnen gaan. Een monoliet past daar uitermate slecht bij.
Een monoliet - 1 registratie component - lost de problemen en uitdagingen niet op. Alles in 1 component stoppen maakt het er mijns inziens complexer op, en niet eenvoudiger.
Als er nog onduidelijkheid bestaat over wat de SOR nu eigenlijk is en wat de doelstelling van de SOR is, registratie van besluiten of een registratie van in werkelijkheid bestaande objecten of een combinatie daarvan dan moet dat wel duidelijk worden, inclusief de consequenties voor de inwinprocessen en het gebruik door afnemers.
Uitgangspunt moet daarom wat mij betreft zijn: relatief kleine, goed beheerderbare, goed aanpasbare losse componenten, en niet e.e.a. samenbrengen naar 1 component.
Er moet een goed informatiemodel beschikbaar zijn, waarin de samenhang gemodelleerd is. Vanuit de gebruiksdoelen volgen de inwinningsprocessen en de gebruiksmogelijkheden (Zoals hieronder ook staat. (Als het goed is zou dat al in het informatie model zichtbaar moeten zijn.
Verder zou ik een principe willen zien dat er samenhang aangebracht wordt, over componenten heen. Samenhang is een concept: in samenhang inwinnen, in samenhang gebruiken | 2.0 | 5.2.3.1 Afgeleide opslag - Dit hoort niet bij opslag. Bij opslag moet benoemd worden, dat in verband met performance en/of beter nog disaster recovery het is toegestaan om een synchrone kopie van de data te hebben. Daarnaast kan het mogelijk zijn om aggregaties van informatie te maken tbv business intelligence oid.
We moeten naar meer data bij de bron.
Er mist een principe of uitgangspunt. Er moet geen monolithische uitwerking van de SOR worden bedacht. De redenen hiervoor:
· Enorme kapitaalvernietiging van wat er door de jaren heen gemaakt is.
· De trend tegenwoordig en een oplossing die wendbaarheid biedt, is toch echt: maak kleine losely coupled componenten die apart van elkaar aangepast kunnen worden. Dus eerder meer kleinere basisregistraties dan minder, eerder meer informatiemodellen (of losjes gekoppelde onderdelen in een groter ding die de samenhang bewaakt) dan minder enz.
· Dat gegevensmodellen makkelijk aan te passen zijn en dat informatie meegroeit met de behoefte en in samenhang bruikbaar is, dan juist is het nodig om met dynamiek om te kunnen gaan. Een monoliet past daar uitermate slecht bij.
Een monoliet - 1 registratie component - lost de problemen en uitdagingen niet op. Alles in 1 component stoppen maakt het er mijns inziens complexer op, en niet eenvoudiger.
Als er nog onduidelijkheid bestaat over wat de SOR nu eigenlijk is en wat de doelstelling van de SOR is, registratie van besluiten of een registratie van in werkelijkheid bestaande objecten of een combinatie daarvan dan moet dat wel duidelijk worden, inclusief de consequenties voor de inwinprocessen en het gebruik door afnemers.
Uitgangspunt moet daarom wat mij betreft zijn: relatief kleine, goed beheerderbare, goed aanpasbare losse componenten, en niet e.e.a. samenbrengen naar 1 component.
Er moet een goed informatiemodel beschikbaar zijn, waarin de samenhang gemodelleerd is. Vanuit de gebruiksdoelen volgen de inwinningsprocessen en de gebruiksmogelijkheden (Zoals hieronder ook staat. (Als het goed is zou dat al in het informatie model zichtbaar moeten zijn.
Verder zou ik een principe willen zien dat er samenhang aangebracht wordt, over componenten heen. Samenhang is een concept: in samenhang inwinnen, in samenhang gebruiken | process | afgeleide opslag dit hoort niet bij opslag bij opslag moet benoemd worden dat in verband met performance en of beter nog disaster recovery het is toegestaan om een synchrone kopie van de data te hebben daarnaast kan het mogelijk zijn om aggregaties van informatie te maken tbv business intelligence oid we moeten naar meer data bij de bron er mist een principe of uitgangspunt er moet geen monolithische uitwerking van de sor worden bedacht de redenen hiervoor · enorme kapitaalvernietiging van wat er door de jaren heen gemaakt is · de trend tegenwoordig en een oplossing die wendbaarheid biedt is toch echt maak kleine losely coupled componenten die apart van elkaar aangepast kunnen worden dus eerder meer kleinere basisregistraties dan minder eerder meer informatiemodellen of losjes gekoppelde onderdelen in een groter ding die de samenhang bewaakt dan minder enz · dat gegevensmodellen makkelijk aan te passen zijn en dat informatie meegroeit met de behoefte en in samenhang bruikbaar is dan juist is het nodig om met dynamiek om te kunnen gaan een monoliet past daar uitermate slecht bij een monoliet registratie component lost de problemen en uitdagingen niet op alles in component stoppen maakt het er mijns inziens complexer op en niet eenvoudiger als er nog onduidelijkheid bestaat over wat de sor nu eigenlijk is en wat de doelstelling van de sor is registratie van besluiten of een registratie van in werkelijkheid bestaande objecten of een combinatie daarvan dan moet dat wel duidelijk worden inclusief de consequenties voor de inwinprocessen en het gebruik door afnemers uitgangspunt moet daarom wat mij betreft zijn relatief kleine goed beheerderbare goed aanpasbare losse componenten en niet e e a samenbrengen naar component er moet een goed informatiemodel beschikbaar zijn waarin de samenhang gemodelleerd is vanuit de gebruiksdoelen volgen de inwinningsprocessen en de gebruiksmogelijkheden zoals hieronder ook staat als het goed is zou dat al in het informatie model zichtbaar moeten zijn verder zou ik een principe willen zien dat er samenhang aangebracht wordt over componenten heen samenhang is een concept in samenhang inwinnen in samenhang gebruiken | 1 |
203,328 | 15,362,626,120 | IssuesEvent | 2021-03-01 19:44:16 | seattle-uat/universal-application-tool | https://api.github.com/repos/seattle-uat/universal-application-tool | closed | Refactor BrowserTest to have a superclass with common methods | testing | **Describe the solution you'd like**
We should have multiple browser tests for different parts of the system. They are likely to share methods (for example, a method to go to a specific page) - these methods should be in a superclass the other tests extend
**Additional context**
See PR comment on https://github.com/seattle-uat/universal-application-tool/pull/229
**Done when**
The current `BrowserTest.java` is refactored to inherit from a class with common methods
| 1.0 | Refactor BrowserTest to have a superclass with common methods - **Describe the solution you'd like**
We should have multiple browser tests for different parts of the system. They are likely to share methods (for example, a method to go to a specific page) - these methods should be in a superclass the other tests extend
**Additional context**
See PR comment on https://github.com/seattle-uat/universal-application-tool/pull/229
**Done when**
The current `BrowserTest.java` is refactored to inherit from a class with common methods
| non_process | refactor browsertest to have a superclass with common methods describe the solution you d like we should have multiple browser tests for different parts of the system they are likely to share methods for example a method to go to a specific page these methods should be in a superclass the other tests extend additional context see pr comment on done when the current browsertest java is refactored to inherit from a class with common methods | 0 |
640,030 | 20,771,506,955 | IssuesEvent | 2022-03-16 05:34:13 | VolmitSoftware/Iris | https://api.github.com/repos/VolmitSoftware/Iris | closed | Iris Crashing | High Priority | ### Problem
1. Install Iris on 1.17.1 purpur
2. Teleport to a value like 12k and 10k
3. Render a few chunks
4. Server will crash with the error attached
### Solution
None
### Minecraft Version
1.17.1
### Iris Version
1.9.5
### Log
https://mclo.gs/PiIzMBv | 1.0 | Iris Crashing - ### Problem
1. Install Iris on 1.17.1 purpur
2. Teleport to a value like 12k and 10k
3. Render a few chunks
4. Server will crash with the error attached
### Solution
None
### Minecraft Version
1.17.1
### Iris Version
1.9.5
### Log
https://mclo.gs/PiIzMBv | non_process | iris crashing problem install iris on purpur teleport to a value like and render a few chunks server will crash with the error attached solution none minecraft version iris version log | 0 |
48,952 | 3,001,067,496 | IssuesEvent | 2015-07-24 08:37:31 | lua-carbon/carbon | https://api.github.com/repos/lua-carbon/carbon | opened | Generate instance IDs | difficulty:easy feature priority:high | It'd be great to have a running track of all objects by some sort of ID. | 1.0 | Generate instance IDs - It'd be great to have a running track of all objects by some sort of ID. | non_process | generate instance ids it d be great to have a running track of all objects by some sort of id | 0 |
18,103 | 24,127,930,512 | IssuesEvent | 2022-09-21 03:32:38 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | fail to merge rasters on macOS | Feedback stale Processing Bug | ### What is the bug or the crash?
When I was trying to merge 3 rasters, the GDAL command output:
**Process gdal_merge.py failed to start. Either gdal_merge.py is missing, or you may have insufficient permissions to run the program.**
The whole log is:
Processing algorithm…
Algorithm 'Merge' starting…
Input parameters:
{ 'DATA_TYPE' : 5, 'EXTRA' : '', 'INPUT' : ['/Users/yuki_huang/Library/Application Support/QGIS/QGIS3/profiles/default/processing/outputs/merge_2.vrt','/Users/yuki_huang/Library/Application Support/QGIS/QGIS3/profiles/default/processing/outputs/merge_3.vrt','/Users/yuki_huang/Library/Application Support/QGIS/QGIS3/profiles/default/processing/outputs/merge.vrt'], 'NODATA_INPUT' : None, 'NODATA_OUTPUT' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'PCT' : False, 'SEPARATE' : False }
GDAL command:
gdal_merge.py -ot Float32 -of GTiff -o /private/var/folders/6n/2l97lvf57ybb_y78ktcgt6b80000gn/T/processing_ehhETO/4a465f20eb5a4283892d2dfcdffad7d6/OUTPUT.tif --optfile /private/var/folders/6n/2l97lvf57ybb_y78ktcgt6b80000gn/T/processing_ehhETO/3cb209287d10442d8b99cec38e28794e/mergeInputFiles.txt
GDAL command output:
Process gdal_merge.py failed to start. Either gdal_merge.py is missing, or you may have insufficient permissions to run the program.
Execution failed after 0.08 seconds
Loading resulting layers
Algorithm 'Merge' finished
### Steps to reproduce the issue
1. Go to Toolbar
2. Click on Raster
3. Choose Miscellaneous -> Merge...
4. Choose rasters, then click run
5. See error
### Versions
QGIS version: 3.18.3-Zürich
Qt version: 5.15.2
GDAL version: 3.2.2
GEOS version: 3.9.1-CAPI-1.14.2
PROJ version: Rel. 7.2.1, January 1st, 2021
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.18.3-Zürich | QGIS code branch | Release 3.18
-- | -- | -- | --
Compiled against Qt | 5.15.2 | Running against Qt | 5.15.2
Compiled against GDAL/OGR | 3.2.2 | Running against GDAL/OGR | 3.2.2
Compiled against GEOS | 3.9.1-CAPI-1.14.2 | Running against GEOS | 3.9.1-CAPI-1.14.2
Compiled against SQLite | 3.35.2 | Running against SQLite | 3.35.2
PostgreSQL Client Version | 12.5 | SpatiaLite Version | 5.0.1
QWT Version | 6.1.6 | QScintilla2 Version | 2.11.6
Compiled against PROJ | 7.2.1 | Running against PROJ | Rel. 7.2.1, January 1st, 2021
OS Version | macOS 11.6
Active python plugins | SemiAutomaticClassificationPlugin; ee_plugin; pca4cd; processing; db_manager; MetaSearch
</body></html>QGIS version
3.18.3-Zürich
QGIS code branch
[Release 3.18](https://github.com/qgis/QGIS/tree/release-3_18)
Compiled against Qt
5.15.2
Running against Qt
5.15.2
Compiled against GDAL/OGR
3.2.2
Running against GDAL/OGR
3.2.2
Compiled against GEOS
3.9.1-CAPI-1.14.2
Running against GEOS
3.9.1-CAPI-1.14.2
Compiled against SQLite
3.35.2
Running against SQLite
3.35.2
PostgreSQL Client Version
12.5
SpatiaLite Version
5.0.1
QWT Version
6.1.6
QScintilla2 Version
2.11.6
Compiled against PROJ
7.2.1
Running against PROJ
Rel. 7.2.1, January 1st, 2021
OS Version
macOS 11.6
Active python plugins
SemiAutomaticClassificationPlugin;
ee_plugin;
pca4cd;
processing;
db_manager;
MetaSearch
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
This is the screenshot of error.
<img width="1045" alt="image" src="https://user-images.githubusercontent.com/93780466/178125568-f3e7489a-c7f3-441b-9272-a4af44b1678d.png">
| 1.0 | fail to merge rasters on macOS - ### What is the bug or the crash?
When I was trying to merge 3 rasters, the GDAL command output:
**Process gdal_merge.py failed to start. Either gdal_merge.py is missing, or you may have insufficient permissions to run the program.**
The whole log is:
Processing algorithm…
Algorithm 'Merge' starting…
Input parameters:
{ 'DATA_TYPE' : 5, 'EXTRA' : '', 'INPUT' : ['/Users/yuki_huang/Library/Application Support/QGIS/QGIS3/profiles/default/processing/outputs/merge_2.vrt','/Users/yuki_huang/Library/Application Support/QGIS/QGIS3/profiles/default/processing/outputs/merge_3.vrt','/Users/yuki_huang/Library/Application Support/QGIS/QGIS3/profiles/default/processing/outputs/merge.vrt'], 'NODATA_INPUT' : None, 'NODATA_OUTPUT' : None, 'OPTIONS' : '', 'OUTPUT' : 'TEMPORARY_OUTPUT', 'PCT' : False, 'SEPARATE' : False }
GDAL command:
gdal_merge.py -ot Float32 -of GTiff -o /private/var/folders/6n/2l97lvf57ybb_y78ktcgt6b80000gn/T/processing_ehhETO/4a465f20eb5a4283892d2dfcdffad7d6/OUTPUT.tif --optfile /private/var/folders/6n/2l97lvf57ybb_y78ktcgt6b80000gn/T/processing_ehhETO/3cb209287d10442d8b99cec38e28794e/mergeInputFiles.txt
GDAL command output:
Process gdal_merge.py failed to start. Either gdal_merge.py is missing, or you may have insufficient permissions to run the program.
Execution failed after 0.08 seconds
Loading resulting layers
Algorithm 'Merge' finished
### Steps to reproduce the issue
1. Go to Toolbar
2. Click on Raster
3. Choose Miscellaneous -> Merge...
4. Choose rasters, then click run
5. See error
### Versions
QGIS version: 3.18.3-Zürich
Qt version: 5.15.2
GDAL version: 3.2.2
GEOS version: 3.9.1-CAPI-1.14.2
PROJ version: Rel. 7.2.1, January 1st, 2021
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.18.3-Zürich | QGIS code branch | Release 3.18
-- | -- | -- | --
Compiled against Qt | 5.15.2 | Running against Qt | 5.15.2
Compiled against GDAL/OGR | 3.2.2 | Running against GDAL/OGR | 3.2.2
Compiled against GEOS | 3.9.1-CAPI-1.14.2 | Running against GEOS | 3.9.1-CAPI-1.14.2
Compiled against SQLite | 3.35.2 | Running against SQLite | 3.35.2
PostgreSQL Client Version | 12.5 | SpatiaLite Version | 5.0.1
QWT Version | 6.1.6 | QScintilla2 Version | 2.11.6
Compiled against PROJ | 7.2.1 | Running against PROJ | Rel. 7.2.1, January 1st, 2021
OS Version | macOS 11.6
Active python plugins | SemiAutomaticClassificationPlugin; ee_plugin; pca4cd; processing; db_manager; MetaSearch
</body></html>QGIS version
3.18.3-Zürich
QGIS code branch
[Release 3.18](https://github.com/qgis/QGIS/tree/release-3_18)
Compiled against Qt
5.15.2
Running against Qt
5.15.2
Compiled against GDAL/OGR
3.2.2
Running against GDAL/OGR
3.2.2
Compiled against GEOS
3.9.1-CAPI-1.14.2
Running against GEOS
3.9.1-CAPI-1.14.2
Compiled against SQLite
3.35.2
Running against SQLite
3.35.2
PostgreSQL Client Version
12.5
SpatiaLite Version
5.0.1
QWT Version
6.1.6
QScintilla2 Version
2.11.6
Compiled against PROJ
7.2.1
Running against PROJ
Rel. 7.2.1, January 1st, 2021
OS Version
macOS 11.6
Active python plugins
SemiAutomaticClassificationPlugin;
ee_plugin;
pca4cd;
processing;
db_manager;
MetaSearch
### Supported QGIS version
- [ ] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
This is the screenshot of error.
<img width="1045" alt="image" src="https://user-images.githubusercontent.com/93780466/178125568-f3e7489a-c7f3-441b-9272-a4af44b1678d.png">
| process | fail to merge rasters on macos what is the bug or the crash when i was trying to merge rasters the gdal command output process gdal merge py failed to start either gdal merge py is missing or you may have insufficient permissions to run the program the whole log is processing algorithm… algorithm merge starting… input parameters data type extra input nodata input none nodata output none options output temporary output pct false separate false gdal command gdal merge py ot of gtiff o private var folders t processing ehheto output tif optfile private var folders t processing ehheto mergeinputfiles txt gdal command output process gdal merge py failed to start either gdal merge py is missing or you may have insufficient permissions to run the program execution failed after seconds loading resulting layers algorithm merge finished steps to reproduce the issue go to toolbar click on raster choose miscellaneous merge choose rasters then click run see error versions qgis version zürich qt version gdal version geos version capi proj version rel january doctype html public dtd html en p li white space pre wrap qgis version zürich qgis code branch release compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel january os version macos active python plugins semiautomaticclassificationplugin ee plugin processing db manager metasearch qgis version zürich qgis code branch compiled against qt running against qt compiled against gdal ogr running against gdal ogr compiled against geos capi running against geos capi compiled against sqlite running against sqlite postgresql client version spatialite version qwt version version compiled against proj running against proj rel january os version macos active python plugins semiautomaticclassificationplugin ee plugin processing db manager metasearch supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context this is the screenshot of error img width alt image src | 1 |
14,208 | 10,132,152,382 | IssuesEvent | 2019-08-01 21:29:35 | terraform-providers/terraform-provider-azurerm | https://api.github.com/repos/terraform-providers/terraform-provider-azurerm | closed | IoT Device Provisioning Resource | new-resource service/iothub | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Add a resource for Azure IoT Device Provisioning Service. This would likely be a new resource; something like azurerm_iotdps. This command would be completed through the Azure-CLI through a command like that below.
```hcl
az iot dps create --name MyDps --resource-group MyResourceGroup --location westus
```
<!--- Please leave a helpful description of the feature request here. --->
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
This would be a new resource. Something like:
* azurerm_iotdps
OR
* azurerm_iot_dps
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_iot_dps" "test" {
name = "test"
resource_group_name = "test"
location = "West US"
sku = "S1"
tags {
"purpose" = "testing"
}
}
```
An extended version of this would include the options for access-policy, certificate, and linked-hub. That might look something like:
```hcl
resource "azurerm_iot_dps" "test" {
name = "test"
resource_group_name = "test"
location = "West US"
sku = "S1"
access_policy {
name = "test_policy"
rights = "DeviceConnect"
shared_access_policy {
primary_key = "123"
secondary_key = "abc"
}
}
certificate {
contents = "${base64encode(file("certificate-to-import.pfx"))}"
password = "Pass@word1"
}
linked-hub = "${azurerm_iothub.test.name}"
tags {
"purpose" = "testing"
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
---> | 1.0 | IoT Device Provisioning Resource - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Add a resource for Azure IoT Device Provisioning Service. This would likely be a new resource; something like azurerm_iotdps. This command would be completed through the Azure-CLI through a command like that below.
```hcl
az iot dps create --name MyDps --resource-group MyResourceGroup --location westus
```
<!--- Please leave a helpful description of the feature request here. --->
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
This would be a new resource. Something like:
* azurerm_iotdps
OR
* azurerm_iot_dps
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_iot_dps" "test" {
name = "test"
resource_group_name = "test"
location = "West US"
sku = "S1"
tags {
"purpose" = "testing"
}
}
```
An extended version of this would include the options for access-policy, certificate, and linked-hub. That might look something like:
```hcl
resource "azurerm_iot_dps" "test" {
name = "test"
resource_group_name = "test"
location = "West US"
sku = "S1"
access_policy {
name = "test_policy"
rights = "DeviceConnect"
shared_access_policy {
primary_key = "123"
secondary_key = "abc"
}
}
certificate {
contents = "${base64encode(file("certificate-to-import.pfx"))}"
password = "Pass@word1"
}
linked-hub = "${azurerm_iothub.test.name}"
tags {
"purpose" = "testing"
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation? For example:
* https://azure.microsoft.com/en-us/roadmap/virtual-network-service-endpoint-for-azure-cosmos-db/
---> | non_process | iot device provisioning resource community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description add a resource for azure iot device provisioning service this would likely be a new resource something like azurerm iotdps this command would be completed through the azure cli through a command like that below hcl az iot dps create name mydps resource group myresourcegroup location westus new or affected resource s this would be a new resource something like azurerm iotdps or azurerm iot dps potential terraform configuration hcl resource azurerm iot dps test name test resource group name test location west us sku tags purpose testing an extended version of this would include the options for access policy certificate and linked hub that might look something like hcl resource azurerm iot dps test name test resource group name test location west us sku access policy name test policy rights deviceconnect shared access policy primary key secondary key abc certificate contents file certificate to import pfx password pass linked hub azurerm iothub test name tags purpose testing references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation for example | 0 |
3,283 | 2,754,188,590 | IssuesEvent | 2015-04-25 12:26:07 | willsALMANJ/Zutilo | https://api.github.com/repos/willsALMANJ/Zutilo | closed | Fix german translation of docs | documentation translation | This is just a reminder to fix the german translation of the docs, I (or someone) needs to finish at:
https://github.com/willsALMANJ/Zutilo/blob/master/chrome/locale/de/zutilo/README.html
todo:
* convert back to markdown without word wraps using pandoc -> document this in #20
* remove word wraps manually if necessary.
* fix the translation from about the middle
* reformat some prose to lists (after the main doc is fixed in english) see #17, #19 | 1.0 | Fix german translation of docs - This is just a reminder to fix the german translation of the docs, I (or someone) needs to finish at:
https://github.com/willsALMANJ/Zutilo/blob/master/chrome/locale/de/zutilo/README.html
todo:
* convert back to markdown without word wraps using pandoc -> document this in #20
* remove word wraps manually if necessary.
* fix the translation from about the middle
* reformat some prose to lists (after the main doc is fixed in english) see #17, #19 | non_process | fix german translation of docs this is just a reminder to fix the german translation of the docs i or someone needs to finish at todo convert back to markdown without word wraps using pandoc document this in remove word wraps manually if necessary fix the translation from about the middle reformat some prose to lists after the main doc is fixed in english see | 0 |
21,528 | 29,810,657,454 | IssuesEvent | 2023-06-16 14:49:37 | googleapis/google-cloud-java | https://api.github.com/repos/googleapis/google-cloud-java | closed | Enable versions in titles of release PRs | type: process priority: p3 | Currently the release PRs for this repo have the title `chore: release main` (e.g. https://github.com/googleapis/google-cloud-java/pull/9427). It would be great if we could have the version number as well as if it is a snapshot version in the title so it's easier to tell what version is getting released instead of having to click into the files. F
or example, java-bigquery has release PRs that look like this: `chore(main): release 2.26.1` (https://github.com/googleapis/java-bigquery/pull/2703) and `chore(main): release 2.26.2-SNAPSHOT` https://github.com/googleapis/java-bigquery/pull/2704. | 1.0 | Enable versions in titles of release PRs - Currently the release PRs for this repo have the title `chore: release main` (e.g. https://github.com/googleapis/google-cloud-java/pull/9427). It would be great if we could have the version number as well as if it is a snapshot version in the title so it's easier to tell what version is getting released instead of having to click into the files. F
or example, java-bigquery has release PRs that look like this: `chore(main): release 2.26.1` (https://github.com/googleapis/java-bigquery/pull/2703) and `chore(main): release 2.26.2-SNAPSHOT` https://github.com/googleapis/java-bigquery/pull/2704. | process | enable versions in titles of release prs currently the release prs for this repo have the title chore release main e g it would be great if we could have the version number as well as if it is a snapshot version in the title so it s easier to tell what version is getting released instead of having to click into the files f or example java bigquery has release prs that look like this chore main release and chore main release snapshot | 1 |
11,438 | 14,259,876,253 | IssuesEvent | 2020-11-20 09:00:03 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | annotations to defense response to fungus, incompatible interaction | multi-species process quick fix | There are 16 annotations from
Proteomics of the response of Arabidopsis thaliana to infection with Alternaria brassicicola.
Abstract
We have studied the proteome of the model plant Arabidopsis thaliana infected with a necrotrophic fungal pathogen, Alternaria brassicicola. The Arabidopsis-A. brassicicola host-pathogen pair is being developed as a model genetic system for incompatible plant-fungal interactions, in which the spread of disease is limited by plant defense responses. After confirming that a defense response was induced at the transcriptional level, we identified proteins whose abundance on 2-DE gels increased or decreased in infected leaves. At least 11 protein spots showed reproducible differences in abundance, increasing or decreasing during the progress of the infection. The pathogenesis-related protein PR4, a glycosyl hydrolase, and the antifungal protein osmotin are strongly up-regulated. Two members of the Arabidopsis glutathione S-transferase (GST) family increased in abundance in infected leaves. The spots in which these GST proteins were identified contain additional members of the GST family. Representation of GST family members in several protein spots migrating at similar molecular weight suggests post-translational modifications. The signature of GST regulation may be specific for the type of plant-pathogen interaction. The proteomic view of the defense response to A. brassicicola can be compared with other types of plant-pathogen interactions, and to leaf senescence, identifying unique regulatory patterns.
PubMed
PMID:19857612
a) this is really just "defense response to fungus" see
https://github.com/geneontology/go-ontology/issues/18738
The "incompatibe interaction" part just means the pathogen is able to detect this specific fungus
b)
using IDA seems a stretch it seems that HDA would be more appropriate? | 1.0 | annotations to defense response to fungus, incompatible interaction - There are 16 annotations from
Proteomics of the response of Arabidopsis thaliana to infection with Alternaria brassicicola.
Abstract
We have studied the proteome of the model plant Arabidopsis thaliana infected with a necrotrophic fungal pathogen, Alternaria brassicicola. The Arabidopsis-A. brassicicola host-pathogen pair is being developed as a model genetic system for incompatible plant-fungal interactions, in which the spread of disease is limited by plant defense responses. After confirming that a defense response was induced at the transcriptional level, we identified proteins whose abundance on 2-DE gels increased or decreased in infected leaves. At least 11 protein spots showed reproducible differences in abundance, increasing or decreasing during the progress of the infection. The pathogenesis-related protein PR4, a glycosyl hydrolase, and the antifungal protein osmotin are strongly up-regulated. Two members of the Arabidopsis glutathione S-transferase (GST) family increased in abundance in infected leaves. The spots in which these GST proteins were identified contain additional members of the GST family. Representation of GST family members in several protein spots migrating at similar molecular weight suggests post-translational modifications. The signature of GST regulation may be specific for the type of plant-pathogen interaction. The proteomic view of the defense response to A. brassicicola can be compared with other types of plant-pathogen interactions, and to leaf senescence, identifying unique regulatory patterns.
PubMed
PMID:19857612
a) this is really just "defense response to fungus" see
https://github.com/geneontology/go-ontology/issues/18738
The "incompatibe interaction" part just means the pathogen is able to detect this specific fungus
b)
using IDA seems a stretch it seems that HDA would be more appropriate? | process | annotations to defense response to fungus incompatible interaction there are annotations from proteomics of the response of arabidopsis thaliana to infection with alternaria brassicicola abstract we have studied the proteome of the model plant arabidopsis thaliana infected with a necrotrophic fungal pathogen alternaria brassicicola the arabidopsis a brassicicola host pathogen pair is being developed as a model genetic system for incompatible plant fungal interactions in which the spread of disease is limited by plant defense responses after confirming that a defense response was induced at the transcriptional level we identified proteins whose abundance on de gels increased or decreased in infected leaves at least protein spots showed reproducible differences in abundance increasing or decreasing during the progress of the infection the pathogenesis related protein a glycosyl hydrolase and the antifungal protein osmotin are strongly up regulated two members of the arabidopsis glutathione s transferase gst family increased in abundance in infected leaves the spots in which these gst proteins were identified contain additional members of the gst family representation of gst family members in several protein spots migrating at similar molecular weight suggests post translational modifications the signature of gst regulation may be specific for the type of plant pathogen interaction the proteomic view of the defense response to a brassicicola can be compared with other types of plant pathogen interactions and to leaf senescence identifying unique regulatory patterns pubmed pmid a this is really just defense response to fungus see the incompatibe interaction part just means the pathogen is able to detect this specific fungus b using ida seems a stretch it seems that hda would be more appropriate | 1 |
6,706 | 9,815,575,684 | IssuesEvent | 2019-06-13 12:58:01 | linnovate/root | https://api.github.com/repos/linnovate/root | closed | in tasks and projects, "no select" option when selecting an assignee changes the status to assigned | 2.0.7 Fixed Process bug Projects Tasks | create a new task/ project
click on select assignee
click on "no select"
the status is changed to assigned

| 1.0 | in tasks and projects, "no select" option when selecting an assignee changes the status to assigned - create a new task/ project
click on select assignee
click on "no select"
the status is changed to assigned

| process | in tasks and projects no select option when selecting an assignee changes the status to assigned create a new task project click on select assignee click on no select the status is changed to assigned | 1 |
18,544 | 24,555,140,263 | IssuesEvent | 2022-10-12 15:19:05 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] Comprehension test questions > Participant is automatically navigating to the next screen in the following scenario | Bug P1 iOS Process: Fixed Process: Tested dev | Steps:
1. SB> Add or edit study > Go to comprehension section
2. Add a comprehension test question by selecting 'any of the answer options marked 'correct' above' button
3. Launch/publish the study
4. Sign u or sign in to the mobile app
5. Enroll to the study
6. After navigating the comprehension test screen, select the answer option and observe
AR: Participant is automatically navigating to the next screen
ER: Participant should stay on the same screen, participant should navigate to the next screen once he/she clicks on the 'Next' button
https://user-images.githubusercontent.com/71445210/188446247-8783ae4a-a718-4ab7-9014-087aa70f33ae.MOV
| 2.0 | [iOS] Comprehension test questions > Participant is automatically navigating to the next screen in the following scenario - Steps:
1. SB> Add or edit study > Go to comprehension section
2. Add a comprehension test question by selecting 'any of the answer options marked 'correct' above' button
3. Launch/publish the study
4. Sign u or sign in to the mobile app
5. Enroll to the study
6. After navigating the comprehension test screen, select the answer option and observe
AR: Participant is automatically navigating to the next screen
ER: Participant should stay on the same screen, participant should navigate to the next screen once he/she clicks on the 'Next' button
https://user-images.githubusercontent.com/71445210/188446247-8783ae4a-a718-4ab7-9014-087aa70f33ae.MOV
| process | comprehension test questions participant is automatically navigating to the next screen in the following scenario steps sb add or edit study go to comprehension section add a comprehension test question by selecting any of the answer options marked correct above button launch publish the study sign u or sign in to the mobile app enroll to the study after navigating the comprehension test screen select the answer option and observe ar participant is automatically navigating to the next screen er participant should stay on the same screen participant should navigate to the next screen once he she clicks on the next button | 1 |
1,635 | 6,572,661,671 | IssuesEvent | 2017-09-11 04:11:06 | ansible/ansible-modules-extras | https://api.github.com/repos/ansible/ansible-modules-extras | closed | ovirt_vms: Support multiple Cloud-Init NICs | affects_2.3 cloud feature_idea waiting_on_maintainer | ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ovirt_vms
##### SUMMARY
Would it be possible to support multiple network interfaces for Cloud-Init? I could write a pull request, but I'm not sure how it should work:
1. Support ip/netmask/gateway etc in the nics option that is already there (not cloud-init)
2. Add an optional nics dictionary list as a subitem of cloud-init
3. Some as above but remove the current nic_* items | True | ovirt_vms: Support multiple Cloud-Init NICs - ##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
ovirt_vms
##### SUMMARY
Would it be possible to support multiple network interfaces for Cloud-Init? I could write a pull request, but I'm not sure how it should work:
1. Support ip/netmask/gateway etc in the nics option that is already there (not cloud-init)
2. Add an optional nics dictionary list as a subitem of cloud-init
3. Some as above but remove the current nic_* items | non_process | ovirt vms support multiple cloud init nics issue type feature idea component name ovirt vms summary would it be possible to support multiple network interfaces for cloud init i could write a pull request but i m not sure how it should work support ip netmask gateway etc in the nics option that is already there not cloud init add an optional nics dictionary list as a subitem of cloud init some as above but remove the current nic items | 0 |
13,052 | 15,389,331,761 | IssuesEvent | 2021-03-03 11:57:34 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | closed | GO:0007050 cell cycle arrest | PomBase cell cycle and DNA processes | GO:0007050 cell cycle arrest
Definition
A regulatory process that halts progression through the cell cycle during one of the normal phases (G1, S, G2, M).
arrest of cell cycle progression | exact
cessation of cell cycle | exact
termination of cell cycle | exact
This term is only for use with cell which are permanently arrested as part of a normal process, or where the cell cycle is exited for some other reason (i.e. to enter meiosis/ apoptosis/senescence).
~The descendant term
GO:0071850 mitotic cell cycle arrest
is equivalent. I don't think there is a meiotic equivalent 'normal ' process.~
Could we
1. Make this term not direct annotation ? (on its own it isn't very useful because the context is very different)
2. Add a comment that it is only for complete arrest (not transient) or cell cycle exit and not for cell cycle checkpoints which halt the cycle temporarily in order to fix a problem (if a problem cannot be fixed, arrest will result, but this is an undesirable phenotype).
~3. Merge GO:0007050 cell cycle arrest and GO:0071850 mitotic cell cycle arrest~
from
https://github.com/geneontology/go-annotation/issues/3647
| 1.0 | GO:0007050 cell cycle arrest - GO:0007050 cell cycle arrest
Definition
A regulatory process that halts progression through the cell cycle during one of the normal phases (G1, S, G2, M).
arrest of cell cycle progression | exact
cessation of cell cycle | exact
termination of cell cycle | exact
This term is only for use with cell which are permanently arrested as part of a normal process, or where the cell cycle is exited for some other reason (i.e. to enter meiosis/ apoptosis/senescence).
~The descendant term
GO:0071850 mitotic cell cycle arrest
is equivalent. I don't think there is a meiotic equivalent 'normal ' process.~
Could we
1. Make this term not direct annotation ? (on its own it isn't very useful because the context is very different)
2. Add a comment that it is only for complete arrest (not transient) or cell cycle exit and not for cell cycle checkpoints which halt the cycle temporarily in order to fix a problem (if a problem cannot be fixed, arrest will result, but this is an undesirable phenotype).
~3. Merge GO:0007050 cell cycle arrest and GO:0071850 mitotic cell cycle arrest~
from
https://github.com/geneontology/go-annotation/issues/3647
| process | go cell cycle arrest go cell cycle arrest definition a regulatory process that halts progression through the cell cycle during one of the normal phases s m arrest of cell cycle progression exact cessation of cell cycle exact termination of cell cycle exact this term is only for use with cell which are permanently arrested as part of a normal process or where the cell cycle is exited for some other reason i e to enter meiosis apoptosis senescence the descendant term go mitotic cell cycle arrest is equivalent i don t think there is a meiotic equivalent normal process could we make this term not direct annotation on its own it isn t very useful because the context is very different add a comment that it is only for complete arrest not transient or cell cycle exit and not for cell cycle checkpoints which halt the cycle temporarily in order to fix a problem if a problem cannot be fixed arrest will result but this is an undesirable phenotype merge go cell cycle arrest and go mitotic cell cycle arrest from | 1 |
710,575 | 24,423,413,525 | IssuesEvent | 2022-10-05 22:57:47 | mikezimm/drilldown7 | https://api.github.com/repos/mikezimm/drilldown7 | closed | Only get list definitions when the prop pane is loaded | enhancement High Priority complete | ## See SecureScript7 for code to load on propPaneStart
Maybe try loading _getListDefinitions at time of opening prop pane.

| 1.0 | Only get list definitions when the prop pane is loaded - ## See SecureScript7 for code to load on propPaneStart
Maybe try loading _getListDefinitions at time of opening prop pane.

| non_process | only get list definitions when the prop pane is loaded see for code to load on proppanestart maybe try loading getlistdefinitions at time of opening prop pane | 0 |
21,984 | 30,482,266,287 | IssuesEvent | 2023-07-17 21:24:53 | bisq-network/proposals | https://api.github.com/repos/bisq-network/proposals | closed | Safely compensate traders requiring arbitration | was:approved a:proposal re:processes | ### Rationale
After #386, the arbitrator does not pay one security deposit for security reasons, as an evil burningman could take profit from creating self trades. This is bad for the traders, who need to wait a long time to be reimbursed, and get no compensation for the time spent and the inconveniences caused. The security deposit is the amount that grants that a rogue burningman can't have a profit when 15% security deposit is used, but when traders set higher security deposits, it's possible to compensate the winner without putting the DAO in danger.
It needs to be considered that arbitration must be the last resource for traders, and that [distributing burningman](https://github.com/bisq-network/proposals/issues/385) role and its effects on the arbitrator should have a higher cost that needs to be subsidized with the security deposit from the peer who causes a trade going to arbitration. With that in mind, the compensated amount should not be very high.
### Proposal
Compensate the traders requiring arbitration with half of the extra amount that makes a burningman to be breakeven or have a small loss if he tried to create a self trade to take profit from the reimbursement.
### Details
I've checked with @refund-agent2 ond other contributors, and a reimbursement table like this could be applied safely:
Sec deposit % | Breakeven % | Compensation % | % BM keep
-- | -- | -- | --
17.5 | 2.65 | 1.5 | 16
20 | 4.6 | 3.00 | 17
25 | 8.5 | 4.00 | 21
30 | 12.4 | 6.00 | 24
35 | 16.3 | 8.00 | 27
40 | 20.2 | 10.00 | 30
45 | 24.1 | 12.00 | 33
50 | 28 | 14.00 | 36
100 | 67 | 35.00 | 65
The % always refers to the trade amount. A trade using 18% as security deposit would get 1.5% trade amount as compensation and another one using 22% would get a 3%.
If this proposal has rough consensus, the payments could be automatically calculated, but this table gives an idea about what traders and the DAO should expect, and could be implemented sooner without waiting for a PR. | 1.0 | Safely compensate traders requiring arbitration - ### Rationale
After #386, the arbitrator does not pay one security deposit for security reasons, as an evil burningman could take profit from creating self trades. This is bad for the traders, who need to wait a long time to be reimbursed, and get no compensation for the time spent and the inconveniences caused. The security deposit is the amount that grants that a rogue burningman can't have a profit when 15% security deposit is used, but when traders set higher security deposits, it's possible to compensate the winner without putting the DAO in danger.
It needs to be considered that arbitration must be the last resource for traders, and that [distributing burningman](https://github.com/bisq-network/proposals/issues/385) role and its effects on the arbitrator should have a higher cost that needs to be subsidized with the security deposit from the peer who causes a trade going to arbitration. With that in mind, the compensated amount should not be very high.
### Proposal
Compensate the traders requiring arbitration with half of the extra amount that makes a burningman to be breakeven or have a small loss if he tried to create a self trade to take profit from the reimbursement.
### Details
I've checked with @refund-agent2 ond other contributors, and a reimbursement table like this could be applied safely:
Sec deposit % | Breakeven % | Compensation % | % BM keep
-- | -- | -- | --
17.5 | 2.65 | 1.5 | 16
20 | 4.6 | 3.00 | 17
25 | 8.5 | 4.00 | 21
30 | 12.4 | 6.00 | 24
35 | 16.3 | 8.00 | 27
40 | 20.2 | 10.00 | 30
45 | 24.1 | 12.00 | 33
50 | 28 | 14.00 | 36
100 | 67 | 35.00 | 65
The % always refers to the trade amount. A trade using 18% as security deposit would get 1.5% trade amount as compensation and another one using 22% would get a 3%.
If this proposal has rough consensus, the payments could be automatically calculated, but this table gives an idea about what traders and the DAO should expect, and could be implemented sooner without waiting for a PR. | process | safely compensate traders requiring arbitration rationale after the arbitrator does not pay one security deposit for security reasons as an evil burningman could take profit from creating self trades this is bad for the traders who need to wait a long time to be reimbursed and get no compensation for the time spent and the inconveniences caused the security deposit is the amount that grants that a rogue burningman can t have a profit when security deposit is used but when traders set higher security deposits it s possible to compensate the winner without putting the dao in danger it needs to be considered that arbitration must be the last resource for traders and that role and its effects on the arbitrator should have a higher cost that needs to be subsidized with the security deposit from the peer who causes a trade going to arbitration with that in mind the compensated amount should not be very high proposal compensate the traders requiring arbitration with half of the extra amount that makes a burningman to be breakeven or have a small loss if he tried to create a self trade to take profit from the reimbursement details i ve checked with refund ond other contributors and a reimbursement table like this could be applied safely sec deposit breakeven compensation bm keep the always refers to the trade amount a trade using as security deposit would get trade amount as compensation and another one using would get a if this proposal has rough consensus the payments could be automatically calculated but this table gives an idea about what traders and the dao should expect and could be implemented sooner without waiting for a pr | 1 |
540,884 | 15,818,961,580 | IssuesEvent | 2021-04-05 16:47:43 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | Investigate PSet configuration upload to central CouchDB | CouchDB Medium Priority New Feature Python3 ToDo | **Impact of the new feature**
ReqMgr2 / CouchDB
**Is your feature request related to a problem? Please describe.**
Yes, it's related to the tightly coupling of WMCore, WMControl and CMSSW, and the complications affecting the support of multiple python versions and job configuration (Pset) upload to the WM system.
**Describe the solution you'd like**
This is more like an investigation issue, where we are supposed to track all the required code to upload a PSet configuration to central couch.
Then we need to evaluate whether all of that is really needed.
Last but not least, we could identify other possible options that we could implement for the long run.
**Describe alternatives you've considered**
Possible alternatives will likely be:
* implement a REST API to take a job configuration input and persist it in CouchDB
* create a standalone library, which could live in a different place than WMCore
**Additional context**
This is where we had our initial discussion: https://indico.cern.ch/event/935517/ | 1.0 | Investigate PSet configuration upload to central CouchDB - **Impact of the new feature**
ReqMgr2 / CouchDB
**Is your feature request related to a problem? Please describe.**
Yes, it's related to the tightly coupling of WMCore, WMControl and CMSSW, and the complications affecting the support of multiple python versions and job configuration (Pset) upload to the WM system.
**Describe the solution you'd like**
This is more like an investigation issue, where we are supposed to track all the required code to upload a PSet configuration to central couch.
Then we need to evaluate whether all of that is really needed.
Last but not least, we could identify other possible options that we could implement for the long run.
**Describe alternatives you've considered**
Possible alternatives will likely be:
* implement a REST API to take a job configuration input and persist it in CouchDB
* create a standalone library, which could live in a different place than WMCore
**Additional context**
This is where we had our initial discussion: https://indico.cern.ch/event/935517/ | non_process | investigate pset configuration upload to central couchdb impact of the new feature couchdb is your feature request related to a problem please describe yes it s related to the tightly coupling of wmcore wmcontrol and cmssw and the complications affecting the support of multiple python versions and job configuration pset upload to the wm system describe the solution you d like this is more like an investigation issue where we are supposed to track all the required code to upload a pset configuration to central couch then we need to evaluate whether all of that is really needed last but not least we could identify other possible options that we could implement for the long run describe alternatives you ve considered possible alternatives will likely be implement a rest api to take a job configuration input and persist it in couchdb create a standalone library which could live in a different place than wmcore additional context this is where we had our initial discussion | 0 |
771,231 | 27,075,857,307 | IssuesEvent | 2023-02-14 10:31:13 | sygmaprotocol/sygma-fee-oracle | https://api.github.com/repos/sygmaprotocol/sygma-fee-oracle | opened | Refactor conversion rate pairs property | Priority: P2 | <!--- Provide a general summary of the issue in the Title above -->
Currently, fee oracle requires us to define all possible conversion pairs manually. This, paired with the format of this configuration property, makes it really hard to maintain this in any real production environment.
It would be ideal for removing this as a configuration property fully.
## Implementation details
<!-- Enter description of implementation that may help dev team -->
I see two approaches here:
_To further facilitate these approaches, we can expand the shared configuration for each individual resource.
Add property `feeType - "basic" / "oracle"`. This would mark all tokens that are supported by fee oracle_
1) Currently, we are calculating rates for each defined pair and persisting it. We could only fetch the value of each supported token (in $) and persist that value; then, when someone requires specific pair, we just divide $ values to get the rate. This would significantly reduce complexity, as adding a new token to fee oracle requires us just to add one more query toward a service like coinmarketcap. The biggest con to this approach is that we are using $ price as a medium for calculating rate (but probably services like coinmarketcap are also using this under the hood).
2) Second approach would be to parse shared config, extract tokens that are marked to use oracle, and then query rates for all possible pairs between them. This approach removes the necessity for us to define all the pairs for fee oracle manually but still has this problem of rapidly increasing the number of queries as we start supporting new tokens.
cc: @freddyli7 @P1sar @mpetrun5, what do you think about this? I am leaning more toward the 1) option.
## Testing details
<!-- Enter description of special test-cases-->
TBD
## Acceptance Criteria
<!-- Enter the conditions of satisfaction here. That is, the conditions that will satisfy the user/persona that the goal/benefit/value has been achieved -->
TBD | 1.0 | Refactor conversion rate pairs property - <!--- Provide a general summary of the issue in the Title above -->
Currently, fee oracle requires us to define all possible conversion pairs manually. This, paired with the format of this configuration property, makes it really hard to maintain this in any real production environment.
It would be ideal for removing this as a configuration property fully.
## Implementation details
<!-- Enter description of implementation that may help dev team -->
I see two approaches here:
_To further facilitate these approaches, we can expand the shared configuration for each individual resource.
Add property `feeType - "basic" / "oracle"`. This would mark all tokens that are supported by fee oracle_
1) Currently, we are calculating rates for each defined pair and persisting it. We could only fetch the value of each supported token (in $) and persist that value; then, when someone requires specific pair, we just divide $ values to get the rate. This would significantly reduce complexity, as adding a new token to fee oracle requires us just to add one more query toward a service like coinmarketcap. The biggest con to this approach is that we are using $ price as a medium for calculating rate (but probably services like coinmarketcap are also using this under the hood).
2) Second approach would be to parse shared config, extract tokens that are marked to use oracle, and then query rates for all possible pairs between them. This approach removes the necessity for us to define all the pairs for fee oracle manually but still has this problem of rapidly increasing the number of queries as we start supporting new tokens.
cc: @freddyli7 @P1sar @mpetrun5, what do you think about this? I am leaning more toward the 1) option.
## Testing details
<!-- Enter description of special test-cases-->
TBD
## Acceptance Criteria
<!-- Enter the conditions of satisfaction here. That is, the conditions that will satisfy the user/persona that the goal/benefit/value has been achieved -->
TBD | non_process | refactor conversion rate pairs property currently fee oracle requires us to define all possible conversion pairs manually this paired with the format of this configuration property makes it really hard to maintain this in any real production environment it would be ideal for removing this as a configuration property fully implementation details i see two approaches here to further facilitate these approaches we can expand the shared configuration for each individual resource add property feetype basic oracle this would mark all tokens that are supported by fee oracle currently we are calculating rates for each defined pair and persisting it we could only fetch the value of each supported token in and persist that value then when someone requires specific pair we just divide values to get the rate this would significantly reduce complexity as adding a new token to fee oracle requires us just to add one more query toward a service like coinmarketcap the biggest con to this approach is that we are using price as a medium for calculating rate but probably services like coinmarketcap are also using this under the hood second approach would be to parse shared config extract tokens that are marked to use oracle and then query rates for all possible pairs between them this approach removes the necessity for us to define all the pairs for fee oracle manually but still has this problem of rapidly increasing the number of queries as we start supporting new tokens cc what do you think about this i am leaning more toward the option testing details tbd acceptance criteria tbd | 0 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.