Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,536 | 13,311,772,054 | IssuesEvent | 2020-08-26 08:46:37 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Not prettified error `This line is invalid. It does not start with any known Prisma schema keyword.` in introspection for invalid schema. | kind/improvement process/candidate topic: errors topic: introspection | ## Bug description
I get this strange, not prettified error in introspection (with or without reintrospect flag):
```
j42@Pluto ~/D/p/s/p/introspection> env DEBUG="*" ts-node src/bin.ts introspect --experimental-reintrospection
Introspecting based on datasource defined in schema.prisma …
IntrospectionEngine:rpc starting introspection engine with binary: /Users/j42/Dev/prisma/src/packages/sdk/introspection-engine-darwin +0ms
IntrospectionEngine:rpc SENDING RPC CALL {"id":1,"jsonrpc":"2.0","method":"introspect","params":[{"schema":"datasource db {\n provider = \"postgresql\"\n url = \"postgres://timsu@localhost:5432/j42\"\n}\nasfds\nmodel stories {\n id Int @default(autoincrement()) @id\n tags Json\n title String\n}\n","reintrospect":true}]} +8ms
IntrospectionEngine:rpc {
IntrospectionEngine:rpc jsonrpc: '2.0',
IntrospectionEngine:rpc error: {
IntrospectionEngine:rpc code: 4466,
IntrospectionEngine:rpc message: 'An error happened. Check the data field for details.',
IntrospectionEngine:rpc data: {
IntrospectionEngine:rpc is_panic: false,
IntrospectionEngine:rpc message: 'Error in datamodel: ErrorCollection { errors: [ValidationError { message: "This line is invalid. It does not start with any known Prisma schema keyword.", span: Span { start: 95, end: 101 } }] }',
IntrospectionEngine:rpc backtrace: null
IntrospectionEngine:rpc }
IntrospectionEngine:rpc },
IntrospectionEngine:rpc id: 1
IntrospectionEngine:rpc } +9ms
Error: Error in datamodel: ErrorCollection { errors: [ValidationError { message: "This line is invalid. It does not start with any known Prisma schema keyword.", span: Span { start: 95, end: 101 } }] }
```
## How to reproduce
Example schema.prisma to get this error
```
ssss
```
and run `prisma introspect`
From @janpio
> That sounds like a bug in the validator, if it finds just an invalid/unknown term it seems to not respond with the expected error message but something else.
Originally found in https://github.com/prisma/prisma/issues/3323#issuecomment-680121955 | 1.0 | Not prettified error `This line is invalid. It does not start with any known Prisma schema keyword.` in introspection for invalid schema. - ## Bug description
I get this strange, not prettified error in introspection (with or without reintrospect flag):
```
j42@Pluto ~/D/p/s/p/introspection> env DEBUG="*" ts-node src/bin.ts introspect --experimental-reintrospection
Introspecting based on datasource defined in schema.prisma …
IntrospectionEngine:rpc starting introspection engine with binary: /Users/j42/Dev/prisma/src/packages/sdk/introspection-engine-darwin +0ms
IntrospectionEngine:rpc SENDING RPC CALL {"id":1,"jsonrpc":"2.0","method":"introspect","params":[{"schema":"datasource db {\n provider = \"postgresql\"\n url = \"postgres://timsu@localhost:5432/j42\"\n}\nasfds\nmodel stories {\n id Int @default(autoincrement()) @id\n tags Json\n title String\n}\n","reintrospect":true}]} +8ms
IntrospectionEngine:rpc {
IntrospectionEngine:rpc jsonrpc: '2.0',
IntrospectionEngine:rpc error: {
IntrospectionEngine:rpc code: 4466,
IntrospectionEngine:rpc message: 'An error happened. Check the data field for details.',
IntrospectionEngine:rpc data: {
IntrospectionEngine:rpc is_panic: false,
IntrospectionEngine:rpc message: 'Error in datamodel: ErrorCollection { errors: [ValidationError { message: "This line is invalid. It does not start with any known Prisma schema keyword.", span: Span { start: 95, end: 101 } }] }',
IntrospectionEngine:rpc backtrace: null
IntrospectionEngine:rpc }
IntrospectionEngine:rpc },
IntrospectionEngine:rpc id: 1
IntrospectionEngine:rpc } +9ms
Error: Error in datamodel: ErrorCollection { errors: [ValidationError { message: "This line is invalid. It does not start with any known Prisma schema keyword.", span: Span { start: 95, end: 101 } }] }
```
## How to reproduce
Example schema.prisma to get this error
```
ssss
```
and run `prisma introspect`
From @janpio
> That sounds like a bug in the validator, if it finds just an invalid/unknown term it seems to not respond with the expected error message but something else.
Originally found in https://github.com/prisma/prisma/issues/3323#issuecomment-680121955 | process | not prettified error this line is invalid it does not start with any known prisma schema keyword in introspection for invalid schema bug description i get this strange not prettified error in introspection with or without reintrospect flag pluto d p s p introspection env debug ts node src bin ts introspect experimental reintrospection introspecting based on datasource defined in schema prisma … introspectionengine rpc starting introspection engine with binary users dev prisma src packages sdk introspection engine darwin introspectionengine rpc sending rpc call id jsonrpc method introspect params introspectionengine rpc introspectionengine rpc jsonrpc introspectionengine rpc error introspectionengine rpc code introspectionengine rpc message an error happened check the data field for details introspectionengine rpc data introspectionengine rpc is panic false introspectionengine rpc message error in datamodel errorcollection errors introspectionengine rpc backtrace null introspectionengine rpc introspectionengine rpc introspectionengine rpc id introspectionengine rpc error error in datamodel errorcollection errors how to reproduce example schema prisma to get this error ssss and run prisma introspect from janpio that sounds like a bug in the validator if it finds just an invalid unknown term it seems to not respond with the expected error message but something else originally found in | 1 |
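The validator error in the report above carries only a byte-offset span (`Span { start: 95, end: 101 }`) rather than a line/column pointer. A minimal, hypothetical sketch of how a prettifier could map such a span back to a position in the schema text — the schema and helper below are illustrative, not Prisma's actual code:

```python
# Hypothetical prettifier sketch: translate a Span's byte offset into a
# 1-based line/column position in the schema text, so the invalid token
# can be pointed at directly. Illustrative only, not Prisma's formatter.
schema = (
    "datasource db {\n"
    '  provider = "postgresql"\n'
    '  url      = "postgres://user@localhost:5432/db"\n'
    "}\n"
    "asfds\n"
    "model stories {\n"
    "  id Int\n"
    "}\n"
)

def span_to_location(text, start):
    line = text.count("\n", 0, start) + 1                # 1-based line
    column = start - (text.rfind("\n", 0, start) + 1) + 1
    return line, column

start = schema.index("asfds")  # stand-in for the span's start offset
line, column = span_to_location(schema, start)
print(f"error at {line}:{column}: {schema.splitlines()[line - 1]!r}")
# → error at 5:1: 'asfds'
```

Applied to the real schema from the report, the same arithmetic would resolve the reported offsets to the line containing the invalid `asfds` token.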
255,653 | 27,488,305,168 | IssuesEvent | 2023-03-04 10:03:36 | ckt1031/cktidy-manager | https://api.github.com/repos/ckt1031/cktidy-manager | opened | mobile-1.0.7.tgz: 1 vulnerabilities (highest severity is: 9.8) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mobile-1.0.7.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/ckt1031/cktidy-manager/commit/e62a307bf1172baba18275dc2a271a4600684b53">e62a307bf1172baba18275dc2a271a4600684b53</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (mobile version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-23440](https://www.mend.io/vulnerability-database/CVE-2021-23440) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | set-value-2.0.1.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-23440</summary>
### Vulnerable Library - <b>set-value-2.0.1.tgz</b></p>
<p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p>
<p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz</a></p>
<p>
Dependency Hierarchy:
- mobile-1.0.7.tgz (Root Library)
- react-native-0.71.3.tgz
- react-native-codegen-0.71.5.tgz
- jscodeshift-0.13.1.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- cache-base-1.0.1.tgz
- :x: **set-value-2.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ckt1031/cktidy-manager/commit/e62a307bf1172baba18275dc2a271a4600684b53">e62a307bf1172baba18275dc2a271a4600684b53</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays.
Mend Note: After conducting further research, Mend has determined that all versions of set-value up to version 4.0.0 are vulnerable to CVE-2021-23440.
<p>Publish Date: 2021-09-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23440>CVE-2021-23440</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-09-12</p>
<p>Fix Resolution: set-value - 4.0.1
</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | True | mobile-1.0.7.tgz: 1 vulnerabilities (highest severity is: 9.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mobile-1.0.7.tgz</b></p></summary>
<p></p>
<p>Path to dependency file: /package.json</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/ckt1031/cktidy-manager/commit/e62a307bf1172baba18275dc2a271a4600684b53">e62a307bf1172baba18275dc2a271a4600684b53</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (mobile version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2021-23440](https://www.mend.io/vulnerability-database/CVE-2021-23440) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 9.8 | set-value-2.0.1.tgz | Transitive | N/A* | ❌ |
<p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p>
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-23440</summary>
### Vulnerable Library - <b>set-value-2.0.1.tgz</b></p>
<p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p>
<p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz</a></p>
<p>
Dependency Hierarchy:
- mobile-1.0.7.tgz (Root Library)
- react-native-0.71.3.tgz
- react-native-codegen-0.71.5.tgz
- jscodeshift-0.13.1.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- cache-base-1.0.1.tgz
- :x: **set-value-2.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ckt1031/cktidy-manager/commit/e62a307bf1172baba18275dc2a271a4600684b53">e62a307bf1172baba18275dc2a271a4600684b53</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
This affects the package set-value before <2.0.1, >=3.0.0 <4.0.1. A type confusion vulnerability can lead to a bypass of CVE-2019-10747 when the user-provided keys used in the path parameter are arrays.
Mend Note: After conducting further research, Mend has determined that all versions of set-value up to version 4.0.0 are vulnerable to CVE-2021-23440.
<p>Publish Date: 2021-09-12
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-23440>CVE-2021-23440</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>9.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-09-12</p>
<p>Fix Resolution: set-value - 4.0.1
</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details> | non_process | mobile tgz vulnerabilities highest severity is vulnerable library mobile tgz path to dependency file package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in mobile version remediation available high set value tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href dependency hierarchy mobile tgz root library react native tgz react native codegen tgz jscodeshift tgz micromatch tgz snapdragon tgz base tgz cache base tgz x set value tgz vulnerable library found in head commit a href found in base branch main vulnerability details this affects the package set value before a type confusion vulnerability can lead to a bypass of cve when the user provided keys used in the path parameter are arrays mend note after conducting further research mend has determined that all versions of set value up to version are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution set value step up your open source security game with mend | 0 |
6,493 | 9,559,778,800 | IssuesEvent | 2019-05-03 17:41:25 | aiidateam/aiida_core | https://api.github.com/repos/aiidateam/aiida_core | closed | Use function name/docstring for label/description of CalcFunctioNode | priority/nice-to-have topic/processes type/accepted feature | When running a calcfunction, the name of the function (here: `testing`) is shown as the process label in the `verdi process list` command. This is obviously very useful information.
```
$ verdi process list -a
...
104 7s ago ⏹ Finished [0] testing
...
```
However, when looking at the generated `CalcFunctionNode` using `verdi node show`, label and description are empty.
```
$ verdi node show 104
\/Users/leopold/Applications/miniconda3/envs/aiida_rmq_py3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
Property Value
------------- ------------------------------------
type CalcFunctionNode
pk 104
uuid dccc4421-4349-46cd-9203-41e485bcff0d
label
description
ctime 2019-04-01 12:13:02.502981+00:00
mtime 2019-04-01 12:13:02.902249+00:00
process state ProcessState.FINISHED
exit status 0
Inputs PK Type
-------- ---- ------
a 103 Int
Outputs PK Type
--------- ---- ------
result 105 Int
```
I propose to populate the fields
* `label` by the name of the function
* `description` by the docstring of the function (if exists) | 1.0 | Use function name/docstring for label/description of CalcFunctioNode - When running a calcfunction, the name of the function (here: `testing`) is shown as the process label in the `verdi process list` command. This is obviously very useful information.
```
$ verdi process list -a
...
104 7s ago ⏹ Finished [0] testing
...
```
However, when looking at the generated `CalcFunctionNode` using `verdi node show`, label and description are empty.
```
$ verdi node show 104
\/Users/leopold/Applications/miniconda3/envs/aiida_rmq_py3/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
Property Value
------------- ------------------------------------
type CalcFunctionNode
pk 104
uuid dccc4421-4349-46cd-9203-41e485bcff0d
label
description
ctime 2019-04-01 12:13:02.502981+00:00
mtime 2019-04-01 12:13:02.902249+00:00
process state ProcessState.FINISHED
exit status 0
Inputs PK Type
-------- ---- ------
a 103 Int
Outputs PK Type
--------- ---- ------
result 105 Int
```
I propose to populate the fields
* `label` by the name of the function
* `description` by the docstring of the function (if exists) | process | use function name docstring for label description of calcfunctionode when running a calcfunction the name of the function here testing is shown as the process label in the verdi process list command this is obviously very useful information verdi process list a ago ⏹ finished testing however when looking at the generated calcfunctionnode using verdi node show label and description are empty verdi node show users leopold applications envs aiida rmq lib site packages init py userwarning the wheel package will be renamed from release in order to keep installing from binary please use pip install binary instead for details see property value type calcfunctionnode pk uuid label description ctime mtime process state processstate finished exit status inputs pk type a int outputs pk type result int i propose to populate the fields label by the name of the function description by the docstring of the function if exists | 1 |
19,614 | 25,969,368,544 | IssuesEvent | 2022-12-19 09:58:50 | didi/mpx | https://api.github.com/repos/didi/mpx | closed | [Bug report] 当使用 tailwindcss 插件开发页面时,文件热重载没有触发 tailwindcss,导致 app.wxss 没有重新生成从而 tailwindcss 失效 | processing | **问题描述**
请用简洁的语言描述你遇到的bug,至少包括以下部分,如提供截图请尽量完整:
1. 使用 postcss tailwindcss 插件开发页面时,保存 `pages/*.mpx` ,`components/*.mpx` 这类的文件,HMR 并没有触发 tailwindcss 插件,导致 app.wxss 没有重新生成。
2. 保存 `app.mpx`/ `tailwind.config.js` / `package.json` 会走全量,重新生成 `app.wxss`
**环境信息描述**
至少包含以下部分:
1. Windows11
3. Mpx 2.8 (LTS) 刚从官网教程里新建的
4. 小程序平台 wx 、开发者工具版本: 微信开发者工具 1.062209190 LTS.、基础库版本:2.28.0
**最简复现demo**
https://github.com/sonofmagic/weapp-tailwindcss-webpack-plugin/tree/feature/mpx-demo/demo/mpx-app
先执行 yarn ,
yarn serve
此时全量情况下 `pages/index.mpx` 中的 `text-[#3f6e47]` 样式顺利生成
更改为 `text-[#aa88cc]` 保存。
此时 `dist/wx/app.wxss` 并没有重新生成,也没有生成 `text-[#aa88cc]` 的工具类。
后来我进行了调试,发现在hmr下
全量情况,进入了 `tailwindcss` 源码中:
单个保存页面文件的情况下,居然没有命中 `tailwindcss` 的入口断点,
所以前来请教一下
| 1.0 | [Bug report] 当使用 tailwindcss 插件开发页面时,文件热重载没有触发 tailwindcss,导致 app.wxss 没有重新生成从而 tailwindcss 失效 - **问题描述**
请用简洁的语言描述你遇到的bug,至少包括以下部分,如提供截图请尽量完整:
1. 使用 postcss tailwindcss 插件开发页面时,保存 `pages/*.mpx` ,`components/*.mpx` 这类的文件,HMR 并没有触发 tailwindcss 插件,导致 app.wxss 没有重新生成。
2. 保存 `app.mpx`/ `tailwind.config.js` / `package.json` 会走全量,重新生成 `app.wxss`
**环境信息描述**
至少包含以下部分:
1. Windows11
3. Mpx 2.8 (LTS) 刚从官网教程里新建的
4. 小程序平台 wx 、开发者工具版本: 微信开发者工具 1.062209190 LTS.、基础库版本:2.28.0
**最简复现demo**
https://github.com/sonofmagic/weapp-tailwindcss-webpack-plugin/tree/feature/mpx-demo/demo/mpx-app
先执行 yarn ,
yarn serve
此时全量情况下 `pages/index.mpx` 中的 `text-[#3f6e47]` 样式顺利生成
更改为 `text-[#aa88cc]` 保存。
此时 `dist/wx/app.wxss` 并没有重新生成,也没有生成 `text-[#aa88cc]` 的工具类。
后来我进行了调试,发现在hmr下
全量情况,进入了 `tailwindcss` 源码中:
单个保存页面文件的情况下,居然没有命中 `tailwindcss` 的入口断点,
所以前来请教一下
| process | 当使用 tailwindcss 插件开发页面时,文件热重载没有触发 tailwindcss,导致 app wxss 没有重新生成从而 tailwindcss 失效 问题描述 请用简洁的语言描述你遇到的bug,至少包括以下部分,如提供截图请尽量完整: 使用 postcss tailwindcss 插件开发页面时,保存 pages mpx components mpx 这类的文件,hmr 并没有触发 tailwindcss 插件,导致 app wxss 没有重新生成。 保存 app mpx tailwind config js package json 会走全量,重新生成 app wxss 环境信息描述 至少包含以下部分: mpx lts 刚从官网教程里新建的 小程序平台 wx 、开发者工具版本 微信开发者工具 lts 、基础库版本 最简复现demo 先执行 yarn yarn serve 此时全量情况下 pages index mpx 中的 text 样式顺利生成 更改为 text 保存。 此时 dist wx app wxss 并没有重新生成,也没有生成 text 的工具类。 后来我进行了调试,发现在hmr下 全量情况,进入了 tailwindcss 源码中: 单个保存页面文件的情况下,居然没有命中 tailwindcss 的入口断点, 所以前来请教一下 | 1 |
24,273 | 11,026,892,349 | IssuesEvent | 2019-12-06 08:06:13 | rammatzkvosky/saleor | https://api.github.com/repos/rammatzkvosky/saleor | opened | CVE-2019-11358 (Medium) detected in jquery-2.1.4.min.js | security vulnerability | ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/saleor/node_modules/js-base64/test-moment/index.html</p>
<p>Path to vulnerable library: /saleor/node_modules/js-base64/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/saleor/commit/cdb7b8312ae0518a12664a669906dc86027a763c">cdb7b8312ae0518a12664a669906dc86027a763c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/jquery/jquery/commit/753d591aea698e57d6db58c9f722cd0808619b1b">https://github.com/jquery/jquery/commit/753d591aea698e57d6db58c9f722cd0808619b1b</a></p>
<p>Release Date: 2019-03-25</p>
<p>Fix Resolution: Replace or update the following files: core.js, core.js</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.1.4","isTransitiveDependency":false,"dependencyTree":"jquery:2.1.4","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2019-11358","vulnerabilityDetails":"jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.","vulnerabilityUrl":"https://cve.mitre.org/cgi-bin/cvename.cgi?name\u003dCVE-2019-11358","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-11358 (Medium) detected in jquery-2.1.4.min.js - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/saleor/node_modules/js-base64/test-moment/index.html</p>
<p>Path to vulnerable library: /saleor/node_modules/js-base64/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rammatzkvosky/saleor/commit/cdb7b8312ae0518a12664a669906dc86027a763c">cdb7b8312ae0518a12664a669906dc86027a763c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Change files</p>
<p>Origin: <a href="https://github.com/jquery/jquery/commit/753d591aea698e57d6db58c9f722cd0808619b1b">https://github.com/jquery/jquery/commit/753d591aea698e57d6db58c9f722cd0808619b1b</a></p>
<p>Release Date: 2019-03-25</p>
<p>Fix Resolution: Replace or update the following files: core.js, core.js</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.1.4","isTransitiveDependency":false,"dependencyTree":"jquery:2.1.4","isMinimumFixVersionAvailable":false}],"vulnerabilityIdentifier":"CVE-2019-11358","vulnerabilityDetails":"jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.","vulnerabilityUrl":"https://cve.mitre.org/cgi-bin/cvename.cgi?name\u003dCVE-2019-11358","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> --> | non_process | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm saleor node modules js test moment index html path to vulnerable library saleor node modules js test moment index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type change files origin a href release date fix resolution replace or update the following files core js core js isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype vulnerabilityurl | 0 |
5,722 | 8,567,918,379 | IssuesEvent | 2018-11-10 16:34:32 | Great-Hill-Corporation/quickBlocks | https://api.github.com/repos/Great-Hill-Corporation/quickBlocks | closed | Error string in revert | libs-etherlib status-inprocess type-enhancement | Error strings are encoded in the same way as a call to this function would be:
```
Error(string msg)
```
| 1.0 | Error string in revert - Error strings are encoded in the same way as a call to this function would be:
```
Error(string msg)
```
| process | error string in revert error string are encoded in the same way as a function call would be using this function error string msg | 1 |
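The encoding noted in the issue above mirrors an ABI call to `Error(string)`: a 4-byte selector (the first 4 bytes of keccak256 of the signature, i.e. 0x08c379a0) followed by the ABI-encoded string. A minimal Python sketch of that layout — the selector is hard-coded here to avoid a keccak dependency, and this is an illustrative reconstruction rather than client-library code:

```python
# Sketch of the standard revert-reason layout: selector + offset word +
# length word + right-padded UTF-8 bytes, exactly as a call to
# Error(string msg) would be ABI-encoded.
ERROR_SELECTOR = bytes.fromhex("08c379a0")  # keccak256("Error(string)")[:4]

def encode_revert_reason(msg):
    data = msg.encode("utf-8")
    padded = data + b"\x00" * (-len(data) % 32)   # pad to a 32-byte word
    return (
        ERROR_SELECTOR
        + (32).to_bytes(32, "big")                # offset to the string head
        + len(data).to_bytes(32, "big")           # string length in bytes
        + padded
    )

def decode_revert_reason(payload):
    assert payload[:4] == ERROR_SELECTOR, "not an Error(string) revert"
    offset = int.from_bytes(payload[4:36], "big")
    length = int.from_bytes(payload[4 + offset:4 + offset + 32], "big")
    start = 4 + offset + 32
    return payload[start:start + length].decode("utf-8")

reason = encode_revert_reason("Not enough Ether")
print(reason[:4].hex())              # → 08c379a0
print(decode_revert_reason(reason))  # → Not enough Ether
```

Because the layout is identical to an ordinary function call, generic ABI decoders can recover the reason string from any revert payload that starts with this selector.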
2,036 | 4,847,429,429 | IssuesEvent | 2016-11-10 14:56:59 | Alfresco/alfresco-ng2-components | https://api.github.com/repos/Alfresco/alfresco-ng2-components | opened | Typeahead does not display suggestions | browser: all bug comp: activiti-processList | If the typeahead widget is part of a form attached to a start event, no suggestions are displayed when starting a process
**Component**
<img width="1362" alt="screen shot 2016-11-10 at 14 54 22" src="https://cloud.githubusercontent.com/assets/13200338/20181336/b08df488-a755-11e6-8331-debfc29c284f.png">
**Activiti**

| 1.0 | Typeahead does not display suggestions - If the typeahead widget is part of a form attached to a start event, no suggestions are displayed when starting a process
**Component**
<img width="1362" alt="screen shot 2016-11-10 at 14 54 22" src="https://cloud.githubusercontent.com/assets/13200338/20181336/b08df488-a755-11e6-8331-debfc29c284f.png">
**Activiti**

| process | typeahead does not display suggestions if typeahead widget is part of form attached to start event no suggestions are displayed when starting a process component img width alt screen shot at src activiti | 1 |
49,137 | 26,004,440,760 | IssuesEvent | 2022-12-20 17:58:18 | RafaelGB/obsidian-db-folder | https://api.github.com/repos/RafaelGB/obsidian-db-folder | closed | [FR]: Refresh | Performance epic enhancement | ### Contact Details
_No response_
### Present your request
When I complete a task in the task property it doesn't disappear from view until the database is refreshed. I thought that was a trigger for a refresh.
When a page is added to my vault and should appear in a view it doesn't until I manually refresh the database (via turning on and off the filter or clicking on and off the database page).
### For which platform do you request this request??
Desktop | True | [FR]: Refresh - ### Contact Details
_No response_
### Present your request
When I complete a task in the task property it doesn't disappear from view until the database is refreshed. I thought that was a trigger for a refresh.
When a page is added to my vault and should appear in a view it doesn't until I manually refresh the database (via turning on and off the filter or clicking on and off the database page).
### For which platform do you request this request??
Desktop | non_process | refresh contact details no response present your request when i complete a task in the task property it doesn t disappear from view until the database is refreshed i thought that was a trigger for a refresh when a page is added to my vault and should appear in a view it doesn t until i manually refresh the database via turning on and off the filter or clicking on and off the database page for which platform do you request this request desktop | 0 |
19,175 | 25,284,196,711 | IssuesEvent | 2022-11-16 17:54:24 | googleapis/nodejs-compute | https://api.github.com/repos/googleapis/nodejs-compute | closed | Missing typescript declaration file | type: process type: feature request api: compute | #### Environment details
- OS: Linux
- npm version: latest
- `@google-cloud/compute` version: 2.1.0
#### Steps to reproduce
1. Install it with npm/yarn
2. Try to use it from typescript with `import Compute from '@google-cloud/compute`
3. See errors re: missing typescript declaration
It seems this lib is missing a `.d.ts` typescript decl file, and I was unable to find typings on definitely-typed either. If this is intended to be the idiomatic way to interact with the compute API, I think typings would be very helpful.
| 1.0 | Missing typescript declaration file - #### Environment details
- OS: Linux
- npm version: latest
- `@google-cloud/compute` version: 2.1.0
#### Steps to reproduce
1. Install it with npm/yarn
2. Try to use it from typescript with `import Compute from '@google-cloud/compute`
3. See errors re: missing typescript declaration
It seems this lib is missing a `.d.ts` typescript decl file, and I was unable to find typings on definitely-typed either. If this is intended to be the idiomatic way to interact with the compute API, I think typings would be very helpful.
| process | missing typescript declaration file environment details os linux npm version latest google cloud compute version steps to reproduce install it with npm yarn try to use it from typescript with import compute from google cloud compute see errors re missing typescript declaration it seems this lib is missing a d ts typescript decl file and i was unable to find typings on definitely typed either if this is intended to be the idiomatic way to interact with the compute api i think typings would be very helpful | 1 |
11,712 | 14,546,521,170 | IssuesEvent | 2020-12-15 21:22:17 | MicrosoftDocs/azure-devops-docs | https://api.github.com/repos/MicrosoftDocs/azure-devops-docs | closed | Pipeline Trigger on Stage Completion needs more info | Pri2 devops-cicd-process/tech devops/prod product-feedback | I am trying to trigger a pipeline on the completion of a stage:
```
resources:
pipelines:
- pipeline: MyPipelineResource
source: 'My Pipeline'
trigger:
stages:
- Release_To_Production
```
I am expecting this to trigger this pipeline anytime the Release_To_Production stage completes in 'My Pipeline'. There is no other triggers block in my YAML.
The pipeline currently is triggering on other events in this repository that are not expected. How can I make this work in the way I am expecting?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema#resources-pipelines)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | 1.0 | Pipeline Trigger on Stage Completion needs more info - I am trying to trigger a pipeline on the completion of a stage:
```
resources:
pipelines:
- pipeline: MyPipelineResource
source: 'My Pipeline'
trigger:
stages:
- Release_To_Production
```
I am expecting this to trigger this pipeline anytime the Release_To_Production stage completes in 'My Pipeline'. There is no other triggers block in my YAML.
The pipeline currently is triggering on other events in this repository that are not expected. How can I make this work in the way I am expecting?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema#resources-pipelines)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam** | process | pipeline trigger on stage completion needs more info i am trying to trigger a pipeline on the completion of a stage resources pipelines pipeline mypipelineresource source my pipeline trigger stages release to production i am expecting this to trigger this pipeline anytime the release to production stage completes in my pipeline there is no other triggers block in my yaml the pipeline currently is triggering on other events in this repository that are not expected how can i make this work in the way i am expecting document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam | 1 |
384,947 | 26,609,245,374 | IssuesEvent | 2023-01-23 22:10:56 | NetAppDocs/ontap | https://api.github.com/repos/NetAppDocs/ontap | closed | Root SVM volume with a space guarantee set to "volume" can be added to FabricPool. | documentation good first issue | Page: [Considerations and requirements for using FabricPool](https://docs.netapp.com/us-en/ontap/fabricpool/requirements-concept.html)
In the Functionality or features not supported by FabricPool section of this article there is a line indicating that a volume with a space guarantee set to anything other than "none" will not allow the owning aggregate to be added to FabricPool. A data aggregate containing a root SVM volume with a space guarantee set to "volume" can be added to FabricPool. This has been reproduced in a VSIM a few times now. | 1.0 | Root SVM volume with a space guarantee set to "volume" can be added to FabricPool. - Page: [Considerations and requirements for using FabricPool](https://docs.netapp.com/us-en/ontap/fabricpool/requirements-concept.html)
In the Functionality or features not supported by FabricPool section of this article there is a line indicating that a volume with a space guarantee set to anything other than "none" will not allow the owning aggregate to be added to FabricPool. A data aggregate containing a root SVM volume with a space guarantee set to "volume" can be added to FabricPool. This has been reproduced in a VSIM a few times now. | non_process | root svm volume with a space guarantee set to volume can be added to fabricpool page in the functionality or features not supported by fabricpool section of this article there is a line indicating that a volume with a space guarantee set to anything other than none will not allow the owning aggregate to be added to fabricpool a data aggregate containing a root svm volume with a space guarantee set to volume can be added to fabricpool this has been reproduced in a vsim a few times now | 0 |
19,826 | 26,217,009,184 | IssuesEvent | 2023-01-04 11:51:27 | OpenEnergyPlatform/open-MaStR | https://api.github.com/repos/OpenEnergyPlatform/open-MaStR | closed | Adapt MaStRDownload.download_power_plants() to be working with postprocess() | :scissors: post processing |
**Tasks**
- [ ] Replace column renaming in `MaStRDownload.download_power_plants()` by renaming pattern used in `MaStRMirror.to_csv`
- [ ] Adapt documentation `postprocessing.rst` and remove warning about data from `MaStRDownload.download_power_plants()` | 1.0 | Adapt MaStRDownload.download_power_plants() to be working with postprocess() -
**Tasks**
- [ ] Replace column renaming in `MaStRDownload.download_power_plants()` by renaming pattern used in `MaStRMirror.to_csv`
- [ ] Adapt documentation `postprocessing.rst` and remove warning about data from `MaStRDownload.download_power_plants()` | process | adapt mastrdownload download power plants to be working with postprocess tasks replace column renaming in mastrdownload download power plants by renaming pattern used in mastrmirror to csv adapt documentation postprocessing rst and remove warning about data from mastrdownload download power plants | 1 |
430,782 | 12,465,814,584 | IssuesEvent | 2020-05-28 14:35:05 | luna/ide | https://api.github.com/repos/luna/ide | opened | Disable Zoom/Pan When Visualization is in Fullscreen Mode | Category: IDE Change: Non-Breaking Difficulty: Core Contributor Priority: Medium Type: Enhancement | ### Summary
The zooming/panning functionality needs to be disabled if the general scene is not visible.
### Value
Avoids confusion when unexpected interactions happen that are not visible to the user.
### Specification
When the visualization fullscreen mode is enabled, at the same time the zoom/pan should be disabled.
When the visualization fullscreen mode is ended, zoom/pan should be enabled again.
### Acceptance Criteria & Test Cases
Behavior tested as described above. | 1.0 | Disable Zoom/Pan When Visualization is in Fullscreen Mode - ### Summary
The zooming/panning functionality needs to be disabled if the general scene is not visible.
### Value
Avoids confusion when unexpected interactions happen that are not visible to the user.
### Specification
When the visualization fullscreen mode is enabled, at the same time the zoom/pan should be disabled.
When the visualization fullscreen mode is ended, zoom/pan should be enabled again.
### Acceptance Criteria & Test Cases
Behavior tested as described above. | non_process | disable zoom pan when visualization is in fullscreen mode summary the zooming panning functionality needs to be disabled if the general scene is not visible value avoids confusion when unexpected interactions happen that are not visible to the user specification when the visualization fullscreen mode is enabled at the same time the zoom pan should be disabled when the visualization fullscreen mode is ended zoom pan should be enabled again acceptance criteria test cases behavior tested as described above | 0 |
17,106 | 22,627,975,458 | IssuesEvent | 2022-06-30 12:26:19 | camunda/feel-scala | https://api.github.com/repos/camunda/feel-scala | opened | Functions `put all` and `context` do not propagate errors to the result | type: bug team/process-automation | **Describe the bug**
Currently the built-in functions `put all` and `context` do not propagate errors that are passed in as arguments. Instead, they return `null` in these cases and the error gets lost.
This is the root cause for: https://github.com/camunda/zeebe/issues/9543
**To Reproduce**
see ticket linked above
**Expected behavior**
* If an error is passed in, this error should be returned as the result of the function
* If multiple errors are passed in, the first error should be returned as the result of the function
**Environment**
* FEEL engine version: master
* Affects:
* Zeebe broker: https://github.com/camunda/zeebe/issues/9543
| 1.0 | Functions `put all` and `context` do not propagate errors to the result - **Describe the bug**
Currently the built-in functions `put all` and `context` do not propagate errors that are passed in as arguments. Instead, they return `null` in these cases and the error gets lost.
This is the root cause for: https://github.com/camunda/zeebe/issues/9543
**To Reproduce**
see ticket linked above
**Expected behavior**
* If an error is passed in, this error should be returned as the result of the function
* If multiple errors are passed in, the first error should be returned as the result of the function
**Environment**
* FEEL engine version: master
* Affects:
* Zeebe broker: https://github.com/camunda/zeebe/issues/9543
| process | functions put all and context do not propagate errors to the result describe the bug currently the built in functions put all and context do not propagate errors that are passed in as argument instead they return null in these cases and the error gets lost this is the root cause for to reproduce see ticket linked above expected behavior if an error is passed in this error should be returned as the result of the function if multiple errors are passed in the first error should be returned as the result of the function environment feel engine version master affects zeebe broker | 1 |
10,577 | 13,388,377,615 | IssuesEvent | 2020-09-02 17:16:59 | neuropoly/spinalcordtoolbox | https://api.github.com/repos/neuropoly/spinalcordtoolbox | closed | Adding CSA results in QC report | feature sct_process_segmentation sct_qc | It would be welcome to introduce a QC report for CSA results.
This report would also facilitate the optimization of the default parameter for `get_centerline` (used for angle correction in CSA computation), see https://github.com/neuropoly/spinalcordtoolbox/pull/2299
| 1.0 | Adding CSA results in QC report - It would be welcome to introduce a QC report for CSA results.
This report would also facilitate the optimization of the default parameter for `get_centerline` (used for angle correction in CSA computation), see https://github.com/neuropoly/spinalcordtoolbox/pull/2299
| process | adding csa results in qc report it would be welcome to introduce a qc report for csa results this report would also facilitate the optimization of default parameter for get centerline used for angle correction in csa computation see | 1 |
306,542 | 9,396,301,931 | IssuesEvent | 2019-04-08 06:45:46 | aeternity/aepp-base | https://api.github.com/repos/aeternity/aepp-base | closed | iPhone SE in Simulator: Strange behavior of password input fields | bug priority review | Can be reproduced via iPhone Simulator > iPhone SE device.
 | 1.0 | iPhone SE in Simulator: Strange behavior of password input fields - Can be reproduced via iPhone Simulator > iPhone SE device.
 | non_process | iphone se in simulator strange behavior of password input fields can be reproduced via iphone simulator iphone se device | 0 |
11,995 | 14,737,253,351 | IssuesEvent | 2021-01-07 01:18:38 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | RE: 001- NCSM - Unable to upload- Can't process NCSM billing cycles at these sites | anc-ops anc-process anp-important ant-bug ant-support | In GitLab by @kdjstudios on Apr 30, 2018, 13:53
**Submitted by:** "Cori Bartlett" <cori.bartlett@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-30-78949/conversation
**Server:** Internal
**Client/Site:** Multi
**Account:** NA
**Issue:**
Hi- We are still unable to process the NCSM 4/15 billing cycles at the following sites due to a file upload error. We need to have this prioritized please. Thank you, Cori
Allentown
Billerica Inbound
Billings
Chattanooga
Chicago
Columbus
EL Paso
Fair Oaks
Memphis
Orlando
Portland
Santa Rosa
Sarasota
Stockton
Toronto
West Memphis
Winnipeg | 1.0 | RE: 001- NCSM - Unable to upload- Can't process NCSM billing cycles at these sites - In GitLab by @kdjstudios on Apr 30, 2018, 13:53
**Submitted by:** "Cori Bartlett" <cori.bartlett@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-30-78949/conversation
**Server:** Internal
**Client/Site:** Multi
**Account:** NA
**Issue:**
Hi- We are still unable to process the NCSM 4/15 billing cycles at the following sites due to a file upload error. We need to have this prioritized please. Thank you, Cori
Allentown
Billerica Inbound
Billings
Chattanooga
Chicago
Columbus
EL Paso
Fair Oaks
Memphis
Orlando
Portland
Santa Rosa
Sarasota
Stockton
Toronto
West Memphis
Winnipeg | process | re ncsm unable to upload can t process ncsm billing cycles at these sites in gitlab by kdjstudios on apr submitted by cori bartlett helpdesk server internal client site multi account na issue hi we are still unable to process the ncsm billing cycles at the following sites due to a file upload error we need to have this prioritized please thank you cori allentown billerica inbound billings chattanooga chicago columbus el paso fair oaks memphis orlando portland santa rosa sarasota stockton toronto west memphis winnipeg | 1 |
78,885 | 27,807,630,288 | IssuesEvent | 2023-03-17 21:46:30 | dotCMS/core | https://api.github.com/repos/dotCMS/core | closed | [UI] Modal not removed after created Experiment | Type : Defect Merged QA : Passed Internal Team : Falcon dotCMS : Experiments Next Release | ### Parent Issue
https://github.com/dotCMS/core/issues/23095
### Problem Statement
After updating PrimeNg, destroying the sidebar does not remove the background modal (blur)
### Steps to Reproduce
- Go to lists experiment
- Press Add new Experiment
- Fill the name and description
- Save
### Acceptance Criteria
- After creating the experiment, the backend blur needs to be removed.
### dotCMS Version
master
### Proposed Objective
User Experience
### Proposed Priority
Priority 2 - Important
### External Links... Slack Conversations, Support Tickets, Figma Designs, etc.
<img width="1647" alt="Screenshot 2023-03-09 at 11 00 16 AM" src="https://user-images.githubusercontent.com/1909643/224081201-a8f98d03-df34-42f9-a3ea-1d5ba626aa32.png">
### Assumptions & Initiation Needs
_No response_
### Quality Assurance Notes & Workarounds
_No response_
### Sub-Tasks & Estimates
_No response_ | 1.0 | [UI] Modal not removed after created Experiment - ### Parent Issue
https://github.com/dotCMS/core/issues/23095
### Problem Statement
After updating PrimeNg, destroying the sidebar does not remove the background modal (blur)
### Steps to Reproduce
- Go to lists experiment
- Press Add new Experiment
- Fill the name and description
- Save
### Acceptance Criteria
- After creating the experiment, the backend blur needs to be removed.
### dotCMS Version
master
### Proposed Objective
User Experience
### Proposed Priority
Priority 2 - Important
### External Links... Slack Conversations, Support Tickets, Figma Designs, etc.
<img width="1647" alt="Screenshot 2023-03-09 at 11 00 16 AM" src="https://user-images.githubusercontent.com/1909643/224081201-a8f98d03-df34-42f9-a3ea-1d5ba626aa32.png">
### Assumptions & Initiation Needs
_No response_
### Quality Assurance Notes & Workarounds
_No response_
### Sub-Tasks & Estimates
_No response_ | non_process | modal not removed after created experiment parent issue problem statement after update primeng when we destroy the sidebar not remove the background modal blur steps to reproduce go to lists experiment press add new experiment fill the name and description save acceptance criteria after creat the experiment the backend blur needs to be removed dotcms version master proposed objective user experience proposed priority priority important external links slack conversations support tickets figma designs etc img width alt screenshot at am src assumptions initiation needs no response quality assurance notes workarounds no response sub tasks estimates no response | 0 |
18,898 | 24,837,415,188 | IssuesEvent | 2022-10-26 09:56:00 | altillimity/SatDump | https://api.github.com/repos/altillimity/SatDump | closed | GOES HRIT leaves corrupted GIFs | bug Processing | Whenever images are downloaded from GOES-18 the NWS folder is filled with unopenable or obviously corrupted images. If needed I can upload some for testing or you can view them in my github repository. | 1.0 | GOES HRIT leaves corrupted GIFs - Whenever images are downloaded from GOES-18 the NWS folder is filled with unopenable or obviously corrupted images. If needed I can upload some for testing or you can view them in my github repository. | process | goes hrit leaves corrupted gifs whenever images are downloaded from goes the nws folder is filled with unopenable or obviously corrupted images if needed i can upload some for testing or you can view them in my github repository | 1 |
261,671 | 8,244,934,957 | IssuesEvent | 2018-09-11 08:10:01 | nlbdev/nordic-epub3-dtbook-migrator | https://api.github.com/repos/nlbdev/nordic-epub3-dtbook-migrator | closed | Move semantic classes and ZedAI types to EDUPUB namespace | 0 - Low priority guidelines revision | All ZedAI types and semantic classes we use should be moved to the EDUPUB vocabulary if possible. We should compile a list of suggested additions and submit it to the EDUPUB WG.
<!---
@huboard:{"milestone_order":55.5}
-->
| 1.0 | Move semantic classes and ZedAI types to EDUPUB namespace - All ZedAI types and semantic classes we use should be moved to the EDUPUB vocabulary if possible. We should compile a list of suggested additions and submit it to the EDUPUB WG.
<!---
@huboard:{"milestone_order":55.5}
-->
| non_process | move semantic classes and zedai types to edupub namespace all zedai types and semantic classes we use should be moved to the edupub vocabulary if possible we should compile a list of suggested additions and submit it to the edupub wg huboard milestone order | 0 |
279,142 | 24,202,785,500 | IssuesEvent | 2022-09-24 20:12:13 | red/red | https://api.github.com/repos/red/red | closed | [Core] Object path access failure within loops scope | status.built status.tested type.bug test.written | **Describe the bug**
Could be related to #4854
**To reproduce**
Run this:
```
Red []
obj1: object []
obj2: object [owner: 'obj1]
list: function [obj [object!]] [
print ">>>"
?? obj/owner ;) works outside of loop!
; test: copy/deep [ ;) no problem if copied!
test: [
?? obj/owner ;) but not within!
all [
word? :obj/owner
value? obj/owner
obj: get obj/owner
]
]
; until [not do test]
while test []
print "<<<"
]
list obj2
list obj2
```
It outputs:
```
>>>
obj/owner: obj1
obj/owner: obj1
obj/owner: unset
<<<
>>>
obj/owner: obj1
obj/owner: *** Script Error: owner has no value
*** Where: get
*** Near : ?? obj/owner all [word? :obj/owner value? ]
*** Stack: list ??
```
Note that `get/any` call fails within `??` after entering loops body or condition.
In 0.6.4 stable it was even worse:
```
>>>
obj/owner: obj1
obj/owner: obj1
obj/owner: *** Script Error: cannot access owner in path obj/owner
*** Where: get
*** Stack: list ??
```
Plain `get/any` call fails too, only within the loop.
`test: copy/deep [...]` removes the problem.
**Expected behavior**
```
>>>
obj/owner: obj1
obj/owner: obj1
obj/owner: unset
<<<
>>>
obj/owner: obj1
obj/owner: obj1
obj/owner: unset
<<<
```
**Platform version**
```
-----------RED & PLATFORM VERSION-----------
RED: [ branch: "master" tag: #v0.6.4 ahead: 4506 date: 24-Sep-2022/7:39:31 commit: #487881e2aacbd6037801f75af6a975bfeaf0d90c ]
PLATFORM: [ name: "Windows 10" OS: 'Windows arch: 'x86-64 version: 10.0.0 build: 19044 ]
--------------------------------------------
``` | 2.0 | [Core] Object path access failure within loops scope - **Describe the bug**
Could be related to #4854
**To reproduce**
Run this:
```
Red []
obj1: object []
obj2: object [owner: 'obj1]
list: function [obj [object!]] [
print ">>>"
?? obj/owner ;) works outside of loop!
; test: copy/deep [ ;) no problem if copied!
test: [
?? obj/owner ;) but not within!
all [
word? :obj/owner
value? obj/owner
obj: get obj/owner
]
]
; until [not do test]
while test []
print "<<<"
]
list obj2
list obj2
```
It outputs:
```
>>>
obj/owner: obj1
obj/owner: obj1
obj/owner: unset
<<<
>>>
obj/owner: obj1
obj/owner: *** Script Error: owner has no value
*** Where: get
*** Near : ?? obj/owner all [word? :obj/owner value? ]
*** Stack: list ??
```
Note that `get/any` call fails within `??` after entering loops body or condition.
In 0.6.4 stable it was even worse:
```
>>>
obj/owner: obj1
obj/owner: obj1
obj/owner: *** Script Error: cannot access owner in path obj/owner
*** Where: get
*** Stack: list ??
```
Plain `get/any` call fails too, only within the loop.
`test: copy/deep [...]` removes the problem.
**Expected behavior**
```
>>>
obj/owner: obj1
obj/owner: obj1
obj/owner: unset
<<<
>>>
obj/owner: obj1
obj/owner: obj1
obj/owner: unset
<<<
```
**Platform version**
```
-----------RED & PLATFORM VERSION-----------
RED: [ branch: "master" tag: #v0.6.4 ahead: 4506 date: 24-Sep-2022/7:39:31 commit: #487881e2aacbd6037801f75af6a975bfeaf0d90c ]
PLATFORM: [ name: "Windows 10" OS: 'Windows arch: 'x86-64 version: 10.0.0 build: 19044 ]
--------------------------------------------
``` | non_process | object path access failure within loops scope describe the bug could be related to to reproduce run this red object object list function print obj owner works outside of loop test copy deep no problem if copied test obj owner but not within all word obj owner value obj owner obj get obj owner until while test print list list it outputs obj owner obj owner obj owner unset obj owner obj owner script error owner has no value where get near obj owner all stack list note that get any call fails within after entering loops body or condition in stable it was even worse obj owner obj owner obj owner script error cannot access owner in path obj owner where get stack list plain get any call fails too only within the loop test copy deep removes the problem expected behavior obj owner obj owner obj owner unset obj owner obj owner obj owner unset platform version red platform version red platform | 0 |
12,194 | 14,742,346,955 | IssuesEvent | 2021-01-07 12:08:15 | kdjstudios/SABillingGitlab | https://api.github.com/repos/kdjstudios/SABillingGitlab | closed | Exceptions report | anc-external anc-process anc-report anp-1 ant-support has attachment | In GitLab by @kdjstudios on Apr 10, 2019, 11:02
**Submitted by:** Gaylan Garrett <Gaylan.Garrett@Nexa.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-10-13579
**Server:** External
**Client/Site:** Keener
**Account:** NA
**Issue:**
I just ran the exceptions report for the Keener 4/7/2019 billing ( I am still in draft mode as doing some research of usage before I finalize) and I noticed that we have data for 8049651874 (line 1839) on the resource usage report that was imported into SA billing, but there is not an account with that resource usage but it did not show up on the exceptions report. I have attached the report that was imported.
I am concerned now that there could be other data imported from resource usage reports but because of a possible typo or an oversight, it has nowhere to go and therefore is not being billed back, but it is not showing up on the exceptions report.
[4.8.2019+MAIN+both+resource+usage+from+Keener+and+A1+Switch+ResourceUsage_03-11_04-07+-+Copy.csv](/uploads/fcac5264a123743cae96a10ccdf0a7de/4.8.2019+MAIN+both+resource+usage+from+Keener+and+A1+Switch+ResourceUsage_03-11_04-07+-+Copy.csv) | 1.0 | Exceptions report - In GitLab by @kdjstudios on Apr 10, 2019, 11:02
**Submitted by:** Gaylan Garrett <Gaylan.Garrett@Nexa.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-04-10-13579
**Server:** External
**Client/Site:** Keener
**Account:** NA
**Issue:**
I just ran the exceptions report for the Keener 4/7/2019 billing ( I am still in draft mode as doing some research of usage before I finalize) and I noticed that we have data for 8049651874 (line 1839) on the resource usage report that was imported into SA billing, but there is not an account with that resource usage but it did not show up on the exceptions report. I have attached the report that was imported.
I am concerned now that there could be other data imported from resource usage reports but because of a possible typo or an oversight, it has nowhere to go and therefore is not being billed back, but it is not showing up on the exceptions report.
[4.8.2019+MAIN+both+resource+usage+from+Keener+and+A1+Switch+ResourceUsage_03-11_04-07+-+Copy.csv](/uploads/fcac5264a123743cae96a10ccdf0a7de/4.8.2019+MAIN+both+resource+usage+from+Keener+and+A1+Switch+ResourceUsage_03-11_04-07+-+Copy.csv) | process | exceptions report in gitlab by kdjstudios on apr submitted by gaylan garrett helpdesk server external client site keener account na issue i just ran the exceptions report for the keener billing i am still in draft mode as doing some research of usage before i finalize and i noticed that we have data for line on the resource usage report that was imported into sa billing but there is not an account with that resource usage but it did not show up on the exceptions report i have attached the report that was imported i am concerned now that there could be other data imported from resource usage reports but because of a possible type or an oversight it has no where to go and therefore not being billing back but it is not showing up on the exceptions report uploads main both resource usage from keener and switch resourceusage copy csv | 1 |
4,822 | 7,717,886,378 | IssuesEvent | 2018-05-23 14:50:22 | openvstorage/framework | https://api.github.com/repos/openvstorage/framework | closed | Arakoon deployment configurable via GUI/framework | process_wontfix state_question type_feature | The use of external Arakoons has cost OPS and framework team a lot of work and has lots of disadvantages
The user (customer / OPS / ...) should be able to determine where each and every Arakoon cluster should be deployed by using the GUI and/or the API calls delivered by the framework team | 1.0 | Arakoon deployment configurable via GUI/framework - The use of external Arakoons has cost OPS and framework team a lot of work and has lots of disadvantages
The user (customer / OPS / ...) should be able to determine where each and every Arakoon cluster should be deployed by using the GUI and/or the API calls delivered by the framework team | process | arakoon deployment configurable via gui framework the use of external arakoons has cost ops and framework team a lot of work and has lots of disadvantages the user customer ops should be able to determine where each and every arakoon cluster should be deployed by using the gui and or the api calls delivered by the framework team | 1 |
10,453 | 13,233,452,615 | IssuesEvent | 2020-08-18 14:49:19 | prisma/prisma-client-js | https://api.github.com/repos/prisma/prisma-client-js | opened | node script hanging after disconnect | bug/1-repro-available kind/bug process/candidate | <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
We observed this with some internal testing. The error log is like the following:
```
prisma-client TypeError: Cannot read property 'signalCode' of undefined
prisma-client at NodeEngine.handleRequestError (/Users/divyendusingh/Documents/prisma/system-behavior/pgbouncer-connection/node_modules/@prisma/client/runtime/index.js:1:111388)
prisma-client at /Users/divyendusingh/Documents/prisma/system-behavior/pgbouncer-connection/node_modules/@prisma/client/runtime/index.js:1:125748
prisma-client at runMicrotasks (<anonymous>)
prisma-client at processTicksAndRejections (internal/process/task_queues.js:97:5)
prisma-client at async PrismaClientFetcher.request (/Users/divyendusingh/Documents/prisma/system-behavior/pgbouncer-connection/node_modules/@prisma/client/runtime/index.js:1:222501)
prisma-client at async Promise.all (index 8) +1ms
engine {
engine error: SocketError: other side closed
engine at Socket.onSocketEnd (/Users/divyendusingh/Documents/prisma/system-behavior/pgbouncer-connection/node_modules/@prisma/client/runtime/index.js:1:203556)
engine at Socket.emit (events.js:327:22)
engine at endReadableNT (_stream_readable.js:1221:12)
engine at processTicksAndRejections (internal/process/task_queues.js:84:21) {
engine code: 'UND_ERR_SOCKET'
engine }
engine } +1ms
```
## How to reproduce
https://github.com/prisma/system-behavior/pull/6 (internal)
## Expected behavior
It shouldn't hang.
## Prisma information
Prisma CLI: `2.5.0-dev.61` | 1.0 | node script hanging after disconnect - <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
We observed this with some internal testing. The error log is like the following:
```
prisma-client TypeError: Cannot read property 'signalCode' of undefined
prisma-client at NodeEngine.handleRequestError (/Users/divyendusingh/Documents/prisma/system-behavior/pgbouncer-connection/node_modules/@prisma/client/runtime/index.js:1:111388)
prisma-client at /Users/divyendusingh/Documents/prisma/system-behavior/pgbouncer-connection/node_modules/@prisma/client/runtime/index.js:1:125748
prisma-client at runMicrotasks (<anonymous>)
prisma-client at processTicksAndRejections (internal/process/task_queues.js:97:5)
prisma-client at async PrismaClientFetcher.request (/Users/divyendusingh/Documents/prisma/system-behavior/pgbouncer-connection/node_modules/@prisma/client/runtime/index.js:1:222501)
prisma-client at async Promise.all (index 8) +1ms
engine {
engine error: SocketError: other side closed
engine at Socket.onSocketEnd (/Users/divyendusingh/Documents/prisma/system-behavior/pgbouncer-connection/node_modules/@prisma/client/runtime/index.js:1:203556)
engine at Socket.emit (events.js:327:22)
engine at endReadableNT (_stream_readable.js:1221:12)
engine at processTicksAndRejections (internal/process/task_queues.js:84:21) {
engine code: 'UND_ERR_SOCKET'
engine }
engine } +1ms
```
## How to reproduce
https://github.com/prisma/system-behavior/pull/6 (internal)
## Expected behavior
It shouldn't hang.
## Prisma information
Prisma CLI: `2.5.0-dev.61` | process | node script hanging after disconnect thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description we observed this with some internal testing the error log is like the following prisma client typeerror cannot read property signalcode of undefined prisma client at nodeengine handlerequesterror users divyendusingh documents prisma system behavior pgbouncer connection node modules prisma client runtime index js prisma client at users divyendusingh documents prisma system behavior pgbouncer connection node modules prisma client runtime index js prisma client at runmicrotasks prisma client at processticksandrejections internal process task queues js prisma client at async prismaclientfetcher request users divyendusingh documents prisma system behavior pgbouncer connection node modules prisma client runtime index js prisma client at async promise all index engine engine error socketerror other side closed engine at socket onsocketend users divyendusingh documents prisma system behavior pgbouncer connection node modules prisma client runtime index js engine at socket emit events js engine at endreadablent stream readable js engine at processticksandrejections internal process task queues js engine code und err socket engine engine how to reproduce internal expected behavior it shouldn t hang prisma information prisma cli dev | 1 |
337,840 | 24,559,337,679 | IssuesEvent | 2022-10-12 18:44:59 | osPrims/chatApp | https://api.github.com/repos/osPrims/chatApp | closed | fix inconsistent badges in README | documentation good first issue reserved-for-uni | https://github.com/osPrims/chatApp/blob/fcf7cd9731fc05c4e014a789434151c0be03a022/README.md?plain=1#L6-L9
The 'Issues' and 'Pull Requests' badges -
* should have a consistent style with the 'forks' and 'stars' badges (flat-square)
The 'Forks' and 'Star' badges -
* should not have a logo in them
Required fix would be to correct the query parameters used for the badges. | 1.0 | fix inconsistent badges in README - https://github.com/osPrims/chatApp/blob/fcf7cd9731fc05c4e014a789434151c0be03a022/README.md?plain=1#L6-L9
The 'Issues' and 'Pull Requests' badges -
* should have a consistent style with the 'forks' and 'stars' badges (flat-square)
The 'Forks' and 'Star' badges -
* should not have a logo in them
Required fix would be to correct the query parameters used for the badges. | non_process | fix inconsistent badges in readme the issues and pull requests badges should have consistent style with the forks and stars issue flat square the forks and star badges should not have a logo in them required fix would be to correct the query parameters used for the badges | 0 |
736,226 | 25,463,609,699 | IssuesEvent | 2022-11-24 23:52:19 | diffgram/diffgram | https://api.github.com/repos/diffgram/diffgram | opened | Standard process for surfacing sql timeout errors | lowpriority performance sqlalchemy optimization | `"error":"(psycopg2.errors.QueryCanceled) canceling statement due to statement timeout`
Can sometimes cause cascading errors if the exception is not caught.
We may want to think about how we expect key operations to work and surface an error to the user when a timeout occurs. | 1.0 | Standard process for surfacing sql timeout errors - `"error":"(psycopg2.errors.QueryCanceled) canceling statement due to statement timeout`
Can sometimes cause cascading errors if the exception is not caught.
We may want to think about how we expect key operations to work and surface an error to the user when a timeout occurs. | non_process | standard process for surfacing sql timeout errors error errors querycanceled canceling statement due to statement timeout can sometimes cause cascading errors if exception is not caught we may want to think about how we expect key operations to work and surface an error to the user when a timeout occurs | 0 |
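The record above asks for a standard way to surface Postgres statement-timeout errors instead of letting the uncaught exception cascade. Below is a minimal, hypothetical Python sketch — not Diffgram's actual code — of classifying such errors. It assumes psycopg2-style driver exceptions that expose a SQLSTATE code via a `pgcode` attribute (SQLSTATE `57014` is `query_canceled`, the code raised when `statement_timeout` fires), with a fallback that matches the message text quoted in the record.

```python
# Hypothetical helpers for recognizing and containing Postgres statement
# timeouts. Assumes psycopg2-style exceptions carrying a `pgcode` SQLSTATE;
# SQLAlchemy wraps the driver exception and exposes it on `.orig`.

QUERY_CANCELED = "57014"  # SQLSTATE for query_canceled (statement_timeout)

def is_statement_timeout(exc: BaseException) -> bool:
    """Return True when the exception looks like a Postgres statement timeout."""
    # Unwrap a SQLAlchemy-style wrapper if present; otherwise use exc itself.
    driver_exc = getattr(exc, "orig", exc)
    if getattr(driver_exc, "pgcode", None) == QUERY_CANCELED:
        return True
    # Fall back to matching the message text seen in the error report above.
    return "canceling statement due to statement timeout" in str(driver_exc)

def run_with_timeout_guard(query_fn, fallback=None):
    """Run a query callable; turn a timeout into a fallback value for the user."""
    try:
        return query_fn()
    except Exception as exc:
        if is_statement_timeout(exc):
            return fallback  # e.g. a "query timed out" payload for the UI
        raise  # anything else is a real bug and should propagate
```

Keeping the classification in one helper means each key operation can decide how to degrade (retry, partial result, user-facing message) without duplicating error-matching logic.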
13,905 | 16,664,707,468 | IssuesEvent | 2021-06-07 00:07:58 | turnkeylinux/tracker | https://api.github.com/repos/turnkeylinux/tracker | opened | Processmaker - include pre-configured cron job | bug processmaker | It appears that we aren't shipping a pre-configured cron job in the Processmaker appliance. We should!
Here's a workaround to create a cron job (as suggested for [Processmaker 3.2 -> 3.6](https://wiki.processmaker.com/3.2/Executing_cron.php#Configuring_crontab_in_Linux.2FUNIX) - adjusted for TurnKey):
```
cat > /etc/cron.d/processmaker <<EOF
0 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/cron.php +force
10 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/messageeventcron.php +force
15 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/timereventcron.php +force
20 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/ldapcron.php +force
25 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/sendnotificationscron.php +force
30 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/cron.php +force
40 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/messageeventcron.php +force
45 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/timereventcron.php +force
50 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/ldapcron.php +force
55 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/sendnotificationscron.php +force
EOF
```
Here's a workaround to create a cron job (as suggested for [Processmaker 3.2 -> 3.6](https://wiki.processmaker.com/3.2/Executing_cron.php#Configuring_crontab_in_Linux.2FUNIX) - adjusted for TurnKey):
```
cat > /etc/cron.d/processmaker <<EOF
0 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/cron.php +force
10 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/messageeventcron.php +force
15 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/timereventcron.php +force
20 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/ldapcron.php +force
25 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/sendnotificationscron.php +force
30 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/cron.php +force
40 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/messageeventcron.php +force
45 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/timereventcron.php +force
50 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/ldapcron.php +force
55 * * * * www-data /usr/bin/php -f /var/www/processmaker/workflow/engine/bin/sendnotificationscron.php +force
EOF
```
16,576 | 21,606,800,009 | IssuesEvent | 2022-05-04 04:55:34 | jamandujanoa/WASA | https://api.github.com/repos/jamandujanoa/WASA | opened | Implement a solution to configure unique local admin credentials | WARP-Import WAF-Assessment Security Operational Procedures Patch & Update Process (PNU) | <a href="https://techcommunity.microsoft.com/t5/itops-talk-blog/step-by-step-guide-how-to-configure-microsoft-local/ba-p/2806185">Implement a solution to configure unique local admin credentials</a>
<p><b>Why Consider This?</b></p>
Use of consistent local administrator passwords leaves the organization susceptible to rapid lateral account movement as a compromised credential can be used on multiple hosts in attempt to escalate privilege.
<p><b>Context</b></p>
<p><span>&nbsp;</span></p><p><span>LAPS provides a solution to the issue of using a common local account with an identical password on every computer in a domain. LAPS resolves this issue by setting a different, random password for the common local administrator account on every computer in the domain. Domain administrators who use this solution can determine which users, such as helpdesk administrators, are authorized to read passwords.</span></p><p><span>Best practice to avoid common lateral attack techniques such as pass-the-hash is to configure unique local administrator credentials and change periodically. </span></p>
<p><b>Suggested Actions</b></p>
<p><span>Deploy Microsoft Local Administrator Password Solution (LAPS) or comparable solution to ensure no system uses the same local administrator credential.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://techcommunity.microsoft.com/t5/itops-talk-blog/step-by-step-guide-how-to-configure-microsoft-local/ba-p/2806185" target="_blank"><span>Microsoft security advisory: Local Administrator Password Solution</span></a><span /></p> | 1.0 | Implement a solution to configure unique local admin credentials - <a href="https://techcommunity.microsoft.com/t5/itops-talk-blog/step-by-step-guide-how-to-configure-microsoft-local/ba-p/2806185">Implement a solution to configure unique local admin credentials</a>
<p><b>Why Consider This?</b></p>
Use of consistent local administrator passwords leaves the organization susceptible to rapid lateral account movement as a compromised credential can be used on multiple hosts in attempt to escalate privilege.
<p><b>Context</b></p>
<p><span>&nbsp;</span></p><p><span>LAPS provides a solution to the issue of using a common local account with an identical password on every computer in a domain. LAPS resolves this issue by setting a different, random password for the common local administrator account on every computer in the domain. Domain administrators who use this solution can determine which users, such as helpdesk administrators, are authorized to read passwords.</span></p><p><span>Best practice to avoid common lateral attack techniques such as pass-the-hash is to configure unique local administrator credentials and change periodically. </span></p>
<p><b>Suggested Actions</b></p>
<p><span>Deploy Microsoft Local Administrator Password Solution (LAPS) or comparable solution to ensure no system uses the same local administrator credential.</span></p>
<p><b>Learn More</b></p>
<p><a href="https://techcommunity.microsoft.com/t5/itops-talk-blog/step-by-step-guide-how-to-configure-microsoft-local/ba-p/2806185" target="_blank"><span>Microsoft security advisory: Local Administrator Password Solution</span></a><span /></p> | process | implement a solution to configure unique local admin credentials why consider this use of consistent local administrator passwords leaves the organization susceptible to rapid lateral account movement as a compromised credential can be used on multiple hosts in attempt to escalate privilege context nbsp laps provides a solution to the issue of using a common local account with an identical password on every computer in a domain laps resolves this issue by setting a different random password for the common local administrator account on every computer in the domain domain administrators who use this solution can determine which users such as helpdesk administrators are authorized to read passwords best practice to avoid common lateral attack techniques such as pass the hash is to configure unique local administrator credentials and change periodically suggested actions deploy microsoft local administrator password solution laps or comparable solution to ensure no system uses the same local administrator credential learn more microsoft security advisory local administrator password solution | 1 |
18,623 | 24,579,634,482 | IssuesEvent | 2022-10-13 14:44:10 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [FHIR] Questionnaire responses > Response records are not getting created in the FHIR store | Bug Blocker P0 Response datastore Process: Fixed Process: Tested dev | Steps:
1. Sign up or sign in to the mobile app
2. Enroll to the study or click on enrolled study
3. Submit the responses for the activities
4. Go to the eFHIR store
5. Go to the Questionnaire responses and observe
AR: Questionnaire responses > Response records are not getting created in the FHIR store
ER: Response records should get created in the FHIR store
| 2.0 | [FHIR] Questionnaire responses > Response records are not getting created in the FHIR store - Steps:
1. Sign up or sign in to the mobile app
2. Enroll in the study, or click on an already enrolled study
3. Submit the responses for the activities
4. Go to the eFHIR store
5. Go to the Questionnaire responses and observe
AR: Questionnaire responses > Response records are not getting created in the FHIR store
ER: Response records should get created in the FHIR store
| process | questionnaire responses response records are not getting created in the fhir store steps sign up or sign in to the mobile app enroll to the study or click on enrolled study submit the responses for the activities go to the efhir store go to the questionnaire responses and observe ar questionnaire responses responses records are not getting created in the fhir store er response records should get created in the fhir store | 1 |
290,299 | 8,886,665,359 | IssuesEvent | 2019-01-15 01:39:35 | bcgov/ols-router | https://api.github.com/repos/bcgov/ols-router | closed | Add cardinal directions to Route Directions | api enhancement functional route planner low priority usability | @mraross commented on [Sat Apr 07 2018](https://github.com/bcgov/api-specs/issues/325)
Add cardinal directions to start of route and start of any reversals (going out the way you came). For example: Try this route in demo app:
16357 Hwy 2, Tupper, BC
16972 201 Rd, Tupper, BC
562 188 Rd, Tupper, BC
The first instruction is:
Continue onto Hwy 2 for 1.3 km (46 seconds)
but would be more helpful if it said:
Go east along Hwy 2 for 1.3 km (46 seconds)
The first instruction after Stopover 1 is:
Continue onto 201 Rd for 4 km (5 minutes 56 seconds)
but it is actually not a continuation in the same direction but a reversal. We were going south. Now we need to go north as follows:
Go north along 201 Rd for 4 km (5 minutes 56 seconds)
or even
Turn around and go north along 201 Rd for 4 km (5 minutes 56 seconds)
| 1.0 | Add cardinal directions to Route Directions - @mraross commented on [Sat Apr 07 2018](https://github.com/bcgov/api-specs/issues/325)
Add cardinal directions to start of route and start of any reversals (going out the way you came). For example: Try this route in demo app:
16357 Hwy 2, Tupper, BC
16972 201 Rd, Tupper, BC
562 188 Rd, Tupper, BC
The first instruction is:
Continue onto Hwy 2 for 1.3 km (46 seconds)
but would be more helpful if it said:
Go east along Hwy 2 for 1.3 km (46 seconds)
The first instruction after Stopover 1 is:
Continue onto 201 Rd for 4 km (5 minutes 56 seconds)
but it is actually not a continuation in the same direction but a reversal. We were going south. Now we need to go north as follows:
Go north along 201 Rd for 4 km (5 minutes 56 seconds)
or even
Turn around and go north along 201 Rd for 4 km (5 minutes 56 seconds)
| non_process | add cardinal directions to route directions mraross commented on add cardinal directions to start of route and start of any reversals going out the way you came for example try this route in demo app hwy tupper bc rd tupper bc rd tupper bc the first instruction is continue onto hwy for km seconds but would be more helpful if it said go east along hwy for km seconds the first instruction after stopover is continue onto rd for km minutes seconds but it is actually not a continuation in the same direction but a reversal we were going south now we need to go north as follows go north along rd for km minutes seconds or even turn around and go north along rd for km minutes seconds | 0 |
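The record above asks that route directions open with a compass word ("Go east along Hwy 2 …"). As an illustration only — not the OLS Router's actual implementation — a router can derive that word from the initial great-circle bearing between the first two points of a leg, then bucket the bearing into one of eight compass directions. The function and field names here are hypothetical; latitude/longitude in degrees is assumed.

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees (0 = north, 90 = east)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def cardinal(bearing):
    """Bucket a bearing into one of 8 compass words for instruction text."""
    names = ["north", "northeast", "east", "southeast",
             "south", "southwest", "west", "northwest"]
    # Shift by half a sector (22.5 deg) so each word is centered on its bearing.
    return names[int((bearing + 22.5) % 360 // 45)]

def start_instruction(street, km, lat1, lon1, lat2, lon2):
    """Build the suggested 'Go <direction> along <street> ...' opening line."""
    b = initial_bearing(lat1, lon1, lat2, lon2)
    return f"Go {cardinal(b)} along {street} for {km} km"
```

The same helper would cover the reversal case in the record: after a stopover, recomputing the bearing of the outgoing segment naturally yields "Go north" where the naive text said "Continue".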
234,117 | 25,800,871,007 | IssuesEvent | 2022-12-11 01:07:32 | praneethpanasala/linux | https://api.github.com/repos/praneethpanasala/linux | reopened | CVE-2019-19448 (High) detected in linuxv4.19 | security vulnerability | ## CVE-2019-19448 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.19</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/btrfs/free-space-cache.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/btrfs/free-space-cache.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel 5.0.21 and 5.3.11, mounting a crafted btrfs filesystem image, performing some operations, and then making a syncfs system call can lead to a use-after-free in try_merge_free_space in fs/btrfs/free-space-cache.c because the pointer to a left data structure can be the same as the pointer to a right data structure.
<p>Publish Date: 2019-12-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19448>CVE-2019-19448</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-19448">https://www.linuxkernelcves.com/cves/CVE-2019-19448</a></p>
<p>Release Date: 2020-11-02</p>
<p>Fix Resolution: v4.4.233, v4.9.233, v4.14.194, v4.19.141, v5.4.60, v5.7.17, v5.8.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-19448 (High) detected in linuxv4.19 - ## CVE-2019-19448 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv4.19</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in HEAD commit: <a href="https://api.github.com/repos/praneethpanasala/linux/commits/d80c4f847c91020292cb280132b15e2ea147f1a3">d80c4f847c91020292cb280132b15e2ea147f1a3</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/btrfs/free-space-cache.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/btrfs/free-space-cache.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel 5.0.21 and 5.3.11, mounting a crafted btrfs filesystem image, performing some operations, and then making a syncfs system call can lead to a use-after-free in try_merge_free_space in fs/btrfs/free-space-cache.c because the pointer to a left data structure can be the same as the pointer to a right data structure.
<p>Publish Date: 2019-12-08
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19448>CVE-2019-19448</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2019-19448">https://www.linuxkernelcves.com/cves/CVE-2019-19448</a></p>
<p>Release Date: 2020-11-02</p>
<p>Fix Resolution: v4.4.233, v4.9.233, v4.14.194, v4.19.141, v5.4.60, v5.7.17, v5.8.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in cve high severity vulnerability vulnerable library linux kernel source tree library home page a href found in head commit a href found in base branch master vulnerable source files fs btrfs free space cache c fs btrfs free space cache c vulnerability details in the linux kernel and mounting a crafted btrfs filesystem image performing some operations and then making a syncfs system call can lead to a use after free in try merge free space in fs btrfs free space cache c because the pointer to a left data structure can be the same as the pointer to a right data structure publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
14,377 | 17,399,997,365 | IssuesEvent | 2021-08-02 18:12:15 | googleapis/doc-templates | https://api.github.com/repos/googleapis/doc-templates | closed | docfx: update Go golden | type: process | Go doesn't include READMEs any more. There have been other changes, too. | 1.0 | docfx: update Go golden - Go doesn't include READMEs any more. There have been other changes, too. | process | docfx update go golden go doesn t include readmes any more there have been other changes too | 1 |
957 | 3,419,124,280 | IssuesEvent | 2015-12-08 07:55:02 | e-government-ua/iBP | https://api.github.com/repos/e-government-ua/iBP | closed | Dnipro oblast. Issuance of a certificate on: the availability and size of a land share (pai), and the availability in the State Land Cadastre of records on receiving ownership of a land plot within the norms of free privatization for a certain type of its designated purpose (use) | In process of testing in work | Rework of the process from issue 78 with additions.
Info card http://e-services.dp.gov.ua/_layouts/WordViewer.aspx?id=/Lists/PermitStages/Attachments/8129/%D0%86%D0%9A_006.doc
Application form
https://drive.google.com/file/d/0B68lQ-z45GpYMGUyZnhxdzJDcHM/view?usp=sharing | 1.0 | Dnipro oblast. Issuance of a certificate on: the availability and size of a land share (pai), and the availability in the State Land Cadastre of records on receiving ownership of a land plot within the norms of free privatization for a certain type of its designated purpose (use) - Rework of the process from issue 78 with additions.
Info card http://e-services.dp.gov.ua/_layouts/WordViewer.aspx?id=/Lists/PermitStages/Attachments/8129/%D0%86%D0%9A_006.doc
Application form
https://drive.google.com/file/d/0B68lQ-z45GpYMGUyZnhxdzJDcHM/view?usp=sharing | process | dnipro oblast issuance of a certificate on the availability and size of a land share pai and the availability in the state land cadastre of records on receiving ownership of a land plot within the norms of free privatization for a certain type of its designated purpose use rework of the process issue with additions info card application form | 1
407,321 | 11,912,014,561 | IssuesEvent | 2020-03-31 09:34:09 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | SQL thread pools | Estimation: M Module: SQL Priority: High Source: Internal Team: Core Type: Enhancement | We need to introduce thread pools for query processing. This includes:
1. Processing of query operations
1. Execution of query fragments | 1.0 | SQL thread pools - We need to introduce thread pools for query processing. This includes:
1. Processing of query operations
1. Execution of query fragments | non_process | sql thread pools we need to introduce thread pools for query processing this includes processing of query operations execution of query fragments | 0 |
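The record above proposes dedicated pools for (1) processing query operations and (2) executing query fragments. Below is a hypothetical Python sketch of that split — Hazelcast's real engine is Java and considerably more involved, so all names here are illustrative: a small intake pool accepts operations and fans fragments out to a second pool, so long-running fragments cannot starve operation processing.

```python
from concurrent.futures import ThreadPoolExecutor

class QueryEngine:
    """Illustrative two-pool query engine: intake pool + fragment pool."""

    def __init__(self, operation_threads=2, fragment_threads=4):
        # Pool 1: accepts and coordinates query operations.
        self.operation_pool = ThreadPoolExecutor(operation_threads,
                                                 thread_name_prefix="query-op")
        # Pool 2: executes the individual query fragments.
        self.fragment_pool = ThreadPoolExecutor(fragment_threads,
                                                thread_name_prefix="query-frag")

    def submit_query(self, fragments):
        """Handle a query operation asynchronously; returns a Future of results."""
        return self.operation_pool.submit(self._process, fragments)

    def _process(self, fragments):
        # Fan fragments out to the fragment pool, then merge their results.
        futures = [self.fragment_pool.submit(f) for f in fragments]
        return [f.result() for f in futures]

    def shutdown(self):
        self.operation_pool.shutdown()
        self.fragment_pool.shutdown()
```

Because the pools are separate, an operation thread can block waiting on fragment results without deadlocking fragment execution — one design motivation for splitting the two workloads.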
11,909 | 14,699,401,870 | IssuesEvent | 2021-01-04 08:29:41 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Console Aplication (C#) - Memory Leak | area-System.Diagnostics.Process question | I have a program, "Console Aplication", installed on Windows that runs continuously. Initially it consumes 32 MB of RAM, but it grows indefinitely. The program opens connections to an Oracle DB through the "Oracle.ManagedDataAccess" provider. To diagnose what was happening, I ran PerfView; the result shows heavy allocation/execution in the System.Diagnostics and Microsoft.Win32 namespaces that GC (Garbage Collection) can't resolve.
-->
### Configuration
* Which version of .NET is the code running on?
NET 4.8
* What OS and version, and what distro if applicable?
Windows Server 2008 and Windows Server 2016
* What is the architecture (x64, x86, ARM, ARM64)?
Architecture x64,
* Do you know whether it is specific to that configuration?
No, I don't.
* If you're using Blazor, which web browser(s) do you see this issue in?
Not applicable
### Regression?
<!--
* Did this work in a previous build or release of .NET Core, or from .NET Framework? If you can try a previous release or build to find out, that can help us narrow down the problem. If you don't know, that's OK.
-->
### Other information


1.0 | Console Aplication (C#) - Memory Leak - I have a program, "Console Aplication", installed on Windows that runs continuously. Initially it consumes 32 MB of RAM, but it grows indefinitely. The program opens connections to an Oracle DB through the "Oracle.ManagedDataAccess" provider. To diagnose what was happening, I ran PerfView; the result shows heavy allocation/execution in the System.Diagnostics and Microsoft.Win32 namespaces that GC (Garbage Collection) can't resolve.
-->
### Configuration
* Which version of .NET is the code running on?
NET 4.8
* What OS and version, and what distro if applicable?
Windows Server 2008 and Windows Server 2016
* What is the architecture (x64, x86, ARM, ARM64)?
Architecture x64,
* Do you know whether it is specific to that configuration?
No, I don´t.
* If you're using Blazor, which web browser(s) do you see this issue in?
Not applicable
### Regression?
<!--
* Did this work in a previous build or release of .NET Core, or from .NET Framework? If you can try a previous release or build to find out, that can help us narrow down the problem. If you don't know, that's OK.
-->
### Other information


| process | console aplication c memory leak i have a program console aplication install in windows that execute every time initial consume mb ram but grow infinitely my program execute connection with db oracle through provider oracle manageddataaccess to diagnostics what was happening i executed o perfview the result show high consume execute in namespace system diagnostics and microsoft that gb garbage colletion can´t solve configuration which version of net is the code running on net what os and version and what distro if applicable windows server and windows server what is the architecture arm architecture do you know whether it is specific to that configuration no i don´t if you re using blazor which web browser s do you see this issue in not applicable regression did this work in a previous build or release of net core or from net framework if you can try a previous release or build to find out that can help us narrow down the problem if you don t know that s ok other information | 1 |
10,789 | 13,608,996,931 | IssuesEvent | 2020-09-23 03:58:27 | googleapis/java-redis | https://api.github.com/repos/googleapis/java-redis | closed | Dependency Dashboard | api: redis type: process | This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-shared-dependencies-0.x -->deps: update dependency com.google.cloud:google-cloud-shared-dependencies to v0.9.0
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| 1.0 | Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-shared-dependencies-0.x -->deps: update dependency com.google.cloud:google-cloud-shared-dependencies to v0.9.0
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
- [ ] <!-- rebase-all-open-prs -->**Check this option to rebase all the above open PRs at once**
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
| process | dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any build deps update dependency org apache maven plugins maven project info reports plugin to deps update dependency com google cloud google cloud shared dependencies to chore deps update dependency com google cloud libraries bom to check this option to rebase all the above open prs at once check this box to trigger a request for renovate to run again on this repository | 1 |
12,953 | 15,327,261,502 | IssuesEvent | 2021-02-26 05:43:17 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [PD] Emails sent to study participants must use study-specific support email address | P1 Participant datastore Process: Enhancement Process: Fixed Process: Tested QA Process: Tested dev | Emails sent to participants of a specific study (for study invitations) must contain the support contact email address specific to the study, and as configured in the Study Builder | 4.0 | [PD] Emails sent to study participants must use study-specific support email address - Emails sent to participants of a specific study (for study invitations) must contain the support contact email address specific to the study, and as configured in the Study Builder | process | emails sent to study participants must use study specific support email address emails sent to participants of a specific study for study invitations must contain the support contact email address specific to the study and as configured in the study builder | 1 |
5,333 | 8,150,064,095 | IssuesEvent | 2018-08-22 11:47:04 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | I find the value of "unique visitors" is not right! | log-processing question | env:
GoAccess - 1.2
ubuntu 14.04
my goaccess command:
```
goaccess -f /srv/logs/access.log -o report.html
```
The result of "unique visitors" is : '10704"
But if I use shell command:
```
cat /srv/logs/access.log |awk '{print $1}'|sort|uniq -c|wc -l
````
The result is : "5025"
| 1.0 | I find the value of "unique visitors" is not right! - env:
GoAccess - 1.2
ubuntu 14.04
my goaccess command:
```
goaccess -f /srv/logs/access.log -o report.html
```
The result of "unique visitors" is : '10704"
But if I use shell command:
```
cat /srv/logs/access.log |awk '{print $1}'|sort|uniq -c|wc -l
````
The result is : "5025"
| process | i find the value of unique visitors is not right env: goaccess ubuntu my goaccess command goaccess f srv logs access log o report html the result of unique visitors is but if i use shell command cat srv logs access log awk print sort uniq c wc l the result is | 1 |
10,041 | 13,044,161,625 | IssuesEvent | 2020-07-29 03:47:24 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | UCP: Migrate scalar function `SubDateDurationInt` from TiDB | challenge-program-2 component/coprocessor difficulty/easy sig/coprocessor |
## Description
Port the scalar function `SubDateDurationInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| 2.0 | UCP: Migrate scalar function `SubDateDurationInt` from TiDB -
## Description
Port the scalar function `SubDateDurationInt` from TiDB to coprocessor.
## Score
* 50
## Mentor(s)
* @lonng
## Recommended Skills
* Rust programming
## Learning Materials
Already implemented expressions ported from TiDB
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/rpn_expr)
- https://github.com/tikv/tikv/tree/master/components/tidb_query/src/expr)
| process | ucp migrate scalar function subdatedurationint from tidb description port the scalar function subdatedurationint from tidb to coprocessor score mentor s lonng recommended skills rust programming learning materials already implemented expressions ported from tidb | 1 |
769,682 | 27,016,358,526 | IssuesEvent | 2023-02-10 19:47:43 | layer5io/layer5 | https://api.github.com/repos/layer5io/layer5 | closed | [Site Performance] Incorporate SVGO | help wanted kind/chore hacktoberfest area/ci priority/high kind/child issue/remind kind/performance | **Current Behavior**
The layer5.io website has poor performance (see #3366). Portions of its performance can be enhanced by reducing the file size of images, concluding SVG images.
**Desired Situation**
Incorporate https://github.com/svg/svgo into the site's build process or commit hooks or make targets to ensure that SVG images are downsized as much as possible while not losing their original quality.
---
**Contributor Resources**
The layer5.io website uses Gatsby, React, and GitHub Pages. Site content is found under the [`master` branch](https://github.com/layer5io/layer5/tree/master).
- See [contributing instructions](https://github.com/layer5io/layer5/blob/master/CONTRIBUTING.md)
- See Layer5 site designs in this [Figma project](https://www.figma.com/file/5ZwEkSJwUPitURD59YHMEN/Layer5-Designs). Join the [Layer5 Community](http://slack.layer5.io) for access.
| 1.0 | [Site Performance] Incorporate SVGO - **Current Behavior**
The layer5.io website has poor performance (see #3366). Portions of its performance can be enhanced by reducing the file size of images, concluding SVG images.
**Desired Situation**
Incorporate https://github.com/svg/svgo into the site's build process or commit hooks or make targets to ensure that SVG images are downsized as much as possible while not losing their original quality.
---
**Contributor Resources**
The layer5.io website uses Gatsby, React, and GitHub Pages. Site content is found under the [`master` branch](https://github.com/layer5io/layer5/tree/master).
- See [contributing instructions](https://github.com/layer5io/layer5/blob/master/CONTRIBUTING.md)
- See Layer5 site designs in this [Figma project](https://www.figma.com/file/5ZwEkSJwUPitURD59YHMEN/Layer5-Designs). Join the [Layer5 Community](http://slack.layer5.io) for access.
| non_process | incorporate svgo current behavior the io website has poor performance see portions of its performance can be enhanced by reducing the file size of images concluding svg images desired situation incorporate into the site s build process or commit hooks or make targets to ensure that svg images are downsized as much as possible while not losing their original quality contributor resources the io website uses gatsby react and github pages site content is found under the see see site designs in this join the for access | 0 |
34,277 | 29,188,338,711 | IssuesEvent | 2023-05-19 17:25:59 | casangi/astrohack | https://api.github.com/repos/casangi/astrohack | closed | Warnings about the deprecation of the numpy matrix class | Infrastructure | The matrix class is now deprecated in numpy and should not be used anymore.
PendingDeprecationWarning: the matrix subclass is not the recommended way to represent matrices or deal with linear algebra (see https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html). Please adjust your code to use regular ndarray.
30 return matrix(data, dtype=dtype, copy=False) | 1.0 | Warnings about the deprecation of the numpy matrix class - The matrix class is now deprecated in numpy and should not be used anymore.
PendingDeprecationWarning: the matrix subclass is not the recommended way to represent matrices or deal with linear algebra (see https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html). Please adjust your code to use regular ndarray.
30 return matrix(data, dtype=dtype, copy=False) | non_process | warnings about the deprecation of the numpy matrix class the matrix class is now deprecated in numpy and should not be used anymore pendingdeprecationwarning the matrix subclass is not the recommended way to represent matrices or deal with linear algebra see please adjust your code to use regular ndarray return matrix data dtype dtype copy false | 0 |
6,992 | 4,717,123,221 | IssuesEvent | 2016-10-16 13:02:12 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 27516243: iCloud Keychain should not be required to use Home app | classification:ui/usability reproducible:always status:open | #### Description
Summary:
Currently if you try to use the Home app, you must first enable iCloud Keychain. I think user's should be able to use the Home app without this, and live with the limitations this creates.
Steps to Reproduce:
1) Make sure iCloud Keychain is disabled in Settings -> iCloud -> Keychain
2) Open Home
Expected Results:
You can use the Home app practically like normal while having this disabled.
Actual Results:
You are prompted with an un-skippable screen requiring you to enable iCloud Keychain
Version:
iOS 10 beta 3
Notes:
-
Product Version: iOS 10 beta 3
Created: 2016-07-24T20:22:59.027740
Originated: 2016-07-24T13:22:00
Open Radar Link: http://www.openradar.me/27516243 | True | 27516243: iCloud Keychain should not be required to use Home app - #### Description
Summary:
Currently if you try to use the Home app, you must first enable iCloud Keychain. I think user's should be able to use the Home app without this, and live with the limitations this creates.
Steps to Reproduce:
1) Make sure iCloud Keychain is disabled in Settings -> iCloud -> Keychain
2) Open Home
Expected Results:
You can use the Home app practically like normal while having this disabled.
Actual Results:
You are prompted with an un-skippable screen requiring you to enable iCloud Keychain
Version:
iOS 10 beta 3
Notes:
-
Product Version: iOS 10 beta 3
Created: 2016-07-24T20:22:59.027740
Originated: 2016-07-24T13:22:00
Open Radar Link: http://www.openradar.me/27516243 | non_process | icloud keychain should not be required to use home app description summary currently if you try to use the home app you must first enable icloud keychain i think user s should be able to use the home app without this and live with the limitations this creates steps to reproduce make sure icloud keychain is disabled in settings icloud keychain open home expected results you can use the home app practically like normal while having this disabled actual results you are prompted with an un skippable screen requiring you to enable icloud keychain version ios beta notes product version ios beta created originated open radar link | 0 |
1,311 | 14,916,335,241 | IssuesEvent | 2021-01-22 18:02:48 | hashicorp/consul | https://api.github.com/repos/hashicorp/consul | opened | Add emergency server write rate limit to allow controlled recovery from replication failure | theme/reliability | ## Background
This is a follow up to several incidents where the failure described in #9609 was the root cause.
## Proposal
In the outage situation described in the linked ticket above, there is currently no easy way in Consul to reduce the write load on the servers without external coordination like shutting down consumers or reconfiguring rate limits on all clients.
It would be incredibly useful to be able to have a "break glass" emergency setting that allowed operators to specifically introduce a rate limit for accepting new writes into raft. This _must_ be reloadable without restarting the leader to be effective.
We have other investigations about the best way to implement automatic backpressure that will improve Consul server stability in general, there are numerous factors to take into account there that make it a much more complex problem to research and design a solution for so having this simple solution seem important to provide a way to recover without full downtime if this situation does occur. This proposal is not a replacement for more general work but is a quicker path to improving recovery options for operators during incidents.
The idea would be to have a hot-reloadable `rate.Limit` that could be configured on the leader to just error any calls to `raftApply` with a "try again later" error equivalent to an HTTP 429. For requests that originate from client HTTP requests we should ensure that our standard HTTP handler converts those errors into a 429. We have one example of this already in the Connect CA certificate signing RPC endpoint which has a similar back-pressure mechanism.
In use, we'd accept that some writes would fail and some clients would see errors, but this is often preferable to having to shut down the whole cluster to recover which is the only other option in this case often.
## Smoothing
We could wait for a short period before returning the error if rate limit is exceeded which allows handling of small spikes above the limit a little more fairly. See https://github.com/hashicorp/consul/blob/a4327305d1ca583ae459f1d0e3eeacc6805b4316/agent/consul/server_connect.go#L195-L200
More info also in: https://github.com/hashicorp/consul/blob/a4327305d1ca583ae459f1d0e3eeacc6805b4316/agent/consul/server_connect.go#L64-L69
## Possible complications
There may be internal code paths within Consul that write to raft which might not tolerate this error nicely. For example if we have internal leader goroutines that modify state and currently treat a failure to write raft as fatal and force leader stepdown or similar. We'd need to check that carefully and possible have an internal flag that forces internal writes to bypass the limit.
Alternatively we could only enforce the limit higher up for server RPCs so internal writes will always work.
## Proposed Use
We should document the intended usage. It can be used to relieve write load on the cluster without taking it offline by limiting the overall write throughput. Operators could slowly lower the limit until the write throughput is low enough to allow unhealthy followers to catch up. If this is causing a significant error rate in downstream services, time can be minimised by only adding the rate limit for the time it takes to get a new follower caught up after a restart. Each follower can then be restarted in turn with the rate dropped only for the time needed for them to catch up, after a few such restarts, the increased `raft_trailing_logs` should take affect from one of the restarted servers and remove the need for further rate limiting or errors.
While this is not a perfect process, it's much more controlled and preferable to unknown whole-cluster downtime that is currently required since you can't keep a quorum of servers healthy while restarting to change the config to one that will work.
| True | Add emergency server write rate limit to allow controlled recovery from replication failure - ## Background
This is a follow up to several incidents where the failure described in #9609 was the root cause.
## Proposal
In the outage situation described in the linked ticket above, there is currently no easy way in Consul to reduce the write load on the servers without external coordination like shutting down consumers or reconfiguring rate limits on all clients.
It would be incredibly useful to be able to have a "break glass" emergency setting that allowed operators to specifically introduce a rate limit for accepting new writes into raft. This _must_ be reloadable without restarting the leader to be effective.
We have other investigations about the best way to implement automatic backpressure that will improve Consul server stability in general, there are numerous factors to take into account there that make it a much more complex problem to research and design a solution for so having this simple solution seem important to provide a way to recover without full downtime if this situation does occur. This proposal is not a replacement for more general work but is a quicker path to improving recovery options for operators during incidents.
The idea would be to have a hot-reloadable `rate.Limit` that could be configured on the leader to just error any calls to `raftApply` with a "try again later" error equivalent to an HTTP 429. For requests that originate from client HTTP requests we should ensure that our standard HTTP handler converts those errors into a 429. We have one example of this already in the Connect CA certificate signing RPC endpoint which has a similar back-pressure mechanism.
In use, we'd accept that some writes would fail and some clients would see errors, but this is often preferable to having to shut down the whole cluster to recover which is the only other option in this case often.
## Smoothing
We could wait for a short period before returning the error if rate limit is exceeded which allows handling of small spikes above the limit a little more fairly. See https://github.com/hashicorp/consul/blob/a4327305d1ca583ae459f1d0e3eeacc6805b4316/agent/consul/server_connect.go#L195-L200
More info also in: https://github.com/hashicorp/consul/blob/a4327305d1ca583ae459f1d0e3eeacc6805b4316/agent/consul/server_connect.go#L64-L69
## Possible complications
There may be internal code paths within Consul that write to raft which might not tolerate this error nicely. For example if we have internal leader goroutines that modify state and currently treat a failure to write raft as fatal and force leader stepdown or similar. We'd need to check that carefully and possible have an internal flag that forces internal writes to bypass the limit.
Alternatively we could only enforce the limit higher up for server RPCs so internal writes will always work.
## Proposed Use
We should document the intended usage. It can be used to relieve write load on the cluster without taking it offline by limiting the overall write throughput. Operators could slowly lower the limit until the write throughput is low enough to allow unhealthy followers to catch up. If this is causing a significant error rate in downstream services, time can be minimised by only adding the rate limit for the time it takes to get a new follower caught up after a restart. Each follower can then be restarted in turn with the rate dropped only for the time needed for them to catch up, after a few such restarts, the increased `raft_trailing_logs` should take affect from one of the restarted servers and remove the need for further rate limiting or errors.
While this is not a perfect process, it's much more controlled and preferable to unknown whole-cluster downtime that is currently required since you can't keep a quorum of servers healthy while restarting to change the config to one that will work.
| non_process | add emergency server write rate limit to allow controlled recovery from replication failure background this is a follow up to several incidents where the failure described in was the root cause proposal in the outage situation described in the linked ticket above there is currently no easy way in consul to reduce the write load on the servers without external coordination like shutting down consumers or reconfiguring rate limits on all clients it would be incredibly useful to be able to have a break glass emergency setting that allowed operators to specifically introduce a rate limit for accepting new writes into raft this must be reloadable without restarting the leader to be effective we have other investigations about the best way to implement automatic backpressure that will improve consul server stability in general there are numerous factors to take into account there that make it a much more complex problem to research and design a solution for so having this simple solution seem important to provide a way to recover without full downtime if this situation does occur this proposal is not a replacement for more general work but is a quicker path to improving recovery options for operators during incidents the idea would be to have a hot reloadable rate limit that could be configured on the leader to just error any calls to raftapply with a try again later error equivalent to an http for requests that originate from client http requests we should ensure that our standard http handler converts those errors into a we have one example of this already in the connect ca certificate signing rpc endpoint which has a similar back pressure mechanism in use we d accept that some writes would fail and some clients would see errors but this is often preferable to having to shut down the whole cluster to recover which is the only other option in this case often smoothing we could wait for a short period before returning the error if rate limit is exceeded 
which allows handling of small spikes above the limit a little more fairly see more info also in possible complications there may be internal code paths within consul that write to raft which might not tolerate this error nicely for example if we have internal leader goroutines that modify state and currently treat a failure to write raft as fatal and force leader stepdown or similar we d need to check that carefully and possible have an internal flag that forces internal writes to bypass the limit alternatively we could only enforce the limit higher up for server rpcs so internal writes will always work proposed use we should document the intended usage it can be used to relieve write load on the cluster without taking it offline by limiting the overall write throughput operators could slowly lower the limit until the write throughput is low enough to allow unhealthy followers to catch up if this is causing a significant error rate in downstream services time can be minimised by only adding the rate limit for the time it takes to get a new follower caught up after a restart each follower can then be restarted in turn with the rate dropped only for the time needed for them to catch up after a few such restarts the increased raft trailing logs should take affect from one of the restarted servers and remove the need for further rate limiting or errors while this is not a perfect process it s much more controlled and preferable to unknown whole cluster downtime that is currently required since you can t keep a quorum of servers healthy while restarting to change the config to one that will work | 0 |
178,458 | 13,780,627,021 | IssuesEvent | 2020-10-08 15:08:58 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | Failing test: X-Pack Saved Object API Integration Tests -- security_and_spaces.x-pack/test/saved_object_api_integration/security_and_spaces/apis/create·ts - saved objects security and spaces enabled _create dual-privileges user within the space_1 space with overwrite enabled should return 200 success [globaltype/globaltype-id] | failed-test | A test failed on a tracked branch
```
Error: expected 200 "OK", got 503 "Service Unavailable"
at Test._assertStatus (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:268:12)
at Test._assertFunction (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:283:11)
at Test.assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:173:18)
at assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:131:12)
at /dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:718:3)
at parser (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:906:18)
at IncomingMessage.res.on (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/parsers/json.js:19:7)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/8656/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Saved Object API Integration Tests -- security_and_spaces.x-pack/test/saved_object_api_integration/security_and_spaces/apis/create·ts","test.name":"saved objects security and spaces enabled _create dual-privileges user within the space_1 space with overwrite enabled should return 200 success [globaltype/globaltype-id]","test.failCount":1}} --> | 1.0 | Failing test: X-Pack Saved Object API Integration Tests -- security_and_spaces.x-pack/test/saved_object_api_integration/security_and_spaces/apis/create·ts - saved objects security and spaces enabled _create dual-privileges user within the space_1 space with overwrite enabled should return 200 success [globaltype/globaltype-id] - A test failed on a tracked branch
```
Error: expected 200 "OK", got 503 "Service Unavailable"
at Test._assertStatus (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:268:12)
at Test._assertFunction (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:283:11)
at Test.assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:173:18)
at assert (/dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:131:12)
at /dev/shm/workspace/kibana/node_modules/supertest/lib/test.js:128:5
at Test.Request.callback (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:718:3)
at parser (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/index.js:906:18)
at IncomingMessage.res.on (/dev/shm/workspace/kibana/node_modules/superagent/lib/node/parsers/json.js:19:7)
at endReadableNT (_stream_readable.js:1145:12)
at process._tickCallback (internal/process/next_tick.js:63:19)
```
First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+7.x/8656/)
<!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Saved Object API Integration Tests -- security_and_spaces.x-pack/test/saved_object_api_integration/security_and_spaces/apis/create·ts","test.name":"saved objects security and spaces enabled _create dual-privileges user within the space_1 space with overwrite enabled should return 200 success [globaltype/globaltype-id]","test.failCount":1}} --> | non_process | failing test x pack saved object api integration tests security and spaces x pack test saved object api integration security and spaces apis create·ts saved objects security and spaces enabled create dual privileges user within the space space with overwrite enabled should return success a test failed on a tracked branch error expected ok got service unavailable at test assertstatus dev shm workspace kibana node modules supertest lib test js at test assertfunction dev shm workspace kibana node modules supertest lib test js at test assert dev shm workspace kibana node modules supertest lib test js at assert dev shm workspace kibana node modules supertest lib test js at dev shm workspace kibana node modules supertest lib test js at test request callback dev shm workspace kibana node modules superagent lib node index js at parser dev shm workspace kibana node modules superagent lib node index js at incomingmessage res on dev shm workspace kibana node modules superagent lib node parsers json js at endreadablent stream readable js at process tickcallback internal process next tick js first failure | 0 |
155,805 | 13,633,444,519 | IssuesEvent | 2020-09-24 21:26:42 | Witekio/pluma-automation | https://api.github.com/repos/Witekio/pluma-automation | opened | CopyToDeviceAction is not documented | bug documentation good first issue | Requires documentation in README.md, we missed that part. @mjftw , just FYI | 1.0 | CopyToDeviceAction is not documented - Requires documentation in README.md, we missed that part. @mjftw , just FYI | non_process | copytodeviceaction is not documented requires documentation in readme md we missed that part mjftw just fyi | 0 |
5,560 | 8,403,419,839 | IssuesEvent | 2018-10-11 09:43:16 | kiwicom/orbit-components | https://api.github.com/repos/kiwicom/orbit-components | closed | <Select />: longer help text is getting hidden underneath the input | bug processing | ## Current Behavior
Help text which is long enough so that it must break onto a new line gets partially covered by the input.
In the screenshot below, the text is:
`Must be appropriately covered to avoid dents and scratches.`
but only the word is visible.

## Possible Solution
make the position absolute to top?
## Steps to Reproduce
put a two line help text to Select element
## Context (Environment)
using orbit 0.14.0
| 1.0 | <Select />: longer help text is getting hidden underneath the input -
| process | longer help text is getting hidden underneath the input current behavior help text which is long enough so that it must break onto a new line gets partially covered by the input in the screenshot below the text is must be appropriately covered to avoid dents and scratches but only the word is visible possible solution make the position absolute to top steps to reproduce put a two line help text to select element context environment using orbit | 1 |
124,173 | 17,772,477,202 | IssuesEvent | 2021-08-30 15:06:58 | kapseliboi/SWoT | https://api.github.com/repos/kapseliboi/SWoT | opened | CVE-2021-23368 (Medium) detected in postcss-7.0.30.tgz, postcss-7.0.21.tgz | security vulnerability | ## CVE-2021-23368 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.30.tgz</b>, <b>postcss-7.0.21.tgz</b></p></summary>
<p>
<details><summary><b>postcss-7.0.30.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.30.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.30.tgz</a></p>
<p>Path to dependency file: SWoT/web/package.json</p>
<p>Path to vulnerable library: SWoT/web/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- css-loader-3.4.2.tgz
- :x: **postcss-7.0.30.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: SWoT/web/package.json</p>
<p>Path to vulnerable library: SWoT/web/node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.1.tgz (Root Library)
- resolve-url-loader-3.1.1.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/kapseliboi/SWoT/commit/8d372876409db6f93b958aebaf02be985ee81f03">8d372876409db6f93b958aebaf02be985ee81f03</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss from 7.0.0 and before 8.2.10 are vulnerable to Regular Expression Denial of Service (ReDoS) during source map parsing.
<p>Publish Date: 2021-04-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23368>CVE-2021-23368</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23368</a></p>
<p>Release Date: 2021-04-12</p>
<p>Fix Resolution: postcss -8.2.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-23368 (Medium) detected in postcss-7.0.30.tgz, postcss-7.0.21.tgz -
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in postcss tgz postcss tgz cve medium severity vulnerability vulnerable libraries postcss tgz postcss tgz postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file swot web package json path to vulnerable library swot web node modules postcss package json dependency hierarchy react scripts tgz root library css loader tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file swot web package json path to vulnerable library swot web node modules resolve url loader node modules postcss package json dependency hierarchy react scripts tgz root library resolve url loader tgz x postcss tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package postcss from and before are vulnerable to regular expression denial of service redos during source map parsing publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss step up your open source security game with whitesource | 0 |
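The affected range in the advisory above (postcss >= 7.0.0 and < 8.2.10) can be sanity-checked against a version string with a few lines. This is only an illustrative sketch: it assumes plain MAJOR.MINOR.PATCH version strings with no prerelease tags, and real auditing should rely on `npm audit` or the report's own tooling rather than this helper.

```python
# Illustrative check of a version string against the advisory's affected range
# (postcss >= 7.0.0, < 8.2.10). Assumes plain MAJOR.MINOR.PATCH strings.
def is_vulnerable(version: str) -> bool:
    parts = tuple(int(p) for p in version.split("."))
    return (7, 0, 0) <= parts < (8, 2, 10)

# Both versions flagged in this report fall inside the range,
# while the suggested fix resolution 8.2.10 falls outside it.
print(is_vulnerable("7.0.30"), is_vulnerable("7.0.21"), is_vulnerable("8.2.10"))
```

Tuple comparison gives lexicographic ordering over the numeric components, which matches semver ordering for plain release versions.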
166,992 | 6,329,899,642 | IssuesEvent | 2017-07-26 05:18:24 | HouraiTeahouse/FantasyCrescendo | https://api.github.com/repos/HouraiTeahouse/FantasyCrescendo | closed | Changing stages without stopping game causes softlock | Category:Game Engine Priority:0 Severity:0 Status:Assigned Type:Bug | ### When reporting a bug/issue:
- Fantasy Crescendo Version: Build #276
- Operating System: Windows 7
- Expected Behavior: Proper character clearing and restarting.
- Actual Behavior: Softlock - "P1" indicator shown, but character is not loaded.
- Steps to reproduce the behavior:
1. Enter any stage and load a character.
2. Press Ctrl+R without unloading the character.
3. Enter another stage.
[Output log](https://github.com/HouraiTeahouse/FantasyCrescendo/files/1146227/output_log.txt)

| 1.0 | Changing stages without stopping game causes softlock -
| non_process | changing stages without stopping game causes softlock when reporting a bug issue fantasy crescendo version build operating system windows expected behavior proper character clearing and restarting actual behavior softlock indicator shown but character is not loaded steps to reproduce the behavior enter any stage and load a character press ctrl r without unloading the character enter another stage | 0 |
259,534 | 27,639,781,938 | IssuesEvent | 2023-03-10 17:05:08 | MatBenfield/news | https://api.github.com/repos/MatBenfield/news | opened | [SecurityWeek] Cyber Madness Bracket Challenge – Register to Play | SecurityWeek |
SecurityWeek’s Cyber Madness Bracket Challenge is a contest designed to bring the community together in a fun, competitive way through one of America’s top sporting events.
The post [Cyber Madness Bracket Challenge – Register to Play](https://www.securityweek.com/cyber-madness-bracket-challenge-register-to-play/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/cyber-madness-bracket-challenge-register-to-play/>
| True | [SecurityWeek] Cyber Madness Bracket Challenge – Register to Play -
| non_process | cyber madness bracket challenge – register to play securityweek’s cyber madness bracket challenge is a contest designed to bring the community together in a fun competitive way through one of america’s top sporting events the post appeared first on | 0 |
2,677 | 5,502,844,774 | IssuesEvent | 2017-03-16 01:17:36 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | Return relevant FK dest fields automatically in "rows" queries | Duplicate Proposal Query Processor | User Story: as a user looking at row level data I would like to see human readable identifiers in my data rather than cryptic id values (numbers, hashes) so that I can better understand what I'm looking at.
Example if pulling up the list of Invoices from a db I would want to see the name of the User or Customer the invoice is related to instead of a user_id or customer_id value.
| 1.0 | Return relevant FK dest fields automatically in "rows" queries -
| process | return relevant fk dest fields automatically in rows queries user story as a user looking at row level data i would like to see human readable identifiers in my data rather than cryptic id values numbers hashes so that i can better understand what i m looking at example if pulling up the list of invoices from a db i would want to see the name of the user or customer the invoice is related to instead of a user id or customer id value | 1 |
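The user story above amounts to joining each FK column to a human-readable display field in its destination table. A minimal, self-contained sketch with Python's built-in sqlite3 (the `invoices`/`customers` tables and the choice of `name` as the display column are made-up illustrations, not Metabase's actual schema or query generation):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE invoices (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        total REAL
    );
    INSERT INTO customers VALUES (1, 'Acme Corp'), (2, 'Globex');
    INSERT INTO invoices VALUES (10, 2, 99.5);
""")

# The raw "rows" result: a cryptic customer_id, as the user story complains.
raw = conn.execute("SELECT id, customer_id, total FROM invoices").fetchone()

# The same row with the FK destination's display field joined in.
readable = conn.execute("""
    SELECT invoices.id, customers.name, invoices.total
    FROM invoices JOIN customers ON invoices.customer_id = customers.id
""").fetchone()

print(raw)       # (10, 2, 99.5)
print(readable)  # (10, 'Globex', 99.5)
```

The second query is what "return relevant FK dest fields automatically" would generate behind the scenes, so the user sees `Globex` instead of `2`.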
21,371 | 29,202,227,386 | IssuesEvent | 2023-05-21 00:36:39 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Remoto] Data Analyst na Coodesh | SALVADOR HOME OFFICE PJ BANCO DE DADOS DATA SCIENCE PYTHON SQL GIT STARTUP NOSQL SOLID REQUISITOS REMOTO PROCESSOS GITHUB INGLÊS CI UMA ESPANHOL BI BIGQUERY NEGÓCIOS Stale | ## Job description:
This is an opening from a partner of the Coodesh platform; by applying you will get access to the full information about the company and its benefits.
Watch for the redirect that will take you to a url [https://coodesh.com](https://coodesh.com/vagas/data-analyst-185949334?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) with the personalized application pop-up. 👋
<p><strong>Fhinck</strong> is looking for a<strong> </strong><strong><ins>Data Analyst</ins></strong> to join its team!</p>
<p>We are a technology startup. We want people to work and live at their best potential.</p>
<p>And how do we do that? With speed, agility, lots of technology, data science and, together with people, we help understand human interaction and behavior with their processes, routines, tasks and their platforms and systems.<br><br><strong>Responsibilities:</strong></p>
<ul>
<li>Turn complex data into information/insights that non-technical people can understand;</li>
<li>Understand the problems of a business context and be able to find ways in which data can translate that context, creating interactive views/reports for end users so as to give meaning to the collected data;</li>
<li>Beyond the technical side, your activities will require plenty of communication and relationship-building, communicating constantly both with the internal team and with the client to validate new implementations.</li>
</ul>
## Fhinck:
<p>Fhinck is a tool for continuous process improvement: it analyzes times and movements between screens and understands the profile of the activities performed, and how time is distributed through people's interaction with the systems they use. We are a tool for continuous process improvement, not for monitoring employees. This is our website www.fhinck.com.</p>
</p>
## Skills:
- Python
- Relational databases (SQL)
- BI
- GIT
- Non-relational databases (NoSQL)
- BigQuery
- English
## Location:
100% Remote
## Requirements:
- Solid knowledge of SQL;
- Experience with Python;
- Knowledge of GIT;
- Knowledge of BI platforms;
- Knowledge of non-relational databases (preferably Google BigQuery);
- Good oral and written communication;
- Intermediate/advanced English.
## Nice to have:
- Previous startup experience;
- Spanish.
## Benefits:
- Paid vacation;
- Health plan 100% covered for the employee and 50% for dependents;
- Dental plan (100% for Fhinckers and dependents);
- Home office allowance;
- Daycare allowance;
- Nutritionist allowance;
- Allya (benefits platform);
- Fhinck solidário (company charity program);
- Day off on your birthday;
- Equipment provided by the company;
- Service invoicing without equivalent consideration.
## How to apply:
Apply exclusively through the Coodesh platform at the following link: [Data Analyst at Fhinck](https://coodesh.com/vagas/data-analyst-185949334?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
After applying via the Coodesh platform and validating your login, you can follow and receive every interaction of the process there. Use the **Request Feedback** option between one stage and the next of the opening you applied to. That will make the **Recruiter** responsible for the hiring process at the company receive a notification.
## Labels
#### Allocation
Remote
#### Contract type
PJ
#### Category
Data Science | 1.0 | [Remoto] Data Analyst na Coodesh -
Data Science | process | data analyst na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a fhinck está em busca de data analyst para compor seu time somos uma startup de tecnologia nós queremos que as pessoas trabalhem e vivam no seu melhor potencial e como fazemos isso com velocidade agilidade muita tecnologia ciência de dados e junto com as pessoas ajudamos a compreender a interação e o comportamento humano com seus processos rotinas tarefas e suas plataformas e sistemas responsabilidades transformar dados complexos em informações insights compreensíveis para pessoas não técnicas compreender os problemas de um contexto de negócios e ser capaz de encontrar maneiras em que os dados possam traduzir esse contexto criando visões relatórios interativos para usuários finais de forma dar significado aos dados coletados além da parte técnica suas atividades exigirão muita comunicação e relacionamento comunicando se constantemente tanto com com a equipe interna quanto com o cliente para validação de novas implementações fhinck a fhinck é uma ferramenta para melhoria contínua de processos analisa tempos e movimentos entre telas entende o perfil das atividades que são realizadas como o tempo é distribuído através da interação das pessoas com os sistemas que utiliza somos uma ferramenta para melhoria contínua de processos e não para monitorar funcionários esse é nosso website habilidades python banco de dados relacionais sql bi git banco de dados não relacionais nosql bigquery inglês local remoto requisitos sólido conhecimento em sql experiência com python conhecimento em git conhecimentos em plataformas de bi conhecimento em bancos de dados não relacionais preferencialmente em google bigquery boa comunicação oral e escrita inglês intermediário avançado 
diferenciais ter atuado em startup espanhol benefícios férias remuneradas plano de saúde custeado para o colaborador e para os dependentes plano odontológico para fhinckers e dependentes auxílio home office auxílio creche auxílio nutricionista allya plataforma de benefícios fhinck solidário dayoff no aniversário equipamentos disponibilizados pela empresa faturamento de serviço sem contraprestação equivalente como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria data science | 1 |
151,317 | 12,031,698,299 | IssuesEvent | 2020-04-13 10:17:38 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | closed | [Bug] Fix intermittent UI test tabMediaControlButtonTest | Feature:Media intermittent-test 🐞 bug | https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/8046457654635721978/executions/bs.4d9073ffb2260f3d/testcases/2
```Log
androidx.test.espresso.base.DefaultFailureHandler$AssertionFailedWithCauseError: 'with content description text: is "Play"' doesn't match the selected view.
Expected: with content description text: is "Play"
Got: "AppCompatImageButton{id=2131362607, res-name=play_pause_button, desc=Pause, visibility=VISIBLE, width=84, height=84, has-focus=false, has-focusable=true, has-window-focus=true, is-clickable=true, is-enabled=true, is-focused=false, is-focusable=true, is-layout-requested=false, is-selected=false, layout-params=androidx.constraintlayout.widget.ConstraintLayout$LayoutParams@614b60f, tag=null, root-is-layout-requested=false, has-input-connection=false, x=143.0, y=5.0}"
at dalvik.system.VMStack.getThreadStackTrace(Native Method)
at java.lang.Thread.getStackTrace(Thread.java:1538)
at androidx.test.espresso.base.DefaultFailureHandler.getUserFriendlyError(DefaultFailureHandler.java:16)
at androidx.test.espresso.base.DefaultFailureHandler.handle(DefaultFailureHandler.java:36)
at androidx.test.espresso.ViewInteraction.waitForAndHandleInteractionResults(ViewInteraction.java:103)
at androidx.test.espresso.ViewInteraction.check(ViewInteraction.java:31)
at org.mozilla.fenix.ui.robots.HomeScreenRobot.verifyTabMediaControlButtonState(HomeScreenRobot.kt:201)
at org.mozilla.fenix.ui.MediaNotificationTest$tabMediaControlButtonTest$3.invoke(MediaNotificationTest.kt:102)
at org.mozilla.fenix.ui.MediaNotificationTest$tabMediaControlButtonTest$3.invoke(MediaNotificationTest.kt:26)
at org.mozilla.fenix.ui.robots.BrowserRobot$Transition.openHomeScreen(BrowserRobot.kt:355)
at org.mozilla.fenix.ui.MediaNotificationTest.tabMediaControlButtonTest(MediaNotificationTest.kt:99)``` | 1.0 | [Bug] Fix intermittent UI test tabMediaControlButtonTest - https://console.firebase.google.com/u/0/project/moz-fenix/testlab/histories/bh.66b7091e15d53d45/matrices/8046457654635721978/executions/bs.4d9073ffb2260f3d/testcases/2
```Log
androidx.test.espresso.base.DefaultFailureHandler$AssertionFailedWithCauseError: 'with content description text: is "Play"' doesn't match the selected view.
Expected: with content description text: is "Play"
Got: "AppCompatImageButton{id=2131362607, res-name=play_pause_button, desc=Pause, visibility=VISIBLE, width=84, height=84, has-focus=false, has-focusable=true, has-window-focus=true, is-clickable=true, is-enabled=true, is-focused=false, is-focusable=true, is-layout-requested=false, is-selected=false, layout-params=androidx.constraintlayout.widget.ConstraintLayout$LayoutParams@614b60f, tag=null, root-is-layout-requested=false, has-input-connection=false, x=143.0, y=5.0}"
at dalvik.system.VMStack.getThreadStackTrace(Native Method)
at java.lang.Thread.getStackTrace(Thread.java:1538)
at androidx.test.espresso.base.DefaultFailureHandler.getUserFriendlyError(DefaultFailureHandler.java:16)
at androidx.test.espresso.base.DefaultFailureHandler.handle(DefaultFailureHandler.java:36)
at androidx.test.espresso.ViewInteraction.waitForAndHandleInteractionResults(ViewInteraction.java:103)
at androidx.test.espresso.ViewInteraction.check(ViewInteraction.java:31)
at org.mozilla.fenix.ui.robots.HomeScreenRobot.verifyTabMediaControlButtonState(HomeScreenRobot.kt:201)
at org.mozilla.fenix.ui.MediaNotificationTest$tabMediaControlButtonTest$3.invoke(MediaNotificationTest.kt:102)
at org.mozilla.fenix.ui.MediaNotificationTest$tabMediaControlButtonTest$3.invoke(MediaNotificationTest.kt:26)
at org.mozilla.fenix.ui.robots.BrowserRobot$Transition.openHomeScreen(BrowserRobot.kt:355)
at org.mozilla.fenix.ui.MediaNotificationTest.tabMediaControlButtonTest(MediaNotificationTest.kt:99)``` | non_process | fix intermittent ui test tabmediacontrolbuttontest log androidx test espresso base defaultfailurehandler assertionfailedwithcauseerror with content description text is play doesn t match the selected view expected with content description text is play got appcompatimagebutton id res name play pause button desc pause visibility visible width height has focus false has focusable true has window focus true is clickable true is enabled true is focused false is focusable true is layout requested false is selected false layout params androidx constraintlayout widget constraintlayout layoutparams tag null root is layout requested false has input connection false x y at dalvik system vmstack getthreadstacktrace native method at java lang thread getstacktrace thread java at androidx test espresso base defaultfailurehandler getuserfriendlyerror defaultfailurehandler java at androidx test espresso base defaultfailurehandler handle defaultfailurehandler java at androidx test espresso viewinteraction waitforandhandleinteractionresults viewinteraction java at androidx test espresso viewinteraction check viewinteraction java at org mozilla fenix ui robots homescreenrobot verifytabmediacontrolbuttonstate homescreenrobot kt at org mozilla fenix ui medianotificationtest tabmediacontrolbuttontest invoke medianotificationtest kt at org mozilla fenix ui medianotificationtest tabmediacontrolbuttontest invoke medianotificationtest kt at org mozilla fenix ui robots browserrobot transition openhomescreen browserrobot kt at org mozilla fenix ui medianotificationtest tabmediacontrolbuttontest medianotificationtest kt | 0 |
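The failure above is a race: the button's content description flips between "Play" and "Pause" while the one-shot assertion runs. Espresso's own remedy would be an IdlingResource or a polling view assertion; the generic poll-until-deadline pattern behind those fixes, sketched here outside any Android API, looks like this:

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.1):
    """Return True as soon as predicate() is truthy, False once timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return bool(predicate())  # one last check at the deadline
```

A stabilized test would then call something like `wait_until(lambda: media_button_description() == "Play")` before asserting, where `media_button_description` stands in for whatever state accessor the test harness exposes.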
14,086 | 16,977,309,872 | IssuesEvent | 2021-06-30 02:14:23 | q191201771/lal | https://api.github.com/repos/q191201771/lal | closed | arm32 cross-compilation fails | #Bug *In process |
1. Attempting to cross-compile an arm build produces the errors below; arm64 works fine.
2. Test environment
go version go1.16.5 linux/amd64
ubuntu18.04
latest lal code
3. Test procedure
Build command
`CGO_ENABLED=0 GOOS=linux GOARCH=arm go build app/lalserver/main.go`
Error log
` github.com/q191201771/naza/pkg/nazaatomic
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:15:6: Int64 redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:19:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:19:6: Uint64 redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:24:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:25:20: (*Uint64).Load redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:31:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:29:20: (*Uint64).Store redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:38:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:33:20: (*Uint64).Add redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:44:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:38:20: (*Uint64).Sub redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:53:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:42:20: (*Uint64).Increment redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:61:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:46:20: (*Uint64).Decrement redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:69:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:50:20: (*Uint64).CompareAndSwap redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:77:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:54:20: (*Uint64).Swap redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:89:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:54:20: too many errors
` | 1.0 | arm32 cross-compilation fails -
1. Attempting to cross-compile an arm package produces the errors below; arm64 is fine.
2. Test environment
go version go1.16.5 linux/amd64
ubuntu18.04
lal latest code
3. Test procedure
Build command
`CGO_ENABLED=0 GOOS=linux GOARCH=arm go build app/lalserver/main.go`
Error log
` github.com/q191201771/naza/pkg/nazaatomic
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:15:6: Int64 redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:19:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:19:6: Uint64 redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:24:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:25:20: (*Uint64).Load redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:31:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:29:20: (*Uint64).Store redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:38:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:33:20: (*Uint64).Add redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:44:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:38:20: (*Uint64).Sub redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:53:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:42:20: (*Uint64).Increment redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:61:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:46:20: (*Uint64).Decrement redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:69:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:50:20: (*Uint64).CompareAndSwap redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:77:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:54:20: (*Uint64).Swap redeclared in this block
previous declaration at /root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_32bit.go:89:6
/root/go/pkg/mod/github.com/q191201771/naza@v0.19.1/pkg/nazaatomic/atomic_64bit.go:54:20: too many errors
` | process | attempting to cross compile an arm package produces the following errors test environment go version linux lal latest code test procedure build command cgo enabled goos linux goarch arm go build app lalserver main go error log github com naza pkg nazaatomic root go pkg mod github com naza pkg nazaatomic atomic go redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go root go pkg mod github com naza pkg nazaatomic atomic go redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go root go pkg mod github com naza pkg nazaatomic atomic go load redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go root go pkg mod github com naza pkg nazaatomic atomic go store redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go root go pkg mod github com naza pkg nazaatomic atomic go add redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go root go pkg mod github com naza pkg nazaatomic atomic go sub redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go root go pkg mod github com naza pkg nazaatomic atomic go increment redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go root go pkg mod github com naza pkg nazaatomic atomic go decrement redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go root go pkg mod github com naza pkg nazaatomic atomic go compareandswap redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go root go pkg mod github com naza pkg nazaatomic atomic go swap redeclared in this block previous declaration at root go pkg mod github com naza pkg nazaatomic atomic go too many errors | 1
2,812 | 5,738,574,655 | IssuesEvent | 2017-04-23 05:50:54 | SIMEXP/niak | https://api.github.com/repos/SIMEXP/niak | closed | A progress verbose while doing QC | enhancement preprocessing quality control | Note from @amanbadhwar : it would be a good add to have a verbose like progression bar or percentage when doing QC
| 1.0 | A progress verbose while doing QC - Note from @amanbadhwar : it would be a good add to have a verbose like progression bar or percentage when doing QC
| process | a progress verbose while doing qc note from amanbadhwar it would be a good add to have a verbose like progression bar or percentage when doing qc | 1 |
265,826 | 28,298,760,040 | IssuesEvent | 2023-04-10 02:38:11 | nidhi7598/linux-4.19.72 | https://api.github.com/repos/nidhi7598/linux-4.19.72 | closed | CVE-2021-26931 (Medium) detected in linuxlinux-4.19.254, linuxlinux-4.19.254 - autoclosed | Mend: dependency security vulnerability | ## CVE-2021-26931 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.254</b>, <b>linuxlinux-4.19.254</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel 2.6.39 through 5.10.16, as used in Xen. Block, net, and SCSI backends consider certain errors a plain bug, deliberately causing a kernel crash. For errors potentially being at least under the influence of guests (such as out of memory conditions), it isn't correct to assume a plain bug. Memory allocations potentially causing such crashes occur only when Linux is running in PV mode, though. This affects drivers/block/xen-blkback/blkback.c and drivers/xen/xen-scsiback.c.
<p>Publish Date: 2021-02-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-26931>CVE-2021-26931</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-26931">https://nvd.nist.gov/vuln/detail/CVE-2021-26931</a></p>
<p>Release Date: 2021-02-17</p>
<p>Fix Resolution: linux-libc-headers - 5.13;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-26931 (Medium) detected in linuxlinux-4.19.254, linuxlinux-4.19.254 - autoclosed - ## CVE-2021-26931 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>linuxlinux-4.19.254</b>, <b>linuxlinux-4.19.254</b></p></summary>
<p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in the Linux kernel 2.6.39 through 5.10.16, as used in Xen. Block, net, and SCSI backends consider certain errors a plain bug, deliberately causing a kernel crash. For errors potentially being at least under the influence of guests (such as out of memory conditions), it isn't correct to assume a plain bug. Memory allocations potentially causing such crashes occur only when Linux is running in PV mode, though. This affects drivers/block/xen-blkback/blkback.c and drivers/xen/xen-scsiback.c.
<p>Publish Date: 2021-02-17
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-26931>CVE-2021-26931</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-26931">https://nvd.nist.gov/vuln/detail/CVE-2021-26931</a></p>
<p>Release Date: 2021-02-17</p>
<p>Fix Resolution: linux-libc-headers - 5.13;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in linuxlinux linuxlinux autoclosed cve medium severity vulnerability vulnerable libraries linuxlinux linuxlinux vulnerability details an issue was discovered in the linux kernel through as used in xen block net and scsi backends consider certain errors a plain bug deliberately causing a kernel crash for errors potentially being at least under the influence of guests such as out of memory conditions it isn t correct to assume a plain bug memory allocations potentially causing such crashes occur only when linux is running in pv mode though this affects drivers block xen blkback blkback c and drivers xen xen scsiback c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc gitautoinc step up your open source security game with mend | 0 |
10,987 | 13,783,874,237 | IssuesEvent | 2020-10-08 19:56:22 | googleapis/gax-php | https://api.github.com/repos/googleapis/gax-php | closed | The latest release does not comply with psr-4 autoloading standard | type: process | Please release the current dev-master as our project strictly requires compatibility with Composer v2.0 (the release should contain #278 and #279).
Due to this I can't use your library (google/cloud-pubsub) in our project to support RTDN from Google Play.
Thank you. | 1.0 | The latest release does not comply with psr-4 autoloading standard - Please release the current dev-master as our project strictly requires compatibility with Composer v2.0 (the release should contain #278 and #279).
Due to this I can't use your library (google/cloud-pubsub) in our project to support RTDN from Google Play.
Thank you. | process | the latest release does not comply with psr autoloading standard please release the current dev master as our project strictly requires compatibility with composer the release should contain and due to this i can t use your library google cloud pubsub in our project to support rtdn from google play thank you | 1 |
6,626 | 9,725,756,550 | IssuesEvent | 2019-05-30 09:34:55 | linnovate/root | https://api.github.com/repos/linnovate/root | closed | Delete multiple choice - button not clickable | 2.0.7 Process bug | go to Settings -> Folders
create new item
go to Documents -> Manage Documents
create few items
choose some items and delete them
the items are deleted only after refreshing the page or going to another tab and back to the deleted item's list
- Also happens in: Folders from Office, Tasks from Discussion, Tasks from Projects and Office from Tasks

| 1.0 | Delete multiple choice - button not clickable - go to Settings -> Folders
create new item
go to Documents -> Manage Documents
create few items
choose some items and delete them
the items are deleted only after refreshing the page or going to another tab and back to the deleted item's list
- Also happens in: Folders from Office, Tasks from Discussion, Tasks from Projects and Office from Tasks

| process | delete multiple choice button not clickable go to settings folders create new item go to documents manage documents create few items choose some items and delete them the items are deleted only after refreshing the page or going to another tab and back to the deleted item s list also happens in folders from office tasks from discussion tasks from projects and office from tasks | 1 |
19,007 | 25,006,597,169 | IssuesEvent | 2022-11-03 12:21:58 | Tencent/tdesign-miniprogram | https://api.github.com/repos/Tencent/tdesign-miniprogram | closed | [tabs] Hope to add a label next to the tab name | in process | ### What problem does this feature solve
Current situations encountered with tabs:
1. Per-tab message counts: show a different count per tab, with styling (currently only a plain count can be shown)
2. Tabs used for products: styled labels such as New or Hot cannot be configured
### What solution do you suggest
Provide a separate badge setting with slot support; the label can be placed before or after the name | 1.0 | [tabs] Hope to add a label next to the tab name - ### What problem does this feature solve
Current situations encountered with tabs:
1. Per-tab message counts: show a different count per tab, with styling (currently only a plain count can be shown)
2. Tabs used for products: styled labels such as New or Hot cannot be configured
### What solution do you suggest
Provide a separate badge setting with slot support; the label can be placed before or after the name | process | hope to add a label next to the tab name what problem does this feature solve current situations encountered with tabs per tab message counts show a different count per tab with styling currently only a plain count can be shown tabs used for products styled labels such as new or hot cannot be configured what solution do you suggest provide a separate badge setting with slot support the label can be placed before or after the name | 1
198,342 | 14,974,029,018 | IssuesEvent | 2021-01-28 02:31:53 | microsoft/AzureStorageExplorer | https://api.github.com/repos/microsoft/AzureStorageExplorer | closed | The folder name displays as strange strings when using non-ENU characters to create one ADLS Gen2 folder | :beetle: regression :gear: adls gen2 :heavy_check_mark: merged 🧪 testing | **Storage Explorer Version:** 1.17.0
**Build Number:** 20210127.3
**Branch:** main
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ MacOS Catalina
**Architecture:** ia32/ x64
**Language**: Czech/ Hungarian/ Japanese/ Korean/ Russian/ Swedish/ ZH-CN/ ZH-TW
**Regression From:** Previous release (1.17.0)
**Steps to reproduce:**
1. Expand one ADLS Gen2 storage account -> Blob Containers.
2. Click 'New Folder' on the toolbar.
3. Enter the name '新建文件夹' to create a new folder.
4. Check whether a folder named '新建文件夹' displays.
**Expect Experience:**
A folder named '新建文件夹' displays.
**Actual Experience:**
A folder named '%E6%96%B0%E5%BB%BA%E6%96%87%E4%BB%B6%E5%A4%B9' displays.

## Additional Context ##
This issue also reproduces when creating folder with characters in Czech/ Hungarian/ Japanese/ Korean/ Russian/ Swedish/ ZH-CN/ ZH-TW. | 1.0 | The folder name displays as strange strings when using non-ENU characters to create one ADLS Gen2 folder - **Storage Explorer Version:** 1.17.0
**Build Number:** 20210127.3
**Branch:** main
**Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ MacOS Catalina
**Architecture:** ia32/ x64
**Language**: Czech/ Hungarian/ Japanese/ Korean/ Russian/ Swedish/ ZH-CN/ ZH-TW
**Regression From:** Previous release (1.17.0)
**Steps to reproduce:**
1. Expand one ADLS Gen2 storage account -> Blob Containers.
2. Click 'New Folder' on the toolbar.
3. Enter the name '新建文件夹' to create a new folder.
4. Check whether a folder named '新建文件夹' displays.
**Expect Experience:**
A folder named '新建文件夹' displays.
**Actual Experience:**
A folder named '%E6%96%B0%E5%BB%BA%E6%96%87%E4%BB%B6%E5%A4%B9' displays.

## Additional Context ##
This issue also reproduces when creating folder with characters in Czech/ Hungarian/ Japanese/ Korean/ Russian/ Swedish/ ZH-CN/ ZH-TW. | non_process | the folder name displays as strange strings when using non enu characters to create one adls folder storage explorer version build number branch main platform os windows linux ubuntu macos catalina architecture language czech hungarian japanese korean russian swedish zh cn zh tw regression from previous release steps to reproduce expand one adls storage account blob containers click new folder on the toolbar enter the name 新建文件夹 to create a new folder check whether a folder named 新建文件夹 displays expect experience a folder named 新建文件夹 displays actual experience a folder named bb ba bb displays additional context this issue also reproduces when creating folder with characters in czech hungarian japanese korean russian swedish zh cn zh tw | 0 |
71,349 | 18,714,538,776 | IssuesEvent | 2021-11-03 01:27:05 | NVIDIA/spark-rapids | https://api.github.com/repos/NVIDIA/spark-rapids | opened | [FEA] build and test pipelines for databricks 9.1 | feature request build P1 | https://github.com/NVIDIA/spark-rapids/pull/3767 has add shim layer for databricks 9.1 runtime,
we will need to update our nightly build and test pipeline to support build new shims.
also our pre-merge pipeline should also cover databricks 9.1 and keep compatibility w/ databricks 8.2 for plugin version lower than 21.12 (like 21.10)
related to https://github.com/NVIDIA/spark-rapids/issues/4006
| 1.0 | [FEA] build and test pipelines for databricks 9.1 - https://github.com/NVIDIA/spark-rapids/pull/3767 has add shim layer for databricks 9.1 runtime,
we will need to update our nightly build and test pipeline to support build new shims.
also our pre-merge pipeline should also cover databricks 9.1 and keep compatibility w/ databricks 8.2 for plugin version lower than 21.12 (like 21.10)
related to https://github.com/NVIDIA/spark-rapids/issues/4006
| non_process | build and test pipelines for databricks has add shim layer for databricks runtime we will need to update our nightly build and test pipeline to support build new shims also our pre merge pipeline should also cover databricks and keep compatibility w databricks for plugin version lower than like related to | 0 |
16,192 | 20,674,120,624 | IssuesEvent | 2022-03-10 07:21:16 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | CockroachDB: Remove the Json native type | process/candidate topic: schema engines/data model parser team/migrations topic: cockroachdb team/psl-wg | On cockroachdb, [JSON is an alias for JSONB](https://www.cockroachlabs.com/docs/v21.2/jsonb#alias). We should reflect that in our native type definitions and remove `Json`. | 1.0 | CockroachDB: Remove the Json native type - On cockroachdb, [JSON is an alias for JSONB](https://www.cockroachlabs.com/docs/v21.2/jsonb#alias). We should reflect that in our native type definitions and remove `Json`. | process | cockroachdb remove the json native type on cockroachdb we should reflect that in our native type definitions and remove json | 1 |
17,717 | 23,619,046,225 | IssuesEvent | 2022-08-24 18:37:58 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | opened | [processors/transform] Add `drop` action support | enhancement processor/transform | `drop` action is pretty important functionality to replace filter processors. It's already being mentioned in the docs but not implemented yet.
Subtasks per data type:
- [ ] Add `drop` action support for metrics
- [ ] Add `drop` action support for traces
- [ ] Add `drop` action support for logs | 1.0 | [processors/transform] Add `drop` action support - `drop` action is pretty important functionality to replace filter processors. It's already being mentioned in the docs but not implemented yet.
Subtasks per data type:
- [ ] Add `drop` action support for metrics
- [ ] Add `drop` action support for traces
- [ ] Add `drop` action support for logs | process | add drop action support drop action is pretty important functionality to replace filter processors it s already being mentioned in the docs but not implemented yet subtasks per data type add drop action support for metrics add drop action support for traces add drop action support for logs | 1 |
203,259 | 15,875,896,886 | IssuesEvent | 2021-04-09 07:39:40 | zkat/big-brain | https://api.github.com/repos/zkat/big-brain | opened | Write guide | documentation help wanted | There should be a step-by-step guide on how to get started with big-brain and do incrementally more complex things. | 1.0 | Write guide - There should be a step-by-step guide on how to get started with big-brain and do incrementally more complex things. | non_process | write guide there should be a step by step guide on how to get started with big brain and do incrementally more complex things | 0 |
18,494 | 24,550,979,314 | IssuesEvent | 2022-10-12 12:35:42 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [iOS] [Offline indicator] Share button should be disabled in the below mentioned screens when participant is offline | Bug P1 iOS Process: Fixed Process: Tested QA Process: Tested dev | Share button should be disabled in the below mentioned screens when the participant is offline
1. App glossary
2. Dashboard
3. Consent pdf ( both resources screen and study overview screen)

| 3.0 | [iOS] [Offline indicator] Share button should be disabled in the below mentioned screens when participant is offline - Share button should be disabled in the below mentioned screens when the participant is offline
1. App glossary
2. Dashboard
3. Consent pdf ( both resources screen and study overview screen)

| process | share button should be disabled in the below mentioned screens when participant is offline share button should be disabled in the below mentioned screens when the participant is offline app glossary dashboard consent pdf both resources screen and study overview screen | 1 |
5,700 | 8,563,583,843 | IssuesEvent | 2018-11-09 14:28:39 | easy-software-ufal/annotations_repos | https://api.github.com/repos/easy-software-ufal/annotations_repos | opened | natemcmaster/CommandLineUtils Difficult to omit short names using attributes | C# RPV test wrong processing | Issue: `https://github.com/natemcmaster/CommandLineUtils/issues/48`
PR: `https://github.com/natemcmaster/CommandLineUtils/pull/50` | 1.0 | natemcmaster/CommandLineUtils Difficult to omit short names using attributes - Issue: `https://github.com/natemcmaster/CommandLineUtils/issues/48`
PR: `https://github.com/natemcmaster/CommandLineUtils/pull/50` | process | natemcmaster commandlineutils difficult to omit short names using attributes issue pr | 1 |
15,683 | 19,847,797,641 | IssuesEvent | 2022-01-21 08:53:22 | ooi-data/RS03ECAL-MJ03E-06-BOTPTA302-streamed-botpt_lily_sample | https://api.github.com/repos/ooi-data/RS03ECAL-MJ03E-06-BOTPTA302-streamed-botpt_lily_sample | opened | 🛑 Processing failed: ValueError | process | ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:53:21.561491.
## Details
Flow name: `RS03ECAL-MJ03E-06-BOTPTA302-streamed-botpt_lily_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: cannot reshape array of size 1209600 into shape (25000000,)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync
self[append_selection] = data
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp>
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem
chunk = self._decode_chunk(cdata)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk
chunk = chunk.reshape(expected_shape or self._chunks, order=self._order)
ValueError: cannot reshape array of size 1209600 into shape (25000000,)
```
</details>
| 1.0 | 🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:53:21.561491.
## Details
Flow name: `RS03ECAL-MJ03E-06-BOTPTA302-streamed-botpt_lily_sample`
Task name: `processing_task`
Error type: `ValueError`
Error message: cannot reshape array of size 1209600 into shape (25000000,)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2305, in append
return self._write_op(self._append_nosync, data, axis=axis)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2211, in _write_op
return self._synchronized_op(f, *args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2201, in _synchronized_op
result = f(*args, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2341, in _append_nosync
self[append_selection] = data
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1224, in __setitem__
self.set_basic_selection(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1319, in set_basic_selection
return self._set_basic_selection_nd(selection, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1610, in _set_basic_selection_nd
self._set_selection(indexer, value, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1682, in _set_selection
self._chunk_setitems(lchunk_coords, lchunk_selection, chunk_values,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in _chunk_setitems
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1871, in <listcomp>
cdatas = [self._process_for_setitem(key, sel, val, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1950, in _process_for_setitem
chunk = self._decode_chunk(cdata)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 2003, in _decode_chunk
chunk = chunk.reshape(expected_shape or self._chunks, order=self._order)
ValueError: cannot reshape array of size 1209600 into shape (25000000,)
```
</details>
| process | 🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name streamed botpt lily sample task name processing task error type valueerror error message cannot reshape array of size into shape traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages zarr core py line in append return self write op self append nosync data axis axis file srv conda envs notebook lib site packages zarr core py line in write op return self synchronized op f args kwargs file srv conda envs notebook lib site packages zarr core py line in synchronized op result f args kwargs file srv conda envs notebook lib site packages zarr core py line in append nosync self data file srv conda envs notebook lib site packages zarr core py line in setitem self set basic selection selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection return self set basic selection nd selection value fields fields file srv conda envs notebook lib site packages zarr core py line in set basic selection nd self set selection indexer value fields fields file srv conda envs notebook lib site packages zarr core py line in set selection self chunk setitems lchunk coords lchunk selection chunk values file srv conda envs notebook lib site packages zarr core py line in chunk setitems cdatas self process for setitem key sel val 
fields fields file srv conda envs notebook lib site packages zarr core py line in cdatas self process for setitem key sel val fields fields file srv conda envs notebook lib site packages zarr core py line in process for setitem chunk self decode chunk cdata file srv conda envs notebook lib site packages zarr core py line in decode chunk chunk chunk reshape expected shape or self chunks order self order valueerror cannot reshape array of size into shape | 1 |
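The traceback in the record above bottoms out in numpy's reshape check inside zarr's chunk decoding (roughly `chunk.reshape(expected_shape, order=self.order)`): the decoded chunk's element count must match the expected chunk shape. A minimal sketch of the underlying error, with hypothetical sizes chosen purely for illustration (the record's actual sizes are elided):

```python
import numpy as np

# zarr's decode step ends with roughly:
#   chunk = chunk.reshape(expected_shape, order=self.order)
# which raises ValueError when the element counts disagree.
data = np.arange(10)      # 10 elements
try:
    data.reshape(3, 4)    # 3 * 4 = 12 != 10
except ValueError as err:
    print(err)            # cannot reshape array of size 10 into shape (3,4)
```

In the append path this usually means the data being appended no longer matches the existing array's chunk layout, e.g. its non-append dimensions differ from the on-disk array.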
349,750 | 10,472,734,335 | IssuesEvent | 2019-09-23 10:55:28 | wix/wix-style-react | https://api.github.com/repos/wix/wix-style-react | closed | <ImageViewer/> - `removeRoundedBorders` property doesn't work | ImageViewer Priority:Low | # 🐛 Bug Report
### 🏗 Relevant Components
<ImageViewer/>
### 😯 Current Behavior
`removeRoundedBorders` property doesn't do anything
### 🤔 Expected Behavior
`removeRoundedBorders` property should remove round borders
priority - minor | 1.0 | <ImageViewer/> - `removeRoundedBorders` property doesn't work - # 🐛 Bug Report
### 🏗 Relevant Components
<ImageViewer/>
### 😯 Current Behavior
`removeRoundedBorders` property doesn't do anything
### 🤔 Expected Behavior
`removeRoundedBorders` property should remove round borders
priority - minor | non_process | removeroundedborders property doesn t work 🐛 bug report 🏗 relevant components 😯 current behavior removeroundedborders property doesn t do enything 🤔 expected behavior removeroundedborders property should remove round borders priority minor | 0 |
362,882 | 25,396,678,492 | IssuesEvent | 2022-11-22 09:11:34 | marmelab/react-admin | https://api.github.com/repos/marmelab/react-admin | closed | SimpleForm redirect attribute entails Warning: Received `false` for a non-boolean attribute `redirect`. | bug documentation | **What you were expecting:**
(According to the [documentation](https://github.com/marmelab/react-admin/blob/0ead4754e847a25aea57db81ce8468cf054e7534/packages/ra-ui-materialui/src/form/Toolbar.tsx#L42))
There is no warning in the browser console when I use the SimpleForm tag with redirect={false}
```
<SimpleForm redirect={false}>
</SimpleForm>
```
**What happened instead:**
There is a warning:
```
Warning: Received `false` for a non-boolean attribute `redirect`.
```
**Steps to reproduce:**
1) Launch [CodeSandbox](https://codesandbox.io/p/github/magicxor/react-admin-repro-received-false-for-redirect-bug/csb-42eu1r/draft/hardcore-resonance?file=%2FREADME.md&workspace=%257B%2522activeFileId%2522%253A%2522claqnajtx000sl2el8mhe7hu9%2522%252C%2522openFiles%2522%253A%255B%2522%252FREADME.md%2522%255D%252C%2522sidebarPanel%2522%253A%2522EXPLORER%2522%252C%2522gitSidebarPanel%2522%253A%2522COMMIT%2522%252C%2522sidekickItems%2522%253A%255B%257B%2522type%2522%253A%2522PREVIEW%2522%252C%2522taskId%2522%253A%2522start%2522%252C%2522port%2522%253A3000%252C%2522key%2522%253A%2522claqnc4ea008g2a69k10b95q4%2522%252C%2522isMinimized%2522%253Afalse%252C%2522path%2522%253A%2522%252F%2523%252Fuser%252F112%2522%257D%252C%257B%2522type%2522%253A%2522TASK_LOG%2522%252C%2522taskId%2522%253A%2522start%2522%252C%2522key%2522%253A%2522claqnc1su006f2a69sjr2i1nm%2522%252C%2522isMinimized%2522%253Afalse%257D%255D%257D) or clone https://github.com/magicxor/react-admin-repro-received-false-for-redirect-bug
2) Go to User -> 112 -> Edit (http://localhost:3000/#/user/112)
**Related code:**
[CodeSandbox](https://codesandbox.io/p/github/magicxor/react-admin-repro-received-false-for-redirect-bug/csb-42eu1r/draft/hardcore-resonance?file=%2FREADME.md&workspace=%257B%2522activeFileId%2522%253A%2522claqnajtx000sl2el8mhe7hu9%2522%252C%2522openFiles%2522%253A%255B%2522%252FREADME.md%2522%255D%252C%2522sidebarPanel%2522%253A%2522EXPLORER%2522%252C%2522gitSidebarPanel%2522%253A%2522COMMIT%2522%252C%2522sidekickItems%2522%253A%255B%257B%2522type%2522%253A%2522PREVIEW%2522%252C%2522taskId%2522%253A%2522start%2522%252C%2522port%2522%253A3000%252C%2522key%2522%253A%2522claqnc4ea008g2a69k10b95q4%2522%252C%2522isMinimized%2522%253Afalse%252C%2522path%2522%253A%2522%252F%2523%252Fuser%252F112%2522%257D%252C%257B%2522type%2522%253A%2522TASK_LOG%2522%252C%2522taskId%2522%253A%2522start%2522%252C%2522key%2522%253A%2522claqnc1su006f2a69sjr2i1nm%2522%252C%2522isMinimized%2522%253Afalse%257D%255D%257D)
https://github.com/magicxor/react-admin-repro-received-false-for-redirect-bug
**Environment**
* React-admin version: ^4.5.1
* Last version that did not exhibit the issue (if applicable):
* React version: ^18.2.0
* Browser: Chrome 107.0.5304.107 (Official Build) (64-bit) and Firefox 107.0 (64-bit)
* Stack trace (in case of a JS error):
```
Warning: Received `false` for a non-boolean attribute `redirect`.
If you want to write it to the DOM, pass a string instead: redirect="false" or redirect={value.toString()}.
If you used to conditionally omit it with redirect={condition && value}, pass redirect={condition ? value : undefined} instead.
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
Stack@http://localhost:3000/static/js/bundle.js:24108:87
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
CardContent@http://localhost:3000/static/js/bundle.js:9166:82
DefaultComponent@http://localhost:3000/static/js/bundle.js:92104:18
form
FormGroupsProvider@http://localhost:3000/static/js/bundle.js:82076:18
FormProvider@http://localhost:3000/static/js/bundle.js:143048:7
LabelPrefixContextProvider@http://localhost:3000/static/js/bundle.js:86407:16
RecordContextProvider@http://localhost:3000/static/js/bundle.js:77419:18
OptionalRecordContextProvider@http://localhost:3000/static/js/bundle.js:77363:15
Form@http://localhost:3000/static/js/bundle.js:81975:18
SimpleForm@http://localhost:3000/static/js/bundle.js:92079:18
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
Paper@http://localhost:3000/static/js/bundle.js:20960:82
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
Card@http://localhost:3000/static/js/bundle.js:9300:82
div
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
EditView@http://localhost:3000/static/js/bundle.js:90534:17
RecordContextProvider@http://localhost:3000/static/js/bundle.js:77419:18
SaveContextProvider@http://localhost:3000/static/js/bundle.js:77516:18
EditContextProvider@http://localhost:3000/static/js/bundle.js:74911:18
EditBase@http://localhost:3000/static/js/bundle.js:74801:18
Edit@http://localhost:3000/static/js/bundle.js:90334:72
UserEdit
RenderedRoute@http://localhost:3000/static/js/bundle.js:132458:7
Routes@http://localhost:3000/static/js/bundle.js:132879:7
ResourceContextProvider@http://localhost:3000/static/js/bundle.js:78396:18
Resource@http://localhost:3000/static/js/bundle.js:78280:16
RenderedRoute@http://localhost:3000/static/js/bundle.js:132458:7
Routes@http://localhost:3000/static/js/bundle.js:132879:7
ErrorBoundary@http://localhost:3000/static/js/bundle.js:125919:37
div
main
div
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
Layout@http://localhost:3000/static/js/bundle.js:93416:12
div
RenderedRoute@http://localhost:3000/static/js/bundle.js:132458:7
Routes@http://localhost:3000/static/js/bundle.js:132879:7
CoreAdminRoutes@http://localhost:3000/static/js/bundle.js:78099:77
RenderedRoute@http://localhost:3000/static/js/bundle.js:132458:7
Routes@http://localhost:3000/static/js/bundle.js:132879:7
CoreAdminUI@http://localhost:3000/static/js/bundle.js:78206:12
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
ScopedCssBaseline@http://localhost:3000/static/js/bundle.js:21876:82
AdminUI@http://localhost:3000/static/js/bundle.js:87812:22
InnerThemeProvider@http://localhost:3000/static/js/bundle.js:30802:70
ThemeProvider@http://localhost:3000/static/js/bundle.js:30516:7
ThemeProvider@http://localhost:3000/static/js/bundle.js:30823:7
ThemeProvider@http://localhost:3000/static/js/bundle.js:94915:18
ResourceDefinitionContextProvider@http://localhost:3000/static/js/bundle.js:78468:12
NotificationContextProvider@http://localhost:3000/static/js/bundle.js:84812:18
I18nContextProvider@http://localhost:3000/static/js/bundle.js:83835:12
Router@http://localhost:3000/static/js/bundle.js:132821:7
HistoryRouter@http://localhost:3000/static/js/bundle.js:85371:18
InternalRouter@http://localhost:3000/static/js/bundle.js:85285:18
BasenameContextProvider@http://localhost:3000/static/js/bundle.js:85340:18
AdminRouter@http://localhost:3000/static/js/bundle.js:85267:17
QueryClientProvider@http://localhost:3000/static/js/bundle.js:129625:16
PreferencesEditorContextProvider@http://localhost:3000/static/js/bundle.js:85020:18
StoreContextProvider@http://localhost:3000/static/js/bundle.js:85711:15
CoreAdminContext@http://localhost:3000/static/js/bundle.js:78029:22
AdminContext@http://localhost:3000/static/js/bundle.js:87755:12
Admin@http://localhost:3000/static/js/bundle.js:99936:22
App react-dom.development.js:86
```
| 1.0 | SimpleForm redirect attribute entails Warning: Received `false` for a non-boolean attribute `redirect`. - **What you were expecting:**
(According to the [documentation](https://github.com/marmelab/react-admin/blob/0ead4754e847a25aea57db81ce8468cf054e7534/packages/ra-ui-materialui/src/form/Toolbar.tsx#L42))
There is no warning in the browser console when I use the SimpleForm tag with redirect={false}
```
<SimpleForm redirect={false}>
</SimpleForm>
```
**What happened instead:**
There is a warning:
```
Warning: Received `false` for a non-boolean attribute `redirect`.
```
**Steps to reproduce:**
1) Launch [CodeSandbox](https://codesandbox.io/p/github/magicxor/react-admin-repro-received-false-for-redirect-bug/csb-42eu1r/draft/hardcore-resonance?file=%2FREADME.md&workspace=%257B%2522activeFileId%2522%253A%2522claqnajtx000sl2el8mhe7hu9%2522%252C%2522openFiles%2522%253A%255B%2522%252FREADME.md%2522%255D%252C%2522sidebarPanel%2522%253A%2522EXPLORER%2522%252C%2522gitSidebarPanel%2522%253A%2522COMMIT%2522%252C%2522sidekickItems%2522%253A%255B%257B%2522type%2522%253A%2522PREVIEW%2522%252C%2522taskId%2522%253A%2522start%2522%252C%2522port%2522%253A3000%252C%2522key%2522%253A%2522claqnc4ea008g2a69k10b95q4%2522%252C%2522isMinimized%2522%253Afalse%252C%2522path%2522%253A%2522%252F%2523%252Fuser%252F112%2522%257D%252C%257B%2522type%2522%253A%2522TASK_LOG%2522%252C%2522taskId%2522%253A%2522start%2522%252C%2522key%2522%253A%2522claqnc1su006f2a69sjr2i1nm%2522%252C%2522isMinimized%2522%253Afalse%257D%255D%257D) or clone https://github.com/magicxor/react-admin-repro-received-false-for-redirect-bug
2) Go to User -> 112 -> Edit (http://localhost:3000/#/user/112)
**Related code:**
[CodeSandbox](https://codesandbox.io/p/github/magicxor/react-admin-repro-received-false-for-redirect-bug/csb-42eu1r/draft/hardcore-resonance?file=%2FREADME.md&workspace=%257B%2522activeFileId%2522%253A%2522claqnajtx000sl2el8mhe7hu9%2522%252C%2522openFiles%2522%253A%255B%2522%252FREADME.md%2522%255D%252C%2522sidebarPanel%2522%253A%2522EXPLORER%2522%252C%2522gitSidebarPanel%2522%253A%2522COMMIT%2522%252C%2522sidekickItems%2522%253A%255B%257B%2522type%2522%253A%2522PREVIEW%2522%252C%2522taskId%2522%253A%2522start%2522%252C%2522port%2522%253A3000%252C%2522key%2522%253A%2522claqnc4ea008g2a69k10b95q4%2522%252C%2522isMinimized%2522%253Afalse%252C%2522path%2522%253A%2522%252F%2523%252Fuser%252F112%2522%257D%252C%257B%2522type%2522%253A%2522TASK_LOG%2522%252C%2522taskId%2522%253A%2522start%2522%252C%2522key%2522%253A%2522claqnc1su006f2a69sjr2i1nm%2522%252C%2522isMinimized%2522%253Afalse%257D%255D%257D)
https://github.com/magicxor/react-admin-repro-received-false-for-redirect-bug
**Environment**
* React-admin version: ^4.5.1
* Last version that did not exhibit the issue (if applicable):
* React version: ^18.2.0
* Browser: Chrome 107.0.5304.107 (Official Build) (64-bit) and Firefox 107.0 (64-bit)
* Stack trace (in case of a JS error):
```
Warning: Received `false` for a non-boolean attribute `redirect`.
If you want to write it to the DOM, pass a string instead: redirect="false" or redirect={value.toString()}.
If you used to conditionally omit it with redirect={condition && value}, pass redirect={condition ? value : undefined} instead.
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
Stack@http://localhost:3000/static/js/bundle.js:24108:87
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
CardContent@http://localhost:3000/static/js/bundle.js:9166:82
DefaultComponent@http://localhost:3000/static/js/bundle.js:92104:18
form
FormGroupsProvider@http://localhost:3000/static/js/bundle.js:82076:18
FormProvider@http://localhost:3000/static/js/bundle.js:143048:7
LabelPrefixContextProvider@http://localhost:3000/static/js/bundle.js:86407:16
RecordContextProvider@http://localhost:3000/static/js/bundle.js:77419:18
OptionalRecordContextProvider@http://localhost:3000/static/js/bundle.js:77363:15
Form@http://localhost:3000/static/js/bundle.js:81975:18
SimpleForm@http://localhost:3000/static/js/bundle.js:92079:18
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
Paper@http://localhost:3000/static/js/bundle.js:20960:82
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
Card@http://localhost:3000/static/js/bundle.js:9300:82
div
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
EditView@http://localhost:3000/static/js/bundle.js:90534:17
RecordContextProvider@http://localhost:3000/static/js/bundle.js:77419:18
SaveContextProvider@http://localhost:3000/static/js/bundle.js:77516:18
EditContextProvider@http://localhost:3000/static/js/bundle.js:74911:18
EditBase@http://localhost:3000/static/js/bundle.js:74801:18
Edit@http://localhost:3000/static/js/bundle.js:90334:72
UserEdit
RenderedRoute@http://localhost:3000/static/js/bundle.js:132458:7
Routes@http://localhost:3000/static/js/bundle.js:132879:7
ResourceContextProvider@http://localhost:3000/static/js/bundle.js:78396:18
Resource@http://localhost:3000/static/js/bundle.js:78280:16
RenderedRoute@http://localhost:3000/static/js/bundle.js:132458:7
Routes@http://localhost:3000/static/js/bundle.js:132879:7
ErrorBoundary@http://localhost:3000/static/js/bundle.js:125919:37
div
main
div
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
Layout@http://localhost:3000/static/js/bundle.js:93416:12
div
RenderedRoute@http://localhost:3000/static/js/bundle.js:132458:7
Routes@http://localhost:3000/static/js/bundle.js:132879:7
CoreAdminRoutes@http://localhost:3000/static/js/bundle.js:78099:77
RenderedRoute@http://localhost:3000/static/js/bundle.js:132458:7
Routes@http://localhost:3000/static/js/bundle.js:132879:7
CoreAdminUI@http://localhost:3000/static/js/bundle.js:78206:12
div
./node_modules/@emotion/react/dist/emotion-element-6a883da9.browser.esm.js/withEmotionCache/<@http://localhost:3000/static/js/bundle.js:1124:66
ScopedCssBaseline@http://localhost:3000/static/js/bundle.js:21876:82
AdminUI@http://localhost:3000/static/js/bundle.js:87812:22
InnerThemeProvider@http://localhost:3000/static/js/bundle.js:30802:70
ThemeProvider@http://localhost:3000/static/js/bundle.js:30516:7
ThemeProvider@http://localhost:3000/static/js/bundle.js:30823:7
ThemeProvider@http://localhost:3000/static/js/bundle.js:94915:18
ResourceDefinitionContextProvider@http://localhost:3000/static/js/bundle.js:78468:12
NotificationContextProvider@http://localhost:3000/static/js/bundle.js:84812:18
I18nContextProvider@http://localhost:3000/static/js/bundle.js:83835:12
Router@http://localhost:3000/static/js/bundle.js:132821:7
HistoryRouter@http://localhost:3000/static/js/bundle.js:85371:18
InternalRouter@http://localhost:3000/static/js/bundle.js:85285:18
BasenameContextProvider@http://localhost:3000/static/js/bundle.js:85340:18
AdminRouter@http://localhost:3000/static/js/bundle.js:85267:17
QueryClientProvider@http://localhost:3000/static/js/bundle.js:129625:16
PreferencesEditorContextProvider@http://localhost:3000/static/js/bundle.js:85020:18
StoreContextProvider@http://localhost:3000/static/js/bundle.js:85711:15
CoreAdminContext@http://localhost:3000/static/js/bundle.js:78029:22
AdminContext@http://localhost:3000/static/js/bundle.js:87755:12
Admin@http://localhost:3000/static/js/bundle.js:99936:22
App react-dom.development.js:86
```
| non_process | simpleform redirect attribute entails warning received false for a non boolean attribute redirect what you were expecting according to the there is no warning in the browser console when i use the simpleform tag with redirect false what happened instead there is a warning warning received false for a non boolean attribute redirect steps to reproduce launch or clone go to user edit related code environment react admin version last version that did not exhibit the issue if applicable react version browser chrome official build bit and firefox bit stack trace in case of a js error warning received false for a non boolean attribute redirect if you want to write it to the dom pass a string instead redirect false or redirect value tostring if you used to conditionally omit it with redirect condition value pass redirect condition value undefined instead div node modules emotion react dist emotion element browser esm js withemotioncache stack div node modules emotion react dist emotion element browser esm js withemotioncache cardcontent defaultcomponent form formgroupsprovider formprovider labelprefixcontextprovider recordcontextprovider optionalrecordcontextprovider form simpleform div node modules emotion react dist emotion element browser esm js withemotioncache paper node modules emotion react dist emotion element browser esm js withemotioncache card div div node modules emotion react dist emotion element browser esm js withemotioncache editview recordcontextprovider savecontextprovider editcontextprovider editbase edit useredit renderedroute routes resourcecontextprovider resource renderedroute routes errorboundary div main div div node modules emotion react dist emotion element browser esm js withemotioncache layout div renderedroute routes coreadminroutes renderedroute routes coreadminui div node modules emotion react dist emotion element browser esm js withemotioncache scopedcssbaseline adminui innerthemeprovider themeprovider themeprovider 
themeprovider resourcedefinitioncontextprovider notificationcontextprovider router historyrouter internalrouter basenamecontextprovider adminrouter queryclientprovider preferenceseditorcontextprovider storecontextprovider coreadmincontext admincontext admin app react dom development js | 0 |
425,093 | 29,191,820,276 | IssuesEvent | 2023-05-19 20:51:26 | caproto/caproto | https://api.github.com/repos/caproto/caproto | closed | Document meaning of "High load. Batched 2 commands" and warn only if above a threshold | help wanted documentation server | I am getting this message but I do not know how to make sense of it. See below. Is it something to do with sending too much data or sending too many updates to PVs?
```
[I 11:50:39.604 common: 390] High load. Batched 2 commands (168B) with 0.0003s latency.
[I 11:50:49.370 common: 390] High load. Batched 2 commands (168B) with 0.0033s latency.
[I 11:50:52.602 common: 390] High load. Batched 4 commands (416B) with 0.0005s latency.
[I 11:50:58.699 common: 390] High load. Batched 2 commands (168B) with 0.0002s latency.
[I 11:51:00.885 common: 390] High load. Batched 2 commands (168B) with 0.0008s latency.
[I 11:51:01.226 common: 390] High load. Batched 4 commands (416B) with 0.0002s latency.
[I 11:51:09.341 common: 390] High load. Batched 2 commands (168B) with 0.0006s latency.
[I 11:51:09.428 common: 390] High load. Batched 2 commands (208B) with 0.0012s latency.
[I 11:51:11.877 common: 390] High load. Batched 2 commands (168B) with 0.0124s latency.
[I 11:51:12.216 common: 390] High load. Batched 4 commands (416B) with 0.0003s latency.
[I 11:51:20.059 common: 390] High load. Batched 2 commands (168B) with 0.0008s latency.
[I 11:51:23.274 common: 390] High load. Batched 2 commands (168B) with 0.0002s latency.
[I 11:51:23.656 common: 390] High load. Batched 4 commands (416B) with 0.0005s latency.
[I 11:51:28.470 common: 390] High load. Batched 2 commands (168B) with 0.0006s latency.
[I 11:51:28.681 common: 390] High load. Batched 2 commands (168B) with 0.0009s latency.
[I 11:51:30.737 common: 390] High load. Batched 2 commands (168B) with 0.0143s latency.
[I 11:51:33.044 common: 390] High load. Batched 2 commands (168B) with 0.0069s latency.
[I 11:51:33.393 common: 390] High load. Batched 4 commands (416B) with 0.0003s latency.
[I 11:52:34.463 common: 390] High load. Batched 4 commands (416B) with 0.0002s latency.
[I 11:53:36.582 common: 390] High load. Batched 4 commands (416B) with 0.0002s latency.
[I 11:54:34.552 common: 390] High load. Batched 4 commands (416B) with 0.0002s latency.
```
| 1.0 | Document meaning of "High load. Batched 2 commands" and warn only if above a threshold - I am getting this message but I do not know how to make sense of it. See below. Is it something to do with sending too much data or sending too many updates to PVs?
```
[I 11:50:39.604 common: 390] High load. Batched 2 commands (168B) with 0.0003s latency.
[I 11:50:49.370 common: 390] High load. Batched 2 commands (168B) with 0.0033s latency.
[I 11:50:52.602 common: 390] High load. Batched 4 commands (416B) with 0.0005s latency.
[I 11:50:58.699 common: 390] High load. Batched 2 commands (168B) with 0.0002s latency.
[I 11:51:00.885 common: 390] High load. Batched 2 commands (168B) with 0.0008s latency.
[I 11:51:01.226 common: 390] High load. Batched 4 commands (416B) with 0.0002s latency.
[I 11:51:09.341 common: 390] High load. Batched 2 commands (168B) with 0.0006s latency.
[I 11:51:09.428 common: 390] High load. Batched 2 commands (208B) with 0.0012s latency.
[I 11:51:11.877 common: 390] High load. Batched 2 commands (168B) with 0.0124s latency.
[I 11:51:12.216 common: 390] High load. Batched 4 commands (416B) with 0.0003s latency.
[I 11:51:20.059 common: 390] High load. Batched 2 commands (168B) with 0.0008s latency.
[I 11:51:23.274 common: 390] High load. Batched 2 commands (168B) with 0.0002s latency.
[I 11:51:23.656 common: 390] High load. Batched 4 commands (416B) with 0.0005s latency.
[I 11:51:28.470 common: 390] High load. Batched 2 commands (168B) with 0.0006s latency.
[I 11:51:28.681 common: 390] High load. Batched 2 commands (168B) with 0.0009s latency.
[I 11:51:30.737 common: 390] High load. Batched 2 commands (168B) with 0.0143s latency.
[I 11:51:33.044 common: 390] High load. Batched 2 commands (168B) with 0.0069s latency.
[I 11:51:33.393 common: 390] High load. Batched 4 commands (416B) with 0.0003s latency.
[I 11:52:34.463 common: 390] High load. Batched 4 commands (416B) with 0.0002s latency.
[I 11:53:36.582 common: 390] High load. Batched 4 commands (416B) with 0.0002s latency.
[I 11:54:34.552 common: 390] High load. Batched 4 commands (416B) with 0.0002s latency.
```
| non_process | document meaning of high load batched commands and warn only if above a threshold i am getting this message but i do not how to make sense out of it see below is it something to do with sending too much data or sending to many updates to pvs high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency high load batched commands with latency | 0 |
123,058 | 4,851,953,873 | IssuesEvent | 2016-11-11 08:23:03 | MatchboxDorry/dorry-web | https://api.github.com/repos/MatchboxDorry/dorry-web | closed | After starting multiple aiphine 3:2 instances, unable to find service | effort: 1 (easy) feature: controller flag: fixed priority: 1 (urgent) type: bug | **Dorry UI Build Version:**
Version: 0.1.2-alpha
**Operating System:**
Name: Ubuntu
Version: 16.04-LTS(64bit)
**Browser:**
Browser name: Chrome
Browser version: 54.0.2840.59
**What I want to do**
Start multiple alphine services
**Where I am**
The app page
**What I have done**
Clicked start, about 6 times
**What I expect:**
6 running services are added
**What really happened**:
The app page shows container created, but the service page shows no change at all; nothing changed within 10 minutes.
Within 1 hour, 2 stopped containers appeared, but they carry no useful information, so it is impossible to confirm which app they belong to.
| 1.0 | After starting multiple aiphine 3:2 instances, unable to find service - **Dorry UI Build Version:**
Version: 0.1.2-alpha
**Operating System:**
Name: Ubuntu
Version: 16.04-LTS(64bit)
**Browser:**
Browser name: Chrome
Browser version: 54.0.2840.59
**What I want to do**
Start multiple alphine services
**Where I am**
The app page
**What I have done**
Clicked start, about 6 times
**What I expect:**
6 running services are added
**What really happened**:
The app page shows container created, but the service page shows no change at all; nothing changed within 10 minutes.
Within 1 hour, 2 stopped containers appeared, but they carry no useful information, so it is impossible to confirm which app they belong to.
| non_process | start 多个aiphine ,无法找到service dorry ui build versin version alpha operation system name ubuntu version lts browser browser name chrome browser version what i want to do 想启动多个alphine服务 where i am app页面 what i have done 点击start, what i expect service what really happened app页面显示container created,service 页面没有任何变化, 。 , ,但是没有有效信息,无法确认是什么app。 | 0 |
8,204 | 11,396,669,078 | IssuesEvent | 2020-01-30 13:59:30 | prisma/prisma-client-js | https://api.github.com/repos/prisma/prisma-client-js | closed | Improve photon to prisma client error | kind/improvement process/candidate | Right now we show this:

It's great that we detect this, but it's not very actionable. I propose we reword this to the following:
---
Oops! Photon has been renamed to Prisma Client. Please make the following adjustments:
1. Rename `provider = "photonjs"` to `provider = "prisma-client-js"` in your `schema.prisma` file.
2. Update your `package.json`'s `@prisma/photon` dependency to `@prisma/client`
3. Adjust `import { Photon } from '@prisma/photon'` to `import { PrismaClient } from '@prisma/client'` in your code.
4. Run `prisma2 generate` again
| 1.0 | Improve photon to prisma client error - Right now we show this:

It's great that we detect this, but it's not very actionable. I propose we reword this to the following:
---
Oops! Photon has been renamed to Prisma Client. Please make the following adjustments:
1. Rename `provider = "photonjs"` to `provider = "prisma-client-js"` in your `schema.prisma` file.
2. Update your `package.json`'s `@prisma/photon` dependency to `@prisma/client`
3. Adjust `import { Photon } from '@prisma/photon'` to `import { PrismaClient } from '@prisma/client'` in your code.
4. Run `prisma2 generate` again
| process | improve photon to prisma client error right now we show this it s great that we detect this but it s not very actionable i propose we reword this to the following oops photon has been renamed to prisma client please make the following adjustments rename provider photonjs to provider prisma client js in your schema prisma file update your package json s prisma photon dependency to prisma client adjust import photon from prisma photon to import prismaclient from prisma client in your code run generate again | 1 |
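The rename in step 1 of the record above is a one-token change in `schema.prisma`. A hypothetical before/after sketch, assuming the provider is declared in a `generator` block as in typical Prisma 2 schemas (block names are illustrative):

```prisma
// Before: deprecated Photon generator
generator photon {
  provider = "photonjs"
}

// After: Prisma Client generator (then, per steps 2-3, update the
// package dependency and the import to @prisma/client)
generator client {
  provider = "prisma-client-js"
}
```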
21,551 | 29,865,435,491 | IssuesEvent | 2023-06-20 03:06:26 | cncf/tag-security | https://api.github.com/repos/cncf/tag-security | closed | [Sec Assess WG] Time and Effort of Security Assessments | help wanted assessment-process suggestion inactive | This issue was created from results of the Security Assessment Improvement Working Group (https://github.com/cncf/sig-security/issues/167#issuecomment-714514142).
# Time and Effort of Security Assessments
## Premise
- The result time span of assessments tends to stretch
- There is little awareness or lack of clarity of the current scheduling aspects of assessments
- Fairly time consuming on project side - barrier for project without strong corp sponsorship
## Ideas
- Assessments tend to drag on over >2-3 weeks
- Assessment timeline should be capped, or given a hard time limit
- Give recommendations of word length for sections
- Part of the review should be automated
- It makes it easy for people to report issues
- Templates for project and reviewers
## Additional Context:
- https://github.com/cncf/sig-security/tree/master/assessments/guide
- https://github.com/cncf/sig-security/blob/master/assessments/guide/security-reviewer.md#time-and-effort
- https://github.com/cncf/sig-security/blob/master/assessments/guide/project-lead.md#time-and-effort
## Logistics
- [ ] Contributors (For multiple contributors, 1 lead to coordinate)
- Placeholder_1
- Placeholder_2
- [ ] SIG-Representative | 1.0 | [Sec Assess WG] Time and Effort of Security Assessments - This issue was created from results of the Security Assessment Improvement Working Group (https://github.com/cncf/sig-security/issues/167#issuecomment-714514142).
# Time and Effort of Security Assessments
## Premise
- The result time span of assessments tends to stretch
- There is little awareness or lack of clarity of the current scheduling aspects of assessments
- Fairly time consuming on project side - barrier for project without strong corp sponsorship
## Ideas
- Assessments tend to drag on over >2-3 weeks
- Assessment timeline should be capped, or given a hard time limit
- Give recommendations of word length for sections
- Part of the review should be automated
- It makes it easy for people to report issues
- Templates for project and reviewers
## Additional Context:
- https://github.com/cncf/sig-security/tree/master/assessments/guide
- https://github.com/cncf/sig-security/blob/master/assessments/guide/security-reviewer.md#time-and-effort
- https://github.com/cncf/sig-security/blob/master/assessments/guide/project-lead.md#time-and-effort
## Logistics
- [ ] Contributors (For multiple contributors, 1 lead to coordinate)
- Placeholder_1
- Placeholder_2
- [ ] SIG-Representative | process | time and effort of security assessments this issue was created from results of the security assessment improvement working group time and effort of security assessments premise the result time span of assessments tend to stretch there is little awareness or lack of clarity of the current scheduling aspects of assessments fairly time consuming on project side barrier for project without strong corp sponsorship ideas assessments tend to drag on over weeks assesment timeline should be capped or with a hard time limit give recommendations of word length for sections part of the review should be automated it makes it easy for people to report issues templates for project and reviewers additional context logistics contributors for multiple contributors lead to coordinate placeholder placeholder sig representative | 1 |
405,285 | 11,870,909,487 | IssuesEvent | 2020-03-26 13:35:56 | WebAhead/wajjbat-social | https://api.github.com/repos/WebAhead/wajjbat-social | opened | Add share button functionality | T4h priority-2 | Make the share button send all the collected data to a specific route | 1.0 | Add share button functionality - Make the share button send all the collected data to a specific route | non_process | add share button functionality make the share button send all the collected data to a specific route | 0 |
9,432 | 12,422,312,275 | IssuesEvent | 2020-05-23 21:24:56 | MicrosoftDocs/azure-docs | https://api.github.com/repos/MicrosoftDocs/azure-docs | closed | Runbooks missing | Pri1 automation/svc cxp process-automation/subsvc product-question triaged | None of the runbooks listed in the table were auto-deployed by the solution to my Automation account.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte** | 1.0 | Runbooks missing - None of the runbooks listed in the table were auto-deployed by the solution to my Automation account.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 225c9d05-83dd-b006-0025-3753f5ab25bf
* Version Independent ID: 9eecef0c-b1cb-1136-faf7-542214492096
* Content: [Start/stop VMs during off-hours solution](https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management#feedback)
* Content Source: [articles/automation/automation-solution-vm-management.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-solution-vm-management.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte** | process | runbooks missing none of the runbooks listed in the table were auto deployed by the solution to my automation account document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte | 1 |
547,516 | 16,043,447,355 | IssuesEvent | 2021-04-22 10:46:20 | googleapis/java-dialogflow | https://api.github.com/repos/googleapis/java-dialogflow | reopened | com.example.dialogflow.DetectIntentWithAudioTest: testDetectIntentAudio failed | api: dialogflow flakybot: flaky flakybot: issue priority: p1 type: bug | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 122d52c2f89d0e831926eabe6ff43a614523041a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f0423470-f450-4e00-b001-89901495c704), [Sponge](http://sponge2/f0423470-f450-4e00-b001-89901495c704)
status: failed
<details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: Credentials failed to obtain metadata
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.dialogflow.v2.SessionsClient.detectIntent(SessionsClient.java:267)
at com.example.dialogflow.DetectIntentAudio.detectIntentAudio(DetectIntentAudio.java:76)
at com.example.dialogflow.DetectIntentWithAudioTest.testDetectIntentAudio(DetectIntentWithAudioTest.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: Credentials failed to obtain metadata
at io.grpc.Status.asRuntimeException(Status.java:535)
... 14 more
Caused by: java.io.IOException: Error getting access token for service account: 400 Bad Request
POST https://oauth2.googleapis.com/token
{"error":"invalid_grant","error_description":"Invalid JWT Signature."}
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:612)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:164)
at com.google.auth.oauth2.OAuth2Credentials.getRequestMetadata(OAuth2Credentials.java:149)
at com.google.auth.oauth2.ServiceAccountCredentials.getRequestMetadata(ServiceAccountCredentials.java:946)
at com.google.auth.Credentials.blockingGetToCallback(Credentials.java:112)
at com.google.auth.Credentials$1.run(Credentials.java:98)
... 7 more
Caused by: com.google.api.client.http.HttpResponseException: 400 Bad Request
POST https://oauth2.googleapis.com/token
{"error":"invalid_grant","error_description":"Invalid JWT Signature."}
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1116)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:609)
... 12 more
</pre></details> | 1.0 | com.example.dialogflow.DetectIntentWithAudioTest: testDetectIntentAudio failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 122d52c2f89d0e831926eabe6ff43a614523041a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f0423470-f450-4e00-b001-89901495c704), [Sponge](http://sponge2/f0423470-f450-4e00-b001-89901495c704)
status: failed
<details><summary>Test output</summary><br><pre>com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: Credentials failed to obtain metadata
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57)
at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112)
at com.google.cloud.dialogflow.v2.SessionsClient.detectIntent(SessionsClient.java:267)
at com.example.dialogflow.DetectIntentAudio.detectIntentAudio(DetectIntentAudio.java:76)
at com.example.dialogflow.DetectIntentWithAudioTest.testDetectIntentAudio(DetectIntentWithAudioTest.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: Credentials failed to obtain metadata
at io.grpc.Status.asRuntimeException(Status.java:535)
... 14 more
Caused by: java.io.IOException: Error getting access token for service account: 400 Bad Request
POST https://oauth2.googleapis.com/token
{"error":"invalid_grant","error_description":"Invalid JWT Signature."}
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:612)
at com.google.auth.oauth2.OAuth2Credentials.refresh(OAuth2Credentials.java:164)
at com.google.auth.oauth2.OAuth2Credentials.getRequestMetadata(OAuth2Credentials.java:149)
at com.google.auth.oauth2.ServiceAccountCredentials.getRequestMetadata(ServiceAccountCredentials.java:946)
at com.google.auth.Credentials.blockingGetToCallback(Credentials.java:112)
at com.google.auth.Credentials$1.run(Credentials.java:98)
... 7 more
Caused by: com.google.api.client.http.HttpResponseException: 400 Bad Request
POST https://oauth2.googleapis.com/token
{"error":"invalid_grant","error_description":"Invalid JWT Signature."}
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1116)
at com.google.auth.oauth2.ServiceAccountCredentials.refreshAccessToken(ServiceAccountCredentials.java:609)
... 12 more
</pre></details> | non_process | com example dialogflow detectintentwithaudiotest testdetectintentaudio failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output com google api gax rpc unavailableexception io grpc statusruntimeexception unavailable credentials failed to obtain metadata at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util 
concurrent scheduledthreadpoolexecutor scheduledfuturetask access scheduledthreadpoolexecutor java at java util concurrent scheduledthreadpoolexecutor scheduledfuturetask run scheduledthreadpoolexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java suppressed com google api gax rpc asynctaskexception asynchronous task failed at com google api gax rpc apiexceptions callandtranslateapiexception apiexceptions java at com google api gax rpc unarycallable call unarycallable java at com google cloud dialogflow sessionsclient detectintent sessionsclient java at com example dialogflow detectintentaudio detectintentaudio detectintentaudio java at com example dialogflow detectintentwithaudiotest testdetectintentaudio detectintentwithaudiotest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit internal runners statements runbefores evaluate runbefores java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit 
runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by io grpc statusruntimeexception unavailable credentials failed to obtain metadata at io grpc status asruntimeexception status java more caused by java io ioexception error getting access token for service account bad request post error invalid grant error description invalid jwt signature at com google auth serviceaccountcredentials refreshaccesstoken serviceaccountcredentials java at com google auth refresh java at com google auth getrequestmetadata java at com google auth serviceaccountcredentials getrequestmetadata serviceaccountcredentials java at com google auth credentials blockinggettocallback credentials java at com google auth credentials run credentials java more caused by com google api client http httpresponseexception bad request post error invalid grant error description invalid jwt signature at com google api client http httprequest execute httprequest java at com google auth serviceaccountcredentials refreshaccesstoken serviceaccountcredentials java more | 0 |
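The root cause buried in this stack trace (`invalid_grant` / `Invalid JWT Signature.`) is worth triaging rather than retrying: in practice it usually points to a rotated or revoked service-account key rather than a transient outage. The helper below is a rough sketch, not part of google-auth or Flaky Bot; the classification strings are the editor's assumptions about common causes:

```javascript
// Rough triage sketch (not part of google-auth or Flaky Bot): map common
// OAuth token-endpoint error payloads, like the invalid_grant in the stack
// trace above, to their usual causes so a failure report can be routed to
// credentials maintenance instead of being retried as a flaky test.
"use strict";

function triageOAuthError(payload) {
  const description = payload.error_description || "";
  if (payload.error === "invalid_grant" && /JWT Signature/i.test(description)) {
    // A signature the server cannot verify usually means the signing key
    // no longer exists on the server side.
    return "service-account key likely rotated or revoked; refresh credentials";
  }
  if (payload.error === "invalid_grant") {
    // Other invalid_grant cases are often timing-related.
    return "grant rejected; check system clock skew and token lifetime";
  }
  return "unclassified OAuth error: " + payload.error;
}

console.log(
  triageOAuthError({ error: "invalid_grant", error_description: "Invalid JWT Signature." })
);
```

A bot that distinguishes credential failures from genuinely flaky tests could label runs like this one as infrastructure issues instead of reopening a flaky-test issue.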
9,756 | 12,740,109,756 | IssuesEvent | 2020-06-26 01:22:46 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | `process.features`: deprecate it or add more properties? | discuss process | Stem from https://github.com/nodejs/node/pull/25291#issuecomment-450671389
> What I liked about process.config.variables.v8_enable_inspector is that it's also a user accessible way to detect if the inspector is enabled. Instead of moving to an internal solution it would be rad to expose this in a more user-friendly way.
Right now, there is an undocumented `process.features` for that purpose, but it has not been well-maintained. It's possible to detect certain features known at build time via `process.config.variables` but that object is mutable in the user land so it's not exactly reliable (whereas `process.features` have been immutable). Also it's not really a good design to have users rely on internal variable names.
Should we deprecate it in favor of a better feature detection API, or start maintaining it properly?
Related: https://github.com/nodejs/node/issues/22585 but that is more about availability of specific APIs like recursive `fs.mkdir`, whereas `process.features` have been more about higher-level feature sets.
| 1.0 | `process.features`: deprecate it or add more properties? - Stem from https://github.com/nodejs/node/pull/25291#issuecomment-450671389
> What I liked about process.config.variables.v8_enable_inspector is that it's also a user accessible way to detect if the inspector is enabled. Instead of moving to an internal solution it would be rad to expose this in a more user-friendly way.
Right now, there is an undocumented `process.features` for that purpose, but it has not been well-maintained. It's possible to detect certain features known at build time via `process.config.variables` but that object is mutable in the user land so it's not exactly reliable (whereas `process.features` have been immutable). Also it's not really a good design to have users rely on internal variable names.
Should we deprecate it in favor of a better feature detection API, or start maintaining it properly?
Related: https://github.com/nodejs/node/issues/22585 but that is more about availability of specific APIs like recursive `fs.mkdir`, whereas `process.features` have been more about higher-level feature sets.
| process | process features deprecate it or add more properties stem from what i liked about process config variables enable inspector is that it s also a user accessible way to detect if the inspector is enabled instead of moving to an internal solution it would be rad to expose this in a more user friendly way right now there is an undocumented process features for that purpose but it has not been well maintained it s possible to detect certain features known at build time via process config variables but that object is mutable in the user land so it s not exactly reliable whereas process features have been immutable also it s not really a good design to have users rely on internal variable names should we deprecate it in favor of a better feature detection api or start maintain it properly related but that is more about availability of specific apis like recursive fs mkdir whereas process features have been more about higher level feature sets | 1 |
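The trade-off this record describes, an immutable `process.features` map versus the mutable `process.config.variables` object, can be sketched as a small lookup helper. This is a hypothetical illustration, not Node core API: the function and the plain objects standing in for the real `process` properties are invented for the example.

```javascript
// Hypothetical feature-detection helper illustrating the trade-off from
// the issue: prefer an immutable feature map (like process.features) and
// only fall back to mutable build-time variables (like
// process.config.variables).
"use strict";

function detectFeature(features, configVariables, name) {
  // A frozen feature map cannot be tampered with by user code, so trust
  // it first whenever the flag is present at all.
  if (features && Object.prototype.hasOwnProperty.call(features, name)) {
    return Boolean(features[name]);
  }
  // Fall back to build-time config (e.g. v8_enable_inspector); that object
  // is mutable in user land, so the answer is only best-effort.
  if (configVariables && Object.prototype.hasOwnProperty.call(configVariables, name)) {
    return Boolean(configVariables[name]);
  }
  return false;
}

// Plain objects standing in for the real process properties.
const features = Object.freeze({ inspector: true });
const configVariables = { v8_enable_inspector: 1 };

console.log(detectFeature(features, configVariables, "inspector"));
console.log(detectFeature(features, configVariables, "v8_enable_inspector"));
```

The ordering encodes the issue's reliability argument: the immutable map wins whenever it has an answer, and the internal variable names are consulted only as a last resort.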
44,161 | 7,094,080,904 | IssuesEvent | 2018-01-12 23:41:56 | oasis-tcs/sarif-spec | https://api.github.com/repos/oasis-tcs/sarif-spec | opened | Decide on and implement uniform approach to normative keywords | discussion-ongoing impact-documentation-only | We adhere to RFC 2119 in that we distinguish three categories of normative statements:
- The statement is an absolute requirement.
- There might be a valid reason to ignore the statement.
- The statement is truly optional.
We also adhere to RFC 2119 in that we use the (capitalized) keywords SHALL and MUST to designate "absolute requirements." (RFC 2119 also allows "REQUIRED", but we don't currently use it.) We treat SHALL and MUST as absolutely synonymous, and we choose between them on a case by case basis, purely on the basis of which one reads most naturally in English.
Stefan and Ykaterina have both said that they find it easier to distinguish MUST from SHOULD than to distinguish SHALL from SHOULD, and Stefan has suggested using MUST exclusively. On the other hand, if we want to be ISO ready, we should not use MUST in this sense, since "must" means something else in an ISO spec. On the other other hand, Stefan cites the OData spec, which was written to OASIS standards, and then accepted without change by ISO.
We need to establish, and then revise the spec to conform to, a policy for keyword usage that meets the following requirements:
- Maximize understandability for both native English speakers and non-native speakers.
- Maximize consistency of expression throughout the spec (this improves understandability _except_ in cases where an arbitrarily imposed consistency leads to strained expression).
- Make the spec ISO ready (_if_ the TC agrees that this is a goal, which it has not yet explicitly done). | 1.0 | Decide on and implement uniform approach to normative keywords - We adhere to RFC 2119 in that we distinguish three categories of normative statements:
- The statement is an absolute requirement.
- There might be a valid reason to ignore the statement.
- The statement is truly optional.
We also adhere to RFC 2119 in that we use the (capitalized) keywords SHALL and MUST to designate "absolute requirements." (RFC 2119 also allows "REQUIRED", but we don't currently use it.) We treat SHALL and MUST as absolutely synonymous, and we choose between them on a case by case basis, purely on the basis of which one reads most naturally in English.
Stefan and Ykaterina have both said that they find it easier to distinguish MUST from SHOULD than to distinguish SHALL from SHOULD, and Stefan has suggested using MUST exclusively. On the other hand, if we want to be ISO ready, we should not use MUST in this sense, since "must" means something else in an ISO spec. On the other other hand, Stefan cites the OData spec, which was written to OASIS standards, and then accepted without change by ISO.
We need to establish, and then revise the spec to conform to, a policy for keyword usage that meets the following requirements:
- Maximize understandability for both native English speakers and non-native speakers.
- Maximize consistency of expression throughout the spec (this improves understandability _except_ in cases where an arbitrarily imposed consistency leads to strained expression).
- Make the spec ISO ready (_if_ the TC agrees that this is a goal, which it has not yet explicitly done). | non_process | decide on and implement uniform approach to normative keywords we adhere to rfc in that we distinguish three categories of normative statements the statement is an absolute requirement there might be a valid reason to ignore the statement the statement is truly optional we also adhere to rfc in that we use the capitalized keywords shall and must to designate absolute requirements rfc also allows required but we don t currently use it we treat shall and must as absolutely synonymous and we choose between them on a case by case basis purely on the basis of which one reads most naturally in english stefan and ykaterina have both said that they find it easier to distinguish must from should than to distinguish shall from should and stefan has suggested using must exclusively on the other hand if we want to be iso ready we should not use must in this sense since must means something else in an iso spec on the other other hand stefan cites the odata spec which was written to oasis standards and then accepted without change by iso we need to establish and then revise the spec to conform to a policy for keyword usage that meets the following requirements maximize understandability for both native english speakers and non native speakers maximize consistency of expression throughout the spec this improves understandability except in cases where an arbitrarily imposed consistency leads to strained expression make the spec iso ready if the tc agrees that this is a goal which it has not yet explicitly done | 0 |
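Whichever keyword policy the TC settles on is easier to hold if it can be enforced mechanically. As a sketch (hypothetical, not part of any SARIF tooling), a scanner can count capitalized RFC 2119 keywords so editors can audit, for example, remaining SHALL occurrences after adopting a MUST-only rule:

```javascript
// Sketch of an RFC 2119 keyword scanner (hypothetical, not part of any
// SARIF tooling): counts capitalized normative keywords in spec text so an
// editor can audit usage against the agreed policy.
"use strict";

// Longer phrases come first so "MUST NOT" is not double-counted as "MUST".
const KEYWORDS = [
  "MUST NOT", "SHALL NOT", "SHOULD NOT",
  "MUST", "SHALL", "SHOULD", "MAY", "REQUIRED", "OPTIONAL",
];

function countKeywords(text) {
  const counts = {};
  let rest = text;
  for (const kw of KEYWORDS) {
    const re = new RegExp("\\b" + kw.replace(" ", "\\s+") + "\\b", "g");
    const matches = rest.match(re) || [];
    counts[kw] = matches.length;
    // Blank out what was counted so shorter keywords cannot re-match it.
    rest = rest.replace(re, " ");
  }
  return counts;
}

const sample = "A producer MUST emit a run. A consumer SHOULD ignore it. It MUST NOT crash.";
console.log(countKeywords(sample));
```

Counts alone already answer questions like "do we still use SHALL anywhere?"; flagging lowercase "must"/"should" in normative sections would be a natural next step.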
30,735 | 25,023,153,629 | IssuesEvent | 2022-11-04 04:11:45 | UBCSailbot/sailbot_workspace | https://api.github.com/repos/UBCSailbot/sailbot_workspace | closed | Add C++ dependencies and configuration files | infrastructure | ### Purpose
### Changes
- Explore VS Code C++ integration
- Linting and formatting (keep as tasks, or use extensions and format on save?)
### Resources
- https://code.visualstudio.com/docs/cpp/cpp-ide#:~:text=Code%20formatting%23,in%20right%2Dclick%20context%20menu.
| 1.0 | Add C++ dependencies and configuration files - ### Purpose
### Changes
- Explore VS Code C++ integration
- Linting and formatting (keep as tasks, or use extensions and format on save?)
### Resources
- https://code.visualstudio.com/docs/cpp/cpp-ide#:~:text=Code%20formatting%23,in%20right%2Dclick%20context%20menu.
| non_process | add c dependencies and configuration files purpose changes explore vs code c integration linting and formatting keep as tasks or use extensions and format on save resources | 0 |
1,451 | 4,028,452,018 | IssuesEvent | 2016-05-18 06:25:07 | opentrials/opentrials | https://api.github.com/repos/opentrials/opentrials | opened | Trial appears to have an "empty" condition | bug Processors | By UI, it looks like this trial has a condition which is an empty string. This may indicate an issue with processing this data, so flagging:
http://explorer.opentrials.net/trials/48ecbd14-46d9-421f-bf32-0f2786170bae | 1.0 | Trial appears to have an "empty" condition - By UI, it looks like this trial has a condition which is an empty string. This may indicate an issue with processing this data, so flagging:
http://explorer.opentrials.net/trials/48ecbd14-46d9-421f-bf32-0f2786170bae | process | trial appears to have an empty condition by ui it looks like this trial has a condition which is an empty string this may indicate an issue with processing this data so flagging | 1 |
134,145 | 12,566,297,051 | IssuesEvent | 2020-06-08 10:57:34 | geosolutions-it/MapStore2 | https://api.github.com/repos/geosolutions-it/MapStore2 | opened | mkdocs-material issues with new version | Documentation bug investigation | ## Description
mkdocs-material v5 is installed by default in some cases, but the current MapStore theme supports only v4. When this happens, you cannot run `mkdocs serve`.
We should find out how to avoid this issue and/or update to v5.
We should also investigate whether this is a problem for all new installations of mkdocs-material (the set-ups that gave problems are all new installations) or whether it is related to the Python version or other causes.
Issue found in :
- Ubuntu 20.04 --> python 3.4
- MacOSX
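A common interim fix for the incompatibility described above is to pin the package below v5 in the docs requirements file (the exact lower bound here is an assumption and would need to match the theme's real requirements):

```
mkdocs-material>=4.0,<5
```

With such a pin, a fresh `pip install -r requirements.txt` stays on the 4.x line that the current theme supports.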
| 1.0 | mkdocs-material issues with new version - ## Description
mkdocs-material v5 is installed by default in some cases, but the current MapStore theme supports only v4. When this happens, you cannot run `mkdocs serve`.
We should find out how to avoid this issue and/or update to v5.
We should also investigate whether this is a problem for all new installations of mkdocs-material (the set-ups that gave problems are all new installations) or whether it is related to the Python version or other causes.
Issue found in :
- Ubuntu 20.04 --> python 3.4
- MacOSX
| non_process | mkdocs material issues with new version description mkdocs is installed by default in some cases but the current mapstore theme supports only when this happens you can not run mkdocs serve we should find out how to avoid this issues and or update to we should also investigate if this is a problem of for all the new installations of mkdocs material set ups that gave problems are all new installations or if this is a problem related to the version of python or other causes issue found in ubuntu python macosx | 0 |
12,540 | 14,973,492,603 | IssuesEvent | 2021-01-28 01:12:43 | 2i2c-org/team-compass | https://api.github.com/repos/2i2c-org/team-compass | opened | Tech Team Update: 2021-01-27 | team-process | - [ ] Clean up the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) for this update
- [ ] Ping the team members in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
- [ ] Wait 2-3 days
- [ ] Copy/paste into the `team-compass` repository
- [ ] Clean up the HackMD
- [ ] Link to new updates in `team-compass/` in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
| 1.0 | Tech Team Update: 2021-01-27 - - [ ] Clean up the [HackMD](https://hackmd.io/i2Siurp1TkmPYgn3ZgxFQw) for this update
- [ ] Ping the team members in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
- [ ] Wait 2-3 days
- [ ] Copy/paste into the `team-compass` repository
- [ ] Clean up the HackMD
- [ ] Link to new updates in `team-compass/` in [`#tech-updates`](https://2i2c.slack.com/archives/C01GLCC1VCN)
| process | tech team update clean up the for this update ping the team members in wait days copy paste into the team compass repository clean up the hackmd link to new updates in team compass in | 1 |
17,891 | 23,866,231,836 | IssuesEvent | 2022-09-07 11:16:37 | scikit-learn/scikit-learn | https://api.github.com/repos/scikit-learn/scikit-learn | closed | add to_cyclical function for cyclical data transformation [enhancement] [non-trivial feature] | New Feature Hard module:preprocessing | #### Description
create a to_cyclical function that transforms ordinal or continuous cyclical features into sine and cosine transforms that represent the cyclical nature of the feature.
For example, if hour of day is taken as a feature, 23 and 01 are 22 units apart numerically, but in reality they are only two units apart.

If the idea is accepted as an enhancement, I would like to work on it, implement it, and then create a pull request.
#### Steps/Code to Reproduce
below is a quick draft to the idea
```
import numpy as np
import pandas as pd

def to_cyclical(a_series, max=None):
    # Default the cycle length to the series' own maximum
    period = max if max is not None else a_series.max()
    sin_ = np.sin(2 * np.pi * a_series / period)
    cos_ = np.cos(2 * np.pi * a_series / period)
    return pd.concat([sin_, cos_], axis=1)
```
```
to_cyclical(series, max=None)
```
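To illustrate the intent of the draft above, here is a self-contained NumPy sketch; the helper name and signature are illustrative only, not a proposed scikit-learn API:

```python
import numpy as np

def hour_to_cyclical(values, period):
    # Map cyclical values (e.g. hours 0-23 with period 24) onto the unit circle
    angle = 2 * np.pi * np.asarray(values, dtype=float) / period
    return np.sin(angle), np.cos(angle)

# Hours 23 and 1 are close on the clock, and their encodings are close too:
sin_h, cos_h = hour_to_cyclical([23, 1, 12], period=24)
near = np.hypot(sin_h[0] - sin_h[1], cos_h[0] - cos_h[1])  # distance(23, 1)
far = np.hypot(sin_h[1] - sin_h[2], cos_h[1] - cos_h[2])   # distance(1, 12)
```

Here `near` is about 0.52 while `far` is close to 2, matching the intuition that 23:00 and 01:00 are only two hours apart.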
Please give feedback if you think this feature could be beneficial.
Further reading on the solution:
https://ianlondon.github.io/blog/encoding-cyclical-features-24hour-time/
https://stats.stackexchange.com/questions/126230/optimal-construction-of-day-feature-in-neural-networks
https://datascience.stackexchange.com/questions/5990/what-is-a-good-way-to-transform-cyclic-ordinal-attributes
https://github.com/pandas-dev/pandas/issues/29849 | 1.0 | add to_cyclical function for cyclical data transformation [enhancement] [non-trivial feature] - #### Description
create a to_cyclical function that transforms ordinal or continuous cyclical features into sine and cosine transforms that represent the cyclical nature of the feature.
For example, if hour of day is taken as a feature, 23 and 01 are 22 units apart numerically, but in reality they are only two units apart.

If the idea is accepted as an enhancement, I would like to work on it, implement it, and then create a pull request.
#### Steps/Code to Reproduce
below is a quick draft to the idea
```
import numpy as np
import pandas as pd

def to_cyclical(a_series, max=None):
    # Default the cycle length to the series' own maximum
    period = max if max is not None else a_series.max()
    sin_ = np.sin(2 * np.pi * a_series / period)
    cos_ = np.cos(2 * np.pi * a_series / period)
    return pd.concat([sin_, cos_], axis=1)
```
```
to_cyclical(series, max=None)
```
Please give feedback if you think this feature could be beneficial.
Further reading on the solution:
https://ianlondon.github.io/blog/encoding-cyclical-features-24hour-time/
https://stats.stackexchange.com/questions/126230/optimal-construction-of-day-feature-in-neural-networks
https://datascience.stackexchange.com/questions/5990/what-is-a-good-way-to-transform-cyclic-ordinal-attributes
https://github.com/pandas-dev/pandas/issues/29849 | process | add to cyclical function for cyclical data transformation description create a to cyclical function that transforms ordinal or continous cyclical features to sine transform and cosine transform representing the reality of the feature example if hours is to be taken as a feature and is units apart but in reality they are only two units apart if the idea is accepted as an enhancement i would like to work on it and implement it then create a pull request steps code to reproduce below is a quick draft to the idea def to cyclical a series max none if max sin np sin np pi a series max cos np cos np pi a series max else sin np sin np pi df col name a series max cos np cos np pi df col name a series max return pd concat axis to cyclical series max none please give feedbacks if you think this feature could be beneficial further readings for the solution | 1 |
18,052 | 24,065,159,440 | IssuesEvent | 2022-09-17 11:27:29 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | closed | Opening nodejs project in VS Code creates node zombie processes | info-needed terminal-process |
Type: <b>Performance Issue</b>
$ `yarn create next-app --typescript node-zombie`
$ `cd node-zombie`
$ `code .`
Wait a few hours; the problem usually happens at the end of the day.
When filing this report I have:
* disabled all extensions
* closed all vs code editors
* closed all vs code terminals
* no other vs code windows open
I have 2 node processes consuming 100% CPU each.
This is the information shown in Activity Monitor:
node (PID: 69053)
```
cwd
/Users/david/****/*****
txt
/Users/david/.nvm/versions/node/v16.17.0/bin/node
txt
/usr/lib/dyld
0
->(none)
1
->(none)
2
->(none)
3
count=0, state=0
4
->0xb636d26fa67b8c8a
5
->0x82c07915c62c67bd
6
->0xfee8ccbf80f3e48f
7
->0x3e4d1465c8ca9447
8
->0xae5fa0d313bf9e61
9
->0xa4cafdceed509d0c
10
count=0, state=0xa
11
->0xb56fb30abe09b11c
12
->0xf4dbef87cbd1da02
13
->0x4a301880de2ce9d9
14
->0xe87eeaac7382526a
15
count=1, state=0x8
16
->0x3d223fffbac5298a
17
->0xeda46a054229300f
18
->0x73207ade3d10149
19
->0xa481dac0a0105635
20
/dev/null
25
/Users/david/Library/Application Support/Code/logs/20220906T141523/ptyhost.log
27
/Applications/Visual Studio Code.app/Contents/Resources/app/node_modules.asar
28
/dev/ptmx
29
/dev/ptmx
```
node (PID: 76086)
```
cwd
/Users/david/****/*****
txt
/Users/david/.nvm/versions/node/v16.17.0/bin/node
txt
/usr/lib/dyld
0
->(none)
1
->(none)
2
->(none)
3
count=0, state=0
4
->0x2dd6271dc1c8e2e5
5
->0x20da1f97b861c1e4
6
->0x4e756397d8a2af6e
7
->0x207ed7135dee2a0b
8
->0xeb00a81eee1b9c9a
9
->0xc8a26f8fb6b0f035
10
count=0, state=0xa
11
->0x91bf2ebe086b3819
12
->0x579bf5bbbe7f0743
13
->0x24966e8e8cc77a22
14
->0x284d6c3c268b3f76
15
count=1, state=0x8
16
->0x9ede50a5588cbf65
17
->0xacb5843d541592ab
18
->0xd6107b962c568745
19
->0x2196f586c387be8d
20
/dev/null
25
/Users/david/Library/Application Support/Code/logs/20220906T141523/ptyhost.log
27
/Applications/Visual Studio Code.app/Contents/Resources/app/node_modules.asar
28
/dev/ptmx
29
/dev/ptmx
30
/dev/ptmx
31
/dev/ptmx
```
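As a side note, lingering node processes like the two above can be enumerated with a small stdlib script (an assumed diagnostic helper, not part of VS Code; it relies on the `ps` utility available on macOS and Linux):

```python
import subprocess

def list_node_processes():
    # Ask ps for pid, parent pid and command; a trailing '=' suppresses headers
    out = subprocess.run(
        ["ps", "-axo", "pid=,ppid=,command="],
        capture_output=True, text=True, check=True,
    ).stdout
    procs = []
    for line in out.splitlines():
        parts = line.strip().split(None, 2)
        if len(parts) == 3 and "node" in parts[2]:
            procs.append((int(parts[0]), int(parts[1]), parts[2]))
    return procs

for pid, ppid, cmd in list_node_processes():
    print(pid, ppid, cmd[:80])
```

Processes whose parent PID is 1 after the spawning shell has exited are good candidates for orphans.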
VS Code version: Code 1.71.0 (784b0177c56c607789f9638da7b6bf3230d47a8c, 2022-09-01T07:25:25.516Z)
OS version: Darwin arm64 21.6.0
Modes:
Sandboxed: No
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Pro (10 x 24)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>metal: disabled_off<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_renderer: enabled_on<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|4, 5, 4|
|Memory (System)|16.00GB (0.10GB free)|
|Process Argv|. --crash-reporter-id 34771417-56e9-4eff-a2c1-aed91c8d4b78|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
17 147 2298 code main
0 66 2301 gpu-process
0 16 2302 utility-network-service
2 131 2309 shared-process
0 16 2311 ptyHost
0 16 55347 fileWatcher
0 33 56636 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper (Renderer).app/Contents/MacOS/Code Helper (Renderer) --ms-enable-electron-run-as-node /Applications/Visual Studio Code.app/Contents/Resources/app/out/bootstrap-fork ms-vscode.pwa-chrome {"common.vscodemachineid":"604978e9bd04ca76ad8dbdbe2eef848fac1ba6d4bd5776d8af71ebd7d9ca2f3c","common.vscodesessionid":"f6d119b3-be2e-42dc-abc3-e1bc94d43dee1662466524829"} AIF-d9b70cd4-b9f9-4d70-929b-a071c400b217
0 0 78902 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
1 442 41096 window (web-order)
0 164 55300 extensionHost
0 16 55327 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/david/.vscode/extensions/redhat.vscode-yaml-1.10.1/dist/languageserver.js --node-ipc --clientProcessId=55300
0 16 55385 node /Users/david/.nvm/versions/node/v16.17.0/bin/yarn jest --runInBand --testLocationInResults --json --useStderr --outputFile /var/folders/ll/_n1f627d7nl61d0nwtff9gnc0000gn/T/jest_runner_web_order.json --watch --no-coverage --reporters default --reporters /Users/david/.vscode/extensions/orta.vscode-jest-4.6.0/out/reporter.js --colors
0 16 55386 /Users/david/.nvm/versions/node/v16.17.0/bin/node /Users/david/yabie/web-order/node_modules/.bin/jest --runInBand --testLocationInResults --json --useStderr --outputFile /var/folders/ll/_n1f627d7nl61d0nwtff9gnc0000gn/T/jest_runner_web_order.json --watch --no-coverage --reporters default --reporters /Users/david/.vscode/extensions/orta.vscode-jest-4.6.0/out/reporter.js --colors
0 16 55631 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/david/.vscode/extensions/dbaeumer.vscode-eslint-2.2.6/server/out/eslintServer.js --node-ipc --clientProcessId=55300
0 16 55638 /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/jre/17.0.4-macosx-aarch64.tar/bin/java -jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/server/sonarlint-ls.jar 50186 -analyzers /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarjava.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarjs.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarphp.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarpython.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarhtml.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarxml.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarcfamily.jar -extraAnalyzers /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarsecrets.jar
0 16 55639 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/david/.vscode/extensions/ms-python.vscode-pylance-2022.8.50/dist/server.bundle.js --cancellationReceive=file:8ae6e49a1772fd5ed6d51eadbbf46dc271eef0f537 --node-ipc --clientProcessId=55300
0 16 55791 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Applications/Visual Studio Code.app/Contents/Resources/app/extensions/json-language-features/server/dist/node/jsonServerMain --node-ipc --clientProcessId=55300
0 16 55893 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node --max-old-space-size=3072 /Users/david/.vscode/extensions/ms-vscode.vscode-typescript-next-4.9.20220905/node_modules/typescript/lib/tsserver.js --serverMode partialSemantic --useInferredProjectPerProjectRoot --disableAutomaticTypingAcquisition --cancellationPipeName /var/folders/ll/_n1f627d7nl61d0nwtff9gnc0000gn/T/vscode-typescript501/c3895724df077298d24f/tscancellation-bc3b49fc6e42299cdea5.tmp* --globalPlugins @vsintellicode/typescript-intellicode-plugin,ms-vsintellicode-typescript --pluginProbeLocations /Users/david/.vscode/extensions/visualstudioexptteam.vscodeintellicode-1.2.24,/Users/david/.vscode/extensions/visualstudioexptteam.vscodeintellicode-1.2.24 --locale en --noGetErrOnBackgroundUpdate --validateDefaultNpmLocation --useNodeIpc
0 33 55894 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node --max-old-space-size=3072 /Users/david/.vscode/extensions/ms-vscode.vscode-typescript-next-4.9.20220905/node_modules/typescript/lib/tsserver.js --useInferredProjectPerProjectRoot --enableTelemetry --cancellationPipeName /var/folders/ll/_n1f627d7nl61d0nwtff9gnc0000gn/T/vscode-typescript501/c3895724df077298d24f/tscancellation-1867f7d07d81c78c98c8.tmp* --globalPlugins @vsintellicode/typescript-intellicode-plugin,ms-vsintellicode-typescript --pluginProbeLocations /Users/david/.vscode/extensions/visualstudioexptteam.vscodeintellicode-1.2.24,/Users/david/.vscode/extensions/visualstudioexptteam.vscodeintellicode-1.2.24 --locale en --noGetErrOnBackgroundUpdate --validateDefaultNpmLocation --useNodeIpc
0 16 55906 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/david/.vscode/extensions/ms-vscode.vscode-typescript-next-4.9.20220905/node_modules/typescript/lib/typingsInstaller.js --globalTypingsCacheLocation /Users/david/Library/Caches/typescript/4.9 --enableTelemetry --typesMapLocation /Users/david/.vscode/extensions/ms-vscode.vscode-typescript-next-4.9.20220905/node_modules/typescript/lib/typesMap.json --validateDefaultNpmLocation
0 98 56635 electron_node ms-vscode.js
0 66 56647 gpu-process
0 33 56651 utility-network-service
0 16 56656 utility
0 16 57180 window (undefined)
0 49 57212 window (undefined)
0 49 57215 window (undefined)
0 16 57228 window (undefined)
0 33 57229 window (undefined)
0 66 57308 window (undefined)
0 82 57341 window (undefined)
0 82 57342 window (undefined)
0 16 57552 window (undefined)
0 49 58119 window (undefined)
0 49 68934 window (undefined)
0 33 68945 window (undefined)
0 82 68946 window (undefined)
0 16 68953 utility
0 33 68962 window (undefined)
0 49 69823 window (undefined)
0 82 76540 process-explorer
0 279 77512 window (undefined)
3 98 78875 issue-reporter
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (web-order)
| Folder (web-order): 2076 files
| File types: js(124) ts(46) tsx(41) json(19) pack(18) png(17) jsx(5)
| gitignore(3) sh(3) css(3)
| Conf files: package.json(2) launch.json(1) settings.json(1)
| dockerfile(1) tsconfig.json(1)
| Launch Configs: pwa-chrome node;
```
</details>
Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vsreu685:30147344
python383cf:30185419
vspor879:30202332
vspor708:30202333
vspor363:30204092
vslsvsres303:30308271
pythonvspyl392:30443607
vserr242:30382549
pythontb:30283811
vsjup518:30340749
pythonptprofiler:30281270
vshan820:30294714
vstes263:30335439
pythondataviewer:30285071
vscod805cf:30301675
binariesv615:30325510
bridge0708:30335490
bridge0723:30353136
cmake_vspar411cf:30557515
vsaa593cf:30376535
pythonvs932:30410667
cppdebug:30492333
vscaac:30438847
pylanb8912:30545647
vsclangdc:30486549
c4g48928:30535728
hb751961:30553087
dsvsc012:30540252
azure-dev_surveyone:30548225
i497e931:30553904
```
</details>
<!-- generated by issue reporter --> | 1.0 | Opening nodejs project in VS Code creates node zombie processes -
Type: <b>Performance Issue</b>
$ `yarn create next-app --typescript node-zombie`
$ `cd node-zombie`
$ `code .`
Wait a few hours; the problem usually happens at the end of the day.
When filing this report I have:
* disabled all extensions
* closed all vs code editors
* closed all vs code terminals
* no other vs code windows open
I have 2 node processes consuming 100% CPU each.
This is the information shown in Activity Monitor:
node (PID: 69053)
```
cwd
/Users/david/****/*****
txt
/Users/david/.nvm/versions/node/v16.17.0/bin/node
txt
/usr/lib/dyld
0
->(none)
1
->(none)
2
->(none)
3
count=0, state=0
4
->0xb636d26fa67b8c8a
5
->0x82c07915c62c67bd
6
->0xfee8ccbf80f3e48f
7
->0x3e4d1465c8ca9447
8
->0xae5fa0d313bf9e61
9
->0xa4cafdceed509d0c
10
count=0, state=0xa
11
->0xb56fb30abe09b11c
12
->0xf4dbef87cbd1da02
13
->0x4a301880de2ce9d9
14
->0xe87eeaac7382526a
15
count=1, state=0x8
16
->0x3d223fffbac5298a
17
->0xeda46a054229300f
18
->0x73207ade3d10149
19
->0xa481dac0a0105635
20
/dev/null
25
/Users/david/Library/Application Support/Code/logs/20220906T141523/ptyhost.log
27
/Applications/Visual Studio Code.app/Contents/Resources/app/node_modules.asar
28
/dev/ptmx
29
/dev/ptmx
```
node (PID: 76086)
```
cwd
/Users/david/****/*****
txt
/Users/david/.nvm/versions/node/v16.17.0/bin/node
txt
/usr/lib/dyld
0
->(none)
1
->(none)
2
->(none)
3
count=0, state=0
4
->0x2dd6271dc1c8e2e5
5
->0x20da1f97b861c1e4
6
->0x4e756397d8a2af6e
7
->0x207ed7135dee2a0b
8
->0xeb00a81eee1b9c9a
9
->0xc8a26f8fb6b0f035
10
count=0, state=0xa
11
->0x91bf2ebe086b3819
12
->0x579bf5bbbe7f0743
13
->0x24966e8e8cc77a22
14
->0x284d6c3c268b3f76
15
count=1, state=0x8
16
->0x9ede50a5588cbf65
17
->0xacb5843d541592ab
18
->0xd6107b962c568745
19
->0x2196f586c387be8d
20
/dev/null
25
/Users/david/Library/Application Support/Code/logs/20220906T141523/ptyhost.log
27
/Applications/Visual Studio Code.app/Contents/Resources/app/node_modules.asar
28
/dev/ptmx
29
/dev/ptmx
30
/dev/ptmx
31
/dev/ptmx
```
VS Code version: Code 1.71.0 (784b0177c56c607789f9638da7b6bf3230d47a8c, 2022-09-01T07:25:25.516Z)
OS version: Darwin arm64 21.6.0
Modes:
Sandboxed: No
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Pro (10 x 24)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>metal: disabled_off<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_renderer: enabled_on<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|4, 5, 4|
|Memory (System)|16.00GB (0.10GB free)|
|Process Argv|. --crash-reporter-id 34771417-56e9-4eff-a2c1-aed91c8d4b78|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
17 147 2298 code main
0 66 2301 gpu-process
0 16 2302 utility-network-service
2 131 2309 shared-process
0 16 2311 ptyHost
0 16 55347 fileWatcher
0 33 56636 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper (Renderer).app/Contents/MacOS/Code Helper (Renderer) --ms-enable-electron-run-as-node /Applications/Visual Studio Code.app/Contents/Resources/app/out/bootstrap-fork ms-vscode.pwa-chrome {"common.vscodemachineid":"604978e9bd04ca76ad8dbdbe2eef848fac1ba6d4bd5776d8af71ebd7d9ca2f3c","common.vscodesessionid":"f6d119b3-be2e-42dc-abc3-e1bc94d43dee1662466524829"} AIF-d9b70cd4-b9f9-4d70-929b-a071c400b217
0 0 78902 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
1 442 41096 window (web-order)
0 164 55300 extensionHost
0 16 55327 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/david/.vscode/extensions/redhat.vscode-yaml-1.10.1/dist/languageserver.js --node-ipc --clientProcessId=55300
0 16 55385 node /Users/david/.nvm/versions/node/v16.17.0/bin/yarn jest --runInBand --testLocationInResults --json --useStderr --outputFile /var/folders/ll/_n1f627d7nl61d0nwtff9gnc0000gn/T/jest_runner_web_order.json --watch --no-coverage --reporters default --reporters /Users/david/.vscode/extensions/orta.vscode-jest-4.6.0/out/reporter.js --colors
0 16 55386 /Users/david/.nvm/versions/node/v16.17.0/bin/node /Users/david/yabie/web-order/node_modules/.bin/jest --runInBand --testLocationInResults --json --useStderr --outputFile /var/folders/ll/_n1f627d7nl61d0nwtff9gnc0000gn/T/jest_runner_web_order.json --watch --no-coverage --reporters default --reporters /Users/david/.vscode/extensions/orta.vscode-jest-4.6.0/out/reporter.js --colors
0 16 55631 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/david/.vscode/extensions/dbaeumer.vscode-eslint-2.2.6/server/out/eslintServer.js --node-ipc --clientProcessId=55300
0 16 55638 /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/jre/17.0.4-macosx-aarch64.tar/bin/java -jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/server/sonarlint-ls.jar 50186 -analyzers /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarjava.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarjs.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarphp.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarpython.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarhtml.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarxml.jar /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarcfamily.jar -extraAnalyzers /Users/david/.vscode/extensions/sonarsource.sonarlint-vscode-3.9.0-darwin-arm64/analyzers/sonarsecrets.jar
0 16 55639 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/david/.vscode/extensions/ms-python.vscode-pylance-2022.8.50/dist/server.bundle.js --cancellationReceive=file:8ae6e49a1772fd5ed6d51eadbbf46dc271eef0f537 --node-ipc --clientProcessId=55300
0 16 55791 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Applications/Visual Studio Code.app/Contents/Resources/app/extensions/json-language-features/server/dist/node/jsonServerMain --node-ipc --clientProcessId=55300
0 16 55893 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node --max-old-space-size=3072 /Users/david/.vscode/extensions/ms-vscode.vscode-typescript-next-4.9.20220905/node_modules/typescript/lib/tsserver.js --serverMode partialSemantic --useInferredProjectPerProjectRoot --disableAutomaticTypingAcquisition --cancellationPipeName /var/folders/ll/_n1f627d7nl61d0nwtff9gnc0000gn/T/vscode-typescript501/c3895724df077298d24f/tscancellation-bc3b49fc6e42299cdea5.tmp* --globalPlugins @vsintellicode/typescript-intellicode-plugin,ms-vsintellicode-typescript --pluginProbeLocations /Users/david/.vscode/extensions/visualstudioexptteam.vscodeintellicode-1.2.24,/Users/david/.vscode/extensions/visualstudioexptteam.vscodeintellicode-1.2.24 --locale en --noGetErrOnBackgroundUpdate --validateDefaultNpmLocation --useNodeIpc
0 33 55894 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node --max-old-space-size=3072 /Users/david/.vscode/extensions/ms-vscode.vscode-typescript-next-4.9.20220905/node_modules/typescript/lib/tsserver.js --useInferredProjectPerProjectRoot --enableTelemetry --cancellationPipeName /var/folders/ll/_n1f627d7nl61d0nwtff9gnc0000gn/T/vscode-typescript501/c3895724df077298d24f/tscancellation-1867f7d07d81c78c98c8.tmp* --globalPlugins @vsintellicode/typescript-intellicode-plugin,ms-vsintellicode-typescript --pluginProbeLocations /Users/david/.vscode/extensions/visualstudioexptteam.vscodeintellicode-1.2.24,/Users/david/.vscode/extensions/visualstudioexptteam.vscodeintellicode-1.2.24 --locale en --noGetErrOnBackgroundUpdate --validateDefaultNpmLocation --useNodeIpc
0 16 55906 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/david/.vscode/extensions/ms-vscode.vscode-typescript-next-4.9.20220905/node_modules/typescript/lib/typingsInstaller.js --globalTypingsCacheLocation /Users/david/Library/Caches/typescript/4.9 --enableTelemetry --typesMapLocation /Users/david/.vscode/extensions/ms-vscode.vscode-typescript-next-4.9.20220905/node_modules/typescript/lib/typesMap.json --validateDefaultNpmLocation
0 98 56635 electron_node ms-vscode.js
0 66 56647 gpu-process
0 33 56651 utility-network-service
0 16 56656 utility
0 16 57180 window (undefined)
0 49 57212 window (undefined)
0 49 57215 window (undefined)
0 16 57228 window (undefined)
0 33 57229 window (undefined)
0 66 57308 window (undefined)
0 82 57341 window (undefined)
0 82 57342 window (undefined)
0 16 57552 window (undefined)
0 49 58119 window (undefined)
0 49 68934 window (undefined)
0 33 68945 window (undefined)
0 82 68946 window (undefined)
0 16 68953 utility
0 33 68962 window (undefined)
0 49 69823 window (undefined)
0 82 76540 process-explorer
0 279 77512 window (undefined)
3 98 78875 issue-reporter
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (web-order)
| Folder (web-order): 2076 files
| File types: js(124) ts(46) tsx(41) json(19) pack(18) png(17) jsx(5)
| gitignore(3) sh(3) css(3)
| Conf files: package.json(2) launch.json(1) settings.json(1)
| dockerfile(1) tsconfig.json(1)
| Launch Configs: pwa-chrome node;
```
</details>
Extensions: none<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vsreu685:30147344
python383cf:30185419
vspor879:30202332
vspor708:30202333
vspor363:30204092
vslsvsres303:30308271
pythonvspyl392:30443607
vserr242:30382549
pythontb:30283811
vsjup518:30340749
pythonptprofiler:30281270
vshan820:30294714
vstes263:30335439
pythondataviewer:30285071
vscod805cf:30301675
binariesv615:30325510
bridge0708:30335490
bridge0723:30353136
cmake_vspar411cf:30557515
vsaa593cf:30376535
pythonvs932:30410667
cppdebug:30492333
vscaac:30438847
pylanb8912:30545647
vsclangdc:30486549
c4g48928:30535728
hb751961:30553087
dsvsc012:30540252
azure-dev_surveyone:30548225
i497e931:30553904
```
</details>
<!-- generated by issue reporter --> | process | opening nodejs project in vs code creates node zombie processes type performance issue yarn create next app typescript node zombie cd node zombie code wait a few hours usually the problem happens in the end of the day when filing this report i have disabled all extensions closed all vs code editors closed all vs code terminals no other vs code windows open i have node processes consuming cpu each this is the information shown in activity monitor node pid cwd users david txt users david nvm versions node bin node txt usr lib dyld none none none count state count state count state dev null users david library application support code logs ptyhost log applications visual studio code app contents resources app node modules asar dev ptmx dev ptmx node pid cwd users david txt users david nvm versions node bin node txt usr lib dyld none none none count state count state count state dev null users david library application support code logs ptyhost log applications visual studio code app contents resources app node modules asar dev ptmx dev ptmx dev ptmx dev ptmx vs code version code os version darwin modes sandboxed no system info item value cpus apple pro x gpu status canvas enabled canvas oop rasterization disabled off direct rendering display compositor disabled off ok gpu compositing enabled metal disabled off multiple raster threads enabled on opengl enabled on rasterization enabled raw draw disabled off ok skia renderer enabled on video decode enabled video encode enabled vulkan disabled off webgl enabled enabled webgpu disabled off load avg memory system free process argv crash reporter id screen reader no vm process info cpu mem mb pid process code main gpu process utility network service shared process ptyhost filewatcher applications visual studio code app contents frameworks code helper renderer app contents macos code helper renderer ms enable electron run as node applications visual studio code app contents 
resources app out bootstrap fork ms vscode pwa chrome common vscodemachineid common vscodesessionid aif bin ps ax o pid ppid pcpu pmem command window web order extensionhost applications visual studio code app contents frameworks code helper app contents macos code helper ms enable electron run as node users david vscode extensions redhat vscode yaml dist languageserver js node ipc clientprocessid node users david nvm versions node bin yarn jest runinband testlocationinresults json usestderr outputfile var folders ll t jest runner web order json watch no coverage reporters default reporters users david vscode extensions orta vscode jest out reporter js colors users david nvm versions node bin node users david yabie web order node modules bin jest runinband testlocationinresults json usestderr outputfile var folders ll t jest runner web order json watch no coverage reporters default reporters users david vscode extensions orta vscode jest out reporter js colors applications visual studio code app contents frameworks code helper app contents macos code helper ms enable electron run as node users david vscode extensions dbaeumer vscode eslint server out eslintserver js node ipc clientprocessid users david vscode extensions sonarsource sonarlint vscode darwin jre macosx tar bin java jar users david vscode extensions sonarsource sonarlint vscode darwin server sonarlint ls jar analyzers users david vscode extensions sonarsource sonarlint vscode darwin analyzers sonarjava jar users david vscode extensions sonarsource sonarlint vscode darwin analyzers sonarjs jar users david vscode extensions sonarsource sonarlint vscode darwin analyzers sonarphp jar users david vscode extensions sonarsource sonarlint vscode darwin analyzers sonarpython jar users david vscode extensions sonarsource sonarlint vscode darwin analyzers sonarhtml jar users david vscode extensions sonarsource sonarlint vscode darwin analyzers sonarxml jar users david vscode extensions sonarsource sonarlint 
vscode darwin analyzers sonarcfamily jar extraanalyzers users david vscode extensions sonarsource sonarlint vscode darwin analyzers sonarsecrets jar applications visual studio code app contents frameworks code helper app contents macos code helper ms enable electron run as node users david vscode extensions ms python vscode pylance dist server bundle js cancellationreceive file node ipc clientprocessid applications visual studio code app contents frameworks code helper app contents macos code helper ms enable electron run as node applications visual studio code app contents resources app extensions json language features server dist node jsonservermain node ipc clientprocessid applications visual studio code app contents frameworks code helper app contents macos code helper ms enable electron run as node max old space size users david vscode extensions ms vscode vscode typescript next node modules typescript lib tsserver js servermode partialsemantic useinferredprojectperprojectroot disableautomatictypingacquisition cancellationpipename var folders ll t vscode tscancellation tmp globalplugins vsintellicode typescript intellicode plugin ms vsintellicode typescript pluginprobelocations users david vscode extensions visualstudioexptteam vscodeintellicode users david vscode extensions visualstudioexptteam vscodeintellicode locale en nogeterronbackgroundupdate validatedefaultnpmlocation usenodeipc applications visual studio code app contents frameworks code helper app contents macos code helper ms enable electron run as node max old space size users david vscode extensions ms vscode vscode typescript next node modules typescript lib tsserver js useinferredprojectperprojectroot enabletelemetry cancellationpipename var folders ll t vscode tscancellation tmp globalplugins vsintellicode typescript intellicode plugin ms vsintellicode typescript pluginprobelocations users david vscode extensions visualstudioexptteam vscodeintellicode users david vscode extensions 
visualstudioexptteam vscodeintellicode locale en nogeterronbackgroundupdate validatedefaultnpmlocation usenodeipc applications visual studio code app contents frameworks code helper app contents macos code helper ms enable electron run as node users david vscode extensions ms vscode vscode typescript next node modules typescript lib typingsinstaller js globaltypingscachelocation users david library caches typescript enabletelemetry typesmaplocation users david vscode extensions ms vscode vscode typescript next node modules typescript lib typesmap json validatedefaultnpmlocation electron node ms vscode js gpu process utility network service utility window undefined window undefined window undefined window undefined window undefined window undefined window undefined window undefined window undefined window undefined window undefined window undefined window undefined utility window undefined window undefined process explorer window undefined issue reporter workspace info window web order folder web order files file types js ts tsx json pack png jsx gitignore sh css conf files package json launch json settings json dockerfile tsconfig json launch configs pwa chrome node extensions none a b experiments pythontb pythonptprofiler pythondataviewer cmake cppdebug vscaac vsclangdc azure dev surveyone | 1 |
83,425 | 10,349,860,761 | IssuesEvent | 2019-09-05 00:14:16 | OfficeDev/office-js | https://api.github.com/repos/OfficeDev/office-js | closed | Certificate issues with Edge | Area: DevX Area: Edge WebView Needs: author feedback Resolution: by design | <!--- Provide a general summary of the issue in the Title above -->
I set up an Office Add-in using a manifest created by yo office (with the rest through the Vue setup and manual edits described in the [quick start](https://review.docs.microsoft.com/en-us/office/dev/add-ins/quickstarts/excel-quickstart-vue?branch=master)). I opened a browser pointing to the add-in, running on localhost:3000, and trusted the certificate. When attempting to sideload the add-in, the service could not be reached from an Edge browser or the win32 client. Sideloading through Chrome (in Excel on the web) worked fine.
## Expected Behavior
<!--- Tell us what you expected to happen -->
## Current Behavior

## Steps to Reproduce, or Live Example
Follow the steps outlined in the updated Vue quick start: https://review.docs.microsoft.com/en-us/office/dev/add-ins/quickstarts/excel-quickstart-vue?branch=master
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Platform [PC desktop, Mac, iOS, Office Online]: PC desktop/ Office Online
* Host [Excel, Word, PowerPoint, etc.]: Excel
* Office version number: 1910
* Operating System: Windows 10
* Browser (if using Office Online): Edge
## Useful logs
<!--- Please include any of the following logs that may help us debugging your issue -->
- [ ] Console errors
- [ ] Screenshots
- [ ] Test file (if only happens on a particular file)
| 1.0 | Certificate issues with Edge - <!--- Provide a general summary of the issue in the Title above -->
I set up an Office Add-in using a manifest created by yo office (with the rest through the Vue setup and manual edits described in the [quick start](https://review.docs.microsoft.com/en-us/office/dev/add-ins/quickstarts/excel-quickstart-vue?branch=master)). I opened a browser pointing to the add-in, running on localhost:3000, and trusted the certificate. When attempting to sideload the add-in, the service could not be reached from an Edge browser or the win32 client. Sideloading through Chrome (in Excel on the web) worked fine.
## Expected Behavior
<!--- Tell us what you expected to happen -->
## Current Behavior

## Steps to Reproduce, or Live Example
Follow the steps outlined in the updated Vue quick start: https://review.docs.microsoft.com/en-us/office/dev/add-ins/quickstarts/excel-quickstart-vue?branch=master
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Platform [PC desktop, Mac, iOS, Office Online]: PC desktop/ Office Online
* Host [Excel, Word, PowerPoint, etc.]: Excel
* Office version number: 1910
* Operating System: Windows 10
* Browser (if using Office Online): Edge
## Useful logs
<!--- Please include any of the following logs that may help us debugging your issue -->
- [ ] Console errors
- [ ] Screenshots
- [ ] Test file (if only happens on a particular file)
| non_process | certificate issues with edge i setup an office add in using a manifest created by yo office with the rest through the vue setup and manual edits described in the i opened a browser pointing to the add in running on localhost and trusted the certificate when attempting to sideload the add in the service could not be reached from an edge browser or the client sideloading through chrome in excel on the web worked fine expected behavior current behavior steps to reproduce or live example follow the steps outlined in the updated vue quick start context your environment platform pc desktop office online host excel office version number operating system windows browser if using office online edge useful logs console errors screenshots test file if only happens on a particular file | 0 |
21,250 | 28,374,929,479 | IssuesEvent | 2023-04-12 20:01:43 | pyOpenSci/software-peer-review | https://api.github.com/repos/pyOpenSci/software-peer-review | closed | Suggest in review template to refer to test requirements | review-process-update | While reviewing Crowsetta (https://github.com/pyOpenSci/software-submission/issues/68), I thought of the following suggestion:
> I wonder if PyOpenSci’s review template could suggest that reviewers make sure to look in the package's documentation to figure out how the package maintainers request tests be run, e.g., using a development install of the package.
...because a handful of tests failed at first when I attempted to use a conda-installed older version of Crowsetta to run the version of tests that were in Crowsetta's development branch. Had I looked at the package documentation first, I would've found the recommendation to run tests within a development install. | 1.0 | Suggest in review template to refer to test requirements - While reviewing Crowsetta (https://github.com/pyOpenSci/software-submission/issues/68), I thought of the following suggestion:
> I wonder if PyOpenSci’s review template could suggest that reviewers make sure to look in the package's documentation to figure out how the package maintainers request tests be run, e.g., using a development install of the package.
...because a handful of tests failed at first when I attempted to use a conda-installed older version of Crowsetta to run the version of tests that were in Crowsetta's development branch. Had I looked at the package documentation first, I would've found the recommendation to run tests within a development install. | process | suggest in review template to refer to test requirements while reviewing crowsetta i thought of the following suggestion i wonder if pyopensci’s review template could suggest that reviewers make sure to look in the package s documentation to figure out how the package maintainers request tests be run e g using a development install of the package because a handful of tests failed at first when i attempted to use a conda installed older version of crowsetta to run the version of tests that were in crowsetta s development branch had i looked at the package documentation first i would ve found the recommendation to run tests within a development install | 1 |
93,578 | 19,273,946,595 | IssuesEvent | 2021-12-10 09:37:40 | ballerina-platform/ballerina-lang | https://api.github.com/repos/ballerina-platform/ballerina-lang | closed | Missing create variable code action for error expression and byte-array-literal | Type/Bug Team/LanguageServer Team/CompilerFE Area/Parser Area/CodeAction | **Description:**
$Title
**Steps to reproduce:**
<img width="659" alt="Screenshot 2021-12-03 at 18 20 32" src="https://user-images.githubusercontent.com/46120162/144788965-d87c0005-d245-4292-b0e0-d3d7468fac7b.png">
<img width="696" alt="Screenshot 2021-12-03 at 18 21 28" src="https://user-images.githubusercontent.com/46120162/144798591-d025e840-8f54-49df-b075-1109646e4167.png">
| 1.0 | Missing create variable code action for error expression and byte-array-literal - **Description:**
$Title
**Steps to reproduce:**
<img width="659" alt="Screenshot 2021-12-03 at 18 20 32" src="https://user-images.githubusercontent.com/46120162/144788965-d87c0005-d245-4292-b0e0-d3d7468fac7b.png">
<img width="696" alt="Screenshot 2021-12-03 at 18 21 28" src="https://user-images.githubusercontent.com/46120162/144798591-d025e840-8f54-49df-b075-1109646e4167.png">
| non_process | missing create variable code action for error expression and byte array literal description title steps to reproduce img width alt screenshot at src img width alt screenshot at src | 0 |
169,253 | 14,215,274,690 | IssuesEvent | 2020-11-17 07:04:03 | DCC-EX/CommandStation-EX | https://api.github.com/repos/DCC-EX/CommandStation-EX | closed | Clarify WiFi config documentation | Documentation | The current WiFi documentation does not tell the user how to get the password for the AP, nor how to customize the password. Overall, the documentation is very wordy and needs to be cleaned up. It should probably be split into two files: a basic setup and an advanced setup file. | 1.0 | Clarify WiFi config documentation - The current WiFi documentation does not tell the user how to get the password for the AP, nor how to customize the password. Overall, the documentation is very wordy and needs to be cleaned up. It should probably be split into two files: a basic setup and an advanced setup file. | non_process | clarify wifi config documentation the current wifi documentation does not tell the user how to get the password for the ap nor how to customize the password overall the documentation is very wordy and needs to be cleaned up it should probably be split into two files a basic setup and an advanced setup file | 0 |
49,497 | 13,453,559,834 | IssuesEvent | 2020-09-09 01:18:48 | fufunoyu/shop | https://api.github.com/repos/fufunoyu/shop | opened | CVE-2020-14060 (High) detected in jackson-databind-2.9.9.jar | security vulnerability | ## CVE-2020-14060 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /shop/target/shop/WEB-INF/lib/jackson-databind-2.9.9.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/shop/commit/96853bac1b04b3d6e7138b4ba3bf6b400a2a14c5">96853bac1b04b3d6e7138b4ba3bf6b400a2a14c5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill).
<p>Publish Date: 2020-06-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060>CVE-2020-14060</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060</a></p>
<p>Release Date: 2020-06-14</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p>
</p>
</details>
<p></p>
| True | CVE-2020-14060 (High) detected in jackson-databind-2.9.9.jar - ## CVE-2020-14060 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /shop/target/shop/WEB-INF/lib/jackson-databind-2.9.9.jar,canner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.9/jackson-databind-2.9.9.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.9.9.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/shop/commit/96853bac1b04b3d6e7138b4ba3bf6b400a2a14c5">96853bac1b04b3d6e7138b4ba3bf6b400a2a14c5</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to oadd.org.apache.xalan.lib.sql.JNDIConnectionPool (aka apache/drill).
<p>Publish Date: 2020-06-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14060>CVE-2020-14060</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14060</a></p>
<p>Release Date: 2020-06-14</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p>
</p>
</details>
<p></p>
| non_process | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library shop target shop web inf lib jackson databind jar canner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to oadd org apache xalan lib sql jndiconnectionpool aka apache drill publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind | 0 |
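The suggested fix in the record above gives the coordinates and fixed version. In a Maven project the upgrade is a one-line version bump in the POM; this is a sketch, with 2.10.0 taken from the advisory's fix resolution (later releases should also contain the fix):

```xml
<!-- Upgrade the vulnerable jackson-databind 2.9.9 dependency to the
     fixed version named in the advisory's fix resolution (2.10.0+). -->
<dependency>
  <groupId>com.fasterxml.jackson.core</groupId>
  <artifactId>jackson-databind</artifactId>
  <version>2.10.0</version>
</dependency>
```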
19,243 | 25,403,746,400 | IssuesEvent | 2022-11-22 13:57:40 | zotero/zotero | https://api.github.com/repos/zotero/zotero | closed | Quick Format: Avoid page detection on paste | Word Processor Integration | https://forums.zotero.org/discussion/comment/419017/#Comment_419017
Pasting in "On the Blurred Boundaries of Freedom: Liberated Africans in Cuba, 1817–1870"
Possible to detect paste separately and avoid the page parsing? | 1.0 | Quick Format: Avoid page detection on paste - https://forums.zotero.org/discussion/comment/419017/#Comment_419017
Pasting in "On the Blurred Boundaries of Freedom: Liberated Africans in Cuba, 1817–1870"
Possible to detect paste separately and avoid the page parsing? | process | quick format avoid page detection on paste pasting in on the blurred boundaries of freedom liberated africans in cuba – possible to detect paste separately and avoid the page parsing | 1 |
14,590 | 17,703,528,962 | IssuesEvent | 2021-08-25 03:13:00 | tdwg/dwc | https://api.github.com/repos/tdwg/dwc | closed | New term - recordedByID | Term - add Class - Occurrence normative Process - complete | ## New Term
Submitter: Tim Robertson
Justification: There is no way to identify individuals by e.g. ORCIDs
Proponents: GBIF (already in production), CETAF, DiSCCO
Definition: A list (concatenated and separated) of the globally unique identifiers for the person, people, groups, or organizations responsible for recording the original Occurrence.
Comment: Recommended best practice is to provide a single identifier that disambiguates the details of the identifying agent. If a list is used, it is recommended to separate the values in the list with space vertical bar space ( | ). The order of the identifiers on any list for this term can not be guaranteed to convey any semantics.
Examples: `https://orcid.org/0000-0002-1825-0097` (for an individual); `https://orcid.org/0000-0002-1825-0097 | https://orcid.org/0000-0002-1825-0098` (for a list of people).
Refines: None
Replaces: None
ABCD 2.06: not in ABCD
| 1.0 | New term - recordedByID - ## New Term
Submitter: Tim Robertson
Justification: There is no way to identify individuals by e.g. ORCIDs
Proponents: GBIF (already in production), CETAF, DiSCCO
Definition: A list (concatenated and separated) of the globally unique identifiers for the person, people, groups, or organizations responsible for recording the original Occurrence.
Comment: Recommended best practice is to provide a single identifier that disambiguates the details of the identifying agent. If a list is used, it is recommended to separate the values in the list with space vertical bar space ( | ). The order of the identifiers on any list for this term can not be guaranteed to convey any semantics.
Examples: `https://orcid.org/0000-0002-1825-0097` (for an individual); `https://orcid.org/0000-0002-1825-0097 | https://orcid.org/0000-0002-1825-0098` (for a list of people).
Refines: None
Replaces: None
ABCD 2.06: not in ABCD
| process | new term recordedbyid new term submitter tim robertson justification there is no way to identify individuals by e g orcids proponents gbif already in production cetaf discco definition a list concatenated and separated of the globally unique identifiers for the person people groups or organizations responsible for recording the original occurrence comment recommended best practice is to provide a single identifier that disambiguates the details of the identifying agent if a list is used it is recommended to separate the values in the list with space vertical bar space the order of the identifiers on any list for this term can not be guaranteed to convey any semantics examples for an individual for a list of people refines none replaces none abcd not in abcd | 1 |
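The "concatenated and separated" convention from the definition and comment above can be sketched in a few lines of Python. The helper names `join_ids` and `split_ids` are hypothetical, not part of any Darwin Core library:

```python
SEPARATOR = " | "  # space vertical bar space, per the recommended best practice

def join_ids(ids):
    """Serialize a list of agent identifiers into a single recordedByID value."""
    return SEPARATOR.join(ids)

def split_ids(value):
    """Split a recordedByID value back into its component identifiers."""
    return [part for part in value.split(SEPARATOR) if part]

# The example from the term proposal: two ORCIDs for a list of people.
ids = ["https://orcid.org/0000-0002-1825-0097",
       "https://orcid.org/0000-0002-1825-0098"]
value = join_ids(ids)
print(value)  # https://orcid.org/0000-0002-1825-0097 | https://orcid.org/0000-0002-1825-0098
print(split_ids(value) == ids)  # True
```

Note that, as the comment warns, the order of identifiers carries no semantics; the round trip here only preserves it incidentally.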
6,809 | 9,955,397,711 | IssuesEvent | 2019-07-05 10:55:38 | log2timeline/plaso | https://api.github.com/repos/log2timeline/plaso | closed | Improve preprocessor | clean up issue close after review enhancement preprocessing | - [x] ~~Get rid of [old preprocess](https://codereview.appspot.com/255380043/)~~
- [x] ~~Finish https://github.com/log2timeline/plaso/issues/217~~
- [x] ~~[Make attributes containers register with a manager e.g. for use in the JSON serializer](https://codereview.appspot.com/303900043/)~~
- [x] ~~[Remove pstorage output module](https://codereview.appspot.com/300500043/)~~
- [x] ~~[Move storage file out of output mediator](https://codereview.appspot.com/302850043/)~~
- [x] ~~split processing run information from preprocessing object e.g. first store which tool was run, then the data.~~
- ~~split in start and end of run to determine if run was interrupted~~
- [x] ~~[migrate to attribute container](https://codereview.appspot.com/304900043/)~~
- ~~move collection information out~~
- ~~move parser counters out~~
- [x] ~~remove preprocess object~~
- [x] ~~[Refactor preprocess plugins to directly use knowledge base](https://codereview.appspot.com/306070043/)~~
- [x] ~~[Remove GetPathAttributes from knowledge base](https://codereview.appspot.com/306940043/)~~
- [x] ~~[knowledge base clean up](https://codereview.appspot.com/305120043/)~~
- ~~knowledge base change SetX to AddX~~
- ~~knowledge base remove need for store_number~~
- ~~knowledge base add tests~~
- ~~use SetTimezone and SetCodepage in preprocess plugins~~
- ~~deprecate use of time_zone_str~~
- [x] knowledge base remove GetStoredHostname - https://github.com/log2timeline/plaso/pull/2674
| 1.0 | Improve preprocessor - - [x] ~~Get rid of [old preprocess](https://codereview.appspot.com/255380043/)~~
- [x] ~~Finish https://github.com/log2timeline/plaso/issues/217~~
- [x] ~~[Make attributes containers register with a manager e.g. for use in the JSON serializer](https://codereview.appspot.com/303900043/)~~
- [x] ~~[Remove pstorage output module](https://codereview.appspot.com/300500043/)~~
- [x] ~~[Move storage file out of output mediator](https://codereview.appspot.com/302850043/)~~
- [x] ~~split processing run information from preprocessing object e.g. first store which tool was run, then the data.~~
- ~~split in start and end of run to determine if run was interrupted~~
- [x] ~~[migrate to attribute container](https://codereview.appspot.com/304900043/)~~
- ~~move collection information out~~
- ~~move parser counters out~~
- [x] ~~remove preprocess object~~
- [x] ~~[Refactor preprocess plugins to directly use knowledge base](https://codereview.appspot.com/306070043/)~~
- [x] ~~[Remove GetPathAttributes from knowledge base](https://codereview.appspot.com/306940043/)~~
- [x] ~~[knowledge base clean up](https://codereview.appspot.com/305120043/)~~
- ~~knowledge base change SetX to AddX~~
- ~~knowledge base remove need for store_number~~
- ~~knowledge base add tests~~
- ~~use SetTimezone and SetCodepage in preprocess plugins~~
- ~~deprecate use of time_zone_str~~
- [x] knowledge base remove GetStoredHostname - https://github.com/log2timeline/plaso/pull/2674
| process | improve preprocessor get rid of finish split processing run information from preprocessing object e g first store which tool was run then the data split in start and end of run to determine if run was interrupted move collection information out move parser counters out remove preprocess object knowledge base change setx to addx knowledge base remove need for store number knowledge base add tests use settimezone and setcodepage in preprocess plugins deprecate use of time zone str knowledge base remove getstoredhostname | 1 |
464,101 | 13,306,326,487 | IssuesEvent | 2020-08-25 20:03:01 | unitystation/unitystation | https://api.github.com/repos/unitystation/unitystation | closed | Client hangs when attempting to join while server is loading scene | Bug High Priority | ## Description
On staging (sometimes while connecting to a local server, too) the client sometimes hangs when attempting to load the scene.
### Steps to Reproduce
1. Join staging.
2. Restart the round.
3. While the client is loading the scene, it may hang. If not, repeat steps 1. to 3.
| 1.0 | Client hangs when attempting to join while server is loading scene - ## Description
On staging (sometimes while connecting to a local server, too) the client sometimes hangs when attempting to load the scene.
### Steps to Reproduce
1. Join staging.
2. Restart the round.
3. While the client is loading the scene, it may hang. If not, repeat steps 1. to 3.
| non_process | client hangs when attempting to join while server is loading scene description on staging sometimes while connecting to a local server too client sometimes hangs when attempting to load scene steps to reproduce join staging restart the round while the client is loading the scene it may hang if not repeat steps to | 0 |
702,618 | 24,128,563,028 | IssuesEvent | 2022-09-21 04:32:50 | roq-trading/roq-issues | https://api.github.com/repos/roq-trading/roq-issues | closed | [roq-server] OrderAck didn't populate all fields for CreateOrder rejects | bug medium priority support | This was due to a generic exception handler used when no internal OMS order had been created.
The same handler was used for all order actions and only the union of fields was being populated.
In particular, exchange and symbol weren't populated for `CreateOrder` because these are not present on the `ModifyOrder` and `CancelOrder` requests.
There are now specific handlers for each type of request and all possible fields are now being populated for each type. | 1.0 | [roq-server] OrderAck didn't populate all fields for CreateOrder rejects - This was due to a generic exception handler used when no internal OMS order had been created.
The same handler was used for all order actions and only the union of fields was being populated.
In particular, exchange and symbol weren't populated for `CreateOrder` because these are not present on the `ModifyOrder` and `CancelOrder` requests.
There are now specific handlers for each type of request and all possible fields are now being populated for each type. | non_process | orderack didn t populate all fields for createorder rejects this was due to a generic exception handler used when no internal oms order had been created the same handler was used for all order actions and only the union of fields was being populated in particular exchange and symbol wasn t populated for createorder because these are not present on the modifyorder and cancelorder requests there are now specific handlers for each type of request and all possible fields are now being populated for each type | 0 |
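The fix described in the record above (one dedicated handler per request type instead of a shared generic one, so every field a given request carries reaches the ack) can be illustrated with a small Python sketch; the field and request names here are hypothetical and are not roq's actual API:

```python
from dataclasses import dataclass

@dataclass
class OrderAck:
    # Hypothetical subset of ack fields, for illustration only.
    order_id: int = 0
    exchange: str = ""
    symbol: str = ""

# Old approach: one handler for all order actions, populating only the
# fields shared by every request type, so a create's exchange and symbol
# (absent from modify/cancel requests) were silently dropped.
def ack_generic(request):
    return OrderAck(order_id=request["order_id"])

# The fix: a dedicated handler per request type, copying every field
# that the particular request actually carries.
def ack_create(request):
    return OrderAck(order_id=request["order_id"],
                    exchange=request["exchange"],
                    symbol=request["symbol"])

def ack_cancel(request):
    return OrderAck(order_id=request["order_id"])

create = {"order_id": 1, "exchange": "deribit", "symbol": "BTC-PERPETUAL"}
print(ack_generic(create))  # exchange/symbol left empty: the reported bug
print(ack_create(create))   # all fields populated
```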
58,862 | 14,352,151,251 | IssuesEvent | 2020-11-30 03:29:41 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | closed | Vulnerability roundup 84: http-client-0.6.4: 2 advisories | 1.severity: security | [search](https://search.nix.gsc.io/?q=http-client&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=http-client+in%3Apath&type=Code)
* [ ] [CVE-2016-6287](https://nvd.nist.gov/vuln/detail/CVE-2016-6287) CVSSv3=7.5 (nixos-20.03)
* [ ] [CVE-2020-11021](https://nvd.nist.gov/vuln/detail/CVE-2020-11021) CVSSv3=7.5 (nixos-19.09, nixos-20.03)
Scanned versions: nixos-19.09: 31dcaa5eb67; nixos-20.03: 82b5f87fcc7. May contain false positives.
| True | Vulnerability roundup 84: http-client-0.6.4: 2 advisories - [search](https://search.nix.gsc.io/?q=http-client&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=http-client+in%3Apath&type=Code)
* [ ] [CVE-2016-6287](https://nvd.nist.gov/vuln/detail/CVE-2016-6287) CVSSv3=7.5 (nixos-20.03)
* [ ] [CVE-2020-11021](https://nvd.nist.gov/vuln/detail/CVE-2020-11021) CVSSv3=7.5 (nixos-19.09, nixos-20.03)
Scanned versions: nixos-19.09: 31dcaa5eb67; nixos-20.03: 82b5f87fcc7. May contain false positives.
| non_process | vulnerability roundup http client advisories nixos nixos nixos scanned versions nixos nixos may contain false positives | 0 |
9,584 | 12,535,590,433 | IssuesEvent | 2020-06-04 21:41:51 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | Self JOIN in query builder generates SQL with duplicated columns in query. | Priority:P2 Querying/Nested Queries Querying/Processor Type:Bug | **Describe the bug**
When using a self join in the query builder, the resulting SQL selects the same column from both instances of the self-joined table, resulting in "Duplicate column name 'foo'" (MySQL), because the query contains an inner SELECT `table1`.`foo`, `table2`.`foo` and the two names clash. The columns are probably present in the subquery because they are used in the WHERE filter.
This worked in v0.33.6, but does not work in v0.35.3. The reason is the new way of generating SQL with JOINs, which doesn't do proper column aliasing (see below for a comparison of the query SQL).
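A minimal sketch of the missing column aliasing, runnable with Python's stdlib `sqlite3` (the table and column names are invented for illustration and are not Metabase's schema; note that SQLite itself tolerates duplicate subquery column names, so the MySQL failure is only noted in a comment):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (id INTEGER PRIMARY KEY, parent_id INTEGER, metric INTEGER)")
conn.executemany("INSERT INTO report VALUES (?, ?, ?)",
                 [(1, None, 10), (2, 1, 20), (3, 1, 30)])

# Problematic shape: the inner query exposes two columns both named "metric".
# MySQL rejects the outer SELECT over it with "Duplicate column name 'metric'".
unaliased = """
    SELECT * FROM (
        SELECT t1.metric, t2.metric
        FROM report t1 JOIN report t2 ON t2.parent_id = t1.id
    )
"""

# Fixed shape: alias the second instance so every subquery column is unique.
aliased = """
    SELECT * FROM (
        SELECT t1.metric AS metric, t2.metric AS child_metric
        FROM report t1 JOIN report t2 ON t2.parent_id = t1.id
        WHERE t2.metric > 15
    )
    ORDER BY child_metric
"""
rows = conn.execute(aliased).fetchall()
print(rows)  # [(10, 20), (10, 30)]
```

Aliasing the second instance (`child_metric`) is what keeps the outer `SELECT *` unambiguous.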
**Logs**
Frontend displays: There was a problem with your question, and "Duplicate column name 'metric'".
More details in server logs:
```
05-26 10:55:27 ERROR middleware.catch-exceptions :: Error processing query: null
{:database_id 2,
:started_at (t/zoned-date-time "2020-05-26T10:55:25.924456Z[GMT]"),
:via
[{:status :failed,
:class java.sql.SQLSyntaxErrorException,
:error "(conn=2357549) Duplicate column name 'metric'",
:stacktrace
["org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:243)"
"org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:164)"
"org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:258)"
"org.mariadb.jdbc.ClientSidePreparedStatement.executeInternal(ClientSidePreparedStatement.java:225)"
"org.mariadb.jdbc.ClientSidePreparedStatement.execute(ClientSidePreparedStatement.java:145)"
"org.mariadb.jdbc.ClientSidePreparedStatement.executeQuery(ClientSidePreparedStatement.java:159)"
"com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:431)"
"--> driver.sql_jdbc.execute$fn__70660.invokeStatic(execute.clj:267)"
"driver.sql_jdbc.execute$fn__70660.invoke(execute.clj:265)"
"driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:389)"
"driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:377)"
"driver.sql_jdbc$fn__72711.invokeStatic(sql_jdbc.clj:50)"
"driver.sql_jdbc$fn__72711.invoke(sql_jdbc.clj:48)"
"query_processor.context$executef.invokeStatic(context.clj:59)"
"query_processor.context$executef.invoke(context.clj:48)"
"query_processor.context.default$default_runf.invokeStatic(default.clj:69)"
"query_processor.context.default$default_runf.invoke(default.clj:67)"
"query_processor.context$runf.invokeStatic(context.clj:45)"
"query_processor.context$runf.invoke(context.clj:39)"
"query_processor.reducible$pivot.invokeStatic(reducible.clj:34)"
"query_processor.reducible$pivot.invoke(reducible.clj:31)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__43003.invoke(mbql_to_native.clj:26)"
"query_processor.middleware.check_features$check_features$fn__42317.invoke(check_features.clj:42)"
"query_processor.middleware.optimize_datetime_filters$optimize_datetime_filters$fn__43168.invoke(optimize_datetime_filters.clj:133)"
"query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__47065.invoke(wrap_value_literals.clj:137)"
"query_processor.middleware.annotate$add_column_info$fn__40946.invoke(annotate.clj:577)"
"query_processor.middleware.permissions$check_query_permissions$fn__42192.invoke(permissions.clj:64)"
"query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__43667.invoke(pre_alias_aggregations.clj:40)"
"query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__42390.invoke(cumulative_aggregations.clj:61)"
"query_processor.middleware.resolve_joins$resolve_joins$fn__44199.invoke(resolve_joins.clj:183)"
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__39133.invoke(add_implicit_joins.clj:245)"
"query_processor.middleware.limit$limit$fn__42989.invoke(limit.clj:38)"
"query_processor.middleware.format_rows$format_rows$fn__42970.invoke(format_rows.clj:81)"
"query_processor.middleware.desugar$desugar$fn__42456.invoke(desugar.clj:22)"
"query_processor.middleware.binning$update_binning_strategy$fn__41490.invoke(binning.clj:229)"
"query_processor.middleware.resolve_fields$resolve_fields$fn__41998.invoke(resolve_fields.clj:24)"
"query_processor.middleware.add_dimension_projections$add_remapping$fn__38669.invoke(add_dimension_projections.clj:272)"
"query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__38889.invoke(add_implicit_clauses.clj:147)"
"query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__39282.invoke(add_source_metadata.clj:105)"
"query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__43864.invoke(reconcile_breakout_and_order_by_bucketing.clj:98)"
"query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__41131.invoke(auto_bucket_datetimes.clj:125)"
"query_processor.middleware.resolve_source_table$resolve_source_tables$fn__42045.invoke(resolve_source_table.clj:46)"
"query_processor.middleware.parameters$substitute_parameters$fn__43649.invoke(parameters.clj:97)"
"query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__42097.invoke(resolve_referenced.clj:80)"
"query_processor.middleware.expand_macros$expand_macros$fn__42712.invoke(expand_macros.clj:158)"
"query_processor.middleware.add_timezone_info$add_timezone_info$fn__39313.invoke(add_timezone_info.clj:15)"
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__46949.invoke(splice_params_in_response.clj:32)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__43875$fn__43879.invoke(resolve_database_and_driver.clj:33)"
"driver$do_with_driver.invokeStatic(driver.clj:61)"
"driver$do_with_driver.invoke(driver.clj:57)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__43875.invoke(resolve_database_and_driver.clj:27)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__42918.invoke(fetch_source_query.clj:243)"
"query_processor.middleware.store$initialize_store$fn__46958$fn__46959.invoke(store.clj:11)"
"query_processor.store$do_with_store.invokeStatic(store.clj:46)"
"query_processor.store$do_with_store.invoke(store.clj:40)"
"query_processor.middleware.store$initialize_store$fn__46958.invoke(store.clj:10)"
"query_processor.middleware.cache$maybe_return_cached_results$fn__41974.invoke(cache.clj:208)"
"query_processor.middleware.validate$validate_query$fn__46967.invoke(validate.clj:10)"
"query_processor.middleware.normalize_query$normalize$fn__43016.invoke(normalize_query.clj:22)"
"query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__39151.invoke(add_rows_truncated.clj:36)"
"query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__46934.invoke(results_metadata.clj:128)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__42333.invoke(constraints.clj:42)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__43738.invoke(process_userland_query.clj:136)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__42276.invoke(catch_exceptions.clj:174)"
"query_processor.reducible$async_qp$qp_STAR___37952$thunk__37953.invoke(reducible.clj:101)"
"query_processor.reducible$async_qp$qp_STAR___37952$fn__37955.invoke(reducible.clj:106)"],
:state "42S21"}],
:state "42S21",
:json_query
{:constraints {:max-results 10000, :max-results-bare-rows 2000},
:type :query,
:middleware nil,
:database 2,
:query
{:source-table 23,
:joins [{:source-table 23, :fields :none, :condition [:= [:field-id 157] [:joined-field "Broadcast" [:field-id 157]]], :alias "Broadcast"}],
:expressions {:rate [:/ [:field-id 156] [:joined-field "Broadcast" [:field-id 156]]]},
:fields [[:expression "rate"]],
:limit 1,
:filter [:and [:= [:field-id 160] "bounces"] [:= [:joined-field "Broadcast" [:field-id 160]] "recipients"]]},
:parameters [],
:async? true,
:cache-ttl nil},
:native
{:query
"SELECT `source`.`rate` AS `rate` FROM (SELECT (`broadcast_metric`.`value` / CASE WHEN `Broadcast`.`value` = 0 THEN NULL ELSE `Broadcast`.`value` END) AS `rate`, `broadcast_metric`.`metric` AS `metric`, `Broadcast`.`metric` AS `metric` FROM `broadcast_metric` LEFT JOIN `broadcast_metric` `Broadcast` ON `broadcast_metric`.`broadcast_id` = `Broadcast`.`broadcast_id`) `source` WHERE (`source`.`metric` = ? AND `source`.`metric` = ?) LIMIT 1",
:params ("bounces" "recipients")},
:status :failed,
:class java.sql.SQLException,
:stacktrace
["org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.readErrorPacket(AbstractQueryProtocol.java:1599)"
"org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.readPacket(AbstractQueryProtocol.java:1461)"
"org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.getResult(AbstractQueryProtocol.java:1424)"
"org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeQuery(AbstractQueryProtocol.java:240)"
"org.mariadb.jdbc.ClientSidePreparedStatement.executeInternal(ClientSidePreparedStatement.java:216)"
"org.mariadb.jdbc.ClientSidePreparedStatement.execute(ClientSidePreparedStatement.java:145)"
"org.mariadb.jdbc.ClientSidePreparedStatement.executeQuery(ClientSidePreparedStatement.java:159)"
"com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:431)"
"--> driver.sql_jdbc.execute$fn__70660.invokeStatic(execute.clj:267)"
"driver.sql_jdbc.execute$fn__70660.invoke(execute.clj:265)"
"driver.sql_jdbc.execute$execute_reducible_query.invokeStatic(execute.clj:389)"
"driver.sql_jdbc.execute$execute_reducible_query.invoke(execute.clj:377)"
"driver.sql_jdbc$fn__72711.invokeStatic(sql_jdbc.clj:50)"
"driver.sql_jdbc$fn__72711.invoke(sql_jdbc.clj:48)"
"query_processor.context$executef.invokeStatic(context.clj:59)"
"query_processor.context$executef.invoke(context.clj:48)"
"query_processor.context.default$default_runf.invokeStatic(default.clj:69)"
"query_processor.context.default$default_runf.invoke(default.clj:67)"
"query_processor.context$runf.invokeStatic(context.clj:45)"
"query_processor.context$runf.invoke(context.clj:39)"
"query_processor.reducible$pivot.invokeStatic(reducible.clj:34)"
"query_processor.reducible$pivot.invoke(reducible.clj:31)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__43003.invoke(mbql_to_native.clj:26)"
"query_processor.middleware.check_features$check_features$fn__42317.invoke(check_features.clj:42)"
"query_processor.middleware.optimize_datetime_filters$optimize_datetime_filters$fn__43168.invoke(optimize_datetime_filters.clj:133)"
"query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__47065.invoke(wrap_value_literals.clj:137)"
"query_processor.middleware.annotate$add_column_info$fn__40946.invoke(annotate.clj:577)"
"query_processor.middleware.permissions$check_query_permissions$fn__42192.invoke(permissions.clj:64)"
"query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__43667.invoke(pre_alias_aggregations.clj:40)"
"query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__42390.invoke(cumulative_aggregations.clj:61)"
"query_processor.middleware.resolve_joins$resolve_joins$fn__44199.invoke(resolve_joins.clj:183)"
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__39133.invoke(add_implicit_joins.clj:245)"
"query_processor.middleware.limit$limit$fn__42989.invoke(limit.clj:38)"
"query_processor.middleware.format_rows$format_rows$fn__42970.invoke(format_rows.clj:81)"
"query_processor.middleware.desugar$desugar$fn__42456.invoke(desugar.clj:22)"
"query_processor.middleware.binning$update_binning_strategy$fn__41490.invoke(binning.clj:229)"
"query_processor.middleware.resolve_fields$resolve_fields$fn__41998.invoke(resolve_fields.clj:24)"
"query_processor.middleware.add_dimension_projections$add_remapping$fn__38669.invoke(add_dimension_projections.clj:272)"
"query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__38889.invoke(add_implicit_clauses.clj:147)"
"query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__39282.invoke(add_source_metadata.clj:105)"
"query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__43864.invoke(reconcile_breakout_and_order_by_bucketing.clj:98)"
"query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__41131.invoke(auto_bucket_datetimes.clj:125)"
"query_processor.middleware.resolve_source_table$resolve_source_tables$fn__42045.invoke(resolve_source_table.clj:46)"
"query_processor.middleware.parameters$substitute_parameters$fn__43649.invoke(parameters.clj:97)"
"query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__42097.invoke(resolve_referenced.clj:80)"
"query_processor.middleware.expand_macros$expand_macros$fn__42712.invoke(expand_macros.clj:158)"
"query_processor.middleware.add_timezone_info$add_timezone_info$fn__39313.invoke(add_timezone_info.clj:15)"
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__46949.invoke(splice_params_in_response.clj:32)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__43875$fn__43879.invoke(resolve_database_and_driver.clj:33)"
"driver$do_with_driver.invokeStatic(driver.clj:61)"
"driver$do_with_driver.invoke(driver.clj:57)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__43875.invoke(resolve_database_and_driver.clj:27)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__42918.invoke(fetch_source_query.clj:243)"
"query_processor.middleware.store$initialize_store$fn__46958$fn__46959.invoke(store.clj:11)"
"query_processor.store$do_with_store.invokeStatic(store.clj:46)"
"query_processor.store$do_with_store.invoke(store.clj:40)"
"query_processor.middleware.store$initialize_store$fn__46958.invoke(store.clj:10)"
"query_processor.middleware.cache$maybe_return_cached_results$fn__41974.invoke(cache.clj:208)"
"query_processor.middleware.validate$validate_query$fn__46967.invoke(validate.clj:10)"
"query_processor.middleware.normalize_query$normalize$fn__43016.invoke(normalize_query.clj:22)"
"query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__39151.invoke(add_rows_truncated.clj:36)"
"query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__46934.invoke(results_metadata.clj:128)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__42333.invoke(constraints.clj:42)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__43738.invoke(process_userland_query.clj:136)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__42276.invoke(catch_exceptions.clj:174)"
"query_processor.reducible$async_qp$qp_STAR___37952$thunk__37953.invoke(reducible.clj:101)"
"query_processor.reducible$async_qp$qp_STAR___37952$fn__37955.invoke(reducible.clj:106)"],
:context :question,
:error "Duplicate column name 'metric'",
:row_count 0,
:running_time 0,
:preprocessed
{:constraints {:max-results 10000, :max-results-bare-rows 2000},
:type :query,
:info
{:executed-by 1,
:context :question,
:card-id 223,
:query-hash [-117, -36, 24, -114, 111, 6, 48, 21, -72, -117, 22, -53, 118, 32, 114, 15, -14, -21, 69, 82, -116, -22, 27, -45, 104, -19, -109, 13, -94, 31, 84, 126]},
:database 2,
:query
{:source-table 23,
:joins [{:strategy :left-join, :source-table 23, :condition [:= [:field-id 157] [:joined-field "Broadcast" [:field-id 157]]], :alias "Broadcast"}],
:expressions {:rate [:/ [:field-id 156] [:joined-field "Broadcast" [:field-id 156]]]},
:fields [[:expression "rate"]],
:limit 1,
:filter
[:and
[:= [:field-id 160] [:value "bounces" {:base_type :type/*, :special_type :type/Category, :database_type "ENUM"}]]
[:= [:joined-field "Broadcast" [:field-id 160]] [:value "recipients" {:base_type :type/*, :special_type :type/Category, :database_type "ENUM"}]]]},
:async? true},
:data {:rows [], :cols []}}
```
**To Reproduce**
We have a table `broadcast_metric` with the following structure:
```
| id | int(11)
| broadcast_id | int(11)
| broadcast_name | varchar(128)
| segment_id | int(11)
| metric | enum(...)
| value | decimal(12,4)
```
We want to do a self join on `broadcast_metric` so we can calculate the bounce rate for each `broadcast_id`. We divide the `value` column of the `broadcast_metric` row with `metric = 'bounces'` by the `value` column of the row with `metric = 'recipients'`, for a particular `broadcast_id` (which is the card's variable).
See screenshot of question here: https://imgur.com/a/KxCeR41
We add a custom column `rate`, which is equal to "Value / Broadcast Metric -> Value".
In Metabase v0.33.6, the resulting SQL is:
```SQL
SELECT (`broadcast_metric`.`value` / CASE WHEN `Broadcast`.`value` = 0 THEN NULL ELSE `Broadcast`.`value` END) AS `rate`
FROM `broadcast_metric`
LEFT JOIN `broadcast_metric` `Broadcast` ON `broadcast_metric`.`broadcast_id` = `Broadcast`.`broadcast_id`
WHERE
(`broadcast_metric`.`metric` = 'bounces' AND `Broadcast`.`metric` = 'recipients')
LIMIT 1
```
Which works as expected.
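The working shape of this query can be sketched against sample data. This is a self-contained check using SQLite via Python; the rows are invented for illustration (the real database is MySQL, so only the query shape carries over, not engine-specific behavior):

```python
import sqlite3

# Minimal, hypothetical stand-in for the broadcast_metric table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE broadcast_metric (
        id INTEGER PRIMARY KEY,
        broadcast_id INTEGER,
        metric TEXT,
        value REAL
    );
    INSERT INTO broadcast_metric (broadcast_id, metric, value) VALUES
        (1, 'bounces', 25.0),
        (1, 'recipients', 100.0);
""")

# v0.33.6-style SQL: each side of the self join is filtered directly,
# so neither `metric` column needs to appear in a projected subquery.
row = conn.execute("""
    SELECT (bm.value / CASE WHEN b.value = 0 THEN NULL ELSE b.value END) AS rate
    FROM broadcast_metric bm
    LEFT JOIN broadcast_metric b ON bm.broadcast_id = b.broadcast_id
    WHERE bm.metric = 'bounces' AND b.metric = 'recipients'
    LIMIT 1
""").fetchone()
print(row[0])  # bounce rate for broadcast_id 1: 25.0 / 100.0 = 0.25
```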
However, in v0.35.3 the SQL is different: it contains an inner SELECT with duplicate columns:
```SQL
SELECT `source`.`rate` AS `rate` FROM
(
SELECT
(`broadcast_metric`.`value` / CASE WHEN `Broadcast`.`value` = 0 THEN NULL ELSE `Broadcast`.`value` END) AS `rate`,
`broadcast_metric`.`metric` AS `metric`, -- <--- these two
`Broadcast`.`metric` AS `metric`         -- <--- conflict
FROM
`broadcast_metric` LEFT JOIN `broadcast_metric` `Broadcast`
ON `broadcast_metric`.`broadcast_id` = `Broadcast`.`broadcast_id`) `source`
WHERE
(`source`.`metric` = ? AND `source`.`metric` = ?) LIMIT 1
```
**Expected behavior**
The query returns a result without error.
The `metric` column should either not be selected in the inner query, because it is not necessary in the final result, or should be selected with proper aliasing (e.g. `metric` and `metric2`) to avoid the column name conflict.
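The aliasing variant of the fix can be sketched as follows, again with SQLite and invented sample rows; the alias `metric_2` is a hypothetical name, the point is only that the two inner columns must differ:

```python
import sqlite3

# Hypothetical stand-in data for broadcast_metric.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE broadcast_metric (
        id INTEGER PRIMARY KEY,
        broadcast_id INTEGER,
        metric TEXT,
        value REAL
    );
    INSERT INTO broadcast_metric (broadcast_id, metric, value) VALUES
        (1, 'bounces', 25.0),
        (1, 'recipients', 100.0);
""")

# Same subquery shape as the v0.35.3 SQL, but the second `metric`
# gets a distinct alias, so the outer WHERE can address both sides.
row = conn.execute("""
    SELECT source.rate AS rate FROM (
        SELECT
            (bm.value / CASE WHEN b.value = 0 THEN NULL ELSE b.value END) AS rate,
            bm.metric AS metric,
            b.metric  AS metric_2  -- distinct alias instead of a duplicate
        FROM broadcast_metric bm
        LEFT JOIN broadcast_metric b ON bm.broadcast_id = b.broadcast_id
    ) source
    WHERE source.metric = 'bounces' AND source.metric_2 = 'recipients'
    LIMIT 1
""").fetchone()
print(row[0])  # 25.0 / 100.0 = 0.25
```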
**Information about your Metabase Installation:**
```
{
"browser-info": {
"language": "en-US",
"platform": "Linux x86_64",
"userAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36",
"vendor": "Google Inc."
},
"system-info": {
"file.encoding": "UTF-8",
"java.runtime.name": "OpenJDK Runtime Environment",
"java.runtime.version": "11.0.7+10",
"java.vendor": "AdoptOpenJDK",
"java.vendor.url": "https://adoptopenjdk.net/",
"java.version": "11.0.7",
"java.vm.name": "OpenJDK 64-Bit Server VM",
"java.vm.version": "11.0.7+10",
"os.name": "Linux",
"os.version": "5.4.0-31-generic",
"user.language": "en",
"user.timezone": "GMT"
},
"metabase-info": {
"databases": [
"h2",
"mysql"
],
"hosting-env": "unknown",
"application-database": "h2",
"application-database-details": {
"database": {
"name": "H2",
"version": "1.4.197 (2018-03-18)"
},
"jdbc-driver": {
"name": "H2 JDBC Driver",
"version": "1.4.197 (2018-03-18)"
}
},
"run-mode": "prod",
"version": {
"date": "2019-08-23",
"tag": "v0.35.3",
"branch": "HEAD",
"hash": "7380dcb"
},
"settings": {
"report-timezone": null
}
}
}
```
Metabase v0.35.3 is run as a Docker image.
The database used is MySQL.
**Severity**
*blocking* - some queries that the query builder allows us to build stopped working. This affects only queries with a self-joined table; however, joining different tables with identical column names might also trigger this bug.
[:= [:joined-field "Broadcast" [:field-id 160]] [:value "recipients" {:base_type :type/*, :special_type :type/Category, :database_type "ENUM"}]]]},
:async? true},
:data {:rows [], :cols []}}
```
**To Reproduce**
We have a table `broadcast_metric` with the following structure:
```
| id | int(11)
| broadcast_id | int(11)
| broadcast_name | varchar(128)
| segment_id | int(11)
| metric | enum(...)
| value | decimal(12,4)
```
We want to do a self-join on `broadcast_metric` so we can calculate the bounce rate for each `broadcast_id`: we divide the `value` column of the row with `metric = 'bounces'` by the `value` column of the row with `metric = 'recipients'`, for a particular `broadcast_id` (which is the card's variable).
See screenshot of question here: https://imgur.com/a/KxCeR41
We add the custom column `rate` which is equal to: "Value / Broadcast Metric -> Value"
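The bounce-rate calculation described above can be sketched with SQLite (sample values are hypothetical and types are simplified; the real database is MySQL):

```python
import sqlite3

# Hypothetical sample data for the self-join described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE broadcast_metric (
    id INTEGER PRIMARY KEY,
    broadcast_id INTEGER,
    metric TEXT,
    value REAL
);
INSERT INTO broadcast_metric (broadcast_id, metric, value) VALUES
    (42, 'bounces', 5.0),
    (42, 'recipients', 100.0);
""")

# Self-join: divide the 'bounces' value by the 'recipients' value
# for the same broadcast_id (guarding against division by zero).
row = conn.execute("""
    SELECT b.value / CASE WHEN r.value = 0 THEN NULL ELSE r.value END AS rate
    FROM broadcast_metric AS b
    LEFT JOIN broadcast_metric AS r ON b.broadcast_id = r.broadcast_id
    WHERE b.metric = 'bounces' AND r.metric = 'recipients'
""").fetchone()
print(row[0])  # -> 0.05
```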
In metabase v0.33.6 the resulting SQL is:
```SQL
SELECT (`broadcast_metric`.`value` / CASE WHEN `Broadcast`.`value` = 0 THEN NULL ELSE `Broadcast`.`value` END) AS `rate`
FROM `broadcast_metric`
LEFT JOIN `broadcast_metric` `Broadcast` ON `broadcast_metric`.`broadcast_id` = `Broadcast`.`broadcast_id`
WHERE
(`broadcast_metric`.`metric` = 'bounces' AND `Broadcast`.`metric` = 'recipients')
LIMIT 1
```
Which works as expected.
However, in v0.35.3, the SQL is different and contains an inner SELECT, which has duplicate columns:
```SQL
SELECT `source`.`rate` AS `rate` FROM
(
SELECT
(`broadcast_metric`.`value` / CASE WHEN `Broadcast`.`value` = 0 THEN NULL ELSE `Broadcast`.`value` END) AS `rate`,
`broadcast_metric`.`metric` AS `metric`, -- <--- these two
`Broadcast`.`metric` AS `metric` -- <--- conflict
FROM
`broadcast_metric` LEFT JOIN `broadcast_metric` `Broadcast`
ON `broadcast_metric`.`broadcast_id` = `Broadcast`.`broadcast_id`) `source`
WHERE
(`source`.`metric` = ? AND `source`.`metric` = ?) LIMIT 1
```
**Expected behavior**
The query returns a result without error.
The `metric` column is not selected in the inner query, because it is not needed in the final result, or it is selected with proper aliasing (e.g. `metric` and `metric2`) to avoid the column name conflict.
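The aliasing fix suggested here can be sketched in SQLite: the inner query gives the joined table's `metric` column a distinct alias, so the outer filter can reference both without a name clash (sample values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE broadcast_metric (broadcast_id INTEGER, metric TEXT, value REAL);
INSERT INTO broadcast_metric VALUES
    (42, 'bounces', 5.0),
    (42, 'recipients', 100.0);
""")

# The inner SELECT aliases the joined table's `metric` as `metric_2`,
# so `source` exposes two distinctly named columns instead of clashing.
row = conn.execute("""
    SELECT source.rate
    FROM (
        SELECT b.value / r.value AS rate,
               b.metric AS metric,
               r.metric AS metric_2
        FROM broadcast_metric AS b
        LEFT JOIN broadcast_metric AS r ON b.broadcast_id = r.broadcast_id
    ) AS source
    WHERE source.metric = 'bounces' AND source.metric_2 = 'recipients'
    LIMIT 1
""").fetchone()
print(row[0])  # -> 0.05
```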
**Information about your Metabase Installation:**
```
{
"browser-info": {
"language": "en-US",
"platform": "Linux x86_64",
"userAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36",
"vendor": "Google Inc."
},
"system-info": {
"file.encoding": "UTF-8",
"java.runtime.name": "OpenJDK Runtime Environment",
"java.runtime.version": "11.0.7+10",
"java.vendor": "AdoptOpenJDK",
"java.vendor.url": "https://adoptopenjdk.net/",
"java.version": "11.0.7",
"java.vm.name": "OpenJDK 64-Bit Server VM",
"java.vm.version": "11.0.7+10",
"os.name": "Linux",
"os.version": "5.4.0-31-generic",
"user.language": "en",
"user.timezone": "GMT"
},
"metabase-info": {
"databases": [
"h2",
"mysql"
],
"hosting-env": "unknown",
"application-database": "h2",
"application-database-details": {
"database": {
"name": "H2",
"version": "1.4.197 (2018-03-18)"
},
"jdbc-driver": {
"name": "H2 JDBC Driver",
"version": "1.4.197 (2018-03-18)"
}
},
"run-mode": "prod",
"version": {
"date": "2019-08-23",
"tag": "v0.35.3",
"branch": "HEAD",
"hash": "7380dcb"
},
"settings": {
"report-timezone": null
}
}
}
```
Metabase v0.35.3 is run as docker image.
The database used is MySQL.
**Severity**
*blocking* - some queries that the query editor allows building have stopped working. This affects only queries with a self-joined table; however, joining different tables with the same column names might also trigger this bug.
| process | 1
288,341 | 8,838,585,591 | IssuesEvent | 2019-01-05 18:52:55 | Lembas-Modding-Team/pvp-mode | https://api.github.com/repos/Lembas-Modding-Team/pvp-mode | closed | Define loading points for compatibility modules | cleanup compatibility medium priority | With enums. Currently:
* PreInitialization
* Initialization
* PostInitialization
* LoadingCompleted
* ServerStarting
Will be defined at the loaders. | 1.0 | non_process | 0
5,803 | 21,186,134,525 | IssuesEvent | 2022-04-08 12:56:44 | longhorn/longhorn | https://api.github.com/repos/longhorn/longhorn | reopened | [e2e] add integration test for annotation last-applied-tolerations magically added in 1.1.0 | require/automation-e2e | Ref: https://github.com/longhorn/longhorn/issues/2120 | 1.0 | [e2e] add integration test for annotation last-applied-tolerations magically added in 1.1.0 - Ref: https://github.com/longhorn/longhorn/issues/2120 | non_process | add integration test for annotation last applied tolerations magically added in ref | 0 |
8,247 | 11,421,369,221 | IssuesEvent | 2020-02-03 12:02:27 | parcel-bundler/parcel | https://api.github.com/repos/parcel-bundler/parcel | closed | Wrong css/scss assets path | :bug: Bug CSS Preprocessing Stale | # 🐛 bug report
After building, the CSS asset paths are wrong.
## 🎛 Configuration (.babelrc, package.json, cli command)
I'm using cli command for js only:
`parcel build src/app.js -d dist`
For example, I have this in my SCSS, since my images folder is at my root and my SCSS is under /src/scss:
```scss
body {
background-image: url(../../images/bg.png);
}
```
I imported the SCSS in my app.js like so:
```js
import '/scss/app.scss';
```
## 🤔 Expected Behavior
My bg should go into my dist directory and my compiled css should point at `bg.png`
## 😯 Current Behavior
My bg goes into my dist directory, but the compiled css points at `/bg.png` (so it searches for the bg at my root, where it isn't).
## 💁 Possible Solution
I guess the issue is in Asset.js, in the `addURLDependency` function, where it adds an extra leading `/` that it shouldn't, but I didn't look further into this.
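A minimal sketch of the expected path resolution, assuming POSIX-style paths (names are illustrative, not Parcel's actual internals):

```python
import posixpath

# The url() in src/scss/app.scss points two directories up;
# resolving it against the stylesheet's directory gives the source asset.
css_file = "src/scss/app.scss"
asset_url = "../../images/bg.png"

resolved = posixpath.normpath(
    posixpath.join(posixpath.dirname(css_file), asset_url))
print(resolved)  # -> images/bg.png

# In the dist output, the rewritten url() should stay relative to the
# bundle (e.g. "bg.<hash>.png"), not become an absolute "/bg.png".
hashed_name = "bg.1a2b3c.png"  # hypothetical hashed filename
print("./" + hashed_name)      # -> ./bg.1a2b3c.png
```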
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 1.12.3
| Node | 10.15.3
| npm/Yarn | npm 6.4.1
| Operating System | Windows x64
| 1.0 | Wrong css/scss assets path - # 🐛 bug report
After building, the css assets paths are wrong.
## 🎛 Configuration (.babelrc, package.json, cli command)
I'm using cli command for js only:
`parcel build src/app.js -d dist`
For example, I have this in my SCSS, since my images folder is at my root and my SCSS is under /src/scss:
```scss
body {
background-image: url(../../images/bg.png);
}
```
I imported the SCSS in my app.js like so:
```js
import '/scss/app.scss';
```
## 🤔 Expected Behavior
My bg should go into my dist directory and my compiled css should point at `bg.png`
## 😯 Current Behavior
My bg goes into my dist directory but the compiled css points at `/bg.png` (so it search the bg at my root where it isn't).
## 💁 Possible Solution
I guess the issue is in Asset.js at the addURLDependecy function where it adds an extra starting / where it shoudn't but I didn't look further into this.
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 1.12.3
| Node | 10.15.3
| npm/Yarn | npm 6.4.1
| Operating System | Windows x64
| process | wrong css scss assets path 🐛 bug report after building the css assets paths are wrong 🎛 configuration babelrc package json cli command i m using cli command for js only parcel build src app js d dist for example i have this in my scss since my images folder is at my root and my scss is under src scss scss body background image url images bg png i imported the scss in my app js like so js import scss app scss 🤔 expected behavior my bg should go into my dist directory and my compiled css should point at bg png 😯 current behavior my bg goes into my dist directory but the compiled css points at bg png so it search the bg at my root where it isn t 💁 possible solution i guess the issue is in asset js at the addurldependecy function where it adds an extra starting where it shoudn t but i didn t look further into this 🌍 your environment software version s parcel node npm yarn npm operating system windows | 1 |
363,781 | 25,466,229,939 | IssuesEvent | 2022-11-25 04:51:30 | timescale/docs | https://api.github.com/repos/timescale/docs | closed | [Content Bug] Install TimescaleDB on Kubernetes | bug documentation community | 

According to the prompt in the first picture, the secret "timescale-timescaledb" is missing, and none of the listed secrets seem to give the correct result.
In addition, the steps on the official website are different from the actual operation results. | 1.0 | non_process | 0
546,913 | 16,021,324,147 | IssuesEvent | 2021-04-21 00:06:12 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Cayman Airways online check-in page doesn't work with Shields | feature/shields/adblock priority/P4 workaround/shields-down | ## Description
Cayman Airways online check-in page doesn't work with Shields enabled.
## Steps to Reproduce
1. Go to https://dx.checkin.caymanairways.com/dx/KXCI/
2. Enter confirmation number and last name.
3. Click "Search".
## Actual result:
Clicking "Search" clears the form.
## Expected result:
Clicking "Search" should take me to the check-in interface.
## Reproduces how often:
Always.
## Brave version (brave://version info)
Brave | 0.63.48 Chromium: 74.0.3729.108 (Official Build) (64-bit)
Revision | daaff52abef89988bf2a26091062160b1482b108-refs/branch-heads/3729@{#901}
OS | Linux
## Version/Channel Information:
- Can you reproduce this issue with the current release? Yes.
## Other Additional Information:
- Does the issue resolve itself when disabling Brave Shields?
Yes, only "ads and trackers" needs to be disabled.
Here are the blocked resources:
- `https://www.googletagmanager.com/gtm.js?id=GTM-NJ9FTD5`
- `https://assets.adobedtm.com/d8fa8fa49b406106290f6b4c580f81ee0c3c443e/satelliteLib-3c979e5ff1e9c2902705700a13a94175e8bda6cc.js`
- `https://dx.checkin.caymanairways.com/dx/KXCI/2.0.195-184-3.Updateheaders_KX_ERS0/images/beacon.gif?accessToken=...`
- `https://cp-2sg.as.havail.sabre.com/v2.2/dcci/passenger/details?jipcc=KXCI` | 1.0 | non_process | 0