Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
110,870 | 16,995,012,346 | IssuesEvent | 2021-07-01 04:41:15 | avallete/yt-playlists-delete-enhancer | https://api.github.com/repos/avallete/yt-playlists-delete-enhancer | closed | CVE-2020-11022 (Medium) detected in jquery-1.8.1.min.js | no-issue-activity security vulnerability | ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: yt-playlists-delete-enhancer/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: yt-playlists-delete-enhancer/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/avallete/yt-playlists-delete-enhancer/commit/9c336b0fa3155406498ca56519999606da4494b5">9c336b0fa3155406498ca56519999606da4494b5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-11022 (Medium) detected in jquery-1.8.1.min.js - ## CVE-2020-11022 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: yt-playlists-delete-enhancer/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: yt-playlists-delete-enhancer/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/avallete/yt-playlists-delete-enhancer/commit/9c336b0fa3155406498ca56519999606da4494b5">9c336b0fa3155406498ca56519999606da4494b5</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In jQuery versions greater than or equal to 1.2 and before 3.5.0, passing HTML from untrusted sources - even after sanitizing it - to one of jQuery's DOM manipulation methods (i.e. .html(), .append(), and others) may execute untrusted code. This problem is patched in jQuery 3.5.0.
<p>Publish Date: 2020-04-29
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11022>CVE-2020-11022</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/">https://blog.jquery.com/2020/04/10/jquery-3-5-0-released/</a></p>
<p>Release Date: 2020-04-29</p>
<p>Fix Resolution: jQuery - 3.5.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file yt playlists delete enhancer node modules redeyed examples browser index html path to vulnerable library yt playlists delete enhancer node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details in jquery versions greater than or equal to and before passing html from untrusted sources even after sanitizing it to one of jquery s dom manipulation methods i e html append and others may execute untrusted code this problem is patched in jquery publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource | 0 |
141,280 | 12,963,178,036 | IssuesEvent | 2020-07-20 18:21:24 | edgexfoundry/edgex-docs | https://api.github.com/repos/edgexfoundry/edgex-docs | closed | Document how to add additional services to API Gateway | documentation geneva security-services | Geneva release supports adding additional services to the API Gateway. This need to be documents in the API Gateway sections | 1.0 | Document how to add additional services to API Gateway - Geneva release supports adding additional services to the API Gateway. This need to be documents in the API Gateway sections | non_defect | document how to add additional services to api gateway geneva release supports adding additional services to the api gateway this need to be documents in the api gateway sections | 0 |
304,075 | 26,250,972,441 | IssuesEvent | 2023-01-05 19:17:04 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional_with_es_ssl/apps/triggers_actions_ui/connectors/opsgenie·ts - Actions and Triggers app Connectors Opsgenie connector page should disable the run button when the message field is not filled | failed-test Team:ResponseOps | A test failed on a tracked branch
```
Error: expected true to equal false
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.be.Assertion.equal (node_modules/@kbn/expect/expect.js:227:8)
at Assertion.be (node_modules/@kbn/expect/expect.js:69:22)
at Context.<anonymous> (x-pack/test/functional_with_es_ssl/apps/triggers_actions_ui/connectors/opsgenie.ts:141:87)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:78:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/24837#0185104f-b530-40de-a946-462b5ba61783)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional_with_es_ssl/apps/triggers_actions_ui/connectors/opsgenie·ts","test.name":"Actions and Triggers app Connectors Opsgenie connector page should disable the run button when the message field is not filled","test.failCount":1}} --> | 1.0 | Failing test: Chrome X-Pack UI Functional Tests.x-pack/test/functional_with_es_ssl/apps/triggers_actions_ui/connectors/opsgenie·ts - Actions and Triggers app Connectors Opsgenie connector page should disable the run button when the message field is not filled - A test failed on a tracked branch
```
Error: expected true to equal false
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.be.Assertion.equal (node_modules/@kbn/expect/expect.js:227:8)
at Assertion.be (node_modules/@kbn/expect/expect.js:69:22)
at Context.<anonymous> (x-pack/test/functional_with_es_ssl/apps/triggers_actions_ui/connectors/opsgenie.ts:141:87)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at Object.apply (node_modules/@kbn/test/target_node/src/functional_test_runner/lib/mocha/wrap_function.js:78:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/24837#0185104f-b530-40de-a946-462b5ba61783)
<!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests.x-pack/test/functional_with_es_ssl/apps/triggers_actions_ui/connectors/opsgenie·ts","test.name":"Actions and Triggers app Connectors Opsgenie connector page should disable the run button when the message field is not filled","test.failCount":1}} --> | non_defect | failing test chrome x pack ui functional tests x pack test functional with es ssl apps triggers actions ui connectors opsgenie·ts actions and triggers app connectors opsgenie connector page should disable the run button when the message field is not filled a test failed on a tracked branch error expected true to equal false at assertion assert node modules kbn expect expect js at assertion be assertion equal node modules kbn expect expect js at assertion be node modules kbn expect expect js at context x pack test functional with es ssl apps triggers actions ui connectors opsgenie ts at runmicrotasks at processticksandrejections node internal process task queues at object apply node modules kbn test target node src functional test runner lib mocha wrap function js first failure | 0 |
74,421 | 25,122,028,302 | IssuesEvent | 2022-11-09 08:56:01 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Jumping up to first unread clears timeline and issues error message | T-Defect | ### Steps to reproduce
1. Entered a room
2. clicked on marker to jump up to first unread message
### Outcome
#### What did you expect?
jumping up to first unread message
#### What happened instead?
timeline is cleared to empty screen and error message is displayed

### Operating system
Windows 11
### Application version
_No response_
### How did you install the app?
from element.io
### Homeserver
_No response_
### Will you send logs?
Yes | 1.0 | Jumping up to first unread clears timeline and issues error message - ### Steps to reproduce
1. Entered a room
2. clicked on marker to jump up to first unread message
### Outcome
#### What did you expect?
jumping up to first unread message
#### What happened instead?
timeline is cleared to empty screen and error message is displayed

### Operating system
Windows 11
### Application version
_No response_
### How did you install the app?
from element.io
### Homeserver
_No response_
### Will you send logs?
Yes | defect | jumping up to first unread clears timeline and issues error message steps to reproduce entered a room clicked on marker to jump up to first unread message outcome what did you expect jumping up to first unread message what happened instead timeline is cleared to empty screen and error message is displayed operating system windows application version no response how did you install the app from element io homeserver no response will you send logs yes | 1 |
49,436 | 13,186,712,873 | IssuesEvent | 2020-08-13 01:04:28 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | steamshovel recopulsewaveform (Trac #1380) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1380">https://code.icecube.wisc.edu/ticket/1380</a>, reported by berghaus and owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-10-06T16:08:56",
"description": "And another strange problem:\nFor older files (IC86-I and earlier), the calibrated waveform plotter gives an error apparently related to the calibration frame (even though it's in there and can be accessed by python scripts). For newer files (tested IC86-III) it works ok.\n\nERROR (Artist): Python exception in PyArtist::create\nTraceback (most recent call last):\n File \"/home/berghaus/icerec/build/lib/icecube/steamshovel/artists/mplart/AbstractMPLArtist.py\", line 83, in create\n self.create_plot( frame, fig )\n File \"/home/berghaus/icerec/build/lib/icecube/steamshovel/artists/RecoPulseWaveform.py\", line 33, in create_plot\n calib = frame['I3Calibration']\nKeyError: I3Calibration (PyArtist.cpp:154 in virtual void scripting::shovelart::PyArtist::create(I3FramePtr, SceneGroup*, const SceneState&))\n",
"reporter": "berghaus",
"cc": "",
"resolution": "worksforme",
"_ts": "1444147736633303",
"component": "combo reconstruction",
"summary": "steamshovel recopulsewaveform",
"priority": "normal",
"keywords": "steamshovel",
"time": "2015-10-05T08:25:32",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
| 1.0 | steamshovel recopulsewaveform (Trac #1380) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1380">https://code.icecube.wisc.edu/ticket/1380</a>, reported by berghaus and owned by hdembinski</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-10-06T16:08:56",
"description": "And another strange problem:\nFor older files (IC86-I and earlier), the calibrated waveform plotter gives an error apparently related to the calibration frame (even though it's in there and can be accessed by python scripts). For newer files (tested IC86-III) it works ok.\n\nERROR (Artist): Python exception in PyArtist::create\nTraceback (most recent call last):\n File \"/home/berghaus/icerec/build/lib/icecube/steamshovel/artists/mplart/AbstractMPLArtist.py\", line 83, in create\n self.create_plot( frame, fig )\n File \"/home/berghaus/icerec/build/lib/icecube/steamshovel/artists/RecoPulseWaveform.py\", line 33, in create_plot\n calib = frame['I3Calibration']\nKeyError: I3Calibration (PyArtist.cpp:154 in virtual void scripting::shovelart::PyArtist::create(I3FramePtr, SceneGroup*, const SceneState&))\n",
"reporter": "berghaus",
"cc": "",
"resolution": "worksforme",
"_ts": "1444147736633303",
"component": "combo reconstruction",
"summary": "steamshovel recopulsewaveform",
"priority": "normal",
"keywords": "steamshovel",
"time": "2015-10-05T08:25:32",
"milestone": "",
"owner": "hdembinski",
"type": "defect"
}
```
</p>
</details>
| defect | steamshovel recopulsewaveform trac migrated from json status closed changetime description and another strange problem nfor older files i and earlier the calibrated waveform plotter gives an error apparently related to the calibration frame even though it s in there and can be accessed by python scripts for newer files tested iii it works ok n nerror artist python exception in pyartist create ntraceback most recent call last n file home berghaus icerec build lib icecube steamshovel artists mplart abstractmplartist py line in create n self create plot frame fig n file home berghaus icerec build lib icecube steamshovel artists recopulsewaveform py line in create plot n calib frame nkeyerror pyartist cpp in virtual void scripting shovelart pyartist create scenegroup const scenestate n reporter berghaus cc resolution worksforme ts component combo reconstruction summary steamshovel recopulsewaveform priority normal keywords steamshovel time milestone owner hdembinski type defect | 1 |
28,532 | 5,286,610,870 | IssuesEvent | 2017-02-08 09:51:01 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | ClassCastException on Record.get(Field<T>) when Record and Field T types don't match | C: Functionality P: Medium T: Defect | This mostly happens with plain SQL queries, e.g.
```java
Record record = ctx.fetchOne("select 1 AS one from dual");
Field<String> one = field("one", String.class);
String string = record.get(one);
```
The above usage seems perfectly fine from an API perspective, but doesn't yield the correct result (any number type converted to `String`). It throws a `ClassCastException` though. | 1.0 | ClassCastException on Record.get(Field<T>) when Record and Field T types don't match - This mostly happens with plain SQL queries, e.g.
```java
Record record = ctx.fetchOne("select 1 AS one from dual");
Field<String> one = field("one", String.class);
String string = record.get(one);
```
The above usage seems perfectly fine from an API perspective, but doesn't yield the correct result (any number type converted to `String`). It throws a `ClassCastException` though. | defect | classcastexception on record get field when record and field t types don t match this mostly happens with plain sql queries e g java record record ctx fetchone select as one from dual field one field one string class string string record get one the above usage seems perfectly fine from an api perspective but doesn t yield the correct result any number type converted to string it throws a classcastexception though | 1 |
259,687 | 22,504,673,273 | IssuesEvent | 2022-06-23 14:35:09 | MPMG-DCC-UFMG/F01 | https://api.github.com/repos/MPMG-DCC-UFMG/F01 | opened | Teste de generalizacao para a tag Terceiro Setor - Dados de Parcerias - Tabuleiro | generalization test development | DoD: Realizar o teste de Generalização do validador da tag Terceiro Setor - Dados de Parcerias para o Município de Tabuleiro. | 1.0 | Teste de generalizacao para a tag Terceiro Setor - Dados de Parcerias - Tabuleiro - DoD: Realizar o teste de Generalização do validador da tag Terceiro Setor - Dados de Parcerias para o Município de Tabuleiro. | non_defect | teste de generalizacao para a tag terceiro setor dados de parcerias tabuleiro dod realizar o teste de generalização do validador da tag terceiro setor dados de parcerias para o município de tabuleiro | 0 |
78,910 | 27,817,616,162 | IssuesEvent | 2023-03-18 21:36:30 | scipy/scipy | https://api.github.com/repos/scipy/scipy | closed | BUG: stats: large errors in genhyperbolic.cdf and .sf for large x | defect scipy.stats | ### Describe your issue.
When the input `x` is large, `genhyperbolic.cdf(x, ...)` can be very inaccurate, and in some cases it can return a completely wrong value. Because `genhyperbolic.sf()` uses the default implementation, it also suffers from accuracy issues with large `x`.
I know what the problem is, and I have a pull request with a fix in development.
### Reproducing Code Example
```python
In [2]: from scipy.stats import genhyperbolic
In [3]: genhyperbolic.cdf(80, 1, 3, 2) # Result should not exceed 1.
Out[3]: 1.0000000000017628
In [4]: genhyperbolic.cdf(85, 1, 3, 2) # Result is completely wrong.
Out[4]: 3.6037918275063203e-10
```
### Error message
```shell
N/A.
```
### SciPy/NumPy/Python version and system information
```shell
1.11.0.dev0+1671.81f84c8 1.25.0.dev0+880.g4db5303f47 sys.version_info(major=3, minor=10, micro=8, releaselevel='final', serial=0)
/home/warren/repos/git/forks/scipy/build-install/lib/python3.10/site-packages/scipy/__config__.py:140: UserWarning: Install `pyyaml` for better output
warnings.warn("Install `pyyaml` for better output", stacklevel=1)
{
"Compilers": {
"c": {
"name": "gcc",
"linker": "ld.bfd",
"version": "11.3.0",
"commands": "cc"
},
"cython": {
"name": "cython",
"linker": "cython",
"version": "0.29.33",
"commands": "cython"
},
"c++": {
"name": "gcc",
"linker": "ld.bfd",
"version": "11.3.0",
"commands": "c++"
},
"fortran": {
"name": "gcc",
"linker": "ld.bfd",
"version": "11.3.0",
"commands": "gfortran"
},
"pythran": {
"version": "0.12.1",
"include directory": "../../../../../py3.10.8/lib/python3.10/site-packages/pythran"
}
},
"Machine Information": {
"host": {
"cpu": "x86_64",
"family": "x86_64",
"endian": "little",
"system": "linux"
},
"build": {
"cpu": "x86_64",
"family": "x86_64",
"endian": "little",
"system": "linux"
},
"cross-compiled": false
},
"Build Dependencies": {
"blas": {
"name": "openblas",
"found": true,
"version": "0.3.20",
"detection method": "pkgconfig",
"include directory": "/usr/include/x86_64-linux-gnu/openblas-pthread/",
"lib directory": "/usr/lib/x86_64-linux-gnu/openblas-pthread/",
"openblas configuration": "USE_64BITINT= DYNAMIC_ARCH=1 DYNAMIC_OLDER=1 NO_CBLAS= NO_LAPACK= NO_LAPACKE=1 NO_AFFINITY=1 USE_OPENMP=0 generic MAX_THREADS=64",
"pc file directory": "/usr/lib/x86_64-linux-gnu/pkgconfig"
},
"lapack": {
"name": "openblas",
"found": true,
"version": "0.3.20",
"detection method": "pkgconfig",
"include directory": "/usr/include/x86_64-linux-gnu/openblas-pthread/",
"lib directory": "/usr/lib/x86_64-linux-gnu/openblas-pthread/",
"openblas configuration": "USE_64BITINT= DYNAMIC_ARCH=1 DYNAMIC_OLDER=1 NO_CBLAS= NO_LAPACK= NO_LAPACKE=1 NO_AFFINITY=1 USE_OPENMP=0 generic MAX_THREADS=64",
"pc file directory": "/usr/lib/x86_64-linux-gnu/pkgconfig"
}
},
"Python Information": {
"path": "/home/warren/py3.10.8/bin/python3",
"version": "3.10"
}
}
```
| 1.0 | BUG: stats: large errors in genhyperbolic.cdf and .sf for large x - ### Describe your issue.
When the input `x` is large, `genhyperbolic.cdf(x, ...)` can be very inaccurate, and in some cases it can return a completely wrong value. Because `genhyperbolic.sf()` uses the default implementation, it also suffers from accuracy issues with large `x`.
I know what the problem is, and I have a pull request with a fix in development.
### Reproducing Code Example
```python
In [2]: from scipy.stats import genhyperbolic
In [3]: genhyperbolic.cdf(80, 1, 3, 2) # Result should not exceed 1.
Out[3]: 1.0000000000017628
In [4]: genhyperbolic.cdf(85, 1, 3, 2) # Result is completely wrong.
Out[4]: 3.6037918275063203e-10
```
### Error message
```shell
N/A.
```
### SciPy/NumPy/Python version and system information
```shell
1.11.0.dev0+1671.81f84c8 1.25.0.dev0+880.g4db5303f47 sys.version_info(major=3, minor=10, micro=8, releaselevel='final', serial=0)
/home/warren/repos/git/forks/scipy/build-install/lib/python3.10/site-packages/scipy/__config__.py:140: UserWarning: Install `pyyaml` for better output
warnings.warn("Install `pyyaml` for better output", stacklevel=1)
{
"Compilers": {
"c": {
"name": "gcc",
"linker": "ld.bfd",
"version": "11.3.0",
"commands": "cc"
},
"cython": {
"name": "cython",
"linker": "cython",
"version": "0.29.33",
"commands": "cython"
},
"c++": {
"name": "gcc",
"linker": "ld.bfd",
"version": "11.3.0",
"commands": "c++"
},
"fortran": {
"name": "gcc",
"linker": "ld.bfd",
"version": "11.3.0",
"commands": "gfortran"
},
"pythran": {
"version": "0.12.1",
"include directory": "../../../../../py3.10.8/lib/python3.10/site-packages/pythran"
}
},
"Machine Information": {
"host": {
"cpu": "x86_64",
"family": "x86_64",
"endian": "little",
"system": "linux"
},
"build": {
"cpu": "x86_64",
"family": "x86_64",
"endian": "little",
"system": "linux"
},
"cross-compiled": false
},
"Build Dependencies": {
"blas": {
"name": "openblas",
"found": true,
"version": "0.3.20",
"detection method": "pkgconfig",
"include directory": "/usr/include/x86_64-linux-gnu/openblas-pthread/",
"lib directory": "/usr/lib/x86_64-linux-gnu/openblas-pthread/",
"openblas configuration": "USE_64BITINT= DYNAMIC_ARCH=1 DYNAMIC_OLDER=1 NO_CBLAS= NO_LAPACK= NO_LAPACKE=1 NO_AFFINITY=1 USE_OPENMP=0 generic MAX_THREADS=64",
"pc file directory": "/usr/lib/x86_64-linux-gnu/pkgconfig"
},
"lapack": {
"name": "openblas",
"found": true,
"version": "0.3.20",
"detection method": "pkgconfig",
"include directory": "/usr/include/x86_64-linux-gnu/openblas-pthread/",
"lib directory": "/usr/lib/x86_64-linux-gnu/openblas-pthread/",
"openblas configuration": "USE_64BITINT= DYNAMIC_ARCH=1 DYNAMIC_OLDER=1 NO_CBLAS= NO_LAPACK= NO_LAPACKE=1 NO_AFFINITY=1 USE_OPENMP=0 generic MAX_THREADS=64",
"pc file directory": "/usr/lib/x86_64-linux-gnu/pkgconfig"
}
},
"Python Information": {
"path": "/home/warren/py3.10.8/bin/python3",
"version": "3.10"
}
}
```
| defect | bug stats large errors in genhyperbolic cdf and sf for large x describe your issue when the input x is large genhyperbolic cdf x can be very inaccurate and in some cases it can return a completely wrong value because genhyperbolic sf uses the default implementation it also suffers from accuracy issues with large x i know what the problem is and i have a pull request with a fix in development reproducing code example python in from scipy stats import genhyperbolic in genhyperbolic cdf result should not exceed out in genhyperbolic cdf result is completely wrong out error message shell n a scipy numpy python version and system information shell sys version info major minor micro releaselevel final serial home warren repos git forks scipy build install lib site packages scipy config py userwarning install pyyaml for better output warnings warn install pyyaml for better output stacklevel compilers c name gcc linker ld bfd version commands cc cython name cython linker cython version commands cython c name gcc linker ld bfd version commands c fortran name gcc linker ld bfd version commands gfortran pythran version include directory lib site packages pythran machine information host cpu family endian little system linux build cpu family endian little system linux cross compiled false build dependencies blas name openblas found true version detection method pkgconfig include directory usr include linux gnu openblas pthread lib directory usr lib linux gnu openblas pthread openblas configuration use dynamic arch dynamic older no cblas no lapack no lapacke no affinity use openmp generic max threads pc file directory usr lib linux gnu pkgconfig lapack name openblas found true version detection method pkgconfig include directory usr include linux gnu openblas pthread lib directory usr lib linux gnu openblas pthread openblas configuration use dynamic arch dynamic older no cblas no lapack no lapacke no affinity use openmp generic max threads pc file directory usr lib 
linux gnu pkgconfig python information path home warren bin version | 1 |
22,040 | 3,932,116,028 | IssuesEvent | 2016-04-25 14:47:09 | Microsoft/vscode | https://api.github.com/repos/Microsoft/vscode | opened | Test Git: Commit in the command palette | testplan-item | Test plan item for #4471
@joaomoreno Please complete... | 1.0 | Test Git: Commit in the command palette - Test plan item for #4471
@joaomoreno Please complete... | non_defect | test git commit in the command palette test plan item for joaomoreno please complete | 0 |
73,717 | 24,768,667,085 | IssuesEvent | 2022-10-22 21:35:08 | scipy/scipy | https://api.github.com/repos/scipy/scipy | opened | BUG: Functions that call lapack/blas use lp64 even when SciPy is built with ilp64 libraries | defect | ### Describe your issue.
In functions that call lapack or blas, such as svds, the lp64 version is called even when Scipy is built with ilp64. This is problematic for arrays with more than $2^{31} - 1$ entries.
I have built SciPy with ilp64 lapack/blas
```
>>> import scipy as sp
>>> sp.show_config()
lapack_armpl_info:
NOT AVAILABLE
lapack_mkl_info:
NOT AVAILABLE
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
openblas64__lapack_info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
lapack_ilp64_opt_info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
openblas64__info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
blas_ilp64_opt_info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
Supported SIMD extensions in this NumPy install:
baseline = SSE,SSE2,SSE3
found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2
not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_KNM,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL
```
### Reproducing Code Example
```python
import numpy as np
import scipy as sp
from scipy.sparse.linalg import LinearOperator
from scipy.sparse.linalg import svds
N = 2**25
diag_scale = np.ones((N,), np.float32)
diag_scale[-1] = 100
# This is a diagonal matrix where the last entry is 100.
# The largest singular value should be 100.
A = LinearOperator((N, N),
matvec=lambda x: x.ravel() * diag_scale,
rmatvec=lambda x : x.ravel() * diag_scale,
dtype=np.float32)
print(svds(A, k=1, solver='lobpcg', return_singular_vectors=False)[0])
```
### Error message
```shell
This should print 100. However, it returns 1 instead. This is because the number of indices in this matrix is greater than $2^31-1$, the last entry is never used in computations.
```
### SciPy/NumPy/Python version information
1.9.3 1.23.4 sys.version_info(major=3, minor=10, micro=7, releaselevel='final', serial=0) | 1.0 | BUG: Functions that call lapack/blas use lp64 even when SciPy is built with ilp64 libraries - ### Describe your issue.
In functions that call LAPACK or BLAS, such as `svds`, the LP64 version is called even when SciPy is built with ILP64. This is problematic for arrays with more than $2^{31} - 1$ entries.
I have built SciPy with ILP64 LAPACK/BLAS
```
>>> import scipy as sp
>>> sp.show_config()
lapack_armpl_info:
NOT AVAILABLE
lapack_mkl_info:
NOT AVAILABLE
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
openblas64__lapack_info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
lapack_ilp64_opt_info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None), ('HAVE_LAPACKE', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
openblas64__info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
blas_ilp64_opt_info:
libraries = ['openblas64_', 'openblas64_']
library_dirs = ['/opt/OpenBLAS/lib']
language = c
define_macros = [('HAVE_CBLAS', None), ('BLAS_SYMBOL_SUFFIX', '64_'), ('HAVE_BLAS_ILP64', None)]
runtime_library_dirs = ['/opt/OpenBLAS/lib']
Supported SIMD extensions in this NumPy install:
baseline = SSE,SSE2,SSE3
found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2
not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_KNM,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL
```
### Reproducing Code Example
```python
import numpy as np
import scipy as sp
from scipy.sparse.linalg import LinearOperator
from scipy.sparse.linalg import svds
N = 2**25
diag_scale = np.ones((N,), np.float32)
diag_scale[-1] = 100
# This is a diagonal matrix where the last entry is 100.
# The largest singular value should be 100.
A = LinearOperator((N, N),
                   matvec=lambda x: x.ravel() * diag_scale,
                   rmatvec=lambda x: x.ravel() * diag_scale,
                   dtype=np.float32)
print(svds(A, k=1, solver='lobpcg', return_singular_vectors=False)[0])
```
### Error message
```shell
This should print 100. However, it returns 1 instead. This is because the number of indices in this matrix is greater than $2^{31}-1$, so the last entry is never used in computations.
```
### SciPy/NumPy/Python version information
1.9.3 1.23.4 sys.version_info(major=3, minor=10, micro=7, releaselevel='final', serial=0) | defect | bug functions that call lapack blas use even when scipy is built with libraries describe your issue in functions that call lapack or blas such as svds the version is called even when scipy is built with this is problematic for arrays with more than entries i have built scipy with lapack blas import scipy as sp sp show config lapack armpl info not available lapack mkl info not available openblas lapack info libraries library dirs language c define macros runtime library dirs lapack opt info libraries library dirs language c define macros runtime library dirs lapack info libraries library dirs language c define macros runtime library dirs lapack opt info libraries library dirs language c define macros runtime library dirs info libraries library dirs language c define macros runtime library dirs blas opt info libraries library dirs language c define macros runtime library dirs supported simd extensions in this numpy install baseline sse found popcnt avx not found knl knm skx clx cnl icl reproducing code example python import numpy as np import scipy as sp from scipy sparse linalg import linearoperator from scipy sparse linalg import svds n diag scale np ones n np diag scale this is a diagonal matrix where the last entry is the largest singular value should be a linearoperator n n matvec lambda x x ravel diag scale rmatvec lambda x x ravel diag scale dtype np print svds a k solver lobpcg return singular vectors false error message shell this should print however it returns instead this is because the number of indices in this matrix is greater than the last entry is never used in computations scipy numpy python version information sys version info major minor micro releaselevel final serial | 1 |
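As a quick sanity check on the reproducer above (this snippet is an addition for illustration, not part of the original report), the N x N operator's index space can be compared against the 32-bit LP64 limit:

```python
import numpy as np

N = 2**25
total_entries = N * N                  # index space of the N x N LinearOperator
lp64_limit = np.iinfo(np.int32).max    # 2**31 - 1, the LP64 integer limit

# With an LP64 BLAS/LAPACK, indices past this limit cannot be represented,
# which is consistent with the last diagonal entry being ignored.
print(total_entries, ">", lp64_limit, ":", total_entries > lp64_limit)
```

This makes the scale of the problem concrete: 2**50 entries is far beyond what 32-bit indices can address.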
182,917 | 31,027,446,015 | IssuesEvent | 2023-08-10 10:05:33 | hirosystems/wallet | https://api.github.com/repos/hirosystems/wallet | opened | [Design] Review, and improve, secret key input page | Design :gem: | Low priority, but we should consider how we can improve secret key input with new changes coming.
**Design requirements**
- Must have separate inputs
- Each input must be clearly labelled with its corresponding number
- Per-input validation (if one word is wrong, how does the user know?)
- Works with both 12 and 24 word phrases | 1.0 | [Design] Review, and improve, secret key input page - Low priority, but we should consider how we can improve secret key input with new changes coming.
**Design requirements**
- Must have separate inputs
- Each input must be clearly labelled with its corresponding number
- Per-input validation (if one word is wrong, how does the user know?)
- Works with both 12 and 24 word phrases | non_defect | review and improve secret key input page low priority but we should consider how we can improve secret key input with new changes coming design requirements must have separate inputs each input must be clearly labelled with its corresponding number per input validation if one word is wrong how does the user know works with both and word phrases | 0 |
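The requirements above can be sketched as a small validator — a hypothetical illustration, not the wallet's actual code; `WORDLIST` stands in for the real BIP-39 wordlist:

```python
# Hypothetical sketch of per-input secret-key validation.
WORDLIST = {"abandon", "ability", "able", "zoo"}  # illustrative subset of BIP-39

def validate_phrase(words):
    """Return (length_ok, bad_indices) so each wrong word can be flagged."""
    length_ok = len(words) in (12, 24)            # 12- and 24-word phrases only
    bad_indices = [i for i, w in enumerate(words) if w not in WORDLIST]
    return length_ok, bad_indices

ok, bad = validate_phrase(["abandon"] * 11 + ["typo"])
print(ok, bad)  # True [11]: length is fine, word 12 fails per-input validation
```

Returning indices rather than a single boolean is what enables the per-input error display the requirements call for.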
144,951 | 19,318,937,973 | IssuesEvent | 2021-12-14 01:41:31 | vlaship/async | https://api.github.com/repos/vlaship/async | opened | CVE-2021-22096 (Medium) detected in spring-webmvc-5.1.10.RELEASE.jar, spring-web-5.1.10.RELEASE.jar | security vulnerability | ## CVE-2021-22096 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-webmvc-5.1.10.RELEASE.jar</b>, <b>spring-web-5.1.10.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-webmvc-5.1.10.RELEASE.jar</b></p></summary>
<p>Spring Web MVC</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: async/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-webmvc/5.1.10.RELEASE/67b6da7852e89bc0df6ce36a263ac4377fe48e27/spring-webmvc-5.1.10.RELEASE.jar,/root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-webmvc/5.1.10.RELEASE/67b6da7852e89bc0df6ce36a263ac4377fe48e27/spring-webmvc-5.1.10.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.9.RELEASE.jar (Root Library)
- :x: **spring-webmvc-5.1.10.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-web-5.1.10.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: async/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.10.RELEASE/f769e9287286f80f6b1d943cc27194ec33d2041c/spring-web-5.1.10.RELEASE.jar,/root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.10.RELEASE/f769e9287286f80f6b1d943cc27194ec33d2041c/spring-web-5.1.10.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.9.RELEASE.jar (Root Library)
- spring-webmvc-5.1.10.RELEASE.jar
- :x: **spring-web-5.1.10.RELEASE.jar** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
<p>Publish Date: 2021-10-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096>CVE-2021-22096</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2021-22096">https://tanzu.vmware.com/security/cve-2021-22096</a></p>
<p>Release Date: 2021-10-28</p>
<p>Fix Resolution: org.springframework:spring-core:5.2.18.RELEASE,5.3.12;org.springframework:spring-web:5.2.18.RELEASE,5.3.12;org.springframework:spring-webmvc:5.2.18.RELEASE,5.3.12;org.springframework:spring-webflux:5.2.18.RELEASE,5.3.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-22096 (Medium) detected in spring-webmvc-5.1.10.RELEASE.jar, spring-web-5.1.10.RELEASE.jar - ## CVE-2021-22096 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>spring-webmvc-5.1.10.RELEASE.jar</b>, <b>spring-web-5.1.10.RELEASE.jar</b></p></summary>
<p>
<details><summary><b>spring-webmvc-5.1.10.RELEASE.jar</b></p></summary>
<p>Spring Web MVC</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: async/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-webmvc/5.1.10.RELEASE/67b6da7852e89bc0df6ce36a263ac4377fe48e27/spring-webmvc-5.1.10.RELEASE.jar,/root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-webmvc/5.1.10.RELEASE/67b6da7852e89bc0df6ce36a263ac4377fe48e27/spring-webmvc-5.1.10.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.9.RELEASE.jar (Root Library)
- :x: **spring-webmvc-5.1.10.RELEASE.jar** (Vulnerable Library)
</details>
<details><summary><b>spring-web-5.1.10.RELEASE.jar</b></p></summary>
<p>Spring Web</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: async/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.10.RELEASE/f769e9287286f80f6b1d943cc27194ec33d2041c/spring-web-5.1.10.RELEASE.jar,/root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-web/5.1.10.RELEASE/f769e9287286f80f6b1d943cc27194ec33d2041c/spring-web-5.1.10.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.1.9.RELEASE.jar (Root Library)
- spring-webmvc-5.1.10.RELEASE.jar
- :x: **spring-web-5.1.10.RELEASE.jar** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.10, 5.2.0 - 5.2.17, and older unsupported versions, it is possible for a user to provide malicious input to cause the insertion of additional log entries.
<p>Publish Date: 2021-10-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-22096>CVE-2021-22096</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2021-22096">https://tanzu.vmware.com/security/cve-2021-22096</a></p>
<p>Release Date: 2021-10-28</p>
<p>Fix Resolution: org.springframework:spring-core:5.2.18.RELEASE,5.3.12;org.springframework:spring-web:5.2.18.RELEASE,5.3.12;org.springframework:spring-webmvc:5.2.18.RELEASE,5.3.12;org.springframework:spring-webflux:5.2.18.RELEASE,5.3.12</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in spring webmvc release jar spring web release jar cve medium severity vulnerability vulnerable libraries spring webmvc release jar spring web release jar spring webmvc release jar spring web mvc library home page a href path to dependency file async build gradle path to vulnerable library root gradle caches modules files org springframework spring webmvc release spring webmvc release jar root gradle caches modules files org springframework spring webmvc release spring webmvc release jar dependency hierarchy spring boot starter web release jar root library x spring webmvc release jar vulnerable library spring web release jar spring web library home page a href path to dependency file async build gradle path to vulnerable library root gradle caches modules files org springframework spring web release spring web release jar root gradle caches modules files org springframework spring web release spring web release jar dependency hierarchy spring boot starter web release jar root library spring webmvc release jar x spring web release jar vulnerable library vulnerability details in spring framework versions and older unsupported versions it is possible for a user to provide malicious input to cause the insertion of additional log entries publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring core release org springframework spring web release org springframework spring webmvc release org springframework spring webflux release step up your open 
source security game with whitesource | 0 |
179,999 | 30,343,709,467 | IssuesEvent | 2023-07-11 14:13:06 | NCIOCPL/ncids | https://api.github.com/repos/NCIOCPL/ncids | closed | [R2.1 Content] Confirm Content for Rotating Feature Card Content | Track: Design | **Description:** Per R2.1 content launch schedule, need to solidify the feature card row content that is intended to be added to the rotating/timely feature card rows ahead of launch.
**Sub-tasks:**
- [x] Create tracker for featured content [Laurel]
- [x] Review tracker [Lindsay]
- [x] Send tracker to ODDC for dissemination [Anna]
- [x] Feature card team complete the tracker by **Decision made for next steps (see comment) on 7/5**
- Includes Feature Cards and Promo Block (Stomach Cancer)
- [x] Update the home page (English and Spanish) with the featured content from the current News and Events page in the following locations:
- [x] Content template (English) (Laurel)
- [x] Content template (Spanish) (Adriana)
- [x] ACSF-test (English) (Laurel) **[N/A - pages have not been copied down to ACSF-test from Prod]**
- [x] ACSF-test (Spanish) (Adriana) **[N/A - pages have not been copied down to ACSF-test from Prod]**
- [x] Figma comps (Monika)
- [x] Image Excel Tracker (Monika)
- [x] v3 Mural board (Laurel) **[N/A - already noted on v3 Mural]**
- [x] Verify none of the new feature card images have quality concerns **[7/5 - Monika]**
- [x] Sent two issue images to Anna for review
- [x] Update accordingly (per Anna, we will not remediate these images at this time, rather, will build out the pages on 7/10 and review images at that time)
- [x] Verify if changes were made to the featured content and update in all locations, as needed **[7/10]**
- [x] Content template (Adriana)
- [x] ACSF-test (Adriana) **[N/A - pages have not been copied down to ACSF-test from Prod]**
- [x] Figma Comps (Monika)
- [x] Image Excel tracker (Monika) | 1.0 | [R2.1 Content] Confirm Content for Rotating Feature Card Content - **Description:** Per R2.1 content launch schedule, need to solidify the feature card row content that is intended to be added to the rotating/timely feature card rows ahead of launch.
**Sub-tasks:**
- [x] Create tracker for featured content [Laurel]
- [x] Review tracker [Lindsay]
- [x] Send tracker to ODDC for dissemination [Anna]
- [x] Feature card team complete the tracker by **Decision made for next steps (see comment) on 7/5**
- Includes Feature Cards and Promo Block (Stomach Cancer)
- [x] Update the home page (English and Spanish) with the featured content from the current News and Events page in the following locations:
- [x] Content template (English) (Laurel)
- [x] Content template (Spanish) (Adriana)
- [x] ACSF-test (English) (Laurel) **[N/A - pages have not been copied down to ACSF-test from Prod]**
- [x] ACSF-test (Spanish) (Adriana) **[N/A - pages have not been copied down to ACSF-test from Prod]**
- [x] Figma comps (Monika)
- [x] Image Excel Tracker (Monika)
- [x] v3 Mural board (Laurel) **[N/A - already noted on v3 Mural]**
- [x] Verify none of the new feature card images have quality concerns **[7/5 - Monika]**
- [x] Sent two issue images to Anna for review
- [x] Update accordingly (per Anna, we will not remediate these images at this time, rather, will build out the pages on 7/10 and review images at that time)
- [x] Verify if changes were made to the featured content and update in all locations, as needed **[7/10]**
- [x] Content template (Adriana)
- [x] ACSF-test (Adriana) **[N/A - pages have not been copied down to ACSF-test from Prod]**
- [x] Figma Comps (Monika)
- [x] Image Excel tracker (Monika) | non_defect | confirm content for rotating feature card content description per content launch schedule need to solidify the feature card row content that is intended to be added to the rotating timely feature card rows ahead of launch sub tasks create tracker for featured content review tracker send tracker to oddc for dissemination feature card team complete the tracker by decision made for next steps see comment on includes feature cards and promo block stomach cancer update the home page english and spanish with the featured content from the current news and events page in the following locations content template english laurel content template spanish adriana acsf test english laurel acsf test spanish adriana figma comps monika image excel tracker monika mural board laurel verify none of the new feature card images have quality concerns sent two issue images to anna for review update accordingly per anna we will not remediate these images at this time rather will build out the pages on and review images at that time verify if changes were made to the featured content and update in all locations as needed content template adriana acsf test adriana figma comps monika image excel tracker monika | 0 |
70,508 | 30,682,427,977 | IssuesEvent | 2023-07-26 10:01:45 | vmware/singleton | https://api.github.com/repos/vmware/singleton | opened | [BUG] [Java Service]Failed to get/post the key with slash or backslash char with KEY-based APIs. | kind/bug area/java-service priority/medium | **Describe the bug**
commit: 4269c6f4cbf878072d26e92f0944a42237f80f1c
Failed to get/post the key with slash or backslash char with KEY-based APIs.
Effected APIs:
V2:
GET [/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys]
GET [/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys/{key}]
POST [/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys/{key}]
V1: all key-based APIs
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'GET [/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys/{key}]'
2. Input the key name: /\
3. Input other valid info to required parameters, then send the request
4. See error:

**Expected behavior**
The slash and backslash characters (/ \) should be accepted, and no error should pop up.
**Additional context**
The issue is not reproducible with commit b95eb9760798164df51584f57b8358a3e68389a1
| 1.0 | [BUG] [Java Service]Failed to get/post the key with slash or backslash char with KEY-based APIs. - **Describe the bug**
commit: 4269c6f4cbf878072d26e92f0944a42237f80f1c
Failed to get/post the key with slash or backslash char with KEY-based APIs.
Effected APIs:
V2:
GET [/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys]
GET [/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys/{key}]
POST [/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys/{key}]
V1: all key-based APIs
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'GET [/i18n/api/v2/translation/products/{productName}/versions/{version}/locales/{locale}/components/{component}/keys/{key}]'
2. Input the key name: /\
3. Input other valid info to required parameters, then send the request
4. See error:

**Expected behavior**
The slash and backslash characters (/ \) should be accepted, and no error should pop up.
**Additional context**
The issue is not reproducible with commit b95eb9760798164df51584f57b8358a3e68389a1
| non_defect | failed to get post the key with slash or backslash char with key based apis describe the bug commit failed to get post the key with slash or backslash char with key based apis effected apis get get post all key based apis to reproduce steps to reproduce the behavior go to get input the key name input other valid info to required parameters then send the request see error expected behavior chars slash and backslash can be accepted and no error pops up additional context the issue is not reproducible with commit | 0 |
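As a client-side work-around sketch for the report above (illustrative only; it does not fix the service itself), percent-encoding the key keeps `/` and `\` intact through URL path routing:

```python
from urllib.parse import quote, unquote

key = "/\\"                    # the failing key: a slash and a backslash
encoded = quote(key, safe="")  # safe="" forces "/" to be encoded too
print(encoded)                 # %2F%5C

assert unquote(encoded) == key  # the encoding round-trips losslessly
```

This only works if the server decodes the path segment before using it as the key, which is an assumption about the service's routing behavior.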
34,623 | 7,458,075,624 | IssuesEvent | 2018-03-30 08:32:28 | kerdokullamae/test_koik_issued | https://api.github.com/repos/kerdokullamae/test_koik_issued | closed | Kleio search button does not find existing items | C: SDB P: high R: wontfix T: defect | **Reported by kati sein on 1 Oct 2014 12:13 UTC**
For example, in Kleio I search project ERA.5040 for "ERA.5040.2.102", but it is not found. The same error occurs on the Tieto test server. | 1.0 | Kleio search button does not find existing items - **Reported by kati sein on 1 Oct 2014 12:13 UTC**
For example, in Kleio I search project ERA.5040 for "ERA.5040.2.102", but it is not found. The same error occurs on the Tieto test server. | defect | kleio otsi nupp ei leia olemasolevaid asju reported by kati sein on oct utc näiteks otsin kleios era projektist era aga seda ei leita tieto testserveris sama viga | 1
43,526 | 5,543,122,819 | IssuesEvent | 2017-03-22 16:18:30 | openshift/origin | https://api.github.com/repos/openshift/origin | closed | [flake] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/core/event/etcd.TestDelete | component/kubernetes kind/test-flake priority/P1 | https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/165/testReport/junit/github.com_openshift_origin_vendor_k8s/io_kubernetes_pkg_registry_core_event_etcd/TestDelete/
```
=== RUN TestDelete
2017-03-21 14:22:30.955023 I | integration: launching 4536839067055683264 (unix://localhost:45368390670556832640)
2017-03-21 14:22:30.956182 I | etcdserver: name = 4536839067055683264
2017-03-21 14:22:30.956267 I | etcdserver: data dir = /openshifttmp/etcd197674038
2017-03-21 14:22:30.956327 I | etcdserver: member dir = /openshifttmp/etcd197674038/member
2017-03-21 14:22:30.956381 I | etcdserver: heartbeat = 10ms
2017-03-21 14:22:30.956426 I | etcdserver: election = 100ms
2017-03-21 14:22:30.956475 I | etcdserver: snapshot count = 0
2017-03-21 14:22:30.956530 I | etcdserver: advertise client URLs = unix://127.0.0.1:2100616715
2017-03-21 14:22:30.956597 I | etcdserver: initial advertise peer URLs = unix://127.0.0.1:2100516715
2017-03-21 14:22:30.956673 I | etcdserver: initial cluster = 4536839067055683264=unix://127.0.0.1:2100516715
2017-03-21 14:22:30.983490 I | etcdserver: starting member 7f1526f1c0b24cac in cluster b6f9385f6c39d8e8
2017-03-21 14:22:30.983630 I | raft: 7f1526f1c0b24cac became follower at term 0
2017-03-21 14:22:30.983724 I | raft: newRaft 7f1526f1c0b24cac [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017-03-21 14:22:30.983817 I | raft: 7f1526f1c0b24cac became follower at term 1
unexpected fault address 0xc420e80000
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x1 addr=0xc420e80000 pc=0x1681fc5]
goroutine 305 [running]:
runtime.throw(0x1abef59, 0x5)
/usr/local/go/src/runtime/panic.go:566 +0x95 fp=0xc4205f4fe8 sp=0xc4205f4fc8
runtime.sigpanic()
/usr/local/go/src/runtime/sigpanic_unix.go:27 +0x288 fp=0xc4205f5040 sp=0xc4205f4fe8
github.com/openshift/origin/vendor/github.com/boltdb/bolt.(*node).write(0xc420268380, 0xc420e7fff0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/boltdb/bolt/node.go:205 +0x195 fp=0xc4205f51d8 sp=0xc4205f5040
github.com/openshift/origin/vendor/github.com/boltdb/bolt.(*Bucket).write(0xc4205f5348, 0x68, 0x3, 0x7fb25c004077)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/boltdb/bolt/bucket.go:598 +0x11f fp=0xc4205f5238 sp=0xc4205f51d8
github.com/openshift/origin/vendor/github.com/boltdb/bolt.(*Bucket).CreateBucket(0xc420330b78, 0x24713a6, 0x3, 0x3, 0x7fb4f0021958, 0xc420e7f300, 0x0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/boltdb/bolt/bucket.go:181 +0x545 fp=0xc4205f5390 sp=0xc4205f5238
github.com/openshift/origin/vendor/github.com/boltdb/bolt.(*Tx).CreateBucket(0xc420330b60, 0x24713a6, 0x3, 0x3, 0x60c3de, 0xc420e7f300, 0xc4205f5478)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/boltdb/bolt/tx.go:108 +0x61 fp=0xc4205f53f0 sp=0xc4205f5390
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend.(*batchTx).UnsafeCreateBucket(0xc420e7f300, 0x24713a6, 0x3, 0x3)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend/batch_tx.go:53 +0xa4 fp=0xc4205f54a0 sp=0xc4205f53f0
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc.NewStore(0x248e480, 0xc420b232c0, 0x248fe20, 0xc420b23f80, 0x2473f40, 0xc420206dd8, 0x1c43c80)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/kvstore.go:116 +0x3d6 fp=0xc4205f5568 sp=0xc4205f54a0
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc.newWatchableStore(0x248e480, 0xc420b232c0, 0x248fe20, 0xc420b23f80, 0x2473f40, 0xc420206dd8, 0xc420b23f80)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/watchable_store.go:73 +0x92 fp=0xc4205f5608 sp=0xc4205f5568
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc.New(0x248e480, 0xc420b232c0, 0x248fe20, 0xc420b23f80, 0x2473f40, 0xc420206dd8, 0x2490140, 0xc420b237a0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/watchable_store.go:68 +0x6f fp=0xc4205f5658 sp=0xc4205f5608
github.com/openshift/origin/vendor/github.com/coreos/etcd/etcdserver.NewServer(0xc420e7b200, 0xc420206dc0, 0x0, 0x0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/etcdserver/server.go:448 +0x1a4a fp=0xc4205f5cc0 sp=0xc4205f5658
github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.(*member).Launch(0xc420e7b200, 0x0, 0xc420a537a0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/cluster.go:593 +0x1d1 fp=0xc4205f5f58 sp=0xc4205f5cc0
github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.(*cluster).Launch.func1(0xc420b22f60, 0xc420e7b200)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/cluster.go:159 +0x39 fp=0xc4205f5f90 sp=0xc4205f5f58
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4205f5f98 sp=0xc4205f5f90
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.(*cluster).Launch
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/cluster.go:160 +0xe9
goroutine 1 [chan receive]:
testing.(*T).Run(0xc42039c180, 0x1ac4e8f, 0xa, 0x1c49df8, 0x4c0601)
/usr/local/go/src/testing/testing.go:647 +0x56e
testing.RunTests.func1(0xc42039c180)
/usr/local/go/src/testing/testing.go:793 +0xba
testing.tRunner(0xc42039c180, 0xc420229d20)
/usr/local/go/src/testing/testing.go:610 +0xca
testing.RunTests(0x1c4a090, 0x24d70e0, 0x3, 0x3, 0x0)
/usr/local/go/src/testing/testing.go:799 +0x4bb
testing.(*M).Run(0xc420229ef0, 0x30)
/usr/local/go/src/testing/testing.go:743 +0x130
main.main()
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/core/event/etcd/_test/_testmain.go:104 +0x2f9
goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2086 +0x1
goroutine 34 [chan receive]:
github.com/openshift/origin/vendor/github.com/golang/glog.(*loggingT).flushDaemon(0x24e5120)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/golang/glog/glog.go:879 +0x9e
created by github.com/openshift/origin/vendor/github.com/golang/glog.init.1
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/golang/glog/glog.go:410 +0x23b
goroutine 45 [chan receive]:
github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/logutil.(*MergeLogger).outputLoop(0xc42039a9a0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:174 +0xb5
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/logutil.NewMergeLogger
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:92 +0x125
goroutine 303 [chan receive]:
github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.(*cluster).Launch(0xc420e7ec00, 0xc42061a180)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/cluster.go:163 +0x17b
github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.NewClusterV3(0xc42061a180, 0xc4204b3b00, 0x51b601)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/cluster.go:841 +0x13e
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/storage/etcd/testing.NewUnsecuredEtcd3TestClientServer(0xc42061a180, 0x101c4204b3580, 0x15)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/storage/etcd/testing/utils.go:315 +0xd0
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/registrytest.NewEtcdStorage(0xc42061a180, 0x0, 0x0, 0x0, 0xc400000000)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/registrytest/etcd.go:40 +0x51
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/core/event/etcd.newStorage(0xc42061a180, 0xc42005c9c0, 0x0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/core/event/etcd/etcd_test.go:32 +0x63
github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/core/event/etcd.TestDelete(0xc42061a180)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/core/event/etcd/etcd_test.go:90 +0x40
testing.tRunner(0xc42061a180, 0x1c49df8)
/usr/local/go/src/testing/testing.go:610 +0xca
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:646 +0x530
goroutine 7 [chan receive]:
github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/logutil.(*MergeLogger).outputLoop(0xc420446520)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:174 +0xb5
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/logutil.NewMergeLogger
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/logutil/merge_logger.go:92 +0x125
goroutine 340 [syscall]:
syscall.Syscall6(0x11d, 0xa, 0x0, 0x0, 0x3d09000, 0x0, 0x0, 0x1, 0xc420088aa8, 0xc420a6cd80)
/usr/local/go/src/syscall/asm_linux_amd64.s:44 +0x5
syscall.Fallocate(0xa, 0x0, 0x0, 0x3d09000, 0x0, 0x0)
/usr/local/go/src/syscall/zsyscall_linux_amd64.go:399 +0x80
github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/fileutil.preallocExtend(0xc420088aa0, 0x3d09000, 0x0, 0xc420088aa8)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/fileutil/preallocate_unix.go:26 +0x68
github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/fileutil.Preallocate(0xc420088aa0, 0x3d09000, 0x1, 0xc400000180, 0xc420088aa8)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/fileutil/preallocate.go:26 +0x50
github.com/openshift/origin/vendor/github.com/coreos/etcd/wal.(*filePipeline).alloc(0xc4201e0240, 0x1c4a678, 0xc420b235c0, 0x248e500)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/wal/file_pipeline.go:72 +0x2dc
github.com/openshift/origin/vendor/github.com/coreos/etcd/wal.(*filePipeline).run(0xc4201e0240)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/wal/file_pipeline.go:84 +0x9f
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/wal.newFilePipeline
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/wal/file_pipeline.go:47 +0x1e8
goroutine 342 [runnable]:
github.com/openshift/origin/vendor/github.com/coreos/etcd/lease.(*lessor).runLoop(0xc420b23f80)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/lease/lessor.go:422
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/lease.newLessor
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/lease/lessor.go:169 +0x2da
goroutine 304 [IO wait]:
net.runtime_pollWait(0x7fb4f474a090, 0x72, 0x413a23)
/usr/local/go/src/runtime/netpoll.go:160 +0x5e
net.(*pollDesc).wait(0xc420563870, 0x72, 0x4102d8, 0x1942c20)
/usr/local/go/src/net/fd_poll_runtime.go:73 +0x5b
net.(*pollDesc).waitRead(0xc420563870, 0x247c380, 0xc42007a080)
/usr/local/go/src/net/fd_poll_runtime.go:78 +0x42
net.(*netFD).accept(0xc420563810, 0x0, 0x2479bc0, 0xc420e7f2a0)
/usr/local/go/src/net/fd_unix.go:419 +0x2b8
net.(*UnixListener).accept(0xc420e7f120, 0x0, 0x0, 0x0)
/usr/local/go/src/net/unixsock_posix.go:158 +0x51
net.(*UnixListener).Accept(0xc420e7f120, 0xc420acbf30, 0xc420acbf40, 0x45c150, 0xc420acbf30)
/usr/local/go/src/net/unixsock.go:229 +0x50
github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/transport.(*unixListener).Accept(0xc4203dc100, 0x1c43c58, 0xc420b22f00, 0x38d2368800495851, 0xed0632e26)
<autogenerated>:91 +0x69
github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.(*bridge).serveListen(0xc420b22f00)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/bridge.go:90 +0x91
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.newBridge
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/bridge.go:54 +0x466
goroutine 341 [select]:
github.com/openshift/origin/vendor/github.com/coreos/etcd/raft.(*node).run(0xc420b237a0, 0xc420344e10)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/raft/node.go:309 +0x134a
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/raft.StartNode
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/raft/node.go:206 +0x800
goroutine 339 [select]:
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend.(*backend).run(0xc420b232c0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:193 +0x26b
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend.newBackend
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend/backend.go:119 +0x29e
goroutine 343 [runnable]:
github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/schedule.(*fifo).run(0xc42064a120)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/schedule/schedule.go:130
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/schedule.NewFIFOScheduler
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/pkg/schedule/schedule.go:71 +0x2e8
```
| 1.0 | [flake] github.com/openshift/origin/vendor/k8s.io/kubernetes/pkg/registry/core/event/etcd.TestDelete - https://ci.openshift.redhat.com/jenkins/job/merge_pull_request_origin/165/testReport/junit/github.com_openshift_origin_vendor_k8s/io_kubernetes_pkg_registry_core_event_etcd/TestDelete/
```
=== RUN TestDelete
2017-03-21 14:22:30.955023 I | integration: launching 4536839067055683264 (unix://localhost:45368390670556832640)
2017-03-21 14:22:30.956182 I | etcdserver: name = 4536839067055683264
2017-03-21 14:22:30.956267 I | etcdserver: data dir = /openshifttmp/etcd197674038
2017-03-21 14:22:30.956327 I | etcdserver: member dir = /openshifttmp/etcd197674038/member
2017-03-21 14:22:30.956381 I | etcdserver: heartbeat = 10ms
2017-03-21 14:22:30.956426 I | etcdserver: election = 100ms
2017-03-21 14:22:30.956475 I | etcdserver: snapshot count = 0
2017-03-21 14:22:30.956530 I | etcdserver: advertise client URLs = unix://127.0.0.1:2100616715
2017-03-21 14:22:30.956597 I | etcdserver: initial advertise peer URLs = unix://127.0.0.1:2100516715
2017-03-21 14:22:30.956673 I | etcdserver: initial cluster = 4536839067055683264=unix://127.0.0.1:2100516715
2017-03-21 14:22:30.983490 I | etcdserver: starting member 7f1526f1c0b24cac in cluster b6f9385f6c39d8e8
2017-03-21 14:22:30.983630 I | raft: 7f1526f1c0b24cac became follower at term 0
2017-03-21 14:22:30.983724 I | raft: newRaft 7f1526f1c0b24cac [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017-03-21 14:22:30.983817 I | raft: 7f1526f1c0b24cac became follower at term 1
unexpected fault address 0xc420e80000
fatal error: fault
[signal SIGSEGV: segmentation violation code=0x1 addr=0xc420e80000 pc=0x1681fc5]
goroutine 305 [running]:
runtime.throw(0x1abef59, 0x5)
/usr/local/go/src/runtime/panic.go:566 +0x95 fp=0xc4205f4fe8 sp=0xc4205f4fc8
runtime.sigpanic()
/usr/local/go/src/runtime/sigpanic_unix.go:27 +0x288 fp=0xc4205f5040 sp=0xc4205f4fe8
github.com/openshift/origin/vendor/github.com/boltdb/bolt.(*node).write(0xc420268380, 0xc420e7fff0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/boltdb/bolt/node.go:205 +0x195 fp=0xc4205f51d8 sp=0xc4205f5040
github.com/openshift/origin/vendor/github.com/boltdb/bolt.(*Bucket).write(0xc4205f5348, 0x68, 0x3, 0x7fb25c004077)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/boltdb/bolt/bucket.go:598 +0x11f fp=0xc4205f5238 sp=0xc4205f51d8
github.com/openshift/origin/vendor/github.com/boltdb/bolt.(*Bucket).CreateBucket(0xc420330b78, 0x24713a6, 0x3, 0x3, 0x7fb4f0021958, 0xc420e7f300, 0x0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/boltdb/bolt/bucket.go:181 +0x545 fp=0xc4205f5390 sp=0xc4205f5238
github.com/openshift/origin/vendor/github.com/boltdb/bolt.(*Tx).CreateBucket(0xc420330b60, 0x24713a6, 0x3, 0x3, 0x60c3de, 0xc420e7f300, 0xc4205f5478)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/boltdb/bolt/tx.go:108 +0x61 fp=0xc4205f53f0 sp=0xc4205f5390
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend.(*batchTx).UnsafeCreateBucket(0xc420e7f300, 0x24713a6, 0x3, 0x3)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/backend/batch_tx.go:53 +0xa4 fp=0xc4205f54a0 sp=0xc4205f53f0
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc.NewStore(0x248e480, 0xc420b232c0, 0x248fe20, 0xc420b23f80, 0x2473f40, 0xc420206dd8, 0x1c43c80)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/kvstore.go:116 +0x3d6 fp=0xc4205f5568 sp=0xc4205f54a0
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc.newWatchableStore(0x248e480, 0xc420b232c0, 0x248fe20, 0xc420b23f80, 0x2473f40, 0xc420206dd8, 0xc420b23f80)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/watchable_store.go:73 +0x92 fp=0xc4205f5608 sp=0xc4205f5568
github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc.New(0x248e480, 0xc420b232c0, 0x248fe20, 0xc420b23f80, 0x2473f40, 0xc420206dd8, 0x2490140, 0xc420b237a0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/mvcc/watchable_store.go:68 +0x6f fp=0xc4205f5658 sp=0xc4205f5608
github.com/openshift/origin/vendor/github.com/coreos/etcd/etcdserver.NewServer(0xc420e7b200, 0xc420206dc0, 0x0, 0x0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/etcdserver/server.go:448 +0x1a4a fp=0xc4205f5cc0 sp=0xc4205f5658
github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.(*member).Launch(0xc420e7b200, 0x0, 0xc420a537a0)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/cluster.go:593 +0x1d1 fp=0xc4205f5f58 sp=0xc4205f5cc0
github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.(*cluster).Launch.func1(0xc420b22f60, 0xc420e7b200)
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/cluster.go:159 +0x39 fp=0xc4205f5f90 sp=0xc4205f5f58
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4205f5f98 sp=0xc4205f5f90
created by github.com/openshift/origin/vendor/github.com/coreos/etcd/integration.(*cluster).Launch
/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/github.com/coreos/etcd/integration/cluster.go:160 +0xe9
```
| non_defect | 0
6,548 | 2,610,256,694 | IssuesEvent | 2015-02-26 19:21:57 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳激光祛痘机构 | auto-migrated Priority-Medium Type-Defect | ```
深圳激光祛痘机构【深圳韩方科颜全国热线400-869-1818,24小时
QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘��
�——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方�
��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健
康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业��
�疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘�
��。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:35 | 1.0 | 深圳激光祛痘机构 - ```
深圳激光祛痘机构【深圳韩方科颜全国热线400-869-1818,24小时
QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘��
�——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方�
��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健
康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业��
�疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘�
��。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:35 | defect | 深圳激光祛痘机构 深圳激光祛痘机构【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘�� �——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方� ��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健 康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业�� �疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘� ��。 original issue reported on code google com by szft com on may at | 1 |
17,501 | 10,710,452,178 | IssuesEvent | 2019-10-25 02:17:11 | Aquatop/Principal | https://api.github.com/repos/Aquatop/Principal | closed | TS - Create the aquarium schema | aquarium crud service software technical story | ### Description
Create a schema with the data needed to register an aquarium.
### Acceptance Criteria
- [x] Contain the data needed to register an aquarium;
- [x] Be correctly connected to the database.
### Tasks
- [x] Insert the aquarium form (to be defined).
**Effort Estimate**: 3 points | 1.0 | TS - Create the aquarium schema - ### Description
Create a schema with the data needed to register an aquarium.
### Acceptance Criteria
- [x] Contain the data needed to register an aquarium;
- [x] Be correctly connected to the database.
### Tasks
- [x] Insert the aquarium form (to be defined).
**Effort Estimate**: 3 points | non_defect | ts criar schema do aquário descrição criar um schema com os dados necessários para um cadastro de aquário critérios de aceitação possuir os dados necessários para um cadastro de aquário estar corretamente conectado com o banco de dados tarefas inserir formulário do aquário a ser definido estimativa de esforço pontos | 0
216,363 | 16,657,203,535 | IssuesEvent | 2021-06-05 18:49:27 | ClassicNick/palemoon26 | https://api.github.com/repos/ClassicNick/palemoon26 | opened | Build Instructions | documentation | Recommended: Windows 7 SDK (please also install .NET Framework 4.0), Visual Studio 2010, MozillaBuild 1.9
Optional: Windows 7 SDK, Visual Studio 2008 (Not operational yet), MozillaBuild 1.6-1.8 (also not operational yet) | 1.0 | Build Instructions - Recommended: Windows 7 SDK (please also install .NET Framework 4.0), Visual Studio 2010, MozillaBuild 1.9
Optional: Windows 7 SDK, Visual Studio 2008 (Not operational yet), MozillaBuild 1.6-1.8 (also not operational yet) | non_defect | build instructions recommended windows sdk please also install net framework visual studio mozillabuild optional windows sdk visual studio not operational yet mozillabuild also not operational yet | 0 |
28,089 | 5,186,118,823 | IssuesEvent | 2017-01-20 12:57:32 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | datatable emptyMessage not working 2.0.0-rc.1 | defect | **I'm submitting a ...**
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
http://plnkr.co/edit/tehXSsrFFMBCG9tdlAcJ?p=preview
**Current behavior**
empty message is not displayed when no data
**Expected behavior**
empty message should be displayed when no data
**Minimal reproduction of the problem with instructions**
add an emptyMessage attribute to your datatable, the message will not display.
**What is the motivation / use case for changing the behavior?**
<!-- Describe the motivation or the concrete use case -->
**Please tell us about your environment:**
Windows, Atom, angular-cli, tomcat backend server
* **Angular version:** 2.0.1
* **PrimeNG version:** 2.0.0-rc.1
Was working on previous version
* **Browser:** [all]
* **Language:** [all]
* **Node (for AoT issues):** `node --version` =
v6.9.2 | 1.0 | datatable emptyMessage not working 2.0.0-rc.1 - **I'm submitting a ...**
```
[x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
```
**Plunkr Case (Bug Reports)**
http://plnkr.co/edit/tehXSsrFFMBCG9tdlAcJ?p=preview
**Current behavior**
empty message is not displayed when no data
**Expected behavior**
empty message should be displayed when no data
**Minimal reproduction of the problem with instructions**
add an emptyMessage attribute to your datatable, the message will not display.
**What is the motivation / use case for changing the behavior?**
<!-- Describe the motivation or the concrete use case -->
**Please tell us about your environment:**
Windows, Atom, angular-cli, tomcat backend server
* **Angular version:** 2.0.1
* **PrimeNG version:** 2.0.0-rc.1
Was working on previous version
* **Browser:** [all]
* **Language:** [all]
* **Node (for AoT issues):** `node --version` =
v6.9.2 | defect | datatable emptymessage not working rc i m submitting a bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports current behavior empty message is not displayed when no data expected behavior empty message should be displayed when no data minimal reproduction of the problem with instructions add an emptymessage attribute to your datatable the message will not display what is the motivation use case for changing the behavior please tell us about your environment windows atom angular cli tomcat backend server angular version primeng version rc was working on previous version browser language node for aot issues node version | 1 |
13,038 | 2,732,889,733 | IssuesEvent | 2015-04-17 10:01:01 | tiku01/oryx-editor | https://api.github.com/repos/tiku01/oryx-editor | closed | displaced elements of BPMN 2.0 diagram | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.Load from repository a diagram.
2.view the diagram in the editor
What is the expected output?
the modelled diagram.
What do you see instead?
A diagram which elements have been displaced.
Please provide any additional information below.
when I import the json file of the diagram, I get the same displaced view of
the diagram elements.
```
Original issue reported on code.google.com by `123emma4...@googlemail.com` on 30 Jun 2010 at 4:08 | 1.0 | displaced elements of BPMN 2.0 diagram - ```
What steps will reproduce the problem?
1.Load from repository a diagram.
2.view the diagram in the editor
What is the expected output?
the modelled diagram.
What do you see instead?
A diagram which elements have been displaced.
Please provide any additional information below.
when I import the json file of the diagram, I get the same displaced view of
the diagram elements.
```
Original issue reported on code.google.com by `123emma4...@googlemail.com` on 30 Jun 2010 at 4:08 | defect | displaced elements of bpmn diagram what steps will reproduce the problem load from repository a diagram view the diagram in the editor what is the expected output the modelled diagram what do you see instead a diagram which elements have been displaced please provide any additional information below when i import the json file of the diagram i get the same displaced view of the diagram elements original issue reported on code google com by googlemail com on jun at | 1 |
19,611 | 6,743,680,833 | IssuesEvent | 2017-10-20 13:06:52 | kubevirt/kubevirt | https://api.github.com/repos/kubevirt/kubevirt | closed | make tries to read non-existent .glide.lock.hash on fresh checkout | kind/bug topic/build | I just cloned a fresh checkout of kubevirt and ran 'make manifests docker'. I noticed that make tried to read a file which did not exist:
$ make manifests docker
./hack/build-manifests.sh
test -f .glide.yaml.hash || md5sum glide.yaml > .glide.yaml.hash
if [ "`md5sum glide.yaml`" != "`cat .glide.yaml.hash`" ]; then \
glide cc; \
glide update --strip-vendor; \
md5sum glide.yaml > .glide.yaml.hash; \
md5sum glide.lock > .glide.lock.hash; \
elif [ "`md5sum glide.lock`" != "`cat .glide.lock.hash`" ]; then \
make sync; \
fi
cat: .glide.lock.hash: No such file or directory
....
everything seems to carry on ok after this, so harmless, but might be nice to avoid triggering the error message | 1.0 | make tries to read non-existent .glide.lock.hash on fresh checkout - I just cloned a fresh checkout of kubevirt and ran 'make manifests docker'. I noticed that make tried to read a file which did not exist:
$ make manifests docker
./hack/build-manifests.sh
test -f .glide.yaml.hash || md5sum glide.yaml > .glide.yaml.hash
if [ "`md5sum glide.yaml`" != "`cat .glide.yaml.hash`" ]; then \
glide cc; \
glide update --strip-vendor; \
md5sum glide.yaml > .glide.yaml.hash; \
md5sum glide.lock > .glide.lock.hash; \
elif [ "`md5sum glide.lock`" != "`cat .glide.lock.hash`" ]; then \
make sync; \
fi
cat: .glide.lock.hash: No such file or directory
....
everything seems to carry on ok after this, so harmless, but might be nice to avoid triggering the error message | non_defect | make tries to read non existant glide lock hash on fresh checkout i just cloned a fresh checkout of kubevirt and ran make manifests docker i noticed that make tried to read a file which did not exist make manifests docker hack build manifests sh test f glide yaml hash glide yaml glide yaml hash if then glide cc glide update strip vendor glide yaml glide yaml hash glide lock glide lock hash elif then make sync fi cat glide lock hash no such file or directory everything seems to carry on ok after this so harmless but might be nice to avoid triggering the error message | 0 |
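The harmless error in the issue above comes from `cat` running against a hash file that does not exist yet on a fresh checkout. One way to avoid it is to treat a missing hash file as "stale". A minimal sketch of that guard, written in Python purely for illustration (the helper names are hypothetical; the real logic lives in the project's Makefile):

```python
import hashlib
from pathlib import Path

def file_md5(path: str) -> str:
    """Return the hex MD5 digest of a file's contents."""
    return hashlib.md5(Path(path).read_bytes()).hexdigest()

def is_stale(tracked: str, hash_file: str) -> bool:
    """True when `tracked` changed since the digest stored in `hash_file`.

    A missing hash file simply means "stale": no error is raised,
    which is the behaviour the Makefile's bare `cat` lacks.
    """
    recorded = Path(hash_file)
    if not recorded.exists():
        return True
    return file_md5(tracked) != recorded.read_text().split()[0]

def record(tracked: str, hash_file: str) -> None:
    """Store the current digest, mirroring `md5sum glide.yaml > .hash`."""
    Path(hash_file).write_text(f"{file_md5(tracked)}  {tracked}\n")
```

A fresh checkout then takes the "stale" branch instead of printing `No such file or directory`.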
211,445 | 16,445,238,766 | IssuesEvent | 2021-05-20 18:45:21 | EclipseFdn/react-eclipsefdn-members | https://api.github.com/repos/EclipseFdn/react-eclipsefdn-members | closed | Create base API for retrieving and storing form data | Backend documentation | To supply the form data, we require an endpoint that can both send and receive form data for a given user and form. We should enable multiple forms per user by having a unique form ID. To better enable partial filling of the form, we will need multiple endpoints for the different points of data being stored (basic form data, contacts, addresses). We should have an update and read operation for each of these data points, with the option in the future of deletion to perform cleanup in case a user has excessive forms and requests it.
Part of the solution for this issue should provide an OpenAPI spec to enable better support of the endpoint + web UI. This spec should cover all of the requests made in an end-to-end form filling session, including deletion requests (which while not currently in scope is useful in development at least). | 1.0 | Create base API for retrieving and storing form data - To supply the form data, we require an endpoint that can both send and receive form data for a given user and form. We should enable multiple forms per user by having a unique form ID. To better enable partial filling of the form, we will need multiple endpoints for the different points of data being stored (basic form data, contacts, addresses). We should have an update and read operation for each of these data points, with the option in the future of deletion to perform cleanup in case a user has excessive forms and requests it.
Part of the solution for this issue should provide an OpenAPI spec to enable better support of the endpoint + web UI. This spec should cover all of the requests made in an end-to-end form filling session, including deletion requests (which while not currently in scope is useful in development at least). | non_defect | create base api for retrieving and storing form data to supply the form data we require an endpoint that can both send and receive form data for a given user and form we should enable multiple forms per user by having a unique form id to better enable partial filling of the form we will need multiple endpoints for the different points of data being stored basic form data contacts addresses we should have an update and read operation for each of these data points with the option in the future of deletion to perform cleanup in case a user has excessive forms and requests it part of the solution for this issue should provide an openapi spec to enable better support of the endpoint web ui this spec should cover all of the requests made in an end to end form filling session including deletion requests which while not currently in scope is useful in development at least | 0 |
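The endpoint set described above (per-user forms with unique IDs, separately updatable data points, optional deletion for cleanup) can be modelled with a small in-memory store. This is a sketch in Python for illustration only; the class and method names are hypothetical, and the real backend is a Java service described by an OpenAPI spec:

```python
import uuid

class FormStore:
    """In-memory stand-in for the form-data endpoints: one record per
    form ID, with basic form data, contacts, and addresses stored as
    separately updatable sections, plus deletion for cleanup."""

    SECTIONS = ("form", "contacts", "addresses")

    def __init__(self):
        self._records = {}  # form_id -> {"user": ..., section: payload}

    def create(self, user_id: str) -> str:
        form_id = str(uuid.uuid4())  # unique form ID enables multiple forms per user
        self._records[form_id] = {"user": user_id,
                                  **{s: None for s in self.SECTIONS}}
        return form_id

    def update(self, form_id: str, section: str, payload) -> None:
        if section not in self.SECTIONS:
            raise KeyError(f"unknown section: {section}")
        self._records[form_id][section] = payload

    def read(self, form_id: str, section: str):
        return self._records[form_id][section]

    def delete(self, form_id: str) -> None:
        self._records.pop(form_id, None)
```

Because each section updates independently, a partially filled form is just a record whose remaining sections are still empty.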
216,417 | 16,761,076,226 | IssuesEvent | 2021-06-13 19:57:56 | bounswe/2021SpringGroup3 | https://api.github.com/repos/bounswe/2021SpringGroup3 | closed | Implement Test: getProfile | Component: Junit-Testing Priority: Medium Status: Review Needed Type: Testing | Can you please implement the tests for getting profile by id functionality for both functions in Profile Service and Profile Controller? | 2.0 | Implement Test: getProfile - Can you please implement the tests for getting profile by id functionality for both functions in Profile Service and Profile Controller? | non_defect | implement test getprofile can you please implement the tests for getting profile by id functionality for both functions in profile service and profile controller | 0 |
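Since the issue above names both a service and a controller method, the two tests reduce to: the service resolves an existing ID and signals a missing one, and the controller merely delegates. A sketch of that structure with bare assertions (Python for illustration; the actual project would express the same cases in JUnit, and every name here is hypothetical):

```python
class ProfileNotFound(Exception):
    """Raised when no profile exists for the requested ID."""

class ProfileService:
    """Looks up profiles by ID from an injected repository (a dict here)."""
    def __init__(self, repository):
        self._repo = repository

    def get_profile(self, profile_id):
        try:
            return self._repo[profile_id]
        except KeyError:
            raise ProfileNotFound(profile_id)

class ProfileController:
    """Thin HTTP-layer stand-in that delegates to the service."""
    def __init__(self, service):
        self._service = service

    def get_profile(self, profile_id):
        profile = self._service.get_profile(profile_id)
        return {"status": 200, "body": profile}
```

The test for the controller then only needs to check that the service's result is passed through unchanged, while the service test covers both the found and not-found branches.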
63,224 | 17,465,112,860 | IssuesEvent | 2021-08-06 15:42:14 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | code block expand-collapse icon missing when `Expand code blocks by default` is not set | T-Defect | Hello!
When `Expand code blocks by default` is set:

When `Expand code blocks by default` is not set:

Code block expand-collapse icon should also be there when `Expand code blocks by default` is not set which then can be used to expand the code block.
#### Version information
- **Platform**: web (in-browser)
For the web app:
- **Browser**: Firefox 90.0 (64-bit)
- **OS**: Ubuntu 20.04.2 LTS
- **URL**: app.element.io | 1.0 | code block expand-collapse icon missing when `Expand code blocks by default` is not set - Hello!
When `Expand code blocks by default` is set:

When `Expand code blocks by default` is not set:

Code block expand-collapse icon should also be there when `Expand code blocks by default` is not set which then can be used to expand the code block.
#### Version information
- **Platform**: web (in-browser)
For the web app:
- **Browser**: Firefox 90.0 (64-bit)
- **OS**: Ubuntu 20.04.2 LTS
- **URL**: app.element.io | defect | code block expand collapse icon missing when expand code blocks by default is not set hello when expand code blocks by default is set when expand code blocks by default is not set code block expand collapse icon should also be there when expand code blocks by default is not set which then can be used to expand the code block version information platform web in browser for the web app browser firefox bit os ubuntu lts url app element io | 1 |
26,216 | 4,623,391,234 | IssuesEvent | 2016-09-27 10:48:35 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | Cluster Shutdown Queue offer and poll, more elements offered than taken. | Team: Core Type: Critical Type: Defect | 3.7.2, 3.8-SNAPSHOT
This nightly build
https://hazelcast-l337.ci.cloudbees.com/view/shutdown/job/shutdown-queue/
e.g. https://hazelcast-l337.ci.cloudbees.com/view/shutdown/job/shutdown-queue/2/console
Is Failing with
```
HzMember1HZ validate hz.queue.Assert threadId=0 global.AssertionException: queueZ2-offer=9877 != queueZ2-poll=9876
```
A version of this test, without cluster manipulation, seems to pass without issue, 3 of 3.
However, with cluster manipulation (cluster member shutdown) we take fewer items than offered.
e.g. https://hazelcast-l337.ci.cloudbees.com/view/shutdown/job/shutdown-queue/4/console
```
HzMember1HZ validate hz.queue.Assert threadId=0 global.AssertionException: queueZ12-offer=2042 != queueZ12-poll=2041
```
We can see that the first queues, with names queueZ0 to queueZ11, offered and took the same number of items.
Using a 4-member cluster and 4 clients, over 4 EC2 boxes:
- we make offers to 200 named queues at random, from Client1 using 1 thread, with a timeout.
- we randomly poll the 200 queues, from all 4 clients using 1 thread, with a timeout.
At the end, all polling clients pause, then poll each queue while it still has items, in an effort to catch the last items in the queue.
tests settings and script
https://github.com/Danny-Hazelcast/hzCmd-bench/tree/master/lab/hz/shutdown/queue
test offer
https://github.com/Danny-Hazelcast/hzCmd-bench/blob/master/src/main/java/hz/queue/Offer.java
test poll
https://github.com/Danny-Hazelcast/hzCmd-bench/blob/master/src/main/java/hz/queue/Poll.java#L35
We are pausing, and polling all the queues repeatedly, to catch the last items offered.
I have seen a second failure mode:
https://hazelcast-l337.ci.cloudbees.com/view/shutdown/job/shutdown-queue/3/
where one of the clients fails to return from the call to poll / offer, and so the test hangs.
I have only seen this in the cluster shutdown case. | 1.0 | Cluster Shutdown Queue offer and poll, more elements offered than taken. - 3.7.2, 3.8-SNAPSHOT
This nightly build
https://hazelcast-l337.ci.cloudbees.com/view/shutdown/job/shutdown-queue/
e.g. https://hazelcast-l337.ci.cloudbees.com/view/shutdown/job/shutdown-queue/2/console
Is Failing with
```
HzMember1HZ validate hz.queue.Assert threadId=0 global.AssertionException: queueZ2-offer=9877 != queueZ2-poll=9876
```
A version of this test, without cluster manipulation, seems to pass without issue, 3 of 3.
However, with cluster manipulation (cluster member shutdown) we take fewer items than offered.
e.g. https://hazelcast-l337.ci.cloudbees.com/view/shutdown/job/shutdown-queue/4/console
```
HzMember1HZ validate hz.queue.Assert threadId=0 global.AssertionException: queueZ12-offer=2042 != queueZ12-poll=2041
```
We can see that the first queues, with names queueZ0 to queueZ11, offered and took the same number of items.
Using a 4-member cluster and 4 clients, over 4 EC2 boxes:
- we make offers to 200 named queues at random, from Client1 using 1 thread, with a timeout.
- we randomly poll the 200 queues, from all 4 clients using 1 thread, with a timeout.
At the end, all polling clients pause, then poll each queue while it still has items, in an effort to catch the last items in the queue.
tests settings and script
https://github.com/Danny-Hazelcast/hzCmd-bench/tree/master/lab/hz/shutdown/queue
test offer
https://github.com/Danny-Hazelcast/hzCmd-bench/blob/master/src/main/java/hz/queue/Offer.java
test poll
https://github.com/Danny-Hazelcast/hzCmd-bench/blob/master/src/main/java/hz/queue/Poll.java#L35
We are pausing, and polling all the queues repeatedly, to catch the last items offered.
I have seen a second failure mode:
https://hazelcast-l337.ci.cloudbees.com/view/shutdown/job/shutdown-queue/3/
where one of the clients fails to return from the call to poll / offer, and so the test hangs.
I have only seen this in the cluster shutdown case. | defect | cluster shutdown queue offer and poll more elements offered than taken sanpshot this nightly build e g is failing with validate hz queue assert threadid global assertionexception offer poll a version of this test with out cluster manipulation seems to pass with out issue of however with cluster manipulation cluster member shutdown we take less items than offered e g validate hz queue assert threadid global assertionexception offer poll we can see that the first queues with names to offered and took the same number of items using a member cluster and clients over boxes we make offers to named queues at random from using thread using a timeout we randomly poll the queues from all clients using thread using a time out at the end all polling clients pause and poll each queue while it has items in an effort to catch the last items in the queue tests settings and script test offer test poll we are pausing and polling repeatedly all the queues to catch last items offered i have seen a fail mode where of clients fails to return from the call to poll offer and so the test hangs i have only seen this in the cluster shutdown case | 1 |
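The invariant the test above asserts, that every successfully offered item is eventually polled, can be reproduced in miniature with a bounded queue, timed offers and polls, and a final drain. This is a single-process Python sketch of the Offer/Poll benchmark's shape, not the actual Hazelcast client:

```python
import queue
import threading

def run_offer_poll(total_items: int, pollers: int = 4):
    """Offer `total_items` with a timeout from one thread, poll from
    several, then drain; returns (offered, polled), which must match."""
    q = queue.Queue(maxsize=16)
    polled = []
    lock = threading.Lock()
    done = threading.Event()

    def poll_loop():
        # Keep polling until the producer is finished AND the queue is drained,
        # mirroring the benchmark's final "pause, then poll while non-empty" pass.
        while not done.is_set() or not q.empty():
            try:
                item = q.get(timeout=0.05)   # poll with timeout
            except queue.Empty:
                continue
            with lock:
                polled.append(item)

    threads = [threading.Thread(target=poll_loop) for _ in range(pollers)]
    for t in threads:
        t.start()

    offered = 0
    for i in range(total_items):
        q.put(i, timeout=1.0)                # offer with timeout
        offered += 1

    done.set()                               # producer finished; drain remainder
    for t in threads:
        t.join()
    return offered, len(polled)
```

In a healthy single queue, `offered == polled` always holds here; the issue reports that the same invariant breaks for the distributed queue once a cluster member is shut down mid-run.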
176,582 | 6,561,229,192 | IssuesEvent | 2017-09-07 12:33:18 | envistaInteractive/itagroup-ecommerce-template | https://api.github.com/repos/envistaInteractive/itagroup-ecommerce-template | opened | Events: Checkout | Confirmation | High Priority Page Layout | ### Summary
Layout contents of Events: Checkout | Confirmation page as specified on Events: Checkout | Confirmation in Zeplin.
We do not have color mockups. The top bar is the blue that is also used on the checkout pages. Use those same classes and html. We will move that out of the checkout to be more generic later.
Use a mobile first approach to adjust the layout using responsive design as the screen gets larger.
**Use branch**: feature/events-checkout
**Layout file**: templates/events/checkout-confirmation.liquid (file does not exist)
**Url for testing**: http://localhost:1337/events/checkout/confirmation
**Delivery Date**: Sept 10th | 1.0 | Events: Checkout | Confirmation - ### Summary
Layout contents of Events: Checkout | Confirmation page as specified on Events: Checkout | Confirmation in Zeplin.
We do not have color mockups. The top bar is the blue that is also used on the checkout pages. Use those same classes and html. We will move that out of the checkout to be more generic later.
Use a mobile first approach to adjust the layout using responsive design as the screen gets larger.
**Use branch**: feature/events-checkout
**Layout file**: templates/events/checkout-confirmation.liquid (file does not exist)
**Url for testing**: http://localhost:1337/events/checkout/confirmation
**Delivery Date**: Sept 10th | non_defect | events checkout confirmation summary layout contents of events checkout confirmation page as specified on events checkout confirmation in zeplin we do not have color mockups the top bar is the blue that is also used on the checkout pages use those same classes and html we will move that out of the checkout to be more generic later use a mobile first approach to adjust the layout using responsive design as the screen gets larger use branch feature events checkout layout file templates events checkout confirmation liquid file does not exist url for testing delivery date sept | 0 |
28,220 | 12,810,162,382 | IssuesEvent | 2020-07-03 17:42:44 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | Tainting aws_instance resources with count > 1 causes recreation of other unrelated resources | bug service/ec2 stale | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
```
Terraform v0.11.7
+ provider.aws v1.21.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_instance
* aws_volume_attachment
* (others too)
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
terraform {
required_version = ">= 0.11"
}
provider "aws" {
region = "us-east-1"
}
locals
{
server_count = 3
availability_zone = [ "us-east-1a", "us-east-1b", "us-east-1c" ]
key_name = "buildkey"
ami = "ami-97785bed"
instance_type = "t2.micro"
}
resource "aws_instance" "foo" {
count = "${local.server_count}"
ami = "${local.ami}"
availability_zone = "${element(local.availability_zone,count.index)}"
instance_type = "${local.instance_type}"
key_name = "${local.key_name}"
associate_public_ip_address = true
}
resource "aws_ebs_volume" "foovol" {
count = "${local.server_count}"
size = "10"
availability_zone = "${element(local.availability_zone,count.index)}"
type = "gp2"
}
resource "aws_volume_attachment" "foovol" {
count = "${local.server_count}"
device_name = "/dev/xvdf"
volume_id = "${element(aws_ebs_volume.foovol.*.id,count.index)}"
instance_id = "${element(aws_instance.foo.*.id,count.index)}"
force_detach = true
}
```
### Debug Output
https://gist.github.com/duckfez/349388e6e184565c3414fd24c76dfb3d
### Expected Behavior
When tainting an individual `aws_instance` in a resource with count > 0, only the specific item I tainted should be affected.
### Actual Behavior
All of the `aws_instance` resources have their `ebs_volume_attachment` resources recreated.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
2. `terraform taint aws_instance.foo.0`
3. `terraform plan`
### Important Factoids
This is the simplest reproduction we could come up with. In production, the tainting of one instance spreads to EBS attachments, NLBs, route53, and a bunch of dependent resources.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* https://github.com/terraform-providers/terraform-provider-aws/issues/83 (Maybe / probably the same issue)
| 1.0 | Tainting aws_instance resources with count > 1 causes recreation of other unrelated resources - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
```
Terraform v0.11.7
+ provider.aws v1.21.0
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_instance
* aws_volume_attachment
* (others too)
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
terraform {
required_version = ">= 0.11"
}
provider "aws" {
region = "us-east-1"
}
locals
{
server_count = 3
availability_zone = [ "us-east-1a", "us-east-1b", "us-east-1c" ]
key_name = "buildkey"
ami = "ami-97785bed"
instance_type = "t2.micro"
}
resource "aws_instance" "foo" {
count = "${local.server_count}"
ami = "${local.ami}"
availability_zone = "${element(local.availability_zone,count.index)}"
instance_type = "${local.instance_type}"
key_name = "${local.key_name}"
associate_public_ip_address = true
}
resource "aws_ebs_volume" "foovol" {
count = "${local.server_count}"
size = "10"
availability_zone = "${element(local.availability_zone,count.index)}"
type = "gp2"
}
resource "aws_volume_attachment" "foovol" {
count = "${local.server_count}"
device_name = "/dev/xvdf"
volume_id = "${element(aws_ebs_volume.foovol.*.id,count.index)}"
instance_id = "${element(aws_instance.foo.*.id,count.index)}"
force_detach = true
}
```
### Debug Output
https://gist.github.com/duckfez/349388e6e184565c3414fd24c76dfb3d
### Expected Behavior
When tainting an individual `aws_instance` in a resource with count > 0, only the specific item I tainted should be affected.
### Actual Behavior
All of the `aws_instance` resources have their `ebs_volume_attachment` resources recreated.
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
2. `terraform taint aws_instance.foo.0`
3. `terraform plan`
### Important Factoids
This is the simplest reproduction we could come up with. In production, the tainting of one instance spreads to EBS attachments, NLBs, route53, and a bunch of dependent resources.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* https://github.com/terraform-providers/terraform-provider-aws/issues/83 (Maybe / probably the same issue)
| non_defect | Tainting aws_instance resources with count causes recreation of other unrelated resources. Community note: please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request; please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request; if you are interested in working on this issue or have submitted a pull request, please leave a comment. Terraform version; terraform-provider-aws. Affected resource(s): aws_instance, aws_volume_attachment (others too). Terraform configuration files (HCL): terraform required_version; provider aws, region us-east; locals: server_count, availability_zone, key_name buildkey, ami, instance_type micro; resource aws_instance foo: count = local.server_count, ami = local.ami, availability_zone = element(local.availability_zone, count.index), instance_type = local.instance_type, key_name = local.key_name, associate_public_ip_address = true; resource aws_ebs_volume foovol: count = local.server_count, size, availability_zone = element(local.availability_zone, count.index), type; resource aws_volume_attachment foovol: count = local.server_count, device_name /dev/xvdf, volume_id = element(aws_ebs_volume.foovol.id, count.index), instance_id = element(aws_instance.foo.id, count.index), force_detach = true. Debug output. Expected behavior: when tainting an individual aws_instance in a resource with count, only the specific item I tainted should be affected. Actual behavior: all of the aws_instance resources have their EBS volume attachment resources recreated. Steps to reproduce: terraform apply; terraform taint aws_instance.foo; terraform plan. Important factoids: this is the simplest reproduction we could come up with; in production, the tainting of one instance spreads to EBS attachments, NLBs, and a bunch of dependent resources. References: are there any other GitHub issues (open or closed) or pull requests that should be linked here? vendor documentation? for
example: maybe, probably the same issue. | 0 |
108,550 | 16,778,530,272 | IssuesEvent | 2021-06-15 02:49:47 | S69y/flight-manual.atom.io | https://api.github.com/repos/S69y/flight-manual.atom.io | opened | CVE-2020-28499 (High) detected in merge-1.2.0.tgz | security vulnerability | ## CVE-2020-28499 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>merge-1.2.0.tgz</b></p></summary>
<p>Merge multiple objects into one, optionally creating a new cloned object. Similar to the jQuery.extend but more flexible. Works in Node.js and the browser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/merge/-/merge-1.2.0.tgz">https://registry.npmjs.org/merge/-/merge-1.2.0.tgz</a></p>
<p>Path to dependency file: flight-manual.atom.io/package.json</p>
<p>Path to vulnerable library: flight-manual.atom.io/node_modules/merge/package.json</p>
<p>
Dependency Hierarchy:
- gulp-coffee-2.3.5.tgz (Root Library)
- :x: **merge-1.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/S69y/flight-manual.atom.io/commit/dc229a9f59fdfe6153b16c2f9456017e48115716">dc229a9f59fdfe6153b16c2f9456017e48115716</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package `merge` are vulnerable to Prototype Pollution via `_recursiveMerge`.
<p>Publish Date: 2021-02-18
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28499">CVE-2020-28499</a></p>
</p>
</details>
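The advisory text above is terse. The sketch below is NOT the `merge` package's actual `_recursiveMerge` implementation, just a minimal hand-rolled recursive merge of the same shape, to illustrate the prototype-pollution class of bug being reported: a source key literally named `__proto__` steers the recursion into `Object.prototype`, so attacker-controlled JSON can add properties to every plain object.

```javascript
// Minimal recursive merge (assumption: same class of logic as the vulnerable
// code, not the real library source) with no guard against "__proto__".
function naiveRecursiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value !== null && typeof value === "object") {
      if (target[key] === null || typeof target[key] !== "object") {
        target[key] = {};
      }
      naiveRecursiveMerge(target[key], value); // recurses through "__proto__"
    } else {
      target[key] = value;
    }
  }
  return target;
}

// JSON.parse creates an OWN enumerable property literally named "__proto__",
// and reading target["__proto__"] on a plain {} yields Object.prototype.
const payload = JSON.parse('{"__proto__": {"polluted": "yes"}}');
naiveRecursiveMerge({}, payload);
console.log(({}).polluted); // "yes": every plain object now appears polluted
```

A typical fix for this class of bug is to skip `__proto__` and `constructor` keys during the merge, or to copy only own properties onto a null-prototype target.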
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1666">https://www.npmjs.com/advisories/1666</a></p>
<p>Release Date: 2021-02-18</p>
<p>Fix Resolution: merge - 2.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-28499 (High) detected in merge-1.2.0.tgz - ## CVE-2020-28499 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>merge-1.2.0.tgz</b></p></summary>
<p>Merge multiple objects into one, optionally creating a new cloned object. Similar to the jQuery.extend but more flexible. Works in Node.js and the browser.</p>
<p>Library home page: <a href="https://registry.npmjs.org/merge/-/merge-1.2.0.tgz">https://registry.npmjs.org/merge/-/merge-1.2.0.tgz</a></p>
<p>Path to dependency file: flight-manual.atom.io/package.json</p>
<p>Path to vulnerable library: flight-manual.atom.io/node_modules/merge/package.json</p>
<p>
Dependency Hierarchy:
- gulp-coffee-2.3.5.tgz (Root Library)
- :x: **merge-1.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/S69y/flight-manual.atom.io/commit/dc229a9f59fdfe6153b16c2f9456017e48115716">dc229a9f59fdfe6153b16c2f9456017e48115716</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package `merge` are vulnerable to Prototype Pollution via `_recursiveMerge`.
<p>Publish Date: 2021-02-18
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28499">CVE-2020-28499</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1666">https://www.npmjs.com/advisories/1666</a></p>
<p>Release Date: 2021-02-18</p>
<p>Fix Resolution: merge - 2.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in merge tgz cve high severity vulnerability vulnerable library merge tgz merge multiple objects into one optionally creating a new cloned object similar to the jquery extend but more flexible works in node js and the browser library home page a href path to dependency file flight manual atom io package json path to vulnerable library flight manual atom io node modules merge package json dependency hierarchy gulp coffee tgz root library x merge tgz vulnerable library found in head commit a href found in base branch master vulnerability details all versions of package merge are vulnerable to prototype pollution via recursivemerge publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution merge step up your open source security game with whitesource | 0 |
72,837 | 24,321,435,618 | IssuesEvent | 2022-09-30 11:07:10 | vector-im/element-meta | https://api.github.com/repos/vector-im/element-meta | closed | Serious security problems in encrypted chat rooms | T-Defect Security | ### Steps to reproduce
A few of the protocol problems are outlined by this researcher in a tweet storm:
https://twitter.com/tqbf/status/1575259743278563329
### Outcome
Related information is in this Ars Technica article: https://arstechnica.com/information-technology/2022/09/matrix-patches-vulnerabilities-that-completely-subvert-e2ee-guarantees/
I am opening this ticket to understand whether there are serious implications, and whether there are known mitigations we can use to limit the scope of the problems (if any).
### Operating system
all
### Application version
_No response_
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | Serious security problems in encrypted chat rooms - ### Steps to reproduce
A few of the protocol problems are outlined by this researcher in a tweet storm:
https://twitter.com/tqbf/status/1575259743278563329
### Outcome
Related information is in this Ars Technica article: https://arstechnica.com/information-technology/2022/09/matrix-patches-vulnerabilities-that-completely-subvert-e2ee-guarantees/
I am opening this ticket to understand whether there are serious implications, and whether there are known mitigations we can use to limit the scope of the problems (if any).
### Operating system
all
### Application version
_No response_
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No | defect | serious security problems in encrypted chat rooms steps to reproduce a few of the protocol problems are outlined by this researcher in a tweet storm outcome related informations are on this arstechnica article i open this ticket to understand if there are serious implications and to know if there are known mitigations we can use to limit the scope of the problems if any operating system all application version no response how did you install the app no response homeserver no response will you send logs no | 1 |
154,922 | 5,939,760,253 | IssuesEvent | 2017-05-25 06:44:23 | Nebo15/ehealth.api | https://api.github.com/repos/Nebo15/ehealth.api | opened | Postman Tests Collection for Health Monitor (Release 1) | kind/task priority/high | This task is a part of #240
- [ ] Prepare Postman Tests Collection for Health Monitor for Release 1
Cover:
- [ ] Dictionaries
- [ ] UAddresses
- [ ] Legal Entity
- [ ] Employee Request
- [ ] Employee | 1.0 | Postman Tests Collection for Health Monitor (Release 1) - This task is a part of #240
- [ ] Prepare Postman Tests Collection for Health Monitor for Release 1
Cover:
- [ ] Dictionaries
- [ ] UAddresses
- [ ] Legal Entity
- [ ] Employee Request
- [ ] Employee | non_defect | postman tests collection for health monitor release this task is a part of prepare postman tests collection for health monitor for release cover dictionaries uaddresses legale entity employee request employee | 0 |
13,922 | 2,789,756,671 | IssuesEvent | 2015-05-08 21:17:54 | google/google-visualization-api-issues | https://api.github.com/repos/google/google-visualization-api-issues | closed | GWT BarChart.Options.setColors(JsArrayString) only accepts first color | Priority-Medium Type-Defect | Original [issue 63](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=63) created by orwant on 2009-09-24T09:46:32.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. create a DataTable with multiple rows, the simplest example being:
DataTable data = DataTable.create();
data.addColumn(ColumnType.STRING, "Time");
data.addColumn(ColumnType.NUMBER);
data.addRows(2);
data.setValue(0, 0, "1 day");
data.setValue(0, 1, -34);
data.setValue(1, 0, "1 week");
data.setValue(1, 1, 11);
2. create BarChart.Options:
Options options = Options.create();
JsArrayString colors = generateColors();
colors.set(0, "red");
colors.set(1, "blue");
options.setColors(colors);
public final native JsArrayString generateColors() /*-{
return ['red','blue'];
}-*/;
3. create a GWT Visualization BarChart:
BarChart barChart = new BarChart(data, options);
4. observe that all barChart elements are colored the first color only, in
this case, 'red'.
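For context, an aside on the likely cause (my assumption about the underlying Google Charts behavior, not something stated in this report): the `colors` option assigns one color per data series, not per row, so a DataTable with a single NUMBER column has exactly one series and every bar takes the first color. A small plain-JavaScript sketch of that mapping (hypothetical helper, not GWT or Google Charts source code):

```javascript
// Hypothetical helper modeling the assumed rule that "colors" is applied
// per data SERIES, not per individual bar.
function seriesColorForBar(colors, seriesIndex) {
  return colors[seriesIndex % colors.length];
}

// The report's DataTable has one NUMBER column, i.e. a single series
// (index 0), so both bars resolve to colors[0] despite two supplied colors.
const colors = ["red", "blue"];
const bars = [
  { series: 0, value: -34 }, // "1 day"
  { series: 0, value: 11 },  // "1 week"
];
const assigned = bars.map((b) => seriesColorForBar(colors, b.series));
console.log(assigned); // ["red", "red"]
```

Under that assumption, getting a distinct color per bar would require one series (column) per color rather than passing more colors for a single series.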
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
BarChart, but quite possibly other components.
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
NO
<b>What operating system and browser are you using?</b>
Mac OS X Version 10.5.8
Firefox Version 3.5.3
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| 1.0 | GWT BarChart.Options.setColors(JsArrayString) only accepts first color - Original [issue 63](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=63) created by orwant on 2009-09-24T09:46:32.000Z:
<b>What steps will reproduce the problem? Please provide a link to a</b>
<b>demonstration page if at all possible, or attach code.</b>
1. create a DataTable with multiple rows, the simplest example being:
DataTable data = DataTable.create();
data.addColumn(ColumnType.STRING, "Time");
data.addColumn(ColumnType.NUMBER);
data.addRows(2);
data.setValue(0, 0, "1 day");
data.setValue(0, 1, -34);
data.setValue(1, 0, "1 week");
data.setValue(1, 1, 11);
2. create BarChart.Options:
Options options = Options.create();
JsArrayString colors = generateColors();
colors.set(0, "red");
colors.set(1, "blue");
options.setColors(colors);
public final native JsArrayString generateColors() /*-{
return ['red','blue'];
}-*/;
3. create a GWT Visualization BarChart:
BarChart barChart = new BarChart(data, options);
4. observe that all barChart elements are colored the first color only, in
this case, 'red'.
<b>What component is this issue related to (PieChart, LineChart, DataTable,</b>
<b>Query, etc)?</b>
BarChart, but quite possibly other components.
<b>Are you using the test environment (version 1.1)?</b>
<b>(If you are not sure, answer NO)</b>
NO
<b>What operating system and browser are you using?</b>
Mac OS X Version 10.5.8
Firefox Version 3.5.3
<b>*********************************************************</b>
<b>For developers viewing this issue: please click the 'star' icon to be</b>
<b>notified of future changes, and to let us know how many of you are</b>
<b>interested in seeing it resolved.</b>
<b>*********************************************************</b>
| defect | gwt barchart options setcolors jsarraystring only accepts first color original created by orwant on what steps will reproduce the problem please provide a link to a demonstration page if at all possible or attach code create a datatable with multiple rows the simplest example being datatable data datatable create data addcolumn columntype string quot time quot data addcolumn columntype number data addrows data setvalue quot day quot data setvalue data setvalue quot week quot data setvalue create barchart options options options options create jsarraystring colors generatecolors colors set quot red quot colors set quot blue quot public final native jsarraystring generatecolors return create a gwt visualization barchart barchart barchart new barchart data options observe that all barchart elements are colored the first color only in this case red what component is this issue related to piechart linechart datatable query etc barchart but quite possibly other components are you using the test environment version if you are not sure answer no no what operating system and browser are you using mac os x version firefox version for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved | 1 |
77,852 | 27,194,475,933 | IssuesEvent | 2023-02-20 03:08:35 | zed-industries/community | https://api.github.com/repos/zed-industries/community | opened | Keymap does not work | defect triage | ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
I defined a customized keymap, but it doesn't work.
### Environment
```
Zed: v0.73.3 (stable)
OS: macOS 13.1.0
Memory: 16 GiB
Architecture: aarch64
```
### If applicable, add mockups / screenshots to help explain or present your vision of the feature
<img width="894" alt="Screenshot 2023-02-20 at 11 06 12" src="https://user-images.githubusercontent.com/11454175/220000302-d0fad314-cbf4-40e6-bdb9-b5ebff68109d.png">
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
Although the log below has an `invalid keystore ctrl-c-d` error, I have already changed it to `ctrl-c ctrl-d`, but it doesn't work either.
[log.txt](https://github.com/zed-industries/community/files/10779122/log.txt)
| 1.0 | Keymap does not work - ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
I defined a customized keymap, but it doesn't work.
### Environment
```
Zed: v0.73.3 (stable)
OS: macOS 13.1.0
Memory: 16 GiB
Architecture: aarch64
```
### If applicable, add mockups / screenshots to help explain or present your vision of the feature
<img width="894" alt="Screenshot 2023-02-20 at 11 06 12" src="https://user-images.githubusercontent.com/11454175/220000302-d0fad314-cbf4-40e6-bdb9-b5ebff68109d.png">
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
Althougth the below log has an `invalid keystore ctrl-c-d` error, but I already changed it to `ctrl-c ctrl-d`, but doesn't work either.
[log.txt](https://github.com/zed-industries/community/files/10779122/log.txt)
| defect | keymap does not work check for existing issues completed describe the bug provide steps to reproduce it i defined customized keymap but it doesn t work environment zed stable os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature img width alt screenshot at src if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last althougth the below log has an invalid keystore ctrl c d error but i already changed it to ctrl c ctrl d but doesn t work either | 1 |
23,502 | 3,834,595,102 | IssuesEvent | 2016-04-01 10:31:52 | mitlm/mitlm | https://api.github.com/repos/mitlm/mitlm | closed | cutoff | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `jing.zhe...@gmail.com` on 23 Dec 2014 at 12:16 | 1.0 | cutoff - ```
What steps will reproduce the problem?
1.
2.
3.
What is the expected output? What do you see instead?
What version of the product are you using? On what operating system?
Please provide any additional information below.
```
Original issue reported on code.google.com by `jing.zhe...@gmail.com` on 23 Dec 2014 at 12:16 | defect | cutoff what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below original issue reported on code google com by jing zhe gmail com on dec at | 1 |
4,545 | 2,610,115,232 | IssuesEvent | 2015-02-26 18:35:45 | chrsmith/scribefire-chrome | https://api.github.com/repos/chrsmith/scribefire-chrome | closed | Cannot download ScribeFire into Chrome | auto-migrated Priority-Medium Type-Defect | ```
What's the problem?
I have downloaded Chrome twice and it still will not allow me to download
ScribeFire. Yet when I go to extensions in Chrome it says Boo Hoo you have not
downloaded any extensions. Could you please help. Thank you.
What version of ScribeFire for Chrome are you running? 1.5.4
```
-----
Original issue reported on code.google.com by `bate...@gmail.com` on 27 Jun 2010 at 10:21 | 1.0 | Cannot download ScribeFire into Chrome - ```
What's the problem?
I have downloaded Chrome twice and it still will not allow me to download
ScribeFire. Yet when I go to extensions in Chrome it says Boo Hoo you have not
downloaded any extensions. Could you please help. Thank you.
What version of ScribeFire for Chrome are you running? 1.5.4
```
-----
Original issue reported on code.google.com by `bate...@gmail.com` on 27 Jun 2010 at 10:21 | defect | cannot download scribefire into chrome what s the problem i have downloaded chrome twice and it still will not allow me to download scribefire yet when i go to extensions in chrome it says boo hoo you have not downloaded any extensions could you please help thank you what version of scribefire for chrome are you running original issue reported on code google com by bate gmail com on jun at | 1 |
3,606 | 2,610,065,585 | IssuesEvent | 2015-02-26 18:19:18 | chrsmith/jsjsj122 | https://api.github.com/repos/chrsmith/jsjsj122 | opened | 临海治疗前列腺炎的费用 | auto-migrated Priority-Medium Type-Defect | ```
Cost of treating prostatitis in Linhai [Taizhou Wuzhou Reproductive Hospital] 24-hour health
consultation hotline: 0576-88066933 (QQ: 800080609) (WeChat: tzwzszyy). Hospital address: 229 Fengnan Road,
Jiaojiang District, Taizhou (next to the Fengnan roundabout). Bus routes: take bus 104, 108,
118, or 198, or the Jiaojiang-Jinqing bus, directly to the Fengnan neighborhood; or take bus 107, 105, 109,
112, 901, or 902 to Xingxing Square, get off, and walk to the hospital.
Services: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis,
spermatorrhea, azoospermia, phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou, with authoritative
experts available for free online consultation, professional and complete men's health examination and
treatment equipment, and fees strictly in accordance with national standards. Cutting-edge medical equipment,
in step with the world. Authoritative experts, a model of professionalism. Humanized service, everything
centered on the patient. For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:23 | 1.0 | 临海治疗前列腺炎的费用 - ```
Cost of treating prostatitis in Linhai [Taizhou Wuzhou Reproductive Hospital] 24-hour health
consultation hotline: 0576-88066933 (QQ: 800080609) (WeChat: tzwzszyy). Hospital address: 229 Fengnan Road,
Jiaojiang District, Taizhou (next to the Fengnan roundabout). Bus routes: take bus 104, 108,
118, or 198, or the Jiaojiang-Jinqing bus, directly to the Fengnan neighborhood; or take bus 107, 105, 109,
112, 901, or 902 to Xingxing Square, get off, and walk to the hospital.
Services: impotence, premature ejaculation, prostatitis, prostatic hyperplasia, balanitis,
spermatorrhea, azoospermia, phimosis, varicocele, gonorrhea, etc.
Taizhou Wuzhou Reproductive Hospital is the largest men's health hospital in Taizhou, with authoritative
experts available for free online consultation, professional and complete men's health examination and
treatment equipment, and fees strictly in accordance with national standards. Cutting-edge medical equipment,
in step with the world. Authoritative experts, a model of professionalism. Humanized service, everything
centered on the patient. For men's health, choose Taizhou Wuzhou Reproductive Hospital: professional men's care for men.
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:23 | defect | cost of treating prostatitis in linhai cost of treating prostatitis in linhai taizhou wuzhou reproductive hospital health consultation hotline wechat tzwzszyy hospital address taizhou next to the fengnan roundabout bus routes direct to the fengnan neighborhood to xingxing square get off and walk to the hospital services impotence premature ejaculation prostatitis prostatic hyperplasia balanitis spermatorrhea azoospermia phimosis varicocele gonorrhea etc taizhou wuzhou reproductive hospital is the largest men's health hospital in taizhou authoritative experts free online consultation professional and complete men's health examination and treatment equipment fees strictly according to national standards cutting-edge medical equipment in step with the world authoritative experts a model of professionalism humanized service everything centered on the patient for men's health choose taizhou wuzhou reproductive hospital professional men's care for men original issue reported on code google com by poweragr gmail com on may at | 1 |
200,389 | 15,104,990,735 | IssuesEvent | 2021-02-08 12:26:31 | ansible-collections/community.general | https://api.github.com/repos/ansible-collections/community.general | closed | Ignore me | affects_2.10 feature needs_triage tests unit | **Summary**
This is broken
**Issue Type**
Feature Idea
**Component Name**
tests/utils/shippable
**Ansible Version**
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
**Configuration**
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
**OS / Environment**
_No response_
**Steps To Reproduce**
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
**Expected Results**
this failed
**Actual Results**
<!--- Paste verbatim command output between -->
```paste below
```
Extra info | 1.0 | Ignore me - **Summary**
This is broken
**Issue Type**
Feature Idea
**Component Name**
tests/utils/shippable
**Ansible Version**
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
**Configuration**
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
**OS / Environment**
_No response_
**Steps To Reproduce**
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
**Expected Results**
this failed
**Actual Results**
<!--- Paste verbatim command output between -->
```paste below
```
Extra info | non_defect | ignore me summary this is broken issue type feature idea component name tests utils shippable ansible version paste below configuration paste below os environment no response steps to reproduce paste below expected results this failed actual results paste below extra info | 0 |
25,205 | 4,233,648,316 | IssuesEvent | 2016-07-05 08:47:13 | arkayenro/arkinventory | https://api.github.com/repos/arkayenro/arkinventory | closed | trying to queue for bgs and i get an forbidden error | auto-migrated Priority-Medium Type-Defect | ```
Downloaded from curse
What steps will reproduce the problem?
1. queue for bgs.
What is the expected output? What do you see instead?
All that happens is that the error occurs and I can't join the BG queue without
relogging.
What version of the product are you using? On what operating system?
version 30331 Arkinventory.
Windows Vista Ultimate.
Please provide any additional information below.
Date: 2013-11-04 15:24:19
ID: 1
Error occured in: AddOn: ArkInventory
Count: 1
Message: Error: AddOn ArkInventory attempted to call a forbidden function
(JoinBattlefield()) from a tainted execution path.
Debug:
[C]: JoinBattlefield()
Blizzard_PVPUI\Blizzard_PVPUI.lua:421: HonorFrame_Queue()
[string "*:OnClick"]:2:
[string "*:OnClick"]:1
Locals:
None
AddOns:
Swatter, v5.18.5433 (PassionatePhascogale)
WowheadLooter, v50008
NPCScan, v5.4.1.2
NPCScanOverlay, vv5.4.0.5
ACP, v3.4.5
ArkInventory, v30331
AskMrRobot, v1.2.1.0
AtlasLootLoader, vv7.07.01
AucAdvanced, v5.18.5433 (PassionatePhascogale)
AucFilterBasic, v5.18.5433 (PassionatePhascogale)
AucFilterOutlier, v5.18.5433.5347(5.18/embedded)
AucMatchUndercut, v5.18.5433.5364(5.18/embedded)
AucScanData, v5.18.5433 (PassionatePhascogale)
AucStatHistogram, v5.18.5433 (PassionatePhascogale)
AucStatiLevel, v5.18.5433 (PassionatePhascogale)
AucStatPurchased, v5.18.5433 (PassionatePhascogale)
AucStatSales, v5.18.5433.5376(5.18/embedded)
AucStatSimple, v5.18.5433 (PassionatePhascogale)
AucStatStdDev, v5.18.5433 (PassionatePhascogale)
AucStatWOWEcon, v5.18.5433.5323(5.18/embedded)
AucUtilAHWindowControl, v5.18.5433.5347(5.18/embedded)
AucUtilAppraiser, v5.18.5433.5427(5.18/embedded)
AucUtilAskPrice, v5.18.5433.5347(5.18/embedded)
AucUtilAutoMagic, v5.18.5433.5415(5.18/embedded)
AucUtilCompactUI, v5.18.5433.5427(5.18/embedded)
AucUtilEasyBuyout, v5.18.5433.5427(5.18/embedded)
AucUtilFixAH, v5.18.5433 (PassionatePhascogale)
AucUtilItemSuggest, v5.18.5433.5417(5.18/embedded)
AucUtilPriceLevel, v5.18.5433.5427(5.18/embedded)
AucUtilScanButton, v5.18.5433.5403(5.18/embedded)
AucUtilScanFinish, v5.18.5433.5347(5.18/embedded)
AucUtilScanProgress, v5.18.5433.4979(5.18/embedded)
AucUtilScanStart, v5.18.5433.5347(5.18/embedded)
AucUtilSearchUI, v5.18.5433.5373(5.18/embedded)
AucUtilSimpleAuction, v5.18.5433.5415(5.18/embedded)
AucUtilVendMarkup, v5.18.5433.4828(5.18/embedded)
Babylonian, v5.1.DEV.332(/embedded)
BadBoy, v12.047
BattlegroundTargets, v50400-1
BeanCounter, v5.18.5433 (PassionatePhascogale)
Chatter, v1.4.4
CLCDK, v5.3.0
Configator, v5.1.DEV.344(/embedded)
DBMCore, v
DBMSpellTimers, v
DebugLib, v5.1.DEV.337(/embedded)
Informant, v5.18.5433 (PassionatePhascogale)
LibExtraTip, v5.12.DEV.355(/embedded)
Malkorok, v
OmniCC, v5.4.1
Outfitter, v5.9.3
Postal, v3.5.1
Prat30, v3.5.7
Prat30Libraries, v
Quartz, v3.1.4
Skada, v1.4-14
SkadaAvoidanceMitigation, v1.2.0
SkadaCC, v1.0
SkadaDamage, v1.0
SkadaDamageTaken, v1.0
SkadaDeaths, v1.0
SkadaDebuffs, v1.0
SkadaDispels, v1.0
SkadaEnemies, v1.0
SkadaGraph, v1.0
SkadaHealAbsorbs, v
SkadaHealing, v1.0
SkadaPower, v1.0
SkadaThreat, v1.0
SkadaWindowButtons, v1.0
SlideBar, v5.18.5433 (PassionatePhascogale)
Stubby, v5.18.5433 (PassionatePhascogale)
TidyPlates, v6.12.6
TidyPlatesHub, v
TidyPlatesWidgets, v
TipHelper, v5.12.DEV.351(/embedded)
Titan, v5.2.1.50400
TitanBag, v5.2.1.50400
TitanClock, v5.2.1.50400
TitanCurrency, v5.9
TitanDurability, v1.24
TitanGold, v5.2.1.50400
TitanGuild, v5.4.0.0
TitanLocation, v5.2.1.50400
TitanPerformance, v5.2.1.50400
TitanRepair, v5.2.1.50400
TitanXP, v5.2.1.50400
WIM, v3.6.11
BlizRuntimeLib_enUS v5.4.1.50400 <eu>
(ck=bb7)
```
Original issue reported on code.google.com by `ronny.to...@gmail.com` on 4 Nov 2013 at 2:44 | 1.0 | trying to queue for bgs and i get an forbidden error - ```
Downloaded from curse
What steps will reproduce the problem?
1. queue for bgs.
What is the expected output? What do you see instead?
All that happens is that the error occurs and I can't join the BG queue without
relogging.
What version of the product are you using? On what operating system?
version 30331 Arkinventory.
Windows Vista Ultimate.
Please provide any additional information below.
Date: 2013-11-04 15:24:19
ID: 1
Error occured in: AddOn: ArkInventory
Count: 1
Message: Error: AddOn ArkInventory attempted to call a forbidden function
(JoinBattlefield()) from a tainted execution path.
Debug:
[C]: JoinBattlefield()
Blizzard_PVPUI\Blizzard_PVPUI.lua:421: HonorFrame_Queue()
[string "*:OnClick"]:2:
[string "*:OnClick"]:1
Locals:
None
AddOns:
Swatter, v5.18.5433 (PassionatePhascogale)
WowheadLooter, v50008
NPCScan, v5.4.1.2
NPCScanOverlay, vv5.4.0.5
ACP, v3.4.5
ArkInventory, v30331
AskMrRobot, v1.2.1.0
AtlasLootLoader, vv7.07.01
AucAdvanced, v5.18.5433 (PassionatePhascogale)
AucFilterBasic, v5.18.5433 (PassionatePhascogale)
AucFilterOutlier, v5.18.5433.5347(5.18/embedded)
AucMatchUndercut, v5.18.5433.5364(5.18/embedded)
AucScanData, v5.18.5433 (PassionatePhascogale)
AucStatHistogram, v5.18.5433 (PassionatePhascogale)
AucStatiLevel, v5.18.5433 (PassionatePhascogale)
AucStatPurchased, v5.18.5433 (PassionatePhascogale)
AucStatSales, v5.18.5433.5376(5.18/embedded)
AucStatSimple, v5.18.5433 (PassionatePhascogale)
AucStatStdDev, v5.18.5433 (PassionatePhascogale)
AucStatWOWEcon, v5.18.5433.5323(5.18/embedded)
AucUtilAHWindowControl, v5.18.5433.5347(5.18/embedded)
AucUtilAppraiser, v5.18.5433.5427(5.18/embedded)
AucUtilAskPrice, v5.18.5433.5347(5.18/embedded)
AucUtilAutoMagic, v5.18.5433.5415(5.18/embedded)
AucUtilCompactUI, v5.18.5433.5427(5.18/embedded)
AucUtilEasyBuyout, v5.18.5433.5427(5.18/embedded)
AucUtilFixAH, v5.18.5433 (PassionatePhascogale)
AucUtilItemSuggest, v5.18.5433.5417(5.18/embedded)
AucUtilPriceLevel, v5.18.5433.5427(5.18/embedded)
AucUtilScanButton, v5.18.5433.5403(5.18/embedded)
AucUtilScanFinish, v5.18.5433.5347(5.18/embedded)
AucUtilScanProgress, v5.18.5433.4979(5.18/embedded)
AucUtilScanStart, v5.18.5433.5347(5.18/embedded)
AucUtilSearchUI, v5.18.5433.5373(5.18/embedded)
AucUtilSimpleAuction, v5.18.5433.5415(5.18/embedded)
AucUtilVendMarkup, v5.18.5433.4828(5.18/embedded)
Babylonian, v5.1.DEV.332(/embedded)
BadBoy, v12.047
BattlegroundTargets, v50400-1
BeanCounter, v5.18.5433 (PassionatePhascogale)
Chatter, v1.4.4
CLCDK, v5.3.0
Configator, v5.1.DEV.344(/embedded)
DBMCore, v
DBMSpellTimers, v
DebugLib, v5.1.DEV.337(/embedded)
Informant, v5.18.5433 (PassionatePhascogale)
LibExtraTip, v5.12.DEV.355(/embedded)
Malkorok, v
OmniCC, v5.4.1
Outfitter, v5.9.3
Postal, v3.5.1
Prat30, v3.5.7
Prat30Libraries, v
Quartz, v3.1.4
Skada, v1.4-14
SkadaAvoidanceMitigation, v1.2.0
SkadaCC, v1.0
SkadaDamage, v1.0
SkadaDamageTaken, v1.0
SkadaDeaths, v1.0
SkadaDebuffs, v1.0
SkadaDispels, v1.0
SkadaEnemies, v1.0
SkadaGraph, v1.0
SkadaHealAbsorbs, v
SkadaHealing, v1.0
SkadaPower, v1.0
SkadaThreat, v1.0
SkadaWindowButtons, v1.0
SlideBar, v5.18.5433 (PassionatePhascogale)
Stubby, v5.18.5433 (PassionatePhascogale)
TidyPlates, v6.12.6
TidyPlatesHub, v
TidyPlatesWidgets, v
TipHelper, v5.12.DEV.351(/embedded)
Titan, v5.2.1.50400
TitanBag, v5.2.1.50400
TitanClock, v5.2.1.50400
TitanCurrency, v5.9
TitanDurability, v1.24
TitanGold, v5.2.1.50400
TitanGuild, v5.4.0.0
TitanLocation, v5.2.1.50400
TitanPerformance, v5.2.1.50400
TitanRepair, v5.2.1.50400
TitanXP, v5.2.1.50400
WIM, v3.6.11
BlizRuntimeLib_enUS v5.4.1.50400 <eu>
(ck=bb7)
```
Original issue reported on code.google.com by `ronny.to...@gmail.com` on 4 Nov 2013 at 2:44 | defect | trying to queue for bgs and i get an forbidden error downloaded from curse what steps will reproduce the problem queue for bgs what is the expected output what do you see instead all that happends is that error accure and i can t join bg queue without relogging what version of the product are you using on what operating system version arkinventory windows vista ultimate please provide any additional information below date id error occured in addon arkinventory count message error addon arkinventory attempted to call a forbidden function joinbattlefield from a tainted execution path debug joinbattlefield blizzard pvpui blizzard pvpui lua honorframe queue locals none addons swatter passionatephascogale wowheadlooter npcscan npcscanoverlay acp arkinventory askmrrobot atlaslootloader aucadvanced passionatephascogale aucfilterbasic passionatephascogale aucfilteroutlier embedded aucmatchundercut embedded aucscandata passionatephascogale aucstathistogram passionatephascogale aucstatilevel passionatephascogale aucstatpurchased passionatephascogale aucstatsales embedded aucstatsimple passionatephascogale aucstatstddev passionatephascogale aucstatwowecon embedded aucutilahwindowcontrol embedded aucutilappraiser embedded aucutilaskprice embedded aucutilautomagic embedded aucutilcompactui embedded aucutileasybuyout embedded aucutilfixah passionatephascogale aucutilitemsuggest embedded aucutilpricelevel embedded aucutilscanbutton embedded aucutilscanfinish embedded aucutilscanprogress embedded aucutilscanstart embedded aucutilsearchui embedded aucutilsimpleauction embedded aucutilvendmarkup embedded babylonian dev embedded badboy battlegroundtargets beancounter passionatephascogale chatter clcdk configator dev embedded dbmcore v dbmspelltimers v debuglib dev embedded informant passionatephascogale libextratip dev embedded malkorok v omnicc outfitter postal v quartz skada skadaavoidancemitigation skadacc skadadamage skadadamagetaken skadadeaths skadadebuffs skadadispels skadaenemies skadagraph skadahealabsorbs v skadahealing skadapower skadathreat skadawindowbuttons slidebar passionatephascogale stubby passionatephascogale tidyplates tidyplateshub v tidyplateswidgets v tiphelper dev embedded titan titanbag titanclock titancurrency titandurability titangold titanguild titanlocation titanperformance titanrepair titanxp wim blizruntimelib enus ck original issue reported on code google com by ronny to gmail com on nov at | 1
61,159 | 17,023,621,057 | IssuesEvent | 2021-07-03 02:58:03 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Can't find Imperial War Museum, Duxford | Component: nominatim Priority: minor Resolution: worksforme Type: defect | **[Submitted to the original trac issue database at 7.57pm, Monday, 2nd August 2010]**
The Imperial War Museum has a branch in Duxford, Cambridgeshire, here:
http://osm.org/go/0EQDO90v--
Nominatim doesn't find it (though it does find the ones in London and Salford). The main museum area is marked
name: Imperial War Museum, Duxford
tourism: attraction
There's also several buildings marked
building: hangar
name: Imperial War Museum Duxford
(no comma this time) | 1.0 | Can't find Imperial War Museum, Duxford - **[Submitted to the original trac issue database at 7.57pm, Monday, 2nd August 2010]**
The Imperial War Museum has a branch in Duxford, Cambridgeshire, here:
http://osm.org/go/0EQDO90v--
Nominatim doesn't find it (though it does find the ones in London and Salford). The main museum area is marked
name: Imperial War Museum, Duxford
tourism: attraction
There's also several buildings marked
building: hangar
name: Imperial War Museum Duxford
(no comma this time) | defect | can t find imperial war museum duxford the imperial war museum has a branch in duxford cambridgeshire here nominatim doesn t find it though it does find the ones in london and salford the main museum area is marked name imperial war museum duxford tourism attraction there s also several buildings marked building hangar name imperial war museum duxford no comma this time | 1 |
48,071 | 13,067,427,323 | IssuesEvent | 2020-07-31 00:25:13 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | closed | [production_histograms] invalid syntax (Trac #1737) | Migrated from Trac cmake defect |
```text
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:15: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.corsika_weight'; the following exception was raised:
Traceback (most recent call last):
File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
File "/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/corsika_weight.py", line 12
self.append(Histogram(, , , "FluxSum"))
^
SyntaxError: invalid syntax
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:63: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.nugen_weight'; the following exception was raised:
Traceback (most recent call last):
File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
File "/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/nugen_weight.py", line 12
self.append(Histogram(, , , "OneWeight"))
^
SyntaxError: invalid syntax
```
Migrated from https://code.icecube.wisc.edu/ticket/1737
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "\n{{{\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:15: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.corsika_weight'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/corsika_weight.py\", line 12\n self.append(Histogram(, , , \"FluxSum\"))\n ^\nSyntaxError: invalid syntax\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:63: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.nugen_weight'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/nugen_weight.py\", line 12\n self.append(Histogram(, , , \"OneWeight\"))\n ^\nSyntaxError: invalid syntax\n\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "cmake",
"summary": "[production_histograms] invalid syntax",
"priority": "normal",
"keywords": "documentation",
"time": "2016-06-10T07:44:37",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| 1.0 | [production_histograms] invalid syntax (Trac #1737) -
```text
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:15: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.corsika_weight'; the following exception was raised:
Traceback (most recent call last):
File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
File "/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/corsika_weight.py", line 12
self.append(Histogram(, , , "FluxSum"))
^
SyntaxError: invalid syntax
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:63: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.nugen_weight'; the following exception was raised:
Traceback (most recent call last):
File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
File "/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/nugen_weight.py", line 12
self.append(Histogram(, , , "OneWeight"))
^
SyntaxError: invalid syntax
```
Migrated from https://code.icecube.wisc.edu/ticket/1737
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "\n{{{\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:15: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.corsika_weight'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/corsika_weight.py\", line 12\n self.append(Histogram(, , , \"FluxSum\"))\n ^\nSyntaxError: invalid syntax\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:63: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.nugen_weight'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/nugen_weight.py\", line 12\n self.append(Histogram(, , , \"OneWeight\"))\n ^\nSyntaxError: invalid syntax\n\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "cmake",
"summary": "[production_histograms] invalid syntax",
"priority": "normal",
"keywords": "documentation",
"time": "2016-06-10T07:44:37",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| defect | invalid syntax trac text users kmeagher icecube combo release sphinx build source python icecube production histograms histogram modules simulation rst warning autodoc failed to import module u icecube production histograms histogram modules simulation corsika weight the following exception was raised traceback most recent call last file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object file users kmeagher icecube combo release lib icecube production histograms histogram modules simulation corsika weight py line self append histogram fluxsum syntaxerror invalid syntax users kmeagher icecube combo release sphinx build source python icecube production histograms histogram modules simulation rst warning autodoc failed to import module u icecube production histograms histogram modules simulation nugen weight the following exception was raised traceback most recent call last file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object file users kmeagher icecube combo release lib icecube production histograms histogram modules simulation nugen weight py line self append histogram oneweight syntaxerror invalid syntax migrated from json status closed changetime description n n users kmeagher icecube combo release sphinx build source python icecube production histograms histogram modules simulation rst warning autodoc failed to import module u icecube production histograms histogram modules simulation corsika weight the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube production histograms histogram modules simulation corsika weight py line n self append histogram fluxsum n nsyntaxerror invalid syntax n users kmeagher icecube combo release sphinx build source python icecube production histograms histogram modules simulation rst warning autodoc failed to import module u icecube production histograms histogram modules simulation nugen weight the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube production histograms histogram modules simulation nugen weight py line n self append histogram oneweight n nsyntaxerror invalid syntax n n n reporter kjmeagher cc resolution fixed ts component cmake summary invalid syntax priority normal keywords documentation time milestone owner olivas type defect | 1
39,765 | 5,245,632,590 | IssuesEvent | 2017-02-01 05:38:42 | RIOT-OS/RIOT | https://api.github.com/repos/RIOT-OS/RIOT | opened | tests/lwip target board for python test is hardcoded to native | bug tests | The python driven [test](https://github.com/RIOT-OS/RIOT/blob/master/tests/lwip/tests/01-run.py) does not run for boards other then `native`
But since the default `BOARD` for is set to `iotlab-m3` for this test, I guess they should be the primary target.
Setting the boards and ttys manually in the 01_test.py [1]
``` python
Board("iotlab-m3", None, "/dev/ttyUSBX")
```
forces the script to start with these parameters but it fails when trying to call reset [2] before going on with testing:
```
RIOT/tests/lwip$ make BOARD=iotlab-m3 test
./tests/01-run.py
Testing for (<Board 'iotlab-m3',port=None,serial='/dev/ttyUSB0'>, <Board 'iotlab-m3',port=None,serial='/dev/ttyUSB2'>):
Traceback (most recent call last):
File "./tests/01-run.py", line 270, in <module>
[test_ipv6_send, test_udpv6_send, test_dual_send])
File "./tests/01-run.py", line 172, in execute
board_group.reset()
File "./tests/01-run.py", line 153, in reset
board.reset(application, env)
File "./tests/01-run.py", line 124, in reset
self.reset_strategy.execute(application, env)
File "./tests/01-run.py", line 71, in execute
super(ResetStrategy, self).__run_make(application, ("reset",), env)
AttributeError: 'super' object has no attribute '_ResetStrategy__run_make'
```
[1] https://github.com/RIOT-OS/RIOT/blob/master/tests/lwip/tests/01-run.py#L264
[2] https://github.com/RIOT-OS/RIOT/blob/master/tests/lwip/tests/01-run.py#L168 | 1.0 | tests/lwip target board for python test is hardcoded to native - The python driven [test](https://github.com/RIOT-OS/RIOT/blob/master/tests/lwip/tests/01-run.py) does not run for boards other then `native`
But since the default `BOARD` for is set to `iotlab-m3` for this test, I guess they should be the primary target.
Setting the boards and ttys manually in the 01_test.py [1]
``` python
Board("iotlab-m3", None, "/dev/ttyUSBX")
```
forces the script to start with these parameters but it fails when trying to call reset [2] before going on with testing:
```
RIOT/tests/lwip$ make BOARD=iotlab-m3 test
./tests/01-run.py
Testing for (<Board 'iotlab-m3',port=None,serial='/dev/ttyUSB0'>, <Board 'iotlab-m3',port=None,serial='/dev/ttyUSB2'>):
Traceback (most recent call last):
File "./tests/01-run.py", line 270, in <module>
[test_ipv6_send, test_udpv6_send, test_dual_send])
File "./tests/01-run.py", line 172, in execute
board_group.reset()
File "./tests/01-run.py", line 153, in reset
board.reset(application, env)
File "./tests/01-run.py", line 124, in reset
self.reset_strategy.execute(application, env)
File "./tests/01-run.py", line 71, in execute
super(ResetStrategy, self).__run_make(application, ("reset",), env)
AttributeError: 'super' object has no attribute '_ResetStrategy__run_make'
```
[1] https://github.com/RIOT-OS/RIOT/blob/master/tests/lwip/tests/01-run.py#L264
[2] https://github.com/RIOT-OS/RIOT/blob/master/tests/lwip/tests/01-run.py#L168 | non_defect | tests lwip target board for python test is hardcoded to native the python driven does not run for boards other then native but since the default board for is set to iotlab for this test i guess they should be the primary target setting the boards and ttys manually in the test py python board iotlab none dev ttyusbx forces the script to start with these parameters but it fails when trying to call reset before going on with testing riot tests lwip make board iotlab test tests run py testing for traceback most recent call last file tests run py line in file tests run py line in execute board group reset file tests run py line in reset board reset application env file tests run py line in reset self reset strategy execute application env file tests run py line in execute super resetstrategy self run make application reset env attributeerror super object has no attribute resetstrategy run make | 0 |
61,756 | 17,023,773,187 | IssuesEvent | 2021-07-03 03:46:23 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Deceased Mappers | Component: website Priority: major Resolution: invalid Type: defect | **[Submitted to the original trac issue database at 10.08pm, Saturday, 4th February 2012]**
http://www.openstreetmap.org/user/Bj%C3%B6rn%20Moormann needs a link to http://www.noz.de/anzeigen/traueranzeigen?id=132954 as the user cannot do this any more.
http://www.openstreetmap.org/user/ulfm needs a link to
http://blog.osmfoundation.org/2012/01/18/ulf-m%C3%B6ller-1973-2012/
It would be nice if you could add these links with appropriate dignity.
| 1.0 | Deceased Mappers - **[Submitted to the original trac issue database at 10.08pm, Saturday, 4th February 2012]**
http://www.openstreetmap.org/user/Bj%C3%B6rn%20Moormann needs a link to http://www.noz.de/anzeigen/traueranzeigen?id=132954 as the user cannot do this any more.
http://www.openstreetmap.org/user/ulfm needs a link to
http://blog.osmfoundation.org/2012/01/18/ulf-m%C3%B6ller-1973-2012/
It would be nice if you could add these links with appropriate dignity.
| defect | deceased mappers needs a link to as the user cannot do this any more needs a link to it would be nice if you could add these links with appropriate dignity | 1 |
23,174 | 3,774,285,892 | IssuesEvent | 2016-03-17 08:32:02 | daelsepara/sofia-ml | https://api.github.com/repos/daelsepara/sofia-ml | closed | lambda parameter not passed into SvmObjective correctly | auto-migrated Priority-Medium Type-Defect | ```
in sofia-ml.cc
337 float objective = sofia_ml::SvmObjective(training_data,
338 *w,
339 CMD_LINE_BOOLS["--lambda"]);
Note that lambda is passed in from CMD_LINE_BOOLS not CMD_LINE_FLOATS which
results in lambda=0. In TrainModel the correct value of lambda is used:
176 float lambda = CMD_LINE_FLOATS["--lambda"];
```
Original issue reported on code.google.com by `ed...@ly.st` on 9 May 2013 at 1:20 | 1.0 | lambda parameter not passed into SvmObjective correctly - ```
in sofia-ml.cc
337 float objective = sofia_ml::SvmObjective(training_data,
338 *w,
339 CMD_LINE_BOOLS["--lambda"]);
Note that lambda is passed in from CMD_LINE_BOOLS not CMD_LINE_FLOATS which
results in lambda=0. In TrainModel the correct value of lambda is used:
176 float lambda = CMD_LINE_FLOATS["--lambda"];
```
Original issue reported on code.google.com by `ed...@ly.st` on 9 May 2013 at 1:20 | defect | lambda parameter not passed into svmobjective correctly in sofia ml cc float objective sofia ml svmobjective training data w cmd line bools note that lambda is passed in from cmd line bools not cmd line floats which results in lambda in trainmodel the correct value of lambda is used float lambda cmd line floats original issue reported on code google com by ed ly st on may at | 1 |
176,630 | 6,561,924,495 | IssuesEvent | 2017-09-07 14:53:09 | zero-os/0-orchestrator | https://api.github.com/repos/zero-os/0-orchestrator | closed | Internal server error when create VM | priority_critical state_verification type_bug | ### ays log
```
[Wed06 07:52] - RunStep.py :84 :j.atyourservice.server - INFO - runstep 1: ok [165/1811]
[Wed06 07:52] - Job.py :66 :j.core.jobcontroller.job.59cf481a4d05a0a53dad5f96ab62e925 - INFO - job job: container!vdisks_0002d29d3b_248a07e3cbf0 (input) d
one sucessfuly
[Wed06 07:52] - Job.py :66 :j.core.jobcontroller.job.fe04dd442aacd43b73bc00a7744be24b - INFO - job job: container!vdisks_0002d29d3b_248a07e3cbf0 (init) do
ne sucessfuly
2017-09-06 07:52:59 - (network)[INFO][127.0.0.1:60332]: GET http://127.0.0.1:5000/ays/repository/orchestrator-server/aysrun/5e78a6b30bdc325867a632591f71dc62 200 311
INFO:network:
[Wed06 07:52] - Job.py :66 :j.core.jobcontroller.job.2500e26ba690c7c2446d356acf27d819 - INFO - job job: container!vdisks_0002d29d3b_248a07e3cbf0 (monitor)
done sucessfuly
[Wed06 07:53] - Job.py :66 :j.core.jobcontroller.job.bf88ca403cad3bcb160364254f0fad51 - INFO - job job: node.zero-os!248a07e3cbf0 (monitor) done sucessful
y
[Wed06 07:53] - Job.py :66 :j.core.jobcontroller.job.ec1f0b9a279169abd52dcb0acb129b41 - INFO - job job: container!vdisks_0002d29d3b_248a07e3cbf0 (start) d
one sucessfuly
[Wed06 07:53] - Service.py :68 :j.atyourservice.server.service - INFO - init service vdisks_0002d29d3b_248a07e771c0 from container
[Wed06 07:53] - Job.py :66 :j.core.jobcontroller.job.cba3758fbc20c1c48d5dc4393362fea3 - INFO - job job: container!vdisks_0002d29d3b_248a07e771c0 (input) d
one sucessfuly
[Wed06 07:53] - Job.py :66 :j.core.jobcontroller.job.7aa32b2a200f604a83c47a80426d2b86 - INFO - job job: container!vdisks_0002d29d3b_248a07e771c0 (init) do
ne sucessfuly
[Wed06 07:53] - Job.py :56 :j.core.jobcontroller.job.085a260274a6d8acdda1fde6d6ef8fb7 - ERROR - Traceback (most recent call last):
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/tmp/actions/vm/0718e22a1fd24e5422b555ce7ee89c0c.py", line 3, in init
start_dependent_services(job)
File "/tmp/actions/vm/0718e22a1fd24e5422b555ce7ee89c0c.py", line 695, in start_dependent_services
nbd_container = create_zerodisk_container(job, service.parent)
File "/tmp/actions/vm/0718e22a1fd24e5422b555ce7ee89c0c.py", line 106, in create_zerodisk_container
j.tools.async.wrappers.sync(containerservice.executeAction('start', context=job.context))
File "/opt/code/github/jumpscale/lib9/JumpScale9Lib/tools/async/Wrappers.py", line 12, in sync
return loop.run_until_complete(coro)
File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/ays/lib/Service.py", line 723, in executeAction
return await self.executeActionJob(action, args, context=context)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/ays/lib/Service.py", line 748, in executeActionJob
result = await job.execute()
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/jobcontroller/Job.py", line 335, in execute
self._future = self._loop.run_in_executor(None, self.method, self)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/jobcontroller/Job.py", line 222, in method
return self.sourceLoader.get_method(self.model.dbobj.actionName)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/jobcontroller/SourceLoader.py", line 78, in get_method
return getattr(self._module, name)
AttributeError: module '9b6d933c9f8fb5f1a31ab98e21817b5c' has no attribute 'start'
```
### software version
master 54b57002 | 1.0 | Internal server error when create VM - ### ays log
```
[Wed06 07:52] - RunStep.py :84 :j.atyourservice.server - INFO - runstep 1: ok [165/1811]
[Wed06 07:52] - Job.py :66 :j.core.jobcontroller.job.59cf481a4d05a0a53dad5f96ab62e925 - INFO - job job: container!vdisks_0002d29d3b_248a07e3cbf0 (input) d
one sucessfuly
[Wed06 07:52] - Job.py :66 :j.core.jobcontroller.job.fe04dd442aacd43b73bc00a7744be24b - INFO - job job: container!vdisks_0002d29d3b_248a07e3cbf0 (init) do
ne sucessfuly
2017-09-06 07:52:59 - (network)[INFO][127.0.0.1:60332]: GET http://127.0.0.1:5000/ays/repository/orchestrator-server/aysrun/5e78a6b30bdc325867a632591f71dc62 200 311
INFO:network:
[Wed06 07:52] - Job.py :66 :j.core.jobcontroller.job.2500e26ba690c7c2446d356acf27d819 - INFO - job job: container!vdisks_0002d29d3b_248a07e3cbf0 (monitor)
done sucessfuly
[Wed06 07:53] - Job.py :66 :j.core.jobcontroller.job.bf88ca403cad3bcb160364254f0fad51 - INFO - job job: node.zero-os!248a07e3cbf0 (monitor) done sucessful
y
[Wed06 07:53] - Job.py :66 :j.core.jobcontroller.job.ec1f0b9a279169abd52dcb0acb129b41 - INFO - job job: container!vdisks_0002d29d3b_248a07e3cbf0 (start) d
one sucessfuly
[Wed06 07:53] - Service.py :68 :j.atyourservice.server.service - INFO - init service vdisks_0002d29d3b_248a07e771c0 from container
[Wed06 07:53] - Job.py :66 :j.core.jobcontroller.job.cba3758fbc20c1c48d5dc4393362fea3 - INFO - job job: container!vdisks_0002d29d3b_248a07e771c0 (input) d
one sucessfuly
[Wed06 07:53] - Job.py :66 :j.core.jobcontroller.job.7aa32b2a200f604a83c47a80426d2b86 - INFO - job job: container!vdisks_0002d29d3b_248a07e771c0 (init) do
ne sucessfuly
[Wed06 07:53] - Job.py :56 :j.core.jobcontroller.job.085a260274a6d8acdda1fde6d6ef8fb7 - ERROR - Traceback (most recent call last):
File "/usr/lib/python3.5/concurrent/futures/thread.py", line 55, in run
result = self.fn(*self.args, **self.kwargs)
File "/tmp/actions/vm/0718e22a1fd24e5422b555ce7ee89c0c.py", line 3, in init
start_dependent_services(job)
File "/tmp/actions/vm/0718e22a1fd24e5422b555ce7ee89c0c.py", line 695, in start_dependent_services
nbd_container = create_zerodisk_container(job, service.parent)
File "/tmp/actions/vm/0718e22a1fd24e5422b555ce7ee89c0c.py", line 106, in create_zerodisk_container
j.tools.async.wrappers.sync(containerservice.executeAction('start', context=job.context))
File "/opt/code/github/jumpscale/lib9/JumpScale9Lib/tools/async/Wrappers.py", line 12, in sync
return loop.run_until_complete(coro)
File "/usr/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/usr/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/ays/lib/Service.py", line 723, in executeAction
return await self.executeActionJob(action, args, context=context)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/ays/lib/Service.py", line 748, in executeActionJob
result = await job.execute()
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/jobcontroller/Job.py", line 335, in execute
self._future = self._loop.run_in_executor(None, self.method, self)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/jobcontroller/Job.py", line 222, in method
return self.sourceLoader.get_method(self.model.dbobj.actionName)
File "/opt/code/github/jumpscale/ays9/JumpScale9AYS/jobcontroller/SourceLoader.py", line 78, in get_method
return getattr(self._module, name)
AttributeError: module '9b6d933c9f8fb5f1a31ab98e21817b5c' has no attribute 'start'
```
### software version
master 54b57002 | non_defect | internal server error when create vm ays log runstep py j atyourservice server info runstep ok job py j core jobcontroller job info job job container vdisks input d one sucessfuly job py j core jobcontroller job info job job container vdisks init do ne sucessfuly network get info network job py j core jobcontroller job info job job container vdisks monitor done sucessfuly job py j core jobcontroller job info job job node zero os monitor done sucessful y job py j core jobcontroller job info job job container vdisks start d one sucessfuly service py j atyourservice server service info init service vdisks from container job py j core jobcontroller job info job job container vdisks input d one sucessfuly job py j core jobcontroller job info job job container vdisks init do ne sucessfuly job py j core jobcontroller job error traceback most recent call last file usr lib concurrent futures thread py line in run result self fn self args self kwargs file tmp actions vm py line in init start dependent services job file tmp actions vm py line in start dependent services nbd container create zerodisk container job service parent file tmp actions vm py line in create zerodisk container j tools async wrappers sync containerservice executeaction start context job context file opt code github jumpscale tools async wrappers py line in sync return loop run until complete coro file usr lib asyncio base events py line in run until complete return future result file usr lib asyncio futures py line in result raise self exception file usr lib asyncio tasks py line in step result coro send none file opt code github jumpscale ays lib service py line in executeaction return await self executeactionjob action args context context file opt code github jumpscale ays lib service py line in executeactionjob result await job execute file opt code github jumpscale jobcontroller job py line in execute self future self loop run in executor none self method self file opt code github jumpscale jobcontroller job py line in method return self sourceloader get method self model dbobj actionname file opt code github jumpscale jobcontroller sourceloader py line in get method return getattr self module name attributeerror module has no attribute start software version master | 0
66,257 | 20,105,743,398 | IssuesEvent | 2022-02-07 10:18:38 | vector-im/element-ios | https://api.github.com/repos/vector-im/element-ios | closed | MXSession is paused after any sync request is legitimately cancelled | T-Defect A-Timeline S-Minor O-Occasional | `MXSession` is paused after any sync request is legitimately cancelled (e.g. a screen that triggered it deallocated), and there is no mechanism that resumes it whilst the app is in foreground. As a result, the app will stop recieving new messages from the server.
# Steps to reproduce
- navigate to a room B from a deeplink in room A
- pull down to refresh in room B
- before the refresh completes (1-2 seconds) navigate back to room A
- at this point a `URLRequest` has been cancelled (correctly so, the screen that triggered it is gone), but that cancellation causes pausing of the whole `MXSession`
- from here on no new messages will arrive unless you background / foreground the app
| 1.0 | MXSession is paused after any sync request is legitimately cancelled - `MXSession` is paused after any sync request is legitimately cancelled (e.g. a screen that triggered it deallocated), and there is no mechanism that resumes it whilst the app is in foreground. As a result, the app will stop recieving new messages from the server.
# Steps to reproduce
- navigate to a room B from a deeplink in room A
- pull down to refresh in room B
- before the refresh completes (1-2 seconds) navigate back to room A
- at this point a `URLRequest` has been cancelled (correctly so, the screen that triggered it is gone), but that cancellation causes pausing of the whole `MXSession`
- from here on no new messages will arrive unless you background / foreground the app
| defect | mxsession is paused after any sync request is legitimately cancelled mxsession is paused after any sync request is legitimately cancelled e g a screen that triggered it deallocated and there is no mechanism that resumes it whilst the app is in foreground as a result the app will stop recieving new messages from the server steps to reproduce navigate to a room b from a deeplink in room a pull down to refresh in room b before the refresh completes seconds navigate back to room a at this point a urlrequest has been cancelled correctly so the screen that triggered it is gone but that cancellation causes pausing of the whole mxsession from here on no new messages will arrive unless you background foreground the app | 1 |
27,355 | 4,971,581,770 | IssuesEvent | 2016-12-05 19:07:00 | catmaid/CATMAID | https://api.github.com/repos/catmaid/CATMAID | opened | 3d viewer connector restriction does not always respect selection table checks | context: 3d-viewer type: defect | To reproduce:
1. Add several skeletons to the 3d viewer's subbed selection table
2. Toggle the visibility of several skeletons off, and the visibility of pre/post sites for several visible skeletons off
3. Switch connector restriction to all shared
4. Switch connector restriction back to all connectors
Connectors for hidden skeletons will be visible. Connectors for hidden pre/post sites on visible skeletons will also be visible. | 1.0 | 3d viewer connector restriction does not always respect selection table checks - To reproduce:
1. Add several skeletons to the 3d viewer's subbed selection table
2. Toggle the visibility of several skeletons off, and the visibility of pre/post sites for several visible skeletons off
3. Switch connector restriction to all shared
4. Switch connector restriction back to all connectors
Connectors for hidden skeletons will be visible. Connectors for hidden pre/post sites on visible skeletons will also be visible. | defect | viewer connector restriction does not always respect selection table checks to reproduce add several skeletons to the viewer s subbed selection table toggle the visibility of several skeletons off and the visibility of pre post sites for several visible skeletons off switch connector restriction to all shared switch connector restriction back to all connectors connectors for hidden skeletons will be visible connectors for hidden pre post sites on visible skeletons will also be visible | 1 |
59,206 | 14,369,084,132 | IssuesEvent | 2020-12-01 09:18:32 | ignatandrei/stankins | https://api.github.com/repos/ignatandrei/stankins | closed | CVE-2019-10747 (High) detected in set-value-0.4.3.tgz, set-value-2.0.0.tgz | security vulnerability | ## CVE-2019-10747 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>set-value-0.4.3.tgz</b>, <b>set-value-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>set-value-0.4.3.tgz</b></p></summary>
<p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p>
<p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz">https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsAliveAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/union-value/node_modules/set-value/package.json,/tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/union-value/node_modules/set-value/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.7.tgz (Root Library)
- webpack-4.19.1.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- cache-base-1.0.1.tgz
- union-value-1.0.0.tgz
- :x: **set-value-0.4.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>set-value-2.0.0.tgz</b></p></summary>
<p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p>
<p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/set-value/package.json,/tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/set-value/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.7.tgz (Root Library)
- webpack-4.19.1.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- cache-base-1.0.1.tgz
- :x: **set-value-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/stankins/commit/525550ef1e023c62d5d53d2f2bce03d5d168d46e">525550ef1e023c62d5d53d2f2bce03d5d168d46e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
set-value is vulnerable to Prototype Pollution in versions lower than 3.0.1. The function mixin-deep could be tricked into adding or modifying properties of Object.prototype using any of the constructor, prototype and _proto_ payloads.
<p>Publish Date: 2019-08-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10747>CVE-2019-10747</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jonschlinkert/set-value/commit/95e9d9923f8a8b4a01da1ea138fcc39ec7b6b15f">https://github.com/jonschlinkert/set-value/commit/95e9d9923f8a8b4a01da1ea138fcc39ec7b6b15f</a></p>
<p>Release Date: 2019-07-24</p>
<p>Fix Resolution: 2.0.1,3.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-10747 (High) detected in set-value-0.4.3.tgz, set-value-2.0.0.tgz - ## CVE-2019-10747 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>set-value-0.4.3.tgz</b>, <b>set-value-2.0.0.tgz</b></p></summary>
<p>
<details><summary><b>set-value-0.4.3.tgz</b></p></summary>
<p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p>
<p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz">https://registry.npmjs.org/set-value/-/set-value-0.4.3.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsAliveAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/union-value/node_modules/set-value/package.json,/tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/union-value/node_modules/set-value/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.7.tgz (Root Library)
- webpack-4.19.1.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- cache-base-1.0.1.tgz
- union-value-1.0.0.tgz
- :x: **set-value-0.4.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>set-value-2.0.0.tgz</b></p></summary>
<p>Create nested values and any intermediaries using dot notation (`'a.b.c'`) paths.</p>
<p>Library home page: <a href="https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz">https://registry.npmjs.org/set-value/-/set-value-2.0.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/set-value/package.json,/tmp/ws-scm/stankins/stankinsv2/solution/StankinsV2/StankinsDataWebAngular/node_modules/set-value/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.10.7.tgz (Root Library)
- webpack-4.19.1.tgz
- micromatch-3.1.10.tgz
- snapdragon-0.8.2.tgz
- base-0.11.2.tgz
- cache-base-1.0.1.tgz
- :x: **set-value-2.0.0.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/stankins/commit/525550ef1e023c62d5d53d2f2bce03d5d168d46e">525550ef1e023c62d5d53d2f2bce03d5d168d46e</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
set-value is vulnerable to Prototype Pollution in versions lower than 3.0.1. The function mixin-deep could be tricked into adding or modifying properties of Object.prototype using any of the constructor, prototype and _proto_ payloads.
<p>Publish Date: 2019-08-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10747>CVE-2019-10747</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/jonschlinkert/set-value/commit/95e9d9923f8a8b4a01da1ea138fcc39ec7b6b15f">https://github.com/jonschlinkert/set-value/commit/95e9d9923f8a8b4a01da1ea138fcc39ec7b6b15f</a></p>
<p>Release Date: 2019-07-24</p>
<p>Fix Resolution: 2.0.1,3.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in set value tgz set value tgz cve high severity vulnerability vulnerable libraries set value tgz set value tgz set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href path to dependency file tmp ws scm stankins solution stankinsaliveangular package json path to vulnerable library tmp ws scm stankins solution stankinsdatawebangular node modules union value node modules set value package json tmp ws scm stankins solution stankinsdatawebangular node modules union value node modules set value package json dependency hierarchy build angular tgz root library webpack tgz micromatch tgz snapdragon tgz base tgz cache base tgz union value tgz x set value tgz vulnerable library set value tgz create nested values and any intermediaries using dot notation a b c paths library home page a href path to dependency file tmp ws scm stankins solution stankinsdatawebangular package json path to vulnerable library tmp ws scm stankins solution stankinsdatawebangular node modules set value package json tmp ws scm stankins solution stankinsdatawebangular node modules set value package json dependency hierarchy build angular tgz root library webpack tgz micromatch tgz snapdragon tgz base tgz cache base tgz x set value tgz vulnerable library found in head commit a href vulnerability details set value is vulnerable to prototype pollution in versions lower than the function mixin deep could be tricked into adding or modifying properties of object prototype using any of the constructor prototype and proto payloads publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability 
impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
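The set-value advisory above concerns prototype pollution through dot-notation paths. As an illustrative sketch (a Python analog, not set-value's actual JavaScript code — the names `set_value` and `FORBIDDEN` are this sketch's own), a minimal dot-path setter shows the mechanism and the kind of guard the fixed releases apply to path segments like `__proto__` or `constructor`:

```python
# Minimal sketch of a dot-path setter analogous to set-value's API.
# In JavaScript, unvalidated segments such as "__proto__" or
# "constructor" let a path walk onto Object.prototype; the guard
# below mirrors the style of fix shipped in patched releases.

FORBIDDEN = {"__proto__", "constructor", "prototype"}

def set_value(obj: dict, path: str, value):
    """Create nested dicts along a dot path, e.g. set_value(d, 'a.b.c', 1)."""
    keys = path.split(".")
    for key in keys:
        if key in FORBIDDEN:
            raise ValueError(f"refusing unsafe path segment: {key!r}")
    node = obj
    for key in keys[:-1]:
        node = node.setdefault(key, {})  # create intermediaries on demand
    node[keys[-1]] = value
    return obj

d = {}
set_value(d, "a.b.c", 1)
print(d)  # {'a': {'b': {'c': 1}}}
```

The demo is only meant to make the advisory's wording concrete; consult the linked upstream commit for the real patch.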
29,517 | 5,713,661,510 | IssuesEvent | 2017-04-19 08:24:38 | BOINC/boinc | https://api.github.com/repos/BOINC/boinc | closed | Web Computing preferences for disk % don't propagate correctly | C: Client - Daemon E: 1 day P: Major T: Defect | The disk setting on client and web have check boxes, if you uncheck then Use no more than --- % of Total, when it's propagated to the client this has the setting of Use no more than 0 % of total not disabled as per the web
| 1.0 | Web Computing preferences for disk % don't propagate correctly - The disk setting on client and web have check boxes, if you uncheck then Use no more than --- % of Total, when it's propagated to the client this has the setting of Use no more than 0 % of total not disabled as per the web
| defect | web computing preferences for disk don t propagate correctly the disk setting on client and web have check boxes if you uncheck then use no more than of total when it s propagated to the client this has the setting of use no more than of total not disabled as per the web | 1
18,299 | 3,041,566,792 | IssuesEvent | 2015-08-07 22:15:48 | francoisferland/casiousbmididriver | https://api.github.com/repos/francoisferland/casiousbmididriver | closed | Computer doesn't recognize the keyboard | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. install the program
2. plug in the keyboard
3.
What is the expected output? What do you see instead?
no MIDI device detected in the MIDI/Audio settings
What version of the product are you using? On what operating system?
PX-320, tried the USB cables and made sure they work right, Mac OS X 10.8.2
Please provide any additional information below.
I installed the drivers a few times to get them to work. Is there something I
could be doing wrong? I used the latest .pkg on the front page
```
Original issue reported on code.google.com by `smmon...@gmail.com` on 10 Jan 2013 at 1:34 | 1.0 | Computer doesn't recognize the keyboard - ```
What steps will reproduce the problem?
1. install the program
2. plug in the keyboard
3.
What is the expected output? What do you see instead?
no MIDI device detected in the MIDI/Audio settings
What version of the product are you using? On what operating system?
PX-320, tried the USB cables and made sure they work right, Mac OS X 10.8.2
Please provide any additional information below.
I installed the drivers a few times to get them to work. Is there something I
could be doing wrong? I used the latest .pkg on the front page
```
Original issue reported on code.google.com by `smmon...@gmail.com` on 10 Jan 2013 at 1:34 | defect | computer doesn t recognize the keyboard what steps will reproduce the problem install the program plug in the keyboard what is the expected output what do you see instead no midi device detected in the midi audio settings what version of the product are you using on what operating system px tried the usb cables and made sure they work right mac os x please provide any additional information below i installed the drivers a few times to get them to work is there something i could be doing wrong i used the latest pkg on the front page original issue reported on code google com by smmon gmail com on jan at | 1 |
239,144 | 7,787,088,231 | IssuesEvent | 2018-06-06 21:07:39 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | 7.5.0 Screen turns black after initial loading screen finishes | High Priority | Open Eco from Steam => Loading Screen finishes => screen is black, should be loading up the title screen. Eco Cursor is still present though.

| 1.0 | 7.5.0 Screen turns black after initial loading screen finishes - Open Eco from Steam => Loading Screen finishes => screen is black, should be loading up the title screen. Eco Cursor is still present though.

| non_defect | screen turns black after initial loading screen finishes open eco from steam loading screen finishes screen is black should be loading up the title screen eco cursor is still present though | 0 |
379,638 | 26,379,041,847 | IssuesEvent | 2023-01-12 06:48:01 | Guila767/ModelMapper | https://api.github.com/repos/Guila767/ModelMapper | closed | Support for generic models | documentation enhancement wontfix issue | Currently, the source generator does not support mapping for generic models, even though it will be able to generate the code but not with the expected behavior | 1.0 | Support for generic models - Currently, the source generator does not support mapping for generic models, even though it will be able to generate the code but not with the expected behavior | non_defect | support for generic models currently the source generator does not support mapping for generic models even though it will be able to generate the code but not with the expected behavior | 0
24,962 | 4,155,378,987 | IssuesEvent | 2016-06-16 14:45:51 | contao/core | https://api.github.com/repos/contao/core | closed | pages always come from cache, regardless of backend login or frontend preview | defect | ## Problem description
Even if you log into the backend and/or use the frontend preview, pages are _always_ loaded from the cache, when present, under certain circumstances. (More apt description: pages are _not_ loaded from the cache under certain circumstances, when logged into the backend - see below.)
## Reproduction
1. add `var_dump($_SESSION['DISABLE_CACHE']);` [here](https://github.com/contao/core/blob/3.5.8/system/modules/core/controllers/FrontendIndex.php#L293) (or `echo 'foo';` after the condition, or whatever you like)
2. create a page named "Foo" in your site structure
3. enable caching on this page
4. log out of the backend
5. open the page "Foo" in the frontend (this puts it into the cache)
6. restart your browser
7. log into the backend again
8. open the page "Foo" __directly__ in the Frontend. Do not open any other page in the Frontend before that.
The page "Foo" will always come from the cache, no matter how many times you reload.
If you want to check the same thing with the frontend preview, you need to do the following:
1. add `var_dump($_SESSION['DISABLE_CACHE']);` [here](https://github.com/contao/core/blob/3.5.8/system/modules/core/controllers/FrontendIndex.php#L293) (or `echo 'foo';` after the condition, or whatever you like)
2. create one or more pages in your site structure
3. enable caching __on the website root__
4. log out of the backend
5. open the website in the frontend and open __all__ pages (so that all pages reside in the page cache)
6. restart your browser
7. log into the backend again
8. open any page in the frontend or use the preview mode - pages will always come from the cache
_Note:_ You need at least Contao `3.5.0` (I think) for the second reproduction steps to work, because in older versions the index page might not get properly cached.
## Cause
I have outlined the cause already here in german: https://github.com/contao/core/issues/8233#issuecomment-191749398
Usually, pages won't be loaded from the cache under a number of conditions: [FrontendIndex.php#L295](https://github.com/contao/core/blob/3.5.8/system/modules/core/controllers/FrontendIndex.php#L295)
```php
// Build the page if a user is (potentially) logged in or there is POST data
if (!empty($_POST) || \Input::cookie('FE_USER_AUTH') || \Input::cookie('FE_AUTO_LOGIN') || $_SESSION['DISABLE_CACHE'] || isset($_SESSION['LOGIN_ERROR']) || \Config::get('debugMode'))
{
return;
}
```
One of these conditions is the session variable
```php
$_SESSION['DISABLE_CACHE']
```
This variable will __only__ be set to true in the function [Frontend::getLoginStatus](https://github.com/contao/core/blob/3.5.8/system/modules/core/classes/Frontend.php#L503)
```php
protected function getLoginStatus($strCookie)
```
and this function is __only__ called by [FrontendIndex::__construct](https://github.com/contao/core/blob/3.5.8/system/modules/core/controllers/FrontendIndex.php#L35), but only __after__
```php
$this->outputFromCache();
```
Thus, if you never, ever visit a page in the frontend which is __not already present__ in the page cache, the variable `$_SESSION['DISABLE_CACHE']` will never be set to true and thus you will always receive the page from the cache, even when logged into the backend. | 1.0 | pages always come from cache, regardless of backend login or frontend preview - ## Problem description
Even if you log into the backend and/or use the frontend preview, pages are _always_ loaded from the cache, when present, under certain circumstances. (More apt description: pages are _not_ loaded from the cache under certain circumstances, when logged into the backend - see below.)
## Reproduction
1. add `var_dump($_SESSION['DISABLE_CACHE']);` [here](https://github.com/contao/core/blob/3.5.8/system/modules/core/controllers/FrontendIndex.php#L293) (or `echo 'foo';` after the condition, or whatever you like)
2. create a page named "Foo" in your site structure
3. enable caching on this page
4. log out of the backend
5. open the page "Foo" in the frontend (this puts it into the cache)
6. restart your browser
7. log into the backend again
8. open the page "Foo" __directly__ in the Frontend. Do not open any other page in the Frontend before that.
The page "Foo" will always come from the cache, no matter how many times you reload.
If you want to check the same thing with the frontend preview, you need to do the following:
1. add `var_dump($_SESSION['DISABLE_CACHE']);` [here](https://github.com/contao/core/blob/3.5.8/system/modules/core/controllers/FrontendIndex.php#L293) (or `echo 'foo';` after the condition, or whatever you like)
2. create one or more pages in your site structure
3. enable caching __on the website root__
4. log out of the backend
5. open the website in the frontend and open __all__ pages (so that all pages reside in the page cache)
6. restart your browser
7. log into the backend again
8. open any page in the frontend or use the preview mode - pages will always come from the cache
_Note:_ You need at least Contao `3.5.0` (I think) for the second reproduction steps to work, because in older versions the index page might not get properly cached.
## Cause
I have outlined the cause already here in german: https://github.com/contao/core/issues/8233#issuecomment-191749398
Usually, pages won't be loaded from the cache under a number of conditions: [FrontendIndex.php#L295](https://github.com/contao/core/blob/3.5.8/system/modules/core/controllers/FrontendIndex.php#L295)
```php
// Build the page if a user is (potentially) logged in or there is POST data
if (!empty($_POST) || \Input::cookie('FE_USER_AUTH') || \Input::cookie('FE_AUTO_LOGIN') || $_SESSION['DISABLE_CACHE'] || isset($_SESSION['LOGIN_ERROR']) || \Config::get('debugMode'))
{
return;
}
```
One of these conditions is the session variable
```php
$_SESSION['DISABLE_CACHE']
```
This variable will __only__ be set to true in the function [Frontend::getLoginStatus](https://github.com/contao/core/blob/3.5.8/system/modules/core/classes/Frontend.php#L503)
```php
protected function getLoginStatus($strCookie)
```
and this function is __only__ called by [FrontendIndex::__construct](https://github.com/contao/core/blob/3.5.8/system/modules/core/controllers/FrontendIndex.php#L35), but only __after__
```php
$this->outputFromCache();
```
Thus, if you never, ever visit a page in the frontend which is __not already present__ in the page cache, the variable `$_SESSION['DISABLE_CACHE']` will never be set to true and thus you will always receive the page from the cache, even when logged into the backend. | defect | pages always come from cache regardless of backend login or frontend preview problem description even if you log into the backend and or use the frontend preview pages are always loaded from the cache when present under certain circumstances more apt description pages are not loaded from the cache under certain circumstances when logged into the backend see below reproduction add var dump session or echo foo after the condition or whatever you like create a page named foo in your site structure enable caching on this page log out of the backend open the page foo in the frontend this puts it into the cache restart your browser log into the backend again open the page foo directly in the frontend do not open any other page in the frontend before that the page foo will always come from the cache no matter how many times you reload if you want to check the same thing with the frontend preview you need to do the following add var dump session or echo foo after the condition or whatever you like create one or more pages in your site structure enable caching on the website root log out of the backend open the website in the frontend and open all pages so that all pages reside in the page cache restart your browser log into the backend again open any page in the frontend or use the preview mode pages will always come from the cache note you need at least contao i think for the second reproduction steps to work because in older versions the index page might not get properly cached cause i have outlined the cause already here in german usually pages won t be loaded from the cache under a number of conditions php build the page if a user is potentially logged in or there is post data if empty post input 
cookie fe user auth input cookie fe auto login session isset session config get debugmode return one of these conditions is the session variable php session this variable will only be set to true in the function php protected function getloginstatus strcookie and this function is only called by but only after php this outputfromcache thus if you never ever visit a page in the frontend which is not already present in the page cache the variable session will never be set to true and thus you will always receive the page from the cache even when logged into the backend | 1 |
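The Contao report above pins the bug to ordering: the cache lookup runs before the login check that sets `DISABLE_CACHE`, so a session in which every visited page is already cached never gets the flag set. A toy model in Python (hypothetical names, not Contao's real API) reproduces that behavior:

```python
# Toy model of the FrontendIndex flow described in the report
# (hypothetical names, not Contao's actual classes). The bug: the
# login check that sets DISABLE_CACHE runs only AFTER the cache
# lookup, so if every request hits the cache, the flag is never set
# and a logged-in user keeps receiving cached pages.

session = {"DISABLE_CACHE": False}
cache = {"foo": "<cached page foo>"}

def get_login_status(logged_in: bool) -> None:
    # The analogue of Frontend::getLoginStatus() — runs too late.
    if logged_in:
        session["DISABLE_CACHE"] = True

def handle_request(page: str, logged_in: bool) -> str:
    # 1. outputFromCache() runs first ...
    if page in cache and not session["DISABLE_CACHE"]:
        return cache[page]          # early return: login check never runs
    # 2. ... only uncached requests ever reach the login check.
    get_login_status(logged_in)
    return f"<fresh page {page}>"

# A logged-in user requesting only cached pages is always served stale.
assert handle_request("foo", logged_in=True) == "<cached page foo>"
assert session["DISABLE_CACHE"] is False
# The first *uncached* request finally flips the flag ...
handle_request("bar", logged_in=True)
# ... and only then does the cached page get rebuilt.
assert handle_request("foo", logged_in=True) == "<fresh page foo>"
```

Moving the login check ahead of the cache lookup in this model (the direction the report implies) would make the first assertion fail, which is the desired behavior.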
45,983 | 13,055,832,925 | IssuesEvent | 2020-07-30 02:52:13 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | make I3Pruner and I3TimeShifter not log-fatal if their input isn't in the frame (Trac #374) | Incomplete Migration Migrated from Trac combo simulation defect | Migrated from https://code.icecube.wisc.edu/ticket/374
```json
{
"status": "closed",
"changetime": "2012-03-17T23:05:58",
"description": "The default behavior of I3Pruner and I3TimeShifter (as called by the trigger-sim traySegement) is to look for both InIceRawData and IceTopRawData. For some non-standards sets, IceTopRawData is missing, which causes the standard run scripts to exit when they reach this point. A better solution would be to get these two modules to log-warn if they can't find the inputs. \n\n",
"reporter": "gladstone",
"cc": "",
"resolution": "fixed",
"_ts": "1332025558000000",
"component": "combo simulation",
"summary": "make I3Pruner and I3TimeShifter not log-fatal if their input isn't in the frame",
"priority": "normal",
"keywords": "",
"time": "2012-03-07T19:07:43",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| 1.0 | make I3Pruner and I3TimeShifter not log-fatal if their input isn't in the frame (Trac #374) - Migrated from https://code.icecube.wisc.edu/ticket/374
```json
{
"status": "closed",
"changetime": "2012-03-17T23:05:58",
"description": "The default behavior of I3Pruner and I3TimeShifter (as called by the trigger-sim traySegement) is to look for both InIceRawData and IceTopRawData. For some non-standards sets, IceTopRawData is missing, which causes the standard run scripts to exit when they reach this point. A better solution would be to get these two modules to log-warn if they can't find the inputs. \n\n",
"reporter": "gladstone",
"cc": "",
"resolution": "fixed",
"_ts": "1332025558000000",
"component": "combo simulation",
"summary": "make I3Pruner and I3TimeShifter not log-fatal if their input isn't in the frame",
"priority": "normal",
"keywords": "",
"time": "2012-03-07T19:07:43",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
| defect | make and not log fatal if their input isn t in the frame trac migrated from json status closed changetime description the default behavior of and as called by the trigger sim traysegement is to look for both inicerawdata and icetoprawdata for some non standards sets icetoprawdata is missing which causes the standard run scripts to exit when they reach this point a better solution would be to get these two modules to log warn if they can t find the inputs n n reporter gladstone cc resolution fixed ts component combo simulation summary make and not log fatal if their input isn t in the frame priority normal keywords time milestone owner olivas type defect | 1 |
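The ticket above asks modules to log-warn rather than log-fatal when an expected frame object (e.g. `IceTopRawData`) is absent. The requested pattern can be sketched generically — this is an illustrative Python example, not the actual IceCube module code; `process_frame` and its key names are assumptions for the demo:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("pruner")

def process_frame(frame: dict, keys=("InIceRawData", "IceTopRawData")):
    """Process whichever inputs are present; warn (don't abort) on missing ones."""
    processed = []
    for key in keys:
        if key not in frame:
            # The ticket's fix: this used to be a log-fatal that killed the run.
            log.warning("input %r not in frame, skipping", key)
            continue
        processed.append(key)
    return processed

# A frame from a non-standard set with no IceTop data still gets processed.
assert process_frame({"InIceRawData": [1, 2, 3]}) == ["InIceRawData"]
```

With this shape, the standard run scripts described in the ticket would keep going past frames that lack `IceTopRawData` instead of exiting.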
1,705 | 2,603,969,814 | IssuesEvent | 2015-02-24 18:59:58 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | 沈阳阴茎长疙瘩怎么回事 | auto-migrated Priority-Medium Type-Defect | ```
沈阳阴茎长疙瘩怎么回事〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療�
��位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝�
��的歷史悠久、設備精良、技術權威、專家云集,是預防、保
健、醫療、科研康復為一體的綜合性醫院。是國家首批公立��
�等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學�
��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍
空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集��
�二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:24 | 1.0 | 沈阳阴茎长疙瘩怎么回事 - ```
沈阳阴茎长疙瘩怎么回事〓沈陽軍區政治部醫院性病〓TEL:024-31023308〓成立于1946年,68年專注于性傳播疾病的研究和治療�
��位于沈陽市沈河區二緯路32號。是一所與新中國同建立共輝�
��的歷史悠久、設備精良、技術權威、專家云集,是預防、保
健、醫療、科研康復為一體的綜合性醫院。是國家首批公立��
�等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學�
��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍
空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集��
�二等功。
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:24 | defect | 沈阳阴茎长疙瘩怎么回事 沈阳阴茎长疙瘩怎么回事〓沈陽軍區政治部醫院性病〓tel: 〓 , � �� 。是一所與新中國同建立共輝� ��的歷史悠久、設備精良、技術權威、專家云集,是預防、保 健、醫療、科研康復為一體的綜合性醫院。是國家首批公立�� �等部隊醫院、全國首批醫療規范定點單位,是第四軍醫大學� ��東南大學等知名高等院校的教學醫院。曾被中國人民解放軍 空軍后勤部衛生部評為衛生工作先進單位,先后兩次榮立集�� �二等功。 original issue reported on code google com by gmail com on jun at | 1 |
68,024 | 17,116,761,947 | IssuesEvent | 2021-07-11 14:15:00 | opencv/opencv | https://api.github.com/repos/opencv/opencv | closed | I got "ibopencv_cudaarithm.so.4.5: undefined symbol" when I import cv2 | category: build/install category: gpu/cuda (contrib) incomplete question (invalid tracker) | I just install opencv 4.5.2 from source on my ubuntu 20.04, when I import cv2. Igot the following problem:
(base) jim@jim-AERO-15-X9:~/anaconda3/lib/python3.8/site-packages/cv2/python-3.8$ python
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/libopencv_cudaarithm.so.4.5: undefined symbol: _ZN2cv4cuda14StreamAccessor9getStreamERKNS0_6StreamE
Could you help me to solve this? Thanks a lot!! | 1.0 | I got "ibopencv_cudaarithm.so.4.5: undefined symbol" when I import cv2 - I just install opencv 4.5.2 from source on my ubuntu 20.04, when I import cv2. Igot the following problem:
(base) jim@jim-AERO-15-X9:~/anaconda3/lib/python3.8/site-packages/cv2/python-3.8$ python
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /usr/local/lib/libopencv_cudaarithm.so.4.5: undefined symbol: _ZN2cv4cuda14StreamAccessor9getStreamERKNS0_6StreamE
Could you help me to solve this? Thanks a lot!! | non_defect | i got ibopencv cudaarithm so undefined symbol when i import i just install opencv from source on my ubuntu when i import igot the following problem base jim jim aero lib site packages python python python default sep anaconda inc on linux type help copyright credits or license for more information import traceback most recent call last file line in importerror usr local lib libopencv cudaarithm so undefined symbol could you help me to solve this thanks a lot | 0 |
5,163 | 5,532,887,463 | IssuesEvent | 2017-03-21 11:52:46 | NixOS/security | https://api.github.com/repos/NixOS/security | opened | Roundup: [oss-security] audiofile: heap-based buffer overflow in alaw2linear_buf (G711.cpp) | Security | Here is a vulnerability from the oss-security mailing list
for [Vulnerability Roundup 26](https://github.com/NixOS/nixpkgs/issues/24161).
# Instructions:
## Identification
Identify if we have the software, in 16.09, 17.03, and unstable.
Then determine if we are vulnerable, and make a comment with
your findings. It can also be helpful to specify if you think there is
a patch, or if it can be fixed via a general update.
Example:
```
unstable: we are not vulnerable (link to the package)
17.03: we are vulnerable (link to the package)
16.09: we don't have it packaged
```
**IMPORTANT**: If you believe there are possibly related issues, bring
them up on the _parent_ issue!
## Patching
Start by commenting on this issue saying you're working on a patch.
This way, we don't duplicate work.
If you open a pull request, tag this issue _and_ the master issue
for the roundup.
If you commit the patch directly to a branch, please leave a comment
on this issue with the branch and the commit hash, example:
```
fixed:
release-16.09: abc123
```
## Upon Completion ...
- [ ] Update Graham's database
## Info
_Triage Indicator:_
```
-needs-triage +roundup26 thread:0000000000003e19 # [oss-security] audiofile: heap-based buffer overflow in alaw2linear_buf (G711.cpp)
```
- File Search: https://search.nix.gsc.io/?q=audiofile&i=fosho&repos=nixos-nixpkgs
- GitHub Search: https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=audiofile+in%3Apath&type=Code
---
##### "Agostino Sarubbo" <ago-at-gentoo.org>, `591403.244153725-sendEmail@localhost`
Description:
audiofile is a C-based library for reading and writing audio files in many common formats.
Fuzzing it discovered a heap overflow.
The complete ASan output:
# sfconvert @@ out.mp3 format aiff
==2480==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7f5eb894d800 at pc 0x7f5eb85a699f bp 0x7ffe19064df0 sp 0x7ffe19064de8
WRITE of size 2 at 0x7f5eb894d800 thread T0
#0 0x7f5eb85a699e in alaw2linear_buf(unsigned char const*, short*, int) /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/G711.cpp:54:13
#1 0x7f5eb85a699e in G711::runPull() /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/G711.cpp:209
#2 0x7f5eb858d05a in afReadFrames /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/data.cpp:222:14
#3 0x50bbeb in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:340:29
#4 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17
#5 0x7f5eb766278f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
#6 0x419f48 in _init (/usr/bin/sfconvert+0x419f48)
0x7f5eb894d800 is located 0 bytes to the right of 393216-byte region [0x7f5eb88ed800,0x7f5eb894d800)
allocated by thread T0 here:
#0 0x4d2d08 in malloc /tmp/portage/sys-devel/llvm-3.9.1-r1/work/llvm-3.9.1.src/projects/compiler-rt/lib/asan/asan_malloc_linux.cc:64
#1 0x50bb48 in copyaudiodata /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:327:17
#2 0x50b050 in main /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/sfcommands/sfconvert.c:248:17
#3 0x7f5eb766278f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
SUMMARY: AddressSanitizer: heap-buffer-overflow /tmp/portage/media-libs/audiofile-0.3.6-r1/work/audiofile-0.3.6/libaudiofile/modules/G711.cpp:54:13 in alaw2linear_buf(unsigned char
const*, short*, int)
Shadow bytes around the buggy address:
0x0fec57121ab0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0fec57121ac0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0fec57121ad0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0fec57121ae0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0fec57121af0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0fec57121b00:[fa]fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0fec57121b10: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0fec57121b20: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0fec57121b30: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0fec57121b40: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0fec57121b50: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==2480==ABORTING
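The crash above boils down to a decode loop writing one 16-bit sample per input byte into a destination buffer too small for the requested frame count. As an illustration only — a pure-Python stand-in for the C++ in G711.cpp, not libaudiofile's actual code — a bounds-checked decode might look like:

```python
def alaw2linear(a_val: int) -> int:
    """Expand one ITU-T G.711 A-law byte to 16-bit linear PCM (CCITT reference logic)."""
    a_val ^= 0x55                  # undo the even-bit inversion applied on encode
    t = (a_val & 0x0F) << 4        # quantized mantissa
    seg = (a_val & 0x70) >> 4      # segment number (exponent)
    if seg == 0:
        t += 8
    elif seg == 1:
        t += 0x108
    else:
        t = (t + 0x108) << (seg - 1)
    return t if (a_val & 0x80) else -t


def alaw_decode_into(src: bytes, dst: list, count: int) -> None:
    """Decode `count` A-law bytes from src into dst, refusing to overrun dst.

    The reported overflow happens because the C decoder trusts `count`
    even when it exceeds the capacity of the caller-allocated buffer.
    """
    if count > len(src) or count > len(dst):
        raise ValueError("frame count exceeds source or destination capacity")
    for i in range(count):
        dst[i] = alaw2linear(src[i])
```

A check along these lines — validating the frame count against the allocated buffer before decoding — is the kind of fix the reported crash calls for.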
Affected version:
0.3.6
Fixed version:
N/A
Commit fix:
N/A
Credit:
This bug was discovered by Agostino Sarubbo of Gentoo.
CVE:
N/A
Reproducer:
https://github.com/asarubbo/poc/blob/master/00184-audiofile-heapoverflow-alaw2linear_buf
Timeline:
2017-02-20: bug discovered and reported to upstream
2017-02-20: blog post about the issue
Note:
This bug was found with American Fuzzy Lop.
Permalink:
https://blogs.gentoo.org/ago/2017/02/20/audiofile-heap-based-buffer-overflow-in-alaw2linear_buf-g711-cpp
--
Agostino Sarubbo
Gentoo Linux Developer
---
##### Agostino Sarubbo <ago-at-gentoo.org>, `3284786.rxzJs1xbWv@blackgate`
On Sunday 26 February 2017 11:50:44 Agostino Sarubbo wrote:
> Permalink:
> https://blogs.gentoo.org/ago/2017/02/20/audiofile-heap-based-buffer-overflow-in-alaw2linear_buf-g711-cpp
This is CVE-2017-6830
--
Agostino Sarubbo
Gentoo Linux Developer
---
| True | non_defect | 0
26,008 | 12,822,124,634 | IssuesEvent | 2020-07-06 09:16:53 | WordPress/gutenberg | https://api.github.com/repos/WordPress/gutenberg | reopened | Bundling Front-end Assets | Framework Needs Dev [Status] In Progress [Type] Overview [Type] Performance | Explore ways in which front-end styles for blocks could be assembled based on _which blocks are used_ in a given page response.
*Note:* this is a deviation from how themes and resources have operated so far, since a theme usually adds a single stylesheet including all possible HTML a user might add, as well as handling for widgets, etc, that might never be present on a page.
The granularity of blocks, and our ability to tell on the server which blocks are being used, should open up opportunities for being more intelligent about enqueueing these assets.
### Considerations
This would need to have proper hooks for disabling and extending, as there will be issues with async loading of content (like infinite scroll) and caching mechanisms if the bundle is dynamic.
Furthermore, it should at least consider how theme supplied styles for blocks might interop. See https://github.com/WordPress/gutenberg/issues/5360 | True | non_defect | 0
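The idea in the Gutenberg record above — enqueue only the stylesheets for blocks that actually appear in the rendered content — can be sketched as follows. Python is used purely for illustration; a real implementation would be PHP hooked into WordPress, and the handle map below is an assumption. Block usage is detected from Gutenberg's serialized block delimiter comments (`<!-- wp:paragraph -->`):

```python
import re

# Hypothetical mapping from block name to its stylesheet handle.
BLOCK_STYLES = {
    "core/paragraph": "wp-block-paragraph.css",
    "core/gallery": "wp-block-gallery.css",
    "core/quote": "wp-block-quote.css",
}

# Matches opening block delimiters only; closers ("<!-- /wp:...") are skipped.
BLOCK_COMMENT = re.compile(r"<!--\s*wp:([a-z0-9/-]+)")


def styles_for(content: str) -> list:
    """Return the stylesheets needed by the blocks present in `content`."""
    used = []
    for name in BLOCK_COMMENT.findall(content):
        # Core blocks serialize without the "core/" namespace prefix.
        full = name if "/" in name else f"core/{name}"
        sheet = BLOCK_STYLES.get(full)
        if sheet and sheet not in used:
            used.append(sheet)
    return used
```

Because the server can inspect the post content before responding, this per-page bundle can be computed at render time — though, as the issue notes, async-loaded content and caching layers would need hooks to opt out.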
274,807 | 30,178,728,778 | IssuesEvent | 2023-07-04 07:23:57 | joshnewton31080/WebGoat | https://api.github.com/repos/joshnewton31080/WebGoat | opened | CVE-2022-1259 (High) detected in undertow-core-2.2.10.Final.jar | Mend: dependency security vulnerability | ## CVE-2022-1259 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>undertow-core-2.2.10.Final.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /webgoat-integration-tests/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/undertow/undertow-core/2.2.10.Final/undertow-core-2.2.10.Final.jar</p>
<p>
Dependency Hierarchy:
- webwolf-8.2.3-SNAPSHOT.jar (Root Library)
- spring-boot-starter-undertow-2.5.4.jar
- :x: **undertow-core-2.2.10.Final.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/joshnewton31080/WebGoat/commit/e7564c1173880dc1f705b984990b5ab4330140e2">e7564c1173880dc1f705b984990b5ab4330140e2</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was found in Undertow. A potential security issue in flow control handling by the browser over HTTP/2 may cause overhead or a denial of service in the server. This flaw exists because of an incomplete fix for CVE-2021-3629.
<p>Publish Date: 2022-08-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-1259>CVE-2022-1259</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=2072339">https://bugzilla.redhat.com/show_bug.cgi?id=2072339</a></p>
<p>Release Date: 2022-08-31</p>
<p>Fix Resolution: io.undertow:undertow-core:2.3.6.Final</p>
</p>
</details>
<p></p>
| True | non_defect | 0
802,009 | 28,565,784,806 | IssuesEvent | 2023-04-21 01:55:02 | nimblehq/ic-flutter-taher-toby | https://api.github.com/repos/nimblehq/ic-flutter-taher-toby | closed | [UI] As a user, I can see the survey questions | type : feature priority : medium @0.3.0 @0.4.0 | ## Why
Show the survey question, so the user can see what the question is about.
## Acceptance Criteria
- Add navigation from the start survey button to this screen
- Add a close button with navigation back to the home screen (skip dialog)
- Add a next button with navigation to the next question
- Add a submit button (if it's the last question in the list)
- Show a horizontal page viewer (snapped)
- Show the question number (Ex: 1/5)
- Show the question
- Don't display the answers yet
## Design
[Figma: Question](https://www.figma.com/file/GjRPOjDyZ6f4EDL3wKarRK/Challenge---Mobile-App?node-id=34-765&t=uagdNoJjVpHWN6Ve-0)

## Resources
N/A | 1.0 | non_defect | 0
67,411 | 20,961,610,716 | IssuesEvent | 2022-03-27 21:49:08 | abedmaatalla/sipdroid | https://api.github.com/repos/abedmaatalla/sipdroid | closed | SipDroid not accepting/processing CANCEL | Priority-Medium Type-Defect auto-migrated | ```
What steps will reproduce the problem?
1. register the phone against NetSapiens NMS
2. configure a pre existing extension for Synchronous Calling (Follow Me)
3. call the parent linked extension from another phone
4. don't answer the call, hangup the calling phone
What is the expected output? What do you see instead?
I would expect the phone to stop ringing, instead it continues. When I look at
the trace, I see that the NMS switch is sending a CANCEL to the phone, but it's
ignoring it, causing retransmissions, and after several, sipdroid responds with
a 403 Forbidden.
You should be able to see the trace here:
https://boss.netsapiens.com/nsflow/trace/call_trace_admin_versature_com_1351790977/index_with_frames.html
What version of the product are you using? On what device/operating system?
Version 2.7 Beta, on a Samsung/Google Nexus 7 running Android 4.1.2 (Jelly Bean)
Which SIP server are you using? What happens with PBXes?
NetSapiens SiPBx 1-1215c. Haven't tried with PBXes, but I suspect this problem
will not occur.
Which type of network are you using?
Tried this and seen the same behaviour over WIND 3G, and via wifi.
Please provide any additional information below.
I suspect this is an issue in the signalling from NetSapiens, since I have used
this phone in a similar configuration against Asterisk 1.8.x with no issues.
Either NetSapiens or SipDroid is not adhering to the RFC properly, and I'd like to
be able to identify which one, so this can be escalated to NetSapiens if
necessary.
```
Original issue reported on code.google.com by `squig...@versature.com` on 1 Nov 2012 at 5:52
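For the RFC-adherence question raised above: RFC 3261 §9.2 has the UAS match a CANCEL to the transaction it cancels using the same matching rules as the original request (in practice the top Via branch parameter), answer the CANCEL with 200, and answer the still-pending INVITE with 487. A minimal sketch of that matching logic — illustrative only, not SipDroid's or NetSapiens' code, with messages reduced to their branch parameter:

```python
# Pending INVITE transactions keyed by the top Via branch parameter,
# which RFC 3261 requires to be unique per transaction.
pending_invites = {}


def on_invite(branch: str) -> None:
    """Record an incoming INVITE as a pending (ringing) transaction."""
    pending_invites[branch] = "ringing"


def on_cancel(branch: str) -> list:
    """Return the responses a UAS should emit for an incoming CANCEL."""
    if branch not in pending_invites:
        # No matching transaction: RFC 3261 section 9.2 mandates a 481.
        return ["481 Call/Transaction Does Not Exist (CANCEL)"]
    # Match found: 200 for the CANCEL itself, 487 for the pending INVITE.
    del pending_invites[branch]
    return ["200 OK (CANCEL)", "487 Request Terminated (INVITE)"]
```

Against a trace like the one linked above, comparing the CANCEL's top Via branch with the INVITE's is the quickest way to tell which side is off: if they differ, the server's CANCEL is malformed; if they match and the phone still ignores it, the client is at fault.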
| 1.0 | defect | 1
702,546 | 24,125,156,282 | IssuesEvent | 2022-09-20 23:07:10 | apache/hudi | https://api.github.com/repos/apache/hudi | closed | [SUPPORT]Caused by: java.lang.IllegalArgumentException at org.apache.hudi.common.util.ValidationUtils.checkArgument(ValidationUtils.java:31) | priority:major spark writer-core multi-writer | **_Tips before filing an issue_**
- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
- YES
- Join the mailing list to engage in conversations and get faster support at dev-subscribe@hudi.apache.org.
- If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
**Describe the problem you faced**
A clear and concise description of the problem.
While writing incremental data concurrently, we get the error below. I also noticed in [HUDI-2641](https://issues.apache.org/jira/browse/HUDI-2641) that this was fixed in version 0.10.0, but we are using 0.10.1, `hudi-spark3.1.2-bundle_2.12-0.10.1.jar` with `spark-avro_2.12-3.1.2.jar`:
```bash
Caused by: java.lang.IllegalArgumentException
at org.apache.hudi.common.util.ValidationUtils.checkArgument(ValidationUtils.java:31)
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.transitionState(HoodieActiveTimeline.java:466)
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.transitionRequestedToInflight(HoodieActiveTimeline.java:528)
at org.apache.hudi.table.action.commit.BaseCommitActionExecutor.saveWorkloadProfileMetadataToInflight(BaseCommitActionExecutor.java:115)
at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.execute(BaseSparkCommitActionExecutor.java:162)
at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.execute(BaseSparkCommitActionExecutor.java:82)
at org.apache.hudi.table.action.commit.AbstractWriteHelper.write(AbstractWriteHelper.java:56)
... 45 more
```
**To Reproduce**
Steps to reproduce the behavior:
1. `append` or `overwrite` data to a Hudi table concurrently from multiple writers
**Expected behavior**
We expect writes to the table to complete without exceptions or errors.
**Environment Description**
* Hudi version : 0.10.1
* Spark version : 3.1
* Storage (HDFS/S3/GCS..) : S3
* Running on Docker? (yes/no) : no
**Additional context**
We are running this hudi merge via glue jobs and using below jars:
```bash
1. calcite-core-1.16.0.jar
2. hudi-spark3.1.2-bundle_2.12-0.10.1.jar
3. spark-avro_2.12/3.1.2/spark-avro_2.12-3.1.2.jar
```
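The `ValidationUtils.checkArgument` failure above fires during the requested→inflight timeline transition, which is exactly where concurrent writers collide when no lock coordination is in place. Multi-writer ingestion in Hudi 0.10.x generally requires enabling optimistic concurrency control together with an external lock provider. A hedged sketch of the relevant writer options follows (the table name, ZooKeeper host/port, and lock paths are placeholders, and the ZooKeeper provider is just one of the supported lock providers):

```python
# Hedged sketch: Hudi 0.10.x writer options enabling optimistic concurrency
# control, so concurrent writers coordinate through a lock provider instead
# of racing on timeline state transitions.  Host/port/table values are
# placeholders for illustration only.
hudi_options = {
    "hoodie.table.name": "my_table",
    "hoodie.write.concurrency.mode": "optimistic_concurrency_control",
    "hoodie.cleaner.policy.failed.writes": "LAZY",
    "hoodie.write.lock.provider":
        "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider",
    "hoodie.write.lock.zookeeper.url": "zk-host",
    "hoodie.write.lock.zookeeper.port": "2181",
    "hoodie.write.lock.zookeeper.lock_key": "my_table",
    "hoodie.write.lock.zookeeper.base_path": "/hudi_locks",
}

# Applied to a Spark DataFrame write, e.g.:
# df.write.format("hudi").options(**hudi_options).mode("append").save(base_path)
```

Without these options, each writer assumes it is the only one mutating the timeline, so two overlapping commits can both attempt the same instant transition.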
**Stacktrace**
```
2022-08-21 03:47:44,696 ERROR [main] glue.ProcessLauncher (Logging.scala:logError(73)): Error from Python:Traceback (most recent call last):
File "/tmp/upsert-delete.py", line 267, in <module>
main()
File "/tmp/upsert-delete.py", line 254, in main
for result in executor.map(start_merging, df_prefix_map_list):
File "/usr/lib64/python3.7/concurrent/futures/_base.py", line 598, in result_iterator
yield fs.pop().result()
File "/usr/lib64/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/usr/lib64/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/usr/lib64/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/tmp/upsert-delete.py", line 246, in start_merging
set_delete_markers(moids_df, combined_conf)
File "/tmp/upsert-delete.py", line 128, in set_delete_markers
.mode('append') \
File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1107, in save
self._jwrite.save()
File "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
return f(*a, **kw)
File "/opt/amazon/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o1573.save.
: org.apache.hudi.exception.HoodieUpsertException: Failed to upsert for commit time 20220821034051823
at org.apache.hudi.table.action.commit.AbstractWriteHelper.write(AbstractWriteHelper.java:63)
at org.apache.hudi.table.action.commit.SparkUpsertCommitActionExecutor.execute(SparkUpsertCommitActionExecutor.java:46)
at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.upsert(HoodieSparkCopyOnWriteTable.java:119)
at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.upsert(HoodieSparkCopyOnWriteTable.java:103)
at org.apache.hudi.client.SparkRDDWriteClient.upsert(SparkRDDWriteClient.java:160)
at org.apache.hudi.DataSourceUtils.doWriteOperation(DataSourceUtils.java:217)
at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:277)
at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:164)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:185)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:223)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:220)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:181)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:134)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:133)
at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:110)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:135)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:135)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:253)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:134)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.IllegalArgumentException
at org.apache.hudi.common.util.ValidationUtils.checkArgument(ValidationUtils.java:31)
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.transitionState(HoodieActiveTimeline.java:466)
at org.apache.hudi.common.table.timeline.HoodieActiveTimeline.transitionRequestedToInflight(HoodieActiveTimeline.java:528)
at org.apache.hudi.table.action.commit.BaseCommitActionExecutor.saveWorkloadProfileMetadataToInflight(BaseCommitActionExecutor.java:115)
at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.execute(BaseSparkCommitActionExecutor.java:162)
at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.execute(BaseSparkCommitActionExecutor.java:82)
at org.apache.hudi.table.action.commit.AbstractWriteHelper.write(AbstractWriteHelper.java:56)
... 45 more
```
233,596 | 25,765,642,908 | IssuesEvent | 2022-12-09 01:26:25 | gmright2/DEFOLD_Gmright_INLINE | https://api.github.com/repos/gmright2/DEFOLD_Gmright_INLINE | opened | CVE-2022-23491 (Medium) detected in certifi-2020.4.5.1-py2.py3-none-any.whl | security vulnerability | ## CVE-2022-23491 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>certifi-2020.4.5.1-py2.py3-none-any.whl</b></p></summary>
<p>Python package for providing Mozilla's CA Bundle.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/57/2b/26e37a4b034800c960a00c4e1b3d9ca5d7014e983e6e729e33ea2f36426c/certifi-2020.4.5.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/57/2b/26e37a4b034800c960a00c4e1b3d9ca5d7014e983e6e729e33ea2f36426c/certifi-2020.4.5.1-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /Gmright-system/requirements.txt</p>
<p>Path to vulnerable library: /Gmright-system/requirements.txt</p>
<p>
Dependency Hierarchy:
- requests-2.23.0-py2.py3-none-any.whl (Root Library)
- :x: **certifi-2020.4.5.1-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gmright2/DEFOLD_Gmright_INLINE/commit/414c91fb659115b560dd639377fb1f7a92b7d9df">414c91fb659115b560dd639377fb1f7a92b7d9df</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Certifi is a curated collection of Root Certificates for validating the trustworthiness of SSL certificates while verifying the identity of TLS hosts. Certifi 2022.12.07 removes root certificates from "TrustCor" from the root store. These are in the process of being removed from Mozilla's trust store. TrustCor's root certificates are being removed pursuant to an investigation prompted by media reporting that TrustCor's ownership also operated a business that produced spyware. Conclusions of Mozilla's investigation can be found in the linked google group discussion.
<p>Publish Date: 2022-12-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-23491>CVE-2022-23491</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
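As a sanity check that the listed metrics really produce the 6.8 shown above, the CVSS v3.1 base-score formula can be evaluated directly. This is a minimal sketch with weights taken from the spec (Scope is Changed, so the Changed branch of the formula and the Changed weight for PR:High apply); it omits edge cases such as the impact ≤ 0 short-circuit:

```python
import math

# Metric weights per the CVSS v3.1 specification for the vector
# AV:N / AC:L / PR:H (Scope Changed) / UI:N / S:C / C:N / I:H / A:N.
AV, AC, PR, UI = 0.85, 0.77, 0.50, 0.85
C, I, A = 0.0, 0.56, 0.0

iss = 1 - (1 - C) * (1 - I) * (1 - A)                      # impact sub-score base
impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15  # Scope: Changed branch
exploitability = 8.22 * AV * AC * PR * UI

# "Roundup" to one decimal as the spec requires.
base_score = math.ceil(min(1.08 * (impact + exploitability), 10) * 10) / 10
print(base_score)  # → 6.8
```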
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-23491">https://www.cve.org/CVERecord?id=CVE-2022-23491</a></p>
<p>Release Date: 2022-12-07</p>
<p>Fix Resolution: certifi - 2022.12.07</p>
</p>
</details>
<p></p>
***
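To act on the suggested fix programmatically, one can compare the installed certifi version against the patched release. Below is a minimal sketch using plain tuple comparison of the dotted version strings; in real code a proper version parser (e.g. `packaging.version`) would be safer:

```python
# Hedged sketch: decide whether an installed certifi predates the release
# that removed the TrustCor roots (2022.12.07).  Dotted components are
# compared numerically, so "07" and "7" are equivalent.
FIXED = (2022, 12, 7)

def needs_upgrade(installed: str) -> bool:
    parts = tuple(int(p) for p in installed.split("."))
    return parts < FIXED

print(needs_upgrade("2020.4.5.1"))   # → True  (version in this report)
print(needs_upgrade("2022.12.07"))  # → False (already patched)
```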
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
18,022 | 3,019,971,071 | IssuesEvent | 2015-07-31 02:29:39 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | [2.7] Bug in I18n / __x / context when reading .mo file | Defect i18n | When using __x() for translations the translated string from the po file is not being used.
Consider the following line:
```php
echo __x('Stream or download audiobook/video/other product type', 'Stream or download %s »', mb_strtolower($this->Product->type($product)));
```
After running extract and translating the extracted string using PoEdit my default.po looks like the following:
```
[snip]
#: View/Themed/RwdBase/Products/customer_index.ctp:112
msgctxt "Stream or download audiobook/video/other product type"
msgid "Stream or download %s »"
msgstr "Stream eller download %s »"
[/snip]
```
If I debug the __x() / I18n::translate() call I see the following data was extracted from the generated .mo-file:
```php
if ($context) {
pr('singular: ' . $singular);
pr('context: ' . $context);
pr('domain: ' . $domain);
pr('category: ' . $_this->category);
prd($_this->_domains[$domain][$_this->_lang]);
}
```
The relevant output is:
```php
singular: Stream or download %s »
context: Stream or download audiobook/video/other product type
domain: default
category: LC_MESSAGES
[snip]
'Stream or download audiobook/video/other product typeStream or download %s »' => array(
'' => 'Stream eller download %s »'
),
[/snip]
```
Looks a bit funky to me since I18n::translate is looking for the following:
```php
if (!empty($_this->_domains[$domain][$_this->_lang][$_this->category][$singular][$context])) {
```
Either the string extracted from the .mo-file is placed wrong in the array or I18n::translate should look in:
```php
if (!empty($_this->_domains[$domain][$_this->_lang][$_this->category][$context . $singular])) {
```
My money is on the array being wrong though - can anyone confirm? | 1.0 | [2.7] Bug in I18n / __x / context when reading .mo file - When using __x() for translations the translated string from the po file is not being used.
Consider the following line:
```php
echo __x('Stream or download audiobook/video/other product type', 'Stream or download %s »', mb_strtolower($this->Product->type($product)));
```
After running extract and translating the extracted string using PoEdit my default.po looks like the following:
```
[snip]
#: View/Themed/RwdBase/Products/customer_index.ctp:112
msgctxt "Stream or download audiobook/video/other product type"
msgid "Stream or download %s »"
msgstr "Stream eller download %s »"
[/snip]
```
If I debug the __x() / I18n::translate() call I see the following data was extracted from the generated .mo-file:
```php
if ($context) {
pr('singular: ' . $singular);
pr('context: ' . $context);
pr('domain: ' . $domain);
pr('category: ' . $_this->category);
prd($_this->_domains[$domain][$_this->_lang]);
}
```
The relevant output is:
```php
singular: Stream or download %s »
context: Stream or download audiobook/video/other product type
domain: default
category: LC_MESSAGES
[snip]
'Stream or download audiobook/video/other product typeStream or download %s »' => array(
'' => 'Stream eller download %s »'
),
[/snip]
```
Looks a bit funky to me since I18n::translate is looking for the following:
```php
if (!empty($_this->_domains[$domain][$_this->_lang][$_this->category][$singular][$context])) {
```
Either the string extracted from the .mo-file is placed wrong in the array or I18n::translate should look in:
```php
if (!empty($_this->_domains[$domain][$_this->_lang][$_this->category][$context . $singular])) {
```
My money is on the array being wrong though - can anyone confirm? | defect | bug in x context when reading mo file when using x for translations the translated string from the po file is not being used consider the following line php echo x stream or download audiobook video other product type stream or download s raquo mb strtolower this product type product after running extract and translating the extract string using poedit my default po looks like the following view themed rwdbase products customer index ctp msgctxt stream or download audiobook video other product type msgid stream or download s raquo msgstr stream eller download s raquo if i debug the x translate call i see the following data was extracted from the generated mo file php if context pr singular singular pr context context pr domain domain pr category this category prd this domains the relevant output is php singular stream or download s » context stream or download audiobook video other product type domain default category lc messages stream or download audiobook video other product typestream or download s raquo array stream eller download s raquo looks a bit funky to me since translate is looking for the following php if empty this domains either the string extracted from the mo file is placed wrong in the array or translate should look in php if empty this domains my money is on the array being wrong though can anyone confirm | 1 |
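The shape mismatch this record describes can be shown without PHP: the .mo reader stored each context entry under one flat key, the context and msgid concatenated, while `I18n::translate()` walked a nested array keyed by singular and then context. Below is a Python sketch of the two access patterns, using the strings from the debug output above; the function names are illustrative, not CakePHP code.

```python
# What the .mo reader produced: one flat key with context and msgid fused together.
domains = {
    "Stream or download audiobook/video/other product typeStream or download %s »": {
        "": "Stream eller download %s »",
    },
}

singular = "Stream or download %s »"
context = "Stream or download audiobook/video/other product type"

def lookup_nested(data, singular, context):
    """The lookup translate() actually performed: data[singular][context]."""
    return data.get(singular, {}).get(context)

def lookup_flat(data, singular, context):
    """A lookup matching what the reader stored: data[context + singular][''] ."""
    return data.get(context + singular, {}).get("")

print(lookup_nested(domains, singular, context))  # → None, so the translation is never found
print(lookup_flat(domains, singular, context))    # → 'Stream eller download %s »'
```

Either side of the mismatch can be fixed, which is exactly the choice the reporter lays out: reshape what the reader stores, or make the lookup build the concatenated key.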
579,703 | 17,197,012,425 | IssuesEvent | 2021-07-16 19:04:11 | VA-Explorer/va_explorer | https://api.github.com/repos/VA-Explorer/va_explorer | opened | Tweak readability for unknown values such as dk or nan | Domain: API/ Databases Language: Python Priority: Low Source: Pilot Status: On-Deck Type: Maintainance good first issue | **What is the expected state?**
I want to understand less-defined VA properties/answers such as "DK" by having them read a more descriptive "Don't Know".
**What is the actual state?**
VAs currently list 'dk' or 'nan', or don't list anything at all, for certain properties/VA responses.
**Relevant context**
A cursory search showed `dk`, `'dk`, `'nan`, or `unknown` in these potentially relevant files:
- `va_data_management/utils/location_assignment.py`
- `va_data_management/utils/validate.py`
- `va_data_management/tests/test_loading.py`
- `va_data_management/views.py`
- `home/views.py`
- `users/utils/field_worker_linking.py`
- `va_analytics/utils/plotting.py`
| 1.0 | Tweak readability for unknown values such as dk or nan - **What is the expected state?**
I want to understand less-defined VA properties/answers such as "DK" by having them read a more descriptive "Don't Know".
**What is the actual state?**
VAs currently list 'dk' or 'nan', or don't list anything at all, for certain properties/VA responses.
**Relevant context**
A cursory search showed `dk`, `'dk`, `'nan`, or `unknown` in these potentially relevant files:
- `va_data_management/utils/location_assignment.py`
- `va_data_management/utils/validate.py`
- `va_data_management/tests/test_loading.py`
- `va_data_management/views.py`
- `home/views.py`
- `users/utils/field_worker_linking.py`
- `va_analytics/utils/plotting.py`
| non_defect | tweak readability for unknown values such as dk or nan what is the expected state i want to understand less defined va properties answers such as dk by having them read a more descriptive don t know what is the actual state vas currently list dk nan or doesn t list anything at all for certain properties va responses relevant context a cursory search showed dk dk nan or unknown in these potentially relevant files va data management utils location assignment py va data management utils validate py va data management tests test loading py va data management views py home views py users utils field worker linking py va analytics utils plotting py | 0 |
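One straightforward way to implement the request above is a display mapping applied at render time, with a fallback for genuinely missing values. This Python sketch is illustrative only; the exact labels and the mapping table are assumptions, not code from the VA Explorer repository.

```python
UNKNOWN_DISPLAY = {
    "dk": "Don't Know",
    "'dk": "Don't Know",  # the record's search also turned up a quoted 'dk variant
    "nan": "Unknown",
    "'nan": "Unknown",
    "": "Unknown",
}

def display_value(raw):
    """Map terse or missing VA answers to reader-friendly labels; pass real answers through."""
    if raw is None:
        return "Unknown"
    key = str(raw).strip().lower()
    return UNKNOWN_DISPLAY.get(key, str(raw))

print(display_value("DK"))      # → "Don't Know"
print(display_value(None))      # → "Unknown"
print(display_value("female"))  # → "female", unchanged
```

Keeping the mapping in one table means the views and plotting utilities listed in the record can share a single source of truth for these labels.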
19,783 | 13,452,092,217 | IssuesEvent | 2020-09-08 21:27:02 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | closed | Building project with PreserveCompilationContext and PreserveCompilationReferences set to true, creates .dlls in refs folder without modified date | area-infrastructure | <!--
More information on our issue management policies can be found here: https://aka.ms/aspnet/issue-policies
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting **non-security** bugs and feature requests.
If you believe you have an issue that affects the SECURITY of the platform, please do NOT create an issue and instead email your issue details to secure@microsoft.com. Your report may be eligible for our [bug bounty](https://www.microsoft.com/en-us/msrc/bounty-dot-net-core) but ONLY if it is reported through email.
For other types of questions, consider using [StackOverflow](https://stackoverflow.com).
-->
### Describe the bug
Building project with PreserveCompilationContext and PreserveCompilationReferences set to true, creates .dlls in refs folder without modified date.
### To Reproduce
1. Create an empty asp core project.
2. Add this code to .csproj
```
<PropertyGroup>
<PreserveCompilationReferences>true</PreserveCompilationReferences>
<PreserveCompilationContext>true</PreserveCompilationContext>
</PropertyGroup>
```
3. Build the project.
4. Go to `bin/../../refs` folder.
5. See .dll with no Modified date.

### Further technical details
- ASP.NET Core version: 3.1.6
```
.NET Core SDK (reflecting any global.json):
Version: 3.1.302
Commit: 41faccf259
Runtime Environment:
OS Name: Windows
OS Version: 10.0.17763
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\3.1.302\
Host (useful for support):
Version: 3.1.6
Commit: 3acd9b0cd1
.NET Core SDKs installed:
1.0.4 [C:\Program Files\dotnet\sdk]
2.1.4 [C:\Program Files\dotnet\sdk]
2.1.200 [C:\Program Files\dotnet\sdk]
2.1.201 [C:\Program Files\dotnet\sdk]
2.1.202 [C:\Program Files\dotnet\sdk]
2.1.302 [C:\Program Files\dotnet\sdk]
2.1.402 [C:\Program Files\dotnet\sdk]
2.1.505 [C:\Program Files\dotnet\sdk]
2.1.602 [C:\Program Files\dotnet\sdk]
2.2.105 [C:\Program Files\dotnet\sdk]
3.0.100 [C:\Program Files\dotnet\sdk]
3.1.101 [C:\Program Files\dotnet\sdk]
3.1.201 [C:\Program Files\dotnet\sdk]
3.1.301 [C:\Program Files\dotnet\sdk]
3.1.302 [C:\Program Files\dotnet\sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.9 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.2.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.9 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.1 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 1.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.1 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.5 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.6 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
### Notes
This issue does not happen when targeting older ASP.NET Core (3.0, 2.2).
I've tried specifying multiple 3.1 SDKs and the same bug happens.
Seems to be related https://github.com/dotnet/extensions/issues/2750 | 1.0 | Building project with PreserveCompilationContext and PreserveCompilationReferences set to true, creates .dlls in refs folder without modified date - <!--
More information on our issue management policies can be found here: https://aka.ms/aspnet/issue-policies
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting **non-security** bugs and feature requests.
If you believe you have an issue that affects the SECURITY of the platform, please do NOT create an issue and instead email your issue details to secure@microsoft.com. Your report may be eligible for our [bug bounty](https://www.microsoft.com/en-us/msrc/bounty-dot-net-core) but ONLY if it is reported through email.
For other types of questions, consider using [StackOverflow](https://stackoverflow.com).
-->
### Describe the bug
Building project with PreserveCompilationContext and PreserveCompilationReferences set to true, creates .dlls in refs folder without modified date.
### To Reproduce
1. Create an empty asp core project.
2. Add this code to .csproj
```
<PropertyGroup>
<PreserveCompilationReferences>true</PreserveCompilationReferences>
<PreserveCompilationContext>true</PreserveCompilationContext>
</PropertyGroup>
```
3. Build the project.
4. Go to `bin/../../refs` folder.
5. See .dll with no Modified date.

### Further technical details
- ASP.NET Core version: 3.1.6
```
.NET Core SDK (reflecting any global.json):
Version: 3.1.302
Commit: 41faccf259
Runtime Environment:
OS Name: Windows
OS Version: 10.0.17763
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\3.1.302\
Host (useful for support):
Version: 3.1.6
Commit: 3acd9b0cd1
.NET Core SDKs installed:
1.0.4 [C:\Program Files\dotnet\sdk]
2.1.4 [C:\Program Files\dotnet\sdk]
2.1.200 [C:\Program Files\dotnet\sdk]
2.1.201 [C:\Program Files\dotnet\sdk]
2.1.202 [C:\Program Files\dotnet\sdk]
2.1.302 [C:\Program Files\dotnet\sdk]
2.1.402 [C:\Program Files\dotnet\sdk]
2.1.505 [C:\Program Files\dotnet\sdk]
2.1.602 [C:\Program Files\dotnet\sdk]
2.2.105 [C:\Program Files\dotnet\sdk]
3.0.100 [C:\Program Files\dotnet\sdk]
3.1.101 [C:\Program Files\dotnet\sdk]
3.1.201 [C:\Program Files\dotnet\sdk]
3.1.301 [C:\Program Files\dotnet\sdk]
3.1.302 [C:\Program Files\dotnet\sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.9 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.2.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.2 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.9 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.2.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.1 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 1.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.2 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.17 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.2.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.1 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.5 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.0.0 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.1 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.3 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.5 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.6 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
```
### Notes
This issue does not happen when targeting older ASP.NET Core (3.0, 2.2).
I've tried specifying multiple 3.1 SDKs and the same bug happens.
Seems to be related https://github.com/dotnet/extensions/issues/2750 | non_defect | building project with preservecompilationcontext and preservecompilationreferences set to true creates dlls in refs folder without modifed date more information on our issue management policies can be found here please keep in mind that the github issue tracker is not intended as a general support forum but for reporting non security bugs and feature requests if you believe you have an issue that affects the security of the platform please do not create an issue and instead email your issue details to secure microsoft com your report may be eligible for our but only if it is reported through email for other types of questions consider using describe the bug building project with preservecompilationcontext and preservecompilationreferences set to true creates dlls in refs folder without modifed date to reproduce create an empty asp core project add this code to csproj true true build the project go to bin refs folder see dll with no modified date further technical details asp net core version net core sdk reflecting any global json version commit runtime environment os name windows os version os platform windows rid base path c program files dotnet sdk host useful for support version commit net core sdks installed net core runtimes installed microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft aspnetcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft windowsdesktop app microsoft windowsdesktop app microsoft windowsdesktop app microsoft windowsdesktop app microsoft windowsdesktop app notes this issue does not happen when targeting older asp net core i ve tried specifying multiple sdks and the same bug happens seems to be related | 0 |
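A blank Modified date like the one in the screenshot above typically means the copied reference assemblies were written with a zeroed (Unix-epoch) timestamp. The Python sketch below flags such files; the folder layout and file names are made up for the demonstration, and only the symptom is taken from the record.

```python
import os
import tempfile

def files_missing_timestamp(folder, epoch_cutoff=86400):
    """Return files whose mtime is within a day of the Unix epoch,
    which file managers may render as a blank or 1970 Modified date."""
    flagged = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isfile(path) and os.path.getmtime(path) < epoch_cutoff:
            flagged.append(name)
    return sorted(flagged)

# Demonstrate with a throwaway directory standing in for bin/../../refs.
with tempfile.TemporaryDirectory() as refs:
    for name in ("App.dll", "Lib.dll"):
        open(os.path.join(refs, name), "w").close()
    os.utime(os.path.join(refs, "Lib.dll"), (0, 0))  # mimic the zeroed timestamp
    flagged = files_missing_timestamp(refs)

print(flagged)  # → ['Lib.dll']
```

A scan like this makes it easy to confirm whether the regression affects every copied reference or only some of them.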
44,664 | 12,312,092,833 | IssuesEvent | 2020-05-12 13:27:06 | contao/contao | https://api.github.com/repos/contao/contao | closed | Contao 4.9.2: News -> target = external URL -> 1 in frontend | defect | Contao 4.9.2
News-Bundle
Target = external URL

creates a "1" in the frontend.

event_list.html5: $this->details
it echoes a bool(1)

because of ModuleEventReader.php Line 231

This "1" I do not expect.
| 1.0 | Contao 4.9.2: News -> target = external URL -> 1 in frontend - Contao 4.9.2
News-Bundle
Target = external URL

creates a "1" in the frontend.

event_list.html5: $this->details
it echoes a bool(1)

because of ModuleEventReader.php Line 231

This "1" I do not expect.
| defect | contao news target external url in frontend contao news bundle target external url creates a in the frontend event list this details it echos a bool because of moduleeventreader php line this i do not expect | 1 |
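The stray "1" this record reports is PHP's string coercion at work: `echo true` prints `1`, so a template variable that is sometimes a boolean flag and sometimes markup needs a type guard before output. The snippet below mimics that coercion in Python to make the failure and the guard concrete; the function names are illustrative, not Contao code.

```python
def php_echo(value):
    """Mimic PHP echo coercion: true -> '1', false/null -> '', everything else stringified."""
    if value is True:
        return "1"
    if value is False or value is None:
        return ""
    return str(value)

def render_details(details):
    """Guarded template output: only print details when it is actual text."""
    return details if isinstance(details, str) else ""

print(php_echo(True))                 # → '1', the unwanted output seen in the frontend
print(render_details(True))           # → '' , the flag is suppressed
print(render_details("<p>More</p>"))  # real markup still passes through
```

The guard expresses the reporter's expectation directly: a boolean returned by the reader should never reach the rendered page.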
418,245 | 28,114,126,630 | IssuesEvent | 2023-03-31 09:26:33 | SpeciLiam/ped | https://api.github.com/repos/SpeciLiam/ped | opened | Design Error | type.DocumentationBug severity.VeryLow | 
When using Windows Terminal as instructed I do not get the same coloring as shown in the user guide.
<!--session: 1680252509045-839ef1eb-f9b6-4baa-9b8d-a2762a20c76e-->
<!--Version: Web v3.4.7--> | 1.0 | Design Error - 
When using Windows Terminal as instructed I do not get the same coloring as shown in the user guide.
<!--session: 1680252509045-839ef1eb-f9b6-4baa-9b8d-a2762a20c76e-->
<!--Version: Web v3.4.7--> | non_defect | design error when using windows terminal as instructed i do not get the same coloring in as shown in the user guide | 0 |
45,553 | 12,855,817,382 | IssuesEvent | 2020-07-09 06:18:20 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | NumberHelper uses deprecated Number Utility method | defect | This is a (multiple allowed):
* [x] bug
* [x] enhancement
* CakePHP Version: >3.9.0
### What you did
Set the Default Currency for the NumberHelper in AppView: `$this->Number->defaultCurrency('EUR');`
### What happened
Raised this Deprecation Warning:
```
Deprecated (16384): Number::defaultCurrency() is deprecated. Use Number::setDefaultCurrency()/getDefaultCurrency() instead. - /usr/www/xxx/vendor/cakephp/cakephp/src/View/Helper/NumberHelper.php, line: 228
You can disable deprecation warnings by setting `Error.errorLevel` to `E_ALL & ~E_USER_DEPRECATED` in your config/app.php. [CORE/src/Core/functions.php, line 305]
```
The NumberHelper doesn't have these getter/setter methods, only the Cake\I18n\Number Utility.
### What you expected to happen
Getter/Setter methods for NumberHelper. | 1.0 | NumberHelper uses deprecated Number Utility method - This is a (multiple allowed):
* [x] bug
* [x] enhancement
* CakePHP Version: >3.9.0
### What you did
Set the Default Currency for the NumberHelper in AppView: `$this->Number->defaultCurrency('EUR');`
### What happened
Raised this Deprecation Warning:
```
Deprecated (16384): Number::defaultCurrency() is deprecated. Use Number::setDefaultCurrency()/getDefaultCurrency() instead. - /usr/www/xxx/vendor/cakephp/cakephp/src/View/Helper/NumberHelper.php, line: 228
You can disable deprecation warnings by setting `Error.errorLevel` to `E_ALL & ~E_USER_DEPRECATED` in your config/app.php. [CORE/src/Core/functions.php, line 305]
```
The NumberHelper doesn't have these getter/setter methods, only the Cake\I18n\Number Utility.
### What you expected to happen
Getter/Setter methods for NumberHelper. | defect | numberhelper uses deprecated number utility method this is a multiple allowed bug enhancement cakephp version what you did set the default currency for the numberhelper in appview this number defaultcurrency eur what happened raised this deprecation warning deprecated number defaultcurrency is deprecated use number setdefaultcurrency getdefaultcurrency instead usr www xxx vendor cakephp cakephp src view helper numberhelper php line you can disable deprecation warnings by setting error errorlevel to e all e user deprecated in your config app php the numberhelper doesn t have these getter setter methods only the cake number utility what you expected to happen getter setter methods for numberhelper | 1 |
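The change requested in this record is the standard split of a combined getter/setter into an explicit pair, keeping the old name as a shim that warns. It is sketched below in Python; the method names mirror the record's `setDefaultCurrency()`/`getDefaultCurrency()`, but the implementation is illustrative, not CakePHP's.

```python
import warnings

class Number:
    _default_currency = "USD"

    @classmethod
    def set_default_currency(cls, currency):
        cls._default_currency = currency

    @classmethod
    def get_default_currency(cls):
        return cls._default_currency

    @classmethod
    def default_currency(cls, currency=None):
        """Deprecated combined accessor, kept only for backwards compatibility."""
        warnings.warn(
            "default_currency() is deprecated; use "
            "set_default_currency()/get_default_currency() instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        if currency is not None:
            cls.set_default_currency(currency)
        return cls.get_default_currency()

Number.set_default_currency("EUR")    # new API: no warning raised
print(Number.get_default_currency())  # → 'EUR'
```

A shim like this keeps old call sites working while steering new code to the explicit pair, the same migration path the deprecation message in the record points at.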
69,601 | 22,554,120,097 | IssuesEvent | 2022-06-27 08:44:02 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | New composer has a weirdly sized clickable area | T-Defect X-Blocked X-Regression S-Minor X-Release-Blocker A-Composer O-Frequent Team: Delight | ### Steps to reproduce
1. Click the top, left, right, or bottom of the composer (within the border)
2. Composer loses or doesn't gain focus
### Outcome
#### What did you expect?
The entire area within the border to be clickable
#### What happened instead?
Only this area is clickable:

### Operating system
Windows 10
### Application version
_No response_
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No | 1.0 | New composer has a weirdly sized clickable area - ### Steps to reproduce
1. Click the top, left, right, or bottom of the composer (within the border)
2. Composer loses or doesn't gain focus
### Outcome
#### What did you expect?
The entire area within the border to be clickable
#### What happened instead?
Only this area is clickable:

### Operating system
Windows 10
### Application version
_No response_
### How did you install the app?
_No response_
### Homeserver
_No response_
### Will you send logs?
No | defect | new composer has a weirdly sized clickable area steps to reproduce click the top left right or bottom of the composer within the border composer loses or doesn t gain focus outcome what did you expect the entire area within the border to be clickable what happened instead only this area is clickable operating system windows application version no response how did you install the app no response homeserver no response will you send logs no | 1 |
16,916 | 2,962,459,650 | IssuesEvent | 2015-07-10 00:33:37 | cakephp/cakephp | https://api.github.com/repos/cakephp/cakephp | closed | timeAgoInWords() sometimes doesn't include the 'ago' word | Defect i18n | I noticed that while using the `timeAgoInWords` method of `I18n\Time`, sometimes the 'ago' word didn't show up.
To be honest, I didn't test it a lot, but I think that it is a bug on the last line of `timeAgoInWords`:
`return $relativeDate;`
I think that it should be
`return sprintf($relativeString, $relativeDate);`
I would do a pull request, but I am not 100% sure that this is the solution to the problem and I am not fluent enough with git, yet. | 1.0 | timeAgoInWords() sometimes doesn't include the 'ago' word - I noticed that while using the `timeAgoInWords` method of `I18n\Time`, sometimes the 'ago' word didn't show up.
To be honest, I didn't test it a lot, but I think that it is a bug on the last line of `timeAgoInWords`:
`return $relativeDate;`
I think that it should be
`return sprintf($relativeString, $relativeDate);`
I would do a pull request, but I am not 100% sure that this is the solution to the problem and I am not fluent enough with git, yet. | defect | timeagoinwords sometimes doesn t include the ago word i noticed that while using the timeagoinwords method of time sometimes the ago word didn t show up to be honest i didn t test it a lot but i think that it is a bug on the last line of timeagoinwords return relativedate i think that it should be return sprintf relativestring relativedate i would do a pull request but i am not sure that this is the solution to the problem and i am not fluent enough with git yet | 1 |
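The fix the reporter proposes is exactly the difference between returning the bare relative date and passing it through the surrounding format string (PHP's `sprintf`). The same difference is illustrated below with Python's printf-style formatting; the values are hypothetical.

```python
relative_string = "%s ago"   # hypothetical translated wrapper format
relative_date = "3 days"     # the computed relative interval

buggy = relative_date                    # return $relativeDate;
fixed = relative_string % relative_date  # return sprintf($relativeString, $relativeDate);

print(buggy)  # → '3 days', the 'ago' word is dropped
print(fixed)  # → '3 days ago'
```

This also explains why the bug only appears "sometimes": when the translated wrapper carries no extra words, the bare value and the formatted value look identical.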
4,214 | 2,610,089,380 | IssuesEvent | 2015-02-26 18:27:03 | chrsmith/dsdsdaadf | https://api.github.com/repos/chrsmith/dsdsdaadf | opened | 深圳痘痘哪家好 | auto-migrated Priority-Medium Type-Defect | ```
Where is the best place in Shenzhen to treat acne? [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818]
Shenzhen Hanfang Keyan is a professional acne-treatment chain built around the Korean secret formula Hanfang Keyan, a therapeutic product with a national cosmetics approval number and a premier acne remedy.
The chain pairs the Korean formula with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe color-light" device, pioneering signed-contract guaranteed treatment of pimples and acne in China and successfully clearing the acne from many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:35 | 1.0 | 深圳痘痘哪家好 - ```
Where is the best place in Shenzhen to treat acne? [Shenzhen Hanfang Keyan national hotline 400-869-1818, 24-hour QQ 4008691818]
Shenzhen Hanfang Keyan is a professional acne-treatment chain built around the Korean secret formula Hanfang Keyan, a therapeutic product with a national cosmetics approval number and a premier acne remedy.
The chain pairs the Korean formula with a professional "no-rebound" healthy acne-removal technique and an advanced "deluxe color-light" device, pioneering signed-contract guaranteed treatment of pimples and acne in China and successfully clearing the acne from many customers' faces.
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:35 | defect | 深圳痘痘哪家好 深圳痘痘哪家好【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方�� �—韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方科� ��专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康 祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治�� �粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘� �� original issue reported on code google com by szft com on may at | 1 |
132,466 | 10,756,610,342 | IssuesEvent | 2019-10-31 11:35:15 | microsoft/azure-tools-for-java | https://api.github.com/repos/microsoft/azure-tools-for-java | opened | [intelliJ][spark on arcadia]Unreasonable error message when compute is empty. | HDInsight IntelliJ Internal Test | Build:
azure-toolkit-for-intellij-2019.2.develop.1281.10-31-2019
azure-toolkit-for-intellij-LATEST-EAP-SNAPSHOT.develop.10727955.10-24-2019
Env:
19.3 EAP
19.2
Repro Steps:
1. Sign in Azure
2. Open Run/Debug Configurations window then create an arcadia configuration.
Result:
Suggest changing the error message.

3. Sign out azure.
4. Open Run/Debug Configurations window.
Result:

| 1.0 | [intelliJ][spark on arcadia]Unreasonable error message when compute is empty. - Build:
azure-toolkit-for-intellij-2019.2.develop.1281.10-31-2019
azure-toolkit-for-intellij-LATEST-EAP-SNAPSHOT.develop.10727955.10-24-2019
Env:
19.3 EAP
19.2
Repro Steps:
1. Sign in Azure
2. Open Run/Debug Configurations window then create an arcadia configuration.
Result:
Suggest changing the error message.

3. Sign out azure.
4. Open Run/Debug Configurations window.
Result:

| non_defect | unreasonable error message when compute is empty build azure toolkit for intellij develop azure toolkit for intellij latest eap snapshot develop env eap repro steps sign in azure open run debug configurations window then create an arcadia configuration result suggest to change the error message sign out azure open run debug configurations window result | 0 |
78,075 | 22,110,174,089 | IssuesEvent | 2022-06-01 20:27:42 | NixOS/nixpkgs | https://api.github.com/repos/NixOS/nixpkgs | closed | vimPlugins.markdown-preview-nvim doesn't build after an update because of a missing node dependency | 0.kind: build failure | ### Steps To Reproduce
Steps to reproduce the behavior:
1. build `vimPlugins.markdown-preview-nvim`
### Build log
```
@nix { "action": "setPhase", "phase": "unpackPhase" }
unpacking sources
unpacking source archive /nix/store/8bawhfmsl3xs4lildgwf8llqcxin2byw-source
source root is source
@nix { "action": "setPhase", "phase": "patchPhase" }
patching sources
applying patch /nix/store/gm7v5lr1ng6jaqa3m880wsnrmphzbq8q-fix-node-paths.patch
patching file autoload/health/mkdp.vim
patching file autoload/mkdp/rpc.vim
@nix { "action": "setPhase", "phase": "configurePhase" }
configuring
@nix { "action": "setPhase", "phase": "buildPhase" }
building
@nix { "action": "setPhase", "phase": "installPhase" }
installing
@nix { "action": "setPhase", "phase": "fixupPhase" }
post-installation fixup
shrinking RPATHs of ELF executables and libraries in /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13
strip is /nix/store/r7r10qvsqlnvbzjkjinvscjlahqbxifl-gcc-wrapper-11.3.0/bin/strip
patching script interpreter paths in /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13
/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/install.sh: interpreter directive changed from "#!/bin/sh" to "/nix/store/0d3wgx8x6dxdb2cpnq105z23hah07z7l-bash-5.1-p16/bin/sh"
/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/release.sh: interpreter directive changed from "#!/usr/bin/env bash" to "/nix/store/0d3wgx8x6dxdb2cpnq105z23hah07z7l-bash-5.1-p16/bin/bash"
checking for references to /build/ in /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13...
@nix { "action": "setPhase", "phase": "installCheckPhase" }
running install tests
node:internal/modules/cjs/loader:936
throw err;
^
Error: Cannot find module '@chemzqm/neovim'
Require stack:
- /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/preloadmodules.js
- /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/load.js
- /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/index.js
- /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/index.js
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
at Function.Module._load (node:internal/modules/cjs/loader:778:27)
at Module.require (node:internal/modules/cjs/loader:1005:19)
at require (node:internal/modules/cjs/helpers:102:18)
at Object.<anonymous> (/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/preloadmodules.js:3:16)
at Module._compile (node:internal/modules/cjs/loader:1105:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
at Module.require (node:internal/modules/cjs/loader:1005:19) {
code: 'MODULE_NOT_FOUND',
requireStack: [
'/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/preloadmodules.js',
'/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/load.js',
'/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/index.js',
'/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/index.js'
]
}
```
### Additional context
I updated the vim plugins in https://github.com/NixOS/nixpkgs/pull/175324 and mistakenly added the new `@chemzqm/neovim` node dependency of markdown-preview-nvim to the generated pkgs/development/node-packages/node-packages.nix, I was informed of the problem [here](https://github.com/NixOS/nixpkgs/pull/175488#issuecomment-1141445954) when someone re-generated that file. I have trouble finding the proper place to add it now, pinging the relevant people to point me in the right direction.
### Notify maintainers
@oxalica
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
- system: `"x86_64-linux"`
- host os: `Linux 5.17.9-zen1-1-zen, Arch Linux, noversion, rolling`
- multi-user?: `no`
- sandbox: `yes`
- version: `nix-env (Nix) 2.8.1`
- channels(dettorer): `"nixgl, nixpkgs, stable-21.11"`
- nixpkgs: `/home/dettorer/.nix-defexpr/channels/nixpkgs`
```
| 1.0 | vimPlugins.markdown-preview-nvim doesn't build after an update because of a missing node dependency - ### Steps To Reproduce
Steps to reproduce the behavior:
1. build `vimPlugins.markdown-preview-nvim`
### Build log
```
@nix { "action": "setPhase", "phase": "unpackPhase" }
unpacking sources
unpacking source archive /nix/store/8bawhfmsl3xs4lildgwf8llqcxin2byw-source
source root is source
@nix { "action": "setPhase", "phase": "patchPhase" }
patching sources
applying patch /nix/store/gm7v5lr1ng6jaqa3m880wsnrmphzbq8q-fix-node-paths.patch
patching file autoload/health/mkdp.vim
patching file autoload/mkdp/rpc.vim
@nix { "action": "setPhase", "phase": "configurePhase" }
configuring
@nix { "action": "setPhase", "phase": "buildPhase" }
building
@nix { "action": "setPhase", "phase": "installPhase" }
installing
@nix { "action": "setPhase", "phase": "fixupPhase" }
post-installation fixup
shrinking RPATHs of ELF executables and libraries in /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13
strip is /nix/store/r7r10qvsqlnvbzjkjinvscjlahqbxifl-gcc-wrapper-11.3.0/bin/strip
patching script interpreter paths in /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13
/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/install.sh: interpreter directive changed from "#!/bin/sh" to "/nix/store/0d3wgx8x6dxdb2cpnq105z23hah07z7l-bash-5.1-p16/bin/sh"
/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/release.sh: interpreter directive changed from "#!/usr/bin/env bash" to "/nix/store/0d3wgx8x6dxdb2cpnq105z23hah07z7l-bash-5.1-p16/bin/bash"
checking for references to /build/ in /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13...
@nix { "action": "setPhase", "phase": "installCheckPhase" }
running install tests
node:internal/modules/cjs/loader:936
throw err;
^
Error: Cannot find module '@chemzqm/neovim'
Require stack:
- /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/preloadmodules.js
- /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/load.js
- /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/index.js
- /nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/index.js
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:933:15)
at Function.Module._load (node:internal/modules/cjs/loader:778:27)
at Module.require (node:internal/modules/cjs/loader:1005:19)
at require (node:internal/modules/cjs/helpers:102:18)
at Object.<anonymous> (/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/preloadmodules.js:3:16)
at Module._compile (node:internal/modules/cjs/loader:1105:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
at Module.require (node:internal/modules/cjs/loader:1005:19) {
code: 'MODULE_NOT_FOUND',
requireStack: [
'/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/preloadmodules.js',
'/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/load.js',
'/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/lib/app/index.js',
'/nix/store/hhh0zdi1cyr3n706swpg2c4hbh7j3ll2-vimplugin-markdown-preview.nvim-2022-05-13/app/index.js'
]
}
```
### Additional context
I updated the vim plugins in https://github.com/NixOS/nixpkgs/pull/175324 and mistakenly added the new `@chemzqm/neovim` node dependency of markdown-preview-nvim to the generated pkgs/development/node-packages/node-packages.nix, I was informed of the problem [here](https://github.com/NixOS/nixpkgs/pull/175488#issuecomment-1141445954) when someone re-generated that file. I have trouble finding the proper place to add it now, pinging the relevant people to point me in the right direction.
### Notify maintainers
@oxalica
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
- system: `"x86_64-linux"`
- host os: `Linux 5.17.9-zen1-1-zen, Arch Linux, noversion, rolling`
- multi-user?: `no`
- sandbox: `yes`
- version: `nix-env (Nix) 2.8.1`
- channels(dettorer): `"nixgl, nixpkgs, stable-21.11"`
- nixpkgs: `/home/dettorer/.nix-defexpr/channels/nixpkgs`
```
| non_defect | vimplugins markdown preview nvim doesn t build after an update because of a missing node dependency steps to reproduce steps to reproduce the behavior build vimplugins markdown preview nvim build log nix action setphase phase unpackphase unpacking sources unpacking source archive nix store source source root is source nix action setphase phase patchphase patching sources applying patch nix store fix node paths patch patching file autoload health mkdp vim patching file autoload mkdp rpc vim nix action setphase phase configurephase configuring nix action setphase phase buildphase building nix action setphase phase installphase installing nix action setphase phase fixupphase post installation fixup shrinking rpaths of elf executables and libraries in nix store vimplugin markdown preview nvim strip is nix store gcc wrapper bin strip patching script interpreter paths in nix store vimplugin markdown preview nvim nix store vimplugin markdown preview nvim app install sh interpreter directive changed from bin sh to nix store bash bin sh nix store vimplugin markdown preview nvim release sh interpreter directive changed from usr bin env bash to nix store bash bin bash checking for references to build in nix store vimplugin markdown preview nvim nix action setphase phase installcheckphase running install tests node internal modules cjs loader throw err error cannot find module chemzqm neovim require stack nix store vimplugin markdown preview nvim app lib app preloadmodules js nix store vimplugin markdown preview nvim app lib app load js nix store vimplugin markdown preview nvim app lib app index js nix store vimplugin markdown preview nvim app index js at function module resolvefilename node internal modules cjs loader at function module load node internal modules cjs loader at module require node internal modules cjs loader at require node internal modules cjs helpers at object nix store vimplugin markdown preview nvim app lib app preloadmodules js at module compile node internal modules cjs loader at object module extensions js node internal modules cjs loader at module load node internal modules cjs loader at function module load node internal modules cjs loader at module require node internal modules cjs loader code module not found requirestack nix store vimplugin markdown preview nvim app lib app preloadmodules js nix store vimplugin markdown preview nvim app lib app load js nix store vimplugin markdown preview nvim app lib app index js nix store vimplugin markdown preview nvim app index js additional context i updated the vim plugins in and mistakenly added the new chemzqm neovim node dependency of markdown preview nvim to the generated pkgs development node packages node packages nix i was informed of the problem when someone re generated that file i have trouble finding the proper place to add it now pinging the relevant people to point me in the right direction notify maintainers oxalica metadata please run nix shell p nix info run nix info m and paste the result console nix shell p nix info run nix info m system linux host os linux zen arch linux noversion rolling multi user no sandbox yes version nix env nix channels dettorer nixgl nixpkgs stable nixpkgs home dettorer nix defexpr channels nixpkgs | 0
71,280 | 23,521,960,761 | IssuesEvent | 2022-08-19 07:04:38 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | Widget postConstruct lifecycle callback is invoked with non-fully constructed widget instance | :lady_beetle: defect | ### Describe the bug
Client-side widget lifecycle callbacks were added I believe in PF 9, which are super useful for various purposes. I was just using the `postConstruct` callback, when I noticed a minor issue with it: Currently, `postConstruct` [is called during the init method of the BaseWidget](https://github.com/primefaces/primefaces/blob/master/primefaces/src/main/resources/META-INF/resources/primefaces/core/core.widget.js#L335-L337). But the init method is called by the individual widget during the `super` call (e.g. [see here for spitButton](https://github.com/primefaces/primefaces/blob/master/primefaces/src/main/resources/META-INF/resources/primefaces/forms/forms.splitbutton.js#L55)). So at this point, the widget was not fully initialized yet, and thus lacks e.g. [various properties that are set only during the `init` method](https://github.com/primefaces/primefaces/blob/master/primefaces/src/main/resources/META-INF/resources/primefaces/forms/forms.splitbutton.js#L57).
This isn't a major issue and can be worked around e.g. by using `setTimeout` or promises, but perhaps the callback could be invoked a bit later after the widget was initialized?
For example, the following code to add a `aria-label` attribute to the more button of the split button (for accessibility) does not work like this (this should probably be fixed on its own right, but that's just one example):
```xml
<p:splitButton>
<f:attribute name="widgetPostConstruct" value="this.menuButton.attr("aria-label", "Show all items")" />
</p:splitButton>
```
To make it work, we need to delay:
```xml
<p:splitButton>
<f:attribute name="widgetPostConstruct" value="Promise.resolve().then(() => this.menuButton.attr("aria-label", "Show all items"))" />
</p:splitButton>
```
### Reproducer
_No response_
### Expected behavior
_No response_
### PrimeFaces edition
_No response_
### PrimeFaces version
11
### Theme
_No response_
### JSF implementation
_No response_
### JSF version
_No response_
### Browser(s)
_No response_ | 1.0 | Widget postConstruct lifecycle callback is invoked with non-fully constructed widget instance - ### Describe the bug
Client-side widget lifecycle callbacks were added I believe in PF 9, which are super useful for various purposes. I was just using the `postConstruct` callback, when I noticed a minor issue with it: Currently, `postConstruct` [is called during the init method of the BaseWidget](https://github.com/primefaces/primefaces/blob/master/primefaces/src/main/resources/META-INF/resources/primefaces/core/core.widget.js#L335-L337). But the init method is called by the individual widget during the `super` call (e.g. [see here for spitButton](https://github.com/primefaces/primefaces/blob/master/primefaces/src/main/resources/META-INF/resources/primefaces/forms/forms.splitbutton.js#L55)). So at this point, the widget was not fully initialized yet, and thus lacks e.g. [various properties that are set only during the `init` method](https://github.com/primefaces/primefaces/blob/master/primefaces/src/main/resources/META-INF/resources/primefaces/forms/forms.splitbutton.js#L57).
This isn't a major issue and can be worked around e.g. by using `setTimeout` or promises, but perhaps the callback could be invoked a bit later after the widget was initialized?
For example, the following code to add a `aria-label` attribute to the more button of the split button (for accessibility) does not work like this (this should probably be fixed on its own right, but that's just one example):
```xml
<p:splitButton>
<f:attribute name="widgetPostConstruct" value="this.menuButton.attr("aria-label", "Show all items")" />
</p:splitButton>
```
To make it work, we need to delay:
```xml
<p:splitButton>
<f:attribute name="widgetPostConstruct" value="Promise.resolve().then(() => this.menuButton.attr("aria-label", "Show all items"))" />
</p:splitButton>
```
### Reproducer
_No response_
### Expected behavior
_No response_
### PrimeFaces edition
_No response_
### PrimeFaces version
11
### Theme
_No response_
### JSF implementation
_No response_
### JSF version
_No response_
### Browser(s)
_No response_ | defect | widget postconstruct lifecycle callback is invoked with non fully constructed widget instance describe the bug client side widget lifecycle callbacks were added i believe in pf which are super useful for various purposes i was just using the postconstruct callback when i noticed a minor issue with it currently postconstruct but the init method is called by the individual widget during the super call e g so at this point the widget was not fully initialized yet and thus lacks e g this isn t a major issue and can be worked around e g by using settimeout or promises but perhaps the callback could be invoked a bit later after the widget was initialized for example the following code to add a aria label attribute to the more button of the split button for accessibility does not work like this this should probably be fixed on its own right but that s just one example xml to make it work we need to delay xml this menubutton attr aria label show all items reproducer no response expected behavior no response primefaces edition no response primefaces version theme no response jsf implementation no response jsf version no response browser s no response | 1 |
25,019 | 4,174,182,665 | IssuesEvent | 2016-06-21 13:19:42 | zotonic/zotonic | https://api.github.com/repos/zotonic/zotonic | closed | Restart runs the skeleton in config, leading to unexpected module changes | core defect security | Case: site has this line:
```
%% What skeleton site this site is based on; for installing the initial data.
{
skeleton, empty
},
```
This also activates `mod_admin_only` (although not documented).
Now after a restart, the site config is read again and `mod_admin_only` gets enabled. This should not be happening: a restart should always keep all modules enabled/disabled as before, regardless of the site.modules and the skeleton configuration. | 1.0 | Restart runs the skeleton in config, leading to unexpected module changes - Case: site has this line:
```
%% What skeleton site this site is based on; for installing the initial data.
{
skeleton, empty
},
```
This also activates `mod_admin_only` (although not documented).
Now after a restart, the site config is read again and `mod_admin_only` gets enabled. This should not be happening: a restart should always keep all modules enabled/disabled as before, regardless of the site.modules and the skeleton configuration. | defect | restart runs the skeleton in config leading to unexpected module changes case site has this line what skeleton site this site is based on for installing the initial data skeleton empty this also activates mod admin only although not documented now after a restart the site config is read again and mod admin only gets enabled this should not be happening a restart should always keep all modules enabled disabled as before regardless of the site modules and the skeleton configuration | 1 |
70,239 | 23,066,938,974 | IssuesEvent | 2022-07-25 14:38:19 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Call history from a virtual room | T-Defect | ### Steps to reproduce
1. Call someone via a virtual room.
2. Not seeing any calls which come from virtual rooms in the room timeline
### Outcome
#### What did you expect?
To see the call in the timeline.
#### What happened instead?
It is in the virtual room which is hidden
This is implemented on Android and iOS just not on web.
### Operating system
Windows
### Browser information
102.0.5005.115 (Official Build) (64-bit)
### URL for webapp
private server
### Application version
private fork
### Homeserver
private server
### Will you send logs?
No | 1.0 | Call history from a virtual room - ### Steps to reproduce
1. Call someone via a virtual room.
2. Not seeing any calls which come from virtual rooms in the room timeline
### Outcome
#### What did you expect?
To see the call in the timeline.
#### What happened instead?
It is in the virtual room which is hidden
This is implemented on Android and iOS just not on web.
### Operating system
Windows
### Browser information
102.0.5005.115 (Official Build) (64-bit)
### URL for webapp
private server
### Application version
private fork
### Homeserver
private server
### Will you send logs?
No | defect | call history from a virtual room steps to reproduce call someone via a virtual room not seeing any calls which come from virtual rooms in the room timeline outcome what did you expect to see the call in the timeline what happened instead it is in the virtual room which is hidden this is implemented on android and ios just not on web operating system windows browser information official build bit url for webapp private server application version private fork homeserver private server will you send logs no | 1 |
31,238 | 6,472,600,740 | IssuesEvent | 2017-08-17 14:18:32 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | TestHarness doesn't list failed tests at the bottom | C: TestHarness P: normal T: defect | ### Description of the enhancement or error report
In the final results, the failed tests are scattered throughout the list of tests.
It used to list failed tests at the bottom to make it easier to see what failed.
### Rationale for the enhancement or information for reproducing the error
An example,
https://www.moosebuild.org/job/108957/
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
Make it easier to see what tests failed. | 1.0 | TestHarness doesn't list failed tests at the bottom - ### Description of the enhancement or error report
In the final results, the failed tests are scattered throughout the list of tests.
It used to list failed tests at the bottom to make it easier to see what failed.
### Rationale for the enhancement or information for reproducing the error
An example,
https://www.moosebuild.org/job/108957/
### Identified impact
(i.e. Internal object changes, limited interface changes, public API change, or a list of specific applications impacted)
Make it easier to see what tests failed. | defect | testharness doesn t list failed tests at the bottom description of the enhancement or error report in the final results the failed tests are scattered throughout the list of tests it used to list failed tests at the bottom to make it easier to see what failed rationale for the enhancement or information for reproducing the error an example identified impact i e internal object changes limited interface changes public api change or a list of specific applications impacted make it easier to see what tests failed | 1 |
7,525 | 2,610,404,087 | IssuesEvent | 2015-02-26 20:11:21 | chrsmith/republic-at-war | https://api.github.com/repos/chrsmith/republic-at-war | closed | GC Muunilinst | auto-migrated Priority-Medium Type-Defect | ```
In the Clone Wars GC Muunilinst starts the game with a Mining Facility, even
though the planet cannot build Mining Facilities. I tried to sell it, and when
I did I could not rebuild it. Either make Muunilinst capable of Mining
Facilities, or remove the facility it starts with.
```
-----
Original issue reported on code.google.com by `jkouzman...@gmail.com` on 29 Jun 2011 at 4:35 | 1.0 | GC Muunilinst - ```
In the Clone Wars GC Muunilinst starts the game with a Mining Facility, even
though the planet cannot build Mining Facilities. I tried to sell it, and when
I did I could not rebuild it. Either make Muunilinst capable of Mining
Facilities, or remove the facility it starts with.
```
-----
Original issue reported on code.google.com by `jkouzman...@gmail.com` on 29 Jun 2011 at 4:35 | defect | gc muunilinst in the clone wars gc muunilinst starts the game with a mining facility even though the planet cannot build mining facilities i tried to sell it and when i did i could not rebuild it either make muunilinst capable of mining facilities or remove the facility it starts with original issue reported on code google com by jkouzman gmail com on jun at | 1 |
80,450 | 30,294,273,356 | IssuesEvent | 2023-07-09 17:02:25 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | opened | make[1]: *** No rule to make target 'module/Module.symvers', needed by 'all-am'. Stop. | Type: Defect | Hello to everyone.
I'm trying to compile ZFS within ubuntu 22.10 that I have installed on Windows 11 via WSL2. This is the tutorial that I'm following :
https://github.com/alexhaydock/zfs-on-wsl
The commands that I have issued are :
```
sudo tar -zxvf zfs-2.1.0-for-5.13.9-penguins-rule.tgz -C .
cd /usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule
./configure --includedir=/usr/include/tirpc/ --without-python
```
(this command is not present on the tutorial but it is needed)
The full log is here :
https://pastebin.ubuntu.com/p/zHNFR52FVW/
basically the compilation ends with this error and I don't know how to fix it :
```
Making install in module
make[1]: Entering directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule/module'
make -C /usr/src/linux-5.15.38-penguins-rule M="$PWD" modules_install \
INSTALL_MOD_PATH= \
INSTALL_MOD_DIR=extra \
KERNELRELEASE=5.15.38-penguins-rule
make[2]: Entering directory '/usr/src/linux-5.15.38-penguins-rule'
arch/x86/Makefile:142: CONFIG_X86_X32 enabled but no binutils support
cat: /home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule/module/modules.order: No such file or directory
DEPMOD /lib/modules/5.15.38-penguins-rule
make[2]: Leaving directory '/usr/src/linux-5.15.38-penguins-rule'
kmoddir=/lib/modules/5.15.38-penguins-rule; \
if [ -n "" ]; then \
find $kmoddir -name 'modules.*' -delete; \
fi
sysmap=/boot/System.map-5.15.38-penguins-rule; \
{ [ -f "$sysmap" ] && [ $(wc -l < "$sysmap") -ge 100 ]; } || \
sysmap=/usr/lib/debug/boot/System.map-5.15.38-penguins-rule; \
if [ -f $sysmap ]; then \
depmod -ae -F $sysmap 5.15.38-penguins-rule; \
fi
make[1]: Leaving directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule/module'
make[1]: Entering directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule'
make[1]: *** No rule to make target 'module/Module.symvers', needed by 'all-am'. Stop.
make[1]: Leaving directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule'
make: *** [Makefile:920: install-recursive] Error 1
```
The solution could be here :
https://github.com/openzfs/zfs/issues/9133#issuecomment-520563793
where he says :
> Description: Use obj-m instead of subdir-m. Do not use subdir-m to visit module Makefile. And so on...
Unfortunately I haven't understood what to do. | 1.0 | make[1]: *** No rule to make target 'module/Module.symvers', needed by 'all-am'. Stop. - Hello to everyone.
I'm trying to compile ZFS within ubuntu 22.10 that I have installed on Windows 11 via WSL2. This is the tutorial that I'm following :
https://github.com/alexhaydock/zfs-on-wsl
The commands that I have issued are :
```
sudo tar -zxvf zfs-2.1.0-for-5.13.9-penguins-rule.tgz -C .
cd /usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule
./configure --includedir=/usr/include/tirpc/ --without-python
```
(this command is not present on the tutorial but it is needed)
The full log is here :
https://pastebin.ubuntu.com/p/zHNFR52FVW/
basically the compilation ends with this error and I don't know how to fix it :
```
Making install in module
make[1]: Entering directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule/module'
make -C /usr/src/linux-5.15.38-penguins-rule M="$PWD" modules_install \
INSTALL_MOD_PATH= \
INSTALL_MOD_DIR=extra \
KERNELRELEASE=5.15.38-penguins-rule
make[2]: Entering directory '/usr/src/linux-5.15.38-penguins-rule'
arch/x86/Makefile:142: CONFIG_X86_X32 enabled but no binutils support
cat: /home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule/module/modules.order: No such file or directory
DEPMOD /lib/modules/5.15.38-penguins-rule
make[2]: Leaving directory '/usr/src/linux-5.15.38-penguins-rule'
kmoddir=/lib/modules/5.15.38-penguins-rule; \
if [ -n "" ]; then \
find $kmoddir -name 'modules.*' -delete; \
fi
sysmap=/boot/System.map-5.15.38-penguins-rule; \
{ [ -f "$sysmap" ] && [ $(wc -l < "$sysmap") -ge 100 ]; } || \
sysmap=/usr/lib/debug/boot/System.map-5.15.38-penguins-rule; \
if [ -f $sysmap ]; then \
depmod -ae -F $sysmap 5.15.38-penguins-rule; \
fi
make[1]: Leaving directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule/module'
make[1]: Entering directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule'
make[1]: *** No rule to make target 'module/Module.symvers', needed by 'all-am'. Stop.
make[1]: Leaving directory '/home/marietto/Scaricati/usr/src/zfs-2.1.4-for-linux-5.15.38-penguins-rule'
make: *** [Makefile:920: install-recursive] Error 1
```
The solution could be here :
https://github.com/openzfs/zfs/issues/9133#issuecomment-520563793
where he says :
> Description: Use obj-m instead of subdir-m. Do not use subdir-m to visit module Makefile. And so on...
Unfortunately I haven't understood what to do. | defect | make no rule to make target module module symvers needed by all am stop hello to everyone i m trying to compile zfs within ubuntu that i have installed on windows via this is the tutorial that i m following the commands that i have issued are sudo tar zxvf zfs for penguins rule tgz c cd usr src zfs for linux penguins rule configure includedir usr include tirpc without python this command is not present on the tutorial but it is needed the full log is here basically the compilation ends with this error and i don t know how to fix it making install in module make entering directory home marietto scaricati usr src zfs for linux penguins rule module make c usr src linux penguins rule m pwd modules install install mod path install mod dir extra kernelrelease penguins rule make entering directory usr src linux penguins rule arch makefile config enabled but no binutils support cat home marietto scaricati usr src zfs for linux penguins rule module modules order no such file or directory depmod lib modules penguins rule make leaving directory usr src linux penguins rule kmoddir lib modules penguins rule if then find kmoddir name modules delete fi sysmap boot system map penguins rule sysmap usr lib debug boot system map penguins rule if then depmod ae f sysmap penguins rule fi make leaving directory home marietto scaricati usr src zfs for linux penguins rule module make entering directory home marietto scaricati usr src zfs for linux penguins rule make no rule to make target module module symvers needed by all am stop make leaving directory home marietto scaricati usr src zfs for linux penguins rule make error the solution could be here where he says description use obj m instead of subdir m do not use subdir m to visit module makefile and so on unfortunately i haven t understood what to do | 1 |
54,839 | 13,961,026,104 | IssuesEvent | 2020-10-25 00:33:04 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Since 3.14 a record's identity, overridden by forcedType, is no longer read after insert | T: Defect | ### Expected behavior
After insert the primary key should be read back from mysql. Even for a primary key where the type is modified by a `forcedType` like
```xml
<forcedType>
<name>BIGINT</name>
<types>BIGINT UNSIGNED.*</types>
</forcedType>
```
### Actual behavior
In 3.14 the primary key is no longer read after calling `insert()`.
### Steps to reproduce the problem
- MCVE https://github.com/jOOQ/jOOQ-mcve/pull/5 reproduces the problem.
I've noticed two things.
1. The use of `forcedType` means `.identity(true)` is not output for the `TableField` of the generated `TableImpl`. This was already happening in 3.13.
2. In 3.14 it appears `getIdentity` stopped generating a reference to the table's key. Instead it outputs `super.getIdentity()`.
Perhaps propagating `identity=true` into a `forcedType` would make sense near [here](https://github.com/jOOQ/jOOQ/blob/9d5c87d35ac75d46f63bc511b7f44a5e52d8f54e/jOOQ-meta/src/main/java/org/jooq/meta/AbstractTypedElementDefinition.java#L307)? Assuming I've actually understood what is happening. My apologies if I have no idea what I'm talking about!
I acknowledge using `forcedType` on a primary key like in https://github.com/jOOQ/jOOQ-mcve/pull/5 is perhaps questionable but this issue showed up in an attempt to upgrade jooq at my employer.
### Versions
- jOOQ: 3.14
- Java: 11 but I don't think it will matter.
- Database (include vendor): Mysql
- OS: osx but I don't think it will matter.
- JDBC Driver (include name if inofficial driver): mysql connectorj
| 1.0 | Since 3.14 a record's identity, overridden by forcedType, is no longer read after insert - ### Expected behavior
After insert the primary key should be read back from mysql. Even for a primary key where the type is modified by a `forcedType` like
```xml
<forcedType>
<name>BIGINT</name>
<types>BIGINT UNSIGNED.*</types>
</forcedType>
```
### Actual behavior
In 3.14 the primary key is no longer read after calling `insert()`.
### Steps to reproduce the problem
- MCVE https://github.com/jOOQ/jOOQ-mcve/pull/5 reproduces the problem.
I've noticed two things.
1. The use of `forcedType` means `.identity(true)` is not output for the `TableField` of the generated `TableImpl`. This was already happening in 3.13.
2. In 3.14 it appears `getIdentity` stopped generating a reference to the table's key. Instead it outputs `super.getIdentity()`.
Perhaps propagating `identity=true` into a `forcedType` would make sense near [here](https://github.com/jOOQ/jOOQ/blob/9d5c87d35ac75d46f63bc511b7f44a5e52d8f54e/jOOQ-meta/src/main/java/org/jooq/meta/AbstractTypedElementDefinition.java#L307)? Assuming I've actually understood what is happening. My apologies if I have no idea what I'm talking about!
I acknowledge using `forcedType` on a primary key like in https://github.com/jOOQ/jOOQ-mcve/pull/5 is perhaps questionable but this issue showed up in an attempt to upgrade jooq at my employer.
### Versions
- jOOQ: 3.14
- Java: 11 but I don't think it will matter.
- Database (include vendor): Mysql
- OS: osx but I don't think it will matter.
- JDBC Driver (include name if inofficial driver): mysql connectorj
| defect | since a record s identity overridden by forcedtype is no longer read after insert expected behavior after insert the primary key should be read back from mysql even for a primary key where the type is modified by a forcedtype like xml bigint bigint unsigned actual behavior in the primary key is no longer read after calling insert steps to reproduce the problem mcve reproduces the problem i ve noticed two things the use of forcedtype means identity true is not output for the tablefield of the generated tableimpl this was already happening in in it appears getidentity stopped generating a reference to the table s key instead it outputs super getidentity perhaps propagating identity true into a forcedtype would make sense near assuming i ve actually understood what is happening my apologies if i have no idea what i m talking about i acknowledge using forcedtype on a primary key like in is perhaps questionable but this issue showed up in an attempt to upgrade jooq at my employer versions jooq java but i don t think it will matter database include vendor mysql os osx but i don t think it will matter jdbc driver include name if inofficial driver mysql connectorj | 1 |
67,383 | 20,961,608,735 | IssuesEvent | 2022-03-27 21:48:45 | abedmaatalla/sipdroid | https://api.github.com/repos/abedmaatalla/sipdroid | closed | Product too awesome | Priority-Medium Type-Defect auto-migrated | ```
This product is too good.
1. It easily hooked my Google Voice account. That's supposed to be hard. It
builds character.
2. Audio sounds great. This is terrible. I was expecting a normal phone-quality
experience. Instead I got crystal clear voice quality on both ends.
3. The program runs too fast on my old phone. My several years' old LG Optimus
V should not be performing as smoothly as it does while using sipdroid. The
instant responsiveness is unnerving as I'm used to waiting for a response for
many of my Android apps -- you know, like stepping on ice when you expect
concrete.
In all seriousness, THANK YOU! This app is amazing.
```
Original issue reported on code.google.com by `paul.t.o...@gmail.com` on 6 Feb 2013 at 6:54
| 1.0 | Product too awesome - ```
This product is too good.
1. It easily hooked my Google Voice account. That's supposed to be hard. It
builds character.
2. Audio sounds great. This is terrible. I was expecting a normal phone-quality
experience. Instead I got crystal clear voice quality on both ends.
3. The program runs too fast on my old phone. My several years' old LG Optimus
V should not be performing as smoothly as it does while using sipdroid. The
instant responsiveness is unnerving as I'm used to waiting for a response for
many of my Android apps -- you know, like stepping on ice when you expect
concrete.
In all seriousness, THANK YOU! This app is amazing.
```
Original issue reported on code.google.com by `paul.t.o...@gmail.com` on 6 Feb 2013 at 6:54
| defect | product too awesome this product is too good it easily hooked my google voice account that s supposed to be hard it builds character audio sounds great this is terrible i was expecting a normal phone quality experience instead i got crystal clear voice quality on both ends the program runs too fast on my old phone my several years old lg optimus v should not be performing as smoothly as it does while using sipdroid the instant responsiveness is unnerving as i m used to waiting for a response for many of my android apps you know like stepping on ice when you expect concrete in all seriousness thank you this app is amazing original issue reported on code google com by paul t o gmail com on feb at | 1 |
61,376 | 17,023,679,314 | IssuesEvent | 2021-07-03 03:15:46 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | denomination=jehovahs_witness shouldn't render cross | Component: mapnik Priority: major Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 8.40pm, Sunday, 6th February 2011]**
Whilst the wiki recommends religion=christian, denomination=jehovahs_witness tagging (http://wiki.openstreetmap.org/wiki/Key:denomination) then this produces a cross rendering.
However Jehovah's Witnesses do not believe in the cross as a symbol (see http://en.wikipedia.org/wiki/Jehovah's_Witnesses#Jehovah_and_Jesus_Christ) so the current rendering is inappropriate, and potentially offensive.
I recommend making denomination=jehovahs_witnesss "overide" religion=christian to end up rendering the standard place of worship square.
The current rendering is preventing users from tagging correctly (after all correct tagging is the most important part of OSM). I know this from a conversation about this with the person who reverted my edits to http://www.openstreetmap.org/browse/way/66349427/history | 1.0 | denomination=jehovahs_witness shouldn't render cross - **[Submitted to the original trac issue database at 8.40pm, Sunday, 6th February 2011]**
Whilst the wiki recommends religion=christian, denomination=jehovahs_witness tagging (http://wiki.openstreetmap.org/wiki/Key:denomination) then this produces a cross rendering.
However Jehovah's Witnesses do not believe in the cross as a symbol (see http://en.wikipedia.org/wiki/Jehovah's_Witnesses#Jehovah_and_Jesus_Christ) so the current rendering is inappropriate, and potentially offensive.
I recommend making denomination=jehovahs_witnesss "overide" religion=christian to end up rendering the standard place of worship square.
The current rendering is preventing users from tagging correctly (after all correct tagging is the most important part of OSM). I know this from a conversation about this with the person who reverted my edits to http://www.openstreetmap.org/browse/way/66349427/history | defect | denomination jehovahs witness shouldn t render cross whilst the wiki recommends religion christian denomination jehovahs witness tagging then this produces a cross rendering however jehovah s witnesses do not believe in the cross as a symbol see so the current rendering is inappropriate and potentially offensive i recommend making denomination jehovahs witnesss overide religion christian to end up rendering the standard place of worship square the current rendering is preventing users from tagging correctly after all correct tagging is the most important part of osm i know this from a conversation about this with the person who reverted my edits to | 1 |
56,101 | 14,929,856,380 | IssuesEvent | 2021-01-25 01:04:22 | AeroScripts/QuestieDev | https://api.github.com/repos/AeroScripts/QuestieDev | closed | Map target issues | Type - Defect | <!-- READ THIS FIRST
Hello, thanks for taking the time to report a bug!
Before you proceed, please verify that you're running the latest version of Questie. The easiest way to do this is via the Twitch client, but you can also download the latest version here: https://www.curseforge.com/wow/addons/questie
Questie is one of the most popular Classic WoW addons, with over 22M downloads. However, like almost all WoW addons, it's built and maintained by a team of volunteers. The current Questie team is:
* @AeroScripts / Aero#1357 (Discord)
* @BreakBB / TheCrux#1702 (Discord)
* @drejjmit / Drejjmit#8241 (Discord)
* @Dyaxler / Dyaxler#0086 (Discord)
* @gogo1951 / Gogo#0298 (Discord)
If you'd like to help, please consider making a donation. You can do so here: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=aero1861%40gmail%2ecom&lc=CA&item_name=Questie%20Devs¤cy_code=USD&bn=PP%2dDonationsBF%3abtn_donate_LG%2egif%3aNonHosted
You can also help as a tester, developer or translator, please join the Questie Discord here https://discord.gg/fYcQfv7
-->
## Bug description
<!-- Explain in detail what the bug is and how you encountered it. If possible explain how it can be reproduced. --> quest targets on map and available new quests don’t show up on main map or minimap. They stopped showing up mid game. Did EVERYTHING to get addon to work. Only thing shows on map is where finished quests can be turn in at.
## Screenshots
<!-- If you can, add a screenshot to help explaining the bug. Simply drag and drop the image in this input field, no need to upload it to any other image platform. -->
## Questie version
<!--
Which version of Questie are you using? You can find it by:
- 1. Hovering over the Questie Minimap Icon
- 2. looking at your Questie.toc file (open it with any text editor).
It looks something like this: "v5.9.0" or "## Version: 5.9.0".
--> 6.4.2
| 1.0 | Map target issues - <!-- READ THIS FIRST
Hello, thanks for taking the time to report a bug!
Before you proceed, please verify that you're running the latest version of Questie. The easiest way to do this is via the Twitch client, but you can also download the latest version here: https://www.curseforge.com/wow/addons/questie
Questie is one of the most popular Classic WoW addons, with over 22M downloads. However, like almost all WoW addons, it's built and maintained by a team of volunteers. The current Questie team is:
* @AeroScripts / Aero#1357 (Discord)
* @BreakBB / TheCrux#1702 (Discord)
* @drejjmit / Drejjmit#8241 (Discord)
* @Dyaxler / Dyaxler#0086 (Discord)
* @gogo1951 / Gogo#0298 (Discord)
If you'd like to help, please consider making a donation. You can do so here: https://www.paypal.com/cgi-bin/webscr?cmd=_donations&business=aero1861%40gmail%2ecom&lc=CA&item_name=Questie%20Devs&currency_code=USD&bn=PP%2dDonationsBF%3abtn_donate_LG%2egif%3aNonHosted
You can also help as a tester, developer or translator, please join the Questie Discord here https://discord.gg/fYcQfv7
-->
## Bug description
<!-- Explain in detail what the bug is and how you encountered it. If possible explain how it can be reproduced. --> quest targets on map and available new quests don’t show up on main map or minimap. They stopped showing up mid game. Did EVERYTHING to get addon to work. Only thing shows on map is where finished quests can be turn in at.
## Screenshots
<!-- If you can, add a screenshot to help explaining the bug. Simply drag and drop the image in this input field, no need to upload it to any other image platform. -->
## Questie version
<!--
Which version of Questie are you using? You can find it by:
- 1. Hovering over the Questie Minimap Icon
- 2. looking at your Questie.toc file (open it with any text editor).
It looks something like this: "v5.9.0" or "## Version: 5.9.0".
--> 6.4.2
| defect | map target issues read this first hello thanks for taking the time to report a bug before you proceed please verify that you re running the latest version of questie the easiest way to do this is via the twitch client but you can also download the latest version here questie is one of the most popular classic wow addons with over downloads however like almost all wow addons it s built and maintained by a team of volunteers the current questie team is aeroscripts aero discord breakbb thecrux discord drejjmit drejjmit discord dyaxler dyaxler discord gogo discord if you d like to help please consider making a donation you can do so here you can also help as a tester developer or translator please join the questie discord here bug description quest targets on map and available new quests don’t show up on main map or minimap they stopped showing up mid game did everything to get addon to work only thing shows on map is where finished quests can be turn in at screenshots questie version which version of questie are you using you can find it by hovering over the questie minimap icon looking at your questie toc file open it with any text editor it looks something like this or version | 1 |
147,208 | 13,203,023,645 | IssuesEvent | 2020-08-14 13:27:31 | epankratz/twenty-questions | https://api.github.com/repos/epankratz/twenty-questions | closed | Docstrings | documentation | - Add docstrings to the methods that don't have them yet
- Unify/update "arg" and "returns" lists in all docstrings | 1.0 | Docstrings - - Add docstrings to the methods that don't have them yet
- Unify/update "arg" and "returns" lists in all docstrings | non_defect | docstrings add docstrings to the methods that don t have them yet unify update arg and returns lists in all docstrings | 0 |
347,888 | 10,435,947,997 | IssuesEvent | 2019-09-17 18:26:36 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Add DuckDuckGo Lite as pre-populated search engines in Android | QA/No android-related priority/P2 | <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Right now Android shares the same pre-populated search engine list with desktop, we need to add DuckDuckGo Lite for Android only.
## Steps to Reproduce
1. Open search engine settings on Android
## Actual result:
Search engine list: google, bing, ddg, qwant, startpage
## Expected result:
Search engine list: google, bing, ddg, ddg lite, qwant, startpage | 1.0 | Add DuckDuckGo Lite as pre-populated search engines in Android - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Right now Android shares the same pre-populated search engine list with desktop, we need to add DuckDuckGo Lite for Android only.
## Steps to Reproduce
1. Open search engine settings on Android
## Actual result:
Search engine list: google, bing, ddg, qwant, startpage
## Expected result:
Search engine list: google, bing, ddg, ddg lite, qwant, startpage | non_defect | add duckduckgo lite as pre populated search engines in android have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description right now android shares the same pre populated search engine list with desktop we need to add duckduckgo lite for android only steps to reproduce open search engine settings on android actual result search engine list google bing ddg qwant startpage expected result search engine list google bing ddg ddg lite qwant startpage | 0 |
59,402 | 17,023,117,096 | IssuesEvent | 2021-07-03 00:27:06 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | Accented e in property value refuses upload | Component: josm Priority: major Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 9.48pm, Wednesday, 17th May 2006]**
Adding a property value which includes accented e (), like the danish street type all, results in JOSM refusing to upload the changes with a "Error while parsing: An error occurred:: 500 Internal Server Error".
This is using latest snapshot:
Path: josm
URL: http://www.eigenheimstrasse.de/svn/josm
Repository Root: http://www.eigenheimstrasse.de/svn/josm
Repository UUID: 0c6e7542-c601-0410-84e7-c038aed88b3b
Revision: 101
Node Kind: directory
Last Changed Author: imi
Last Changed Rev: 101
Last Changed Date: 2006-05-03 22:21:02 +0200 (Wed, 03 May 2006)
Seen on both Windows 98 and Windows Server 2003 using java version "1.5.0_06", Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_06-b05), Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode).
Mikkel, | 1.0 | Accented e in property value refuses upload - **[Submitted to the original trac issue database at 9.48pm, Wednesday, 17th May 2006]**
Adding a property value which includes accented e (é), like the danish street type allé, results in JOSM refusing to upload the changes with a "Error while parsing: An error occurred:: 500 Internal Server Error".
This is using latest snapshot:
Path: josm
URL: http://www.eigenheimstrasse.de/svn/josm
Repository Root: http://www.eigenheimstrasse.de/svn/josm
Repository UUID: 0c6e7542-c601-0410-84e7-c038aed88b3b
Revision: 101
Node Kind: directory
Last Changed Author: imi
Last Changed Rev: 101
Last Changed Date: 2006-05-03 22:21:02 +0200 (Wed, 03 May 2006)
Seen on both Windows 98 and Windows Server 2003 using java version "1.5.0_06", Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_06-b05), Java HotSpot(TM) Client VM (build 1.5.0_06-b05, mixed mode).
Mikkel, | defect | accented e in property value refuses upload adding a property value which includes accented e like the danish street type all results in josm refusing to upload the changes with a error while parsing an error occurred internal server error this is using latest snapshot path josm url repository root repository uuid revision node kind directory last changed author imi last changed rev last changed date wed may seen on both windows and windows server using java version java tm runtime environment standard edition build java hotspot tm client vm build mixed mode mikkel | 1 |
140,571 | 32,029,267,389 | IssuesEvent | 2023-09-22 11:05:49 | matsim-org/matsim-libs | https://api.github.com/repos/matsim-org/matsim-libs | opened | remove fastCapacityUpdate in QueueWithBuffer | maintenance code sprint decision item | @kainagel and me came across this 1-2 years ago.
the fast capacity update makes the code very complicated and verbose and seems not really to be (heavily used).
| 1.0 | remove fastCapacityUpdate in QueueWithBuffer - @kainagel and me came across this 1-2 years ago.
the fast capacity update makes the code very complicated and verbose and seems not really to be (heavily used).
| non_defect | remove fastcapacityupdate in queuewithbuffer kainagel and me came across this years ago the fast capacity update makes the code very complicated and verbose and seems not really to be heavily used | 0 |
33,808 | 12,220,322,237 | IssuesEvent | 2020-05-02 01:00:11 | finos/secref-data | https://api.github.com/repos/finos/secref-data | opened | CVE-2020-8116 (High) detected in dot-prop-4.2.0.tgz | security vulnerability | ## CVE-2020-8116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/secref-data/website/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/secref-data/website/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- docusaurus-1.14.4.tgz (Root Library)
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-merge-rules-4.0.3.tgz
- postcss-selector-parser-3.1.1.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"dot-prop","packageVersion":"4.2.0","isTransitiveDependency":true,"dependencyTree":"docusaurus:1.14.4;cssnano:4.1.10;cssnano-preset-default:4.0.7;postcss-merge-rules:4.0.3;postcss-selector-parser:3.1.1;dot-prop:4.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"dot-prop - 5.1.1"}],"vulnerabilityIdentifier":"CVE-2020-8116","vulnerabilityDetails":"Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-8116 (High) detected in dot-prop-4.2.0.tgz - ## CVE-2020-8116 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>dot-prop-4.2.0.tgz</b></p></summary>
<p>Get, set, or delete a property from a nested object using a dot path</p>
<p>Library home page: <a href="https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz">https://registry.npmjs.org/dot-prop/-/dot-prop-4.2.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/secref-data/website/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/secref-data/website/node_modules/dot-prop/package.json</p>
<p>
Dependency Hierarchy:
- docusaurus-1.14.4.tgz (Root Library)
- cssnano-4.1.10.tgz
- cssnano-preset-default-4.0.7.tgz
- postcss-merge-rules-4.0.3.tgz
- postcss-selector-parser-3.1.1.tgz
- :x: **dot-prop-4.2.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.
<p>Publish Date: 2020-02-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116>CVE-2020-8116</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-8116</a></p>
<p>Release Date: 2020-02-04</p>
<p>Fix Resolution: dot-prop - 5.1.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"dot-prop","packageVersion":"4.2.0","isTransitiveDependency":true,"dependencyTree":"docusaurus:1.14.4;cssnano:4.1.10;cssnano-preset-default:4.0.7;postcss-merge-rules:4.0.3;postcss-selector-parser:3.1.1;dot-prop:4.2.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"dot-prop - 5.1.1"}],"vulnerabilityIdentifier":"CVE-2020-8116","vulnerabilityDetails":"Prototype pollution vulnerability in dot-prop npm package version 5.1.0 and earlier allows an attacker to add arbitrary properties to JavaScript language constructs such as objects.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8116","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_defect | cve high detected in dot prop tgz cve high severity vulnerability vulnerable library dot prop tgz get set or delete a property from a nested object using a dot path library home page a href path to dependency file tmp ws scm secref data website package json path to vulnerable library tmp ws scm secref data website node modules dot prop package json dependency hierarchy docusaurus tgz root library cssnano tgz cssnano preset default tgz postcss merge rules tgz postcss selector parser tgz x dot prop tgz vulnerable library vulnerability details prototype pollution vulnerability in dot prop npm package version and earlier allows an attacker to add arbitrary properties to javascript language constructs such as objects publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for 
more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution dot prop isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails prototype pollution vulnerability in dot prop npm package version and earlier allows an attacker to add arbitrary properties to javascript language constructs such as objects vulnerabilityurl | 0 |
52,088 | 13,211,387,378 | IssuesEvent | 2020-08-15 22:46:36 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | [iceprod2] remove site_id from db/settings table (Trac #1669) | Incomplete Migration Migrated from Trac defect iceprod | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1669">https://code.icecube.wisc.edu/projects/icecube/ticket/1669</a>, reported by david.schultzand owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-05-09T21:55:16",
"_ts": "1462830916934929",
"description": "The site_id is provided in the json config of the site, so a duplicate in a database table sounds like trouble. I'm pretty sure they aren't in sync. Remove this from the database.",
"reporter": "david.schultz",
"cc": "",
"resolution": "wontfix",
"time": "2016-04-28T15:42:18",
"component": "iceprod",
"summary": "[iceprod2] remove site_id from db/settings table",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [iceprod2] remove site_id from db/settings table (Trac #1669) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1669">https://code.icecube.wisc.edu/projects/icecube/ticket/1669</a>, reported by david.schultzand owned by david.schultz</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-05-09T21:55:16",
"_ts": "1462830916934929",
"description": "The site_id is provided in the json config of the site, so a duplicate in a database table sounds like trouble. I'm pretty sure they aren't in sync. Remove this from the database.",
"reporter": "david.schultz",
"cc": "",
"resolution": "wontfix",
"time": "2016-04-28T15:42:18",
"component": "iceprod",
"summary": "[iceprod2] remove site_id from db/settings table",
"priority": "critical",
"keywords": "",
"milestone": "",
"owner": "david.schultz",
"type": "defect"
}
```
</p>
</details>
| defect | remove site id from db settings table trac migrated from json status closed changetime ts description the site id is provided in the json config of the site so a duplicate in a database table sounds like trouble i m pretty sure they aren t in sync remove this from the database reporter david schultz cc resolution wontfix time component iceprod summary remove site id from db settings table priority critical keywords milestone owner david schultz type defect | 1 |
56,979 | 15,557,597,708 | IssuesEvent | 2021-03-16 09:19:10 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | opened | Default 'Go To Home' shortcut doesn't work on mac | A-Shortcuts T-Defect | It's cmd+alt+H which is 'hide other windows' | 1.0 | Default 'Go To Home' shortcut doesn't work on mac - It's cmd+alt+H which is 'hide other windows' | defect | default go to home shortcut doesn t work on mac it s cmd alt h which is hide other windows | 1 |
243,591 | 18,718,466,822 | IssuesEvent | 2021-11-03 09:01:48 | nipreps/smriprep | https://api.github.com/repos/nipreps/smriprep | closed | tutorial for smri | documentation | **Please describe how the documentation should be improved.**
A clear and concise description of the underdocumented feature.
Hi!
I saw the nipype tutorial for fmri preprocessing (https://miykael.github.io/nipype_tutorial/).
Is there any tutorial video or document related to smri?
Thanks! | 1.0 | tutorial for smri - **Please describe how the documentation should be improved.**
A clear and concise description of the underdocumented feature.
Hi!
I saw the nipype tutorial for fmri preprocessing (https://miykael.github.io/nipype_tutorial/).
Is there any tutorial video or document related to smri?
Thanks! | non_defect | tutorial for smri please describe how the documentation should be improved a clear and concise description of the underdocumented feature hi i saw the nipype tutorial for fmri preprocessing is there any tutorial video or document related to smri thanks | 0 |
72,956 | 15,252,047,998 | IssuesEvent | 2021-02-20 01:20:06 | mrcelewis/flink | https://api.github.com/repos/mrcelewis/flink | closed | WS-2016-0039 (High) detected in shell-quote-0.0.1.tgz - autoclosed | security vulnerability | ## WS-2016-0039 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>shell-quote-0.0.1.tgz</b></p></summary>
<p>quote and parse shell commands</p>
<p>Library home page: <a href="https://registry.npmjs.org/shell-quote/-/shell-quote-0.0.1.tgz">https://registry.npmjs.org/shell-quote/-/shell-quote-0.0.1.tgz</a></p>
<p>Path to dependency file: flink/flink-runtime-web/web-dashboard/package.json</p>
<p>Path to vulnerable library: flink/flink-runtime-web/web-dashboard/node_modules/shell-quote/package.json</p>
<p>
Dependency Hierarchy:
- browserify-9.0.8.tgz (Root Library)
- :x: **shell-quote-0.0.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm module "shell-quote" cannot correctly escape "greater than" and "lower than" operator used for redirection in shell. This might be possible vulnerability for many application which depends on shell-quote.
<p>Publish Date: 2016-05-20
<p>URL: <a href=https://github.com/substack/node-shell-quote/commit/70e9eb2a854eb56a3dfa255be12610a722bbe080>WS-2016-0039</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/117">https://nodesecurity.io/advisories/117</a></p>
<p>Release Date: 2016-06-21</p>
<p>Fix Resolution: Upgrade to at least version 1.6.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"shell-quote","packageVersion":"0.0.1","packageFilePaths":["/flink-runtime-web/web-dashboard/package.json"],"isTransitiveDependency":true,"dependencyTree":"browserify:9.0.8;shell-quote:0.0.1","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2016-0039","vulnerabilityDetails":"The npm module \"shell-quote\" cannot correctly escape \"greater than\" and \"lower than\" operator used for redirection in shell. This might be possible vulnerability for many application which depends on shell-quote.","vulnerabilityUrl":"https://github.com/substack/node-shell-quote/commit/70e9eb2a854eb56a3dfa255be12610a722bbe080","cvss2Severity":"high","cvss2Score":"8.4","extraData":{}}</REMEDIATE> --> | True | WS-2016-0039 (High) detected in shell-quote-0.0.1.tgz - autoclosed - ## WS-2016-0039 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>shell-quote-0.0.1.tgz</b></p></summary>
<p>quote and parse shell commands</p>
<p>Library home page: <a href="https://registry.npmjs.org/shell-quote/-/shell-quote-0.0.1.tgz">https://registry.npmjs.org/shell-quote/-/shell-quote-0.0.1.tgz</a></p>
<p>Path to dependency file: flink/flink-runtime-web/web-dashboard/package.json</p>
<p>Path to vulnerable library: flink/flink-runtime-web/web-dashboard/node_modules/shell-quote/package.json</p>
<p>
Dependency Hierarchy:
- browserify-9.0.8.tgz (Root Library)
- :x: **shell-quote-0.0.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm module "shell-quote" cannot correctly escape "greater than" and "lower than" operator used for redirection in shell. This might be possible vulnerability for many application which depends on shell-quote.
<p>Publish Date: 2016-05-20
<p>URL: <a href=https://github.com/substack/node-shell-quote/commit/70e9eb2a854eb56a3dfa255be12610a722bbe080>WS-2016-0039</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>8.4</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/117">https://nodesecurity.io/advisories/117</a></p>
<p>Release Date: 2016-06-21</p>
<p>Fix Resolution: Upgrade to at least version 1.6.1</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"shell-quote","packageVersion":"0.0.1","packageFilePaths":["/flink-runtime-web/web-dashboard/package.json"],"isTransitiveDependency":true,"dependencyTree":"browserify:9.0.8;shell-quote:0.0.1","isMinimumFixVersionAvailable":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"WS-2016-0039","vulnerabilityDetails":"The npm module \"shell-quote\" cannot correctly escape \"greater than\" and \"lower than\" operator used for redirection in shell. This might be possible vulnerability for many application which depends on shell-quote.","vulnerabilityUrl":"https://github.com/substack/node-shell-quote/commit/70e9eb2a854eb56a3dfa255be12610a722bbe080","cvss2Severity":"high","cvss2Score":"8.4","extraData":{}}</REMEDIATE> --> | non_defect | ws high detected in shell quote tgz autoclosed ws high severity vulnerability vulnerable library shell quote tgz quote and parse shell commands library home page a href path to dependency file flink flink runtime web web dashboard package json path to vulnerable library flink flink runtime web web dashboard node modules shell quote package json dependency hierarchy browserify tgz root library x shell quote tgz vulnerable library found in base branch master vulnerability details the npm module shell quote cannot correctly escape greater than and lower than operator used for redirection in shell this might be possible vulnerability for many application which depends on shell quote publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution upgrade to at least version isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree browserify shell quote isminimumfixversionavailable false basebranches vulnerabilityidentifier ws 
vulnerabilitydetails the npm module shell quote cannot correctly escape greater than and lower than operator used for redirection in shell this might be possible vulnerability for many application which depends on shell quote vulnerabilityurl | 0 |
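The advisory above says the JavaScript module shell-quote (before 1.6.1) failed to escape the `>` and `<` redirection operators. As a hedged illustration of what correct quoting looks like, here is a minimal Python sketch using the standard-library `shlex.quote`; Python stands in for the Node.js API the advisory actually concerns.

```python
import shlex

# A correct shell quoter must neutralize redirection operators such as
# '>' and '<'; the vulnerable shell-quote releases left them unescaped,
# so attacker-controlled input could redirect output to arbitrary files.
user_input = "hello > /etc/passwd"
command = "echo " + shlex.quote(user_input)

# shlex.quote wraps the string in single quotes, so '>' stays literal
# data instead of becoming a redirection when passed to a shell.
print(command)  # echo 'hello > /etc/passwd'
```

Strings with no shell metacharacters pass through `shlex.quote` unchanged, so quoting is only added where it is needed.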
8,631 | 2,611,533,706 | IssuesEvent | 2015-02-27 06:04:28 | chrsmith/hedgewars | https://api.github.com/repos/chrsmith/hedgewars | closed | Guests can draw on a hand-drawn map | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
You need two persons for this to test. One is the host, the other one is the
guest.
1. Host: Host a server!
2. Host: Make a simple hand-drawn map, for example, a circle.
3. Guest: Join the server!
4. Guest: Click on the map!
5. Guest: Draw around the map a bit!
6. Guest: Go back!
7. Guest: Get ready!
8. Host: Start the game.
What is the expected output? What do you see instead?
I expect that the reproduction just mentioned would fail at the fifth step
because nothing happens if a guest clicks on the map, so the guest can’t mess
around with the map. But the guest goes straight into the editing screen
instead (editing is possible), which is bad.
After the guest finished drawing, the host still sees a circle but the guest
sees something other than a circle.
After the eighth step, the game refuses to start the game with an error message
which says that not all players have the same map.
What version of the product are you using? On what operating system?
- Hedgewars 0.9.19
- GNU/Linux, Linux 3.9.4
Please provide any additional information below.
No!
```
Original issue reported on code.google.com by `almikes@aol.com` on 3 Sep 2013 at 8:58 | 1.0 | Guests can draw on a hand-drawn map - ```
What steps will reproduce the problem?
You need two persons for this to test. One is the host, the other one is the
guest.
1. Host: Host a server!
2. Host: Make a simple hand-drawn map, for example, a circle.
3. Guest: Join the server!
4. Guest: Click on the map!
5. Guest: Draw around the map a bit!
6. Guest: Go back!
7. Guest: Get ready!
8. Host: Start the game.
What is the expected output? What do you see instead?
I expect that the reproduction just mentioned would fail at the fifth step
because nothing happens if a guest clicks on the map, so the guest can’t mess
around with the map. But the guest goes straight into the editing screen
instead (editing is possible), which is bad.
After the guest finished drawing, the host still sees a circle but the guest
sees something other than a circle.
After the eighth step, the game refuses to start the game with an error message
which says that not all players have the same map.
What version of the product are you using? On what operating system?
- Hedgewars 0.9.19
- GNU/Linux, Linux 3.9.4
Please provide any additional information below.
No!
```
Original issue reported on code.google.com by `almikes@aol.com` on 3 Sep 2013 at 8:58 | defect | guests can draw on a hand drawn map what steps will reproduce the problem you need two persons for this to test one is the host the other one is the guest host host a server host make a simple hand drawn map for example a circle guest join the server guest click on the map guest draw around the map a bit guest go back guest get ready host start the game what is the expected output what do you see instead i expect that the reproduction just mentioned would fail at the fifth step because nothing happens if a guest clicks on the map so the guest can’t mess around with the map but the guest goes straight into the editing screen instead editing is possible which is bad after the guest finished drawing the host still sees a circle but the guest sees something other than a circle after the eigth step the game refuses to start the game with an error message which says that not all players have the same map what version of the product are you using on what operating system hedgewars gnu linux linux please provide any additional information below no original issue reported on code google com by almikes aol com on sep at | 1 |
54,090 | 11,187,925,276 | IssuesEvent | 2020-01-02 01:28:45 | huynhp24/project-winter-19 | https://api.github.com/repos/huynhp24/project-winter-19 | opened | Change magic numbers | bad code | magic numbers in MyPlayer.java and MyGdxGame.java - action method
change to either global constants or enums or something | 1.0 | Change magic numbers - magic numbers in MyPlayer.java and MyGdxGame.java - action method
change to either global constants or enums or something | non_defect | change magic numbers magic numbers in myplayer java and mygdxgame java action method change to either global constants or enums or something | 0 |
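The fix this issue asks for — replacing bare numeric literals with named constants or an enum — can be sketched as below. The original files are Java (`MyPlayer.java`, `MyGdxGame.java`); this is a Python stand-in with hypothetical action names, since the issue doesn't say what the numbers encode.

```python
from enum import IntEnum

# Hypothetical action codes -- the real values live in the game's
# action method and may differ; these names are for illustration only.
class Action(IntEnum):
    IDLE = 0
    MOVE = 1
    JUMP = 2

def handle(action: int) -> str:
    # Comparing against named members instead of the magic numbers
    # 0/1/2 documents intent and lets tools catch typos.
    if action == Action.JUMP:
        return "jumping"
    if action == Action.MOVE:
        return "moving"
    return "idle"

print(handle(Action.JUMP))  # jumping
```

Because `IntEnum` members compare equal to plain ints, callers that still pass raw numbers keep working during the migration.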
190,655 | 22,146,949,280 | IssuesEvent | 2022-06-03 13:03:35 | rakheend-org/new-med-repo | https://api.github.com/repos/rakheend-org/new-med-repo | opened | jstl-1.2.jar: 1 vulnerabilities (highest severity is: 7.3) | security vulnerability | <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jstl-1.2.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /pository/javax/servlet/jstl/1.2/jstl-1.2.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/rakheend-org/new-med-repo/commit/b27bf3d34bc18233dc210e3531d5d479a99e56be">b27bf3d34bc18233dc210e3531d5d479a99e56be</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2015-0254](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.3 | jstl-1.2.jar | Direct | org.apache.taglibs:taglibs-standard-impl:1.2.3 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-0254</summary>
### Vulnerable Library - <b>jstl-1.2.jar</b></p>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /pository/javax/servlet/jstl/1.2/jstl-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jstl-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rakheend-org/new-med-repo/commit/b27bf3d34bc18233dc210e3531d5d479a99e56be">b27bf3d34bc18233dc210e3531d5d479a99e56be</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) <x:parse> or (2) <x:transform> JSTL XML tag.
<p>Publish Date: 2015-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254>CVE-2015-0254</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/taglibs/standard/">https://tomcat.apache.org/taglibs/standard/</a></p>
<p>Release Date: 2015-03-09</p>
<p>Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"javax.servlet","packageName":"jstl","packageVersion":"1.2","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"javax.servlet:jstl:1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.taglibs:taglibs-standard-impl:1.2.3","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2015-0254","vulnerabilityDetails":"Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) \u003cx:parse\u003e or (2) \u003cx:transform\u003e JSTL XML tag.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}]</REMEDIATE> --> | True | jstl-1.2.jar: 1 vulnerabilities (highest severity is: 7.3) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jstl-1.2.jar</b></p></summary>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /pository/javax/servlet/jstl/1.2/jstl-1.2.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/rakheend-org/new-med-repo/commit/b27bf3d34bc18233dc210e3531d5d479a99e56be">b27bf3d34bc18233dc210e3531d5d479a99e56be</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2015-0254](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.3 | jstl-1.2.jar | Direct | org.apache.taglibs:taglibs-standard-impl:1.2.3 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2015-0254</summary>
### Vulnerable Library - <b>jstl-1.2.jar</b></p>
<p></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /pository/javax/servlet/jstl/1.2/jstl-1.2.jar</p>
<p>
Dependency Hierarchy:
- :x: **jstl-1.2.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rakheend-org/new-med-repo/commit/b27bf3d34bc18233dc210e3531d5d479a99e56be">b27bf3d34bc18233dc210e3531d5d479a99e56be</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) <x:parse> or (2) <x:transform> JSTL XML tag.
<p>Publish Date: 2015-03-09
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254>CVE-2015-0254</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.3</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tomcat.apache.org/taglibs/standard/">https://tomcat.apache.org/taglibs/standard/</a></p>
<p>Release Date: 2015-03-09</p>
<p>Fix Resolution: org.apache.taglibs:taglibs-standard-impl:1.2.3</p>
</p>
<p></p>
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
</details>
<!-- <REMEDIATE>[{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"javax.servlet","packageName":"jstl","packageVersion":"1.2","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"javax.servlet:jstl:1.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.taglibs:taglibs-standard-impl:1.2.3","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2015-0254","vulnerabilityDetails":"Apache Standard Taglibs before 1.2.3 allows remote attackers to execute arbitrary code or conduct external XML entity (XXE) attacks via a crafted XSLT extension in a (1) \u003cx:parse\u003e or (2) \u003cx:transform\u003e JSTL XML tag.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-0254","cvss3Severity":"high","cvss3Score":"7.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"Low"},"extraData":{}}]</REMEDIATE> --> | non_defect | jstl jar vulnerabilities highest severity is vulnerable library jstl jar path to dependency file pom xml path to vulnerable library pository javax servlet jstl jstl jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high jstl jar direct org apache taglibs taglibs standard impl details cve vulnerable library jstl jar path to dependency file pom xml path to vulnerable library pository javax servlet jstl jstl jar dependency hierarchy x jstl jar vulnerable library found in head commit a href found in base branch master vulnerability details apache standard taglibs before allows remote attackers to execute arbitrary code or conduct external xml entity xxe attacks via a crafted xslt extension in a or jstl xml tag publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none 
scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache taglibs taglibs standard impl step up your open source security game with mend istransitivedependency false dependencytree javax servlet jstl isminimumfixversionavailable true minimumfixversion org apache taglibs taglibs standard impl isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails apache standard taglibs before allows remote attackers to execute arbitrary code or conduct external xml entity xxe attacks via a crafted xslt extension in a parse or transform jstl xml tag vulnerabilityurl | 0 |
55,052 | 14,165,720,055 | IssuesEvent | 2020-11-12 07:45:38 | hazelcast/hazelcast-docker | https://api.github.com/repos/hazelcast/hazelcast-docker | closed | Potential performance regression in 4.0.2 image | Type: Defect | `hazelcast/hazelcast:4.0.2` is around 30% slower than a single member .jar running with OpenJDK 11.0.8+10.
See https://github.com/hazelcast/hazelcast-nodejs-client/issues/559#issuecomment-696116277 | 1.0 | Potential performance regression in 4.0.2 image - `hazelcast/hazelcast:4.0.2` is around 30% slower than a single member .jar running with OpenJDK 11.0.8+10.
See https://github.com/hazelcast/hazelcast-nodejs-client/issues/559#issuecomment-696116277 | defect | potential performance regression in image hazelcast hazelcast is around slower than a single member jar running with openjdk see | 1 |
15,527 | 10,316,028,158 | IssuesEvent | 2019-08-30 09:02:09 | terraform-providers/terraform-provider-azurerm | https://api.github.com/repos/terraform-providers/terraform-provider-azurerm | closed | Availability Zones for AKS cluster | enhancement preview service/kubernetes-cluster | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
https://docs.microsoft.com/en-us/azure/aks/availability-zones
This function is Preview phase, but do you have any plan to develop for this feature?
I want to create AKS cluster w/ availability-zones via Terraform.
### New or Affected Resource(s)
* azurerm_kubernetes_cluster
### Potential Terraform Configuration
I think possibly to add like node_count or node_zone on the terraform file.
```hcl
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
resource "azurerm_kubernetes_cluster" "az-sample" {
name = "az-sample"
location = "eastus2"
resource_group_name = "az-sample"
dns_prefix = "az-sample"
kubernetes_version = "1.13.7"
linux_profile {
admin_username = "azuser"
ssh_key {
key_data = "${file("./id_rsa.pub")}"
}
}
agent_pool_profile {
name = "default"
**node_count = 3**
**node_zone = "1,2,3"**
vm_size = "Standard_B8ms"
os_type = "Linux"
os_disk_size_gb = 30
vnet_subnet_id = "${data.azurerm_subnet.express_route.id}"
}
service_principal {
client_id = "${var.azure_client_id}"
client_secret = "${var.azure_client_secret}"
}
network_profile {
network_plugin = "azure"
}
}
```
### References
https://docs.microsoft.com/en-us/azure/aks/availability-zones
* #0000 | 1.0 | Availability Zones for AKS cluster - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
https://docs.microsoft.com/en-us/azure/aks/availability-zones
This function is Preview phase, but do you have any plan to develop for this feature?
I want to create AKS cluster w/ availability-zones via Terraform.
### New or Affected Resource(s)
* azurerm_kubernetes_cluster
### Potential Terraform Configuration
I think possibly to add like node_count or node_zone on the terraform file.
```hcl
# Copy-paste your Terraform configurations here - for large Terraform configs,
# please use a service like Dropbox and share a link to the ZIP file. For
# security, you can also encrypt the files using our GPG public key.
resource "azurerm_kubernetes_cluster" "az-sample" {
name = "az-sample"
location = "eastus2"
resource_group_name = "az-sample"
dns_prefix = "az-sample"
kubernetes_version = "1.13.7"
linux_profile {
admin_username = "azuser"
ssh_key {
key_data = "${file("./id_rsa.pub")}"
}
}
agent_pool_profile {
name = "default"
**node_count = 3**
**node_zone = "1,2,3"**
vm_size = "Standard_B8ms"
os_type = "Linux"
os_disk_size_gb = 30
vnet_subnet_id = "${data.azurerm_subnet.express_route.id}"
}
service_principal {
client_id = "${var.azure_client_id}"
client_secret = "${var.azure_client_secret}"
}
network_profile {
network_plugin = "azure"
}
}
```
### References
https://docs.microsoft.com/en-us/azure/aks/availability-zones
* #0000 | non_defect | availability zones for aks cluster community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description this function is preview phase but do you have any plan to develop for this feature i want to create aks cluster w availability zones via terraform new or affected resource s azurerm kubernetes cluster potential terraform configuration i think possibly to add like node count or node zone on the terraform file hcl copy paste your terraform configurations here for large terraform configs please use a service like dropbox and share a link to the zip file for security you can also encrypt the files using our gpg public key resource azurerm kubernetes cluster az sample name az sample location resource group name az sample dns prefix az sample kubernetes version linux profile admin username azuser ssh key key data file id rsa pub agent pool profile name default node count node zone vm size standard os type linux os disk size gb vnet subnet id data azurerm subnet express route id service principal client id var azure client id client secret var azure client secret network profile network plugin azure references | 0 |
436,044 | 12,544,542,844 | IssuesEvent | 2020-06-05 17:22:38 | tgstation/tgstation | https://api.github.com/repos/tgstation/tgstation | closed | Something is deleting turf air datums and it's killing the server | Atmospherics Bug Priority: High | <!-- Write **BELOW** The Headers and **ABOVE** The comments else it may not be viewable -->
## Round ID:
[138752](https://scrubby.melonmesa.com/round/138752)
https://tgstation13.org/parsed-logs/manuel/data/logs/2020/06/04/round-138752/
## Testmerges:
Not relevant.
## Reproduction:
Something has deleted the air from these two turfs. Caused the entire server to stall.

```
15:48:30 [0x200a618] (197,120,2) || the BZ canister was destroyed.
15:48:33 [0x2009213] (195,126,2) || the plasma canister was destroyed.
15:48:33 [0x20095f6] (196,126,2) || the plasma canister was destroyed.
```
Canisters were destroyed on these turfs.
Here is another round where something similar happened:
https://tgstation13.org/parsed-logs/terry/data/logs/2020/06/02/round-138583/runtime.txt
Note that it happened on Pubby both times, might be map related or just general canister nonsense.
## Round ID:
[138752](https://scrubby.melonmesa.com/round/138752)
https://tgstation13.org/parsed-logs/manuel/data/logs/2020/06/04/round-138752/
## Testmerges:
Not relevant.
## Reproduction:
Something has deleted the air from these two turfs. Caused the entire server to stall.

```
15:48:30 [0x200a618] (197,120,2) || the BZ canister was destroyed.
15:48:33 [0x2009213] (195,126,2) || the plasma canister was destroyed.
15:48:33 [0x20095f6] (196,126,2) || the plasma canister was destroyed.
```
Canisters were destroyed on these turfs.
Here is another round where something similar happened:
https://tgstation13.org/parsed-logs/terry/data/logs/2020/06/02/round-138583/runtime.txt
Note that it happened on Pubby both times, might be map related or just general canister nonesense. | non_defect | something is deleting turf air datums and it s killing the server round id testmerges not relevant reproduction something has deleted the air from these two turfs caused the entire server to stall the bz canister was destroyed the plasma canister was destroyed the plasma canister was destroyed canisters were destroyed on these turfs here is another round where something similar happened note that it happened on pubby both times might be map related or just general canister nonesense | 0 |
45,591 | 12,891,065,458 | IssuesEvent | 2020-07-13 17:03:53 | idaholab/moose | https://api.github.com/repos/idaholab/moose | closed | Error message construct_side_set_from_node_set should be construct_side_list_from_node_list | T: defect | ## Bug Description
The error message for the need to create sidesets from nodesets gives the wrong fix.
```
*** ERROR ***
/Users/topher/projects/meitner/empire/problems/unit_assembly/thermo-mechanical-n-hp/black_box/area_test.i:45: (Postprocessors/inside_area_nodeset/boundary):
the following side set ids do not exist on the mesh: 101
MOOSE distinguishes between "node sets" and "side sets" depending on whether
you are using "Nodal" or "Integrated" BCs respectively. Node sets corresponding
to your side sets are constructed for you by default.
Try setting "Mesh/construct_side_set_from_node_set=true" if you see this error.
Note: If you are running with adaptivity you should prefer using side sets.
```
`Mesh/construct_side_set_from_node_set` should be `Mesh/construct_side_list_from_node_list`.
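For reference, a minimal sketch of the corrected input fragment, assuming a standard MOOSE `[Mesh]` block (the parameter name is taken from this report; check your MOOSE version's Mesh documentation for exact placement and support):

```
[Mesh]
  # Build side lists from the existing node sets so boundary-based
  # postprocessors (e.g. AreaPostprocessor on side set 101) can find them.
  construct_side_list_from_node_list = true
[]
```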
## Steps to Reproduce
Run a mesh with only node sets and set an `AreaPostprocessor`
## Impact
Gives user the wrong fix. | 1.0 | Error message construct_side_set_from_node_set should be construct_side_list_from_node_list - ## Bug Description
The error message for the need to create sidesets from nodesets gives the wrong fix.
```
*** ERROR ***
/Users/topher/projects/meitner/empire/problems/unit_assembly/thermo-mechanical-n-hp/black_box/area_test.i:45: (Postprocessors/inside_area_nodeset/boundary):
the following side set ids do not exist on the mesh: 101
MOOSE distinguishes between "node sets" and "side sets" depending on whether
you are using "Nodal" or "Integrated" BCs respectively. Node sets corresponding
to your side sets are constructed for you by default.
Try setting "Mesh/construct_side_set_from_node_set=true" if you see this error.
Note: If you are running with adaptivity you should prefer using side sets.
```
`Mesh/construct_side_set_from_node_set` should be `Mesh/construct_side_list_from_node_list`.
## Steps to Reproduce
Run a mesh with only node sets and set an `AreaPostprocessor`
## Impact
Gives user the wrong fix. | defect | error message construct side set from node set should be construct side list from node list bug description the error message for the need to create sidesets from nodesets gives the wrong fix error users topher projects meitner empire problems unit assembly thermo mechanical n hp black box area test i postprocessors inside area nodeset boundary the following side set ids do not exist on the mesh moose distinguishes between node sets and side sets depending on whether you are using nodal or integrated bcs respectively node sets corresponding to your side sets are constructed for you by default try setting mesh construct side set from node set true if you see this error note if you are running with adaptivity you should prefer using side sets mesh construct side set from node set should be mesh construct side list from node list steps to reproduce run a mesh with only node sets and set an areapostprocessor impact gives user the wrong fix | 1
62,726 | 17,186,354,705 | IssuesEvent | 2021-07-16 02:58:31 | chorman0773/Clever-ISA | https://api.github.com/repos/chorman0773/Clever-ISA | closed | Clarify `ss` for Memory Reference Operands and Add size control for referent | I-defect I-unclear X-main | For Memory Reference Long Immediate Operands, it's currently unclear whether the size control bits present in the operand layout control the size of the Memory Address or the Referent. Clarify that the former applies and add size control bits for the referent. | 1.0 | Clarify `ss` for Memory Reference Operands and Add size control for referent - For Memory Reference Long Immediate Operands, it's currently unclear whether the size control bits present in the operand layout control the size of the Memory Address or the Referent. Clarify that the former applies and add size control bits for the referent. | defect | clarify ss for memory reference operands and add size control for referent for memory reference long immediate operands it s currently unclear whether the size control bits present in the operand layout control the size of the memory address or the referent clarify that the former applies and add size control bits for the referent | 1 |