Dataset schema (15 columns; ranges and class counts as reported by the dataset viewer):

| Column | Type | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 class |
| created_at | string | length 19 |
| repo | string | lengths 4 – 112 |
| repo_url | string | lengths 33 – 141 |
| action | string | 3 classes |
| title | string | lengths 1 – 1.02k |
| labels | string | lengths 4 – 1.54k |
| body | string | lengths 1 – 262k |
| index | string | 17 classes |
| text_combine | string | lengths 95 – 262k |
| label | string | 2 classes |
| text | string | lengths 96 – 252k |
| binary_label | int64 | 0 – 1 |

Sample records follow; fields appear in the column order above, separated by `|`.
695,040
| 23,841,474,340
|
IssuesEvent
|
2022-09-06 10:35:42
|
status-im/status-desktop
|
https://api.github.com/repos/status-im/status-desktop
|
opened
|
Can't show pinned messages anymore
|
bug priority 2: medium E:Communities
|
When a channel has pinned messages, there's a tiny "pin" icon in the chat toolbar.
That icon is clickable, and opens a dialog with all pinned messages.

Clicking the pin icon doesn't do anything anymore. In fact there's a QML error:
```text
qrc:/app/AppLayouts/Chat/views/ChatHeaderContentView.qml:290: ReferenceError: messageStore is not defined
```
|
1.0
|
Can't show pinned messages anymore - When a channel has pinned messages, there's a tiny "pin" icon in the chat toolbar.
That icon is clickable, and opens a dialog with all pinned messages.

Clicking the pin icon doesn't do anything anymore. In fact there's a QML error:
```text
qrc:/app/AppLayouts/Chat/views/ChatHeaderContentView.qml:290: ReferenceError: messageStore is not defined
```
|
non_test
|
can t show pinned messages anymore when a channel has pinned messages there s a tiny pin icon in the chat toolbar that icon is clickable and opens a dialog with all pinned messages clicking the pin icon doesn t do anything anymore in fact there s a qml error sh qrc app applayouts chat views chatheadercontentview qml referenceerror messagestore is not defined
| 0
|
177,064
| 28,315,077,475
|
IssuesEvent
|
2023-04-10 18:51:23
|
phetsims/my-solar-system
|
https://api.github.com/repos/phetsims/my-solar-system
|
closed
|
Z order of bodies and Panel.
|
design:general
|
For https://github.com/phetsims/qa/issues/927
Test device
Dell XPS 15
Operating System
Windows 10
Browser
Chrome
Problem description
Bodies hidden behind panels
This has most likely been discussed during a design meeting, but it may be worth revisiting.
PhET has a handful of simulations where the "play-area" occupies the entire screen.
- Energy Skate Park
- CCK
- Charges And Fields
- Pendulum Lab
- Gravity and Orbits
In most cases, we let the user drag objects over the panel. There don't seem to be any ill effects associated with having objects on top of a panel.
Gravity and Orbits is rather the exception in that respect, as it attempts to limit the user's ability to drag an object over the panel, and the velocity vectors appear behind the panels, which I find odd. I personally prefer to have the dragged object on top in the z-order.



This is related to https://github.com/phetsims/my-solar-system/issues/129, yet different.
This is not a bug per se, so feel free to close if you feel strongly about your design choice.
|
1.0
|
Z order of bodies and Panel. - For https://github.com/phetsims/qa/issues/927
Test device
Dell XPS 15
Operating System
Windows 10
Browser
Chrome
Problem description
Bodies hidden behind panels
This has most likely been discussed during a design meeting, but it may be worth revisiting.
PhET has a handful of simulations where the "play-area" occupies the entire screen.
- Energy Skate Park
- CCK
- Charges And Fields
- Pendulum Lab
- Gravity and Orbits
In most cases, we let the user drag objects over the panel. There don't seem to be any ill effects associated with having objects on top of a panel.
Gravity and Orbits is rather the exception in that respect, as it attempts to limit the user's ability to drag an object over the panel, and the velocity vectors appear behind the panels, which I find odd. I personally prefer to have the dragged object on top in the z-order.



This is related to https://github.com/phetsims/my-solar-system/issues/129, yet different.
This is not a bug per se, so feel free to close if you feel strongly about your design choice.
|
non_test
|
z order of bodies and panel for test device dell xps operating system windows browser chrome problem description bodies hidden behind panels this has been mostly likely discussed during a design meeting but i may be worth revisiting phet has a handful of simulation where the play area occupies the entire screen energy skate park cck charges and fields pendulum lab gravity and orbits in most cases we let the user drag objects over the panel there doesn t seem any ill effects associated with having objects on top of a panel gravity and orbits is rather the exception in that respect as it attempts to limit the ability of a user to dragged an object over the panel and the velocity vectors appear behind the panels which i find odd i personally prefer to have the object that is dragged on top on the z order this is related to but yet different this is not a bug per say so feel free to close if you feel strongly about your design choice
| 0
|
182,927
| 31,029,350,023
|
IssuesEvent
|
2023-08-10 11:22:42
|
Shimpei-GANGAN/create-nuxt3-app
|
https://api.github.com/repos/Shimpei-GANGAN/create-nuxt3-app
|
closed
|
[Task] Set up Nuxt3 environment
|
🏰 design / consider
|
## Details
- Set up the Nuxt3 environment and add the following packages:
- Vitest
- ESLint
- Prettier
## Notes
- UI library templates will be created in a separate branch or separate repository. Ideally it would look like this:
- create-nuxt3-app/chakraui
- create-nuxt3-app/vuetify
- It would be handy if these could be installed via the `nuxi --template` feature
## TODO
- [ ] Set up Nuxt3 environment
- [ ] Vitest: split off into #4
- [x] ESLint and ESLint modules
- [x] Prettier
|
1.0
|
[Task] Set up Nuxt3 environment - ## Details
- Set up the Nuxt3 environment and add the following packages:
- Vitest
- ESLint
- Prettier
## Notes
- UI library templates will be created in a separate branch or separate repository. Ideally it would look like this:
- create-nuxt3-app/chakraui
- create-nuxt3-app/vuetify
- It would be handy if these could be installed via the `nuxi --template` feature
## TODO
- [ ] Set up Nuxt3 environment
- [ ] Vitest: split off into #4
- [x] ESLint and ESLint modules
- [x] Prettier
|
non_test
|
[task] details set up the nuxt3 environment add the following packages vitest eslint prettier notes ui library templates will be created in a separate branch or separate repository ideally create app chakraui create app vuetify handy to install via the nuxi template feature todo vitest split off eslint and eslint modules prettier
| 0
|
293,688
| 25,317,132,881
|
IssuesEvent
|
2022-11-17 22:47:56
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
tests: drivers: gpio_basic_api and gpio_api_1pin: convert to new ztest API
|
Enhancement area: GPIO area: Tests
|
**Is your enhancement proposal related to a problem? Please describe.**
#46960 adds support for the new ztest API to `tests/drivers/gpio/gpio_get_direction/`, but the other gpio driver tests are still using the old ztest API.
**Describe the solution you'd like**
Should probably migrate gpio driver tests to the new ztest API.
**Describe alternatives you've considered**
@mnkp - feel free to delegate to @henrikbrixandersen or myself if you are too busy.
**Additional context**
Should list the change in https://github.com/zephyrproject-rtos/zephyr/issues/47002 when complete.
|
1.0
|
tests: drivers: gpio_basic_api and gpio_api_1pin: convert to new ztest API - **Is your enhancement proposal related to a problem? Please describe.**
#46960 adds support for the new ztest API to `tests/drivers/gpio/gpio_get_direction/`, but the other gpio driver tests are still using the old ztest API.
**Describe the solution you'd like**
Should probably migrate gpio driver tests to the new ztest API.
**Describe alternatives you've considered**
@mnkp - feel free to delegate to @henrikbrixandersen or myself if you are too busy.
**Additional context**
Should list the change in https://github.com/zephyrproject-rtos/zephyr/issues/47002 when complete.
|
test
|
tests drivers gpio basic api and gpio api convert to new ztest api is your enhancement proposal related to a problem please describe adds support for the new ztest api to tests drivers gpio gpio get direction but the other gpio driver tests are still using the old ztest api describe the solution you d like should probably migrate gpio driver tests to the new ztest api describe alternatives you ve considered mnkp feel free to delegate to henrikbrixandersen or myself if you are too busy additional context should list the change in when complete
| 1
|
162,612
| 12,682,992,080
|
IssuesEvent
|
2020-06-19 18:40:03
|
brimsec/brim
|
https://api.github.com/repos/brimsec/brim
|
closed
|
reload is flaky
|
bug test
|
[This test run](https://github.com/brimsec/brim/runs/777908057?check_suite_focus=true) failed in Spectron `app.browserWindow.reload()`. We know this is so because it's written inside an `appStep`, and that `appStep` never completed.
```
[2020-06-16 19:44:14.955 debug]: Starting step "app reload"
[2020-06-16 19:44:16.769 error]: handleError: Test hit exception: unknown error: cannot determine loading status
from unknown error: unhandled inspector error: {"code":-32000,"message":"Inspected target navigated or closed"}
```
Looks related to or actually is https://github.com/electron-userland/spectron/issues/493 .
|
1.0
|
reload is flaky - [This test run](https://github.com/brimsec/brim/runs/777908057?check_suite_focus=true) failed in Spectron `app.browserWindow.reload()`. We know this is so because it's written inside an `appStep`, and that `appStep` never completed.
```
[2020-06-16 19:44:14.955 debug]: Starting step "app reload"
[2020-06-16 19:44:16.769 error]: handleError: Test hit exception: unknown error: cannot determine loading status
from unknown error: unhandled inspector error: {"code":-32000,"message":"Inspected target navigated or closed"}
```
Looks related to or actually is https://github.com/electron-userland/spectron/issues/493 .
|
test
|
reload is flaky failed in spectron app browserwindow reload we know this is so because it s written inside an appstep and that appstep never completed starting step app reload handleerror test hit exception unknown error cannot determine loading status from unknown error unhandled inspector error code message inspected target navigated or closed looks related to or actually is
| 1
|
228,558
| 25,219,056,714
|
IssuesEvent
|
2022-11-14 11:20:26
|
freedomofpress/dangerzone
|
https://api.github.com/repos/freedomofpress/dangerzone
|
closed
|
Failed CLI execution produces an empty "safe" document
|
bug security
|
## Description
Running the Dangerzone CLI in a Linux environment that does not have Podman properly configured throws an error during document conversion. A side effect of this error is that it produces an *empty* `*-safe.pdf` document.
## Steps to Reproduce
OS: Ubuntu 22.04
Release: 0.3.2
* Ensure you have access to the Dangerzone CLI (run `dangerzone-cli --help`).
* Make sure that `podman images ls` fails. You can temporarily move the `/usr/bin/podman` binary to somewhere else, for instance.
* Have an example PDF document ready.
* Run `dangerzone-cli test.pdf` within that container, where `test.pdf` is your example PDF.
* List the contents of your directory. You should see a `test-safe.pdf` there.
## Expected Behavior
Fail the execution without creating a safe document.
|
True
|
Failed CLI execution produces an empty "safe" document - ## Description
Running the Dangerzone CLI in a Linux environment that does not have Podman properly configured throws an error during document conversion. A side effect of this error is that it produces an *empty* `*-safe.pdf` document.
## Steps to Reproduce
OS: Ubuntu 22.04
Release: 0.3.2
* Ensure you have access to the Dangerzone CLI (run `dangerzone-cli --help`).
* Make sure that `podman images ls` fails. You can temporarily move the `/usr/bin/podman` binary to somewhere else, for instance.
* Have an example PDF document ready.
* Run `dangerzone-cli test.pdf` within that container, where `test.pdf` is your example PDF.
* List the contents of your directory. You should see a `test-safe.pdf` there.
## Expected Behavior
Fail the execution without creating a safe document.
|
non_test
|
failed cli execution produces an empty safe document description running dangerzone cli on a linux environment that does not have podman properly configured throws an error during document conversion a side effect of this error is that it produces an empty safe pdf document steps to reproduce os ubuntu release ensure you have access to the dangerzone cli run dangerzone cli help make sure that podman images ls fails you can temporarily move the usr bin podman binary to somewhere else for instance have an example pdf document ready run dangerzone cli test pdf within that container where test pdf is your example pdf list the contents of your directory you should see a test safe pdf there expected behavior fail the execution without creating a safe document
| 0
|
35,712
| 5,003,854,973
|
IssuesEvent
|
2016-12-12 02:01:15
|
FaradayRF/FaradayRF-Hardware
|
https://api.github.com/repos/FaradayRF/FaradayRF-Hardware
|
opened
|
First Batch Rev D1 Build
|
Testing
|
# First Batch Boards
We've already brought up the first of twelve boards in #46, so here are the next 11! The build order for these units will be the same as the first: SMA connector, GPS, and finally Power/MOSFET connectors. None of them will initially get a JTAG connector, but they may eventually. Finally, I will use prebuilt binaries to update the firmware of the CC430 via USB.
|
1.0
|
First Batch Rev D1 Build - # First Batch Boards
We've already brought up the first of twelve boards in #46, so here are the next 11! The build order for these units will be the same as the first: SMA connector, GPS, and finally Power/MOSFET connectors. None of them will initially get a JTAG connector, but they may eventually. Finally, I will use prebuilt binaries to update the firmware of the CC430 via USB.
|
test
|
first batch rev build first batch boards we ve already brought up the first of twelve boards on so here are the next order of building these units will be the same as the first being sma connector gps and finally power mosfet connectors none of them will initially get a jtag connector but they may eventually finally i will use prebuilt binaries to update the firmware of the via usb
| 1
|
86,089
| 15,755,328,070
|
IssuesEvent
|
2021-03-31 01:34:54
|
ysmanohar/DashBoard
|
https://api.github.com/repos/ysmanohar/DashBoard
|
opened
|
WS-2019-0425 (Medium) detected in multiple libraries
|
security vulnerability
|
## WS-2019-0425 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mocha-2.5.3.tgz</b>, <b>mocha-3.5.3.tgz</b>, <b>mocha-1.21.4.tgz</b>, <b>mocha-1.21.5.tgz</b></p></summary>
<p>
<details><summary><b>mocha-2.5.3.tgz</b></p></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-2.5.3.tgz">https://registry.npmjs.org/mocha/-/mocha-2.5.3.tgz</a></p>
<p>Path to dependency file: /DashBoard/bower_components/chai/package.json</p>
<p>Path to vulnerable library: DashBoard/bower_components/async/node_modules/mocha/package.json,DashBoard/bower_components/async/node_modules/mocha/package.json,DashBoard/bower_components/async/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mocha-2.5.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>mocha-3.5.3.tgz</b></p></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-3.5.3.tgz">https://registry.npmjs.org/mocha/-/mocha-3.5.3.tgz</a></p>
<p>Path to dependency file: /DashBoard/bower_components/es6-promise/package.json</p>
<p>Path to vulnerable library: DashBoard/bower_components/es6-promise/node_modules/mocha/package.json,DashBoard/bower_components/es6-promise/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mocha-3.5.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>mocha-1.21.4.tgz</b></p></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-1.21.4.tgz">https://registry.npmjs.org/mocha/-/mocha-1.21.4.tgz</a></p>
<p>Path to dependency file: /DashBoard/bower_components/sinon-chai/package.json</p>
<p>Path to vulnerable library: DashBoard/bower_components/sinon-chai/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mocha-1.21.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>mocha-1.21.5.tgz</b></p></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-1.21.5.tgz">https://registry.npmjs.org/mocha/-/mocha-1.21.5.tgz</a></p>
<p>Path to dependency file: /DashBoard/bower_components/es6-promise/package.json</p>
<p>Path to vulnerable library: DashBoard/bower_components/es6-promise/node_modules/promises-aplus-tests-phantom/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- promises-aplus-tests-phantom-2.1.0-revise.tgz (Root Library)
- :x: **mocha-1.21.5.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Mocha is vulnerable to a ReDoS attack: if the stack trace in utils.js begins with a large error message and full-trace is not enabled, utils.stackTraceFilter() will take exponential run time.
<p>Publish Date: 2019-01-24
<p>URL: <a href=https://github.com/mochajs/mocha/commit/1a43d8b11a64e4e85fe2a61aed91c259bbbac559>WS-2019-0425</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="v6.0.0">v6.0.0</a></p>
<p>Release Date: 2020-05-07</p>
<p>Fix Resolution: https://github.com/mochajs/mocha/commit/1a43d8b11a64e4e85fe2a61aed91c259bbbac559</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2019-0425 (Medium) detected in multiple libraries - ## WS-2019-0425 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>mocha-2.5.3.tgz</b>, <b>mocha-3.5.3.tgz</b>, <b>mocha-1.21.4.tgz</b>, <b>mocha-1.21.5.tgz</b></p></summary>
<p>
<details><summary><b>mocha-2.5.3.tgz</b></p></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-2.5.3.tgz">https://registry.npmjs.org/mocha/-/mocha-2.5.3.tgz</a></p>
<p>Path to dependency file: /DashBoard/bower_components/chai/package.json</p>
<p>Path to vulnerable library: DashBoard/bower_components/async/node_modules/mocha/package.json,DashBoard/bower_components/async/node_modules/mocha/package.json,DashBoard/bower_components/async/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mocha-2.5.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>mocha-3.5.3.tgz</b></p></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-3.5.3.tgz">https://registry.npmjs.org/mocha/-/mocha-3.5.3.tgz</a></p>
<p>Path to dependency file: /DashBoard/bower_components/es6-promise/package.json</p>
<p>Path to vulnerable library: DashBoard/bower_components/es6-promise/node_modules/mocha/package.json,DashBoard/bower_components/es6-promise/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mocha-3.5.3.tgz** (Vulnerable Library)
</details>
<details><summary><b>mocha-1.21.4.tgz</b></p></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-1.21.4.tgz">https://registry.npmjs.org/mocha/-/mocha-1.21.4.tgz</a></p>
<p>Path to dependency file: /DashBoard/bower_components/sinon-chai/package.json</p>
<p>Path to vulnerable library: DashBoard/bower_components/sinon-chai/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- :x: **mocha-1.21.4.tgz** (Vulnerable Library)
</details>
<details><summary><b>mocha-1.21.5.tgz</b></p></summary>
<p>simple, flexible, fun test framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/mocha/-/mocha-1.21.5.tgz">https://registry.npmjs.org/mocha/-/mocha-1.21.5.tgz</a></p>
<p>Path to dependency file: /DashBoard/bower_components/es6-promise/package.json</p>
<p>Path to vulnerable library: DashBoard/bower_components/es6-promise/node_modules/promises-aplus-tests-phantom/node_modules/mocha/package.json</p>
<p>
Dependency Hierarchy:
- promises-aplus-tests-phantom-2.1.0-revise.tgz (Root Library)
- :x: **mocha-1.21.5.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Mocha is vulnerable to a ReDoS attack: if the stack trace in utils.js begins with a large error message and full-trace is not enabled, utils.stackTraceFilter() will take exponential run time.
<p>Publish Date: 2019-01-24
<p>URL: <a href=https://github.com/mochajs/mocha/commit/1a43d8b11a64e4e85fe2a61aed91c259bbbac559>WS-2019-0425</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="v6.0.0">v6.0.0</a></p>
<p>Release Date: 2020-05-07</p>
<p>Fix Resolution: https://github.com/mochajs/mocha/commit/1a43d8b11a64e4e85fe2a61aed91c259bbbac559</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
ws medium detected in multiple libraries ws medium severity vulnerability vulnerable libraries mocha tgz mocha tgz mocha tgz mocha tgz mocha tgz simple flexible fun test framework library home page a href path to dependency file dashboard bower components chai package json path to vulnerable library dashboard bower components async node modules mocha package json dashboard bower components async node modules mocha package json dashboard bower components async node modules mocha package json dependency hierarchy x mocha tgz vulnerable library mocha tgz simple flexible fun test framework library home page a href path to dependency file dashboard bower components promise package json path to vulnerable library dashboard bower components promise node modules mocha package json dashboard bower components promise node modules mocha package json dependency hierarchy x mocha tgz vulnerable library mocha tgz simple flexible fun test framework library home page a href path to dependency file dashboard bower components sinon chai package json path to vulnerable library dashboard bower components sinon chai node modules mocha package json dependency hierarchy x mocha tgz vulnerable library mocha tgz simple flexible fun test framework library home page a href path to dependency file dashboard bower components promise package json path to vulnerable library dashboard bower components promise node modules promises aplus tests phantom node modules mocha package json dependency hierarchy promises aplus tests phantom revise tgz root library x mocha tgz vulnerable library vulnerability details mocha is vulnerable to redos attack if the stack trace in utils js begins with a large error message and full trace is not enabled utils stacktracefilter will take exponential run time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics 
confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin release date fix resolution step up your open source security game with whitesource
| 0
|
754,464
| 26,389,716,175
|
IssuesEvent
|
2023-01-12 14:52:25
|
l7mp/stunner
|
https://api.github.com/repos/l7mp/stunner
|
opened
|
Milestone v1.14: Performance: Per-allocation CPU load-balancing
|
priority: low type: enhancement
|
This issue is to plan & discuss the performance optimizations that should go into v1.14.
**Problem:** Currently STUNner UDP performance is limited at about 100-200 kpps per UDP listener (i.e., per UDP Gateway/listener in the Kubernetes Gateway API terminology). This is because we allocate a single `net.PacketConn` per UDP listener, which is then [drained by a single CPU thread/go-routine](https://github.com/l7mp/turn/blob/7bd80d5f800480042e404d1d49e5a6d20377127e/server.go#L68). This means that all client allocations made via that listener will share the same CPU thread and there is no way to load-balance client allocations across CPUs; i.e., each listener is restricted to a single CPU. If STUNner is exposed via a single UDP listener (the most common setting) then it will be restricted to about 1200-1500 mcore.
**Notes:**
- **This is not a problem in Kubernetes:** instead of vertical scaling (let a single STUNner instance use as many CPUs as available), Kubernetes defaults to horizontal scaling; if a single `stunnerd` pod is a bottleneck we simply fire up more (e.g., using HPA). In fact, the single-CPU restriction makes HPA *simpler* since the CPU triggers are easier to set (e.g., we have to scale out when the average CPU load approaches 1000 mcores); when the application can vertically scale to some arbitrary number of CPUs by itself we never know how to set the CPU trigger for HPA (this is when vertical scaling interferes with horizontal scaling). Eventually we'll have as many pods as CPU cores and Kubernetes will readily load-balance client connections across our pods. This makes us wonder whether to solve the vertical scaling problem *at all*, since there is very little use of such a feature in Kubernetes.
- **The single-CPU restriction applies per UDP listener:** if STUNner is exposed via multiple UDP TURN listeners then each listener will receive a separate CPU thread.
- **This limitation applies to UDP only:** for TCP, TLS and DTLS the TURN sockets are connected back to the client and therefore [a separate CPU thread/go-routine is created for each allocation](https://github.com/l7mp/turn/blob/7bd80d5f800480042e404d1d49e5a6d20377127e/server.go#L96).
**Solution:** The plan is to create a separate `net.Conn` for each UDP allocation, by (1) sharing the same listener server address using `REUSEADDR/REUSEPORT`, (2) connecting each per-allocation connection back to the client (this will turn the `net.PacketConn` into a connected `net.Conn`), and (3) firing up a separate read-loop/go-routine per each allocation/socket. Extreme care must be taken though in implementing this: if we blindly create a new socket per received UDP packet then a simple UDP portscan will DoS the TURN listener.
**Plan:**
1. Move per-allocation connection creation to after the client has authenticated with the server, e.g., [when the TURN allocation request has been successfully processed](https://github.com/l7mp/turn/blob/7bd80d5f800480042e404d1d49e5a6d20377127e/internal/server/turn.go#L151). Note that this still allows a client with a valid credential to DoS the server, so we need to quota per-client connections.
2. Implement per-client quotas as per RFC8656, Section 7.2., "Receiving an Allocate Request", point 10:
> At any point, the server MAY choose to reject the request with a 486 (Allocation Quota Reached) error if it feels the client is trying to exceed some locally defined allocation quota. The server is free to define this allocation quota any way it wishes, but it SHOULD define it based on the username used to authenticate the request and not on the client's transport address.
3. Expose the client quota via `turn.ServerConfig`. Possibly also expose a setting to let users opt in to per-allocation CPU load-balancing.
4. Test and upstream.
Feedback appreciated.
|
1.0
|
Milestone v1.14: Performance: Per-allocation CPU load-balancing - This issue is to plan & discuss the performance optimizations that should go into v1.14.
**Problem:** Currently STUNner UDP performance is limited at about 100-200 kpps per UDP listener (i.e., per UDP Gateway/listener in the Kubernetes Gateway API terminology). This is because we allocate a single `net.PacketConn` per UDP listener, which is then [drained by a single CPU thread/go-routine](https://github.com/l7mp/turn/blob/7bd80d5f800480042e404d1d49e5a6d20377127e/server.go#L68). This means that all client allocations made via that listener will share the same CPU thread and there is no way to load-balance client allocations across CPUs; i.e., each listener is restricted to a single CPU. If STUNner is exposed via a single UDP listener (the most common setting) then it will be restricted to about 1200-1500 mcore.
**Notes:**
- **This is not a problem in Kubernetes:** instead of vertical scaling (let a single STUNner instance use as many CPUs as available), Kubernetes defaults to horizontal scaling; if a single `stunnerd` pod is a bottleneck we simply fire up more (e.g., using HPA). In fact, the single-CPU restriction makes HPA *simpler* since the CPU triggers are easier to set (e.g., we have to scale out when the average CPU load approaches 1000 mcores); when the application can vertically scale to some arbitrary number of CPUs by itself we never know how to set the CPU trigger for HPA (this is when vertical scaling interferes with horizontal scaling). Eventually we'll have as many pods as CPU cores and Kubernetes will readily load-balance client connections across our pods. This makes us wonder whether to solve the vertical scaling problem *at all*, since there is very little use of such a feature in Kubernetes.
- **The single-CPU restriction applies per UDP listener:** if STUNner is exposed via multiple UDP TURN listeners then each listener will receive a separate CPU thread.
- **This limitation applies to UDP only:** for TCP, TLS and DTLS the TURN sockets are connected back to the client and therefore [a separate CPU thread/go-routine is created for each allocation](https://github.com/l7mp/turn/blob/7bd80d5f800480042e404d1d49e5a6d20377127e/server.go#L96).
**Solution:** The plan is to create a separate `net.Conn` for each UDP allocation, by (1) sharing the same listener server address using `REUSEADDR/REUSEPORT`, (2) connecting each per-allocation connection back to the client (this will turn the `net.PacketConn` into a connected `net.Conn`), and (3) firing up a separate read-loop/go-routine per each allocation/socket. Extreme care must be taken though in implementing this: if we blindly create a new socket per received UDP packet then a simple UDP portscan will DoS the TURN listener.
**Plan:**
1. Move per-allocation connection creation to after the client has authenticated with the server, e.g., [when the TURN allocation request has been successfully processed](https://github.com/l7mp/turn/blob/7bd80d5f800480042e404d1d49e5a6d20377127e/internal/server/turn.go#L151). Note that this still allows a client with a valid credential to DoS the server, so we need to quota per-client connections.
2. Implement per-client quotas as per RFC8656, Section 7.2., "Receiving an Allocate Request", point 10:
> At any point, the server MAY choose to reject the request with a 486 (Allocation Quota Reached) error if it feels the client is trying to exceed some locally defined allocation quota. The server is free to define this allocation quota any way it wishes, but it SHOULD define it based on the username used to authenticate the request and not on the client's transport address.
3. Expose the client quota via `turn.ServerConfig`. Possibly also expose a setting to let users opt in to per-allocation CPU load-balancing.
4. Test and upstream.
Feedback appreciated.
|
non_test
|
milestone performance per allocation cpu load balancing this issue is to plan discuss the performance optimizations that should go into problem currently stunner udp performance is limited at about kpps per udp listener i e per udp gateway listener in the kubernetes gateway api terminology this is because we allocate a single net packetconn per udp listener which is then this means that all client allocations made via that listener will share the same cpu thread and there is no way to load balance client allocations across cpus i e each listener is restricted to a single cpu if stunner is exposed via a single udp listener the most common setting then it will be restricted to about mcore notes this is not a problem in kubernetes instead of vertical scaling let a single stunner instance use as many cpus as available kubernetes defaults to horizontal scaling if a single stunnerd pod is a bottleneck we simply fire up more e g using hpa in fact the single cpu restriction makes hpa simpler since the cpu triggers are easier to set e g we have to scale out when when the average cpu load approaches mcores when the application can vertically scale to some arbitrary number of cpus by itself we never know how to fix the cpu trigger for hpa this is when vertical scaling interferes with horizonmtal scaling eventually we ll have as many pods as cpu cores and kubernetes will readily load balance client connections across our pods this makes us wonder whether to solve the vertical scaling problem at all since there is very little use of such a feature in kubernetes the single cpu restriction apples per udp listener if stunner is exposed via multiple udp turn listeners then each listener will receive a separate cpu thread this limitation applies to udp only for tcp tls and dtls the turn sockets are connected back to the client and therefore solution the plan is to create a separate net conn for each udp allocation by sharing the same listener server address using reuseaddr reuseport 
connecting each per allocation connection back to the client this will turn the net packetconn into a connected net conn and firing up a separate read loop go routine per each allocation socket extreme care must be taken though in implementing this if we blindly create a new socket per received udp packet then a simple udp portscan will dos the turn listener plan move the creation of per allocation connection creation after the client has authenticated with the server e g note that this still allows a client with a valid credential to dos the server so we need to quota per client connections implement per client quotas as per section receiving an allocate request point at any point the server may choose to reject the request with a allocation quota reached error if it feels the client is trying to exceed some locally defined allocation quota the server is free to define this allocation quota any way it wishes but it should define it based on the username used to authenticate the request and not on the client s transport address expose the client quota via turn serverconfig possibly also expose a setting to let users to opt in to per allocation cpu load balancing test and upstream feedback appreciated
| 0
|
42,357
| 5,435,081,906
|
IssuesEvent
|
2017-03-05 14:01:06
|
openbmc/openbmc-test-automation
|
https://api.github.com/repos/openbmc/openbmc-test-automation
|
closed
|
[Automation] BMC boot count
|
feature Test
|
We have lately been seeing phantom resets during code update, so we want to add logic to track the boot count. According to the dev folks, either uptime or /proc/stat btime can serve as the basis for it;
we need to derive logic from it to keep the boot count.
|
1.0
|
[Automation] BMC boot count - We have lately been seeing phantom resets during code update, so we want to add logic to track the boot count. According to the dev folks, either uptime or /proc/stat btime can serve as the basis for it;
we need to derive logic from it to keep the boot count.
|
test
|
bmc boot count we are seeing lately phantom reset in code update so we want to add logic to find way to track the boot count look like as per dev folks either uptime or proc stat btime makes up for it need to derived logic from it to keep the boot count
| 1
|
224,147
| 24,769,703,913
|
IssuesEvent
|
2022-10-23 01:12:09
|
snykiotcubedev/arangodb-3.7.6
|
https://api.github.com/repos/snykiotcubedev/arangodb-3.7.6
|
reopened
|
CVE-2018-11694 (Medium) detected in node-sass-4.14.1.tgz
|
security vulnerability
|
## CVE-2018-11694 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snykiotcubedev/arangodb-3.7.6/commit/fce8f85f1c2f070c8e6a8e76d17210a2117d3833">fce8f85f1c2f070c8e6a8e76d17210a2117d3833</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-11694 (Medium) detected in node-sass-4.14.1.tgz - ## CVE-2018-11694 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>
Dependency Hierarchy:
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snykiotcubedev/arangodb-3.7.6/commit/fce8f85f1c2f070c8e6a8e76d17210a2117d3833">fce8f85f1c2f070c8e6a8e76d17210a2117d3833</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
An issue was discovered in LibSass through 3.5.4. A NULL pointer dereference was found in the function Sass::Functions::selector_append which could be leveraged by an attacker to cause a denial of service (application crash) or possibly have unspecified other impact.
<p>Publish Date: 2018-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11694>CVE-2018-11694</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2018-06-04</p>
<p>Fix Resolution: 5.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in node sass tgz cve medium severity vulnerability vulnerable library node sass tgz wrapper around libsass library home page a href dependency hierarchy x node sass tgz vulnerable library found in head commit a href found in base branch main vulnerability details an issue was discovered in libsass through a null pointer dereference was found in the function sass functions selector append which could be leveraged by an attacker to cause a denial of service application crash or possibly have unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version release date fix resolution step up your open source security game with mend
| 0
|
142
| 2,494,758,213
|
IssuesEvent
|
2015-01-06 01:11:45
|
yaobinshi/test1
|
https://api.github.com/repos/yaobinshi/test1
|
opened
|
upload from server did not work
|
Category: UI Component: Rank Component: Tester Priority: High Status: Closed Tracker: Bug
|
---
Author Name: **larry shi**
Original Redmine Issue: 87, http://www.fossology.org/issues/87
Original Date: 2011/12/16
Original Assignee: Mary Laser
---
upload from server did not work
cause: fosscp_agent has gone away.
how to fix this issue
my suggestion is:
upload file with cp2foss directly.
Make sense?
|
1.0
|
upload from server did not work - ---
Author Name: **larry shi**
Original Redmine Issue: 87, http://www.fossology.org/issues/87
Original Date: 2011/12/16
Original Assignee: Mary Laser
---
upload from server did not work
cause: fosscp_agent has gone away.
how to fix this issue
my suggestion is:
upload file with cp2foss directly.
Make sense?
|
test
|
upload from server did not work author name larry shi original redmine issue original date original assignee mary laser upload from server did not work cause fosscp agent is passed away how to fix this issue my suggestion is upload file with directly make sense
| 1
|
224,241
| 17,673,813,941
|
IssuesEvent
|
2021-08-23 09:45:03
|
spring-projects/spring-framework
|
https://api.github.com/repos/spring-projects/spring-framework
|
closed
|
Introduce `ExceptionCollector` testing utility
|
in: test type: enhancement
|
In order to support _soft assertions_ in #26917 and #26969, we need common support for tracking multiple failures and generating a single `AssertionError` containing those failures as suppressed exceptions.
|
1.0
|
Introduce `ExceptionCollector` testing utility - In order to support _soft assertions_ in #26917 and #26969, we need common support for tracking multiple failures and generating a single `AssertionError` containing those failures as suppressed exceptions.
|
test
|
introduce exceptioncollector testing utility in order to support soft assertions in and we need common support for tracking multiple failures and generating a single assertionerror containing those failures as suppressed exceptions
| 1
|
327,150
| 28,045,873,642
|
IssuesEvent
|
2023-03-28 22:50:12
|
finos/waltz
|
https://api.github.com/repos/finos/waltz
|
closed
|
Search: searching for a guid throws an error
|
bug small change fixed (test & close)
|
### Description
An actor was registered with a name/external-id as a guid. Searching for that guid causes an error to be displayed.
Not sure that the actor had anything to do with it. May have just been the first time we noticed it.
### Waltz Version
1.47.1
### Steps to Reproduce
Search for guid
### Expected Result
Entity should be found (or no hits returned).
### Actual Result
Error message:

|
1.0
|
Search: searching for a guid throws an error - ### Description
An actor was registered with a name/external-id as a guid. Searching for that guid causes an error to be displayed.
Not sure that the actor had anything to do with it. May have just been the first time we noticed it.
### Waltz Version
1.47.1
### Steps to Reproduce
Search for guid
### Expected Result
Entity should be found (or no hits returned).
### Actual Result
Error message:

|
test
|
search searching for a guid throws an error description an actor was registered with a name external id as a guid searching for that guid causes an error to be displayed not sure that the actor had anything to do with it may have just been the first time we noticed it waltz version steps to reproduce search for guid expected result entity should be found or no hits returned actual result error message
| 1
|
135,750
| 11,016,601,106
|
IssuesEvent
|
2019-12-05 06:00:05
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
closed
|
Update '运算符' to '文件共享' on the prompt dialog when renaming a file share with an exist name
|
:gear: blobs :gear: files 🌐 localization 🧪 testing
|
**Storage Explorer Version:** 1.11.0
**Build:** [20191105.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3213388)
**Branch:** rel/1.11.0
**Platform/OS:** Windows 10/ Linux Ubuntu 19.04/macOS High Sierra
**Language:** Chinese(zh-CN)
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select 'Chinese (simplified)' -> Restart Storage Explorer.
3. Expand one storage account -> File Shares.
4. Create two file shares -> Rename one file share using another file share's name.
5. Check the prompt dialog.
**Expected Experience:**
Show '具有该名称的**文件共享**已存在' ("a **file share** with that name already exists") on the dialog.
**Actual Experience:**
Show '具有该名称的**运算符**已存在' ("an **operator** with that name already exists") on the dialog.

**More Info:**
1. This issue also reproduces for one blob container. (Update '运算符' to 'blob 容器', i.e., "blob container")
2. The prompt dialog shows like below in Chinese (traditional).

|
1.0
|
Update '运算符' to '文件共享' on the prompt dialog when renaming a file share with an exist name - **Storage Explorer Version:** 1.11.0
**Build:** [20191105.2](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3213388)
**Branch:** rel/1.11.0
**Platform/OS:** Windows 10/ Linux Ubuntu 19.04/macOS High Sierra
**Language:** Chinese(zh-CN)
**Architecture:** ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select 'Chinese (simplified)' -> Restart Storage Explorer.
3. Expand one storage account -> File Shares.
4. Create two file shares -> Rename one file share using another file share's name.
5. Check the prompt dialog.
**Expect Experience:**
Show '具有该名称的**文件共享**已存在' on the dialog.
**Actual Experience:**
Show '具有该名称的**运算符**已存在' on the dialog.

**More Info:**
1. This issue also reproduces for one blob container. (Update '运算符' to 'blob 容器')
2. The prompt dialog shows like below in Chinese (traditional).

|
test
|
update 运算符 to 文件共享 on the prompt dialog when renaming a file share with an exist name storage explorer version build branch rel platform os windows linux ubuntu macos high sierra language chinese zh cn architecture regression from not a regression steps to reproduce launch storage explorer open settings application regional settings select chinese simplified restart storage explorer expand one storage account file shares create two file shares rename one file share using another file share s name check the prompt dialog expect experience show 具有该名称的 文件共享 已存在 on the dialog actual experience show 具有该名称的 运算符 已存在 on the dialog more info this issue also reproduces for one blob container update 运算符 to blob 容器 the prompt dialog shows like below in chinese traditional
| 1
|
99,712
| 8,709,144,773
|
IssuesEvent
|
2018-12-06 13:08:17
|
NKCR-INPROVE/evidence.periodik
|
https://api.github.com/repos/NKCR-INPROVE/evidence.periodik
|
closed
|
počet stran u editace exempláře
|
ready for test
|
For issue no. 1 we edited and specified the page count; subsequently, when editing the exemplar, the checkboxes for page selection are displayed incorrectly.

|
1.0
|
počet stran u editace exempláře - For issue no. 1 we edited and specified the page count; subsequently, when editing the exemplar, the checkboxes for page selection are displayed incorrectly.

|
test
|
počet stran u editace exempláře u čísla jsme upravili a specifikovali počet stran následně při editu exempláře se zobrazují špatně checkboxy pro výběr stran
| 1
|
281,103
| 30,872,817,016
|
IssuesEvent
|
2023-08-03 12:32:25
|
hinoshiba/news
|
https://api.github.com/repos/hinoshiba/news
|
closed
|
[SecurityWeek] Ransomware Attacks on Industrial Organizations Doubled in Past Year: Report
|
SecurityWeek Stale
|
The number of ransomware attacks targeting industrial organizations and infrastructure has doubled since the second quarter of 2022, according to Dragos.
The post [Ransomware Attacks on Industrial Organizations Doubled in Past Year: Report](https://www.securityweek.com/ransomware-attacks-on-industrial-organizations-doubled-in-past-year-report/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/ransomware-attacks-on-industrial-organizations-doubled-in-past-year-report/>
|
True
|
[SecurityWeek] Ransomware Attacks on Industrial Organizations Doubled in Past Year: Report -
The number of ransomware attacks targeting industrial organizations and infrastructure has doubled since the second quarter of 2022, according to Dragos.
The post [Ransomware Attacks on Industrial Organizations Doubled in Past Year: Report](https://www.securityweek.com/ransomware-attacks-on-industrial-organizations-doubled-in-past-year-report/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/ransomware-attacks-on-industrial-organizations-doubled-in-past-year-report/>
|
non_test
|
ransomware attacks on industrial organizations doubled in past year report the number of ransomware attacks targeting industrial organizations and infrastructure has doubled since the second quarter of according to dragos the post appeared first on
| 0
|
10,543
| 6,794,477,718
|
IssuesEvent
|
2017-11-01 12:20:46
|
Elgg/Elgg
|
https://api.github.com/repos/Elgg/Elgg
|
closed
|
Linkify headers in group modules
|
easy ui usability
|
E.g. Link "Group blog". Replace the "view all" link with the "Write a blog post" link in that position.
|
True
|
Linkify headers in group modules - E.g. Link "Group blog". Replace the "view all" link with the "Write a blog post" link in that position.
|
non_test
|
linkify headers in group modules e g link group blog replace the view all link with the write a blog post link in that position
| 0
|
113,741
| 17,150,895,063
|
IssuesEvent
|
2021-07-13 20:26:56
|
snowdensb/braindump
|
https://api.github.com/repos/snowdensb/braindump
|
opened
|
CVE-2020-8203 (High) detected in lodash-4.16.6.tgz, lodash-1.0.2.tgz
|
security vulnerability
|
## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.16.6.tgz</b>, <b>lodash-1.0.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-4.16.6.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.16.6.tgz">https://registry.npmjs.org/lodash/-/lodash-4.16.6.tgz</a></p>
<p>Path to dependency file: braindump/package.json</p>
<p>Path to vulnerable library: braindump/node_modules/lodash</p>
<p>
Dependency Hierarchy:
- gulp-sass-2.3.2.tgz (Root Library)
- node-sass-3.12.1.tgz
- sass-graph-2.1.2.tgz
- :x: **lodash-4.16.6.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: braindump/package.json</p>
<p>Path to vulnerable library: braindump/node_modules/lodash</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-10-21</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.16.6","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"gulp-sass:2.3.2;node-sass:3.12.1;sass-graph:2.1.2;lodash:4.16.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"1.0.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"gulp:3.9.1;vinyl-fs:0.3.14;glob-watcher:0.0.6;gaze:0.5.2;globule:0.1.0;lodash:1.0.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-8203 (High) detected in lodash-4.16.6.tgz, lodash-1.0.2.tgz - ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>lodash-4.16.6.tgz</b>, <b>lodash-1.0.2.tgz</b></p></summary>
<p>
<details><summary><b>lodash-4.16.6.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.16.6.tgz">https://registry.npmjs.org/lodash/-/lodash-4.16.6.tgz</a></p>
<p>Path to dependency file: braindump/package.json</p>
<p>Path to vulnerable library: braindump/node_modules/lodash</p>
<p>
Dependency Hierarchy:
- gulp-sass-2.3.2.tgz (Root Library)
- node-sass-3.12.1.tgz
- sass-graph-2.1.2.tgz
- :x: **lodash-4.16.6.tgz** (Vulnerable Library)
</details>
<details><summary><b>lodash-1.0.2.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz">https://registry.npmjs.org/lodash/-/lodash-1.0.2.tgz</a></p>
<p>Path to dependency file: braindump/package.json</p>
<p>Path to vulnerable library: braindump/node_modules/lodash</p>
<p>
Dependency Hierarchy:
- gulp-3.9.1.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **lodash-1.0.2.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-10-21</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"4.16.6","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"gulp-sass:2.3.2;node-sass:3.12.1;sass-graph:2.1.2;lodash:4.16.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"},{"packageType":"javascript/Node.js","packageName":"lodash","packageVersion":"1.0.2","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"gulp:3.9.1;vinyl-fs:0.3.14;glob-watcher:0.0.6;gaze:0.5.2;globule:0.1.0;lodash:1.0.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"lodash - 4.17.19"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-8203","vulnerabilityDetails":"Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_test
|
cve high detected in lodash tgz lodash tgz cve high severity vulnerability vulnerable libraries lodash tgz lodash tgz lodash tgz lodash modular utilities library home page a href path to dependency file braindump package json path to vulnerable library braindump node modules lodash dependency hierarchy gulp sass tgz root library node sass tgz sass graph tgz x lodash tgz vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file braindump package json path to vulnerable library braindump node modules lodash dependency hierarchy gulp tgz root library vinyl fs tgz glob watcher tgz gaze tgz globule tgz x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution attack when using zipobjectdeep in lodash before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree gulp sass node sass sass graph lodash isminimumfixversionavailable true minimumfixversion lodash packagetype javascript node js packagename lodash packageversion packagefilepaths istransitivedependency true dependencytree gulp vinyl fs glob watcher gaze globule lodash isminimumfixversionavailable true minimumfixversion lodash basebranches vulnerabilityidentifier cve vulnerabilitydetails prototype pollution attack when using zipobjectdeep in lodash before vulnerabilityurl
| 0
|
70,812
| 15,111,824,712
|
IssuesEvent
|
2021-02-08 20:58:46
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
Remove contexts from CircleCI until we have the ability to limit context access by security groups
|
devops operations security
|
## Description
Contexts in CircleCI can currently only be created with a default 'All members' group, which gives access to all members of the github Org 'department-of-veterans-affairs'. This can be limited by security groups within the CircleCI interface, based on github teams within the org; however, configuring those settings requires administrative privileges on the org.
It appears that the only workaround for the time being is to manage the AWS and github tokens by putting them in the per-project ENV settings. This means that if we change these credentials, we have to change them in _all_ of the projects, but this is safer than keeping them in a context.
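For illustration only, a build or deploy script can read the credentials from per-project environment variables so that no shared context is needed. The variable names below are the conventional AWS ones and are assumptions, not necessarily what these projects actually configure:

```python
import os

# Read AWS credentials from per-project CircleCI environment variables
# instead of an organization-wide context. Variable names are the
# conventional AWS ones and are assumptions, not this repo's settings.
def load_aws_credentials():
    creds = {
        "aws_access_key_id": os.environ.get("AWS_ACCESS_KEY_ID"),
        "aws_secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY"),
    }
    missing = [name for name, value in creds.items() if not value]
    if missing:
        raise RuntimeError(f"missing per-project env vars: {missing}")
    return creds
```

The downside noted above is inherent to this approach: rotating the credentials means updating these variables in every project, not in one shared context.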
## Background/context/resources
https://dsva.slack.com/archives/C01CJV0L9PS/p1612651359029600
## Technical notes
_Notes around work that is happening, if applicable_
---
## Tasks
For each project:
- [ ] Determine what contexts a job needs (defined in the workflows section of the config.yml)
- [ ] Determine what permissions are conferred by those contexts
- [ ] Create Environmental Variables in the project that replicate the credentials that would have been in the context
Once all have been done:
- [ ] Delete the global contexts
## Definition of Done
- [ ] AWS credentials and github credentials will no longer be stored in the global CircleCI Organizational contexts.
---
### Reminders
- [ ] Please attach your team label and any other appropriate label(s)
- [ ] Please attach the needs grooming tag if needed
- [ ] Please connect to an epic
|
True
|
Remove contexts from CircleCI until we have the ability to limit context access by security groups - ## Description
Contexts in CircleCI are currently only able to be created with a default 'All members' group, which gives access to all members of the github Org 'department-of-veterans-affairs'. This can be limited by security groups within the circleci interface, based on github teams within the org, however, configuring those settings requires administrative privileges on the org.
It appears that the only workaround for the time being is to manage the AWS and github tokens by putting them in the per-project ENV settings. This means that if we change these credentials, we have to change them in _all_ of the projects, but this is safer than keeping them in a context.
## Background/context/resources
https://dsva.slack.com/archives/C01CJV0L9PS/p1612651359029600
## Technical notes
_Notes around work that is happening, if applicable_
---
## Tasks
For each project:
- [ ] Determine what contexts a job needs (defined in the workflows section of the config.yml)
- [ ] Determine what permissions are conferred by those contexts
- [ ] Create Environmental Variables in the project that replicate the credentials that would have been in the context
Once all have been done:
- [ ] Delete the global contexts
## Definition of Done
- [ ] AWS credentials and github credentials will no longer be stored in the global CircleCI Organizational contexts.
---
### Reminders
- [ ] Please attach your team label and any other appropriate label(s)
- [ ] Please attach the needs grooming tag if needed
- [ ] Please connect to an epic
|
non_test
|
remove contexts from circleci until we have the ability to limit context access by security groups description contexts in circleci are currently only able to be created with a default all members group which gives access to all members of the github org department of veterans affairs this can be limited by security groups within the circleci interface based on github teams within the org however configuring those settings requires administrative privileges on the org it appears that the only workaround for the time being is to manage the aws and github tokens by putting them in the per project env settings this means that if we change these credentials we have to change them in all of the projects but this is safer than keeping them in a context background context resources technical notes notes around work that is happening if applicable tasks for each project determine what contexts a job needs defined in the workflows section of the config yml determine what permissions are conferred by those contexts create environmental variables in the project that replicate the credentials that would have been in the context once all have been done delete the global contexts definition of done aws credentials and github credentials will no longer be stored in the global circleci organizational contexts reminders please attach your team label and any other appropriate label s please attach the needs grooming tag if needed please connect to an epic
| 0
|
36,484
| 5,060,658,526
|
IssuesEvent
|
2016-12-22 12:53:33
|
SpamExperts/SpamPAD
|
https://api.github.com/repos/SpamExperts/SpamPAD
|
closed
|
Add replacement for Plugin::URIEval
|
enhancement Plugin Testing
|
- [Plugin::URIEval](http://svn.apache.org/repos/asf/spamassassin/tags/sa-update_3.3.0_20070730085137/lib/Mail/SpamAssassin/Plugin/URIEval.pm)
- check_for_http_redirector
- check_https_ip_mismatch
- check_uri_truncated
Most of these can probably be already done with the URIDetail plugin
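As a rough sketch of one of these checks (the regex and function name here are illustrative guesses, not SpamPAD's or SpamAssassin's actual code), a redirector check could flag URIs that embed a second absolute URL in their path or query string:

```python
import re

# A URI whose path embeds a second absolute URL is a classic open
# redirector, e.g. http://redirector.example/http://target.example/.
# Pattern and name are illustrative, not the real plugin's code.
HTTP_REDIRECTOR_RE = re.compile(
    r"^https?://[^/\s]+/\S*?(https?:/{0,2}\S+)", re.IGNORECASE
)

def check_for_http_redirector(uris):
    """Return True if any URI embeds a second http(s) URL."""
    return any(HTTP_REDIRECTOR_RE.match(uri) for uri in uris)
```

This would also match the common `?url=http://...` redirect style, since the lazy `\S*?` lets the embedded URL appear anywhere after the first path slash.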
|
1.0
|
Add replacement for Plugin::URIEval - - [Plugin::URIEval](http://svn.apache.org/repos/asf/spamassassin/tags/sa-update_3.3.0_20070730085137/lib/Mail/SpamAssassin/Plugin/URIEval.pm)
- check_for_http_redirector
- check_https_ip_mismatch
- check_uri_truncated
Most of these can probably be already done with the URIDetail plugin
|
test
|
add replacement for plugin urieval check for http redirector check https ip mismatch check uri truncated most of these can probably be already done with the uridetail plugin
| 1
|
7,316
| 6,826,670,745
|
IssuesEvent
|
2017-11-08 14:50:57
|
drud/ddev
|
https://api.github.com/repos/drud/ddev
|
closed
|
[Meta] Handle or resolve Windows Compatibility Issues like hosts file/privilege escalation
|
incubate needs decision security
|
This is a follow-up to https://github.com/drud/ddev/issues/196#issuecomment-302441130, specifically with respect to the following:
> Windows Compatibility: Addressing areas where linux/macOS assumptions fail (/etc/hosts, linux commands, .exe naming convention requirements on Windows, etc.).
* Currently we check for the existence of the sudo command, and if it is found, use it to escalate privileges to add an entry to the /etc/hosts file. This is not a valid technique on Windows native, as sudo is not available and privilege escalation is done in other ways.
* The *purpose* for the privilege escalation is to edit the hosts file, which is at /Windows/System32/drivers/etc/hosts instead of /etc/hosts. It's our belief that the hosts management library we're using can actually handle this on windows if we have a good privilege escalation technique.
## Related source links or issues:
* Parent issue: https://github.com/drud/ddev/issues/196#issuecomment-302441130
* Discussion of providing a wildcard DNS entry to serve this function: https://github.com/drud/ddev/issues/175
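To illustrate only the path difference described above (ddev itself is written in Go; this Python sketch is not its implementation), resolving the OS-appropriate hosts file location might look like:

```python
import os
import platform

# Resolve the hosts file location: /etc/hosts on Linux/macOS, but
# %SystemRoot%\System32\drivers\etc\hosts on Windows. Illustrative
# sketch only -- ddev's real implementation is Go, not Python.
def hosts_file_path():
    if platform.system() == "Windows":
        system_root = os.environ.get("SystemRoot", r"C:\Windows")
        return os.path.join(system_root, "System32", "drivers", "etc", "hosts")
    return "/etc/hosts"
```

Locating the file is the easy half; the privilege-escalation half differs too, since writing it needs sudo on Unix but an elevated (UAC) process on Windows, which is exactly the gap this issue describes.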
|
True
|
[Meta] Handle or resolve Windows Compatibility Issues like hosts file/privilege escalation - This is a follow-up to https://github.com/drud/ddev/issues/196#issuecomment-302441130, specifically with respect to the following:
> Windows Compatibility: Addressing areas where linux/macOS assumptions fail (/etc/hosts, linux commands, .exe naming convention requirements on Windows, etc.).
* Currently we check for the existence of the sudo command, and if it is found, use it to escalate privileges to add an entry to the /etc/hosts file. This is not a valid technique on Windows native, as sudo is not available and privilege escalation is done in other ways.
* The *purpose* for the privilege escalation is to edit the hosts file, which is at /Windows/System32/drivers/etc/hosts instead of /etc/hosts. It's our belief that the hosts management library we're using can actually handle this on windows if we have a good privilege escalation technique.
## Related source links or issues:
* Parent issue: https://github.com/drud/ddev/issues/196#issuecomment-302441130
* Discussion of providing a wildcard DNS entry to serve this function: https://github.com/drud/ddev/issues/175
|
non_test
|
handle or resolve windows compatibility issues like hosts file privilege escalation this is a follow up to specifically with respect to the following windows compatibility addressing areas where linux macos assumptions fail etc hosts linux commands exe naming convention requirements on windows etc currently we check for the existence of the sudo command and if it is found use it to escalate privileges to add an entry to the etc hosts file this is not a valid technique on windows native as sudo is not available and privilege escalation is done in other ways the purpose for the privilege escalation is to edit the hosts file which is at windows drivers etc hosts instead of etc hosts it s our belief that the hosts management library we re using can actually handle this on windows if we have a good privilege escalation technique related source links or issues parent issue discussion of providing a wildcard dns entry to serve this function
| 0
|
89,783
| 8,213,859,586
|
IssuesEvent
|
2018-09-04 20:56:03
|
aspnet/SignalR
|
https://api.github.com/repos/aspnet/SignalR
|
closed
|
Test failure: CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser
|
Branch:2.1 PRI: 0 - Critical test-failure
|
This test [fails](http://aspnetci/viewLog.html?buildId=542609&buildTypeId=Releases_21Public_UbuntuUniverse) occasionally with the following error:
```
Microsoft.AspNetCore.SignalR.HubException : An unexpected error occurred invoking 'EchoUser' on the server. RedisConnectionException: No connection is available to service this operation: PUBLISH Microsoft.AspNetCore.SignalR.Redis.Tests.EchoHub:user:userA; SocketFailure on 127.0.0.1:6379/Subscription, origin: Error, input-buffer: 0, outstanding: 0, last-read: 0s ago, last-write: 0s ago, unanswered-write: 1994017s ago, keep-alive: 60s, pending: 0, state: Connecting, last-heartbeat: never, last-mbeat: -1s ago, global: 0s ago
at Microsoft.AspNetCore.SignalR.Client.HubConnection.InvokeCoreAsyncCore(String methodName, Type returnType, Object[] args, CancellationToken cancellationToken) in /_/src/Microsoft.AspNetCore.SignalR.Client.Core/HubConnection.cs:line 381
at Microsoft.AspNetCore.SignalR.Client.HubConnection.InvokeCoreAsync(String methodName, Type returnType, Object[] args, CancellationToken cancellationToken) in /_/src/Microsoft.AspNetCore.SignalR.Client.Core/HubConnection.cs:line 193
at System.Threading.Tasks.TaskExtensions.OrTimeout(Task task, TimeSpan timeout, String memberName, String filePath, Nullable`1 lineNumber) in /_/test/Microsoft.AspNetCore.SignalR.Tests.Utils/TaskExtensions.cs:line 35
at Microsoft.AspNetCore.SignalR.Redis.Tests.RedisEndToEndTests.CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser(HttpTransportType transportType, String protocolName) in /_/test/Microsoft.AspNetCore.SignalR.Redis.Tests/RedisEndToEnd.cs:line 115
--- End of stack trace from previous location where exception was thrown ---
------- Stdout: -------
| [2018-08-30T20:16:56] TestLifetime Information: Starting test CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser-WebSockets-json
| [2018-08-30T20:16:56] TestLifetime Information: Starting test CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser_WebSockets_json
| [2018-08-30T20:16:56] Microsoft.AspNetCore.Http.Connections.Client.Internal.WebSocketsTransport Information: Starting transport. Transfer mode: Text. Url: 'ws://127.0.0.1:33597/echo?id=OduIO4jO2JHklOZWOuP1kQ'.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.Http.Connections.Client.HttpConnection Information: HttpConnection Started.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.SignalR.Client.HubConnection Information: Using HubProtocol 'json v1'.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.SignalR.Client.HubConnection Information: HubConnection started.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.Http.Connections.Client.Internal.WebSocketsTransport Information: Starting transport. Transfer mode: Text. Url: 'ws://127.0.0.1:36607/echo?id=tr1aizv-iIj-6im05oX9YQ'.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.Http.Connections.Client.HttpConnection Information: HttpConnection Started.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.SignalR.Client.HubConnection Information: Using HubProtocol 'json v1'.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.SignalR.Client.HubConnection Information: HubConnection started.
| [2018-08-30T20:16:56] TestLifetime Information: Finished test CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser_WebSockets_json in 0.0154677s
```
Other tests within that build may have failed with a similar message, but they are not listed here. Check the link above for more info.
This test failed on 2.1.
CC @muratg
This issue was made automatically. If there is a problem contact ryanbrandenburg.
|
1.0
|
Test failure: CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser - This test [fails](http://aspnetci/viewLog.html?buildId=542609&buildTypeId=Releases_21Public_UbuntuUniverse) occasionally with the following error:
```
Microsoft.AspNetCore.SignalR.HubException : An unexpected error occurred invoking 'EchoUser' on the server. RedisConnectionException: No connection is available to service this operation: PUBLISH Microsoft.AspNetCore.SignalR.Redis.Tests.EchoHub:user:userA; SocketFailure on 127.0.0.1:6379/Subscription, origin: Error, input-buffer: 0, outstanding: 0, last-read: 0s ago, last-write: 0s ago, unanswered-write: 1994017s ago, keep-alive: 60s, pending: 0, state: Connecting, last-heartbeat: never, last-mbeat: -1s ago, global: 0s ago
at Microsoft.AspNetCore.SignalR.Client.HubConnection.InvokeCoreAsyncCore(String methodName, Type returnType, Object[] args, CancellationToken cancellationToken) in /_/src/Microsoft.AspNetCore.SignalR.Client.Core/HubConnection.cs:line 381
at Microsoft.AspNetCore.SignalR.Client.HubConnection.InvokeCoreAsync(String methodName, Type returnType, Object[] args, CancellationToken cancellationToken) in /_/src/Microsoft.AspNetCore.SignalR.Client.Core/HubConnection.cs:line 193
at System.Threading.Tasks.TaskExtensions.OrTimeout(Task task, TimeSpan timeout, String memberName, String filePath, Nullable`1 lineNumber) in /_/test/Microsoft.AspNetCore.SignalR.Tests.Utils/TaskExtensions.cs:line 35
at Microsoft.AspNetCore.SignalR.Redis.Tests.RedisEndToEndTests.CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser(HttpTransportType transportType, String protocolName) in /_/test/Microsoft.AspNetCore.SignalR.Redis.Tests/RedisEndToEnd.cs:line 115
--- End of stack trace from previous location where exception was thrown ---
------- Stdout: -------
| [2018-08-30T20:16:56] TestLifetime Information: Starting test CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser-WebSockets-json
| [2018-08-30T20:16:56] TestLifetime Information: Starting test CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser_WebSockets_json
| [2018-08-30T20:16:56] Microsoft.AspNetCore.Http.Connections.Client.Internal.WebSocketsTransport Information: Starting transport. Transfer mode: Text. Url: 'ws://127.0.0.1:33597/echo?id=OduIO4jO2JHklOZWOuP1kQ'.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.Http.Connections.Client.HttpConnection Information: HttpConnection Started.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.SignalR.Client.HubConnection Information: Using HubProtocol 'json v1'.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.SignalR.Client.HubConnection Information: HubConnection started.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.Http.Connections.Client.Internal.WebSocketsTransport Information: Starting transport. Transfer mode: Text. Url: 'ws://127.0.0.1:36607/echo?id=tr1aizv-iIj-6im05oX9YQ'.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.Http.Connections.Client.HttpConnection Information: HttpConnection Started.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.SignalR.Client.HubConnection Information: Using HubProtocol 'json v1'.
| [2018-08-30T20:16:56] Microsoft.AspNetCore.SignalR.Client.HubConnection Information: HubConnection started.
| [2018-08-30T20:16:56] TestLifetime Information: Finished test CanSendAndReceiveUserMessagesFromMultipleConnectionsWithSameUser_WebSockets_json in 0.0154677s
```
Other tests within that build may have failed with a similar message, but they are not listed here. Check the link above for more info.
This test failed on 2.1.
CC @muratg
This issue was made automatically. If there is a problem contact ryanbrandenburg.
|
test
|
test failure cansendandreceiveusermessagesfrommultipleconnectionswithsameuser this test occasionally with the following error microsoft aspnetcore signalr hubexception an unexpected error occurred invoking echouser on the server redisconnectionexception no connection is available to service this operation publish microsoft aspnetcore signalr redis tests echohub user usera socketfailure on subscription origin error input buffer outstanding last read ago last write ago unanswered write ago keep alive pending state connecting last heartbeat never last mbeat ago global ago at microsoft aspnetcore signalr client hubconnection invokecoreasynccore string methodname type returntype object args cancellationtoken cancellationtoken in src microsoft aspnetcore signalr client core hubconnection cs line at microsoft aspnetcore signalr client hubconnection invokecoreasync string methodname type returntype object args cancellationtoken cancellationtoken in src microsoft aspnetcore signalr client core hubconnection cs line at system threading tasks taskextensions ortimeout task task timespan timeout string membername string filepath nullable linenumber in test microsoft aspnetcore signalr tests utils taskextensions cs line at microsoft aspnetcore signalr redis tests redisendtoendtests cansendandreceiveusermessagesfrommultipleconnectionswithsameuser httptransporttype transporttype string protocolname in test microsoft aspnetcore signalr redis tests redisendtoend cs line end of stack trace from previous location where exception was thrown stdout testlifetime information starting test cansendandreceiveusermessagesfrommultipleconnectionswithsameuser websockets json testlifetime information starting test cansendandreceiveusermessagesfrommultipleconnectionswithsameuser websockets json microsoft aspnetcore http connections client internal websocketstransport information starting transport transfer mode text url ws echo id microsoft aspnetcore http connections client httpconnection information httpconnection started microsoft aspnetcore signalr client hubconnection information using hubprotocol json microsoft aspnetcore signalr client hubconnection information hubconnection started microsoft aspnetcore http connections client internal websocketstransport information starting transport transfer mode text url ws echo id iij microsoft aspnetcore http connections client httpconnection information httpconnection started microsoft aspnetcore signalr client hubconnection information using hubprotocol json microsoft aspnetcore signalr client hubconnection information hubconnection started testlifetime information finished test cansendandreceiveusermessagesfrommultipleconnectionswithsameuser websockets json in other tests within that build may have failed with a similar message but they are not listed here check the link above for more info this test failed on cc muratg this issue was made automatically if there is a problem contact ryanbrandenburg
| 1
|
135,723
| 12,690,482,965
|
IssuesEvent
|
2020-06-21 12:23:40
|
jery33/index-replication
|
https://api.github.com/repos/jery33/index-replication
|
closed
|
Update README
|
documentation
|
Write longer project description in README, include:
- Basic results of the projects
- Methods used within the project
- Current and future functionalities of the application
|
1.0
|
Update README - Write longer project description in README, include:
- Basic results of the projects
- Methods used within the project
- Current and future functionalities of the application
|
non_test
|
update readme write longer project description in readme include basic results of the projects methods used within the project current and future functionalities of the application
| 0
|
46,003
| 5,774,638,656
|
IssuesEvent
|
2017-04-28 07:50:21
|
kbyyd24/lost-and-found-user
|
https://api.github.com/repos/kbyyd24/lost-and-found-user
|
closed
|
use SpringRunner instead of MockitoJUnitRunner in test cases
|
refactor test
|
In test cases for services, using `@RunWith(MockitoJUnitRunner.class)` now.
Need to refactor it to use `@RunWith(SpringRunner)`, and change other relative annotations.
|
1.0
|
use SpringRunner instead of MockitoJUnitRunner in test cases - In test cases for services, using `@RunWith(MockitoJUnitRunner.class)` now.
Need to refactor it to use `@RunWith(SpringRunner)`, and change other relative annotations.
|
test
|
use springrunner instead of mockitojunitrunner in test cases in test cases for services using runwith mockitojunitrunner class now need to refactor it to use runwith springrunner and change other relative annotations
| 1
|
61,915
| 15,106,415,918
|
IssuesEvent
|
2021-02-08 14:17:00
|
rvesse/airline
|
https://api.github.com/repos/rvesse/airline
|
opened
|
Add `module-info.java` to all modules
|
build dependencies enhancement user-experience
|
In working on some areas of new functionality targeted for 2.9 and which require new modules to be provided to isolate the new features from users who don't want/need them (and in one case the associated dependencies) I realised that currently only a couple of our modules actually have `module-info.java` present. These are `airline` and `airline-io` which represent the core of Airline but doesn't cover the extended help system nor other new features that are under development.
Missing `module-info.java` files:
- [ ] `airline-help-bash`
- [ ] `airline-help-external` (new for 2.9)
- [ ] `airline-help-html`
- [ ] `airline-help-man`
- [ ] `airline-help-markdown`
- [ ] `airline-maven-plugin` (Not sure if this is even feasible but putting on the list for completeness)
- [ ] `airline-prompts` (new for 2.9)
@jfallows previously did some work in #92 when JPMS support (and `module-info.java` was originally added) to add the initial files and suggested a possible methodology for automatically deriving these files (https://github.com/rvesse/airline/pull/92#issuecomment-484786320)
I have been trying to get that working locally but have been running into issues with cryptic `jdeps` error messages about multi-release modules that I am unsure how to proceed past.
Another option would be just to hand author each of these since each module is relatively small.
One potential pitfall we might hit regardless of approach is that there may be some overlapping packages between these modules and the core modules which would have to be resolved. So far I've spotted the following from a quick inspection:
- `airline-help-bash` uses package `com.github.rvesse.airline.annotations.help` which is already exported by the core `airline` module
- [ ] Any overlapping packages resolved (this will be a *Breaking Change* for existing users)
Also since I'm not personally using JPMS anywhere currently some suggestions/pointers on how to set up a test project that would validate that all the modules are correctly set up would also be useful.
|
1.0
|
Add `module-info.java` to all modules - In working on some areas of new functionality targeted for 2.9 and which require new modules to be provided to isolate the new features from users who don't want/need them (and in one case the associated dependencies) I realised that currently only a couple of our modules actually have `module-info.java` present. These are `airline` and `airline-io` which represent the core of Airline but doesn't cover the extended help system nor other new features that are under development.
Missing `module-info.java` files:
- [ ] `airline-help-bash`
- [ ] `airline-help-external` (new for 2.9)
- [ ] `airline-help-html`
- [ ] `airline-help-man`
- [ ] `airline-help-markdown`
- [ ] `airline-maven-plugin` (Not sure if this is even feasible but putting on the list for completeness)
- [ ] `airline-prompts` (new for 2.9)
@jfallows previously did some work in #92 when JPMS support (and `module-info.java` was originally added) to add the initial files and suggested a possible methodology for automatically deriving these files (https://github.com/rvesse/airline/pull/92#issuecomment-484786320)
I have been trying to get that working locally but have been running into issues with cryptic `jdeps` error messages about multi-release modules that I am unsure how to proceed past.
Another option would be just to hand author each of these since each module is relatively small.
One potential pitfall we might hit regardless of approach is that there may be some overlapping packages between these modules and the core modules which would have to be resolved. So far I've spotted the following from a quick inspection:
- `airline-help-bash` uses package `com.github.rvesse.airline.annotations.help` which is already exported by the core `airline` module
- [ ] Any overlapping packages resolved (this will be a *Breaking Change* for existing users)
Also since I'm not personally using JPMS anywhere currently some suggestions/pointers on how to set up a test project that would validate that all the modules are correctly set up would also be useful.
|
non_test
|
add module info java to all modules in working on some areas of new functionality targeted for and which require new modules to be provided to isolate the new features from users who don t want need them and in one case the associated dependencies i realised that currently only a couple of our modules actually have module info java present these are airline and airline io which represent the core of airline but doesn t cover the extended help system nor other new features that are under development missing module info java files airline help bash airline help external new for airline help html airline help man airline help markdown airline maven plugin not sure if this is even feasible but putting on the list for completeness airline prompts new for jfallows previously did some work in when jpms support and module info java was originally added to add the initial files and suggested a possible methodology for automatically deriving these files i have been trying to get that working locally but have been running into issues with cryptic jdeps error messages about multi release modules that i am unsure how to proceed past another option would be just to hand author each of these since each module is relatively small one potential pitfall we might hit regardless of approach is that there may be some overlapping packages between these modules and the core modules which would have to be resolved so far i ve spotted the following from a quick inspection airline help bash uses package com github rvesse airline annotations help which is already exported by the core airline module any overlapping packages resolved this will be a breaking change for existing users also since i m not personally using jpms anywhere currently some suggestions pointers on how to set up a test project that would validate that all the modules are correctly set up would also be useful
| 0
|
247,661
| 20,987,503,556
|
IssuesEvent
|
2022-03-29 05:49:48
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: jepsen/bank-multitable/split failed
|
C-test-failure O-robot O-roachtest branch-master release-blocker
|
roachtest.jepsen/bank-multitable/split [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=artifacts#/jepsen/bank-multitable/split) on master @ [29716850b181718594663889ddb5f479fef7a305](https://github.com/cockroachdb/cockroach/commits/29716850b181718594663889ddb5f479fef7a305):
```
(1) attached stack trace
-- stack trace:
| main.(*clusterImpl).RunE
| main/pkg/cmd/roachtest/cluster.go:1987
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:172
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func3
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:210
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (2) output in run_054806.933618735_n6_bash
Wraps: (3) bash -e -c "\
| cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
| ~/lein run test \
| --tarball file://${PWD}/cockroach.tgz \
| --username ${USER} \
| --ssh-private-key ~/.ssh/id_rsa \
| --os ubuntu \
| --time-limit 300 \
| --concurrency 30 \
| --recovery-time 25 \
| --test-count 1 \
| -n 10.142.0.49 -n 10.142.0.153 -n 10.142.0.40 -n 10.142.0.160 -n 10.142.0.129 \
| --test bank-multitable --nemesis split \
| > invoke.log 2>&1 \
| " returned
| stderr:
|
| stdout:
Wraps: (4) SSH_PROBLEM
Wraps: (5) Node 6. Command with error:
| ``````
| bash -e -c "\
| cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
| ~/lein run test \
| --tarball file://${PWD}/cockroach.tgz \
| --username ${USER} \
| --ssh-private-key ~/.ssh/id_rsa \
| --os ubuntu \
| --time-limit 300 \
| --concurrency 30 \
| --recovery-time 25 \
| --test-count 1 \
| -n 10.142.0.49 -n 10.142.0.153 -n 10.142.0.40 -n 10.142.0.160 -n 10.142.0.129 \
| --test bank-multitable --nemesis split \
| > invoke.log 2>&1 \
| "
| ``````
Wraps: (6) exit status 255
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.SSH (5) *hintdetail.withDetail (6) *exec.ExitError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*jepsen/bank-multitable/split.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: jepsen/bank-multitable/split failed - roachtest.jepsen/bank-multitable/split [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=4713654&tab=artifacts#/jepsen/bank-multitable/split) on master @ [29716850b181718594663889ddb5f479fef7a305](https://github.com/cockroachdb/cockroach/commits/29716850b181718594663889ddb5f479fef7a305):
```
(1) attached stack trace
-- stack trace:
| main.(*clusterImpl).RunE
| main/pkg/cmd/roachtest/cluster.go:1987
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func1
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:172
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.runJepsen.func3
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/jepsen.go:210
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (2) output in run_054806.933618735_n6_bash
Wraps: (3) bash -e -c "\
| cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
| ~/lein run test \
| --tarball file://${PWD}/cockroach.tgz \
| --username ${USER} \
| --ssh-private-key ~/.ssh/id_rsa \
| --os ubuntu \
| --time-limit 300 \
| --concurrency 30 \
| --recovery-time 25 \
| --test-count 1 \
| -n 10.142.0.49 -n 10.142.0.153 -n 10.142.0.40 -n 10.142.0.160 -n 10.142.0.129 \
| --test bank-multitable --nemesis split \
| > invoke.log 2>&1 \
| " returned
| stderr:
|
| stdout:
Wraps: (4) SSH_PROBLEM
Wraps: (5) Node 6. Command with error:
| ``````
| bash -e -c "\
| cd /mnt/data1/jepsen/cockroachdb && set -eo pipefail && \
| ~/lein run test \
| --tarball file://${PWD}/cockroach.tgz \
| --username ${USER} \
| --ssh-private-key ~/.ssh/id_rsa \
| --os ubuntu \
| --time-limit 300 \
| --concurrency 30 \
| --recovery-time 25 \
| --test-count 1 \
| -n 10.142.0.49 -n 10.142.0.153 -n 10.142.0.40 -n 10.142.0.160 -n 10.142.0.129 \
| --test bank-multitable --nemesis split \
| > invoke.log 2>&1 \
| "
| ``````
Wraps: (6) exit status 255
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.SSH (5) *hintdetail.withDetail (6) *exec.ExitError
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*jepsen/bank-multitable/split.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
test
|
roachtest jepsen bank multitable split failed roachtest jepsen bank multitable split with on master attached stack trace stack trace main clusterimpl rune main pkg cmd roachtest cluster go github com cockroachdb cockroach pkg cmd roachtest tests runjepsen github com cockroachdb cockroach pkg cmd roachtest tests jepsen go github com cockroachdb cockroach pkg cmd roachtest tests runjepsen github com cockroachdb cockroach pkg cmd roachtest tests jepsen go runtime goexit goroot src runtime asm s wraps output in run bash wraps bash e c cd mnt jepsen cockroachdb set eo pipefail lein run test tarball file pwd cockroach tgz username user ssh private key ssh id rsa os ubuntu time limit concurrency recovery time test count n n n n n test bank multitable nemesis split invoke log returned stderr stdout wraps ssh problem wraps node command with error bash e c cd mnt jepsen cockroachdb set eo pipefail lein run test tarball file pwd cockroach tgz username user ssh private key ssh id rsa os ubuntu time limit concurrency recovery time test count n n n n n test bank multitable nemesis split invoke log wraps exit status error types withstack withstack errutil withprefix cluster withcommanddetails errors ssh hintdetail withdetail exec exiterror help see see cc cockroachdb kv triage
| 1
|
126,520
| 10,426,054,341
|
IssuesEvent
|
2019-09-16 16:41:24
|
elastic/apm-agent-dotnet
|
https://api.github.com/repos/elastic/apm-agent-dotnet
|
opened
|
Tests fail because of StackOverflowException in logger code
|
bug developer only flaky test test
|
This issue doesn't seem to affect production use cases - it affects only use cases in tests.
Because of #492 any use of logger throws `InvalidOperationException` but logger's code tries to log this exception using logger:
https://github.com/elastic/apm-agent-dotnet/blob/68ec732ce08af467f1fb3e823e9559cda18b3bf3/src/Elastic.Apm/Logging/IApmLoggingExtensions.cs#L27-L30
which throws `InvalidOperationException` again eventually causing `StackOverflowException`
|
2.0
|
Tests fail because of StackOverflowException in logger code - This issue doesn't seem to affect production use cases - it affects only use cases in tests.
Because of #492 any use of logger throws `InvalidOperationException` but logger's code tries to log this exception using logger:
https://github.com/elastic/apm-agent-dotnet/blob/68ec732ce08af467f1fb3e823e9559cda18b3bf3/src/Elastic.Apm/Logging/IApmLoggingExtensions.cs#L27-L30
which throws `InvalidOperationException` again eventually causing `StackOverflowException`
|
test
|
tests fail because of stackoverflowexception in logger code this issue doesn t seem to affect production use cases it affects only use cases in tests because of any use of logger throws invalidoperationexception but logger s code tries to log this exception using logger which throws invalidoperationexception again eventually causing stackoverflowexception
| 1
|
15,593
| 10,150,848,732
|
IssuesEvent
|
2019-08-05 18:45:42
|
pulumi/pulumi
|
https://api.github.com/repos/pulumi/pulumi
|
closed
|
Only prompt for secret passphrase if I'm using secrets
|
area/cli feature/q3 impact/usability kind/enhancement
|
If I opt into using the `passphrase` backend, I'm prompted to enter my passphrase anytime I run `pulumi up`, etc, even when I don't have any secrets. This was surprising to me. It seems we should only prompt if I'm actually storing secrets that'll need to be decrypted.
|
True
|
Only prompt for secret passphrase if I'm using secrets - If I opt into using the `passphrase` backend, I'm prompted to enter my passphrase anytime I run `pulumi up`, etc, even when I don't have any secrets. This was surprising to me. It seems we should only prompt if I'm actually storing secrets that'll need to be decrypted.
|
non_test
|
only prompt for secret passphrase if i m using secrets if i opt into using the passphrase backend i m prompted to enter my passphrase anytime i run pulumi up etc even when i don t have any secrets this was surprising to me it seems we should only prompt if i m actually storing secrets that ll need to be decrypted
| 0
|
307,111
| 26,518,436,132
|
IssuesEvent
|
2023-01-18 23:11:05
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
closed
|
DISABLED test_sample_input_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches.<locals>.DummyTestClass)
|
module: flaky-tests skipped module: unknown
|
Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_sample_input_dynamic_shapes) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10713308472).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_sample_input_dynamic_shapes`
Error retrieving /opt/conda/lib/python3.10/site-packages/torch/_dynamo/testing.py: Error: Statuscode 301
|
1.0
|
DISABLED test_sample_input_dynamic_shapes (torch._dynamo.testing.make_test_cls_with_patches.<locals>.DummyTestClass) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/failure/test_sample_input_dynamic_shapes) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/10713308472).
Over the past 72 hours, it has flakily failed in 2 workflow(s).
**Debugging instructions (after clicking on the recent samples link):**
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Grep for `test_sample_input_dynamic_shapes`
Error retrieving /opt/conda/lib/python3.10/site-packages/torch/_dynamo/testing.py: Error: Statuscode 301
|
test
|
disabled test sample input dynamic shapes torch dynamo testing make test cls with patches dummytestclass platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has flakily failed in workflow s debugging instructions after clicking on the recent samples link to find relevant log snippets click on the workflow logs linked above grep for test sample input dynamic shapes error retrieving opt conda lib site packages torch dynamo testing py error statuscode
| 1
|
180,503
| 13,934,543,663
|
IssuesEvent
|
2020-10-22 10:10:40
|
ibm-openbmc/dev
|
https://api.github.com/repos/ibm-openbmc/dev
|
closed
|
pHAL: Host Dumps phase-1
|
Epic Test on SIM-HW pHAL_Staging prio_medium
|
This epic is used to track , SBE/Hostboot/Power Management/HW dump related work required in BMC.
|
1.0
|
pHAL: Host Dumps phase-1 - This epic is used to track , SBE/Hostboot/Power Management/HW dump related work required in BMC.
|
test
|
phal host dumps phase this epic is used to track sbe hostboot power management hw dump related work required in bmc
| 1
|
142,856
| 11,497,161,755
|
IssuesEvent
|
2020-02-12 09:31:36
|
microsoft/AzureStorageExplorer
|
https://api.github.com/repos/microsoft/AzureStorageExplorer
|
opened
|
There are strings not localized in Settings Sign-in part
|
🌐 localization 🧪 testing
|
**Storage Explorer Version:** 1.12.0
**Build**: [20200212.1](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3464483&view=results)
**Branch**: master
**Language**: Czech/ German/ Spanish/ French/ Hungarian/ Italian/ Japanese/ Korean/ Dutch/ Polish/ Portuguese (Brazil)/ Portuguese (Portugal)/ Russian/ Swedish/ Turkish/ Chinese (Simplified) / Chinese (Traditional)
**Platform/OS**: Windows 10/ Linux Ubuntu 18.04
**Architecture**: ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select '**svenska**' -> Restart Storage Explorer.
3. Open 'Settings' -> Localized 'Application' -> Localized 'Sign-in'.
**Expect Experience:**
The strings that need localized are all localized.
**Actual Experience:**
There are strings not localized in Settings Sign-in part.

|
1.0
|
There are strings not localized in Settings Sign-in part - **Storage Explorer Version:** 1.12.0
**Build**: [20200212.1](https://devdiv.visualstudio.com/DevDiv/_build/results?buildId=3464483&view=results)
**Branch**: master
**Language**: Czech/ German/ Spanish/ French/ Hungarian/ Italian/ Japanese/ Korean/ Dutch/ Polish/ Portuguese (Brazil)/ Portuguese (Portugal)/ Russian/ Swedish/ Turkish/ Chinese (Simplified) / Chinese (Traditional)
**Platform/OS**: Windows 10/ Linux Ubuntu 18.04
**Architecture**: ia32/x64
**Regression From:** Not a regression
**Steps to reproduce:**
1. Launch Storage Explorer.
2. Open 'Settings' -> Application (Regional Settings) -> Select '**svenska**' -> Restart Storage Explorer.
3. Open 'Settings' -> Localized 'Application' -> Localized 'Sign-in'.
**Expect Experience:**
The strings that need localized are all localized.
**Actual Experience:**
There are strings not localized in Settings Sign-in part.

|
test
|
there are strings not localized in settings sign in part storage explorer version build branch master language czech german spanish french hungarian italian japanese korean dutch polish portuguese brazil portuguese portugal russian swedish turkish chinese simplified chinese traditional platform os windows linux ubuntu architecture regression from not a regression steps to reproduce launch storage explorer open settings application regional settings select svenska restart storage explorer open settings localized application localized sign in expect experience the strings that need localized are all localized actual experience there are strings not localized in settings sign in part
| 1
|
281,478
| 24,397,041,623
|
IssuesEvent
|
2022-10-04 20:17:55
|
CDLUC3/dmp-hub-sam
|
https://api.github.com/repos/CDLUC3/dmp-hub-sam
|
closed
|
Update integration test to ensure it does not mint DMP IDs
|
testing ezid
|
Update the integration tests so that (if run in prod) do not mint new DMP IDs
|
1.0
|
Update integration test to ensure it does not mint DMP IDs - Update the integration tests so that (if run in prod) do not mint new DMP IDs
|
test
|
update integration test to ensure it does not mint dmp ids update the integration tests so that if run in prod do not mint new dmp ids
| 1
|
199,982
| 15,085,287,848
|
IssuesEvent
|
2021-02-05 18:26:02
|
rstudio/rstudio
|
https://api.github.com/repos/rstudio/rstudio
|
closed
|
RStudio's diagnostic engine incorrectly flags whitespace as unnecessary
|
bug diagnostics test
|
https://community.rstudio.com/t/the-tidyverse-style-guide-inconsistent-with-rstudio-style-diagnostics/94060
---
### System details
RStudio Edition : Desktop [Open Source]
RStudio Version : 1.4.1522
OS Version : macOS Big Sur 10.16
R Version : R version 4.0.3 (2020-10-10)
The following R code is incorrectly diagnosed with unnecessary whitespace:
```
foo <- { 1 + 1 }
```
<img width="222" alt="Screen Shot 2021-01-27 at 9 41 58 AM" src="https://user-images.githubusercontent.com/1976582/106031617-54d31000-6084-11eb-8c78-abae0adede8c.png">
|
1.0
|
RStudio's diagnostic engine incorrectly flags whitespace as unnecessary - https://community.rstudio.com/t/the-tidyverse-style-guide-inconsistent-with-rstudio-style-diagnostics/94060
---
### System details
RStudio Edition : Desktop [Open Source]
RStudio Version : 1.4.1522
OS Version : macOS Big Sur 10.16
R Version : R version 4.0.3 (2020-10-10)
The following R code is incorrectly diagnosed with unnecessary whitespace:
```
foo <- { 1 + 1 }
```
<img width="222" alt="Screen Shot 2021-01-27 at 9 41 58 AM" src="https://user-images.githubusercontent.com/1976582/106031617-54d31000-6084-11eb-8c78-abae0adede8c.png">
|
test
|
rstudio s diagnostic engine incorrectly flags whitespace as unnecessary system details rstudio edition desktop rstudio version os version macos big sur r version r version the following r code is incorrectly diagnosed with unnecessary whitespace foo img width alt screen shot at am src
| 1
|
160,394
| 20,099,788,071
|
IssuesEvent
|
2022-02-07 01:35:33
|
jbarrus/diagram-js
|
https://api.github.com/repos/jbarrus/diagram-js
|
closed
|
CVE-2019-20920 (High) detected in handlebars-4.1.2.tgz - autoclosed
|
security vulnerability
|
## CVE-2019-20920 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-1.1.2.tgz (Root Library)
- istanbul-0.4.5.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS).
<p>Publish Date: 2020-09-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920>CVE-2019-20920</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p>
<p>Release Date: 2020-10-15</p>
<p>Fix Resolution (handlebars): 4.5.3</p>
<p>Direct dependency fix Resolution (karma-coverage): 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-20920 (High) detected in handlebars-4.1.2.tgz - autoclosed - ## CVE-2019-20920 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.1.2.tgz</b></p></summary>
<p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p>
<p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.1.2.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/handlebars/package.json</p>
<p>
Dependency Hierarchy:
- karma-coverage-1.1.2.tgz (Root Library)
- istanbul-0.4.5.tgz
- :x: **handlebars-4.1.2.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS).
<p>Publish Date: 2020-09-30
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20920>CVE-2019-20920</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1324">https://www.npmjs.com/advisories/1324</a></p>
<p>Release Date: 2020-10-15</p>
<p>Fix Resolution (handlebars): 4.5.3</p>
<p>Direct dependency fix Resolution (karma-coverage): 2.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in handlebars tgz autoclosed cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file package json path to vulnerable library node modules handlebars package json dependency hierarchy karma coverage tgz root library istanbul tgz x handlebars tgz vulnerable library vulnerability details handlebars before and x before is vulnerable to arbitrary code execution the lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript this can be used to run arbitrary code on a server processing handlebars templates or in a victim s browser effectively serving as xss publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars direct dependency fix resolution karma coverage step up your open source security game with whitesource
| 0
|
199,610
| 15,051,185,737
|
IssuesEvent
|
2021-02-03 13:49:21
|
AdoptOpenJDK/openjdk-infrastructure
|
https://api.github.com/repos/AdoptOpenJDK/openjdk-infrastructure
|
closed
|
build-osuosl-aix71-ppc64-1: tests which specify -Xgc:metronome fail (openj9 option)
|
testFail
|
Tests which use the openj9 command line option `-Xgc:metronome` fail when run on `build-osuosl-aix71-ppc64-1` but pass when run on `test-ibm-aix71-ppc64-2`.
The symptoms are the same as those described in https://github.com/eclipse/openj9/issues/10579, where the underlying reason was said to be "AIX machine is not configured for High Resolution Clock to support Metronome".
|
1.0
|
build-osuosl-aix71-ppc64-1: tests which specify -Xgc:metronome fail (openj9 option) - Tests which use the openj9 command line option `-Xgc:metronome` fail when run on `build-osuosl-aix71-ppc64-1` but pass when run on `test-ibm-aix71-ppc64-2`.
The symptoms are the same as those described in https://github.com/eclipse/openj9/issues/10579, where the underlying reason was said to be "AIX machine is not configured for High Resolution Clock to support Metronome".
|
test
|
build osuosl tests which specify xgc metronome fail option tests which use the command line option xgc metronome fail when run on build osuosl but pass when run on test ibm the symptoms are the same as those described in where the underlying reason was said to be aix machine is not configured for high resolution clock to support metronome
| 1
|
21,999
| 3,768,991,107
|
IssuesEvent
|
2016-03-16 08:44:06
|
Qabel/qabel-android
|
https://api.github.com/repos/Qabel/qabel-android
|
closed
|
Create Account Logo wrong dimension
|
bug design
|
In the Create Account View - the Qabel Logo has wrong Dimensions and lacks resolution:

please replace with logo from #316
|
1.0
|
Create Account Logo wrong dimension - In the Create Account View - the Qabel Logo has wrong Dimensions and lacks resolution:

please replace with logo from #316
|
non_test
|
create account logo wrong dimension in the create account view the qabel logo has wrong dimensions and lacks resolution please replace with logo from
| 0
|
730,483
| 25,174,759,019
|
IssuesEvent
|
2022-11-11 08:13:05
|
leanprover/lean4
|
https://api.github.com/repos/leanprover/lean4
|
closed
|
square braces can cause exponential time/memory consumption
|
low priority
|
The following input (found via fuzz testing) causes `lean` to take 140 seconds and consume 11.6 GB:
```lean
def foo : [[[[[[[[[[[[[
```
Adding more braces seems to make it exponentially worse.
My expectation is that, even if lean must perform exponential work in some situation, some kind of heartbeat will trigger before things get out of hand like this.
```
Lean (version 4.0.0-nightly-2022-10-18, commit faa612e7b79a, Release)
```
|
1.0
|
square braces can cause exponential time/memory consumption - The following input (found via fuzz testing) causes `lean` to take 140 seconds and consume 11.6 GB:
```lean
def foo : [[[[[[[[[[[[[
```
Adding more braces seems to make it exponentially worse.
My expectation is that, even if lean must perform exponential work in some situation, some kind of heartbeat will trigger before things get out of hand like this.
```
Lean (version 4.0.0-nightly-2022-10-18, commit faa612e7b79a, Release)
```
|
non_test
|
square braces can cause exponential time memory consumption the following input found via fuzz testing causes lean to take seconds and consume gb lean def foo adding more braces seems to make it exponentially worse my expectation is that even if lean must perform exponential work in some situation some kind of heartbeat will trigger before things get out of hand like this lean version nightly commit release
| 0
|
391,366
| 26,889,119,453
|
IssuesEvent
|
2023-02-06 07:19:16
|
riff-lang/riff
|
https://api.github.com/repos/riff-lang/riff
|
closed
|
Man page(s) needed
|
documentation
|
Even a bare minimum man page should be shipped with `riff`. Currently there's nothing other than the documentation on the website.
This also raises the issue of where the language documentation should reside; either here or in its own repo.
|
1.0
|
Man page(s) needed - Even a bare minimum man page should be shipped with `riff`. Currently there's nothing other than the documentation on the website.
This also raises the issue of where the language documentation should reside; either here or in its own repo.
|
non_test
|
man page s needed even a bare minimum man page should be shipped with riff currently there s nothing other than the documentation on the website this also raises the issue of where the language documentation should reside either here or in its own repo
| 0
|
75,276
| 9,834,171,233
|
IssuesEvent
|
2019-06-17 09:03:20
|
nim-lang/Nim
|
https://api.github.com/repos/nim-lang/Nim
|
closed
|
TinyC is not documented
|
Documentation
|
TinyC support is not documented, is builtin but Disabled by default, it prints an error and quit,
but the Nim manual dont explain how to enable TinyC, for users of the language.
I know how to manually git clone and pass compile params and manually build,
but I want Nim with TinyC support with the vanilla standard `choosenim` tooling,
I know that adding `-d:tinyc` somewhere on a `*.cfg` file probably will make `choosenim` install with TinyC support,
but I dont know how to do it, is not documented,
being a builtin feature it should be documented for v1.0.
I know is not better than the default target, but is like an intermediate step between NimScript and C targets.
If you quickly comment how to properly enable it with `choosenim` and vanilla tooling, I can send a Documentation PR.
:slightly_smiling_face:
|
1.0
|
TinyC is not documented - TinyC support is not documented, is builtin but Disabled by default, it prints an error and quit,
but the Nim manual dont explain how to enable TinyC, for users of the language.
I know how to manually git clone and pass compile params and manually build,
but I want Nim with TinyC support with the vanilla standard `choosenim` tooling,
I know that adding `-d:tinyc` somewhere on a `*.cfg` file probably will make `choosenim` install with TinyC support,
but I dont know how to do it, is not documented,
being a builtin feature it should be documented for v1.0.
I know is not better than the default target, but is like an intermediate step between NimScript and C targets.
If you quickly comment how to properly enable it with `choosenim` and vanilla tooling, I can send a Documentation PR.
:slightly_smiling_face:
|
non_test
|
tinyc is not documented tinyc support is not documented is builtin but disabled by default it prints an error and quit but the nim manual dont explain how to enable tinyc for users of the language i know how to manually git clone and pass compile params and manually build but i want nim with tinyc support with the vanilla standard choosenim tooling i know that adding d tinyc somewhere on a cfg file probably will make choosenim install with tinyc support but i dont know how to do it is not documented being a builtin feature it should be documented for i know is not better than the default target but is like an intermediate step between nimscript and c targets if you quickly comment how to properly enable it with choosenim and vanilla tooling i can send a documentation pr slightly smiling face
| 0
|
201,605
| 15,214,947,764
|
IssuesEvent
|
2021-02-17 13:51:24
|
eclipse/openj9
|
https://api.github.com/repos/eclipse/openj9
|
opened
|
SharedClasses.SCM23.MultiThreadMultiCL crash in JIT vmState=0x0008000c
|
comp:jit segfault test failure
|
https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/7
SharedClasses.SCM23.MultiThreadMultiCL_0
https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Test/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/7/system_test_output.tar.gz
```
STF 04:26:07.616 - +------ Step 6 - Start java processes using LoaderSlaveMultiThreadMultiCL
STF 04:26:07.616 - | Run multiple concurrent foreground processes
...
MTM1 stderr #0: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x8986f5) [0x7fdf7495d6f5]
MTM1 stderr #1: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x8a3090) [0x7fdf74968090]
MTM1 stderr #2: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x1702be) [0x7fdf742352be]
MTM1 stderr #3: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9prt29.so(+0x29c3a) [0x7fdf764edc3a]
MTM1 stderr #4: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12890) [0x7fdf78d15890]
MTM1 stderr #5: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so(+0x3b237) [0x7fdf6f612237]
MTM1 stderr #6: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so(+0x39213) [0x7fdf6f610213]
MTM1 stderr #7: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so(+0x3957c) [0x7fdf6f61057c]
MTM1 stderr #8: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so(+0x1b642) [0x7fdf6f5f2642]
MTM1 stderr #9: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x2339f1) [0x7fdf742f89f1]
MTM1 stderr #10: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x18466e) [0x7fdf7424966e]
MTM1 stderr #11: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9prt29.so(+0x2a773) [0x7fdf764ee773]
MTM1 stderr #12: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x185b85) [0x7fdf7424ab85]
MTM1 stderr #13: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x186108) [0x7fdf7424b108]
MTM1 stderr #14: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x1815ab) [0x7fdf742465ab]
MTM1 stderr #15: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x181a72) [0x7fdf74246a72]
MTM1 stderr #16: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x181b1a) [0x7fdf74246b1a]
MTM1 stderr #17: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9prt29.so(+0x2a773) [0x7fdf764ee773]
MTM1 stderr #18: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x181f74) [0x7fdf74246f74]
MTM1 stderr #19: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9thr29.so(+0xe4f6) [0x7fdf762b74f6]
MTM1 stderr #20: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7fdf78d0a6db]
MTM1 stderr #21: function clone+0x3f [0x7fdf7861e88f]
MTM1 stderr Unhandled exception
MTM1 stderr Type=Segmentation error vmState=0x0008000c
MTM1 stderr J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000002
MTM1 stderr Handler1=00007FDF7678E380 Handler2=00007FDF764EDA10 InaccessibleAddress=00007FDDACEFED90
MTM1 stderr RDI=00007FDDD4822FE0 RSI=00007FDDACEFED78 RAX=00007FDF6F86CD60 RBX=00007FDDB8003C00
MTM1 stderr RCX=00007FDDD48231F0 RDX=0000000000000000 R8=0000000000000001 R9=0000000000000000
MTM1 stderr R10=0000000000000000 R11=0000000000000000 R12=00007FDF77AE2360 R13=00007FDDD4822FE0
MTM1 stderr R14=0000000000000000 R15=00007FDF700ECE00
MTM1 stderr RIP=00007FDF6F612237 GS=0000 FS=0000 RSP=00007FDDD4822EA8
MTM1 stderr EFlags=0000000000010246 CS=0033 RBP=00007FDF700EDD08 ERR=0000000000000007
MTM1 stderr TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=00007FDDACEFED90
MTM1 stderr xmm0 0000003000000020 (f: 32.000000, d: 1.018558e-312)
MTM1 stderr xmm1 00007fdd9ddb6dec (f: 2648403456.000000, d: 6.946060e-310)
MTM1 stderr xmm2 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm3 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm4 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm5 69727453656b616d (f: 1701536128.000000, d: 8.828705e+199)
MTM1 stderr xmm6 6e495f747365542f (f: 1936020480.000000, d: 1.834326e+223)
MTM1 stderr xmm7 63732f396a6e6570 (f: 1785619840.000000, d: 1.158423e+171)
MTM1 stderr xmm8 00007fdd38a4f430 (f: 950334528.000000, d: 6.945976e-310)
MTM1 stderr xmm9 69696b1869696969 (f: 1768515968.000000, d: 6.080151e+199)
MTM1 stderr xmm10 6969696a69696969 (f: 1768515968.000000, d: 6.078581e+199)
MTM1 stderr xmm11 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm12 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm13 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm14 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm15 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr Module=/home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so
MTM1 stderr Module_base_address=00007FDF6F5D7000
MTM1 stderr Target=2_90_20210216_21 (Linux 4.15.0-96-generic)
MTM1 stderr CPU=amd64 (4 logical CPUs) (0x5e2815000 RAM)
MTM1 stderr ----------- Stack Backtrace -----------
MTM1 stderr (0x00007FDF6F612237 [libj9shr29.so+0x3b237])
MTM1 stderr (0x00007FDF6F610213 [libj9shr29.so+0x39213])
MTM1 stderr (0x00007FDF6F61057C [libj9shr29.so+0x3957c])
MTM1 stderr (0x00007FDF6F5F2642 [libj9shr29.so+0x1b642])
MTM1 stderr (0x00007FDF742F89F1 [libj9jit29.so+0x2339f1])
MTM1 stderr (0x00007FDF7424966E [libj9jit29.so+0x18466e])
MTM1 stderr (0x00007FDF764EE773 [libj9prt29.so+0x2a773])
MTM1 stderr (0x00007FDF7424AB85 [libj9jit29.so+0x185b85])
MTM1 stderr (0x00007FDF7424B108 [libj9jit29.so+0x186108])
MTM1 stderr (0x00007FDF742465AB [libj9jit29.so+0x1815ab])
MTM1 stderr (0x00007FDF74246A72 [libj9jit29.so+0x181a72])
MTM1 stderr (0x00007FDF74246B1A [libj9jit29.so+0x181b1a])
MTM1 stderr (0x00007FDF764EE773 [libj9prt29.so+0x2a773])
MTM1 stderr (0x00007FDF74246F74 [libj9jit29.so+0x181f74])
MTM1 stderr (0x00007FDF762B74F6 [libj9thr29.so+0xe4f6])
MTM1 stderr (0x00007FDF78D0A6DB [libpthread.so.0+0x76db])
MTM1 stderr clone+0x3f (0x00007FDF7861E88F [libc.so.6+0x12188f])
MTM1 stderr ---------------------------------------
```
|
1.0
|
SharedClasses.SCM23.MultiThreadMultiCL crash in JIT vmState=0x0008000c - https://ci.eclipse.org/openj9/job/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/7
SharedClasses.SCM23.MultiThreadMultiCL_0
https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Test/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/7/system_test_output.tar.gz
```
STF 04:26:07.616 - +------ Step 6 - Start java processes using LoaderSlaveMultiThreadMultiCL
STF 04:26:07.616 - | Run multiple concurrent foreground processes
...
MTM1 stderr #0: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x8986f5) [0x7fdf7495d6f5]
MTM1 stderr #1: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x8a3090) [0x7fdf74968090]
MTM1 stderr #2: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x1702be) [0x7fdf742352be]
MTM1 stderr #3: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9prt29.so(+0x29c3a) [0x7fdf764edc3a]
MTM1 stderr #4: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12890) [0x7fdf78d15890]
MTM1 stderr #5: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so(+0x3b237) [0x7fdf6f612237]
MTM1 stderr #6: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so(+0x39213) [0x7fdf6f610213]
MTM1 stderr #7: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so(+0x3957c) [0x7fdf6f61057c]
MTM1 stderr #8: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so(+0x1b642) [0x7fdf6f5f2642]
MTM1 stderr #9: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x2339f1) [0x7fdf742f89f1]
MTM1 stderr #10: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x18466e) [0x7fdf7424966e]
MTM1 stderr #11: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9prt29.so(+0x2a773) [0x7fdf764ee773]
MTM1 stderr #12: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x185b85) [0x7fdf7424ab85]
MTM1 stderr #13: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x186108) [0x7fdf7424b108]
MTM1 stderr #14: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x1815ab) [0x7fdf742465ab]
MTM1 stderr #15: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x181a72) [0x7fdf74246a72]
MTM1 stderr #16: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x181b1a) [0x7fdf74246b1a]
MTM1 stderr #17: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9prt29.so(+0x2a773) [0x7fdf764ee773]
MTM1 stderr #18: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9jit29.so(+0x181f74) [0x7fdf74246f74]
MTM1 stderr #19: /home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9thr29.so(+0xe4f6) [0x7fdf762b74f6]
MTM1 stderr #20: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7fdf78d0a6db]
MTM1 stderr #21: function clone+0x3f [0x7fdf7861e88f]
MTM1 stderr Unhandled exception
MTM1 stderr Type=Segmentation error vmState=0x0008000c
MTM1 stderr J9Generic_Signal_Number=00000018 Signal_Number=0000000b Error_Value=00000000 Signal_Code=00000002
MTM1 stderr Handler1=00007FDF7678E380 Handler2=00007FDF764EDA10 InaccessibleAddress=00007FDDACEFED90
MTM1 stderr RDI=00007FDDD4822FE0 RSI=00007FDDACEFED78 RAX=00007FDF6F86CD60 RBX=00007FDDB8003C00
MTM1 stderr RCX=00007FDDD48231F0 RDX=0000000000000000 R8=0000000000000001 R9=0000000000000000
MTM1 stderr R10=0000000000000000 R11=0000000000000000 R12=00007FDF77AE2360 R13=00007FDDD4822FE0
MTM1 stderr R14=0000000000000000 R15=00007FDF700ECE00
MTM1 stderr RIP=00007FDF6F612237 GS=0000 FS=0000 RSP=00007FDDD4822EA8
MTM1 stderr EFlags=0000000000010246 CS=0033 RBP=00007FDF700EDD08 ERR=0000000000000007
MTM1 stderr TRAPNO=000000000000000E OLDMASK=0000000000000000 CR2=00007FDDACEFED90
MTM1 stderr xmm0 0000003000000020 (f: 32.000000, d: 1.018558e-312)
MTM1 stderr xmm1 00007fdd9ddb6dec (f: 2648403456.000000, d: 6.946060e-310)
MTM1 stderr xmm2 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm3 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm4 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm5 69727453656b616d (f: 1701536128.000000, d: 8.828705e+199)
MTM1 stderr xmm6 6e495f747365542f (f: 1936020480.000000, d: 1.834326e+223)
MTM1 stderr xmm7 63732f396a6e6570 (f: 1785619840.000000, d: 1.158423e+171)
MTM1 stderr xmm8 00007fdd38a4f430 (f: 950334528.000000, d: 6.945976e-310)
MTM1 stderr xmm9 69696b1869696969 (f: 1768515968.000000, d: 6.080151e+199)
MTM1 stderr xmm10 6969696a69696969 (f: 1768515968.000000, d: 6.078581e+199)
MTM1 stderr xmm11 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm12 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm13 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm14 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr xmm15 0000000000000000 (f: 0.000000, d: 0.000000e+00)
MTM1 stderr Module=/home/jenkins/workspace/Test_openjdk11_j9_extended.system_x86-64_linux_mixed_Nightly_testList_2/openjdkbinary/j2sdk-image/lib/default/libj9shr29.so
MTM1 stderr Module_base_address=00007FDF6F5D7000
MTM1 stderr Target=2_90_20210216_21 (Linux 4.15.0-96-generic)
MTM1 stderr CPU=amd64 (4 logical CPUs) (0x5e2815000 RAM)
MTM1 stderr ----------- Stack Backtrace -----------
MTM1 stderr (0x00007FDF6F612237 [libj9shr29.so+0x3b237])
MTM1 stderr (0x00007FDF6F610213 [libj9shr29.so+0x39213])
MTM1 stderr (0x00007FDF6F61057C [libj9shr29.so+0x3957c])
MTM1 stderr (0x00007FDF6F5F2642 [libj9shr29.so+0x1b642])
MTM1 stderr (0x00007FDF742F89F1 [libj9jit29.so+0x2339f1])
MTM1 stderr (0x00007FDF7424966E [libj9jit29.so+0x18466e])
MTM1 stderr (0x00007FDF764EE773 [libj9prt29.so+0x2a773])
MTM1 stderr (0x00007FDF7424AB85 [libj9jit29.so+0x185b85])
MTM1 stderr (0x00007FDF7424B108 [libj9jit29.so+0x186108])
MTM1 stderr (0x00007FDF742465AB [libj9jit29.so+0x1815ab])
MTM1 stderr (0x00007FDF74246A72 [libj9jit29.so+0x181a72])
MTM1 stderr (0x00007FDF74246B1A [libj9jit29.so+0x181b1a])
MTM1 stderr (0x00007FDF764EE773 [libj9prt29.so+0x2a773])
MTM1 stderr (0x00007FDF74246F74 [libj9jit29.so+0x181f74])
MTM1 stderr (0x00007FDF762B74F6 [libj9thr29.so+0xe4f6])
MTM1 stderr (0x00007FDF78D0A6DB [libpthread.so.0+0x76db])
MTM1 stderr clone+0x3f (0x00007FDF7861E88F [libc.so.6+0x12188f])
MTM1 stderr ---------------------------------------
```
|
test
|
sharedclasses multithreadmulticl crash in jit vmstate sharedclasses multithreadmulticl stf step start java processes using loaderslavemultithreadmulticl stf run multiple concurrent foreground processes stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr lib linux gnu libpthread so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr lib linux gnu libpthread so stderr function clone stderr unhandled exception stderr type segmentation error vmstate stderr signal number signal number error value signal code stderr inaccessibleaddress stderr rdi rsi rax rbx stderr rcx rdx stderr stderr stderr rip gs fs rsp stderr eflags cs rbp err stderr trapno oldmask stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr f d stderr module home jenkins workspace test extended system linux mixed nightly testlist openjdkbinary image lib default so stderr module base address stderr target linux generic stderr cpu logical cpus ram stderr stack backtrace stderr stderr stderr stderr stderr stderr stderr stderr stderr stderr stderr stderr stderr stderr stderr stderr stderr clone stderr
| 1
|
10,298
| 2,941,224,496
|
IssuesEvent
|
2015-07-02 05:59:05
|
javaslang/javaslang
|
https://api.github.com/repos/javaslang/javaslang
|
opened
|
[Epic] Re-work Match
|
design/refactoring
|
This includes
* #225 Rename Match.caze to Match.when ✓
* #227 Howto process further with Match.orElse() ✓
* #302 Match to allow Scala style matching on a range of values
* ~~#317 Adding matching on a range of values~~
* #322 Get Match.of.when type signature right
* #324 Change Match.when/then(Function1) to (Function)
* #325 Further simplification of Match API
|
1.0
|
[Epic] Re-work Match - This includes
* #225 Rename Match.caze to Match.when ✓
* #227 Howto process further with Match.orElse() ✓
* #302 Match to allow Scala style matching on a range of values
* ~~#317 Adding matching on a range of values~~
* #322 Get Match.of.when type signature right
* #324 Change Match.when/then(Function1) to (Function)
* #325 Further simplification of Match API
|
non_test
|
re work match this includes rename match caze to match when ✓ howto process further with match orelse ✓ match to allow scala style matching on a range of values adding matching on a range of values get match of when type signature right change match when then to function further simplification of match api
| 0
|
83,360
| 15,705,839,539
|
IssuesEvent
|
2021-03-26 16:38:58
|
LalithK90/nandanaMotors
|
https://api.github.com/repos/LalithK90/nandanaMotors
|
opened
|
CVE-2020-1935 (Medium) detected in tomcat-embed-core-9.0.30.jar
|
security vulnerability
|
## CVE-2020-1935 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nandanaMotors/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/LalithK90/nandanaMotors/commit/9ef1b8872dbf2aed871ace97625df8c02d6e4ee9">9ef1b8872dbf2aed871ace97625df8c02d6e4ee9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Tomcat 9.0.0.M1 to 9.0.30, 8.5.0 to 8.5.50 and 7.0.0 to 7.0.99 the HTTP header parsing code used an approach to end-of-line parsing that allowed some invalid HTTP headers to be parsed as valid. This led to a possibility of HTTP Request Smuggling if Tomcat was located behind a reverse proxy that incorrectly handled the invalid Transfer-Encoding header in a particular manner. Such a reverse proxy is considered unlikely.
<p>Publish Date: 2020-02-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1935>CVE-2020-1935</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-6v7p-v754-j89v">https://github.com/advisories/GHSA-6v7p-v754-j89v</a></p>
<p>Release Date: 2020-02-24</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:7.0.100,8.5.51,9.0.31;org.apache.tomcat:tomcat-coyote:7.0.100,8.5.51,9.0.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-1935 (Medium) detected in tomcat-embed-core-9.0.30.jar - ## CVE-2020-1935 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-9.0.30.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="https://tomcat.apache.org/">https://tomcat.apache.org/</a></p>
<p>Path to dependency file: nandanaMotors/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/org.apache.tomcat.embed/tomcat-embed-core/9.0.30/ad32909314fe2ba02cec036434c0addd19bcc580/tomcat-embed-core-9.0.30.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.2.4.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-2.2.4.RELEASE.jar
- :x: **tomcat-embed-core-9.0.30.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/LalithK90/nandanaMotors/commit/9ef1b8872dbf2aed871ace97625df8c02d6e4ee9">9ef1b8872dbf2aed871ace97625df8c02d6e4ee9</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Apache Tomcat 9.0.0.M1 to 9.0.30, 8.5.0 to 8.5.50 and 7.0.0 to 7.0.99 the HTTP header parsing code used an approach to end-of-line parsing that allowed some invalid HTTP headers to be parsed as valid. This led to a possibility of HTTP Request Smuggling if Tomcat was located behind a reverse proxy that incorrectly handled the invalid Transfer-Encoding header in a particular manner. Such a reverse proxy is considered unlikely.
<p>Publish Date: 2020-02-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-1935>CVE-2020-1935</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-6v7p-v754-j89v">https://github.com/advisories/GHSA-6v7p-v754-j89v</a></p>
<p>Release Date: 2020-02-24</p>
<p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:7.0.100,8.5.51,9.0.31;org.apache.tomcat:tomcat-coyote:7.0.100,8.5.51,9.0.31</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in tomcat embed core jar cve medium severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file nandanamotors build gradle path to vulnerable library home wss scanner gradle caches modules files org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details in apache tomcat to to and to the http header parsing code used an approach to end of line parsing that allowed some invalid http headers to be parsed as valid this led to a possibility of http request smuggling if tomcat was located behind a reverse proxy that incorrectly handled the invalid transfer encoding header in a particular manner such a reverse proxy is considered unlikely publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote step up your open source security game with whitesource
| 0
|
42,474
| 9,219,316,784
|
IssuesEvent
|
2019-03-11 15:11:20
|
process-engine/process_engine_runtime
|
https://api.github.com/repos/process-engine/process_engine_runtime
|
closed
|
🔒 ✨ Add post-migration scripts for ensuring data consistency
|
code quality enhancement
|
#### Describe your issue
Executing migrations comes with limitations, the most severe one being that one migration file only gets one Sequelize-Instance and is therefore only able to connect to one single database.
This can be very problematic, if the tables are spread across multiple databases, like we currently do with the BPMN-Studio.
For example, when adding new columns to a table, which requires data from another table in another database, we will not be able to fill that column with any useful data.
This is because the Sequelize Instance the migration script gets supplied with can only access the one database it is already connected to.
#### Possible solution
We need a way to ensure that all data across all our databases and tables will remain consistent, after running migrations.
Since we cannot do that during the migrations themselves, we should consider adding some scripts that can accomplish this at a later point.
Because we have full control over what we place in those scripts, we can open up as many Sequelize connections as we like to do what we need.
There are already a number of columns in our databases that would benefit from this.
These are located in the following tables:
Correlations:
- ProcessModelId (Can be retrieved from the FlowNodeInstance table)
- State (Can be calculated by getting and evaluating all FlowNodeInstances)
ExternalTasks:
- ProcessModelId (Can be retrieved from the FlowNodeInstance table)
FlowNodeInstance:
- BpmnType (Can be determined through the respective ProcessModel)
- EventType (Can be determined through the respective ProcessModel)
All these columns were added after the initial implementation and require data from outside their own table.
One set of databases I examined earlier today showed some gaping holes after its migrations, precisely because of the issue I described at the beginning.
Since we will be shipping out the ProcessEngine really soon now, we should be able to provide a way to ensure data consistency.
Post-Migration scripts would greatly help with that.
#### Issue checklist
Please check the boxes in this list after submitting your Issue:
- [x] I've checked if this issue already exists
- [x] I've included all the information that i think is relevant
- [x] I've added logs and/or screenshots (if applicable)
- [x] I've mentioned PRs and issues that relate to this one
|
1.0
|
🔒 ✨ Add post-migration scripts for ensuring data consistency - #### Describe your issue
Executing migrations comes with limitations, the most severe one being that one migration file only gets one Sequelize-Instance and is therefore only able to connect to one single database.
This can be very problematic, if the tables are spread across multiple databases, like we currently do with the BPMN-Studio.
For example, when adding new columns to a table, which requires data from another table in another database, we will not be able to fill that column with any useful data.
This is because the Sequelize Instance the migration script gets supplied with can only access the one database it is already connected to.
#### Possible solution
We need a way to ensure that all data across all our databases and tables will remain consistent, after running migrations.
Since we cannot do that during the migrations themselves, we should consider adding some scripts that can accomplish this at a later point.
Because we have full control over what we place in those scripts, we can open up as many Sequelize connections as we like to do what we need.
There are already a number of columns in our databases that would benefit from this.
These are located in the following tables:
Correlations:
- ProcessModelId (Can be retrieved from the FlowNodeInstance table)
- State (Can be calculated by getting and evaluating all FlowNodeInstances)
ExternalTasks:
- ProcessModelId (Can be retrieved from the FlowNodeInstance table)
FlowNodeInstance:
- BpmnType (Can be determined through the respective ProcessModel)
- EventType (Can be determined through the respective ProcessModel)
All these columns were added after the initial implementation and require data from outside their own table.
One set of databases I examined earlier today showed some gaping holes after its migrations, precisely because of the issue I described at the beginning.
Since we will be shipping out the ProcessEngine really soon now, we should be able to provide a way to ensure data consistency.
Post-Migration scripts would greatly help with that.
#### Issue checklist
Please check the boxes in this list after submitting your Issue:
- [x] I've checked if this issue already exists
- [x] I've included all the information that i think is relevant
- [x] I've added logs and/or screenshots (if applicable)
- [x] I've mentioned PRs and issues that relate to this one
|
non_test
|
🔒 ✨ add post migration scripts for ensuring data consistency describe your issue executing migrations comes with limitations the most severe one being that one migration file only gets one sequelize instance and is therefore only able to connect to one single database this can be very problematic if the tables are spread across multiple databases like we currently do with the bpmn studio for example when adding new columns to a table which requires data from another table in another database we will not be able to fill that column with any useful data this is because the sequelize instance the migration script gets supplied with can only access the one database it is already connected to possible solution we need a way to ensure that all data across all our databases and tables will remain consistent after running migrations since we cannot do that during the migrations themselves we should consider adding some scripts that can accomplish this at a later point because we have full control over what we place in those scripts we can open up as many sequelize connections as we like to do what we need there are already a number of columns in our databases that would benefit from this these are located in the following tables correlations processmodelid can be retrieved from the flownodeinstance table state can be calculated by getting and evaluating all flownodeinstances externaltasks processmodelid can be retrieved from the flownodeinstance table flownodeinstance bpmntype can be determined through the respective processmodel eventtype can be determined through the respective processmodel all these columns were added after the initial implementation and require data from outside their own table one set of databases i examined earlier today showed some gaping holes after its migrations precisely because of the issue i described at the beginning since we will be shipping out the processengine really soon now we should be able to provide a way to ensure data consistency post migration scripts would greatly help with that issue checklist please check the boxes in this list after submitting your issue i ve checked if this issue already exists i ve included all the information that i think is relevant i ve added logs and or screenshots if applicable i ve mentioned prs and issues that relate to this one
| 0
|
20,251
| 3,321,367,396
|
IssuesEvent
|
2015-11-09 08:28:06
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
No need to using `CacheLoader` inside client/server side cache proxies
|
Team: Client Team: Integration Type: Defect
|
At the moment both of client and server side cache proxies create configured `CacheLoader` through their factories however they are not used but just created, validated and closed. So seems that no need to using them at cache proxy level since they are handled at record store level at server side.
See http://stackoverflow.com/questions/33556896/why-hazelcast-cacheloader-class-needs-to-be-visible-by-all-clients
|
1.0
|
No need to using `CacheLoader` inside client/server side cache proxies - At the moment both of client and server side cache proxies create configured `CacheLoader` through their factories however they are not used but just created, validated and closed. So seems that no need to using them at cache proxy level since they are handled at record store level at server side.
See http://stackoverflow.com/questions/33556896/why-hazelcast-cacheloader-class-needs-to-be-visible-by-all-clients
|
non_test
|
no need to using cacheloader inside client server side cache proxies at the moment both of client and server side cache proxies create configured cacheloader through their factories however they are not used but just created validated and closed so seems that no need to using them at cache proxy level since they are handled at record store level at server side see
| 0
|
257,906
| 22,262,527,468
|
IssuesEvent
|
2022-06-10 02:44:55
|
streamnative/pulsar
|
https://api.github.com/repos/streamnative/pulsar
|
opened
|
ISSUE-16000: [test] NPE of StringSchema for branch-2.10
|
component/test
|
Original Issue: apache/pulsar#16000
---
**Describe the bug**
https://github.com/apache/pulsar/runs/6812506434?check_suite_focus=true
```
Error: Tests run: 4, Failures: 1, Errors: 0, Skipped: 3, Time elapsed: 7.561 s <<< FAILURE! - in org.apache.pulsar.broker.service.BrokerServiceBundlesCacheInvalidationTest
Error: testRecreateNamespace(org.apache.pulsar.broker.service.BrokerServiceBundlesCacheInvalidationTest) Time elapsed: 0.2 s <<< FAILURE!
java.lang.NullPointerException
at java.base/java.lang.String.getBytes(String.java:963)
at org.apache.pulsar.client.impl.schema.StringSchema.encode(StringSchema.java:98)
at org.apache.pulsar.client.impl.schema.StringSchema.encode(StringSchema.java:34)
at org.apache.pulsar.client.impl.TypedMessageBuilderImpl.value(TypedMessageBuilderImpl.java:173)
at org.apache.pulsar.client.impl.ProducerBase.send(ProducerBase.java:62)
at org.apache.pulsar.broker.service.BrokerServiceBundlesCacheInvalidationTest.testRecreateNamespace(BrokerServiceBundlesCacheInvalidationTest.java:57)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:132)
at org.testng.internal.InvokeMethodRunnable.runOne(InvokeMethodRunnable.java:45)
at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:73)
at org.testng.internal.InvokeMethodRunnable.call(InvokeMethodRunnable.java:11)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
```
**To Reproduce**
Always fails on CI, but it cannot be reproduced on my laptop
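The NPE originates from `String.getBytes` being called on a null payload inside `StringSchema.encode`. A null-safe encode, sketched in Python (treating a null payload as null output is an assumed semantic for illustration, not the actual Java fix):

```python
def encode(message):
    """Null-safe string encode, mirroring StringSchema.encode.

    Passing None through unchanged is an assumed semantic for null
    payloads; the original Java code called message.getBytes()
    unconditionally, which raised the NullPointerException.
    """
    if message is None:
        return None
    return message.encode("utf-8")
```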
|
1.0
|
test
| 1
|
38,866
| 15,818,234,917
|
IssuesEvent
|
2021-04-05 15:44:20
|
aws/aws-sdk-ruby
|
https://api.github.com/repos/aws/aws-sdk-ruby
|
closed
|
Aws::Sigv4::Signer presign_url expires_in option not supported for SQS url presigning?
|
service-api
|
We're pre-signing URLs for a resource to poll a specific SQS queue. The `expires_in` option does not seem to be respected for SQS.
We tried setting the `expires_in` to 1 day, 10 seconds, 5 days...etc. Each time, the URL is issued correctly but only lasts for 15 mins (the default).
## Signer
Upon inspecting the AWS-sigv4 lib, the `presign_url` appears to be doing the right thing when looking at the [generated params](https://github.com/aws/aws-sdk-ruby/blob/version-3/gems/aws-sigv4/lib/aws-sigv4/signer.rb#L397). The example below is for `expires_in=10`:
```ruby
puts params
{"X-Amz-Algorithm"=>"AWS4-HMAC-SHA256", "X-Amz-Credential"=>"<cred_removed>/20210325/<aws_region_removed>/sqs/aws4_request", "X-Amz-Date"=>"20210325T185128Z", "X-Amz-Expires"=>"10", "X-Amz-SignedHeaders"=>"host"}
```
Yet despite `"X-Amz-Expires"=>"10"`, the resulting URL will still last for 15 mins.
## Steps to Reproduce
```ruby
signer = Aws::Sigv4::Signer.new(
service: "sqs",
region: <your_aws_region>,
access_key_id: <your aws access key>,
secret_access_key: <your secret key>
)
presigned_url = signer.presign_url(
http_method: "get",
url: <url to your sqs queue with action Action=ReceiveMessage and MessageId set to what you wish to poll>,
body: nil,
expires_in: 10
)
```
## Additional Info
gem version: `aws-sigv4-1.2.2`
Ruby version:
```shell
$ ruby -v
ruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux]
```
please let me know if you need more details.
edit: changed `Action=SendMessage` to `Action=ReceiveMessage`
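For reference, the query parameters shown above can be assembled like this (an illustrative Python sketch, not the actual aws-sigv4 implementation; the helper name is hypothetical). A documented SigV4 property is also worth checking here: a URL presigned with temporary security credentials stops working once those credentials expire, regardless of `X-Amz-Expires`.

```python
from urllib.parse import urlencode

def presign_params(access_key_id, region, amz_date, expires_in, service="sqs"):
    """Illustrative sketch of the SigV4 query parameters that presign_url
    embeds in the URL, so the X-Amz-Expires value can be inspected."""
    date_stamp = amz_date[:8]  # scope uses YYYYMMDD from the request date
    scope = f"{date_stamp}/{region}/{service}/aws4_request"
    return {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key_id}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires_in),
        "X-Amz-SignedHeaders": "host",
    }

params = presign_params("AKIDEXAMPLE", "us-east-1", "20210325T185128Z", 10)
query = urlencode(params)
```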
|
1.0
|
non_test
| 0
|
37,577
| 5,120,742,932
|
IssuesEvent
|
2017-01-09 06:01:06
|
Sententiaregum/Sententiaregum
|
https://api.github.com/repos/Sententiaregum/Sententiaregum
|
closed
|
refactor frontend testing aliases
|
Frontend Refactor Testing
|
When there are multiple internal `form` components, their tests don't conflict thanks to Mocha's internal behavior, but having multiple tests with the same name is not very clean.
|
1.0
|
test
| 1
|
223,932
| 17,647,230,265
|
IssuesEvent
|
2021-08-20 08:07:59
|
ladybug-tools/honeybee-vtk
|
https://api.github.com/repos/ladybug-tools/honeybee-vtk
|
closed
|
Improve unit tests
|
unit-tests
|
Remove common objects created in all the tests and create them outside just once.
- [ ] _helper_test
- [ ] actors_test
- [ ] assistant_test
- [ ] camera_test
- [ ] data_test
- [ ] legend_parameter_test
- [ ] model_test
- [ ] scene_test
- [ ] cli export_test
- [ ] cli translate test
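The shared setup can be hoisted so the common objects are built once per module or class; pytest's module-scoped fixtures do this, and the same idea in stdlib `unittest` terms looks like the sketch below (the file path and the `load_model` helper are hypothetical stand-ins for the objects repeated across these test files):

```python
import unittest

def load_model(path):
    # Hypothetical stand-in for the model construction repeated in each test.
    return {"path": path, "loaded": True}

class ModelTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Created once for the whole class instead of once per test.
        cls.model = load_model("tests/assets/gridbased.hbjson")

    def test_model_loaded(self):
        self.assertTrue(self.model["loaded"])

    def test_model_path(self):
        self.assertTrue(self.model["path"].endswith(".hbjson"))
```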
|
1.0
|
test
| 1
|
73,419
| 7,333,835,202
|
IssuesEvent
|
2018-03-05 20:42:54
|
devtools-html/debugger.html
|
https://api.github.com/repos/devtools-html/debugger.html
|
closed
|
[Stepping] Re-enable babel-stepping test
|
in progress testing
|
We recently had an issue with travis, which allowed the test to land before it was consistently working. We should prioritize re-enabling it.
|
1.0
|
test
| 1
|
41,548
| 5,374,583,439
|
IssuesEvent
|
2017-02-23 00:54:32
|
Microsoft/vscode
|
https://api.github.com/repos/Microsoft/vscode
|
closed
|
Test: JSDoc Auto Complete
|
testplan-item
|
#20161 #15850
**OS**
- [x] Mac @weinand
- [x] Windows @mousetraps
- [x] Linux @Tyriar
Complexity 2
1. Create a new project with a blank js file.
2. In the js file, trigger suggestions. There should be an entry at the top for JSDoc comments.
3. Try hitting Enter to complete and see that the comment is inserted.
4. Type out `/**`
5. The suggestion list should show up automatically and the line should read `/**| */`
6. Hit enter to complete the doc
7. create a function like `function add(a, b) { return a + b; }`
8. Try inserting a jsdoc comment before the function. This time, `@param` should be created automatically for `a` and `b`. Both should have `any` type specifiers.
9. Open a ts document and try creating a jsdoc for a function.
10. `@param` should be created but there should be no type specifiers.
11. Try inserting jsdocs in other locations in the file. Only functions currently add any content to the doc.
**Notes**
You should only see a jsdoc suggestion when on blank lines or when starting with a doc comment template. If `|` is the cursor:
```
// These lines all should show the jsdoc completion item when using ctrl+space
|
|
|xyz
/|
/**|
/**|xyz
ab/**|
// While cases like this should not
ab|
ab.|
```
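The accept/reject cases above reduce to a simple prefix rule: suggest when the text before the cursor is blank, a lone `/`, or ends with `/**`. A sketch of that rule (in Python, purely to illustrate the pattern; not the actual VS Code implementation):

```python
import re

def should_suggest_jsdoc(prefix: str) -> bool:
    """Return True when the text before the cursor should trigger the
    JSDoc completion: blank, a lone '/', or anything ending in '/**'."""
    return bool(
        re.fullmatch(r"\s*", prefix)
        or re.fullmatch(r"\s*/", prefix)
        or prefix.endswith("/**")
    )

# Cases from the notes above:
assert should_suggest_jsdoc("")       # blank line
assert should_suggest_jsdoc("/")      # '/|'
assert should_suggest_jsdoc("/**")    # '/**|' and '/**|xyz'
assert should_suggest_jsdoc("ab/**")  # 'ab/**|'
assert not should_suggest_jsdoc("ab")   # 'ab|'
assert not should_suggest_jsdoc("ab.")  # 'ab.|'
```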
|
1.0
|
test
| 1
|
162,162
| 12,625,157,195
|
IssuesEvent
|
2020-06-14 10:24:30
|
WoWManiaUK/Blackwing-Lair
|
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
|
closed
|
[Dungeon] Issue upon entering an instance/dungeon
|
Confirmed By Tester Dungeon/Raid
|
**Links:**
N/A.
**What is happening:**
After accepting a dungeon queue for Stonecore, my client crashed. After relogging, I was in Dalaran (the current location of my Hearthstone) without a group, and I was unable to see whispers from any group members or the whispers I sent to them. To them I appeared either dead or offline while I was walking around Dalaran/Orgrimmar. Relogging seems to have no effect on this bizarre state; even multiple attempts to relog did nothing to fix this issue.
**What should happen:**
N/A.
|
1.0
|
test
| 1
|
223,816
| 17,632,708,804
|
IssuesEvent
|
2021-08-19 09:58:26
|
open-telemetry/opentelemetry-java-instrumentation
|
https://api.github.com/repos/open-telemetry/opentelemetry-java-instrumentation
|
closed
|
SpringRestTemplateTest should be moved out of http-url-connection-test
|
enhancement area:tests
|
There seems to be a `SpringRestTemplateTest` in http-url-connection javaagent tests, presumably to make sure it works using the standard instrumentation. This should instead be in the spring-web package and use `testInstrumentation` on the http-url-connection instrumentation to follow our normal pattern. Should be able to reduce duplication doing so too
|
1.0
|
test
| 1
|
133,980
| 18,399,564,408
|
IssuesEvent
|
2021-10-12 14:54:53
|
opencollective/opencollective
|
https://api.github.com/repos/opencollective/opencollective
|
opened
|
Should we use Hackerone to handle security reports?
|
security discussion
|
HackerOne is a very popular solution to handle security reports, the process of communicating with researchers and paying them. It has quite a large community, and projects are publicly listed there so that can bring a lot of attention.
**Pros**:
- Promotes to a large community of security researchers: more reports, better reports
- Public history: people will be able to see that we invest in security
- Standard tool with all the best practices to centralize reports
- [Free](https://www.hackerone.com/company/open-source-community) for open-source
**Cons**:
- Do we have the capacity to handle many reports (including very minor ones)?
- It's cool to eat our own dog food by using expenses to pay people
|
True
|
non_test
| 0
|
201,965
| 15,240,417,720
|
IssuesEvent
|
2021-02-19 06:39:17
|
pytorch/pytorch
|
https://api.github.com/repos/pytorch/pytorch
|
opened
|
OpInfo to support `sample_inputs_func` with Iterable output.
|
module: testing
|
## 🚀 Feature
It would make things easier and more transparent/modular if OpInfo could support samplers which comply to the Iterable protocol.
## Motivation & Pitch
OpInfo does support a parameter `sample_inputs_func` which generates inputs for test cases. However, the `len` function is applied to the output of this function, meaning that whatever `sample_inputs_func` returns has to comply with the Sequence protocol. This comes with certain limitations:
* Enforces writing one long sampler with either a huge tuple, or with appending to a list which does not look elegant, nor splits testing into independent parts.
* Does not allow mixing in generators or arbitrary iterables for that matter. That is a matter of taste, but using `yield` instead of `my_list.append(...)` seems to look better.
* If we are to split a sampler into subsamplers, we have to make sure that they all generate the SAME sequence, such that we could concatenate it like, for example, `sample_inputs_func = sampler1 + sampler2 + ...`. If `sample_inputs_func` were Iterable, we could have simply chained arbitrary Iterable samplers like `sample_inputs_func = itertools.chain(sampler1, sampler2, ....)`. Not only does it make adding new samplers easier, this way we could reuse same samplers for different operations by adding them to different chains.
CC @mruberry
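The chaining idea can be sketched as follows (the sampler contents are hypothetical); note that `len` fails on a generator, which is exactly the current limitation:

```python
import itertools

# Two small samplers written as generators (inputs are hypothetical).
def sampler1():
    yield (1.0,)
    yield (2.0,)

def sampler2():
    yield (3.0, 4.0)

def sample_inputs_func():
    # Arbitrary iterables chain together; no len() is required.
    return itertools.chain(sampler1(), sampler2())

# A generator has no len(); materialize once if a count is needed.
samples = list(sample_inputs_func())
```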
|
1.0
|
test
| 1
|
175,972
| 14,546,716,944
|
IssuesEvent
|
2020-12-15 21:41:40
|
retaildevcrews/ngsa
|
https://api.github.com/repos/retaildevcrews/ngsa
|
closed
|
Add a "Getting Started" .md in docs and link in root readme
|
Documentation
|
## Description
- [ ] Add a "Getting Started" .md in docs
- [ ] link in root readme
- References #190
|
1.0
|
non_test
| 0
|
116,428
| 9,852,184,366
|
IssuesEvent
|
2019-06-19 12:17:41
|
aplneto/medmapper
|
https://api.github.com/repos/aplneto/medmapper
|
closed
|
Assets
|
behaviour-test front
|
Lista de Assets e Collections:
### navigation
- [x] _navbar.html.slim
- [x] _footer.html.slim
- [x] _header.html.slim
### comments
- [x] _comment.html.slim
- [x] _form.html.slim
### health_units
- [x] _form.html.slim
- [x] _edit_form.html.slim
- [x] _health_unit.html.slim
### user_profile
- [x] _form.html.slim
- [x] _edit_form.html.slim
- [x] _user_profile.html.slim
### professional_profile
- [x] _form.html.slim
- [x] _edit_form.html.slim
- [x] _user_profile.html.slim
### service_provider
- [x] _form.html.slim
- [x] _edit_form.html.slim
- [x] _user_profile.html.slim
|
1.0
|
test
| 1
|
37,156
| 5,104,091,166
|
IssuesEvent
|
2017-01-04 23:39:09
|
FreeCodeCamp/FreeCodeCamp
|
https://api.github.com/repos/FreeCodeCamp/FreeCodeCamp
|
closed
|
Falsy Bouncer has JSON.stringify() incorrectly change NaN to null
|
help wanted tests
|
Challenge [Falsy Bouncer](https://www.freecodecamp.com/challenges/falsy-bouncer#?solution=function%20bouncer(arr)%20%7B%0A%0A%20%20function%20notFalse(item)%20%7B%0A%20%20%20%20if(item%20%3D%3D%3D%20false)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%20%22%22)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%20null)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%200)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%20undefined)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%20%22NaN%22)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20%7B%0A%20%20%20%20%20%20return%20true%3B%0A%20%20%20%20%7D%0A%20%20%7D%0A%0A%20%20var%20newArr%20%3D%20arr.filter(notFalse)%3B%0A%20%20return%20newArr%3B%0A%7D%0A%0Abouncer(%5B8%2C%20null%2C%200%2C%20NaN%2C%20undefined%2C%20%22%22%5D)%3B%0A) has an issue.
User Agent is: <code>Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36</code>.
Please describe how to reproduce this issue, and include links to screenshots if possible.
My code:
```javascript
function bouncer(arr) {
function notFalse(item) {
if(item === false) {
return false;
} else if (item === "") {
return false;
} else if (item === null) {
return false;
} else if (item === 0) {
return false;
} else if (item === undefined) {
return false;
} else if (item === "NaN") {
return false;
} else {
return true;
}
}
var newArr = arr.filter(notFalse);
return newArr;
}
bouncer([8, null, 0, NaN, undefined, ""]);
```
While I eventually found a simpler solution to this challenge, when I used the above code the filter wasn't catching null, i.e. it was returning [8, null]. I even changed it to return newArr[1] === null to make sure it wasn't just a problem with the notFalse function. When I copied the same script into atom and ran it, it returned the expected newArr, i.e. [8]. If I'm just missing something, please let me know!
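The `item === "NaN"` branch is the bug: `NaN` is a number, never the string `"NaN"`, so that comparison can never be true and `NaN` slips through the filter (in JavaScript the whole check reduces to `arr.filter(Boolean)`). The `NaN` behavior carries over to Python, where the self-inequality trick gives a dependency-free check:

```python
import math

nan = float("nan")

# NaN is not the string "NaN", so a strict comparison like the one in
# the snippet above can never be true:
assert (nan == "NaN") is False

# NaN is the only value that is not equal to itself:
def is_nan(x):
    return x != x

assert is_nan(nan)
assert not is_nan(8)
assert math.isnan(nan)  # the explicit stdlib check
```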
|
1.0
|
Falsy Bouncer has JSON.stringify() incorrectly change NaN to null - Challenge [Falsy Bouncer](https://www.freecodecamp.com/challenges/falsy-bouncer#?solution=function%20bouncer(arr)%20%7B%0A%0A%20%20function%20notFalse(item)%20%7B%0A%20%20%20%20if(item%20%3D%3D%3D%20false)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%20%22%22)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%20null)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%200)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%20undefined)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20if%20(item%20%3D%3D%3D%20%22NaN%22)%20%7B%0A%20%20%20%20%20%20return%20false%3B%0A%20%20%20%20%7D%20else%20%7B%0A%20%20%20%20%20%20return%20true%3B%0A%20%20%20%20%7D%0A%20%20%7D%0A%0A%20%20var%20newArr%20%3D%20arr.filter(notFalse)%3B%0A%20%20return%20newArr%3B%0A%7D%0A%0Abouncer(%5B8%2C%20null%2C%200%2C%20NaN%2C%20undefined%2C%20%22%22%5D)%3B%0A) has an issue.
User Agent is: <code>Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36</code>.
Please describe how to reproduce this issue, and include links to screenshots if possible.
My code:
```javascript
function bouncer(arr) {
function notFalse(item) {
if(item === false) {
return false;
} else if (item === "") {
return false;
} else if (item === null) {
return false;
} else if (item === 0) {
return false;
} else if (item === undefined) {
return false;
} else if (item === "NaN") {
return false;
} else {
return true;
}
}
var newArr = arr.filter(notFalse);
return newArr;
}
bouncer([8, null, 0, NaN, undefined, ""]);
```
While I eventually found a simpler solution to this challenge, when I used the above code the filter wasn't catching null, i.e. it was returning [8, null]. I even changed it to return newArr[1] === null to make sure it wasn't just a problem with the notFalse function. When I copied the same script into atom and ran it, it returned the expected newArr, i.e. [8]. If I'm just missing something, please let me know!
|
test
|
falsy bouncer has json stringify incorrectly change nan to null challenge has an issue user agent is mozilla macintosh intel mac os x applewebkit khtml like gecko chrome safari please describe how to reproduce this issue and include links to screenshots if possible my code javascript function bouncer arr function notfalse item if item false return false else if item return false else if item null return false else if item return false else if item undefined return false else if item nan return false else return true var newarr arr filter notfalse return newarr bouncer while i eventually found a simpler solution to this challenge when i used the above code the filter wasn t catching null i e it was returning i even changed it to return newarr null to make sure it wasn t just a problem with the notfalse function when i copied the same script into atom and ran it it returned the expected newarr i e if i m just missing something please let me know
| 1
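The Falsy Bouncer record above turns on two JavaScript facts: every falsy value fails a `Boolean` test, and JSON has no representation for `NaN`, so a harness that serializes results with `JSON.stringify` shows it as `null`. An illustrative sketch of both points (editorial addition, not a dataset row):

```javascript
// Boolean() is false for exactly the six falsy values —
// false, "", null, 0, undefined, NaN — so filter(Boolean) drops them all.
function bouncer(arr) {
  return arr.filter(Boolean);
}

console.log(bouncer([8, null, 0, NaN, undefined, ""])); // [8]

// The reporter's `item === "NaN"` compares against the string "NaN";
// Number.isNaN(item) is the correct check for the NaN value.
console.log(Number.isNaN(NaN)); // true

// Why the harness appeared to keep null: JSON cannot represent NaN,
// so stringifying the filtered result turns any surviving NaN into null.
console.log(JSON.stringify([8, NaN])); // "[8,null]"
```

This is consistent with the record's title: the filter itself was fine in Atom; the discrepancy came from `JSON.stringify` in the challenge harness.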
|
159,341
| 12,474,433,245
|
IssuesEvent
|
2020-05-29 09:38:35
|
aliasrobotics/RVD
|
https://api.github.com/repos/aliasrobotics/RVD
|
opened
|
Using xmlrpclib to parse untrusted XML data is known to be vulnerable to..., /opt/ros_noetic_ws/src/ros_comm/rospy/src/rospy/core.py:63
|
bandit bug static analysis testing triage
|
```yaml
{
"id": 1,
"title": "Using xmlrpclib to parse untrusted XML data is known to be vulnerable to..., /opt/ros_noetic_ws/src/ros_comm/rospy/src/rospy/core.py:63",
"type": "bug",
"description": "HIGH confidence of HIGH severity bug. Using xmlrpclib to parse untrusted XML data is known to be vulnerable to XML attacks. Use defused.xmlrpc.monkey_patch() function to monkey-patch xmlrpclib and mitigate XML vulnerabilities. at /opt/ros_noetic_ws/src/ros_comm/rospy/src/rospy/core.py:63 See links for more info on the bug.",
"cwe": "None",
"cve": "None",
"keywords": [
"bandit",
"bug",
"static analysis",
"testing",
"triage",
"bug"
],
"system": "",
"vendor": null,
"severity": {
"rvss-score": 0,
"rvss-vector": "",
"severity-description": "",
"cvss-score": 0,
"cvss-vector": ""
},
"links": "",
"flaw": {
"phase": "testing",
"specificity": "subject-specific",
"architectural-location": "application-specific",
"application": "N/A",
"subsystem": "N/A",
"package": "N/A",
"languages": "None",
"date-detected": "2020-05-29 (09:38)",
"detected-by": "Alias Robotics",
"detected-by-method": "testing static",
"date-reported": "2020-05-29 (09:38)",
"reported-by": "Alias Robotics",
"reported-by-relationship": "automatic",
"issue": "",
"reproducibility": "always",
"trace": "/opt/ros_noetic_ws/src/ros_comm/rospy/src/rospy/core.py:63",
"reproduction": "See artifacts below (if available)",
"reproduction-image": ""
},
"exploitation": {
"description": "",
"exploitation-image": "",
"exploitation-vector": ""
},
"mitigation": {
"description": "",
"pull-request": "",
"date-mitigation": ""
}
}
```
|
1.0
|
Using xmlrpclib to parse untrusted XML data is known to be vulnerable to..., /opt/ros_noetic_ws/src/ros_comm/rospy/src/rospy/core.py:63 - ```yaml
{
"id": 1,
"title": "Using xmlrpclib to parse untrusted XML data is known to be vulnerable to..., /opt/ros_noetic_ws/src/ros_comm/rospy/src/rospy/core.py:63",
"type": "bug",
"description": "HIGH confidence of HIGH severity bug. Using xmlrpclib to parse untrusted XML data is known to be vulnerable to XML attacks. Use defused.xmlrpc.monkey_patch() function to monkey-patch xmlrpclib and mitigate XML vulnerabilities. at /opt/ros_noetic_ws/src/ros_comm/rospy/src/rospy/core.py:63 See links for more info on the bug.",
"cwe": "None",
"cve": "None",
"keywords": [
"bandit",
"bug",
"static analysis",
"testing",
"triage",
"bug"
],
"system": "",
"vendor": null,
"severity": {
"rvss-score": 0,
"rvss-vector": "",
"severity-description": "",
"cvss-score": 0,
"cvss-vector": ""
},
"links": "",
"flaw": {
"phase": "testing",
"specificity": "subject-specific",
"architectural-location": "application-specific",
"application": "N/A",
"subsystem": "N/A",
"package": "N/A",
"languages": "None",
"date-detected": "2020-05-29 (09:38)",
"detected-by": "Alias Robotics",
"detected-by-method": "testing static",
"date-reported": "2020-05-29 (09:38)",
"reported-by": "Alias Robotics",
"reported-by-relationship": "automatic",
"issue": "",
"reproducibility": "always",
"trace": "/opt/ros_noetic_ws/src/ros_comm/rospy/src/rospy/core.py:63",
"reproduction": "See artifacts below (if available)",
"reproduction-image": ""
},
"exploitation": {
"description": "",
"exploitation-image": "",
"exploitation-vector": ""
},
"mitigation": {
"description": "",
"pull-request": "",
"date-mitigation": ""
}
}
```
|
test
|
using xmlrpclib to parse untrusted xml data is known to be vulnerable to opt ros noetic ws src ros comm rospy src rospy core py yaml id title using xmlrpclib to parse untrusted xml data is known to be vulnerable to opt ros noetic ws src ros comm rospy src rospy core py type bug description high confidence of high severity bug using xmlrpclib to parse untrusted xml data is known to be vulnerable to xml attacks use defused xmlrpc monkey patch function to monkey patch xmlrpclib and mitigate xml vulnerabilities at opt ros noetic ws src ros comm rospy src rospy core py see links for more info on the bug cwe none cve none keywords bandit bug static analysis testing triage bug system vendor null severity rvss score rvss vector severity description cvss score cvss vector links flaw phase testing specificity subject specific architectural location application specific application n a subsystem n a package n a languages none date detected detected by alias robotics detected by method testing static date reported reported by alias robotics reported by relationship automatic issue reproducibility always trace opt ros noetic ws src ros comm rospy src rospy core py reproduction see artifacts below if available reproduction image exploitation description exploitation image exploitation vector mitigation description pull request date mitigation
| 1
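The bandit finding above (B411) concerns `xmlrpclib`/`xmlrpc.client` parsing XML from untrusted peers; the message's suggested fix is `defusedxml.xmlrpc.monkey_patch()`, which swaps in a hardened parser. A minimal stdlib-only sketch of where the attack surface sits (editorial addition, not a dataset row; `defusedxml` is third-party and not imported here):

```python
# xmlrpc responses are plain XML, and ServerProxy parses whatever a
# hostile endpoint sends — that parsing step is what bandit flags.
# Calling defusedxml.xmlrpc.monkey_patch() before any xmlrpc use
# replaces the parser with one that rejects entity expansion and
# other XML attacks.
import xmlrpc.client

# Round-trip a methodResponse the same way ServerProxy does internally.
payload = xmlrpc.client.dumps((42,), methodresponse=True)
params, method = xmlrpc.client.loads(payload)
print(params)  # (42,)
```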
|
254,988
| 21,891,808,300
|
IssuesEvent
|
2022-05-20 03:04:40
|
tikv/pd
|
https://api.github.com/repos/tikv/pd
|
closed
|
Data race in TestSplitPaused test
|
component/testing type/ci
|
## Flaky Test
### Which jobs are failing
FAIL github.com/tikv/pd/server/cluster 55.402s
### CI link
https://github.com/tikv/pd/runs/6188856882?check_suite_focus=true
### Reason for failure (if possible)
```shell
WARNING: DATA RACE
Write at 0x00c00241a460 by goroutine 45:
github.com/tikv/pd/server/cluster.(*testUnsafeRecoverSuite).TestSplitPaused()
/home/runner/work/pd/pd/server/cluster/unsafe_recovery_controller_test.go:672 +0x4cb
runtime.call16()
/opt/hostedtoolcache/go/1.18.0/x64/src/runtime/asm_amd64.s:701 +0x48
reflect.Value.Call()
/opt/hostedtoolcache/go/1.18.0/x64/src/reflect/value.go:339 +0xd7
github.com/pingcap/check.(*suiteRunner).forkTest.func1()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:850 +0xa71
github.com/pingcap/check.(*suiteRunner).forkCall.func1()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:739 +0x11e
Previous read at 0x00c00241a460 by goroutine 93:
github.com/tikv/pd/server/cluster.(*RaftCluster).GetUnsafeRecoveryController()
/home/runner/work/pd/pd/server/cluster/cluster.go:551 +0xe4
github.com/tikv/pd/server/cluster.(*scheduleController).AllowSchedule()
/home/runner/work/pd/pd/server/cluster/coordinator.go:904 +0x38
github.com/tikv/pd/server/cluster.(*coordinator).runScheduler()
/home/runner/work/pd/pd/server/cluster/coordinator.go:802 +0x3de
github.com/tikv/pd/server/cluster.(*coordinator).addScheduler.func2()
/home/runner/work/pd/pd/server/cluster/coordinator.go:638 +0x47
Goroutine 45 (running) created at:
github.com/pingcap/check.(*suiteRunner).forkCall()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:734 +0x5dd
github.com/pingcap/check.(*suiteRunner).forkTest()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:832 +0x164
github.com/pingcap/check.(*suiteRunner).doRun()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:666 +0x1fa
github.com/pingcap/check.(*suiteRunner).run()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:696 +0xe5
github.com/pingcap/check.Run()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/run.go:150 +0x49
github.com/pingcap/check.RunAll()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/run.go:113 +0x710
github.com/pingcap/check.TestingT()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/run.go:99 +0x646
github.com/tikv/pd/server/cluster.Test()
/home/runner/work/pd/pd/server/cluster/cluster_test.go:48 +0x2e
testing.tRunner()
/opt/hostedtoolcache/go/1.18.0/x64/src/testing/testing.go:1439 +0x213
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.18.0/x64/src/testing/testing.go:1486 +0x47
Goroutine 93 (running) created at:
github.com/tikv/pd/server/cluster.(*coordinator).addScheduler()
/home/runner/work/pd/pd/server/cluster/coordinator.go:638 +0x35e
github.com/tikv/pd/server/cluster.(*coordinator).run()
/home/runner/work/pd/pd/server/cluster/coordinator.go:399 +0x193a
github.com/tikv/pd/server/cluster.(*testUnsafeRecoverSuite).TestSplitPaused()
/home/runner/work/pd/pd/server/cluster/unsafe_recovery_controller_test.go:665 +0x2ca
runtime.call16()
/opt/hostedtoolcache/go/1.18.0/x64/src/runtime/asm_amd64.s:701 +0x48
reflect.Value.Call()
/opt/hostedtoolcache/go/1.18.0/x64/src/reflect/value.go:339 +0xd7
github.com/pingcap/check.(*suiteRunner).forkTest.func1()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:850 +0xa71
github.com/pingcap/check.(*suiteRunner).forkCall.func1()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:739 +0x11e
==================
```
|
1.0
|
Data race in TestSplitPaused test - ## Flaky Test
### Which jobs are failing
FAIL github.com/tikv/pd/server/cluster 55.402s
### CI link
https://github.com/tikv/pd/runs/6188856882?check_suite_focus=true
### Reason for failure (if possible)
```shell
WARNING: DATA RACE
Write at 0x00c00241a460 by goroutine 45:
github.com/tikv/pd/server/cluster.(*testUnsafeRecoverSuite).TestSplitPaused()
/home/runner/work/pd/pd/server/cluster/unsafe_recovery_controller_test.go:672 +0x4cb
runtime.call16()
/opt/hostedtoolcache/go/1.18.0/x64/src/runtime/asm_amd64.s:701 +0x48
reflect.Value.Call()
/opt/hostedtoolcache/go/1.18.0/x64/src/reflect/value.go:339 +0xd7
github.com/pingcap/check.(*suiteRunner).forkTest.func1()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:850 +0xa71
github.com/pingcap/check.(*suiteRunner).forkCall.func1()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:739 +0x11e
Previous read at 0x00c00241a460 by goroutine 93:
github.com/tikv/pd/server/cluster.(*RaftCluster).GetUnsafeRecoveryController()
/home/runner/work/pd/pd/server/cluster/cluster.go:551 +0xe4
github.com/tikv/pd/server/cluster.(*scheduleController).AllowSchedule()
/home/runner/work/pd/pd/server/cluster/coordinator.go:904 +0x38
github.com/tikv/pd/server/cluster.(*coordinator).runScheduler()
/home/runner/work/pd/pd/server/cluster/coordinator.go:802 +0x3de
github.com/tikv/pd/server/cluster.(*coordinator).addScheduler.func2()
/home/runner/work/pd/pd/server/cluster/coordinator.go:638 +0x47
Goroutine 45 (running) created at:
github.com/pingcap/check.(*suiteRunner).forkCall()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:734 +0x5dd
github.com/pingcap/check.(*suiteRunner).forkTest()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:832 +0x164
github.com/pingcap/check.(*suiteRunner).doRun()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:666 +0x1fa
github.com/pingcap/check.(*suiteRunner).run()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:696 +0xe5
github.com/pingcap/check.Run()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/run.go:150 +0x49
github.com/pingcap/check.RunAll()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/run.go:113 +0x710
github.com/pingcap/check.TestingT()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/run.go:99 +0x646
github.com/tikv/pd/server/cluster.Test()
/home/runner/work/pd/pd/server/cluster/cluster_test.go:48 +0x2e
testing.tRunner()
/opt/hostedtoolcache/go/1.18.0/x64/src/testing/testing.go:1439 +0x213
testing.(*T).Run.func1()
/opt/hostedtoolcache/go/1.18.0/x64/src/testing/testing.go:1486 +0x47
Goroutine 93 (running) created at:
github.com/tikv/pd/server/cluster.(*coordinator).addScheduler()
/home/runner/work/pd/pd/server/cluster/coordinator.go:638 +0x35e
github.com/tikv/pd/server/cluster.(*coordinator).run()
/home/runner/work/pd/pd/server/cluster/coordinator.go:399 +0x193a
github.com/tikv/pd/server/cluster.(*testUnsafeRecoverSuite).TestSplitPaused()
/home/runner/work/pd/pd/server/cluster/unsafe_recovery_controller_test.go:665 +0x2ca
runtime.call16()
/opt/hostedtoolcache/go/1.18.0/x64/src/runtime/asm_amd64.s:701 +0x48
reflect.Value.Call()
/opt/hostedtoolcache/go/1.18.0/x64/src/reflect/value.go:339 +0xd7
github.com/pingcap/check.(*suiteRunner).forkTest.func1()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:850 +0xa71
github.com/pingcap/check.(*suiteRunner).forkCall.func1()
/home/runner/go/pkg/mod/github.com/pingcap/check@v0.0.0-20211026125417-57bd13f7b5f0/check.go:739 +0x11e
==================
```
|
test
|
data race in testsplitpaused test flaky test which jobs are failing fail github com tikv pd server cluster ci link reason for failure if possible shell warning data race write at by goroutine github com tikv pd server cluster testunsaferecoversuite testsplitpaused home runner work pd pd server cluster unsafe recovery controller test go runtime opt hostedtoolcache go src runtime asm s reflect value call opt hostedtoolcache go src reflect value go github com pingcap check suiterunner forktest home runner go pkg mod github com pingcap check check go github com pingcap check suiterunner forkcall home runner go pkg mod github com pingcap check check go previous read at by goroutine github com tikv pd server cluster raftcluster getunsaferecoverycontroller home runner work pd pd server cluster cluster go github com tikv pd server cluster schedulecontroller allowschedule home runner work pd pd server cluster coordinator go github com tikv pd server cluster coordinator runscheduler home runner work pd pd server cluster coordinator go github com tikv pd server cluster coordinator addscheduler home runner work pd pd server cluster coordinator go goroutine running created at github com pingcap check suiterunner forkcall home runner go pkg mod github com pingcap check check go github com pingcap check suiterunner forktest home runner go pkg mod github com pingcap check check go github com pingcap check suiterunner dorun home runner go pkg mod github com pingcap check check go github com pingcap check suiterunner run home runner go pkg mod github com pingcap check check go github com pingcap check run home runner go pkg mod github com pingcap check run go github com pingcap check runall home runner go pkg mod github com pingcap check run go github com pingcap check testingt home runner go pkg mod github com pingcap check run go github com tikv pd server cluster test home runner work pd pd server cluster cluster test go testing trunner opt hostedtoolcache go src testing testing go testing t run opt hostedtoolcache go src testing testing go goroutine running created at github com tikv pd server cluster coordinator addscheduler home runner work pd pd server cluster coordinator go github com tikv pd server cluster coordinator run home runner work pd pd server cluster coordinator go github com tikv pd server cluster testunsaferecoversuite testsplitpaused home runner work pd pd server cluster unsafe recovery controller test go runtime opt hostedtoolcache go src runtime asm s reflect value call opt hostedtoolcache go src reflect value go github com pingcap check suiterunner forktest home runner go pkg mod github com pingcap check check go github com pingcap check suiterunner forkcall home runner go pkg mod github com pingcap check check go
| 1
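The race report above shows a test goroutine writing a controller field while scheduler goroutines read it through `GetUnsafeRecoveryController`. The usual fix pattern is to guard the field with a lock on both paths. A hedged sketch of that pattern (editorial addition, not a dataset row; type and method names are illustrative, not the actual pd types):

```go
package main

import (
	"fmt"
	"sync"
)

// cluster stands in for RaftCluster: one goroutine replaces the
// controller while others read it, so both accesses take the lock.
type cluster struct {
	mu         sync.RWMutex
	controller *int
}

func (c *cluster) setController(ctl *int) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.controller = ctl
}

func (c *cluster) getController() *int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.controller
}

func main() {
	c := &cluster{}
	var wg sync.WaitGroup
	// Readers modeled on the scheduler goroutines in the trace.
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); _ = c.getController() }()
	}
	v := 7
	c.setController(&v) // concurrent write, now race-free under -race
	wg.Wait()
	fmt.Println(*c.getController()) // 7
}
```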
|
1,817
| 20,121,589,125
|
IssuesEvent
|
2022-02-08 03:18:24
|
ppy/osu
|
https://api.github.com/repos/ppy/osu
|
closed
|
Client crashing when connection is cut while spectating in a mp lobby
|
area:multiplayer type:reliability
|
### Discussed in https://github.com/ppy/osu/discussions/16821
<div type='discussions-op-text'>
<sup>Originally posted by **nzxl101** February 8, 2022</sup>
Should be expected, but could be handled better imo.
Back button gives an error and after clicking for 2-3 times the client just decides to crash.
Steps to reproduce:
1. Join a multiplayer lobby
2. Spectate
3. Cut internet connection
https://user-images.githubusercontent.com/63413558/152900105-f78ccccb-f692-4d38-af69-b6c924f3cd6a.mp4
[runtime.log](https://github.com/ppy/osu/files/8019894/runtime.log)
[network.log](https://github.com/ppy/osu/files/8019895/network.log)
</div>
|
True
|
Client crashing when connection is cut while spectating in a mp lobby - ### Discussed in https://github.com/ppy/osu/discussions/16821
<div type='discussions-op-text'>
<sup>Originally posted by **nzxl101** February 8, 2022</sup>
Should be expected, but could be handled better imo.
Back button gives an error and after clicking for 2-3 times the client just decides to crash.
Steps to reproduce:
1. Join a multiplayer lobby
2. Spectate
3. Cut internet connection
https://user-images.githubusercontent.com/63413558/152900105-f78ccccb-f692-4d38-af69-b6c924f3cd6a.mp4
[runtime.log](https://github.com/ppy/osu/files/8019894/runtime.log)
[network.log](https://github.com/ppy/osu/files/8019895/network.log)
</div>
|
non_test
|
client crashing when connection is cut while spectating in a mp lobby discussed in originally posted by february should be expected but could be handled better imo back button gives an error and after clicking for times the client just decides to crash steps to reproduce join a multiplayer lobby spectate cut internet connection
| 0
|
126,380
| 4,989,985,859
|
IssuesEvent
|
2016-12-08 13:47:53
|
openvstorage/volumedriver
|
https://api.github.com/repos/openvstorage/volumedriver
|
opened
|
live migrate has non-deterministic outcome
|
priority_critical SRP
|
As observed on OVH demo environment
Scenario: vm running fio, migrate to another location
Observed outcome: intermittent failure/success
VM had read-only fs, it got an IO error and for some reason HA kicked in
Potential issue: race between threshold on voldrv side and edge following, as suggested by @redlicha: the threshold on voldrv side could be set to 0, but this might not be what we want in this kind of demo?
So when it works it is by change on not by design, the edge client checks the location periodically and will connect to the current voldrv ...
This needs to be investigated further as suggested to be able to give a proper demo of this feature
Might be related to a restarted proxy on the OVH environment, which could have triggered HA ...
|
1.0
|
live migrate has non-deterministic outcome - As observed on OVH demo environment
Scenario: vm running fio, migrate to another location
Observed outcome: intermittent failure/success
VM had read-only fs, it got an IO error and for some reason HA kicked in
Potential issue: race between threshold on voldrv side and edge following, as suggested by @redlicha: the threshold on voldrv side could be set to 0, but this might not be what we want in this kind of demo?
So when it works it is by change on not by design, the edge client checks the location periodically and will connect to the current voldrv ...
This needs to be investigated further as suggested to be able to give a proper demo of this feature
Might be related to a restarted proxy on the OVH environment, which could have triggered HA ...
|
non_test
|
live migrate has non deterministic outcome as observed on ovh demo environment scenario vm running fio migrate to another location observed outcome intermittent failure success vm had read only fs it got an io error and for some reason ha kicked in potential issue race between threshold on voldrv side and edge following as suggested by redlicha the threshold on voldrv side could be set to but this might not be what we want in this kind of demo so when it works it is by change on not by design the edge client checks the location periodically and will connect to the current voldrv this needs to be investigated further as suggested to be able to give a proper demo of this feature might be related to a restarted proxy on the ovh environment which could have triggered ha
| 0
|
11,129
| 4,159,594,893
|
IssuesEvent
|
2016-06-17 09:41:55
|
TEAMMATES/teammates
|
https://api.github.com/repos/TEAMMATES/teammates
|
closed
|
Split instructorFeedbackEdit.js into smaller files
|
a-CodeQuality f-Submissions p.Low
|
Currently, `instructorFeedbackEdit.js` is about 1.7k lines long, containing code for the behaviors of all question types. Perhaps we can extract the question-specific code into separate .js files, and have a common .js file shared by the questions.
`instructorFeedbackEdit.js` should then contain code relating to general operations like enabling question edits, feedback path, visibility options etc.
|
1.0
|
Split instructorFeedbackEdit.js into smaller files - Currently, `instructorFeedbackEdit.js` is about 1.7k lines long, containing code for the behaviors of all question types. Perhaps we can extract the question-specific code into separate .js files, and have a common .js file shared by the questions.
`instructorFeedbackEdit.js` should then contain code relating to general operations like enabling question edits, feedback path, visibility options etc.
|
non_test
|
split instructorfeedbackedit js into smaller files currently instructorfeedbackedit js is about lines long containing code for the behaviors of all question types perhaps we can extract the question specific code into separate js files and have a common js file shared by the questions instructorfeedbackedit js should then contain code relating to general operations like enabling question edits feedback path visibility options etc
| 0
|
831,042
| 32,036,645,387
|
IssuesEvent
|
2023-09-22 15:49:55
|
AdguardTeam/AdguardBrowserExtension
|
https://api.github.com/repos/AdguardTeam/AdguardBrowserExtension
|
closed
|
store-jp.nintendo.com - issue with login
|
Bug Resolution: Cannot Reproduce Status: Resolved Priority: P4
|
### AdGuard Extension version
4.2.168
### Browser version
Firefox 117.0.1
### OS version
Windows 11
### What filters do you have enabled?
No filters
### What Stealth Mode options do you have enabled?
_No response_
### Issue Details
I can't log in using Firefox and our extension. I've got the same (sometimes different) error as explained here https://github.com/AdguardTeam/AdguardFilters/issues/162227 -> https://streamable.com/pb3dk4.
To reproduce:
1. Disable all filters
2. Open https://store-jp.nintendo.com/
3. Click on the menu that is located at the left up column, then on the element (see Screenshot 1)
4. Fill in your credentials
5. Log In
6. Eventually, you'll get an error. Mostly, it's the same as on [video ](https://streamable.com/pb3dk4) - time stamp `1.22`.
### Expected Behavior
Successful login.
### Screenshots
<details><summary>Screenshot 1:</summary>

</details>
### Additional Information
_No response_
|
1.0
|
store-jp.nintendo.com - issue with login - ### AdGuard Extension version
4.2.168
### Browser version
Firefox 117.0.1
### OS version
Windows 11
### What filters do you have enabled?
No filters
### What Stealth Mode options do you have enabled?
_No response_
### Issue Details
I can't log in using Firefox and our extension. I've got the same (sometimes different) error as explained here https://github.com/AdguardTeam/AdguardFilters/issues/162227 -> https://streamable.com/pb3dk4.
To reproduce:
1. Disable all filters
2. Open https://store-jp.nintendo.com/
3. Click on the menu that is located at the left up column, then on the element (see Screenshot 1)
4. Fill in your credentials
5. Log In
6. Eventually, you'll get an error. Mostly, it's the same as on [video ](https://streamable.com/pb3dk4) - time stamp `1.22`.
### Expected Behavior
Successful login.
### Screenshots
<details><summary>Screenshot 1:</summary>

</details>
### Additional Information
_No response_
|
non_test
|
store jp nintendo com issue with login adguard extension version browser version firefox os version windows what filters do you have enabled no filters what stealth mode options do you have enabled no response issue details i can t log in using firefox and our extension i ve got the same sometimes different error as explained here to reproduce disable all filters open click on the menu that is located at the left up column then on the element see screenshot fill in your credentials log in eventually you ll get an error mostly it s the same as on time stamp expected behavior successful login screenshots screenshot additional information no response
| 0
|
20,261
| 15,208,023,393
|
IssuesEvent
|
2021-02-17 01:35:42
|
joffrey-bion/seven-wonders
|
https://api.github.com/repos/joffrey-bion/seven-wonders
|
closed
|
Display prepared cards (back) of other players
|
usability
|
We should show which players have prepared their cards, to know who to push 😄
Additionally, a loading element in place of the prepared card when the user hasn't prepared his move yet would be great.
(seeing stuff move makes it more lively)
|
True
|
Display prepared cards (back) of other players - We should show which players have prepared their cards, to know who to push 😄
Additionally, a loading element in place of the prepared card when the user hasn't prepared his move yet would be great.
(seeing stuff move makes it more lively)
|
non_test
|
display prepared cards back of other players we should show which players have prepared their cards to know who to push 😄 additionally a loading element in place of the prepared card when the user hasn t prepared his move yet would be great seeing stuff move makes it more lively
| 0
|
433,287
| 30,321,528,405
|
IssuesEvent
|
2023-07-10 19:38:20
|
microsoft/dynamics365patternspractices
|
https://api.github.com/repos/microsoft/dynamics365patternspractices
|
opened
|
[AREA]: Manage project supply chain
|
documentation business-process project to profit
|
### Contact details
mpoirier@microsoft.com
### Organization type
Microsoft employee
### End-to-end business process
Project to profit
### Specify the business process area name for the article.
Manage project supply chain
### Enter any additional comments or information you want us to know about this business process area.
Draft in progress
### Specify the date you expect the article to be completed and ready for review.
7/31/2023
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
1.0
|
[AREA]: Manage project supply chain - ### Contact details
mpoirier@microsoft.com
### Organization type
Microsoft employee
### End-to-end business process
Project to profit
### Specify the business process area name for the article.
Manage project supply chain
### Enter any additional comments or information you want us to know about this business process area.
Draft in progress
### Specify the date you expect the article to be completed and ready for review.
7/31/2023
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
non_test
|
manage project supply chain contact details mpoirier microsoft com organization type microsoft employee end to end business process project to profit specify the business process area name for the article manage project supply chain enter any additional comments or information you want us to know about this business process area draft in progress specify the date you expect the article to be completed and ready for review code of conduct i agree to follow this project s code of conduct
| 0
|
223,311
| 7,451,937,600
|
IssuesEvent
|
2018-03-29 06:13:27
|
mazmaz2k/Modular-Construction-of-Minimal-Models
|
https://api.github.com/repos/mazmaz2k/Modular-Construction-of-Minimal-Models
|
closed
|
change algorithm to find node saperator
|
Difficulty - High Graph Iteration3 Priority - High
|
in previous Tests (Grid like Graph for example) we found that in some graphs we can't find node separator.
We will try to change the structure of the algorithm to find node separator.
|
1.0
|
change algorithm to find node saperator - in previous Tests (Grid like Graph for example) we found that in some graphs we can't find node separator.
We will try to change the structure of the algorithm to find node separator.
|
non_test
|
change algorithm to find node saperator in previous tests grid like graph for example we found that in some graphs we can t find node separator we will try to change the structure of the algorithm to find node separator
| 0
|
24,163
| 4,063,739,613
|
IssuesEvent
|
2016-05-26 01:34:26
|
rethinkdb/horizon
|
https://api.github.com/repos/rethinkdb/horizon
|
closed
|
Fix race conditions in subscription client tests
|
client testing
|
Right now you will occasionally get "Write invalidated by another request, try again", which should not happen in the client tests with a single client.
|
1.0
|
Fix race conditions in subscription client tests - Right now you will occasionally get "Write invalidated by another request, try again", which should not happen in the client tests with a single client.
|
test
|
fix race conditions in subscription client tests right now you will occasionally get write invalidated by another request try again which should not happen in the client tests with a single client
| 1
|
489,262
| 14,103,663,766
|
IssuesEvent
|
2020-11-06 10:37:33
|
ooni/probe-engine
|
https://api.github.com/repos/ooni/probe-engine
|
reopened
|
go1.15: go build -v fails unless -tags PSIPHON_DISABLE_QUIC is passed
|
bug effort/S interrupt priority/high
|
(We have now [completely mitigated](https://github.com/ooni/probe-engine/issues/866#issuecomment-696170540) this issue by building atop the [documented workaround](https://github.com/ooni/probe-engine/issues/866#issuecomment-689532954).)
This is the output I obtain when compiling using Go 1.15:
```
% ./miniooni -h
panic: qtls.ConnectionState not compatible with tls.ConnectionState
goroutine 1 [running]:
github.com/Psiphon-Labs/quic-go/internal/handshake.init.1()
/Users/sbs/go/pkg/mod/github.com/!psiphon-!labs/quic-go@v0.14.1-0.20200306193310-474e74c89fab/internal/handshake/unsafe.go:17 +0x12e
```
The reason seems to be that Go 1.15 changed its `tls.ConnectionState` structure. There is [a fix applied upstream](https://github.com/lucas-clemente/quic-go/commit/125318d9c948c380e1a8d3421bdbcada437efe0a) that handles this issue. It seems Psiphon is using its own fork of quic-go: https://github.com/Psiphon-Labs/quic-go.
This means that, for fixing the issue, the Psiphon devs need to backport the patch. For now, it's important to use Go 1.14 only for building OONI.
|
1.0
|
go1.15: go build -v fails unless -tags PSIPHON_DISABLE_QUIC is passed - (We have now [completely mitigated](https://github.com/ooni/probe-engine/issues/866#issuecomment-696170540) this issue by building atop the [documented workaround](https://github.com/ooni/probe-engine/issues/866#issuecomment-689532954).)
This is the output I obtain when compiling using Go 1.15:
```
% ./miniooni -h
panic: qtls.ConnectionState not compatible with tls.ConnectionState
goroutine 1 [running]:
github.com/Psiphon-Labs/quic-go/internal/handshake.init.1()
/Users/sbs/go/pkg/mod/github.com/!psiphon-!labs/quic-go@v0.14.1-0.20200306193310-474e74c89fab/internal/handshake/unsafe.go:17 +0x12e
```
The reason seems to be that Go 1.15 changed its `tls.ConnectionState` structure. There is [a fix applied upstream](https://github.com/lucas-clemente/quic-go/commit/125318d9c948c380e1a8d3421bdbcada437efe0a) that handles this issue. It seems Psiphon is using its own fork of quic-go: https://github.com/Psiphon-Labs/quic-go.
This means that, for fixing the issue, the Psiphon devs need to backport the patch. For now, it's important to use Go 1.14 only for building OONI.
|
non_test
|
go build v fails unless tags psiphon disable quic is passed we have now this issue by building atop the this is the output i obtain when compiling using go miniooni h panic qtls connectionstate not compatible with tls connectionstate goroutine github com psiphon labs quic go internal handshake init users sbs go pkg mod github com psiphon labs quic go internal handshake unsafe go the reason seems to be that go changed its tls connectionstate structure there is that handles this issue it seems psiphon is using its own fork of quic go this means that for fixing the issue the psiphon devs need to backport the patch for now it s important to use go only for building ooni
| 0
|
34,613
| 14,446,965,819
|
IssuesEvent
|
2020-12-08 02:35:16
|
goharbor/harbor-operator
|
https://api.github.com/repos/goharbor/harbor-operator
|
closed
|
minio name format is not correct: two -- included
|
area/dependent-services dependency/cache kind/bug release/1.0
|
```shell
steven@steven-zou:~$ k8s get all -n cluster-sample-ns
NAME READY STATUS RESTARTS AGE
pod/minio--harborcluster-sample-zone-harbor-0 1/1 Running 0 4m22s
pod/minio--harborcluster-sample-zone-harbor-1 0/1 ImagePullBackOff 0 3m3s
pod/postgresql-cluster-sample-ns-harborcluster-sample-0 1/1 Running 0 4m21s
pod/postgresql-cluster-sample-ns-harborcluster-sample-1 1/1 Running 0 4m7s
pod/rfr-harborcluster-sample-redis-0 1/1 Running 0 4m21s
pod/rfs-harborcluster-sample-redis-656c4879d9-hm6js 1/1 Running 0 4m21s
pod/rfs-harborcluster-sample-redis-656c4879d9-sv654 1/1 Running 0 4m21s
pod/rfs-harborcluster-sample-redis-656c4879d9-wbt9h 1/1 Running 0 4m21s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/harborcluster-sample-redis ClusterIP 10.98.49.182 <none> 6379/TCP 4m22s
service/minio--harborcluster-sample ClusterIP 10.109.245.0 <none> 9000/TCP 4m22s
service/minio--harborcluster-sample-hl ClusterIP None <none> 9000/TCP 4m22s
service/postgresql-cluster-sample-ns-harborcluster-sample ClusterIP 10.106.44.172 <none> 5432/TCP 4m22s
service/postgresql-cluster-sample-ns-harborcluster-sample-config ClusterIP None <none> <none> 3m58s
service/postgresql-cluster-sample-ns-harborcluster-sample-repl ClusterIP 10.107.195.109 <none> 5432/TCP 4m22s
service/rfs-harborcluster-sample-redis ClusterIP 10.110.10.162 <none> 26379/TCP 4m22s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/rfs-harborcluster-sample-redis 3/3 3 3 4m21s
NAME DESIRED CURRENT READY AGE
replicaset.apps/rfs-harborcluster-sample-redis-656c4879d9 3 3 3 4m21s
NAME READY AGE
statefulset.apps/minio--harborcluster-sample-zone-harbor 1/2 4m22s
statefulset.apps/postgresql-cluster-sample-ns-harborcluster-sample 2/2 4m21s
statefulset.apps/rfr-harborcluster-sample-redis
```
|
1.0
|
minio name format is not correct: two -- included - ```shell
steven@steven-zou:~$ k8s get all -n cluster-sample-ns
NAME READY STATUS RESTARTS AGE
pod/minio--harborcluster-sample-zone-harbor-0 1/1 Running 0 4m22s
pod/minio--harborcluster-sample-zone-harbor-1 0/1 ImagePullBackOff 0 3m3s
pod/postgresql-cluster-sample-ns-harborcluster-sample-0 1/1 Running 0 4m21s
pod/postgresql-cluster-sample-ns-harborcluster-sample-1 1/1 Running 0 4m7s
pod/rfr-harborcluster-sample-redis-0 1/1 Running 0 4m21s
pod/rfs-harborcluster-sample-redis-656c4879d9-hm6js 1/1 Running 0 4m21s
pod/rfs-harborcluster-sample-redis-656c4879d9-sv654 1/1 Running 0 4m21s
pod/rfs-harborcluster-sample-redis-656c4879d9-wbt9h 1/1 Running 0 4m21s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/harborcluster-sample-redis ClusterIP 10.98.49.182 <none> 6379/TCP 4m22s
service/minio--harborcluster-sample ClusterIP 10.109.245.0 <none> 9000/TCP 4m22s
service/minio--harborcluster-sample-hl ClusterIP None <none> 9000/TCP 4m22s
service/postgresql-cluster-sample-ns-harborcluster-sample ClusterIP 10.106.44.172 <none> 5432/TCP 4m22s
service/postgresql-cluster-sample-ns-harborcluster-sample-config ClusterIP None <none> <none> 3m58s
service/postgresql-cluster-sample-ns-harborcluster-sample-repl ClusterIP 10.107.195.109 <none> 5432/TCP 4m22s
service/rfs-harborcluster-sample-redis ClusterIP 10.110.10.162 <none> 26379/TCP 4m22s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/rfs-harborcluster-sample-redis 3/3 3 3 4m21s
NAME DESIRED CURRENT READY AGE
replicaset.apps/rfs-harborcluster-sample-redis-656c4879d9 3 3 3 4m21s
NAME READY AGE
statefulset.apps/minio--harborcluster-sample-zone-harbor 1/2 4m22s
statefulset.apps/postgresql-cluster-sample-ns-harborcluster-sample 2/2 4m21s
statefulset.apps/rfr-harborcluster-sample-redis
```
|
non_test
|
minio name format is not correct two included shell steven steven zou get all n cluster sample ns name ready status restarts age pod minio harborcluster sample zone harbor running pod minio harborcluster sample zone harbor imagepullbackoff pod postgresql cluster sample ns harborcluster sample running pod postgresql cluster sample ns harborcluster sample running pod rfr harborcluster sample redis running pod rfs harborcluster sample redis running pod rfs harborcluster sample redis running pod rfs harborcluster sample redis running name type cluster ip external ip port s age service harborcluster sample redis clusterip tcp service minio harborcluster sample clusterip tcp service minio harborcluster sample hl clusterip none tcp service postgresql cluster sample ns harborcluster sample clusterip tcp service postgresql cluster sample ns harborcluster sample config clusterip none service postgresql cluster sample ns harborcluster sample repl clusterip tcp service rfs harborcluster sample redis clusterip tcp name ready up to date available age deployment apps rfs harborcluster sample redis name desired current ready age replicaset apps rfs harborcluster sample redis name ready age statefulset apps minio harborcluster sample zone harbor statefulset apps postgresql cluster sample ns harborcluster sample statefulset apps rfr harborcluster sample redis
| 0
|
240,372
| 18,346,543,729
|
IssuesEvent
|
2021-10-08 07:12:52
|
Samsung/thorvg
|
https://api.github.com/repos/Samsung/thorvg
|
closed
|
Release Official CAPIs
|
documentation example
|
TODO:
1 Remove APIs if the default APIs (C++) have been deprecated
2 Remove Beta Tags unless Default APIs are not officially released
3 Synchronize the API description with the default APIs
4 Review & Refine API Docs if its necessary
5 Add missing CAPI Unit Test
|
1.0
|
Release Official CAPIs - TODO:
1 Remove APIs if the default APIs (C++) have been deprecated
2 Remove Beta Tags unless Default APIs are not officially released
3 Synchronize the API description with the default APIs
4 Review & Refine API Docs if its necessary
5 Add missing CAPI Unit Test
|
non_test
|
release official capis todo remove apis if the default apis c have been deprecated remove beta tags unless default apis are not officially released synchronize the api description with the default apis review refine api docs if its necessary add missing capi unit test
| 0
|
182,443
| 30,849,745,821
|
IssuesEvent
|
2023-08-02 15:53:01
|
CrocSwap/ambient-ts-app
|
https://api.github.com/repos/CrocSwap/ambient-ts-app
|
closed
|
[Enhancement]: Links dropdown UI updates
|
enhancement look-and-feel med-prio needs-design-signoff
|
### Requirements
- reduce the thickness of the logout button hover state border to match the main submit button type (eg. submit swap)
<img width="280" alt="image" src="https://github.com/CrocSwap/ambient-ts-app/assets/45405267/284bc3a6-e570-49e2-a739-434adf5f730a">
### Figma
_No response_
### Assumptions
_No response_
|
1.0
|
[Enhancement]: Links dropdown UI updates - ### Requirements
- reduce the thickness of the logout button hover state border to match the main submit button type (eg. submit swap)
<img width="280" alt="image" src="https://github.com/CrocSwap/ambient-ts-app/assets/45405267/284bc3a6-e570-49e2-a739-434adf5f730a">
### Figma
_No response_
### Assumptions
_No response_
|
non_test
|
links dropdown ui updates requirements reduce the thickness of the logout button hover state border to match the main submit button type eg submit swap img width alt image src figma no response assumptions no response
| 0
|
26,606
| 4,236,035,543
|
IssuesEvent
|
2016-07-05 17:04:09
|
semperfiwebdesign/all-in-one-seo-pack
|
https://api.github.com/repos/semperfiwebdesign/all-in-one-seo-pack
|
closed
|
Check to see if Jetpack's sitemap module is deactivated when AIOSEOP's isn't active.
|
COMPATIBILITY Needs Testing
|
If so, is that new?
|
1.0
|
Check to see if Jetpack's sitemap module is deactivated when AIOSEOP's isn't active. - If so, is that new?
|
test
|
check to see if jetpack s sitemap module is deactivated when aioseop s isn t active if so is that new
| 1
|
271,676
| 20,710,798,721
|
IssuesEvent
|
2022-03-12 00:27:09
|
suborbital/docs
|
https://api.github.com/repos/suborbital/docs
|
closed
|
Tweaks on behalf of newbies
|
documentation
|
- [x] For the "Intro to WebAssembly" section, switch the order of "History of the Internet" and "Why WebAssembly?" because the latter is a question most new users will have and the former could lose folks who think they should proceed through resources in the order given (whether we think they "should" think they need to do that doesn't matter; we don't want to lose—exclude—people over it).
- [x] Also in the "Intro to WebAssembly" section: do we want to link to outside resources like these?
- [x] #62
|
1.0
|
Tweaks on behalf of newbies - - [x] For the "Intro to WebAssembly" section, switch the order of "History of the Internet" and "Why WebAssembly?" because the latter is a question most new users will have and the former could lose folks who think they should proceed through resources in the order given (whether we think they "should" think they need to do that doesn't matter; we don't want to lose—exclude—people over it).
- [x] Also in the "Intro to WebAssembly" section: do we want to link to outside resources like these?
- [x] #62
|
non_test
|
tweaks on behalf of newbies for the intro to webassembly section switch the order of history of the internet and why webassembly because the latter is a question most new users will have and the former could lose folks who think they should proceed through resources in the order given whether we think they should think they need to do that doesn t matter we don t want to lose—exclude—people over it also in the intro to webassembly section do we want to link to outside resources like these
| 0
|
66,602
| 14,788,945,311
|
IssuesEvent
|
2021-01-12 09:54:34
|
andygonzalez2010/store
|
https://api.github.com/repos/andygonzalez2010/store
|
opened
|
CVE-2020-7662 (High) detected in websocket-extensions-0.1.3.tgz
|
security vulnerability
|
## CVE-2020-7662 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>websocket-extensions-0.1.3.tgz</b></p></summary>
<p>Generic extension manager for WebSocket connections</p>
<p>Library home page: <a href="https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.3.tgz">https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.3.tgz</a></p>
<p>Path to dependency file: store/package.json</p>
<p>Path to vulnerable library: store/node_modules/websocket-extensions/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.2.1.tgz (Root Library)
- sockjs-0.3.19.tgz
- faye-websocket-0.10.0.tgz
- websocket-driver-0.7.0.tgz
- :x: **websocket-extensions-0.1.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/andygonzalez2010/store/commit/3f6d614029f4d6cfdddfcef8468949cb7822503c">3f6d614029f4d6cfdddfcef8468949cb7822503c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
websocket-extensions npm module prior to 0.1.4 allows Denial of Service (DoS) via Regex Backtracking. The extension parser may take quadratic time when parsing a header containing an unclosed string parameter value whose content is a repeating two-byte sequence of a backslash and some other character. This could be abused by an attacker to conduct Regex Denial Of Service (ReDoS) on a single-threaded server by providing a malicious payload with the Sec-WebSocket-Extensions header.
<p>Publish Date: 2020-06-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7662>CVE-2020-7662</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7662">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7662</a></p>
<p>Release Date: 2020-06-02</p>
<p>Fix Resolution: websocket-extensions:0.1.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7662 (High) detected in websocket-extensions-0.1.3.tgz - ## CVE-2020-7662 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>websocket-extensions-0.1.3.tgz</b></p></summary>
<p>Generic extension manager for WebSocket connections</p>
<p>Library home page: <a href="https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.3.tgz">https://registry.npmjs.org/websocket-extensions/-/websocket-extensions-0.1.3.tgz</a></p>
<p>Path to dependency file: store/package.json</p>
<p>Path to vulnerable library: store/node_modules/websocket-extensions/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.2.1.tgz (Root Library)
- sockjs-0.3.19.tgz
- faye-websocket-0.10.0.tgz
- websocket-driver-0.7.0.tgz
- :x: **websocket-extensions-0.1.3.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/andygonzalez2010/store/commit/3f6d614029f4d6cfdddfcef8468949cb7822503c">3f6d614029f4d6cfdddfcef8468949cb7822503c</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
websocket-extensions npm module prior to 0.1.4 allows Denial of Service (DoS) via Regex Backtracking. The extension parser may take quadratic time when parsing a header containing an unclosed string parameter value whose content is a repeating two-byte sequence of a backslash and some other character. This could be abused by an attacker to conduct Regex Denial Of Service (ReDoS) on a single-threaded server by providing a malicious payload with the Sec-WebSocket-Extensions header.
<p>Publish Date: 2020-06-02
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7662>CVE-2020-7662</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7662">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7662</a></p>
<p>Release Date: 2020-06-02</p>
<p>Fix Resolution: websocket-extensions:0.1.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve high detected in websocket extensions tgz cve high severity vulnerability vulnerable library websocket extensions tgz generic extension manager for websocket connections library home page a href path to dependency file store package json path to vulnerable library store node modules websocket extensions package json dependency hierarchy webpack dev server tgz root library sockjs tgz faye websocket tgz websocket driver tgz x websocket extensions tgz vulnerable library found in head commit a href found in base branch master vulnerability details websocket extensions npm module prior to allows denial of service dos via regex backtracking the extension parser may take quadratic time when parsing a header containing an unclosed string parameter value whose content is a repeating two byte sequence of a backslash and some other character this could be abused by an attacker to conduct regex denial of service redos on a single threaded server by providing a malicious payload with the sec websocket extensions header publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution websocket extensions step up your open source security game with whitesource
| 0
|
351,745
| 32,024,492,788
|
IssuesEvent
|
2023-09-22 07:52:00
|
elastic/elasticsearch
|
https://api.github.com/repos/elastic/elasticsearch
|
closed
|
[CI] MixedClusterClientYamlTestSuiteIT test {p0=indices.get_index_template/10_basic/Get data stream lifecycle with default rollover} failing
|
>test >test-failure :Data Management/Data streams Team:Data Management
|
**Build scan:**
https://gradle-enterprise.elastic.co/s/reps7enlap2do/tests/:qa:mixed-cluster:v8.10.1%23mixedClusterTest/org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT/test%20%7Bp0=indices.get_index_template%2F10_basic%2FGet%20data%20stream%20lifecycle%20with%20default%20rollover%7D
**Reproduction line:**
```
./gradlew ':qa:mixed-cluster:v8.10.1#mixedClusterTest' -Dtests.class="org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT" -Dtests.method="test {p0=indices.get_index_template/10_basic/Get data stream lifecycle with default rollover}" -Dtests.seed=FC8165EFACCE4D47 -Dtests.bwc=true -Dtests.locale=mk-MK -Dtests.timezone=Asia/Dhaka -Druntime.java=20
```
**Applicable branches:**
main
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT&tests.test=test%20%7Bp0%3Dindices.get_index_template/10_basic/Get%20data%20stream%20lifecycle%20with%20default%20rollover%7D
**Failure excerpt:**
```
java.lang.AssertionError: Failure at [indices.get_index_template/10_basic:167]: field [index_templates.0.index_template.template.lifecycle.enabled] is null
at __randomizedtesting.SeedInfo.seed([FC8165EFACCE4D47:74D55A35023220BF]:0)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:582)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:534)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:578)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1623)
Caused by: java.lang.AssertionError: field [index_templates.0.index_template.template.lifecycle.enabled] is null
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertNotNull(Assert.java:712)
at org.elasticsearch.test.rest.yaml.section.MatchAssertion.doAssert(MatchAssertion.java:78)
at org.elasticsearch.test.rest.yaml.section.Assertion.execute(Assertion.java:65)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:562)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:534)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:578)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1623)
```
|
2.0
|
[CI] MixedClusterClientYamlTestSuiteIT test {p0=indices.get_index_template/10_basic/Get data stream lifecycle with default rollover} failing - **Build scan:**
https://gradle-enterprise.elastic.co/s/reps7enlap2do/tests/:qa:mixed-cluster:v8.10.1%23mixedClusterTest/org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT/test%20%7Bp0=indices.get_index_template%2F10_basic%2FGet%20data%20stream%20lifecycle%20with%20default%20rollover%7D
**Reproduction line:**
```
./gradlew ':qa:mixed-cluster:v8.10.1#mixedClusterTest' -Dtests.class="org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT" -Dtests.method="test {p0=indices.get_index_template/10_basic/Get data stream lifecycle with default rollover}" -Dtests.seed=FC8165EFACCE4D47 -Dtests.bwc=true -Dtests.locale=mk-MK -Dtests.timezone=Asia/Dhaka -Druntime.java=20
```
**Applicable branches:**
main
**Reproduces locally?:**
Yes
**Failure history:**
https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.backwards.MixedClusterClientYamlTestSuiteIT&tests.test=test%20%7Bp0%3Dindices.get_index_template/10_basic/Get%20data%20stream%20lifecycle%20with%20default%20rollover%7D
**Failure excerpt:**
```
java.lang.AssertionError: Failure at [indices.get_index_template/10_basic:167]: field [index_templates.0.index_template.template.lifecycle.enabled] is null
at __randomizedtesting.SeedInfo.seed([FC8165EFACCE4D47:74D55A35023220BF]:0)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:582)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:534)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:578)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1623)
Caused by: java.lang.AssertionError: field [index_templates.0.index_template.template.lifecycle.enabled] is null
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertNotNull(Assert.java:712)
at org.elasticsearch.test.rest.yaml.section.MatchAssertion.doAssert(MatchAssertion.java:78)
at org.elasticsearch.test.rest.yaml.section.Assertion.execute(Assertion.java:65)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.executeSection(ESClientYamlSuiteTestCase.java:562)
at org.elasticsearch.test.rest.yaml.ESClientYamlSuiteTestCase.test(ESClientYamlSuiteTestCase.java:534)
at jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
at java.lang.reflect.Method.invoke(Method.java:578)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1758)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:946)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:982)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:48)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:45)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:843)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:490)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:955)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:840)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:891)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:902)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43)
at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850)
at java.lang.Thread.run(Thread.java:1623)
```
|
test
|
mixedclusterclientyamltestsuiteit test indices get index template basic get data stream lifecycle with default rollover failing build scan reproduction line gradlew qa mixed cluster mixedclustertest dtests class org elasticsearch backwards mixedclusterclientyamltestsuiteit dtests method test indices get index template basic get data stream lifecycle with default rollover dtests seed dtests bwc true dtests locale mk mk dtests timezone asia dhaka druntime java applicable branches main reproduces locally yes failure history failure excerpt java lang assertionerror failure at field is null at randomizedtesting seedinfo seed at org elasticsearch test rest yaml esclientyamlsuitetestcase executesection esclientyamlsuitetestcase java at org elasticsearch test rest yaml esclientyamlsuitetestcase test esclientyamlsuitetestcase java at jdk internal reflect directmethodhandleaccessor invoke directmethodhandleaccessor java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com 
carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner 
run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java caused by java lang assertionerror field is null at org junit assert fail assert java at org junit assert asserttrue assert java at org junit assert assertnotnull assert java at org elasticsearch test rest yaml section matchassertion doassert matchassertion java at org elasticsearch test rest yaml section assertion execute assertion java at org elasticsearch test rest yaml esclientyamlsuitetestcase executesection esclientyamlsuitetestcase java at org elasticsearch test rest yaml esclientyamlsuitetestcase test esclientyamlsuitetestcase java at jdk internal reflect directmethodhandleaccessor invoke directmethodhandleaccessor java at java lang reflect method invoke method java at com carrotsearch randomizedtesting randomizedrunner invoke randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulesetupteardownchained evaluate testrulesetupteardownchained java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulethreadandtestname evaluate testrulethreadandtestname java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch 
randomizedtesting threadleakcontrol forktimeoutingtask threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol evaluate threadleakcontrol java at com carrotsearch randomizedtesting randomizedrunner runsingletest randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at com carrotsearch randomizedtesting randomizedrunner evaluate randomizedrunner java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask 
threadleakcontrol java at java lang thread run thread java
| 1
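The failing assertion above reports `field [index_templates.0.index_template.template.lifecycle.enabled] is null`, i.e. a dotted-path lookup into the JSON response came back empty. A minimal stdlib sketch of that lookup semantics (the `resolve` helper and the sample `response` are hypothetical, not Elasticsearch code; integer path segments index into lists, as in the YAML test runner's `match` assertions):

```python
def resolve(doc, dotted_path):
    """Walk a dotted path through nested dicts/lists; integer
    segments index lists. Returns None if any hop is missing."""
    node = doc
    for seg in dotted_path.split("."):
        if isinstance(node, list):
            node = node[int(seg)]
        elif isinstance(node, dict):
            node = node.get(seg)
        else:
            return None
        if node is None:
            return None
    return node

# Hypothetical response shape for the GET index-template call.
response = {
    "index_templates": [
        {"index_template": {"template": {"lifecycle": {"enabled": True}}}}
    ]
}
path = "index_templates.0.index_template.template.lifecycle.enabled"
value = resolve(response, path)
assert value is not None, f"field [{path}] is null"
```

In the failing mixed-cluster run, the equivalent of `value` was `None`, which is consistent with one side of the 8.10.1/main cluster not yet returning the default data stream lifecycle in the template response.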
|
216,184
| 24,245,132,926
|
IssuesEvent
|
2022-09-27 09:55:24
|
hellohaptik/chatbot_ner
|
https://api.github.com/repos/hellohaptik/chatbot_ner
|
closed
|
CVE-2021-41496 (Medium) detected in numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl - autoclosed
|
security vulnerability
|
## CVE-2021-41496 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl</b></summary>
<p>NumPy is the fundamental package for array computing with Python.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9b/04/c3846024ddc7514cde17087f62f0502abf85c53e8f69f6312c70db6d144e/numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/9b/04/c3846024ddc7514cde17087f62f0502abf85c53e8f69f6312c70db6d144e/numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/datastore,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hellohaptik/chatbot_ner/commit/a835daa282bf10ee52224e097ff04df34ab7852d">a835daa282bf10ee52224e097ff04df34ab7852d</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** Buffer overflow in the array_from_pyobj function of fortranobject.c in NumPy < 1.19, which allows attackers to conduct Denial of Service attacks by carefully constructing an array with negative values. NOTE: The vendor does not agree this is a vulnerability; the negative dimensions can only be created by an already privileged user (or internally).
Mend Note: After conducting further research, Mend has determined that numpy versions before 1.22.0 are vulnerable to CVE-2021-41496
<p>Publish Date: 2021-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41496>CVE-2021-41496</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-41496 (Medium) detected in numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl - autoclosed - ## CVE-2021-41496 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl</b></summary>
<p>NumPy is the fundamental package for array computing with Python.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/9b/04/c3846024ddc7514cde17087f62f0502abf85c53e8f69f6312c70db6d144e/numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl">https://files.pythonhosted.org/packages/9b/04/c3846024ddc7514cde17087f62f0502abf85c53e8f69f6312c70db6d144e/numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt,/datastore,/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **numpy-1.19.2-cp37-cp37m-manylinux2010_x86_64.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hellohaptik/chatbot_ner/commit/a835daa282bf10ee52224e097ff04df34ab7852d">a835daa282bf10ee52224e097ff04df34ab7852d</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
** DISPUTED ** Buffer overflow in the array_from_pyobj function of fortranobject.c in NumPy < 1.19, which allows attackers to conduct Denial of Service attacks by carefully constructing an array with negative values. NOTE: The vendor does not agree this is a vulnerability; the negative dimensions can only be created by an already privileged user (or internally).
Mend Note: After conducting further research, Mend has determined that numpy versions before 1.22.0 are vulnerable to CVE-2021-41496
<p>Publish Date: 2021-12-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-41496>CVE-2021-41496</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
cve medium detected in numpy whl autoclosed cve medium severity vulnerability vulnerable library numpy whl numpy is the fundamental package for array computing with python library home page a href path to dependency file requirements txt path to vulnerable library requirements txt datastore requirements txt dependency hierarchy x numpy whl vulnerable library found in head commit a href found in base branch develop vulnerability details disputed buffer overflow in the array from pyobj function of fortranobject c in numpy which allows attackers to conduct a denial of service attacks by carefully constructing an array with negative values note the vendor does not agree this is a vulnerability the negative dimensions can only be created by an already privileged user or internally mend note after conducting further research mend has determined that numpy versions before are vulnerable to cve publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend
| 0
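The Mend note in the record above states that numpy versions before 1.22.0 are considered vulnerable to CVE-2021-41496, and the pinned wheel is 1.19.2. A small stdlib-only sketch of the version check that remediation tooling performs (the `is_affected` helper is hypothetical; a real pipeline would use a proper version parser such as `packaging.version` rather than this simplified split):

```python
def is_affected(version: str, fixed=(1, 22, 0)) -> bool:
    """Return True if a numpy version string predates the 1.22.0
    fix threshold noted by Mend for CVE-2021-41496.
    Simplified: keeps only leading digits of each segment."""
    parts = []
    for token in version.split(".")[:3]:
        digits = "".join(ch for ch in token if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    while len(parts) < 3:
        parts.append(0)          # pad "1.19" -> (1, 19, 0)
    return tuple(parts) < fixed

pinned_ok = not is_affected("1.19.2")   # the wheel in requirements.txt
```

Here `pinned_ok` is False, which is why the scanner flags the pinned 1.19.2 wheel; bumping the requirement to >= 1.22.0 clears the finding.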
|
277,353
| 24,063,392,063
|
IssuesEvent
|
2022-09-17 05:49:29
|
JuliaDocs/Documenter.jl
|
https://api.github.com/repos/JuliaDocs/Documenter.jl
|
closed
|
Intermittent CI failures with PDF builds
|
Type: Tests Format: LaTeX
|
The "PDF/LaTeX backend" stage often fails on CI, causing the docs not to deploy (since they only deploy if that stage passes):
```
PDF/LaTeX: simple: Test Failed at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:24
Expression: joinpath(build_dir, "DocumenterLaTeXSimple.pdf") |> isfile
Stacktrace:
[1] macro expansion at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:24 [inlined]
[2] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Test/src/Test.jl:1083 [inlined]
[3] macro expansion at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:21 [inlined]
[4] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Test/src/Test.jl:1083 [inlined]
[5] top-level scope at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:20
PDF/LaTeX: Test Failed at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:32
Expression: joinpath(build_dir, "DocumenterLaTeX.pdf") |> isfile
Stacktrace:
[1] macro expansion at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:32 [inlined]
[2] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Test/src/Test.jl:1083 [inlined]
[3] macro expansion at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:29 [inlined]
[4] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Test/src/Test.jl:1083 [inlined]
[5] top-level scope at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:20
Test Summary: | Pass Fail Total
Examples/LaTeX | 2 2 4
PDF/LaTeX: simple | 1 1 2
PDF/LaTeX | 1 1 2
ERROR: LoadError: Some tests did not pass: 2 passed, 2 failed, 0 errored, 0 broken.
in expression starting at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:19
```
Restarting the build usually fixes it though.
|
1.0
|
Intermittent CI failures with PDF builds - The "PDF/LaTeX backend" stage often fails on CI, causing the docs not to deploy (since they only deploy if that stage passes):
```
PDF/LaTeX: simple: Test Failed at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:24
Expression: joinpath(build_dir, "DocumenterLaTeXSimple.pdf") |> isfile
Stacktrace:
[1] macro expansion at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:24 [inlined]
[2] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Test/src/Test.jl:1083 [inlined]
[3] macro expansion at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:21 [inlined]
[4] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Test/src/Test.jl:1083 [inlined]
[5] top-level scope at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:20
PDF/LaTeX: Test Failed at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:32
Expression: joinpath(build_dir, "DocumenterLaTeX.pdf") |> isfile
Stacktrace:
[1] macro expansion at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:32 [inlined]
[2] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Test/src/Test.jl:1083 [inlined]
[3] macro expansion at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:29 [inlined]
[4] macro expansion at /buildworker/worker/package_linux64/build/usr/share/julia/stdlib/v1.0/Test/src/Test.jl:1083 [inlined]
[5] top-level scope at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:20
Test Summary: | Pass Fail Total
Examples/LaTeX | 2 2 4
PDF/LaTeX: simple | 1 1 2
PDF/LaTeX | 1 1 2
ERROR: LoadError: Some tests did not pass: 2 passed, 2 failed, 0 errored, 0 broken.
in expression starting at /home/travis/build/JuliaDocs/Documenter.jl/test/examples/tests_latex.jl:19
```
Restarting the build usually fixes it though.
|
test
|
intermittent ci failures with pdf builds the pdf latex backend stage often fails on ci causing the docs not to deploy since they only deploy if that stage passes pdf latex simple test failed at home travis build juliadocs documenter jl test examples tests latex jl expression joinpath build dir documenterlatexsimple pdf isfile stacktrace macro expansion at home travis build juliadocs documenter jl test examples tests latex jl macro expansion at buildworker worker package build usr share julia stdlib test src test jl macro expansion at home travis build juliadocs documenter jl test examples tests latex jl macro expansion at buildworker worker package build usr share julia stdlib test src test jl top level scope at home travis build juliadocs documenter jl test examples tests latex jl pdf latex test failed at home travis build juliadocs documenter jl test examples tests latex jl expression joinpath build dir documenterlatex pdf isfile stacktrace macro expansion at home travis build juliadocs documenter jl test examples tests latex jl macro expansion at buildworker worker package build usr share julia stdlib test src test jl macro expansion at home travis build juliadocs documenter jl test examples tests latex jl macro expansion at buildworker worker package build usr share julia stdlib test src test jl top level scope at home travis build juliadocs documenter jl test examples tests latex jl test summary pass fail total examples latex pdf latex simple pdf latex error loaderror some tests did not pass passed failed errored broken in expression starting at home travis build juliadocs documenter jl test examples tests latex jl restarting the build usually fixes it though
| 1
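The reporter notes that restarting the build usually fixes the intermittent PDF failure, which is the classic signature of a flaky step worth wrapping in retries. A minimal sketch of such a wrapper (the `run_with_retries` helper and the `flaky_compile` stand-in are hypothetical; in the real suite the step would be the LaTeX compile that produces `DocumenterLaTeX.pdf`):

```python
import time

def run_with_retries(step, attempts=3, delay=0.0):
    """Re-run a flaky build step up to `attempts` times,
    re-raising the last failure if every attempt fails."""
    last_exc = None
    for _ in range(attempts):
        try:
            return step()
        except Exception as exc:   # in CI: the failed PDF compile
            last_exc = exc
            time.sleep(delay)
    raise last_exc

calls = {"n": 0}
def flaky_compile():
    calls["n"] += 1
    if calls["n"] < 2:             # first attempt fails, second succeeds
        raise RuntimeError("latexmk exited nonzero")
    return "DocumenterLaTeX.pdf"

result = run_with_retries(flaky_compile)
```

Retries mask rather than fix the underlying flakiness, but they keep the docs deploying while the root cause (an unreliable LaTeX toolchain on the CI image) is investigated.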
|
73,872
| 7,360,739,372
|
IssuesEvent
|
2018-03-10 21:41:59
|
magneticstain/Inquisition
|
https://api.github.com/repos/magneticstain/Inquisition
|
opened
|
Review Code
|
enhancement testing
|
Now that the first submodule has been completed (Alerts), we should comb through the code to review it and optimize/improve any deficiencies identified.
|
1.0
|
Review Code - Now that the first submodule has been completed (Alerts), we should comb through the code to review it and optimize/improve any deficiencies identified.
|
test
|
review code now that the first submodule has been completed alerts we should comb through the code to review it and optimize improve any deficiencies identified
| 1
|
48,310
| 25,478,655,903
|
IssuesEvent
|
2022-11-25 17:11:51
|
OpenNMT/OpenNMT-py
|
https://api.github.com/repos/OpenNMT/OpenNMT-py
|
closed
|
VRAM usage not constant in v2.0.0rc1
|
type:performance
|
Using the same (Transformer Big) model parameters in 1.2.0 and 2.0.0rc1, my 3090 24GB will run out of memory at random times during the training process (uses from 75% to more than 100% of 24GB), even when I try to reduce the batch size.
In 1.2.0, the training uses a constant amount of vram (12GB/24GB)
Also, speedwise, 1.2.0 seems faster when training (~10500/14000 tok/s vs ~9000/11000 tok/s)
See snippet for parameters:
2.0.0rc1
```
world_size: 1
gpu_ranks: [0]
queue_size: 10000
bucket_size: 32768
world_size: 1
gpu_ranks: [0]
batch_type: "tokens"
batch_size: 4096
valid_batch_size: 1
max_generator_batches: 2
accum_count: [4]
accum_steps: [0]
model_dtype: "fp32"
optim: "adam"
learning_rate: 2
warmup_steps: 8000
decay_method: "noam"
adam_beta2: 0.998
max_grad_norm: 0
label_smoothing: 0.1
param_init: 0
param_init_glorot: true
normalization: "tokens"
encoder_type: transformer
decoder_type: transformer
position_encoding: true
enc_layers: 6
dec_layers: 6
heads: 16
rnn_size: 1024
word_vec_size: 1024
transformer_ff: 4096
dropout_steps: [0]
dropout: [0.3]
attention_dropout: [0.1]
```
vs 1.2.0
``` --layers 6 --rnn_size 1024 --word_vec_size 1024 --transformer_ff 4096 --heads 16 \
--encoder_type transformer --decoder_type transformer --position_encoding \
--train_steps 300000 --max_generator_batches 2 --dropout 0.1 \
--batch_size 4096 --batch_type tokens --normalization tokens --accum_count 2 \
--optim adam --adam_beta2 0.998 --decay_method noam --warmup_steps 8000 --learning_rate 2 \
--max_grad_norm 0 --param_init 0 --param_init_glorot \
--label_smoothing 0.1 --valid_steps 10000 --save_checkpoint_steps 10000 \
--world_size 1 --gpu_ranks 0
```
Not sure if the VRAM usage is more constrained because of sharding (which v2 doesn't seem to use).
|
True
|
VRAM usage not constant in v2.0.0rc1 - Using the same (Transformer Big) model parameters in 1.2.0 and 2.0.0rc1, my 3090 24GB will run out of memory at random times during the training process (uses from 75% to more than 100% of 24GB), even when I try to reduce the batch size.
|
non_test
|
| 0
|
146,759
| 11,754,782,680
|
IssuesEvent
|
2020-03-13 08:02:24
|
microsoft/appcenter
|
https://api.github.com/repos/microsoft/appcenter
|
closed
|
Appium update from 1.11.0 to any version with appium-desktop version available
|
feature request test
|
**Describe the solution you'd like**
Update appium to ANY version with desktop version available.
https://github.com/appium/appium-desktop
**Describe alternatives you've considered**
As an alternative: provide a fixed appium-desktop v1.11.0 for all platforms
**Additional context**
Appium desktop 1.11.0 binaries removed due to some major bug
Reference: https://github.com/appium/appium-desktop/releases/tag/v1.11.0
This means you can't just take a desktop version and use it; you have to get Appium elsewhere.
Usability is seriously damaged by this situation.
P.S.
in my case, instructions of "how to compile webDriver for appium on MAC" did not work and took a lot of time.
|
1.0
|
Appium update from 1.11.0 to any version with appium-desktop version available - **Describe the solution you'd like**
|
test
|
| 1
|
55,984
| 3,075,605,943
|
IssuesEvent
|
2015-08-20 14:29:31
|
RobotiumTech/robotium
|
https://api.github.com/repos/RobotiumTech/robotium
|
closed
|
Drag is not working on an object
|
bug imported Priority-Medium wontfix
|
_From [ajemeis...@gmail.com](https://code.google.com/u/116899119933759348359/) on June 27, 2014 08:19:43_
What steps will reproduce the problem? 1. Create a demo app to drag a button from one part of the screen to the other (I can send you mine if you'd like)
2. First drag any area that is not the button (with pointer location on, so you can see the drag after it happens)
3. After confirming that is successful, try dragging an object (that button) across the screen (use something like solo.drag(400f, 400f, 350f, 900f, 10)). What is the expected output? What do you see instead? The button should be dragged and released where the end coordinates are. Instead, a drag begins and a DragShadow appears for the button, but then the drag immediately halts and refuses to perform any more steps on the AVD. What version of the product are you using? On what operating system? Robotium 5.2.1 running Android JUnit tests
Robotium 4.3.1 running calabash-android tests
Windows 8
AVD Api 16-19 and Genymotion Api 19 Please provide any additional information below. I can provide you with a sample if you'd like to see that the drag just stops. I have also created a stack overflow question for this @ http://stackoverflow.com/questions/24438463/calabash-android-dragging-button-with-drag-shadow I got drag to work for a rectangle drawn on canvas, but performing a drag over an object halts immediately
_Original issue: http://code.google.com/p/robotium/issues/detail?id=618_
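For reference when reasoning about where the gesture stalls: `solo.drag(fromX, toX, fromY, toY, stepCount)` presses at the start point and then issues a series of intermediate move events before releasing. The sketch below (Python for illustration only — Robotium itself is Java — and assuming simple linear interpolation between the endpoints) shows the intermediate coordinates the reporter's call would traverse:

```python
def drag_steps(from_x, to_x, from_y, to_y, step_count):
    """Linearly interpolate the intermediate points of a drag gesture.

    Illustrative only: approximates how a touch drag is broken into
    discrete move events between press and release.
    """
    points = []
    for i in range(1, step_count + 1):
        t = i / step_count
        points.append((from_x + (to_x - from_x) * t,
                       from_y + (to_y - from_y) * t))
    return points

# The reporter's call, solo.drag(400f, 400f, 350f, 900f, 10),
# drags from (400, 350) to (400, 900) in 10 steps.
pts = drag_steps(400, 400, 350, 900, 10)
print(pts[-1])  # (400.0, 900.0)
```

If the DragShadow appears but the pointer trail stops after the first point or two, the halt is happening inside the move-event loop rather than at press time, which narrows down where to look.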
|
1.0
|
Drag is not working on an object - _From [ajemeis...@gmail.com](https://code.google.com/u/116899119933759348359/) on June 27, 2014 08:19:43_
|
non_test
|
| 0
|
155,116
| 12,239,254,931
|
IssuesEvent
|
2020-05-04 21:19:21
|
bcgov/range-web
|
https://api.github.com/repos/bcgov/range-web
|
closed
|
Prevent staff and AH from seeing each others in progress RUPs (Privacy feature).
|
Enhancement Has Important Notes medium ready to test
|
EDIT: Hi Mike here, @LisaMoore1 skip to the bottom to see what to test. : )
@micheal-w-wells can you take a look at this while reviewing the other versioning stuff or forward on to Caleb? Thanks
I’ve put a bug label on this as I think it is a bug, but I am not able to see what is happening with the different versions. Someone else will need to confirm whether this aspect of versioning has been addressed or not.
Part of our relationship is that if an AH is working at adding content staff cannot see it until the AH actually chooses to submit it to staff.
Same applies to amendments — until amendments are signed and submitted staff should not be able to view content
AHs should not be able to view any content staff are working on in amendments or drafts until staff submit it to the AH.
Basically, if a version is in an “edit” mode for the other user, I (the person logged in) should not be able to see the real-time content (what has been changed since I last had control). Note that you can see this intent in the headers that are in place indicating to the AH that their content will not be visible to staff until they submit.
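The rule described above boils down to a single visibility predicate: an in-progress version is visible only to the party currently editing it, and becomes visible to the other party once it is submitted. A minimal sketch (the status and party names here are hypothetical stand-ins, not the actual range-web data model):

```python
def can_view(version_status: str, editor: str, viewer: str) -> bool:
    """A version in an 'edit' state is visible only to whoever holds it.

    version_status: "draft" (still being edited) or "submitted"
    editor:         party currently editing ("ah" or "staff")
    viewer:         party requesting to view
    """
    if version_status == "submitted":
        return True          # submitted content is visible to both parties
    return viewer == editor  # in-progress content stays private

assert can_view("draft", editor="ah", viewer="staff") is False
assert can_view("draft", editor="ah", viewer="ah") is True
assert can_view("submitted", editor="ah", viewer="staff") is True
```

The same predicate covers both directions (AH drafts hidden from staff, staff amendments hidden from the AH), which is why a single check at version-fetch time is enough.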
|
1.0
|
Prevent staff and AH from seeing each others in progress RUPs (Privacy feature). - EDIT: Hi Mike here, @LisaMoore1 skip to the bottom to see what to test. : )
|
test
|
| 1
|
206,473
| 15,731,680,197
|
IssuesEvent
|
2021-03-29 17:22:13
|
celo-org/celo-monorepo
|
https://api.github.com/repos/celo-org/celo-monorepo
|
closed
|
[FLAKEY TEST] end-to-end-geth-validator-order-test -> celotool -> governance tests -> Validator ordering -> properly orders validators randomly
|
FLAKEY celotool end-to-end-geth-validator-order-test
|
FlakeTracker closed this issue after commit fb8da80fbe7ff0444d3f8137c935154dd3677313
Discovered in PR https://github.com/celo-org/celo-monorepo/pull/4303
Attempt No. 1:
AssertionError: 0x41a865269F5182Ea69bFb0cFa1E8411C82aDE941 should have mined 6 blocks: expected 7 to equal 6
at Context.<anonymous> (/home/circleci/app/packages/celotool/src/e2e-tests/validator_order_tests.ts:76:16)
at Generator.next (<anonymous>)
at fulfilled (/home/circleci/app/packages/celotool/src/e2e-tests/validator_order_tests.ts:5:58)
at process._tickCallback (internal/process/next_tick.js:68:7)
Attempt No. 2:
Test Passed!
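The failed assertion checks how many blocks each validator mined. With round-robin proposer ordering, N validators over B blocks should each mine either floor(B/N) or floor(B/N)+1 blocks, so an "expected 7 to equal 6" failure suggests the test's expected count did not allow for the remainder validators that mine one extra block. A sketch of the expected distribution (illustrative only, not the actual celotool logic):

```python
def expected_block_counts(num_blocks: int, num_validators: int):
    """Blocks mined per validator under strict round-robin ordering."""
    base, remainder = divmod(num_blocks, num_validators)
    # `remainder` validators mine one extra block; the rest mine `base`.
    return [base + 1] * remainder + [base] * (num_validators - remainder)

# e.g. 20 blocks across 3 validators -> one validator legitimately
# mines 6 while the others mine 7.
print(expected_block_counts(20, 3))  # [7, 7, 6]
```

An assertion that accepts either `base` or `base + 1` per validator (while still checking the total) would make the test robust to where the epoch boundary falls.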
|
1.0
|
[FLAKEY TEST] end-to-end-geth-validator-order-test -> celotool -> governance tests -> Validator ordering -> properly orders validators randomly - FlakeTracker closed this issue after commit fb8da80fbe7ff0444d3f8137c935154dd3677313
|
test
|
| 1
|
105,364
| 9,078,801,694
|
IssuesEvent
|
2019-02-16 00:06:33
|
strongbox/strongbox
|
https://api.github.com/repos/strongbox/strongbox
|
opened
|
org.carlspring.strongbox.artifact.ArtifactNotFoundException: null
|
good first issue help wanted testing
|
# Task Description
The `org.carlspring.strongbox.controllers.layout.maven.MavenArtifactControllerTest.testNonExistingArtifactDownload` causes the following exception to be thrown:
```
14:22:55.669 14-02-2019 | ERROR | kJoinPool-1-worker-1 | o.c.s.providers.repository.ProxyRepositoryProvider | Failed to resolve Path for proxied artifact [/home/jenkins/workspace/jenkins-strongbox-strongbox-master-2023-SGXT8nC/strongbox-web-core/target/strongbox-vault/storages/storage-common-proxies/maven-central/john/doe]
org.carlspring.strongbox.artifact.ArtifactNotFoundException: null
at org.carlspring.strongbox.providers.repository.proxied.RemoteArtifactStreamFetcher.getConnection(RemoteArtifactStreamFetcher.java:70)
at org.carlspring.strongbox.providers.repository.proxied.RemoteArtifactStreamFetcher.access$000(RemoteArtifactStreamFetcher.java:15)
at org.carlspring.strongbox.providers.repository.proxied.RemoteArtifactStreamFetcher$RemoteArtifactInputStream.getConnection(RemoteArtifactStreamFetcher.java:117)
at org.carlspring.strongbox.providers.repository.proxied.RemoteArtifactStreamFetcher$RemoteArtifactInputStream.getTarget(RemoteArtifactStreamFetcher.java:128)
at org.carlspring.strongbox.providers.repository.proxied.RemoteArtifactStreamFetcher$RemoteArtifactInputStream.available(RemoteArtifactStreamFetcher.java:167)
at java.io.FilterInputStream.available(FilterInputStream.java:168)
at org.carlspring.strongbox.providers.repository.proxied.ProxyRepositoryInputStream.available(ProxyRepositoryInputStream.java:94)
at java.io.BufferedInputStream.available(BufferedInputStream.java:410)
at org.carlspring.strongbox.providers.repository.proxied.ProxyRepositoryArtifactResolver.doFetch(ProxyRepositoryArtifactResolver.java:88)
at org.carlspring.strongbox.providers.repository.proxied.ProxyRepositoryArtifactResolver.fetchRemoteResource(ProxyRepositoryArtifactResolver.java:75)
at org.carlspring.strongbox.providers.repository.ProxyRepositoryProvider.resolvePathExclusive(ProxyRepositoryProvider.java:101)
at org.carlspring.strongbox.providers.repository.ProxyRepositoryProvider.fetchPath(ProxyRepositoryProvider.java:73)
at org.carlspring.strongbox.providers.io.AbstractRepositoryProvider.fetchPath(AbstractRepositoryProvider.java:231)
at org.carlspring.strongbox.providers.io.AbstractRepositoryProvider.fetchPath(AbstractRepositoryProvider.java:43)
at org.carlspring.strongbox.providers.io.AbstractRepositoryProvider$$FastClassBySpringCGLIB$$f2b828e0.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:684)
at org.carlspring.strongbox.providers.repository.ProxyRepositoryProvider$$EnhancerBySpringCGLIB$$72184b3.fetchPath(<generated>)
at org.carlspring.strongbox.services.impl.ArtifactResolutionServiceImpl.resolvePath(ArtifactResolutionServiceImpl.java:98)
at org.carlspring.strongbox.controllers.layout.maven.MavenArtifactController.download(MavenArtifactController.java:112)
at org.carlspring.strongbox.controllers.layout.maven.MavenArtifactController$$FastClassBySpringCGLIB$$760c4ba2.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:69)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
at org.carlspring.strongbox.controllers.layout.maven.MavenArtifactController$$EnhancerBySpringCGLIB$$1342010a.download(<generated>)
at sun.reflect.GeneratedMethodAccessor523.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:189)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:800)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
at org.springframework.test.web.servlet.TestDispatcherServlet.service(TestDispatcherServlet.java:71)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.springframework.mock.web.MockFilterChain$ServletFilterProxy.doFilter(MockFilterChain.java:166)
at org.springframework.mock.web.MockFilterChain.doFilter(MockFilterChain.java:133)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:320)
at org.carlspring.strongbox.security.authentication.StrongboxAuthenticationFilter.doFilterInternal(StrongboxAuthenticationFilter.java:63)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:119)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.carlspring.strongbox.security.authentication.CustomAnonymousAuthenticationFilter.doFilter(CustomAnonymousAuthenticationFilter.java:61)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:96)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:74)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:215)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:178)
at org.springframework.mock.web.MockFilterChain.doFilter(MockFilterChain.java:133)
at org.springframework.test.web.servlet.MockMvc.perform(MockMvc.java:182)
at io.restassured.module.mockmvc.internal.MockMvcRequestSenderImpl.performRequest(MockMvcRequestSenderImpl.java:194)
at io.restassured.module.mockmvc.internal.MockMvcRequestSenderImpl.sendRequest(MockMvcRequestSenderImpl.java:430)
at io.restassured.module.mockmvc.internal.MockMvcRequestSenderImpl.get(MockMvcRequestSenderImpl.java:608)
at io.restassured.module.mockmvc.internal.MockMvcRequestSenderImpl.get(MockMvcRequestSenderImpl.java:76)
at org.carlspring.strongbox.rest.client.RestAssuredArtifactClient.getResourceWithResponse(RestAssuredArtifactClient.java:276)
at org.carlspring.strongbox.controllers.layout.maven.MavenArtifactControllerTest.testNonExistingArtifactDownload(MavenArtifactControllerTest.java:676)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:532)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:115)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:171)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:167)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:114)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:59)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:108)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:170)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.executeNonConcurrentTasks(ForkJoinPoolHierarchicalTestExecutorService.java:140)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:120)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:112)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:170)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.executeNonConcurrentTasks(ForkJoinPoolHierarchicalTestExecutorService.java:140)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:120)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:112)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:170)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
404
```
If this is a real issue, we need to investigate it. If it's an expected exception, then we should use the respective JUnit methods to intercept it and keep it silent, as the output logs should not contain unnecessary exceptions, because these can be alarming and confusing when looking at build logs.
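The suggested fix has a simple shape: when a test deliberately requests a missing artifact, catch the expected exception and turn it into the expected 404, so it never surfaces as an ERROR-level stack trace in the build log. The project uses JUnit (e.g. an `assertThrows`-style check), but the pattern is language-agnostic; the Python sketch below uses hypothetical stand-ins (`fetch`, `ArtifactNotFoundError`), not the real Strongbox API:

```python
class ArtifactNotFoundError(Exception):
    """Stand-in for org.carlspring.strongbox.artifact.ArtifactNotFoundException."""

def fetch(path: str) -> bytes:
    # Stand-in for the proxied lookup: this path does not exist upstream.
    raise ArtifactNotFoundError(path)

def test_non_existing_artifact_download() -> str:
    try:
        fetch("storages/storage-common-proxies/maven-central/john/doe")
    except ArtifactNotFoundError:
        return "404"  # the expected outcome, handled quietly
    raise AssertionError("expected ArtifactNotFoundError")

print(test_non_existing_artifact_download())  # 404
```

Intercepting the exception at the test (or resolver) level keeps the 404 behaviour verified while leaving the build log clean.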
# Help
* [Our chat](https://chat.carlspring.org/)
* Points of contact:
* @carlspring
* @sbespalov
* @fuss86
|
1.0
|
org.carlspring.strongbox.artifact.ArtifactNotFoundException: null - # Task Description
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:189)
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138)
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895)
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:800)
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1038)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:942)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005)
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:897)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:882)
at org.springframework.test.web.servlet.TestDispatcherServlet.service(TestDispatcherServlet.java:71)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.springframework.mock.web.MockFilterChain$ServletFilterProxy.doFilter(MockFilterChain.java:166)
at org.springframework.mock.web.MockFilterChain.doFilter(MockFilterChain.java:133)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:320)
at org.carlspring.strongbox.security.authentication.StrongboxAuthenticationFilter.doFilterInternal(StrongboxAuthenticationFilter.java:63)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:119)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.carlspring.strongbox.security.authentication.CustomAnonymousAuthenticationFilter.doFilter(CustomAnonymousAuthenticationFilter.java:61)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:96)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:74)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56)
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:215)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:178)
at org.springframework.mock.web.MockFilterChain.doFilter(MockFilterChain.java:133)
at org.springframework.test.web.servlet.MockMvc.perform(MockMvc.java:182)
at io.restassured.module.mockmvc.internal.MockMvcRequestSenderImpl.performRequest(MockMvcRequestSenderImpl.java:194)
at io.restassured.module.mockmvc.internal.MockMvcRequestSenderImpl.sendRequest(MockMvcRequestSenderImpl.java:430)
at io.restassured.module.mockmvc.internal.MockMvcRequestSenderImpl.get(MockMvcRequestSenderImpl.java:608)
at io.restassured.module.mockmvc.internal.MockMvcRequestSenderImpl.get(MockMvcRequestSenderImpl.java:76)
at org.carlspring.strongbox.rest.client.RestAssuredArtifactClient.getResourceWithResponse(RestAssuredArtifactClient.java:276)
at org.carlspring.strongbox.controllers.layout.maven.MavenArtifactControllerTest.testNonExistingArtifactDownload(MavenArtifactControllerTest.java:676)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:532)
at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:115)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:171)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:167)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:114)
at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:59)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:108)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:170)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.executeNonConcurrentTasks(ForkJoinPoolHierarchicalTestExecutorService.java:140)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:120)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:112)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:170)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.executeNonConcurrentTasks(ForkJoinPoolHierarchicalTestExecutorService.java:140)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService.invokeAll(ForkJoinPoolHierarchicalTestExecutorService.java:120)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$4(NodeTestTask.java:112)
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:72)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:98)
at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:74)
at org.junit.platform.engine.support.hierarchical.ForkJoinPoolHierarchicalTestExecutorService$ExclusiveTask.compute(ForkJoinPoolHierarchicalTestExecutorService.java:170)
at java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
404
```
If this is a real issue, we need to investigate it. If it's an expected exception, then we should use the respective JUnit methods to intercept it and keep it silent: the output logs should not contain unnecessary exceptions, because these can be alarming and confusing when looking at build logs.
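The interception suggested above would typically be JUnit 5's `Assertions.assertThrows`. The sketch below is hypothetical and self-contained: `assertThrows` is a local helper mirroring the JUnit one, and `ArtifactNotFoundException` is a stand-in for `org.carlspring.strongbox.artifact.ArtifactNotFoundException`, so the snippet compiles without the strongbox or JUnit classpath.

```java
// Minimal sketch of intercepting an expected exception so it never reaches
// the build logs. In a real test this would be JUnit 5's
// Assertions.assertThrows; the helper below mirrors its behaviour.
public class ExpectedExceptionSketch {

    // Stand-in for org.carlspring.strongbox.artifact.ArtifactNotFoundException.
    static class ArtifactNotFoundException extends RuntimeException {
        ArtifactNotFoundException(String message) { super(message); }
    }

    // Mirrors JUnit 5's assertThrows(Class, Executable): runs the action,
    // fails if nothing (or the wrong type) is thrown, otherwise returns the
    // caught exception for further assertions.
    static <T extends Throwable> T assertThrows(Class<T> expected, Runnable action) {
        try {
            action.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);
            }
            throw new AssertionError("Unexpected exception type: " + t.getClass(), t);
        }
        throw new AssertionError("Expected " + expected.getName() + " to be thrown");
    }

    public static void main(String[] args) {
        ArtifactNotFoundException ex = assertThrows(
                ArtifactNotFoundException.class,
                () -> { throw new ArtifactNotFoundException("john/doe"); });
        System.out.println("intercepted: " + ex.getMessage());
    }
}
```

Note this only helps if the exception actually propagates to the test method; since the stack trace above is logged inside `ProxyRepositoryProvider`, the fix may instead be to catch (or log at a lower level) inside the provider.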
# Help
* [Our chat](https://chat.carlspring.org/)
* Points of contact:
* @carlspring
* @sbespalov
* @fuss86
|
321,398
| 27,526,547,694
|
IssuesEvent
|
2023-03-06 18:29:21
|
Realm667/WolfenDoom
|
https://api.github.com/repos/Realm667/WolfenDoom
|
closed
|
[C3M3_A] Various issues
|
playtesting gameplay mapping
|
C3M3_A ...
- [x] 1. Not that BJ would want to, but destroying the radio at the start still makes static.
- [x] 2. Some odd floor scrolling at sector 148.
- [x] 3. Half barrel of zyklon at x 2173, y -3417.
❌ 4. Does the phonograph at x 6632, y 13533 play? --Probably when you tested it, you didn't notice it was destroyed; the grenadier next to it might have broken it. It works as usual. - ozy81
- [x] 5. faux chimney ...

|
661,010
| 22,038,390,894
|
IssuesEvent
|
2022-05-29 00:30:27
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
PPP: gsm_modem: LCP never gets past REQUEST_SENT phase
|
bug priority: low area: Modem Stale
|
**Describe the bug**
I am trying the [gsm_modem](https://github.com/zephyrproject-rtos/zephyr/tree/main/samples/net/gsm_modem) sample. The board communicates with the modem over UART. `gsm_ppp` puts the modem into PPP mode correctly and the two start PPP communication, but LCP fails: it keeps sending Configure-Req messages, times out, and in the end the link is terminated.
- I am using modem Quectel BG95.
- My modem is connected to NB-IoT network.
- I did not change CONFIG_MODEM_GSM_APN, so it is using the default "internet". I tried to set the APN returned by the AT+COPS? command, but the AT+CGDCONT command fails if the APN has spaces in it (the APN returned by COPS has spaces).
- I was listening to what the modem sends over UART: it sends some binary PPP data and then NO CARRIER.
- When I was using CONFIG_NET_PPP_LOG_LEVEL_DBG=y, the logs were huge and contained a lot of `<dbg> net_ppp.ppp_consume_ringbuf: Ringbuf 0x20001f54 is empty!` messages.
This is my prj.conf:
```
# UART support
CONFIG_SERIAL=y
# GSM modem support
CONFIG_MODEM=y
CONFIG_MODEM_GSM_PPP=y
# PPP networking support
CONFIG_NET_DRIVERS=y
CONFIG_NET_PPP=y
CONFIG_NET_L2_PPP=y
CONFIG_NET_NATIVE=y
CONFIG_NETWORKING=y
CONFIG_NET_L2_PPP_TIMEOUT=25000
# IPv4 enables PPP IPCP support
CONFIG_NET_IPV4=y
CONFIG_NET_IPV6=n
# Network management events
CONFIG_NET_CONNECTION_MANAGER=y
# Log buffers, modem and PPP
CONFIG_LOG=y
CONFIG_NET_LOG=y
CONFIG_LOG_BUFFER_SIZE=16384
CONFIG_LOG_STRDUP_BUF_COUNT=200
CONFIG_MODEM_LOG_LEVEL_DBG=y
#CONFIG_NET_PPP_LOG_LEVEL_DBG=y
CONFIG_NET_L2_PPP_LOG_LEVEL_DBG=y
CONFIG_NET_MGMT_EVENT_LOG_LEVEL_DBG=y
CONFIG_NET_CONNECTION_MANAGER_LOG_LEVEL_DBG=y
CONFIG_NET_SHELL=y
CONFIG_MODEM_SHELL=y
CONFIG_ENTROPY_GENERATOR=y
CONFIG_TEST_RANDOM_GENERATOR=y
```
These are the logs and console output:
```
*** Booting Zephyr OS build zephyr-v2.6.0-5357-g81b1e7fdacf0 ***
[00:00:00.000,000] <dbg> modem_gsm.gsm_init: Generic GSM modem (0x200008b0)
[00:00:00.000,000] <dbg> modem_gsm.gsm_init: iface->read 0x80139cb iface->write 0x8013a01
[00:00:00.000,000] <dbg> modem_gsm.gsm_rx: starting
[00:00:00.000,000] <dbg> modem_gsm.gsm_configure: Starting modem 0x200008b0 configuration
[00:00:00.000,000] <dbg> net_mgmt.net_mgmt_event_init: (main): Net MGMT initialized: queue of 2 entries, stack size of 768
[00:00:00.000,000] <dbg> net_l2_ppp.net_ppp_init: (main): Initializing PPP L2 0x20001cf0 for iface 0x200003a0
[00:00:00.001,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.001,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.006,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [+COPS:] (len:21)
[00:00:00.006,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.006,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.007,000] <dbg> net_mgmt.net_mgmt_add_event_callback: (conn_mgr): Adding event callback 0x20002b38
[00:00:00.007,000] <dbg> net_mgmt.net_mgmt_add_event_callback: (conn_mgr): Adding event callback 0x20002b4c
[00:00:00.007,000] <dbg> conn_mgr.conn_mgr_handler: (conn_mgr): Connection Manager started
[00:00:00.007,000] <dbg> net_l2_ppp.tx_handler: (tx_handler_thread): PPP TX started
[00:00:00.007,000] <inf> sample_gsm_ppp: Board 'stm32f4_disco' APN 'internet' UART 'UART_2' device 0x8014d94 (gsm_ppp)
[00:00:00.007,000] <dbg> net_mgmt.net_mgmt_add_event_callback: (main): Adding event callback 0x200023b4
[00:00:00.008,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.008,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.059,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.059,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.111,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.111,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.163,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.163,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.217,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.217,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.219,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [] (len:7)
[00:00:00.219,000] <inf> modem_gsm: Manufacturer: Quectel
[00:00:00.220,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.220,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.272,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [] (len:7)
[00:00:00.272,000] <inf> modem_gsm: Model: BG95-M3
[00:00:00.273,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.273,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.325,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [] (len:14)
[00:00:00.325,000] <inf> modem_gsm: Revision: BG95M3LAR02A03
[00:00:00.326,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.326,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.379,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [] (len:15)
[00:00:00.379,000] <inf> modem_gsm: IMEI: 867730057661898
[00:00:00.380,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.380,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.433,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [] (len:15)
[00:00:00.433,000] <inf> modem_gsm: IMSI: 219101135971259
[00:00:00.433,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.433,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.487,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [] (len:27)
[00:00:00.487,000] <inf> modem_gsm: ICCID: 8938591419010305548F
[00:00:00.488,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.488,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.490,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [+CGATT:] (len:9)
[00:00:00.490,000] <inf> modem_gsm: Attached to packet service!
[00:00:00.490,000] <dbg> modem_gsm.gsm_finalize_connection: modem attach returned 0, read RSSI
[00:00:00.491,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.491,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:00.491,000] <dbg> modem_gsm.gsm_finalize_connection: Not valid RSSI, retrying...
[00:00:00.491,000] <dbg> net_l2_ppp.ppp_startup: (sysworkq): PPP 0x20001cf0 startup for interface 0x200003a0
[00:00:00.491,000] <dbg> net_l2_ppp.ipcp_init: (sysworkq): proto IPCP (0x8021) fsm 0x20001e08
[00:00:00.491,000] <dbg> net_l2_ppp.lcp_init: (sysworkq): proto LCP (0xc021) fsm 0x20001d38
[00:00:00.493,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:00.493,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:02.491,000] <dbg> modem_gsm.gsm_configure: Starting modem 0x200008b0 configuration
[00:00:02.492,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:02.492,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:02.495,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [+CSQ:] (len:11)
[00:00:02.495,000] <inf> modem_gsm: RSSI: -71
[00:00:02.495,000] <dbg> modem_gsm.gsm_finalize_connection: modem setup returned 0, enable PPP
[00:00:02.495,000] <dbg> modem_cmd_handler.cmd_handler_process_rx_buf: match cmd [OK] (len:2)
[00:00:02.495,000] <dbg> modem_gsm.gsm_cmd_ok: ok
[00:00:02.495,000] <inf> net_ppp: Initializing PPP to use UART_2
[00:00:02.495,000] <dbg> net_l2_ppp.carrier_on_off: (sysworkq): Carrier ON for interface 0x200003a0
[00:00:02.495,000] <dbg> net_l2_ppp.ppp_change_phase_debug: (sysworkq): [0x20001cf0] phase DEAD (0) => ESTABLISH (1) (start_ppp():209)
[00:00:02.495,000] <dbg> net_l2_ppp.ppp_fsm_lower_up: (sysworkq): [LCP/0x20001d38] Current state INITIAL (0)
[00:00:02.495,000] <dbg> net_l2_ppp.ppp_change_state_debug: (sysworkq): [LCP/0x20001d38] state INITIAL (0) => CLOSED (2) (ppp_fsm_lower_up():311)
[00:00:02.495,000] <dbg> net_l2_ppp.start_ppp: (sysworkq): Starting LCP
[00:00:02.495,000] <dbg> net_l2_ppp.ppp_fsm_open: (sysworkq): [LCP/0x20001d38] Current state CLOSED (2)
[00:00:02.495,000] <dbg> net_l2_ppp.ppp_change_state_debug: (sysworkq): [LCP/0x20001d38] state CLOSED (2) => REQUEST_SENT (6) (ppp_fsm_open():334)
[00:00:02.495,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:00:02.495,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:00:02.495,000] <dbg> net_mgmt.net_mgmt_event_notify_with_info: (sysworkq): Notifying Event layer 1 code 1 type 2
[00:00:02.495,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:00:02.495,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:00:02.497,000] <dbg> net_mgmt.mgmt_thread: (net_mgmt): Handling events, forwarding it relevantly
[00:00:02.497,000] <dbg> net_mgmt.mgmt_run_callbacks: (net_mgmt): Event layer 1 code 1 cmd 2
[00:00:02.497,000] <dbg> net_mgmt.mgmt_run_callbacks: (net_mgmt): Running callback 0x20002b38 : 0x800ae29
[00:00:02.497,000] <dbg> conn_mgr.conn_mgr_iface_events_handler: (net_mgmt): Iface event 3489726466 received on iface 1 (0x200003a0)
[00:00:02.497,000] <dbg> conn_mgr.conn_mgr_iface_events_handler: (net_mgmt): Iface index 0
[00:00:27.495,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:00:27.495,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:00:27.495,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:00:27.495,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:00:27.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:00:27.502,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:27.502,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 00 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:27.502,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:27.504,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:27.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 02 01 00 04 |.!....
[00:00:27.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 4
[00:00:28.502,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:28.502,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 01 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:28.502,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:29.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:29.503,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 02 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:29.503,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:30.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:30.503,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 03 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:30.503,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:31.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:31.503,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 04 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:31.503,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:32.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:32.503,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 05 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:32.503,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:33.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:33.503,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 06 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:33.503,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:34.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:34.503,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 07 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:34.503,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:35.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:35.503,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 08 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:35.503,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:36.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:36.503,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 09 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:36.503,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:37.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:37.503,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 0a 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:37.503,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:38.503,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:38.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 0b 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:38.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:39.504,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:39.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 0c 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:39.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:40.504,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:40.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 0d 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:40.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:41.504,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:41.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 0e 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:41.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:42.504,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:42.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 0f 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:42.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:43.504,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:43.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 10 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:43.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:44.504,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:44.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 11 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:44.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:45.504,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:45.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 12 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:45.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:46.504,000] <dbg> net_l2_ppp.net_pkt_hexdump: recv L2
[00:00:46.504,000] <dbg> net_l2_ppp: 0x2000cf24
c0 21 01 13 00 19 02 06 00 00 00 00 03 05 c2 23 |.!...... .......#
05 05 06 ba d7 7d ec 07 02 08 02 |.....}.. ...
[00:00:46.504,000] <dbg> net_l2_ppp.ppp_fsm_input: (rx_q[0]): [LCP/0x20001d38] Too long msg 25
[00:00:52.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:00:52.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:00:52.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:00:52.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:00:52.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:01:17.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:01:17.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:01:17.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:01:17.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:01:17.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:01:42.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:01:42.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:01:42.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:01:42.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:01:42.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:02:07.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:02:07.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:02:07.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:02:07.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:02:07.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:02:32.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:02:32.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:02:32.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:02:32.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:02:32.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:02:57.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:02:57.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:02:57.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:02:57.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:02:57.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:03:22.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:03:22.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:03:22.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:03:22.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:03:22.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:03:47.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:03:47.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:03:47.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:03:47.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:03:47.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Configure-Req retransmit limit 0 reached
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_change_state_debug: (sysworkq): [LCP/0x20001d38] state REQUEST_SENT (6) => STOPPED (3) (ppp_fsm_timeout():113)
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_change_phase_debug: (sysworkq): [0x20001cf0] phase ESTABLISH (1) => DEAD (0) (ppp_link_terminated():123)
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_link_terminated: (sysworkq): [0x20001cf0] Link terminated
```
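For reference, the `recv L2` hexdumps above can be decoded by hand: each frame starts with the PPP protocol field `c0 21` (0xc021 = LCP), so the 25-byte messages the modem keeps sending look like ordinary Configure-Req packets carrying its option list (ACCM, CHAP auth, Magic-Number, PFC, ACFC). A minimal decoder sketch (written for this report, not part of the sample — field layout per RFC 1661):

```python
LCP_CODES = {1: "Configure-Req", 2: "Configure-Ack", 3: "Configure-Nak",
             4: "Configure-Reject", 5: "Terminate-Req", 6: "Terminate-Ack"}
LCP_OPTIONS = {1: "MRU", 2: "ACCM", 3: "Auth-Protocol",
               5: "Magic-Number", 7: "PFC", 8: "ACFC"}

def decode_lcp(frame: bytes) -> str:
    # Frame starts at the PPP protocol field, as shown in the hexdumps.
    proto = int.from_bytes(frame[0:2], "big")
    assert proto == 0xC021, "not an LCP frame"
    code, ident = frame[2], frame[3]
    length = int.from_bytes(frame[4:6], "big")   # covers code..options (RFC 1661)
    lines = [f"{LCP_CODES.get(code, str(code))} id={ident} len={length}"]
    i, end = 6, 2 + length                       # options end: 2 proto bytes + length
    while i < end:
        opt, opt_len = frame[i], frame[i + 1]
        data = frame[i + 2:i + opt_len].hex()
        lines.append(f"  option {LCP_OPTIONS.get(opt, str(opt))} data={data}")
        i += opt_len
    return "\n".join(lines)

# First 25-byte frame received from the modem in the log above:
rx = bytes.fromhex("c021010000190206000000000305c223050506bad77dec07020802")
print(decode_lcp(rx))
# Configure-Req id=0 len=25
#   option ACCM data=00000000
#   option Auth-Protocol data=c22305   (0xc223 = CHAP, algorithm 05 = MD5)
#   option Magic-Number data=bad77dec
#   option PFC data=
#   option ACFC data=
```

So the peer is sending well-formed Configure-Reqs (including a CHAP auth request), which may help when comparing against what the Zephyr FSM rejects as `Too long msg 25`.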
Environment:
Zephyr SDK 0.13.2
VERSION:
```
VERSION_MAJOR = 2
VERSION_MINOR = 7
PATCHLEVEL = 99
VERSION_TWEAK = 0
EXTRAVERSION =
```
[00:01:42.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:01:42.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:01:42.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:02:07.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:02:07.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:02:07.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:02:07.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:02:07.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:02:32.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:02:32.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:02:32.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:02:32.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:02:32.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:02:57.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:02:57.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:02:57.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:02:57.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:02:57.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:03:22.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:03:22.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:03:22.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:03:22.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:03:22.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:03:47.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:03:47.496,000] <dbg> net_l2_ppp.fsm_send_configure_req: (sysworkq): [LCP/0x20001d38] Sending Configure-Req (1) id 1 to peer while in REQUEST_SENT (6)
[00:03:47.496,000] <dbg> net_l2_ppp.ppp_send_pkt: (sysworkq): [LCP/0x20001d38] Sending 6 bytes pkt 0x2000d014 (options len 0)
[00:03:47.496,000] <dbg> net_l2_ppp.net_pkt_hexdump: send L2
[00:03:47.496,000] <dbg> net_l2_ppp: 0x2000d014
c0 21 01 01 00 04 |.!....
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Current state REQUEST_SENT (6)
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_fsm_timeout: (sysworkq): [LCP/0x20001d38] Configure-Req retransmit limit 0 reached
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_change_state_debug: (sysworkq): [LCP/0x20001d38] state REQUEST_SENT (6) => STOPPED (3) (ppp_fsm_timeout():113)
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_change_phase_debug: (sysworkq): [0x20001cf0] phase ESTABLISH (1) => DEAD (0) (ppp_link_terminated():123)
[00:04:12.496,000] <dbg> net_l2_ppp.ppp_link_terminated: (sysworkq): [0x20001cf0] Link terminated
```
Environment:
zephyr sdk 0.13.2
VERSION
```
VERSION_MAJOR = 2
VERSION_MINOR = 7
PATCHLEVEL = 99
VERSION_TWEAK = 0
EXTRAVERSION =
```
|
non_test
|
ppp gsm modem lcp never gets past request sent phase describe the bug i am trying sample board is communicating with modem over uart gsm ppp puts modem correctly into ppp mode and they start ppp communication but lcp fails it is sending configure req messages timeouts and in the end link is terminated i am using modem quectel my modem is connected to nb iot network i did not change config modem gsm apn so it is using internet as default i was trying to put apn that was returned by at cops command but at cgdcont command fails if apn has spaces in it apn returned by cops has spaces i was listening what does modem sends over uart and it sends some binary ppp data and then it sends no carrier when i was using config net ppp log level dbg y logs were huge and there was a lot of net ppp ppp consume ringbuf ringbuf is empty messages this is my prj conf uart support config serial y gsm modem support config modem y config modem gsm ppp y ppp networking support config net drivers y config net ppp y config net ppp y config net native y config networking y config net ppp timeout enables ppp ipcp support config net y config net n network management events config net connection manager y log buffers modem and ppp config log y config net log y config log buffer size config log strdup buf count config modem log level dbg y config net ppp log level dbg y config net ppp log level dbg y config net mgmt event log level dbg y config net connection manager log level dbg y config net shell y config modem shell y config entropy generator y config test random generator y this is logs and console output booting zephyr os build zephyr modem gsm gsm init generic gsm modem modem gsm gsm init iface read iface write modem gsm gsm rx starting modem gsm gsm configure starting modem configuration net mgmt net mgmt event init main net mgmt initialized queue of entries stack size of net ppp net ppp init main initializing ppp for iface modem cmd handler cmd handler process rx buf match cmd len modem 
gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok net mgmt net mgmt add event callback conn mgr adding event callback net mgmt net mgmt add event callback conn mgr adding event callback conn mgr conn mgr handler conn mgr connection manager started net ppp tx handler tx handler thread ppp tx started sample gsm ppp board disco apn internet uart uart device gsm ppp net mgmt net mgmt add event callback main adding event callback modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm manufacturer quectel modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm model modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm revision modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm imei modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm imsi modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm iccid modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm attached to 
packet service modem gsm gsm finalize connection modem attach returned read rssi modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem gsm gsm finalize connection not valid rssi retrying net ppp ppp startup sysworkq ppp startup for interface net ppp ipcp init sysworkq proto ipcp fsm net ppp lcp init sysworkq proto lcp fsm modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem gsm gsm configure starting modem configuration modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok modem cmd handler cmd handler process rx buf match cmd len modem gsm rssi modem gsm gsm finalize connection modem setup returned enable ppp modem cmd handler cmd handler process rx buf match cmd len modem gsm gsm cmd ok ok net ppp initializing ppp to use uart net ppp carrier on off sysworkq carrier on for interface net ppp ppp change phase debug sysworkq phase dead establish start ppp net ppp ppp fsm lower up sysworkq current state initial net ppp ppp change state debug sysworkq state initial closed ppp fsm lower up net ppp start ppp sysworkq starting lcp net ppp ppp fsm open sysworkq current state closed net ppp ppp change state debug sysworkq state closed request sent ppp fsm open net ppp fsm send configure req sysworkq sending configure req id to peer while in request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net mgmt net mgmt event notify with info sysworkq notifying event layer code type net ppp net pkt hexdump send net ppp net mgmt mgmt thread net mgmt handling events forwarding it relevantly net mgmt mgmt run callbacks net mgmt event layer code cmd net mgmt mgmt run callbacks net mgmt running callback conn mgr conn mgr iface events handler net mgmt iface event received on iface conn mgr conn mgr iface events handler net mgmt iface index net ppp ppp fsm timeout sysworkq current state request sent net ppp fsm send configure req sysworkq sending configure req id to peer 
while in request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net ppp net pkt hexdump send net ppp net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp net pkt hexdump recv net ppp ba ec net ppp ppp fsm input rx q too long msg net ppp ppp fsm timeout sysworkq current state request sent net ppp fsm send configure req sysworkq sending configure req id to peer while in 
request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net ppp net pkt hexdump send net ppp net ppp ppp fsm timeout sysworkq current state request sent net ppp fsm send configure req sysworkq sending configure req id to peer while in request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net ppp net pkt hexdump send net ppp net ppp ppp fsm timeout sysworkq current state request sent net ppp fsm send configure req sysworkq sending configure req id to peer while in request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net ppp net pkt hexdump send net ppp net ppp ppp fsm timeout sysworkq current state request sent net ppp fsm send configure req sysworkq sending configure req id to peer while in request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net ppp net pkt hexdump send net ppp net ppp ppp fsm timeout sysworkq current state request sent net ppp fsm send configure req sysworkq sending configure req id to peer while in request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net ppp net pkt hexdump send net ppp net ppp ppp fsm timeout sysworkq current state request sent net ppp fsm send configure req sysworkq sending configure req id to peer while in request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net ppp net pkt hexdump send net ppp net ppp ppp fsm timeout sysworkq current state request sent net ppp fsm send configure req sysworkq sending configure req id to peer while in request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net ppp net pkt hexdump send net ppp net ppp ppp fsm timeout sysworkq current state request sent net ppp fsm send configure req sysworkq sending configure req id to peer while in request sent net ppp ppp send pkt sysworkq sending bytes pkt options len net ppp net pkt hexdump send net ppp net ppp ppp fsm timeout sysworkq current state request sent net ppp ppp fsm timeout sysworkq configure req retransmit limit 
reached net ppp ppp change state debug sysworkq state request sent stopped ppp fsm timeout net ppp ppp change phase debug sysworkq phase establish dead ppp link terminated net ppp ppp link terminated sysworkq link terminated environment zephyr sdk version version major version minor patchlevel version tweak extraversion
| 0
|
133,044
| 12,530,443,482
|
IssuesEvent
|
2020-06-04 13:05:39
|
thomaspoignant/api-scenario
|
https://api.github.com/repos/thomaspoignant/api-scenario
|
closed
|
Documentation on docker hub
|
documentation
|
Currently the docker hub page https://hub.docker.com/repository/docker/thomaspoignant/api-scenario is empty, we should prepare the documentation on how to use the docker image.
|
1.0
|
Documentation on docker hub - Currently the docker hub page https://hub.docker.com/repository/docker/thomaspoignant/api-scenario is empty, we should prepare the documentation on how to use the docker image.
|
non_test
|
documentation on docker hub currently the docker hub page is empty we should prepare the documentation on how to use the docker image
| 0
|
389,101
| 11,497,369,806
|
IssuesEvent
|
2020-02-12 09:54:53
|
AY1920S2-CS2103T-W13-4/main
|
https://api.github.com/repos/AY1920S2-CS2103T-W13-4/main
|
opened
|
viewDifficulty
|
priority.Low type.Story
|
As a user I can see which pathways would be more challenging (i.e. Level 3K, 4K, 5K modules), so that I can choose a better course pathway in terms of maximising GPA/fulfilling course requirements.
|
1.0
|
viewDifficulty - As a user I can see which pathways would be more challenging (i.e. Level 3K, 4K, 5K modules), so that I can choose a better course pathway in terms of maximising GPA/fulfilling course requirements.
|
non_test
|
viewdifficulty as a user i can see which pathways would be more challenging i e level modules so that i can choose a better course pathway in terms of maximising gpa fulfilling course requirements
| 0
|
84,553
| 24,343,763,250
|
IssuesEvent
|
2022-10-02 02:50:21
|
andymina/seam-carving
|
https://api.github.com/repos/andymina/seam-carving
|
closed
|
Update pip requirements
|
build
|
The pip `requirements.txt` released in v1.0.0 are not all necessary. Update `requirements.txt` to only include what's needed.
|
1.0
|
Update pip requirements - The pip `requirements.txt` released in v1.0.0 are not all necessary. Update `requirements.txt` to only include what's needed.
|
non_test
|
update pip requirements the pip requirements txt released in are not all necessary update requirements txt to only include what s needed
| 0
|
261,339
| 22,739,712,959
|
IssuesEvent
|
2022-07-07 01:42:17
|
Merck/metalite.ae
|
https://api.github.com/repos/Merck/metalite.ae
|
closed
|
Independent Testing for n_subject.R
|
independent test
|
- Test plan of `n_subject`:
+ if `group = ...` is not a factor, throw errors
+ if `par = NULL`, return the number of subjects in each group (take the `r2rtf::r2rtf_adae` dataset as an example)
+ if, say, `par = AEDECOD`, return the number of subject per group per AE (take the `r2rtf::r2rtf_adae` dataset as an example)
- Test plan of `avg_event`:
+ if `group = ...` is not a factor, throw errors
+ if `par = NULL`, return the average number of events in each group (take the `r2rtf::r2rtf_adae` dataset as an example)
+ if, say, `par = AEDECOD`, return the average number of events per group per AE (take the `r2rtf::r2rtf_adae` dataset as an example)
- Test plan of `avg_duration`
+ if `group = ...` is not a factor, throw errors
+ if `par = NULL`, return the average duration in each group (take the `r2rtf::r2rtf_adae` dataset as an example)
+ if, say, `par = AEDECOD`, return the average duration per group per AE (take the `r2rtf::r2rtf_adae` dataset as an example)
|
1.0
|
Independent Testing for n_subject.R - - Test plan of `n_subject`:
+ if `group = ...` is not a factor, throw errors
+ if `par = NULL`, return the number of subjects in each group (take the `r2rtf::r2rtf_adae` dataset as an example)
+ if, say, `par = AEDECOD`, return the number of subject per group per AE (take the `r2rtf::r2rtf_adae` dataset as an example)
- Test plan of `avg_event`:
+ if `group = ...` is not a factor, throw errors
+ if `par = NULL`, return the average number of events in each group (take the `r2rtf::r2rtf_adae` dataset as an example)
+ if, say, `par = AEDECOD`, return the average number of events per group per AE (take the `r2rtf::r2rtf_adae` dataset as an example)
- Test plan of `avg_duration`
+ if `group = ...` is not a factor, throw errors
+ if `par = NULL`, return the average duration in each group (take the `r2rtf::r2rtf_adae` dataset as an example)
+ if, say, `par = AEDECOD`, return the average duration per group per AE (take the `r2rtf::r2rtf_adae` dataset as an example)
|
test
|
independent testing for n subject r test plan of n subject if group is not a factor throw errors if par null return the number of subjects in each group take the adae dataset as an example if say par aedecod return the number of subject per group per ae take the adae dataset as an example test plan of avg event if group is not a factor throw errors if par null return the average number of events in each group take the adae dataset as an example if say par aedecod return the average number of events per group per ae take the adae dataset as an example test plan of avg duration if group is not a factor throw errors if par null return the average duration in each group take the adae dataset as an example if say par aedecod return the average duration per group per ae take the adae dataset as an example
| 1
|
451,573
| 32,034,156,531
|
IssuesEvent
|
2023-09-22 14:15:45
|
cometbft/cometbft
|
https://api.github.com/repos/cometbft/cometbft
|
closed
|
docs: Update docs to reflect max bytes check
|
bug documentation
|
### Description
The documentation currently shows an example of simply returning the list of transactions sent in the `RequestPrepareProposal.` However, according to the CometBFT spec, the Application is responsible for ensuring the total bytes of transactions returned does not exceed `RequestPrepareProposal.max_tx_bytes.`
The current behavior in the codebase adheres to the spec by only returning transactions that fit within the max bytes limit. However, the documentation example could cause confusion by not demonstrating the max bytes check.
### Fix
The documentation example should be updated to demonstrate checking the total transaction bytes and only returning transactions up to the max bytes limit.
A note should be added to the documentation clarifying that the Application is responsible for enforcing the max bytes limit, even if more transactions are sent in the `RequestPrepareProposal.`
This will ensure the documentation matches both the spec and the actual code behavior.
|
1.0
|
docs: Update docs to reflect max bytes check - ### Description
The documentation currently shows an example of simply returning the list of transactions sent in the `RequestPrepareProposal.` However, according to the CometBFT spec, the Application is responsible for ensuring the total bytes of transactions returned does not exceed `RequestPrepareProposal.max_tx_bytes.`
The current behavior in the codebase adheres to the spec by only returning transactions that fit within the max bytes limit. However, the documentation example could cause confusion by not demonstrating the max bytes check.
### Fix
The documentation example should be updated to demonstrate checking the total transaction bytes and only returning transactions up to the max bytes limit.
A note should be added to the documentation clarifying that the Application is responsible for enforcing the max bytes limit, even if more transactions are sent in the `RequestPrepareProposal.`
This will ensure the documentation matches both the spec and the actual code behavior.
|
non_test
|
docs update docs to reflect max bytes check description the documentation currently shows an example of simply returning the list of transactions sent in the requestprepareproposal however according to the cometbft spec the application is responsible for ensuring the total bytes of transactions returned does not exceed requestprepareproposal max tx bytes the current behavior in the codebase adheres to the spec by only returning transactions that fit within the max bytes limit however the documentation example could cause confusion by not demonstrating the max bytes check fix the documentation example should be updated to demonstrate checking the total transaction bytes and only returning transactions up to the max bytes limit a note should be added to the documentation clarifying that the application is responsible for enforcing the max bytes limit even if more transactions are sent in the requestprepareproposal this will ensure the documentation matches both the spec and the actual code behavior
| 0
|
85,219
| 24,543,995,487
|
IssuesEvent
|
2022-10-12 07:19:06
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[SB] Create/edit study > Questionnaires > Questionnaire is not getting displayed in the draft status in the following scenario
|
Bug P1 Study builder Process: Fixed Process: Tested dev
|
**Steps:**
1. Sign in to SB
2. Navigate to the study list screen
3. Click on Create/edit study
4. Go to questionnaires
5. Add the questionnaire with the Charts option (Admin should select other than 'Onetime' frequency in the 'Schedule' tab)
6. Click on Done and complete the questionnaire
7. Click on Edit questionnaire or copy questionnaire
8. Change the scheduling option to 'Onetime' and click on Done (An error message will be displayed to complete the section of the chart questions)
9. Navigate to the questionnaires list screen and observe the question
**AR:** The questionnaire is not getting displayed in the draft status
**ER:** The questionnaire should be displayed in the draft status until the admin completes the questionnaire
[Note: Issues should also fixed for copied questionnaires ]
**As per the 8th step**

**Actual**

|
1.0
|
[SB] Create/edit study > Questionnaires > Questionnaire is not getting displayed in the draft status in the following scenario - **Steps:**
1. Sign in to SB
2. Navigate to the study list screen
3. Click on Create/edit study
4. Go to questionnaires
5. Add the questionnaire with the Charts option (Admin should select other than 'Onetime' frequency in the 'Schedule' tab)
6. Click on Done and complete the questionnaire
7. Click on Edit questionnaire or copy questionnaire
8. Change the scheduling option to 'Onetime' and click on Done (An error message will be displayed to complete the section of the chart questions)
9. Navigate to the questionnaires list screen and observe the question
**AR:** The questionnaire is not getting displayed in the draft status
**ER:** The questionnaire should be displayed in the draft status until the admin completes the questionnaire
[Note: Issues should also be fixed for copied questionnaires ]
**As per the 8th step**

**Actual**

|
non_test
|
create edit study questionnaires questionnaire is not getting displayed in the draft status in the following scenario steps sign in to sb navigate to the study list screen click on create edit study go to questionnaires add the questionnaire with the charts option admin should select other than onetime frequency in the schedule tab click on done and complete the questionnaire click on edit questionnaire or copy questionnaire change the scheduling option to onetime and click on done an error message will be displayed to complete the section of the chart questions navigate to the questionnaires list screen and observe the question ar the questionnaire is not getting displayed in the draft status er the questionnaire should be displayed in the draft status until the admin completes the questionnaire as per the step actual
| 0
|
349,014
| 10,455,889,302
|
IssuesEvent
|
2019-09-19 22:44:30
|
openshift/odo
|
https://api.github.com/repos/openshift/odo
|
closed
|
Painfully slow `odo project list`
|
priority/Medium state/Ready
|
`oc get projects` takes under a second, but `odo project list` is 13 seconds :-(
```
▶ gtime -f "%es" oc get projects
NAME DISPLAY NAME STATUS
default Active
kube-node-lease Active
kube-public Active
kube-system Active
oibwrltsgf Active
openshift Active
openshift-apiserver Active
openshift-apiserver-operator Active
openshift-authentication Active
openshift-authentication-operator Active
openshift-cloud-credential-operator Active
openshift-cluster-machine-approver Active
openshift-cluster-node-tuning-operator Active
openshift-cluster-samples-operator Active
openshift-cluster-storage-operator Active
openshift-cluster-version Active
openshift-config Active
openshift-config-managed Active
openshift-console Active
openshift-console-operator Active
openshift-controller-manager Active
openshift-controller-manager-operator Active
openshift-dns Active
openshift-dns-operator Active
openshift-etcd Active
openshift-image-registry Active
openshift-infra Active
openshift-ingress Active
openshift-ingress-operator Active
openshift-insights Active
openshift-kube-apiserver Active
openshift-kube-apiserver-operator Active
openshift-kube-controller-manager Active
openshift-kube-controller-manager-operator Active
openshift-kube-scheduler Active
openshift-kube-scheduler-operator Active
openshift-machine-api Active
openshift-machine-config-operator Active
openshift-marketplace Active
openshift-monitoring Active
openshift-multus Active
openshift-network-operator Active
openshift-node Active
openshift-operator-lifecycle-manager Active
openshift-operators Active
openshift-sdn Active
openshift-service-ca Active
openshift-service-ca-operator Active
openshift-service-catalog-apiserver-operator Active
openshift-service-catalog-controller-manager-operator Active
rgzcsmmdch Active
uhrnluzzwm Active
0.73s
▶ gtime -f "%es" odo project list
ACTIVE NAME
default
kube-node-lease
kube-public
kube-system
oibwrltsgf
openshift
openshift-apiserver
openshift-apiserver-operator
openshift-authentication
openshift-authentication-operator
openshift-cloud-credential-operator
openshift-cluster-machine-approver
openshift-cluster-node-tuning-operator
openshift-cluster-samples-operator
openshift-cluster-storage-operator
openshift-cluster-version
openshift-config
openshift-config-managed
openshift-console
openshift-console-operator
openshift-controller-manager
openshift-controller-manager-operator
openshift-dns
openshift-dns-operator
openshift-etcd
openshift-image-registry
openshift-infra
openshift-ingress
openshift-ingress-operator
openshift-insights
openshift-kube-apiserver
openshift-kube-apiserver-operator
openshift-kube-controller-manager
openshift-kube-controller-manager-operator
openshift-kube-scheduler
openshift-kube-scheduler-operator
openshift-machine-api
openshift-machine-config-operator
openshift-marketplace
openshift-monitoring
openshift-multus
openshift-network-operator
openshift-node
openshift-operator-lifecycle-manager
openshift-operators
openshift-sdn
openshift-service-ca
openshift-service-ca-operator
openshift-service-catalog-apiserver-operator
openshift-service-catalog-controller-manager-operator
rgzcsmmdch
uhrnluzzwm
13.62s
```
|
1.0
|
Painfully slow `odo project list` - `oc get projects` takes under a second, but `odo project list` is 13 seconds :-(
```
▶ gtime -f "%es" oc get projects
NAME DISPLAY NAME STATUS
default Active
kube-node-lease Active
kube-public Active
kube-system Active
oibwrltsgf Active
openshift Active
openshift-apiserver Active
openshift-apiserver-operator Active
openshift-authentication Active
openshift-authentication-operator Active
openshift-cloud-credential-operator Active
openshift-cluster-machine-approver Active
openshift-cluster-node-tuning-operator Active
openshift-cluster-samples-operator Active
openshift-cluster-storage-operator Active
openshift-cluster-version Active
openshift-config Active
openshift-config-managed Active
openshift-console Active
openshift-console-operator Active
openshift-controller-manager Active
openshift-controller-manager-operator Active
openshift-dns Active
openshift-dns-operator Active
openshift-etcd Active
openshift-image-registry Active
openshift-infra Active
openshift-ingress Active
openshift-ingress-operator Active
openshift-insights Active
openshift-kube-apiserver Active
openshift-kube-apiserver-operator Active
openshift-kube-controller-manager Active
openshift-kube-controller-manager-operator Active
openshift-kube-scheduler Active
openshift-kube-scheduler-operator Active
openshift-machine-api Active
openshift-machine-config-operator Active
openshift-marketplace Active
openshift-monitoring Active
openshift-multus Active
openshift-network-operator Active
openshift-node Active
openshift-operator-lifecycle-manager Active
openshift-operators Active
openshift-sdn Active
openshift-service-ca Active
openshift-service-ca-operator Active
openshift-service-catalog-apiserver-operator Active
openshift-service-catalog-controller-manager-operator Active
rgzcsmmdch Active
uhrnluzzwm Active
0.73s
▶ gtime -f "%es" odo project list
ACTIVE NAME
default
kube-node-lease
kube-public
kube-system
oibwrltsgf
openshift
openshift-apiserver
openshift-apiserver-operator
openshift-authentication
openshift-authentication-operator
openshift-cloud-credential-operator
openshift-cluster-machine-approver
openshift-cluster-node-tuning-operator
openshift-cluster-samples-operator
openshift-cluster-storage-operator
openshift-cluster-version
openshift-config
openshift-config-managed
openshift-console
openshift-console-operator
openshift-controller-manager
openshift-controller-manager-operator
openshift-dns
openshift-dns-operator
openshift-etcd
openshift-image-registry
openshift-infra
openshift-ingress
openshift-ingress-operator
openshift-insights
openshift-kube-apiserver
openshift-kube-apiserver-operator
openshift-kube-controller-manager
openshift-kube-controller-manager-operator
openshift-kube-scheduler
openshift-kube-scheduler-operator
openshift-machine-api
openshift-machine-config-operator
openshift-marketplace
openshift-monitoring
openshift-multus
openshift-network-operator
openshift-node
openshift-operator-lifecycle-manager
openshift-operators
openshift-sdn
openshift-service-ca
openshift-service-ca-operator
openshift-service-catalog-apiserver-operator
openshift-service-catalog-controller-manager-operator
rgzcsmmdch
uhrnluzzwm
13.62s
```
|
non_test
|
painfully slow odo project list oc get projects takes under the second but odo project list is seconds ▶ gtime f es oc get projects name display name status default active kube node lease active kube public active kube system active oibwrltsgf active openshift active openshift apiserver active openshift apiserver operator active openshift authentication active openshift authentication operator active openshift cloud credential operator active openshift cluster machine approver active openshift cluster node tuning operator active openshift cluster samples operator active openshift cluster storage operator active openshift cluster version active openshift config active openshift config managed active openshift console active openshift console operator active openshift controller manager active openshift controller manager operator active openshift dns active openshift dns operator active openshift etcd active openshift image registry active openshift infra active openshift ingress active openshift ingress operator active openshift insights active openshift kube apiserver active openshift kube apiserver operator active openshift kube controller manager active openshift kube controller manager operator active openshift kube scheduler active openshift kube scheduler operator active openshift machine api active openshift machine config operator active openshift marketplace active openshift monitoring active openshift multus active openshift network operator active openshift node active openshift operator lifecycle manager active openshift operators active openshift sdn active openshift service ca active openshift service ca operator active openshift service catalog apiserver operator active openshift service catalog controller manager operator active rgzcsmmdch active uhrnluzzwm active ▶ gtime f es odo project list active name default kube node lease kube public kube system oibwrltsgf openshift openshift apiserver openshift apiserver operator openshift authentication 
openshift authentication operator openshift cloud credential operator openshift cluster machine approver openshift cluster node tuning operator openshift cluster samples operator openshift cluster storage operator openshift cluster version openshift config openshift config managed openshift console openshift console operator openshift controller manager openshift controller manager operator openshift dns openshift dns operator openshift etcd openshift image registry openshift infra openshift ingress openshift ingress operator openshift insights openshift kube apiserver openshift kube apiserver operator openshift kube controller manager openshift kube controller manager operator openshift kube scheduler openshift kube scheduler operator openshift machine api openshift machine config operator openshift marketplace openshift monitoring openshift multus openshift network operator openshift node openshift operator lifecycle manager openshift operators openshift sdn openshift service ca openshift service ca operator openshift service catalog apiserver operator openshift service catalog controller manager operator rgzcsmmdch uhrnluzzwm
| 0
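The `gtime -f "%es" <command>` measurements in the record above can be reproduced without GNU time, using a small wrapper around `subprocess.run`. This is an illustrative sketch, not part of the original issue; the `oc`/`odo` invocations shown in the comment are the hypothetical targets and are not executed here.

```python
import subprocess
import sys
import time

def time_command(argv):
    """Run a command, wait for it to finish, and return (elapsed_seconds, stdout)."""
    start = time.perf_counter()
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return time.perf_counter() - start, result.stdout

# Hypothetical usage matching the issue's comparison:
#   elapsed_oc, _  = time_command(["oc", "get", "projects"])
#   elapsed_odo, _ = time_command(["odo", "project", "list"])
# Self-contained demo: time a trivial interpreter start-up instead.
elapsed, _ = time_command([sys.executable, "-c", "pass"])
print(f"{elapsed:.2f}s")
```

Wall-clock timing like this includes process start-up and network round-trips, which is exactly what the 0.73s vs 13.62s figures in the issue capture.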
|
181,828
| 21,664,451,327
|
IssuesEvent
|
2022-05-07 01:23:38
|
n-devs/reactIOTEAU
|
https://api.github.com/repos/n-devs/reactIOTEAU
|
closed
|
WS-2020-0146 (High) detected in highcharts-5.0.15.tgz - autoclosed
|
security vulnerability
|
## WS-2020-0146 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>highcharts-5.0.15.tgz</b></p></summary>
<p>JavaScript charting framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/highcharts/-/highcharts-5.0.15.tgz">https://registry.npmjs.org/highcharts/-/highcharts-5.0.15.tgz</a></p>
<p>Path to dependency file: /reactIOTEAU/IOT-v0.2/package.json</p>
<p>Path to vulnerable library: reactIOTEAU/IOT-v0.2/node_modules/highcharts/package.json,reactIOTEAU/IOT-v0.2/node_modules/highcharts/package.json</p>
<p>
Dependency Hierarchy:
- :x: **highcharts-5.0.15.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of highcharts prior to 7.2.2 or 8.1.1 are vulnerable to Cross-Site Scripting (XSS). The package fails to sanitize href values and does not restrict URL schemes, allowing attackers to execute arbitrary JavaScript in a victim's browser if they click the link.
<p>Publish Date: 2020-08-25
<p>URL: <a href=https://github.com/highcharts/highcharts/commit/55c39dd55f12ce8dfab84f8ec13ad81423bee9f5>WS-2020-0146</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/highcharts/highcharts/tree/v7.2.2, https://github.com/highcharts/highcharts/tree/v8.1.1">https://github.com/highcharts/highcharts/tree/v7.2.2, https://github.com/highcharts/highcharts/tree/v8.1.1</a></p>
<p>Release Date: 2020-08-25</p>
<p>Fix Resolution: v7.2.2, v8.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2020-0146 (High) detected in highcharts-5.0.15.tgz - autoclosed - ## WS-2020-0146 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>highcharts-5.0.15.tgz</b></p></summary>
<p>JavaScript charting framework</p>
<p>Library home page: <a href="https://registry.npmjs.org/highcharts/-/highcharts-5.0.15.tgz">https://registry.npmjs.org/highcharts/-/highcharts-5.0.15.tgz</a></p>
<p>Path to dependency file: /reactIOTEAU/IOT-v0.2/package.json</p>
<p>Path to vulnerable library: reactIOTEAU/IOT-v0.2/node_modules/highcharts/package.json,reactIOTEAU/IOT-v0.2/node_modules/highcharts/package.json</p>
<p>
Dependency Hierarchy:
- :x: **highcharts-5.0.15.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Versions of highcharts prior to 7.2.2 or 8.1.1 are vulnerable to Cross-Site Scripting (XSS). The package fails to sanitize href values and does not restrict URL schemes, allowing attackers to execute arbitrary JavaScript in a victim's browser if they click the link.
<p>Publish Date: 2020-08-25
<p>URL: <a href=https://github.com/highcharts/highcharts/commit/55c39dd55f12ce8dfab84f8ec13ad81423bee9f5>WS-2020-0146</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/highcharts/highcharts/tree/v7.2.2, https://github.com/highcharts/highcharts/tree/v8.1.1">https://github.com/highcharts/highcharts/tree/v7.2.2, https://github.com/highcharts/highcharts/tree/v8.1.1</a></p>
<p>Release Date: 2020-08-25</p>
<p>Fix Resolution: v7.2.2, v8.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_test
|
ws high detected in highcharts tgz autoclosed ws high severity vulnerability vulnerable library highcharts tgz javascript charting framework library home page a href path to dependency file reactioteau iot package json path to vulnerable library reactioteau iot node modules highcharts package json reactioteau iot node modules highcharts package json dependency hierarchy x highcharts tgz vulnerable library vulnerability details versions of highcharts prior to or are vulnerable to cross site scripting xss the package fails to sanitize href values and does not restrict url schemes allowing attackers to execute arbitrary javascript in a victim s browser if they click the link publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
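The vulnerability record above says the fix was to restrict URL schemes for `href` values. The Highcharts fix itself is JavaScript; the following is only a minimal Python sketch of the same allow-list idea, with the scheme set chosen here for illustration rather than taken from the library.

```python
from urllib.parse import urlparse

# Allow-list of URL schemes considered safe for generated links; anything
# else (javascript:, data:, vbscript:, ...) is rejected. The empty string
# covers scheme-less relative URLs.
SAFE_SCHEMES = {"http", "https", "mailto", ""}

def is_safe_href(href: str) -> bool:
    """Return True only when the href's scheme is on the allow-list."""
    scheme = urlparse(href.strip()).scheme.lower()
    return scheme in SAFE_SCHEMES
```

Rejecting unknown schemes outright (instead of block-listing `javascript:`) is the safer design, since attackers can reach script execution through many scheme spellings.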
|
142,217
| 11,459,343,034
|
IssuesEvent
|
2020-02-07 06:57:38
|
rsx-labs/aide-frontend
|
https://api.github.com/repos/rsx-labs/aide-frontend
|
closed
|
[Assets] Missing Page Numbering for List of Unapproved Assets module
|
Bug For QA Testing Medium Priority
|
**Describe the bug**
Add page numbering on the List of Unapproved Assets module.
**Version (please complete the following information):**
- Version 3.1.0
|
1.0
|
[Assets] Missing Page Numbering for List of Unapproved Assets module - **Describe the bug**
Add page numbering on the List of Unapproved Assets module.
**Version (please complete the following information):**
- Version 3.1.0
|
test
|
missing page numbering for list of unapproved assets module describe the bug add page numbering on the list of unapproved assets module version please complete the following information version
| 1
|
137,876
| 11,166,042,409
|
IssuesEvent
|
2019-12-27 11:41:35
|
pytest-dev/pytest
|
https://api.github.com/repos/pytest-dev/pytest
|
closed
|
Allow using unittest asserts with pytest without TestCase
|
plugin: unittest type: proposal
|
The stdlib unittests are full of useful test utilities to perform tests on sets, dict (see https://github.com/pytest-dev/pytest/issues/2376), etc. And other frameworks add their own layer on top of that, like Django adding assertXXXX to test templates and responses.
How about mixing the best of both worlds ? These assertXXXX are, for a reason I can't get, real methods and not classmethods (whereas they visibly never used the instance attributes), but I don't think this could be changed in stdlib, although I don't think there would be risks of regressions regarding custom user code.
So why not expose somewhere a utility to easily call these assertXXXX (from basic TestCase or any other subclass) from the outside, like simple utilities ? Thus we can used both fixtures and these extended assertXXXX. Has it already been attempted ?
|
1.0
|
Allow using unittest asserts with pytest without TestCase - The stdlib unittests are full of useful test utilities to perform tests on sets, dict (see https://github.com/pytest-dev/pytest/issues/2376), etc. And other frameworks add their own layer on top of that, like Django adding assertXXXX to test templates and responses.
How about mixing the best of both worlds ? These assertXXXX are, for a reason I can't get, real methods and not classmethods (whereas they visibly never used the instance attributes), but I don't think this could be changed in stdlib, although I don't think there would be risks of regressions regarding custom user code.
So why not expose somewhere a utility to easily call these assertXXXX (from basic TestCase or any other subclass) from the outside, like simple utilities ? Thus we can used both fixtures and these extended assertXXXX. Has it already been attempted ?
|
test
|
allow using unittest asserts with pytest without testcase the stdlib unittests are full of useful test utilities to perform tests on sets dict see etc and other frameworks add their own layer on top of that like django adding assertxxxx to test templates and responses how about mixing the best of both worlds these assertxxxx are for a reason i can t get real methods and not classmethods whereas they visibly never used the instance attributes but i don t think this could be changed in stdlib although i don t think there would be risks of regressions regarding custom user code so why not expose somewhere a utility to easily call these assertxxxx from basic testcase or any other subclass from the outside like simple utilities thus we can used both fixtures and these extended assertxxxx has it already been attempted
| 1
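The proposal in the record above — using `unittest`'s rich `assertXXXX` helpers without subclassing `TestCase` — can already be approximated today. A minimal sketch (not pytest's own API; the helper names below are just bound methods of a throwaway instance):

```python
import unittest

# A throwaway TestCase instance exposes the assert* helpers as bound
# methods. Passing "__init__" as the method name avoids the ValueError
# that older Pythons raise when the named test method does not exist.
_tc = unittest.TestCase("__init__")

assert_dict_equal = _tc.assertDictEqual
assert_count_equal = _tc.assertCountEqual

def test_payload():
    # Plain functions usable from any pytest test, alongside fixtures.
    assert_dict_equal({"a": 1}, {"a": 1})
    assert_count_equal([1, 2, 2], [2, 1, 2])
```

The helpers never touch instance state, which is why binding them to a dummy instance works even though they are defined as regular methods.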
|
28,448
| 4,403,009,507
|
IssuesEvent
|
2016-08-11 05:19:03
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Activating balancer service does not result in any lb instance being created.
|
area/LBRefactor kind/bug status/to-test
|
Rancher Version: latest cattle build from lbrefactormetadata from https://github.com/alena1108/cattle.git
Steps to reproduce the problem:
Create a Balancer service with the following definition:
```
"name": "LB-test154557",
"state": "active",
"accountId": "1a5",
"assignServiceIpAddress": false,
"certificateIds": null,
"config": null,
"created": "2016-08-08T22:18:20Z",
"createdTS": 1470694700000,
"currentScale": 0,
"defaultCertificateId": null,
"description": null,
"environmentId": "1e152",
"externalId": null,
"fqdn": null,
"healthState": "unhealthy",
"kind": "balancerService",
"launchConfig": {
"healthCheck": {
"healthyThreshold": 2,
"initializingTimeout": null,
"interval": 2000,
"name": null,
"port": 10241,
"recreateOnQuorumStrategyConfig": null,
"reinitializingTimeout": null,
"requestLine": null,
"responseTimeout": 2000,
"strategy": null,
"unhealthyThreshold": 3,
},
"kind": "container",
"networkMode": "managed",
"ports": [
"55342:9001/tcp",
],
"privileged": false,
"publishAllPorts": false,
"readOnly": false,
"startOnCreate": true,
"stdinOpen": false,
"tty": false,
"version": "0",
"vcpu": 1,
},
"metadata": {
"lb": {
"certs": [ ],
"default_cert": null,
"port_rules": [
{
"source_port": 9001,
"protocol": "http",
"path": null,
"hostname": null,
"service": "test122008/test985739",
"target_port": 80,
"priority": null,
"backend_name": null,
"selector": null,
},
],
"config": null,
"stickiness_policy": null,
},
},
"portRules": [
{
"backendName": null,
"hostname": null,
"path": null,
"priority": null,
"protocol": "http",
"selector": null,
"serviceId": 348,
"sourcePort": 9001,
"targetPort": 80,
},
],
"publicEndpoints": null,
"removed": null,
"retainIp": null,
"scale": 1,
"scalePolicy": null,
"secondaryLaunchConfigs": [ ],
"selectorLink": null,
"startOnCreate": false,
"stickinessPolicy": null,
"transitioning": "no",
"transitioningMessage": null,
"transitioningProgress": null,
"upgrade": null,
"uuid": "b107fe24-015f-4b15-aae2-ec8beadfb171",
"vip": null
```
Once the service is activated , there is no lb instance that gets created.
|
1.0
|
Activating balancer service does not result in any lb instance being created. - Rancher Version: latest cattle build from lbrefactormetadata from https://github.com/alena1108/cattle.git
Steps to reproduce the problem:
Create a Balancer service with the following definition:
```
"name": "LB-test154557",
"state": "active",
"accountId": "1a5",
"assignServiceIpAddress": false,
"certificateIds": null,
"config": null,
"created": "2016-08-08T22:18:20Z",
"createdTS": 1470694700000,
"currentScale": 0,
"defaultCertificateId": null,
"description": null,
"environmentId": "1e152",
"externalId": null,
"fqdn": null,
"healthState": "unhealthy",
"kind": "balancerService",
"launchConfig": {
"healthCheck": {
"healthyThreshold": 2,
"initializingTimeout": null,
"interval": 2000,
"name": null,
"port": 10241,
"recreateOnQuorumStrategyConfig": null,
"reinitializingTimeout": null,
"requestLine": null,
"responseTimeout": 2000,
"strategy": null,
"unhealthyThreshold": 3,
},
"kind": "container",
"networkMode": "managed",
"ports": [
"55342:9001/tcp",
],
"privileged": false,
"publishAllPorts": false,
"readOnly": false,
"startOnCreate": true,
"stdinOpen": false,
"tty": false,
"version": "0",
"vcpu": 1,
},
"metadata": {
"lb": {
"certs": [ ],
"default_cert": null,
"port_rules": [
{
"source_port": 9001,
"protocol": "http",
"path": null,
"hostname": null,
"service": "test122008/test985739",
"target_port": 80,
"priority": null,
"backend_name": null,
"selector": null,
},
],
"config": null,
"stickiness_policy": null,
},
},
"portRules": [
{
"backendName": null,
"hostname": null,
"path": null,
"priority": null,
"protocol": "http",
"selector": null,
"serviceId": 348,
"sourcePort": 9001,
"targetPort": 80,
},
],
"publicEndpoints": null,
"removed": null,
"retainIp": null,
"scale": 1,
"scalePolicy": null,
"secondaryLaunchConfigs": [ ],
"selectorLink": null,
"startOnCreate": false,
"stickinessPolicy": null,
"transitioning": "no",
"transitioningMessage": null,
"transitioningProgress": null,
"upgrade": null,
"uuid": "b107fe24-015f-4b15-aae2-ec8beadfb171",
"vip": null
```
Once the service is activated , there is no lb instance that gets created.
|
test
|
activating balancer service does not result in any lb instance being created rancher version latest cattle build from lbrefactormetadata from steps to reproduce the problem create a balancer service with the following definition name lb state active accountid assignserviceipaddress false certificateids null config null created createdts currentscale defaultcertificateid null description null environmentid externalid null fqdn null healthstate unhealthy kind balancerservice launchconfig healthcheck healthythreshold initializingtimeout null interval name null port recreateonquorumstrategyconfig null reinitializingtimeout null requestline null responsetimeout strategy null unhealthythreshold kind container networkmode managed ports tcp privileged false publishallports false readonly false startoncreate true stdinopen false tty false version vcpu metadata lb certs default cert null port rules source port protocol http path null hostname null service target port priority null backend name null selector null config null stickiness policy null portrules backendname null hostname null path null priority null protocol http selector null serviceid sourceport targetport publicendpoints null removed null retainip null scale scalepolicy null secondarylaunchconfigs selectorlink null startoncreate false stickinesspolicy null transitioning no transitioningmessage null transitioningprogress null upgrade null uuid vip null once the service is activated there is no lb instance that gets created
| 1
|
202,655
| 15,294,358,611
|
IssuesEvent
|
2021-02-24 02:18:22
|
eclipse/openj9
|
https://api.github.com/repos/eclipse/openj9
|
closed
|
shrtest_win_SE80_0 testConstructor The specified module could not be found
|
test failure
|
https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1/246
shrtest_win_SE80_0
https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Test/Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1/246/functional_test_output.tar.gz
```
OSCacheTests started
OSCacheTestSysv begin
testBasic begin
testBasic: block accessed of length 1048384
testBasic: contents correct
JVMSHRC010I Shared cache "OSCacheUnitTest1" is destroyed
testBasic: PASS
testMultipleCreate begin
JVMSHRC010I Shared cache "OSCTest1" is destroyed
JVMSHRC010I Shared cache "OSCTest1" is destroyed
testMultipleCreate: PASS
testConstructor begin
JVMSHRC659E An error has occurred while opening shared memory
JVMSHRC336E Port layer error code = 1
JVMSHRC337E Platform error message: The specified module could not be found
JVMSHRC662I Error recovery: destroyed semaphore set associated with shared class cache.
JVMSHRC680E Error recovery failure: Failed to remove the semaphore set control file C290M4F0A32_testConstructor_G41L00 associated with shared class cache.
JVMSHRC336E Port layer error code = 17
An unhandled error (4) has occurred.
J9Generic_Signal_Number=00000004
ExceptionCode=c0000005
ExceptionAddress=71F7BFE0
ContextFlags=0001007f
Handler1=01036450
Handler2=71F755D0
InaccessibleReadAddress=00000013
EDI=0053E280
ESI=00000013
EAX=FFFFFFFF
EBX=00000000
ECX=00000014
EDX=FFFFFFFF
EIP=71F7BFE0
ESP=0053E1C8
EBP=FFFFFFFF
EFLAGS=00210246
Module=F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdkbinary\j2sdk-image\jre\bin\default\J9PRT29.dll
Module_base_address=71F60000
Offset_in_DLL=0001bfe0
JVMDUMP039I Processing dump event "abort", detail "" at 2021/02/07 22:34:56 - please wait.
JVMDUMP032I JVM requested System dump using 'F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\core.20210207.223456.2548.0001.dmp' in response to an event
JVMDUMP010I System dump written to F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\core.20210207.223456.2548.0001.dmp
JVMDUMP032I JVM requested Java dump using 'F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\javacore.20210207.223456.2548.0002.txt' in response to an event
JVMDUMP010I Java dump written to F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\javacore.20210207.223456.2548.0002.txt
JVMDUMP032I JVM requested Snap dump using 'F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\Snap.20210207.223456.2548.0003.trc' in response to an event
JVMDUMP010I Snap dump written to F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\Snap.20210207.223456.2548.0003.trc
JVMDUMP032I JVM requested JIT dump using 'F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\jitdump.20210207.223456.2548.0004.dmp' in response to an event
JVMDUMP010I JIT dump written to F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\jitdump.20210207.223456.2548.0004.dmp
JVMDUMP013I Processed dump event "abort", detail "".
```
|
1.0
|
shrtest_win_SE80_0 testConstructor The specified module could not be found - https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1/246
shrtest_win_SE80_0
https://140-211-168-230-openstack.osuosl.org/artifactory/ci-eclipse-openj9/Test/Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1/246/functional_test_output.tar.gz
```
OSCacheTests started
OSCacheTestSysv begin
testBasic begin
testBasic: block accessed of length 1048384
testBasic: contents correct
JVMSHRC010I Shared cache "OSCacheUnitTest1" is destroyed
testBasic: PASS
testMultipleCreate begin
JVMSHRC010I Shared cache "OSCTest1" is destroyed
JVMSHRC010I Shared cache "OSCTest1" is destroyed
testMultipleCreate: PASS
testConstructor begin
JVMSHRC659E An error has occurred while opening shared memory
JVMSHRC336E Port layer error code = 1
JVMSHRC337E Platform error message: The specified module could not be found
JVMSHRC662I Error recovery: destroyed semaphore set associated with shared class cache.
JVMSHRC680E Error recovery failure: Failed to remove the semaphore set control file C290M4F0A32_testConstructor_G41L00 associated with shared class cache.
JVMSHRC336E Port layer error code = 17
An unhandled error (4) has occurred.
J9Generic_Signal_Number=00000004
ExceptionCode=c0000005
ExceptionAddress=71F7BFE0
ContextFlags=0001007f
Handler1=01036450
Handler2=71F755D0
InaccessibleReadAddress=00000013
EDI=0053E280
ESI=00000013
EAX=FFFFFFFF
EBX=00000000
ECX=00000014
EDX=FFFFFFFF
EIP=71F7BFE0
ESP=0053E1C8
EBP=FFFFFFFF
EFLAGS=00210246
Module=F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdkbinary\j2sdk-image\jre\bin\default\J9PRT29.dll
Module_base_address=71F60000
Offset_in_DLL=0001bfe0
JVMDUMP039I Processing dump event "abort", detail "" at 2021/02/07 22:34:56 - please wait.
JVMDUMP032I JVM requested System dump using 'F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\core.20210207.223456.2548.0001.dmp' in response to an event
JVMDUMP010I System dump written to F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\core.20210207.223456.2548.0001.dmp
JVMDUMP032I JVM requested Java dump using 'F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\javacore.20210207.223456.2548.0002.txt' in response to an event
JVMDUMP010I Java dump written to F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\javacore.20210207.223456.2548.0002.txt
JVMDUMP032I JVM requested Snap dump using 'F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\Snap.20210207.223456.2548.0003.trc' in response to an event
JVMDUMP010I Snap dump written to F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\Snap.20210207.223456.2548.0003.trc
JVMDUMP032I JVM requested JIT dump using 'F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\jitdump.20210207.223456.2548.0004.dmp' in response to an event
JVMDUMP010I JIT dump written to F:\Users\jenkins\workspace\Test_openjdk8_j9_sanity.functional_x86-32_windows_Nightly_testList_1\openjdk-tests\TKG\output_16127567165590\shrtest_win_SE80_0\jitdump.20210207.223456.2548.0004.dmp
JVMDUMP013I Processed dump event "abort", detail "".
```
|
test
|
shrtest win testconstructor the specified module could not be found shrtest win oscachetests started oscachetestsysv begin testbasic begin testbasic block accessed of length testbasic contents correct shared cache is destroyed testbasic pass testmultiplecreate begin shared cache is destroyed shared cache is destroyed testmultiplecreate pass testconstructor begin an error has occurred while opening shared memory port layer error code platform error message the specified module could not be found error recovery destroyed semaphore set associated with shared class cache error recovery failure failed to remove the semaphore set control file testconstructor associated with shared class cache port layer error code an unhandled error has occurred signal number exceptioncode exceptionaddress contextflags inaccessiblereadaddress edi esi eax ffffffff ebx ecx edx ffffffff eip esp ebp ffffffff eflags module f users jenkins workspace test sanity functional windows nightly testlist openjdkbinary image jre bin default dll module base address offset in dll processing dump event abort detail at please wait jvm requested system dump using f users jenkins workspace test sanity functional windows nightly testlist openjdk tests tkg output shrtest win core dmp in response to an event system dump written to f users jenkins workspace test sanity functional windows nightly testlist openjdk tests tkg output shrtest win core dmp jvm requested java dump using f users jenkins workspace test sanity functional windows nightly testlist openjdk tests tkg output shrtest win javacore txt in response to an event java dump written to f users jenkins workspace test sanity functional windows nightly testlist openjdk tests tkg output shrtest win javacore txt jvm requested snap dump using f users jenkins workspace test sanity functional windows nightly testlist openjdk tests tkg output shrtest win snap trc in response to an event snap dump written to f users jenkins workspace test sanity functional 
windows nightly testlist openjdk tests tkg output shrtest win snap trc jvm requested jit dump using f users jenkins workspace test sanity functional windows nightly testlist openjdk tests tkg output shrtest win jitdump dmp in response to an event jit dump written to f users jenkins workspace test sanity functional windows nightly testlist openjdk tests tkg output shrtest win jitdump dmp processed dump event abort detail
| 1
|
327,891
| 24,159,847,226
|
IssuesEvent
|
2022-09-22 10:41:47
|
sillsdev/ptx2pdf
|
https://api.github.com/repos/sillsdev/ptx2pdf
|
closed
|
Nested character styles not behaving as expected
|
No action Documentation
|
A user has reported that nested character styles are being reset too early:

I wondered if it was the \bdit* which was turning off both the \it and the \bd and causing the issue, so I temporarily changed the \bdit ... \bdit* to just a \it ... \it* but the bug persisted:

|
1.0
|
Nested character styles not behaving as expected - A user has reported that nested character styles are being reset too early:

I wondered if it was the \bdit* which was turning off both the \it and the \bd and causing the issue, so I temporarily changed the \bdit ... \bdit* to just a \it ... \it* but the bug persisted:

|
non_test
|
nested character styles not behaving as expected a user has reported that nested character styles are being reset too early i wondered if it was the bdit which was turning off both the it and the bd and causing the issue so i temporarily changed the bdit bdit to just a it it but the bug persisted
| 0
|