Dataset columns:

| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 5 to 112 |
| repo_url | string | lengths 34 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 757 |
| labels | string | lengths 4 to 664 |
| body | string | lengths 3 to 261k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 261k |
| label | string | 2 classes |
| text | string | lengths 96 to 232k |
| binary_label | int64 | 0 to 1 |

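Reading the schema: `created_at` is a fixed 19-character timestamp, and `binary_label` pairs with `label`. A minimal sketch of parsing one row's scalar fields (the values are copied from the first row below; the timestamp format is an assumption inferred from the fixed length):

```python
from datetime import datetime

# Scalar fields of the first preview row below.
row = {
    "type": "IssuesEvent",
    "created_at": "2020-09-27 01:55:15",
    "repo": "Cockatrice/Cockatrice",
    "label": "defect",
    "binary_label": 1,
}

# created_at has stringlengths 19 to 19, consistent with "YYYY-MM-DD HH:MM:SS".
ts = datetime.strptime(row["created_at"], "%Y-%m-%d %H:%M:%S")

# binary_label appears to be the 0/1 encoding of label == "defect".
assert (row["label"] == "defect") == bool(row["binary_label"])
```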
Unnamed: 0: 54,411
id: 13,651,545,018
type: IssuesEvent
created_at: 2020-09-27 01:55:15
repo: Cockatrice/Cockatrice
repo_url: https://api.github.com/repos/Cockatrice/Cockatrice
action: closed
title: Cockatrice ignores saved "Show buddies only games" setting
labels: App - Cockatrice Defect - Regression
body:
<b>System Information:</b>
<!-- Go to "Help → View Debug Log" and copy all lines above the separation here! -->
<!-- If you can't install Cockatrice to access that information, make
sure to include your OS and the app version from the setup file here -->
Client Version: 2.5.2-beta (2018-06-10)
Client Operating System: Arch Linux
Build Architecture: 64-bit
Qt Version: 5.11.0
System Locale: en_US
__________________________________________________________________________________________
<!-- Explain your issue/request/suggestion in detail here! -->
<!-- This repository is ONLY about development of the Cockatrice app.
If you have any problems with a server (e.g. registering, connecting, ban...)
you have to contact that server's owner/admin.
Check this list of public servers with webpage links and contact details:
https://github.com/Cockatrice/Cockatrice/wiki/Public-Servers -->
Regardless of what filter I used previously, the saved setting for filtering buddies only games (`filter_games/show_buddies_only_games` in `gamefilters.ini`) is completely ignored, and instead is always false when i launch trice.
Additionally, trice does not save my preference at all when i close the app.
This was last fixed in #2434, but it seems to have been broken since.
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 1

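The row above concerns a boolean setting that should persist in `gamefilters.ini` but is neither saved on exit nor read on launch. As illustration only (Cockatrice itself is C++/Qt, so this is not its code), a Python `configparser` round-trip sketches the write-on-close / read-on-launch cycle the report says is broken:

```python
import configparser
import io

# Write the setting on "close": section filter_games, key show_buddies_only_games.
cfg = configparser.ConfigParser()
cfg["filter_games"] = {"show_buddies_only_games": "true"}
buf = io.StringIO()
cfg.write(buf)  # stand-in for writing gamefilters.ini to disk

# Read it back on "launch"; a correct round-trip recovers the saved value.
cfg2 = configparser.ConfigParser()
cfg2.read_string(buf.getvalue())
assert cfg2.getboolean("filter_games", "show_buddies_only_games") is True
```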
Unnamed: 0: 40,131
id: 9,852,617,081
type: IssuesEvent
created_at: 2019-06-19 13:14:56
repo: jOOQ/jOOQ
repo_url: https://api.github.com/repos/jOOQ/jOOQ
action: closed
title: Remove unnecessary {@inheritDoc} Javadoc tags in jOOQ's internals
labels: C: Documentation E: All Editions P: Low R: Fixed T: Defect
body:
Some of jOOQ's internals make use of an unnecessary `{@inheritDoc}` tag:
```java
/**
* {@inheritDoc}
*/
```
Just like the ones that are being generated: #8685, these could be removed.
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 1

Unnamed: 0: 14,452
id: 2,812,163,967
type: IssuesEvent
created_at: 2015-05-18 06:27:27
repo: minux/go-tour
repo_url: https://api.github.com/repos/minux/go-tour
action: closed
title: ui: interface should be clearer
labels: auto-migrated Priority-Medium Type-Defect
body:
```
Key issues:
- The arrows shouldn't be only at the bottom of the screen. They should be
located near the other controls. They are not very discoverable. I would expect
the forward/back/toc/page# elements to be colocated.
- The table of contents icon can be confusing. It looks like a forward/backward
control, but it's not.
```
Original issue reported on code.google.com by `a...@golang.org` on 14 Sep 2012 at 3:55
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 1

Unnamed: 0: 227,541
id: 18,068,104,724
type: IssuesEvent
created_at: 2021-09-20 21:46:07
repo: NCAR/DART
repo_url: https://api.github.com/repos/NCAR/DART
action: closed
title: obs_converter run_tests.csh upgrade
labels: Refactor Test Trivial
body:
The obs_converter/run_tests.csh runs preprocess twice, basically. Once in the quickbuild.csh and then again during the 'run' portion of the script.
Also need to have separate input.nml.testing scripts for some converters. Some converters have hardcoded output names that do not match the input.nml examples in the converters' work directory.
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: non_defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 0

Unnamed: 0: 9,198
id: 2,615,138,585
type: IssuesEvent
created_at: 2015-03-01 06:12:28
repo: chrsmith/reaver-wps
repo_url: https://api.github.com/repos/chrsmith/reaver-wps
action: closed
title: Hang on Waiting for beacon
labels: auto-migrated Priority-Medium Type-Defect
body:
```
What steps will reproduce the problem?
1. reaver -i wlan0 -b XX:XX:XX:XX:XX:XX
What is the expected output? What do you see instead?
Reaver hangs on "Waiting for beacon from XX:XX:XX:XX:XX:XX". It is not a range
problem, I am able to wirelessly connect to the router using the PSK. It always
hangs on this message on all 3 routers I have tried.
What version of the product are you using? On what operating system?
ver: reaver rev 22
os: fedora 16
wifi: Ralink corp. RT2860 (Device 2790)
driver: rt2800pci
Please provide any additional information below.
Specifying essid and using -c and -f does not cure the problem, -vv provides no
more information. Hangs for 5 minutes, still waiting.
```
Original issue reported on code.google.com by `rnd44...@gmail.com` on 30 Dec 2011 at 3:20
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 1

Unnamed: 0: 446,069
id: 31,392,269,219
type: IssuesEvent
created_at: 2023-08-26 13:41:30
repo: JosephHewitt/wardriver_rev3
repo_url: https://api.github.com/repos/JosephHewitt/wardriver_rev3
action: closed
title: boards.txt vs docs? 2.0.7 or 2.0.9?
labels: documentation
body:
I noticed the docs show when setting up the IDE to load boards version 2.0.7. boards.txt in the code shows "esp32:esp32@2.0.9". Should the docs refer to 2.0.9 as well and should users update their IDE configs?
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: non_defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 0

Unnamed: 0: 48,214
id: 13,067,536,398
type: IssuesEvent
created_at: 2020-07-31 00:46:33
repo: icecube-trac/tix2
repo_url: https://api.github.com/repos/icecube-trac/tix2
action: closed
title: [core-removal] move documentation to sphinx (Trac #1988)
labels: Migrated from Trac analysis defect
body:
Migrated from https://code.icecube.wisc.edu/ticket/1988
```json
{
"status": "closed",
"changetime": "2019-02-13T14:14:55",
"description": "",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067295757382",
"component": "analysis",
"summary": "[core-removal] move documentation to sphinx",
"priority": "normal",
"keywords": "",
"time": "2017-04-26T19:06:22",
"milestone": "",
"owner": "",
"type": "defect"
}
```
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 1

Unnamed: 0: 60,523
id: 17,023,447,535
type: IssuesEvent
created_at: 2021-07-03 02:04:50
repo: tomhughes/trac-tickets
repo_url: https://api.github.com/repos/tomhughes/trac-tickets
action: closed
title: RTL names in LTR locale shown funny
labels: Component: website Priority: major Resolution: fixed Type: defect
body:
**[Submitted to the original trac issue database at 12.26pm, Friday, 24th July 2009]**
Names in right-to-left languages (eg Hebrew) are displayed weirdly when shown using left-to-right locale (eg English).
Example:
http://www.openstreetmap.org/browse/node/34621917
see the page title, which is normally displayed in the form "name (number)" but is shown as unusual "number) name)" in such cases. Tag name=value looks ok, probably because it isn't interpolated, or just because there is nothing to the right of the name that would be moved to the left.
Note that i am not sure how it is correct for RTL languages, but it doesn't look right in my (LTR) locale.
Might be remotely related to #1930
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 1

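The RTL row above describes parentheses reordering when an RTL name is dropped into an LTR "name (number)" template. One common mitigation is inserting a LEFT-TO-RIGHT MARK (U+200E) after the RTL run; this sketch is illustrative only and is not the fix the OpenStreetMap site actually shipped:

```python
# U+200E resets bidi direction so the trailing "(number)" stays left-to-right.
LRM = "\u200e"
name = "\u05e9\u05dc\u05d5\u05dd"  # a Hebrew word as a stand-in RTL name

title_plain = f"{name} (34621917)"        # may render as "number) name)" in LTR UIs
title_fixed = f"{name}{LRM} (34621917)"   # mark after the name keeps "name (number)"

# The mark changes only rendering, not the visible characters.
assert title_fixed.replace(LRM, "") == title_plain
```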
Unnamed: 0: 142,009
id: 13,003,987,287
type: IssuesEvent
created_at: 2020-07-24 08:02:25
repo: hlfsousa/ncml-binding
repo_url: https://api.github.com/repos/hlfsousa/ncml-binding
action: opened
title: Create examples project
labels: documentation enhancement
body:
We need some samples to aid support and to function as knowledge base/kickstart. So we create a parent examples project and one module for each set of features to explored. Here is a starting list:
- Code generation as build step
- Custom code in generated files
- Custom property names
- Reading and processing a file
- Writing a file
- Custom templates
Each example project must include a Readme.md and be thoroughly documented. The Wiki must only include a pointer to the examples parent project in order to avoid duplication. Instead of associating all work to this issue, create a new issue for each project and link to this one. Only the parent examples project should be linked here..
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: non_defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 0

Unnamed: 0: 557,226
id: 16,504,223,735
type: IssuesEvent
created_at: 2021-05-25 17:14:27
repo: Scratch-Bookmarklets/Scratch-Bookmarklets.github.io
repo_url: https://api.github.com/repos/Scratch-Bookmarklets/Scratch-Bookmarklets.github.io
action: closed
title: Bug: Wrong Address
labels: Medium Priority Security bug
body:
**Describe the bug**
if you type `http://scratch-bookmarklets.gihub.io/` or `https://scratch-bookmarklets.gihub.io/` it redirects to `http://www1.gihub.io/?tm=1&subid4=1621948496.0043001679&KW1=Dedicated%20Server%20New%20Jersey&KW2=Dedicated%20Server%20North%20America&KW3=Dedicated%20Server%20Asia&KW4=Dedicated%20Server%20Europe&searchbox=0&domainname=0&backfill=0`
**To Reproduce**
type `http://scratch-bookmarklets.gihub.io/` or `https://scratch-bookmarklets.gihub.io/` in the searchbar
**Expected behavior**
be our site
**Browser Information**
ChromeOS 13904.37.0, Chrome 91.0.4472.66
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: non_defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 0

Unnamed: 0: 648,600
id: 21,190,028,239
type: IssuesEvent
created_at: 2022-04-08 16:19:58
repo: craftercms/craftercms
repo_url: https://api.github.com/repos/craftercms/craftercms
action: closed
title: [plugins] Chatbot plugins aren't working, bug in FreeMarker's include
labels: bug priority: high
body:
### Bug Report
#### Crafter CMS Version
`4.0.0-SNAPSHOT-dc9526`
#### Date of Build
`4/7/2022`
#### Describe the bug
Chatbot plugins aren't working, since there is a bug with FreeMarker's include. For example, Collect.Chat plugin interpolation `${id}` value in the .ftl code, is calling the id of the plugin `w.CollectId = "org.craftercms.plugin.collectChat"` instead of the proper chatbot's id. Users can workaround this issue by changing the interpolation value by the respective the chatbot's id or token.
#### To Reproduce
Steps to reproduce the behavior:
1. Create a site based on Editorial BP.
2. Go to 'Project Tools'
3. Go to 'Plugin Management'
4. Install a 'Chatbot Plugin'
5. The 'Chatbot Plugin' won't appear in preview, even with enabled `true`.
6. Edit the plugin's .ftl, and change the `${id}` for the chatbot's id or token.
#### Logs
The browser's web tool console will show errors when trying to access this plugins, similar to the next ones:
```
XHR GET https://load.collect.chat/bots/org.craftercms.plugin.collectChat
[HTTP/2 400 Bad Request 403ms]
```
OR
```
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://load.collect.chat/bots/org.craftercms.plugin.collectChat. (Reason: CORS header 'Access-Control-Allow-Origin' missing). Status code: 400.
```
#### Screenshots
https://user-images.githubusercontent.com/90878288/162325026-748b8ee8-ea95-4206-a3db-3fd0bd5c2ad0.mp4
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: non_defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 0

Unnamed: 0: 66,735
id: 20,607,914,606
type: IssuesEvent
created_at: 2022-03-07 04:07:02
repo: scipy/scipy
repo_url: https://api.github.com/repos/scipy/scipy
action: closed
title: BUG: AssertionError in test__dual_annealing.py in test_bounds_class
labels: defect scipy.optimize
body:
### Describe your issue.
While testing scipy/optimize/tests, I noticed that test_bounds_class in test__dual_annealing.py throws an AssertionError at line 365
`assert_allclose(ret_bounds_class.x, np.arange(-2, 3), atol=1e-8)`.
However, easing the absolute tolerance to say, 1e-7 lets the test pass, although I'm not sure if this is the right thing to do.
### Reproducing Code Example
```python
python runtests.py scipy/optimize/tests
```
### Error message
```shell
scipy/optimize/tests/test__dual_annealing.py:365: in test_bounds_class
assert_allclose(ret_bounds_class.x, np.arange(-2, 3), atol=1e-8)
E AssertionError:
E Not equal to tolerance rtol=1e-07, atol=1e-08
E
E Mismatched elements: 1 / 5 (20%)
E Max absolute difference: 1.0046973e-08
E Max relative difference: 0.
E x: array([-2.000000e+00, -1.000000e+00, -1.004697e-08, 1.000000e+00,
E 2.000000e+00])
E y: array([-2, -1, 0, 1, 2])
bounds = Bounds(array([-5.12, -5.12, -5.12, 1. , 2. ]), array([-2. , -1. , 5.12, 5.12, 5.12]))
bounds_old = [(-5.12, -2.0), (-5.12, -1.0), (-5.12, 5.12), (1.0, 5.12), (2.0, 5.12)]
func = <function TestDualAnnealing.test_bounds_class.<locals>.func at 0x7fffe25069e0>
lw = [-5.12, -5.12, -5.12, 1.0, 2.0]
ret_bounds_class = fun: 10.000000000000021
message: ['Maximum number of iteration reached']
nfev: 10199
nhev: 0
nit: 1... success: True
x: array([-2.0000000e+00, -1.0000000e+00, -1.0046973e-08, 1.0000000e+00,
2.0000000e+00])
ret_bounds_list = fun: 10.000000000000021
message: ['Maximum number of iteration reached']
nfev: 10199
nhev: 0
nit: 1... success: True
x: array([-2.0000000e+00, -1.0000000e+00, -1.0046973e-08, 1.0000000e+00,
2.0000000e+00])
self = <scipy.optimize.tests.test__dual_annealing.TestDualAnnealing object at 0x7fffe317d7b0>
up = [-2.0, -1.0, 5.12, 5.12, 5.12]
```
### SciPy/NumPy/Python version information
1.9.0 1.22.2 sys.version_info(major=3, minor=10, micro=2, releaselevel='final', serial=0)
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 1

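The scipy row above turns on `assert_allclose`'s element-wise test, `|actual - desired| <= atol + rtol * |desired|`. Re-checking the failing element by hand shows why the default `atol=1e-8` fails while easing it to `1e-7` passes (since `desired` is 0, the `rtol` term contributes nothing):

```python
# Element-wise closeness test matching numpy's assert_allclose criterion.
def allclose_elem(actual: float, desired: float,
                  rtol: float = 1e-7, atol: float = 1e-8) -> bool:
    return abs(actual - desired) <= atol + rtol * abs(desired)

x, y = -1.0046973e-08, 0.0  # the mismatched element vs. the expected 0

assert not allclose_elem(x, y)         # |x| = 1.0047e-8 > 1e-8, so it fails
assert allclose_elem(x, y, atol=1e-7)  # 1.0047e-8 <= 1e-7, so it passes
```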
Unnamed: 0: 41,487
id: 10,483,341,565
type: IssuesEvent
created_at: 2019-09-24 13:52:06
repo: google/flogger
repo_url: https://api.github.com/repos/google/flogger
action: closed
title: Flogger Log4J backend causes Gradle 5.6 Sync to fail
labels: P3 type=defect
body:
Adding the Log4J backend causes an internal Gradle exception:
```
<ij_msg_gr>Project resolve errors<ij_msg_gr><ij_nav>A:\Tobi\Workspaces\JamesBot\build.gradle<ij_nav><i><b>root project 'james': Unable to resolve additional project configuration.</b><eol>Details: org.gradle.api.artifacts.ResolveException: Could not resolve all dependencies for configuration ':runtimeClasspath'.<eol>Caused by: org.gradle.internal.resolve.ArtifactNotFoundException: Could not find jmxtools.jar (com.sun.jdmk:jmxtools:1.2.1).<eol>Searched in the following locations:<eol> https://jcenter.bintray.com/com/sun/jdmk/jmxtools/1.2.1/jmxtools-1.2.1.jar</i>
CONFIGURE SUCCESSFUL in 0s
```
The faulty dependencies:
```groovy
compileOnly 'com.google.flogger:flogger:0.4'
runtimeOnly 'com.google.flogger:flogger-log4j-backend:0.4'
```
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 1

Unnamed: 0: 67,107
id: 20,905,301,627
type: IssuesEvent
created_at: 2022-03-24 01:02:21
repo: jccastillo0007/equus-ui
repo_url: https://api.github.com/repos/jccastillo0007/equus-ui
action: closed
title: Does not allow saving the list of processes.
labels: Defecto
body:
When you click Save, the error shown below is displayed
<img width="697" alt="procesos" src="https://user-images.githubusercontent.com/2912775/158649084-d0821c94-5c69-403b-a473-c4a128eb773e.PNG">
index: 1.0
text_combine: (title + body verbatim; duplicate omitted)
label: defect
text: (lowercased, tokenized duplicate; omitted)
binary_label: 1

Unnamed: 0: 58,823
id: 16,812,410,076
type: IssuesEvent
created_at: 2021-06-17 00:42:43
repo: vector-im/element-web
repo_url: https://api.github.com/repos/vector-im/element-web
action: closed
title: Notifications panel, and room unread count, don't include keyword notifications
labels: A-Notif-Panel A-Notifications P2 S-Major T-Defect
body:
If keyword notifications are enabled, they only go to push notifications, which if missed are really difficult to track down because there is no other way to find out where the title bar notification count is coming from, and the user simply has to go from room to room until the notification count disappears, and then to find the notification itself has to manually search the room! Very bad UX which would be fixed by adding the keyword notification to the room unread count as well as the notification panel
|
1.0
|
Notifications panel, and room unread count, don't include keyword notifications - If keyword notifications are enabled, they only go to push notifications, which if missed are really difficult to track down because there is no other way to find out where the title bar notification count is coming from, and the user simply has to go from room to room until the notification count disappears, and then to find the notification itself has to manually search the room! Very bad UX which would be fixed by adding the keyword notification to the room unread count as well as the notification panel
|
defect
|
notifications panel and room unread count don t include keyword notifications if keyword notifications are enabled they only go to push notifications which if missed are really difficult to track down because there is no other way to find out where the title bar notification count is coming from and the user simply has to go from room to room until the notification count disappears and then to find the notification itself has to manually search the room very bad ux which would be fixed by adding the keyword notification to the room unread count as well as the notification panel
| 1
|
10,931
| 2,622,850,351
|
IssuesEvent
|
2015-03-04 08:04:52
|
max99x/pagemon-chrome-ext
|
https://api.github.com/repos/max99x/pagemon-chrome-ext
|
closed
|
Page Monitor Failed To Detect Changes To Reservations Page
|
auto-migrated Priority-Medium Type-Defect
|
```
I was trying to monitor the reservations page for a restaurant that I wanted to
get into as the reservations were usually used up before I looked. Page Monitor
has never alerted me to page changes despite me popping over on occasion and
finding changes.
What steps will reproduce the problem?
Configure Page Monitor to watch http://mvink.com/ink-reservations/
What is the expected output? What do you see instead?
If there are no reservations there is no form. Just a message saying to try
back. When there are reservations available there are two drop-down boxes that
change based on availability.
What version of the Chrome are you using? On what operating system?
The very latest version of Chrome on Windows 7 SP1 64-bit.
Please provide any additional information below.
Nothing else to say.
```
Original issue reported on code.google.com by `irthe...@gmail.com` on 11 Oct 2011 at 9:18
|
1.0
|
Page Monitor Failed To Detect Changes To Reservations Page - ```
I was trying to monitor the reservations page for a restaurant that I wanted to
get into as the reservations were usually used up before I looked. Page Monitor
has never alerted me to page changes despite me popping over on occasion and
finding changes.
What steps will reproduce the problem?
Configure Page Monitor to watch http://mvink.com/ink-reservations/
What is the expected output? What do you see instead?
If there are no reservations there is no form. Just a message saying to try
back. When there are reservations available there are two drop-down boxes that
change based on availability.
What version of the Chrome are you using? On what operating system?
The very latest version of Chrome on Windows 7 SP1 64-bit.
Please provide any additional information below.
Nothing else to say.
```
Original issue reported on code.google.com by `irthe...@gmail.com` on 11 Oct 2011 at 9:18
|
defect
|
page monitor failed to detect changes to reservations page i was trying to monitor the reservations page for a restaurant that i wanted to get into as the reservations were usually used up before i looked page monitor has never alerted me to page changes despite me popping over on occasion and finding changes what steps will reproduce the problem configure page monitor to watch what is the expected output what do you see instead if there are no reservations there is no form just a message saying to try back when there are reservations available there are two drop down boxes that change based on availability what version of the chrome are you using on what operating system the very latest version of chrome on windows bit please provide any additional information below nothing else to say original issue reported on code google com by irthe gmail com on oct at
| 1
|
374,293
| 26,110,218,549
|
IssuesEvent
|
2022-12-27 18:37:05
|
AntiMicro/antimicro
|
https://api.github.com/repos/AntiMicro/antimicro
|
closed
|
Document setting up build environment under Wine
|
documentation
|
Verified under a 32bit prefix. Should work fine under 64bit? Need to check... Qt, CMake, and SDL install fine without any hackery. WIX, however, requires .NET. WIX 3.10 requires both 3.5.1 (sp1?) and 4.0, so install both...
Requirements:
1. Wine
2. winetricks ([https://wiki.winehq.org/Winetricks](https://wiki.winehq.org/Winetricks))
3. Setup files, as listed in the Readme. (i.e. Qt, SDL, CMake, WIX)
Setup Steps:
1. Create a new 32bit prefix. On 64bit machines, will need to do: `WINEPREFIX=/some/dir WINEARCH=win32 wine wineboot`
2. Install .NET 3.5 sp1 and corefonts: `WINEPREFIX=/some/dir winetricks dotnet35sp1 corefonts`
3. Install .NET 4: `WINEPREFIX=/some/dir winetricks dotnet40`
4. Follow Readme.
|
1.0
|
Document setting up build environment under Wine - Verified under a 32bit prefix. Should work fine under 64bit? Need to check... Qt, CMake, and SDL install fine without any hackery. WIX, however, requires .NET. WIX 3.10 requires both 3.5.1 (sp1?) and 4.0, so install both...
Requirements:
1. Wine
2. winetricks ([https://wiki.winehq.org/Winetricks](https://wiki.winehq.org/Winetricks))
3. Setup files, as listed in the Readme. (i.e. Qt, SDL, CMake, WIX)
Setup Steps:
1. Create a new 32bit prefix. On 64bit machines, will need to do: `WINEPREFIX=/some/dir WINEARCH=win32 wine wineboot`
2. Install .NET 3.5 sp1 and corefonts: `WINEPREFIX=/some/dir winetricks dotnet35sp1 corefonts`
3. Install .NET 4: `WINEPREFIX=/some/dir winetricks dotnet40`
4. Follow Readme.
|
non_defect
|
document setting up build environment under wine verified under a prefix should work fine under need to check qt cmake and sdl install fine without any hackery wix however requires net wix requires both and so install both requirements wine winetricks setup files as listed in the readme i e qt sdl cmake wix setup steps create a new prefix on machines will need to do wineprefix some dir winearch wine wineboot install net and corefonts wineprefix some dir winetricks corefonts install net wineprefix some dir winetricks follow readme
| 0
|
49,748
| 3,004,144,150
|
IssuesEvent
|
2015-07-25 16:37:35
|
mistic100/Piwigo
|
https://api.github.com/repos/mistic100/Piwigo
|
closed
|
[albums] The menu is not in "compact mode" when you view the Batch Manager
|
low priority minor
|
**Reported by Rio on 6 Apr 2011 17:15**
In administration, the left menu is not in "compact mode" when you view the Batch Manager.
This happens only if you are logged in Italian (Tested on EN, FR, IT)
I'll update if I find an other example.
**Steps to reproduce:** Login and change language on IT if necessary.
In administration (Amministrazione) go to Photos (Foto) and Batch Manager. (Gestione dei lotti)
[Piwigo Bugtracker #2252](http://piwigo.org/bugs/view.php?id=2252)
|
1.0
|
[albums] The menu is not in "compact mode" when you view the Batch Manager - **Reported by Rio on 6 Apr 2011 17:15**
In administration, the left menu is not in "compact mode" when you view the Batch Manager.
This happens only if you are logged in Italian (Tested on EN, FR, IT)
I'll update if I find an other example.
**Steps to reproduce:** Login and change language on IT if necessary.
In administration (Amministrazione) go to Photos (Foto) and Batch Manager. (Gestione dei lotti)
[Piwigo Bugtracker #2252](http://piwigo.org/bugs/view.php?id=2252)
|
non_defect
|
the menu is not in compact mode when you view the batch manager reported by rio on apr in administration the left menu is not in quot compact mode quot when you view the batch manager this happens only if you are logged in italian tested on en fr it i ll update if i find an other example steps to reproduce login and change language on it if necessary in administration amministrazione go to photos foto and batch manager gestione dei lotti
| 0
|
29,766
| 5,873,808,933
|
IssuesEvent
|
2017-05-15 14:46:55
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
closed
|
Datatable: Bug in the datatable component when it is the child of a column
|
defect duplicate
|
**1) Environment**
PrimeFaces version: All version above 6.0.16
Application server + version: GlassFish Server Open Source Edition 4.1 (build 13) / Payara Server 4.1.1.163 #badassfish (build 215)
Affected browsers: All
**2) Expected behavior**
If the datatable is within a column, the columns of the datatable are not duplicated with empty columns.
**3) Actual behavior**
If the datatable is a child of a column of a panelgrid the columns inside the datatable are bugged, for each column added in the datatable it creates an empty one.
**4) Steps to reproduce**
```
<p:panelGrid>
<p:row>
<p:column>
<p:dataTable value="#{myController.list}" var="item">
<p:column headerText="Text 1">
<h:outputText value="#{item.value1}"/>
</p:column>
<p:column headerText="Text 2">
<h:outputText value="#{item.value2}"/>
</p:column>
<p:column headerText="Text 3">
<h:outputText value="#{item.value3}"/>
</p:column>
</p:dataTable>
</p:column>
</p:row>
</p:panelGrid>
```
|
1.0
|
Datatable: Bug in the datatable component when it is the child of a column - **1) Environment**
PrimeFaces version: All version above 6.0.16
Application server + version: GlassFish Server Open Source Edition 4.1 (build 13) / Payara Server 4.1.1.163 #badassfish (build 215)
Affected browsers: All
**2) Expected behavior**
If the datatable is within a column, the columns of the datatable are not duplicated with empty columns.
**3) Actual behavior**
If the datatable is a child of a column of a panelgrid the columns inside the datatable are bugged, for each column added in the datatable it creates an empty one.
**4) Steps to reproduce**
```
<p:panelGrid>
<p:row>
<p:column>
<p:dataTable value="#{myController.list}" var="item">
<p:column headerText="Text 1">
<h:outputText value="#{item.value1}"/>
</p:column>
<p:column headerText="Text 2">
<h:outputText value="#{item.value2}"/>
</p:column>
<p:column headerText="Text 3">
<h:outputText value="#{item.value3}"/>
</p:column>
</p:dataTable>
</p:column>
</p:row>
</p:panelGrid>
```
|
defect
|
datatable bug in the datatable component when it is the child of a column environment primefaces version all version above application server version glassfish server open source edition build payara server badassfish build affected browsers all expected behavior if the datatable is within a column the columns of the datatable are not duplicated with empty columns actual behavior if the datatable is a child of a column of a panelgrid the columns inside the datatable are bugged for each column added in the datatable it creates an empty one steps to reproduce
| 1
|
1,220
| 2,601,760,450
|
IssuesEvent
|
2015-02-24 00:34:53
|
chrsmith/bwapi
|
https://api.github.com/repos/chrsmith/bwapi
|
closed
|
Unknown crash using Mind Control?
|
auto-migrated Milestone-Tournament Priority-Medium Type-Defect Usability
|
```
There is an unknown crash (seemingly random) with (I think) Mind Control.
Don't know the exact cause so it's impossible to reproduce.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 12 Feb 2011 at 7:17
|
1.0
|
Unknown crash using Mind Control? - ```
There is an unknown crash (seemingly random) with (I think) Mind Control.
Don't know the exact cause so it's impossible to reproduce.
```
-----
Original issue reported on code.google.com by `AHeinerm` on 12 Feb 2011 at 7:17
|
defect
|
unknown crash using mind control there is an unknown crash seemingly random with i think mind control don t know the exact cause so it s impossible to reproduce original issue reported on code google com by aheinerm on feb at
| 1
|
670,288
| 22,683,853,005
|
IssuesEvent
|
2022-07-04 12:24:12
|
MSRevive/MSCScripts
|
https://api.github.com/repos/MSRevive/MSCScripts
|
opened
|
Charge bug allows normal attacks to deal more damage than charged attacks.
|
bug alpha high priority
|
Allows doing more than charge-1 damage if you qualify for charge-2 but not charge-1 and charge past level 1.
Additionally, normal attack's damage should not be buffed at all if the player does not qualify for charge-1, but still qualifies for charge-2.
|
1.0
|
Charge bug allows normal attacks to deal more damage than charged attacks. - Allows doing more than charge-1 damage if you qualify for charge-2 but not charge-1 and charge past level 1.
Additionally, normal attack's damage should not be buffed at all if the player does not qualify for charge-1, but still qualifies for charge-2.
|
non_defect
|
charge bug allows normal attacks to deal more damage than charged attacks allows doing more than charge damage if you qualify for charge but not charge and charge past level additionally normal attack s damage should not be buffed at all if the player does not qualify for charge but still qualifies for charge
| 0
|
8,445
| 2,611,497,699
|
IssuesEvent
|
2015-02-27 05:36:42
|
chrsmith/hedgewars
|
https://api.github.com/repos/chrsmith/hedgewars
|
closed
|
Freezer problem less fuel, I do not want a gun off
|
auto-migrated Priority-Medium Type-Defect
|
```
Freezer problem less fuel, I do not want a gun off.
Version 0.9.18
```
Original issue reported on code.google.com by `krdrt5...@hotmail.com` on 4 Nov 2012 at 11:11
* Blocking: #427
Attachments:
* [hw_2012-10-31_00-25-1049094.png](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-471/comment-0/hw_2012-10-31_00-25-1049094.png)
|
1.0
|
Freezer problem less fuel, I do not want a gun off - ```
Freezer problem less fuel, I do not want a gun off.
Version 0.9.18
```
Original issue reported on code.google.com by `krdrt5...@hotmail.com` on 4 Nov 2012 at 11:11
* Blocking: #427
Attachments:
* [hw_2012-10-31_00-25-1049094.png](https://storage.googleapis.com/google-code-attachments/hedgewars/issue-471/comment-0/hw_2012-10-31_00-25-1049094.png)
|
defect
|
freezer problem less fuel i do not want a gun off freezer problem less fuel i do not want a gun off version original issue reported on code google com by hotmail com on nov at blocking attachments
| 1
|
146,856
| 11,759,224,977
|
IssuesEvent
|
2020-03-13 16:51:08
|
saltstack/salt
|
https://api.github.com/repos/saltstack/salt
|
closed
|
test_issue_2594_non_invalidated_cache failing on multiple platforms
|
3000.1 Fixed Pending Verification Test Failure
|
### Description of Issue
This test `integration.states.test_virtualenv_mod.VirtualenvTest.test_issue_2594_non_invalidated_cache`
failed on multiple platforms. https://jenkinsci.saltstack.com/job/pr-fedora30-py3/job/master/621/
```
Traceback (most recent call last):
File "/tmp/kitchen/testing/tests/integration/states/test_virtualenv_mod.py", line 122, in test_issue_2594_non_invalidated_cache
self.assertSaltTrueReturn(ret)
File "/tmp/kitchen/testing/tests/support/mixins.py", line 644, in assertSaltTrueReturn
**(next(six.itervalues(ret)))
AssertionError: False is not True. Salt Comment:
virtualenv exists
Collecting zope.interface==4.0.1
Downloading zope.interface-4.0.1.tar.gz (136 kB)
ERROR: Command errored out with exit status 1:
command: /tmp/salt-tests-tmpdir/issue-2594-ve/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4af8gbu0/zope.interface/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4af8gbu0/zope.interface/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-4af8gbu0/zope.interface/pip-egg-info
cwd: /tmp/pip-install-4af8gbu0/zope.interface/
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-4af8gbu0/zope.interface/setup.py", line 138, in <module>
**extra)
NameError: name 'extra' is not defined
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
|
1.0
|
test_issue_2594_non_invalidated_cache failing on multiple platforms - ### Description of Issue
This test `integration.states.test_virtualenv_mod.VirtualenvTest.test_issue_2594_non_invalidated_cache`
failed on multiple platforms. https://jenkinsci.saltstack.com/job/pr-fedora30-py3/job/master/621/
```
Traceback (most recent call last):
File "/tmp/kitchen/testing/tests/integration/states/test_virtualenv_mod.py", line 122, in test_issue_2594_non_invalidated_cache
self.assertSaltTrueReturn(ret)
File "/tmp/kitchen/testing/tests/support/mixins.py", line 644, in assertSaltTrueReturn
**(next(six.itervalues(ret)))
AssertionError: False is not True. Salt Comment:
virtualenv exists
Collecting zope.interface==4.0.1
Downloading zope.interface-4.0.1.tar.gz (136 kB)
ERROR: Command errored out with exit status 1:
command: /tmp/salt-tests-tmpdir/issue-2594-ve/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4af8gbu0/zope.interface/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4af8gbu0/zope.interface/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-4af8gbu0/zope.interface/pip-egg-info
cwd: /tmp/pip-install-4af8gbu0/zope.interface/
Complete output (5 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-4af8gbu0/zope.interface/setup.py", line 138, in <module>
**extra)
NameError: name 'extra' is not defined
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
|
non_defect
|
test issue non invalidated cache failing on multiple platforms description of issue this test integration states test virtualenv mod virtualenvtest test issue non invalidated cache failed on multiple platforms traceback most recent call last file tmp kitchen testing tests integration states test virtualenv mod py line in test issue non invalidated cache self assertsalttruereturn ret file tmp kitchen testing tests support mixins py line in assertsalttruereturn next six itervalues ret assertionerror false is not true salt comment virtualenv exists collecting zope interface downloading zope interface tar gz kb error command errored out with exit status command tmp salt tests tmpdir issue ve bin python c import sys setuptools tokenize sys argv tmp pip install zope interface setup py file tmp pip install zope interface setup py f getattr tokenize open open file code f read replace r n n f close exec compile code file exec egg info egg base tmp pip install zope interface pip egg info cwd tmp pip install zope interface complete output lines traceback most recent call last file line in file tmp pip install zope interface setup py line in extra nameerror name extra is not defined error command errored out with exit status python setup py egg info check the logs for full command output
| 0
|
344,974
| 10,351,046,459
|
IssuesEvent
|
2019-09-05 05:33:00
|
ehimetakahashilab/Hybrid-TPI
|
https://api.github.com/repos/ehimetakahashilab/Hybrid-TPI
|
opened
|
Write README.md
|
Priority: low
|
## WHY
- We want to document what this repo does, its runtime environment, and how to use it
## WHAT
Write the following:
- What it does
- Runtime environment
- How to use it
|
1.0
|
Write README.md - ## WHY
- We want to document what this repo does, its runtime environment, and how to use it
## WHAT
Write the following:
- What it does
- Runtime environment
- How to use it
|
non_defect
|
write readme md why we want to document what this repo does its runtime environment and how to use it what write the following what it does runtime environment how to use it
| 0
|
303,361
| 22,972,429,371
|
IssuesEvent
|
2022-07-20 05:20:25
|
mecaman91/test-project-board
|
https://api.github.com/repos/mecaman91/test-project-board
|
closed
|
Organize the GitHub project and issues
|
documentation
|
Set up the GitHub project and create cards to organize it.
- [x] Create the project beta
- [x] Create the card list - refer to the course curriculum
- [x] Convert cards to issues as appropriate
|
1.0
|
Organize the GitHub project and issues - Set up the GitHub project and create cards to organize it.
- [x] Create the project beta
- [x] Create the card list - refer to the course curriculum
- [x] Convert cards to issues as appropriate
|
non_defect
|
organize the github project and issues set up the github project and create cards to organize it create the project beta create the card list refer to the course curriculum convert cards to issues as appropriate
| 0
|
280,644
| 30,841,151,051
|
IssuesEvent
|
2023-08-02 10:43:57
|
nejads/aws
|
https://api.github.com/repos/nejads/aws
|
closed
|
CVE-2019-14892 (Critical) detected in jackson-databind-2.8.5.jar, jackson-databind-2.9.4.jar - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2019-14892 - Critical Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.5.jar</b>, <b>jackson-databind-2.9.4.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.8.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /iot-notification-button/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /aws-alexa-skill/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- ask-sdk-2.5.5.jar (Root Library)
- ask-sdk-core-2.5.5.jar
- ask-sdk-model-1.5.0.jar
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/nejads/aws/commit/7752bd9fd55f5460198675483d4721cd8bb10f19">7752bd9fd55f5460198675483d4721cd8bb10f19</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was discovered in jackson-databind in versions before 2.9.10, 2.8.11.5 and 2.6.7.3, where it would permit polymorphic deserialization of a malicious object using commons-configuration 1 and 2 JNDI classes. An attacker could use this flaw to execute arbitrary code.
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-14892>CVE-2019-14892</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.9.10</p>
<p>Direct dependency fix Resolution (com.amazon.alexa:ask-sdk): 2.22.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
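The suggested fix above (pin jackson-databind to 2.9.10, or move the root ask-sdk dependency to 2.22.0) can be expressed as a pom.xml override. A hedged sketch; the coordinates come from the dependency hierarchy in the advisory, the surrounding project structure is assumed:

```xml
<!-- Sketch: force the patched jackson-databind even when it arrives
     transitively (e.g. via ask-sdk), by pinning it in
     dependencyManagement of the affected pom.xml. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.9.10</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```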
|
True
|
CVE-2019-14892 (Critical) detected in jackson-databind-2.8.5.jar, jackson-databind-2.9.4.jar - autoclosed - ## CVE-2019-14892 - Critical Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.8.5.jar</b>, <b>jackson-databind-2.9.4.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.8.5.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /iot-notification-button/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.5/jackson-databind-2.8.5.jar</p>
<p>
Dependency Hierarchy:
- :x: **jackson-databind-2.8.5.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.9.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /aws-alexa-skill/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.9.4/jackson-databind-2.9.4.jar</p>
<p>
Dependency Hierarchy:
- ask-sdk-2.5.5.jar (Root Library)
- ask-sdk-core-2.5.5.jar
- ask-sdk-model-1.5.0.jar
- :x: **jackson-databind-2.9.4.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/nejads/aws/commit/7752bd9fd55f5460198675483d4721cd8bb10f19">7752bd9fd55f5460198675483d4721cd8bb10f19</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/critical_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
A flaw was discovered in jackson-databind in versions before 2.9.10, 2.8.11.5 and 2.6.7.3, where it would permit polymorphic deserialization of a malicious object using commons-configuration 1 and 2 JNDI classes. An attacker could use this flaw to execute arbitrary code.
<p>Publish Date: 2020-03-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-14892>CVE-2019-14892</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-03-02</p>
<p>Fix Resolution (com.fasterxml.jackson.core:jackson-databind): 2.9.10</p>
<p>Direct dependency fix Resolution (com.amazon.alexa:ask-sdk): 2.22.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve critical detected in jackson databind jar jackson databind jar autoclosed cve critical severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file iot notification button pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file aws alexa skill pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy ask sdk jar root library ask sdk core jar ask sdk model jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details a flaw was discovered in jackson databind in versions before and where it would permit polymorphic deserialization of a malicious object using commons configuration and jndi classes an attacker could use this flaw to execute arbitrary code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind direct dependency fix resolution com amazon alexa ask sdk step up your open source security game with mend
| 0
|
107,152
| 9,203,289,522
|
IssuesEvent
|
2019-03-08 01:47:46
|
saltstack/salt
|
https://api.github.com/repos/saltstack/salt
|
opened
|
integration.states.test_file.FileTest.test_managed_file_with_grains_data
|
2017.7 2017.7.9 Test Failure
|
```
integration.states.test_file.FileTest.test_managed_file_with_grains_data
--------------------------------------------------------------------------------
2017.7.9 failed salt-ubuntu-1804-py3
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
2017.7 failed salt-fedora-28-py3, salt-ubuntu-1804-py3
--------------------------------------------------------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
False is not true
................................................................................
Traceback (most recent call last):
File "/tmp/kitchen/testing/tests/integration/states/test_file.py", line 362, in test_managed_file_with_grains_data
self.assertTrue(os.path.exists(grain_path))
AssertionError: False is not true
```
|
1.0
|
integration.states.test_file.FileTest.test_managed_file_with_grains_data - ```
integration.states.test_file.FileTest.test_managed_file_with_grains_data
--------------------------------------------------------------------------------
2017.7.9 failed salt-ubuntu-1804-py3
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
2017.7 failed salt-fedora-28-py3, salt-ubuntu-1804-py3
--------------------------------------------------------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
False is not true
................................................................................
Traceback (most recent call last):
File "/tmp/kitchen/testing/tests/integration/states/test_file.py", line 362, in test_managed_file_with_grains_data
self.assertTrue(os.path.exists(grain_path))
AssertionError: False is not true
```
|
non_defect
|
integration states test file filetest test managed file with grains data integration states test file filetest test managed file with grains data failed salt ubuntu failed salt fedora salt ubuntu false is not true traceback most recent call last file tmp kitchen testing tests integration states test file py line in test managed file with grains data self asserttrue os path exists grain path assertionerror false is not true
| 0
|
18,562
| 10,256,459,026
|
IssuesEvent
|
2019-08-21 17:43:35
|
pcrane70/grav
|
https://api.github.com/repos/pcrane70/grav
|
opened
|
CVE-2015-9251 (Medium) detected in jquery-2.1.4.min.js, jquery-2.2.4.min.js
|
security vulnerability
|
## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-2.2.4.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to vulnerable library: /grav/system/assets/jquery/jquery-2.1.4.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.2.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js</a></p>
<p>Path to vulnerable library: /grav/system/assets/jquery/jquery-2.x.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.4.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/pcrane70/grav/commit/cc1f0fb4799657622e0c88fa23b8da09a5632b25">cc1f0fb4799657622e0c88fa23b8da09a5632b25</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.1.4","isTransitiveDependency":false,"dependencyTree":"jquery:2.1.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v3.0.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.2.4","isTransitiveDependency":false,"dependencyTree":"jquery:2.2.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v3.0.0"}],"vulnerabilityIdentifier":"CVE-2015-9251","vulnerabilityDetails":"jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.","vulnerabilityUrl":"https://cve.mitre.org/cgi-bin/cvename.cgi?name\u003dCVE-2015-9251","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2015-9251 (Medium) detected in jquery-2.1.4.min.js, jquery-2.2.4.min.js - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-2.2.4.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to vulnerable library: /grav/system/assets/jquery/jquery-2.1.4.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-2.2.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.2.4/jquery.min.js</a></p>
<p>Path to vulnerable library: /grav/system/assets/jquery/jquery-2.x.min.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.2.4.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/pcrane70/grav/commit/cc1f0fb4799657622e0c88fa23b8da09a5632b25">cc1f0fb4799657622e0c88fa23b8da09a5632b25</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.1.4","isTransitiveDependency":false,"dependencyTree":"jquery:2.1.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v3.0.0"},{"packageType":"JavaScript","packageName":"jquery","packageVersion":"2.2.4","isTransitiveDependency":false,"dependencyTree":"jquery:2.2.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v3.0.0"}],"vulnerabilityIdentifier":"CVE-2015-9251","vulnerabilityDetails":"jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.","vulnerabilityUrl":"https://cve.mitre.org/cgi-bin/cvename.cgi?name\u003dCVE-2015-9251","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_defect
|
cve medium detected in jquery min js jquery min js cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to vulnerable library grav system assets jquery jquery min js dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library grav system assets jquery jquery x min js dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed vulnerabilityurl
| 0
|
107,222
| 4,295,016,393
|
IssuesEvent
|
2016-07-19 04:12:02
|
Baystation12/Baystation12
|
https://api.github.com/repos/Baystation12/Baystation12
|
closed
|
Runtimes are no longer saved since #13290
|
☢ runtime error ☢ ⚠ priority: high ⚠
|
<!--
If a specific field doesn't apply, remove it!
Anything inside tags like these is a comment and will not be displayed in the final issue.
Be careful not to write inside them!
Joke or spammed issues can and will result in punishment.
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them it's part of the formatting
-->
#### Description of issue
Runtime logs are no longer saved since #13290
#### Difference between expected and actual behavior
Runtime logs should save but instead there are no actual runtimes or folders associated with runtimes.
#### Steps to reproduce
1. Start server
2. Force a runtime
#### Specific information for locating
<!-- e.g. an object name, paste specific message outputs... -->
#13290
#### Length of time in which bug has been known to occur
<!--
Be specific if you approximately know the time it's been occurring
for—this can speed up finding the source. If you're not sure
about it, tell us too!
-->
This has been happening since the 8th of June when #13290 was merged.
#### Issue bingo
Please check whatever applies. More checkboxes checked increase your chances of the issue being looked at sooner.
<!-- Check these by writing an x inside the [ ] (like this: [x])-->
<!-- Don't forget to remove the space between the brackets, or it won't work! -->
- [x] Issue could be reproduced at least once
- [ ] Issue could be reproduced by different players
- [ ] Issue could be reproduced in multiple rounds
- [x] Issue happened in a recent (less than 7 days ago) round
- [x] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
|
1.0
|
Runtimes are no longer saved since #13290 - <!--
If a specific field doesn't apply, remove it!
Anything inside tags like these is a comment and will not be displayed in the final issue.
Be careful not to write inside them!
Joke or spammed issues can and will result in punishment.
PUT YOUR ANSWERS ON THE BLANK LINES BELOW THE HEADERS
(The lines with four #'s)
Don't edit them or delete them it's part of the formatting
-->
#### Description of issue
Runtime logs are no longer saved since #13290
#### Difference between expected and actual behavior
Runtime logs should save but instead there are no actual runtimes or folders associated with runtimes.
#### Steps to reproduce
1. Start server
2. Force a runtime
#### Specific information for locating
<!-- e.g. an object name, paste specific message outputs... -->
#13290
#### Length of time in which bug has been known to occur
<!--
Be specific if you approximately know the time it's been occurring
for—this can speed up finding the source. If you're not sure
about it, tell us too!
-->
This has been happening since the 8th of June when #13290 was merged.
#### Issue bingo
Please check whatever applies. More checkboxes checked increase your chances of the issue being looked at sooner.
<!-- Check these by writing an x inside the [ ] (like this: [x])-->
<!-- Don't forget to remove the space between the brackets, or it won't work! -->
- [x] Issue could be reproduced at least once
- [ ] Issue could be reproduced by different players
- [ ] Issue could be reproduced in multiple rounds
- [x] Issue happened in a recent (less than 7 days ago) round
- [x] [Couldn't find an existing issue about this](https://github.com/Baystation12/Baystation12/issues)
|
non_defect
|
runtimes are no longer saved since if a specific field doesn t apply remove it anything inside tags like these is a comment and will not be displayed in the final issue be careful not to write inside them joke or spammed issues can and will result in punishment put your answers on the blank lines below the headers the lines with four s don t edit them or delete them it s part of the formatting description of issue runtime logs are no longer saved since difference between expected and actual behavior runtime logs should save but instead there are no actual runtimes or folders associated with runtimes steps to reproduce start server force a runtime specific information for locating length of time in which bug has been known to occur be specific if you approximately know the time it s been occurring for—this can speed up finding the source if you re not sure about it tell us too this has been happening since the of june when was merged issue bingo please check whatever applies more checkboxes checked increase your chances of the issue being looked at sooner issue could be reproduced at least once issue could be reproduced by different players issue could be reproduced in multiple rounds issue happened in a recent less than days ago round
| 0
|
33,491
| 12,216,629,268
|
IssuesEvent
|
2020-05-01 15:32:00
|
gualtierotesta/PlayWithVertx
|
https://api.github.com/repos/gualtierotesta/PlayWithVertx
|
opened
|
CVE-2020-10672 (High) detected in jackson-databind-2.9.9.1.jar
|
security vulnerability
|
## CVE-2020-10672 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/PlayWithVertx/reactive/pom.xml</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200430182636_KARNPL/downloadResource_OQPUDD/20200430182652/jackson-databind-2.9.9.1.jar</p>
<p>
Dependency Hierarchy:
- vertx-web-client-3.8.5.jar (Root Library)
- vertx-core-3.8.5.jar
- :x: **jackson-databind-2.9.9.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gualtierotesta/PlayWithVertx/commit/b7acd7a62fe132528accbe699734453ee3704f7d">b7acd7a62fe132528accbe699734453ee3704f7d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.aries.transaction.jms.internal.XaPooledConnectionFactory (aka aries.transaction.jms).
<p>Publish Date: 2020-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10672>CVE-2020-10672</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10672">https://nvd.nist.gov/vuln/detail/CVE-2020-10672</a></p>
<p>Release Date: 2020-03-18</p>
<p>Fix Resolution: jackson-databind-2.9.10.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-10672 (High) detected in jackson-databind-2.9.9.1.jar - ## CVE-2020-10672 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.9.1.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /tmp/ws-scm/PlayWithVertx/reactive/pom.xml</p>
<p>Path to vulnerable library: /tmp/ws-ua_20200430182636_KARNPL/downloadResource_OQPUDD/20200430182652/jackson-databind-2.9.9.1.jar</p>
<p>
Dependency Hierarchy:
- vertx-web-client-3.8.5.jar (Root Library)
- vertx-core-3.8.5.jar
- :x: **jackson-databind-2.9.9.1.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/gualtierotesta/PlayWithVertx/commit/b7acd7a62fe132528accbe699734453ee3704f7d">b7acd7a62fe132528accbe699734453ee3704f7d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.aries.transaction.jms.internal.XaPooledConnectionFactory (aka aries.transaction.jms).
<p>Publish Date: 2020-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-10672>CVE-2020-10672</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-10672">https://nvd.nist.gov/vuln/detail/CVE-2020-10672</a></p>
<p>Release Date: 2020-03-18</p>
<p>Fix Resolution: jackson-databind-2.9.10.4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm playwithvertx reactive pom xml path to vulnerable library tmp ws ua karnpl downloadresource oqpudd jackson databind jar dependency hierarchy vertx web client jar root library vertx core jar x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache aries transaction jms internal xapooledconnectionfactory aka aries transaction jms publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jackson databind step up your open source security game with whitesource
| 0
|
184,289
| 31,851,057,522
|
IssuesEvent
|
2023-09-15 01:44:32
|
APPSCHOOL2-Android/FinalProject-ShoppingMallService-team4
|
https://api.github.com/repos/APPSCHOOL2-Android/FinalProject-ShoppingMallService-team4
|
closed
|
[Design] login toolbar ui 생성
|
Design
|
### 목표
login toolbar ui 생성
### 체크리스트
- [x] loginFragment toolbar UI 생성
|
1.0
|
[Design] login toolbar ui 생성 - ### 목표
login toolbar ui 생성
### 체크리스트
- [x] loginFragment toolbar UI 생성
|
non_defect
|
login toolbar ui 생성 목표 login toolbar ui 생성 체크리스트 loginfragment toolbar ui 생성
| 0
|
20,530
| 30,448,571,097
|
IssuesEvent
|
2023-07-16 01:45:55
|
MuradAkh/LittleLogistics
|
https://api.github.com/repos/MuradAkh/LittleLogistics
|
closed
|
Create 0.5.1 with 1.18.2
|
compatibility
|
You'll need to update your code to accommodate changes to Create which were made post- Create v0.5.1a.
The following error was produced by using:
Forge 40.2.9
create-1.18.2-0.5.1.b.jar
littlelogistics-mc1.18.2-v1.2.6.jar
`[10Jun2023 15:54:49.877] [Server thread/ERROR] [net.minecraftforge.eventbus.EventSubclassTransformer/EVENTBUS]: Could not find parent com/simibubi/create/content/contraptions/components/structureMovement/train/capability/MinecartController for class dev/murad/shipping/compatibility/create/CapabilityInjector$TrainCarController in classloader cpw.mods.modlauncher.TransformingClassLoader@38ef1a0a on thread Thread[Server thread,8,SERVER]
[10Jun2023 15:54:49.877] [Server thread/ERROR] [net.minecraftforge.eventbus.EventSubclassTransformer/EVENTBUS]: An error occurred building event handler
java.lang.ClassNotFoundException: com.simibubi.create.content.contraptions.components.structureMovement.train.capability.MinecartController
at jdk.internal.loader.BuiltinClassLoader.loadClass(Unknown Source) ~[?:?]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:?]
at cpw.mods.cl.ModuleClassLoader.loadClass(ModuleClassLoader.java:137) ~[securejarhandler-1.0.8.jar:?]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:?]
at cpw.mods.cl.ModuleClassLoader.loadClass(ModuleClassLoader.java:137) ~[securejarhandler-1.0.8.jar:?]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:?]
at net.minecraftforge.eventbus.EventSubclassTransformer.buildEvents(EventSubclassTransformer.java:62) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.EventSubclassTransformer.transform(EventSubclassTransformer.java:44) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.EventBusEngine.processClass(EventBusEngine.java:21) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.service.ModLauncherService.processClassWithFlags(ModLauncherService.java:20) ~[eventbus-5.0.3.jar%232!/:5.0.3+70+master.d7d405b]
at cpw.mods.modlauncher.LaunchPluginHandler.offerClassNodeToPlugins(LaunchPluginHandler.java:88) ~[modlauncher-9.1.3.jar%235!/:?]
at cpw.mods.modlauncher.ClassTransformer.transform(ClassTransformer.java:120) ~[modlauncher-9.1.3.jar%235!/:?]
at cpw.mods.modlauncher.TransformingClassLoader.maybeTransformClassBytes(TransformingClassLoader.java:50) ~[modlauncher-9.1.3.jar%235!/:?]
at cpw.mods.cl.ModuleClassLoader.readerToClass(ModuleClassLoader.java:113) ~[securejarhandler-1.0.8.jar:?]
at cpw.mods.cl.ModuleClassLoader.lambda$findClass$15(ModuleClassLoader.java:219) ~[securejarhandler-1.0.8.jar:?]
at cpw.mods.cl.ModuleClassLoader.loadFromModule(ModuleClassLoader.java:229) ~[securejarhandler-1.0.8.jar:?]
at cpw.mods.cl.ModuleClassLoader.findClass(ModuleClassLoader.java:219) ~[securejarhandler-1.0.8.jar:?]
at cpw.mods.cl.ModuleClassLoader.loadClass(ModuleClassLoader.java:135) ~[securejarhandler-1.0.8.jar:?]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:?]
at dev.murad.shipping.compatibility.create.CapabilityInjector.constructMinecartControllerCapability(CapabilityInjector.java:42) ~[littlelogistics-mc1.18.2-v1.2.6.jar%23147!/:1.2.6]
at dev.murad.shipping.entity.custom.train.wagon.SeaterCarEntity.initCompat(SeaterCarEntity.java:39) ~[littlelogistics-mc1.18.2-v1.2.6.jar%23147!/:1.2.6]
at dev.murad.shipping.entity.custom.train.wagon.SeaterCarEntity.<init>(SeaterCarEntity.java:29) ~[littlelogistics-mc1.18.2-v1.2.6.jar%23147!/:1.2.6]
at net.minecraft.world.entity.EntityType.m_20615_(EntityType.java:460) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at ovh.corail.tombstone.helper.TameableType.init(TameableType.java:72) ~[tombstone-7.6.5-1.18.2.jar%23221!/:7.6.5]
at ovh.corail.tombstone.event.EventHandler.onWorldLoad(EventHandler.java:287) ~[tombstone-7.6.5-1.18.2.jar%23221!/:7.6.5]
at net.minecraftforge.eventbus.ASMEventHandler_286_EventHandler_onWorldLoad_Load.invoke(.dynamic) ~[?:?]
at net.minecraftforge.eventbus.ASMEventHandler.invoke(ASMEventHandler.java:85) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.EventBus.post(EventBus.java:302) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.EventBus.post(EventBus.java:283) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraft.server.MinecraftServer.m_129815_(MinecraftServer.java:361) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at net.minecraft.server.MinecraftServer.m_130006_(MinecraftServer.java:316) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at net.minecraft.server.dedicated.DedicatedServer.m_7038_(DedicatedServer.java:173) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at net.minecraft.server.MinecraftServer.m_130011_(MinecraftServer.java:661) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at net.minecraft.server.MinecraftServer.m_177918_(MinecraftServer.java:261) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at java.lang.Thread.run(Unknown Source) [?:?]`
|
True
|
Create 0.5.1 with 1.18.2 - You'll need to update your code to accommodate changes to Create which were made post- Create v0.5.1a.
The following error was produced by using:
Forge 40.2.9
create-1.18.2-0.5.1.b.jar
littlelogistics-mc1.18.2-v1.2.6.jar
`[10Jun2023 15:54:49.877] [Server thread/ERROR] [net.minecraftforge.eventbus.EventSubclassTransformer/EVENTBUS]: Could not find parent com/simibubi/create/content/contraptions/components/structureMovement/train/capability/MinecartController for class dev/murad/shipping/compatibility/create/CapabilityInjector$TrainCarController in classloader cpw.mods.modlauncher.TransformingClassLoader@38ef1a0a on thread Thread[Server thread,8,SERVER]
[10Jun2023 15:54:49.877] [Server thread/ERROR] [net.minecraftforge.eventbus.EventSubclassTransformer/EVENTBUS]: An error occurred building event handler
java.lang.ClassNotFoundException: com.simibubi.create.content.contraptions.components.structureMovement.train.capability.MinecartController
at jdk.internal.loader.BuiltinClassLoader.loadClass(Unknown Source) ~[?:?]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:?]
at cpw.mods.cl.ModuleClassLoader.loadClass(ModuleClassLoader.java:137) ~[securejarhandler-1.0.8.jar:?]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:?]
at cpw.mods.cl.ModuleClassLoader.loadClass(ModuleClassLoader.java:137) ~[securejarhandler-1.0.8.jar:?]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:?]
at net.minecraftforge.eventbus.EventSubclassTransformer.buildEvents(EventSubclassTransformer.java:62) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.EventSubclassTransformer.transform(EventSubclassTransformer.java:44) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.EventBusEngine.processClass(EventBusEngine.java:21) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.service.ModLauncherService.processClassWithFlags(ModLauncherService.java:20) ~[eventbus-5.0.3.jar%232!/:5.0.3+70+master.d7d405b]
at cpw.mods.modlauncher.LaunchPluginHandler.offerClassNodeToPlugins(LaunchPluginHandler.java:88) ~[modlauncher-9.1.3.jar%235!/:?]
at cpw.mods.modlauncher.ClassTransformer.transform(ClassTransformer.java:120) ~[modlauncher-9.1.3.jar%235!/:?]
at cpw.mods.modlauncher.TransformingClassLoader.maybeTransformClassBytes(TransformingClassLoader.java:50) ~[modlauncher-9.1.3.jar%235!/:?]
at cpw.mods.cl.ModuleClassLoader.readerToClass(ModuleClassLoader.java:113) ~[securejarhandler-1.0.8.jar:?]
at cpw.mods.cl.ModuleClassLoader.lambda$findClass$15(ModuleClassLoader.java:219) ~[securejarhandler-1.0.8.jar:?]
at cpw.mods.cl.ModuleClassLoader.loadFromModule(ModuleClassLoader.java:229) ~[securejarhandler-1.0.8.jar:?]
at cpw.mods.cl.ModuleClassLoader.findClass(ModuleClassLoader.java:219) ~[securejarhandler-1.0.8.jar:?]
at cpw.mods.cl.ModuleClassLoader.loadClass(ModuleClassLoader.java:135) ~[securejarhandler-1.0.8.jar:?]
at java.lang.ClassLoader.loadClass(Unknown Source) ~[?:?]
at dev.murad.shipping.compatibility.create.CapabilityInjector.constructMinecartControllerCapability(CapabilityInjector.java:42) ~[littlelogistics-mc1.18.2-v1.2.6.jar%23147!/:1.2.6]
at dev.murad.shipping.entity.custom.train.wagon.SeaterCarEntity.initCompat(SeaterCarEntity.java:39) ~[littlelogistics-mc1.18.2-v1.2.6.jar%23147!/:1.2.6]
at dev.murad.shipping.entity.custom.train.wagon.SeaterCarEntity.<init>(SeaterCarEntity.java:29) ~[littlelogistics-mc1.18.2-v1.2.6.jar%23147!/:1.2.6]
at net.minecraft.world.entity.EntityType.m_20615_(EntityType.java:460) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at ovh.corail.tombstone.helper.TameableType.init(TameableType.java:72) ~[tombstone-7.6.5-1.18.2.jar%23221!/:7.6.5]
at ovh.corail.tombstone.event.EventHandler.onWorldLoad(EventHandler.java:287) ~[tombstone-7.6.5-1.18.2.jar%23221!/:7.6.5]
at net.minecraftforge.eventbus.ASMEventHandler_286_EventHandler_onWorldLoad_Load.invoke(.dynamic) ~[?:?]
at net.minecraftforge.eventbus.ASMEventHandler.invoke(ASMEventHandler.java:85) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.EventBus.post(EventBus.java:302) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraftforge.eventbus.EventBus.post(EventBus.java:283) ~[eventbus-5.0.3.jar%232!/:?]
at net.minecraft.server.MinecraftServer.m_129815_(MinecraftServer.java:361) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at net.minecraft.server.MinecraftServer.m_130006_(MinecraftServer.java:316) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at net.minecraft.server.dedicated.DedicatedServer.m_7038_(DedicatedServer.java:173) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at net.minecraft.server.MinecraftServer.m_130011_(MinecraftServer.java:661) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at net.minecraft.server.MinecraftServer.m_177918_(MinecraftServer.java:261) ~[server-1.18.2-20220404.173914-srg.jar%23236!/:?]
at java.lang.Thread.run(Unknown Source) [?:?]`
|
non_defect
|
create with you ll need to update your code to accommodate changes to create which were made post create the following error was produced by using forge create b jar littlelogistics jar could not find parent com simibubi create content contraptions components structuremovement train capability minecartcontroller for class dev murad shipping compatibility create capabilityinjector traincarcontroller in classloader cpw mods modlauncher transformingclassloader on thread thread an error occurred building event handler java lang classnotfoundexception com simibubi create content contraptions components structuremovement train capability minecartcontroller at jdk internal loader builtinclassloader loadclass unknown source at java lang classloader loadclass unknown source at cpw mods cl moduleclassloader loadclass moduleclassloader java at java lang classloader loadclass unknown source at cpw mods cl moduleclassloader loadclass moduleclassloader java at java lang classloader loadclass unknown source at net minecraftforge eventbus eventsubclasstransformer buildevents eventsubclasstransformer java at net minecraftforge eventbus eventsubclasstransformer transform eventsubclasstransformer java at net minecraftforge eventbus eventbusengine processclass eventbusengine java at net minecraftforge eventbus service modlauncherservice processclasswithflags modlauncherservice java at cpw mods modlauncher launchpluginhandler offerclassnodetoplugins launchpluginhandler java at cpw mods modlauncher classtransformer transform classtransformer java at cpw mods modlauncher transformingclassloader maybetransformclassbytes transformingclassloader java at cpw mods cl moduleclassloader readertoclass moduleclassloader java at cpw mods cl moduleclassloader lambda findclass moduleclassloader java at cpw mods cl moduleclassloader loadfrommodule moduleclassloader java at cpw mods cl moduleclassloader findclass moduleclassloader java at cpw mods cl moduleclassloader loadclass moduleclassloader java 
at java lang classloader loadclass unknown source at dev murad shipping compatibility create capabilityinjector constructminecartcontrollercapability capabilityinjector java at dev murad shipping entity custom train wagon seatercarentity initcompat seatercarentity java at dev murad shipping entity custom train wagon seatercarentity seatercarentity java at net minecraft world entity entitytype m entitytype java at ovh corail tombstone helper tameabletype init tameabletype java at ovh corail tombstone event eventhandler onworldload eventhandler java at net minecraftforge eventbus asmeventhandler eventhandler onworldload load invoke dynamic at net minecraftforge eventbus asmeventhandler invoke asmeventhandler java at net minecraftforge eventbus eventbus post eventbus java at net minecraftforge eventbus eventbus post eventbus java at net minecraft server minecraftserver m minecraftserver java at net minecraft server minecraftserver m minecraftserver java at net minecraft server dedicated dedicatedserver m dedicatedserver java at net minecraft server minecraftserver m minecraftserver java at net minecraft server minecraftserver m minecraftserver java at java lang thread run unknown source
| 0
|
25,729
| 4,426,448,826
|
IssuesEvent
|
2016-08-16 18:21:37
|
troessner/reek
|
https://api.github.com/repos/troessner/reek
|
closed
|
Getting error when running reek
|
defect
|
When I run `reek` I get this:
```
/Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:124:in `block in ignored_iterator?': undefined method `name' for #<#<Class:0x007ffb5131e328>:0x007ffb5131e2b0> (NoMethodError)
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:124:in `any?'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:124:in `ignored_iterator?'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:115:in `increment_depth'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:105:in `block in scout'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:100:in `map'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:100:in `scout'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:105:in `block in scout'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:100:in `map'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:100:in `scout'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:70:in `find_deepest_iterator'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:48:in `inspect'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/smell_detector.rb:47:in `run_for'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/smell_repository.rb:47:in `block in examine'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/smell_repository.rb:46:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/smell_repository.rb:46:in `examine'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/examiner.rb:81:in `block in run'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:85:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:87:in `block in each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:86:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:86:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:87:in `block in each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:86:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:86:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/examiner.rb:80:in `run'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/examiner.rb:37:in `initialize'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:24:in `new'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:24:in `block in populate_reporter_with_smells'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:23:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:23:in `populate_reporter_with_smells'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:15:in `execute'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/application.rb:28:in `execute'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/bin/reek:13:in `<top (required)>'
from /Users/eric/.rbenv/versions/2.3.1/bin/reek:23:in `load'
from /Users/eric/.rbenv/versions/2.3.1/bin/reek:23:in `<main>'
```
I removed the .reek file assuming I botched it up, but it still gives the same error. My code passes rubocop, and it does still work. Any ideas how I can figure out what the issue is? The word "name" is pretty common in the code, so that's not much to go on.
|
1.0
|
Getting error when running reek - When I run `reek` I get this:
```
/Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:124:in `block in ignored_iterator?': undefined method `name' for #<#<Class:0x007ffb5131e328>:0x007ffb5131e2b0> (NoMethodError)
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:124:in `any?'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:124:in `ignored_iterator?'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:115:in `increment_depth'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:105:in `block in scout'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:100:in `map'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:100:in `scout'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:105:in `block in scout'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:100:in `map'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:100:in `scout'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:70:in `find_deepest_iterator'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/nested_iterators.rb:48:in `inspect'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/smell_detector.rb:47:in `run_for'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/smell_repository.rb:47:in `block in examine'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/smell_repository.rb:46:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/smells/smell_repository.rb:46:in `examine'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/examiner.rb:81:in `block in run'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:85:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:87:in `block in each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:86:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:86:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:87:in `block in each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:86:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/context/code_context.rb:86:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/examiner.rb:80:in `run'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/examiner.rb:37:in `initialize'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:24:in `new'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:24:in `block in populate_reporter_with_smells'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:23:in `each'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:23:in `populate_reporter_with_smells'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/command/report_command.rb:15:in `execute'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/lib/reek/cli/application.rb:28:in `execute'
from /Users/eric/.rbenv/versions/2.3.1/lib/ruby/gems/2.3.0/gems/reek-4.0.3/bin/reek:13:in `<top (required)>'
from /Users/eric/.rbenv/versions/2.3.1/bin/reek:23:in `load'
from /Users/eric/.rbenv/versions/2.3.1/bin/reek:23:in `<main>'
```
I removed the .reek file assuming I botched it up, but it still gives the same error. My code passes rubocop, and it does still work. Any ideas how I can figure out what the issue is? The word "name" is pretty common in the code, so that's not much to go on.
|
defect
|
getting error when running reek when i run reek i get this users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in block in ignored iterator undefined method name for nomethoderror from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in any from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in ignored iterator from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in increment depth from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in block in scout from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in map from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in scout from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in block in scout from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in map from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in scout from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in find deepest iterator from users eric rbenv versions lib ruby gems gems reek lib reek smells nested iterators rb in inspect from users eric rbenv versions lib ruby gems gems reek lib reek smells smell detector rb in run for from users eric rbenv versions lib ruby gems gems reek lib reek smells smell repository rb in block in examine from users eric rbenv versions lib ruby gems gems reek lib reek smells smell repository rb in each from users eric rbenv versions lib ruby gems gems reek lib reek smells smell repository rb in examine from users eric rbenv versions lib ruby gems gems reek lib reek examiner rb in block in run from users eric rbenv versions lib ruby gems gems reek lib reek context code context rb in each from users eric rbenv versions lib 
ruby gems gems reek lib reek context code context rb in block in each from users eric rbenv versions lib ruby gems gems reek lib reek context code context rb in each from users eric rbenv versions lib ruby gems gems reek lib reek context code context rb in each from users eric rbenv versions lib ruby gems gems reek lib reek context code context rb in block in each from users eric rbenv versions lib ruby gems gems reek lib reek context code context rb in each from users eric rbenv versions lib ruby gems gems reek lib reek context code context rb in each from users eric rbenv versions lib ruby gems gems reek lib reek examiner rb in run from users eric rbenv versions lib ruby gems gems reek lib reek examiner rb in initialize from users eric rbenv versions lib ruby gems gems reek lib reek cli command report command rb in new from users eric rbenv versions lib ruby gems gems reek lib reek cli command report command rb in block in populate reporter with smells from users eric rbenv versions lib ruby gems gems reek lib reek cli command report command rb in each from users eric rbenv versions lib ruby gems gems reek lib reek cli command report command rb in populate reporter with smells from users eric rbenv versions lib ruby gems gems reek lib reek cli command report command rb in execute from users eric rbenv versions lib ruby gems gems reek lib reek cli application rb in execute from users eric rbenv versions lib ruby gems gems reek bin reek in from users eric rbenv versions bin reek in load from users eric rbenv versions bin reek in i removed the reek file assuming i botched it up but it still gives the same error my code passes rubocop and it does still work any ideas how i can figure out what the issue is the word name is pretty common in the code so that s not much to go on
| 1
|
323,271
| 23,940,076,055
|
IssuesEvent
|
2022-09-11 19:33:00
|
danmassarano/chatbot
|
https://api.github.com/repos/danmassarano/chatbot
|
closed
|
Make repo public
|
documentation todo ci
|
Add code quality and dependencies checks in CI
https://github.com/danmassarano/chatbot/blob/c005ba789178e46f885c2eb0d6f3b8fd5112c7dc/issues.md#L6
```markdown
<!-- TODO: Add build step in CI
labels: ci
assignees: danmassarano
-->
<!-- TODO: Make repo public
Add code quality and dependencies checks in CI
labels: ci, documentation
assignees: danmassarano
-->
<!-- TODO: Add more badges
- CI | build | coverage | documentation | versions | style | security | dependencies | quality
labels: ci, documentation
```
|
1.0
|
Make repo public - Add code quality and dependencies checks in CI
https://github.com/danmassarano/chatbot/blob/c005ba789178e46f885c2eb0d6f3b8fd5112c7dc/issues.md#L6
```markdown
<!-- TODO: Add build step in CI
labels: ci
assignees: danmassarano
-->
<!-- TODO: Make repo public
Add code quality and dependencies checks in CI
labels: ci, documentation
assignees: danmassarano
-->
<!-- TODO: Add more badges
- CI | build | coverage | documentation | versions | style | security | dependencies | quality
labels: ci, documentation
```
|
non_defect
|
make repo public add code quality and dependencies checks in ci markdown todo add build step in ci labels ci assignees danmassarano todo make repo public add code quality and dependencies checks in ci labels ci documentation assignees danmassarano todo add more badges ci build coverage documentation versions style security dependencies quality labels ci documentation
| 0
|
172,694
| 13,327,751,884
|
IssuesEvent
|
2020-08-27 13:36:53
|
Human-Connection/Human-Connection
|
https://api.github.com/repos/Human-Connection/Human-Connection
|
reopened
|
🔧 [Refactor] backend test db setups/teardowns
|
backend refactor stale testing
|
## :zap: Refactor ticket
Backend tests are slow, because the test DB is often cleared after each test of a suite.
Setting up the database only once before and clearing it only after each suite reduces the test time by e.g. 60%.


### Motive

### Implementation
- fix `before|afterAll`, `before|afterEach` blocks in tests
### See also
For time logs of the `before|afterAll` block executions, see this [Travis log](https://travis-ci.com/github/Human-Connection/Human-Connection/builds/159993119#L1435)
|
1.0
|
🔧 [Refactor] backend test db setups/teardowns - ## :zap: Refactor ticket
Backend tests are slow, because the test DB is often cleared after each test of a suite.
Setting up the database only once before and clearing it only after each suite reduces the test time by e.g. 60%.


### Motive

### Implementation
- fix `before|afterAll`, `before|afterEach` blocks in tests
### See also
For time logs of the `before|afterAll` block executions, see this [Travis log](https://travis-ci.com/github/Human-Connection/Human-Connection/builds/159993119#L1435)
|
non_defect
|
🔧 backend test db setups teardowns zap refactor ticket backend tests are slow because the test db is often cleared after each test of a suite seting up the database only once before and clearing it only after each suite reduces the test time by e g motive implementation fix before afterall before aftereach blocks in tests see also for time logs of the before afterall block executions see this
| 0
|
11,562
| 14,440,035,382
|
IssuesEvent
|
2020-12-07 15:05:42
|
jhu-idc/iDC-general
|
https://api.github.com/repos/jhu-idc/iDC-general
|
closed
|
Implement the ability to revert Drupal nodes by the handler.
|
Graph Processor ingest
|
This might be able to be accomplished asynchronously.
Estimate: 2 days
|
1.0
|
Implement the ability to revert Drupal nodes by the handler. - This might be able to be accomplished asynchronously.
Estimate: 2 days
|
non_defect
|
implement the ability to revert drupal nodes by the handler this might be able to be accomplished asynchronously estimate days
| 0
|
387,217
| 11,458,084,865
|
IssuesEvent
|
2020-02-07 02:01:55
|
HackIllinois/android
|
https://api.github.com/repos/HackIllinois/android
|
closed
|
Add camera icon to top right of event info page for staff
|
good first issue priority
|
This will open a page to check in for that event.
|
1.0
|
Add camera icon to top right of event info page for staff - This will open a page to check in for that event.
|
non_defect
|
add camera icon to top right of event info page for staff this will open a page to check in for that event
| 0
|
788,888
| 27,771,917,264
|
IssuesEvent
|
2023-03-16 14:57:05
|
o3de/o3de
|
https://api.github.com/repos/o3de/o3de
|
closed
|
Asset Browser: Min-width for docked Asset Browser Inspector is unusable
|
kind/bug sig/content triage/accepted priority/minor feature/asset-browser
|
**Describe the bug**
Min-width for docked Asset Browser Inspector is unusable.
**Steps to reproduce**
1. Launch the Editor and enable WIP Asset Browser features via the Console:
```
ed_useWIPAssetBrowserDesign true
```
2. Open the Asset Browser, and select any asset.
3. Open the Asset Browser Inspector and dock the window NOT on the Entity Inspector window.
4. Resize the docked Asset Browser Inspector to it's smallest allowed width.
**Expected behavior**
Tool remains usable when resized.
**Actual behavior**
Text wraps un-intuitively and tool is largely unusable.
**Screenshots/Video**

**Found in Branch**
development
**Commit ID from [o3de/o3de](https://github.com/o3de/o3de) Repository**
945c0ee96955487f0fabc35deaff1ea6629a4fd3
|
1.0
|
Asset Browser: Min-width for docked Asset Browser Inspector is unusable - **Describe the bug**
Min-width for docked Asset Browser Inspector is unusable.
**Steps to reproduce**
1. Launch the Editor and enable WIP Asset Browser features via the Console:
```
ed_useWIPAssetBrowserDesign true
```
2. Open the Asset Browser, and select any asset.
3. Open the Asset Browser Inspector and dock the window NOT on the Entity Inspector window.
4. Resize the docked Asset Browser Inspector to it's smallest allowed width.
**Expected behavior**
Tool remains usable when resized.
**Actual behavior**
Text wraps un-intuitively and tool is largely unusable.
**Screenshots/Video**

**Found in Branch**
development
**Commit ID from [o3de/o3de](https://github.com/o3de/o3de) Repository**
945c0ee96955487f0fabc35deaff1ea6629a4fd3
|
non_defect
|
asset browser min width for docked asset browser inspector is unusable describe the bug min width for docked asset browser inspector is unusable steps to reproduce launch the editor and enable wip asset browser features via the console ed usewipassetbrowserdesign true open the asset browser and select any asset open the asset browser inspector and dock the window not on the entity inspector window resize the docked asset browser inspector to it s smallest allowed width expected behavior tool remains usable when resized actual behavior text wraps un intuitively and tool is largely unusable screenshots video found in branch development commit id from repository
| 0
|
14,493
| 2,813,622,728
|
IssuesEvent
|
2015-05-18 15:35:14
|
ndt-project/ndt
|
https://api.github.com/repos/ndt-project/ndt
|
closed
|
Exceptions visible on the Applet console
|
Component-GUI Milestone-NDT-3.7.0 Severity-Medium Type-Defect
|
Original [issue 99](https://code.google.com/p/ndt/issues/detail?id=99) created by aaronmbr on 2013-11-22T09:56:49.000Z:
Steps to reproduce:
1. Run NDT tests from the Applet.
2. Look at the Java console.
Expected results:
There is no exceptions on the console.
Actual results:
There are the following exceptions on the console:
Exception occurred reading a web100 var DataOctetsOut: - java.lang.NumberFormatException: For input string: "4049166464"
Exception occurred reading a web100 var HCDataOctetsOut: - java.lang.NumberFormatException: For input string: "8344133760"
Exception occurred reading a web100 var SndInitial: - java.lang.NumberFormatException: For input string: "2499521446"
Exception occurred reading a web100 var RecInitial: - java.lang.NumberFormatException: For input string: "2585486446"
Exception occurred reading a web100 var SndUna: - java.lang.NumberFormatException: For input string: "2249640871"
Exception occurred reading a web100 var SndNxt: - java.lang.NumberFormatException: For input string: "2249640871"
Exception occurred reading a web100 var SndMax: - java.lang.NumberFormatException: For input string: "2249640871"
Exception occurred reading a web100 var ThruOctetsAcked: - java.lang.NumberFormatException: For input string: "4045086721"
Exception occurred reading a web100 var HCThruOctetsAcked: - java.lang.NumberFormatException: For input string: "8340054017"
Exception occurred reading a web100 var RcvNxt: - java.lang.NumberFormatException: For input string: "2585486447"
Exception occurred reading a web100 var LimCwnd: - java.lang.NumberFormatException: For input string: "4294901828"
Exception occurred reading a web100 var DataBytesOut: - java.lang.NumberFormatException: For input string: "8344133760"
<b>What is the expected output? What do you see instead?</b>
<b>Please use labels and text to provide additional information.</b>
|
1.0
|
Exceptions visible on the Applet console - Original [issue 99](https://code.google.com/p/ndt/issues/detail?id=99) created by aaronmbr on 2013-11-22T09:56:49.000Z:
Steps to reproduce:
1. Run NDT tests from the Applet.
2. Look at the Java console.
Expected results:
There is no exceptions on the console.
Actual results:
There are the following exceptions on the console:
Exception occurred reading a web100 var DataOctetsOut: - java.lang.NumberFormatException: For input string: "4049166464"
Exception occurred reading a web100 var HCDataOctetsOut: - java.lang.NumberFormatException: For input string: "8344133760"
Exception occurred reading a web100 var SndInitial: - java.lang.NumberFormatException: For input string: "2499521446"
Exception occurred reading a web100 var RecInitial: - java.lang.NumberFormatException: For input string: "2585486446"
Exception occurred reading a web100 var SndUna: - java.lang.NumberFormatException: For input string: "2249640871"
Exception occurred reading a web100 var SndNxt: - java.lang.NumberFormatException: For input string: "2249640871"
Exception occurred reading a web100 var SndMax: - java.lang.NumberFormatException: For input string: "2249640871"
Exception occurred reading a web100 var ThruOctetsAcked: - java.lang.NumberFormatException: For input string: "4045086721"
Exception occurred reading a web100 var HCThruOctetsAcked: - java.lang.NumberFormatException: For input string: "8340054017"
Exception occurred reading a web100 var RcvNxt: - java.lang.NumberFormatException: For input string: "2585486447"
Exception occurred reading a web100 var LimCwnd: - java.lang.NumberFormatException: For input string: "4294901828"
Exception occurred reading a web100 var DataBytesOut: - java.lang.NumberFormatException: For input string: "8344133760"
<b>What is the expected output? What do you see instead?</b>
<b>Please use labels and text to provide additional information.</b>
|
defect
|
exceptions visible on the applet console original created by aaronmbr on steps to reproduce run ndt tests from the applet look at the java console expected results there is no exceptions on the console actual results there are the following exceptions on the console exception occured reading a var dataoctetsout java lang numberformatexception for input string quot quot exception occured reading a var hcdataoctetsout java lang numberformatexception for input string quot quot exception occured reading a var sndinitial java lang numberformatexception for input string quot quot exception occured reading a var recinitial java lang numberformatexception for input string quot quot exception occured reading a var snduna java lang numberformatexception for input string quot quot exception occured reading a var sndnxt java lang numberformatexception for input string quot quot exception occured reading a var sndmax java lang numberformatexception for input string quot quot exception occured reading a var thruoctetsacked java lang numberformatexception for input string quot quot exception occured reading a var hcthruoctetsacked java lang numberformatexception for input string quot quot exception occured reading a var rcvnxt java lang numberformatexception for input string quot quot exception occured reading a var limcwnd java lang numberformatexception for input string quot quot exception occured reading a var databytesout java lang numberformatexception for input string quot quot what is the expected output what do you see instead please use labels and text to provide additional information
| 1
|
68,127
| 17,170,012,994
|
IssuesEvent
|
2021-07-15 02:01:13
|
microsoft/appcenter
|
https://api.github.com/repos/microsoft/appcenter
|
closed
|
Customizable email notification content in build configuration
|
Stale build distribute feature request reviewed-DRI
|
**Describe the solution you'd like**
I wish to send, as much as possible an automated message, a custom one when setup a build configuration. Now, it's not possible.

I wish to customize with some environment variables, will help us to provide more details about build.
**Describe alternatives you've considered**
I don't see other alternative.
|
1.0
|
Customizable email notification content in build configuration - **Describe the solution you'd like**
I wish to send, as much as possible an automated message, a custom one when setup a build configuration. Now, it's not possible.

I wish to customize with some environment variables, will help us to provide more details about build.
**Describe alternatives you've considered**
I don't see other alternative.
|
non_defect
|
customizable email notification content in build configuration describe the solution you d like i wish to send as much as possible an automated message a custom one when setup a build configuration now it s not possible i wish to customize with some environment variables will help us to provide more details about build describe alternatives you ve considered i don t see other alternative
| 0
|
19,547
| 3,220,185,478
|
IssuesEvent
|
2015-10-08 13:51:20
|
contao/core-bundle
|
https://api.github.com/repos/contao/core-bundle
|
closed
|
Warning: grapheme_substr(): grapheme_substr: length is beyond start
|
defect
|
Warning: grapheme_substr(): grapheme_substr: length is beyond start
500 Internal Server Error - ContextErrorException
My installed versions:
````
contao/core-bundle 4.1.x-dev 090055e Contao 4 core bundle
contao/installation-bundle dev-master c424de5 Contao 4 installation bundle
````
I tried to edit my themes (get parameters: `contao?do=themes`).
My theme is called `TICK`.
For more information see attached screenshot.

|
1.0
|
Warning: grapheme_substr(): grapheme_substr: length is beyond start - Warning: grapheme_substr(): grapheme_substr: length is beyond start
500 Internal Server Error - ContextErrorException
My installed versions:
````
contao/core-bundle 4.1.x-dev 090055e Contao 4 core bundle
contao/installation-bundle dev-master c424de5 Contao 4 installation bundle
````
I tried to edit my themes (GET parameters: `contao?do=themes`).
My theme is called `TICK`.
For more information see attached screenshot.

|
defect
|
warning grapheme substr grapheme substr length is beyond start warning grapheme substr grapheme substr length is beyond start internal server error contexterrorexception my installed versions contao core bundle x dev contao core bundle contao installation bundle dev master contao installation bundle i tried to edit my themes get parameters contao do themes my theme is called tick for more information see attached screenshot
| 1
|
231,849
| 7,643,937,532
|
IssuesEvent
|
2018-05-08 14:10:06
|
tugcanolgun/Cpp-Projects
|
https://api.github.com/repos/tugcanolgun/Cpp-Projects
|
opened
|
Organize programs with individual READMEs
|
enhancement low priority
|
It would be better if each program had its own README file with compile how-tos and examples.
|
1.0
|
Organize programs with individual READMEs - It would be better if each program had its own README file with compile how-tos and examples.
|
non_defect
|
orginize programs with individual readme s it would be better if each program has its own readme file with compile how to s and examples
| 0
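Across these rows, the `label` column pairs with the trailing binary column as `defect` → 1 and `non_defect` → 0. A one-line mapping, as a sketch — the helper name `to_binary_label` is an assumption, not part of the dataset's pipeline:

```python
def to_binary_label(label: str) -> int:
    """Map the dataset's `label` column to its binary label column:
    'defect' -> 1, anything else (e.g. 'non_defect') -> 0.
    Inferred from the sample rows; the real pipeline may differ."""
    return 1 if label.strip().lower() == "defect" else 0
```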
|
39,883
| 9,726,248,643
|
IssuesEvent
|
2019-05-30 10:56:28
|
primefaces/primereact
|
https://api.github.com/repos/primefaces/primereact
|
closed
|
DataTable expanded rows collapse when modifying one property of a record
|
defect
|
**I'm submitting a ...**
```
[X] bug report
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57
```
**Current behavior**
If you are using the row expansion feature, and saving the expanded row data in the state (as demonstrated here https://www.primefaces.org/primereact/#/datatable/rowexpand), all of the expanded rows will collapse if any of the records' fields are changed when passing them in from Redux.
It seems like the table loses track of the mapping between the expanded row data and the modified record data being passed in as a prop to 'values'.
**Expected behavior**
Expanded rows should not collapse when the records passed in to the table are not new, just one or more fields modified.
**Minimal reproduction of the problem with instructions**
1) Setup Prime React DataTable with expander column. Records should be provided from redux state slice.
2) expand row
3) have a button in the expanded row update a field on one of the records through a redux action/reducer.
4) You'll notice the row collapses, despite the expandedRowData still being present.
https://codesandbox.io/s/m752k9jq8y
**Please tell us about your environment:**
Windows/IntelliJ/Tomcat
* **React version:**
16.8.3
* **PrimeReact version:**
2.0.1
* **Browser:**
ALL
* **Language:**
ES6/7
|
1.0
|
DataTable expanded rows collapse when modifying one property of a record - **I'm submitting a ...**
```
[X] bug report
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://forum.primefaces.org/viewforum.php?f=57
```
**Current behavior**
If you are using the row expansion feature, and saving the expanded row data in the state (as demonstrated here https://www.primefaces.org/primereact/#/datatable/rowexpand), all of the expanded rows will collapse if any of the records' fields are changed when passing them in from Redux.
It seems like the table loses track of the mapping between the expanded row data and the modified record data being passed in as a prop to 'values'.
**Expected behavior**
Expanded rows should not collapse when the records passed in to the table are not new, just one or more fields modified.
**Minimal reproduction of the problem with instructions**
1) Setup Prime React DataTable with expander column. Records should be provided from redux state slice.
2) expand row
3) have a button in the expanded row update a field on one of the records through a redux action/reducer.
4) You'll notice the row collapses, despite the expandedRowData still being present.
https://codesandbox.io/s/m752k9jq8y
**Please tell us about your environment:**
Windows/IntelliJ/Tomcat
* **React version:**
16.8.3
* **PrimeReact version:**
2.0.1
* **Browser:**
ALL
* **Language:**
ES6/7
|
defect
|
datatable expanded rows collapse when modifying one property of a record i m submitting a bug report feature request support request please do not submit support request here instead see current behavior if you are using the row expansion feature and saving the expanded row data in the state as demonstrated here all of the expanded rows will collapse if any of the records fields are changed when passing them in from redux it seems like the table loses track of the mapping between the expanded row data and the modified record data being passed in as a prop to values expected behavior expanded rows should not collapse when the records passed in to the table are not new just one or more fields modified minimal reproduction of the problem with instructions setup prime react datatable with expander column records should be provided from redux state slice expand row have a button in the expanded row update a field on one of the records through a redux action reducer you ll notice the row collapses despite the expandedrowdata still being present please tell us about your environment windows intellij tomcat react version primereact version browser all language
| 1
|
79,571
| 28,376,362,323
|
IssuesEvent
|
2023-04-12 21:12:00
|
DependencyTrack/dependency-track
|
https://api.github.com/repos/DependencyTrack/dependency-track
|
closed
|
OpenID login button not showing behind proxy
|
defect p2
|
### Current Behavior:
I am trying to connect Dependency Track login screen to an OpenID instance (Azure AD).
But the OpenID login button is not showing, and I get the "Failed to determine availability of OpenID Connect" message.
After digging up a bit, it appears that the call to myapiserver.com/api/v1/oidc/available returns a 504 error code.
The thing is I am behind a corporate proxy, and I cannot access internet without proxy.
Still, I have declared the following environment variables: http_proxy, https_proxy, HTTP_PROXY, HTTPS_PROXY, ALPINE_HTTP_PROXY_ADDRESS, ALPINE_HTTP_PROXY_PORT.
The proxy declaration works for the vuln DB downloads, but doesn't work for reaching out the OIDC provider (microsoft).
I checked the [alpine dependency](https://github.com/stevespringett/Alpine) that handles the OIDC part, that leads to the underlying [oauth-sdk OIDC](https://github.com/hidglobal/oauth-2.0-sdk-with-openid-connect-extensions).
And it seems to me that there is no proxy support in that part.
Or rather, environment variables for proxy do exist in Alpine, but they are never passed to the underlying oauth-sdk OIDC library (which does support a proxy), so the server simply cannot reach the outside for OIDC.
Did I miss something to make it work?
### Steps to Reproduce:
Try to use an external OIDC provider when inside a corporate environment that needs a proxy to go out.
### Expected Behavior:
Proxy is supported to reach the OIDC provider
### Environment:
- Dependency-Track Version: 4.5.0/4.5.1 (apiserver/frontend)
- Distribution: Docker
Let me know if you need additional information, and thanks for your time.
|
1.0
|
OpenID login button not showing behind proxy - ### Current Behavior:
I am trying to connect Dependency Track login screen to an OpenID instance (Azure AD).
But the OpenID login button is not showing, and I get the "Failed to determine availability of OpenID Connect" message.
After digging up a bit, it appears that the call to myapiserver.com/api/v1/oidc/available returns a 504 error code.
The thing is I am behind a corporate proxy, and I cannot access internet without proxy.
Still, I have declared the following environment variables: http_proxy, https_proxy, HTTP_PROXY, HTTPS_PROXY, ALPINE_HTTP_PROXY_ADDRESS, ALPINE_HTTP_PROXY_PORT.
The proxy declaration works for the vuln DB downloads, but doesn't work for reaching out the OIDC provider (microsoft).
I checked the [alpine dependency](https://github.com/stevespringett/Alpine) that handles the OIDC part, that leads to the underlying [oauth-sdk OIDC](https://github.com/hidglobal/oauth-2.0-sdk-with-openid-connect-extensions).
And it seems to me that there is no proxy support in that part.
Or rather, environment variables for proxy do exist in Alpine, but they are never passed to the underlying oauth-sdk OIDC library (which does support a proxy), so the server simply cannot reach the outside for OIDC.
Did I miss something to make it work?
### Steps to Reproduce:
Try to use an external OIDC provider when inside a corporate environment that needs a proxy to go out.
### Expected Behavior:
Proxy is supported to reach the OIDC provider
### Environment:
- Dependency-Track Version: 4.5.0/4.5.1 (apiserver/frontend)
- Distribution: Docker
Let me know if you need additional information, and thanks for your time.
|
defect
|
openid login button not showing behind proxy current behavior i am trying to connect dependency track login screen to an openid instance azure ad but the openid login button is not showing and i get the failed to determine availability of openid connect message after digging up a bit it appears that the call to myapiserver com api oidc available returns a error code the thing is i am behind a corporate proxy and i cannot access internet without proxy still i have declared the following environment variables http proxy https proxy http proxy https proxy alpine http proxy address alpine http proxy port the proxy declaration works for the vuln db downloads but doesn t work for reaching out the oidc provider microsoft i checked the that handles the oidc part that leads to the underlying and it seems to me that there is no proxy support in that part or rather environmnent variables for proxy do exist in alpine but are never passed to the underlying oauth sdk oidc library that supports proxy so the server just cannot reach the outside for oidc did i miss something to make it work steps to reproduce try to use an external oidc provider when inside a corporate environment that needs a proxy to go out expected behavior proxy is supported to reach the oidc provider environment dependency track version apiserver frontend distribution docker let me know if you need additional information and thanks for your time
| 1
|
78,252
| 27,393,224,812
|
IssuesEvent
|
2023-02-28 17:38:28
|
SeleniumHQ/selenium
|
https://api.github.com/repos/SeleniumHQ/selenium
|
opened
|
[🐛 Bug]: selenium-manager segmentation fault
|
I-defect needs-triaging
|
### What happened?
TLDR: When running selenium-manager, I get a segmentation fault.
Long version: When running my ruby rspec capybara tests, I get errors like this:
```
7) email subscriptions sign up for Daily & Weekly
Failure/Error: visit '/'
NoMethodError:
undefined method `positive?' for nil:NilClass
if status.exitstatus.positive?
^^^^^^^^^^
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/selenium_manager.rb:80:in `run'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/selenium_manager.rb:41:in `driver_path'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/service.rb:107:in `binary_path'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/service.rb:74:in `initialize'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/service.rb:32:in `new'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/service.rb:32:in `chrome'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/chrome/driver.rb:35:in `initialize'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/driver.rb:47:in `new'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/driver.rb:47:in `for'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver.rb:88:in `for'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/selenium/driver.rb:83:in `browser'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/selenium/driver.rb:104:in `visit'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/session.rb:280:in `visit'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/dsl.rb:52:in `call'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/dsl.rb:52:in `visit'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/rspec-rails-6.0.1/lib/rspec/rails/example/feature_example_group.rb:29:in `visit'
# ./spec/features/homepage/email_subscriptions_spec.rb:88:in `block (2 levels) in <top (required)>'
```
That got me playing around with https://github.com/SeleniumHQ/selenium/blob/trunk/rb/lib/selenium/webdriver/common/selenium_manager.rb#L41. When I run `selenium-manager` directly (see "Relevant log output" section), I get a segfault, which is probably messing with Open3 and causing the error that I'm seeing.
### How can we reproduce the issue?
```shell
See the relevant log output. I am running the selenium-manager executable out of my ruby gem's bin directory. I see the same issue for both chromedriver and geckodriver.
```
### Relevant log output
```shell
$ /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/bin/linux/selenium-manager --driver chromedriver --trace
TRACE Reading metadata from /home/ben/.cache/selenium/selenium-manager.json
DEBUG Using shell command to find out chrome version
DEBUG Running command: "google-chrome --version"
DEBUG Output: "Google Chrome 110.0.5481.177 "
DEBUG The version of chrome is 110.0.5481.177
TRACE Writing metadata to /home/ben/.cache/selenium/selenium-manager.json
DEBUG Detected browser: chrome 110
TRACE Reading metadata from /home/ben/.cache/selenium/selenium-manager.json
DEBUG Reading chromedriver version from https://chromedriver.storage.googleapis.com/LATEST_RELEASE_110
[1] 296300 segmentation fault (core dumped) --driver chromedriver --trace
$ /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/bin/linux/selenium-manager --driver geckodriver --trace
TRACE Reading metadata from /home/ben/.cache/selenium/selenium-manager.json
DEBUG Using shell command to find out firefox version
DEBUG Running command: "firefox -v"
DEBUG Output: "Mozilla Firefox 110.0"
DEBUG The version of firefox is 110.0
TRACE Writing metadata to /home/ben/.cache/selenium/selenium-manager.json
DEBUG Detected browser: firefox 110
TRACE Reading metadata from /home/ben/.cache/selenium/selenium-manager.json
[1] 297570 segmentation fault (core dumped) --driver geckodriver --trace
```
### Operating System
Ubuntu 22.10
### Selenium version
Ruby 4.8.1
### What are the browser(s) and version(s) where you see this issue?
Google Chrome 110.0.5481.177 and Mozilla Firefox 110.0
### What are the browser driver(s) and version(s) where you see this issue?
I'm not sure selenium-manager gets far enough
### Are you using Selenium Grid?
I don't think so
|
1.0
|
[🐛 Bug]: selenium-manager segmentation fault - ### What happened?
TLDR: When running selenium-manager, I get a segmentation fault.
Long version: When running my ruby rspec capybara tests, I get errors like this:
```
7) email subscriptions sign up for Daily & Weekly
Failure/Error: visit '/'
NoMethodError:
undefined method `positive?' for nil:NilClass
if status.exitstatus.positive?
^^^^^^^^^^
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/selenium_manager.rb:80:in `run'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/selenium_manager.rb:41:in `driver_path'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/service.rb:107:in `binary_path'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/service.rb:74:in `initialize'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/service.rb:32:in `new'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/service.rb:32:in `chrome'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/chrome/driver.rb:35:in `initialize'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/driver.rb:47:in `new'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver/common/driver.rb:47:in `for'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/lib/selenium/webdriver.rb:88:in `for'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/selenium/driver.rb:83:in `browser'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/selenium/driver.rb:104:in `visit'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/session.rb:280:in `visit'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/dsl.rb:52:in `call'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/capybara-3.38.0/lib/capybara/dsl.rb:52:in `visit'
# /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/rspec-rails-6.0.1/lib/rspec/rails/example/feature_example_group.rb:29:in `visit'
# ./spec/features/homepage/email_subscriptions_spec.rb:88:in `block (2 levels) in <top (required)>'
```
That got me playing around with https://github.com/SeleniumHQ/selenium/blob/trunk/rb/lib/selenium/webdriver/common/selenium_manager.rb#L41. When I run `selenium-manager` directly (see "Relevant log output" section), I get a segfault, which is probably messing with Open3 and causing the error that I'm seeing.
### How can we reproduce the issue?
```shell
See the relevant log output. I am running the selenium-manager executable out of my ruby gem's bin directory. I see the same issue for both chromedriver and geckodriver.
```
### Relevant log output
```shell
$ /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/bin/linux/selenium-manager --driver chromedriver --trace
TRACE Reading metadata from /home/ben/.cache/selenium/selenium-manager.json
DEBUG Using shell command to find out chrome version
DEBUG Running command: "google-chrome --version"
DEBUG Output: "Google Chrome 110.0.5481.177 "
DEBUG The version of chrome is 110.0.5481.177
TRACE Writing metadata to /home/ben/.cache/selenium/selenium-manager.json
DEBUG Detected browser: chrome 110
TRACE Reading metadata from /home/ben/.cache/selenium/selenium-manager.json
DEBUG Reading chromedriver version from https://chromedriver.storage.googleapis.com/LATEST_RELEASE_110
[1] 296300 segmentation fault (core dumped) --driver chromedriver --trace
$ /home/ben/.rbenv/versions/3.1.3/lib/ruby/gems/3.1.0/gems/selenium-webdriver-4.8.1/bin/linux/selenium-manager --driver geckodriver --trace
TRACE Reading metadata from /home/ben/.cache/selenium/selenium-manager.json
DEBUG Using shell command to find out firefox version
DEBUG Running command: "firefox -v"
DEBUG Output: "Mozilla Firefox 110.0"
DEBUG The version of firefox is 110.0
TRACE Writing metadata to /home/ben/.cache/selenium/selenium-manager.json
DEBUG Detected browser: firefox 110
TRACE Reading metadata from /home/ben/.cache/selenium/selenium-manager.json
[1] 297570 segmentation fault (core dumped) --driver geckodriver --trace
```
### Operating System
Ubuntu 22.10
### Selenium version
Ruby 4.8.1
### What are the browser(s) and version(s) where you see this issue?
Google Chrome 110.0.5481.177 and Mozilla Firefox 110.0
### What are the browser driver(s) and version(s) where you see this issue?
I'm not sure selenium-manager gets far enough
### Are you using Selenium Grid?
I don't think so
|
defect
|
selenium manager segmentation fault what happened tldr when running selenium manager i get a segmentation fault long version when running my ruby rspec capybara tests i get errors like this email subscriptions sign up for daily weekly failure error visit nomethoderror undefined method positive for nil nilclass if status exitstatus positive home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver common selenium manager rb in run home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver common selenium manager rb in driver path home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver common service rb in binary path home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver common service rb in initialize home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver common service rb in new home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver common service rb in chrome home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver chrome driver rb in initialize home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver common driver rb in new home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver common driver rb in for home ben rbenv versions lib ruby gems gems selenium webdriver lib selenium webdriver rb in for home ben rbenv versions lib ruby gems gems capybara lib capybara selenium driver rb in browser home ben rbenv versions lib ruby gems gems capybara lib capybara selenium driver rb in visit home ben rbenv versions lib ruby gems gems capybara lib capybara session rb in visit home ben rbenv versions lib ruby gems gems capybara lib capybara dsl rb in call home ben rbenv versions lib ruby gems gems capybara lib capybara dsl rb in visit home ben rbenv versions lib ruby gems gems rspec rails lib rspec rails example feature 
example group rb in visit spec features homepage email subscriptions spec rb in block levels in that got me playing around with when i run selenium manager directly see relevant log output section i get a segfault which is probably messing with and causing the error that i m seeing how can we reproduce the issue shell see the relevant log output i am running the selenium manager executable out of my ruby gem s bin directory i see the same issue for both chromedriver and geckodriver relevant log output shell home ben rbenv versions lib ruby gems gems selenium webdriver bin linux selenium manager driver chromedriver trace trace reading metadata from home ben cache selenium selenium manager json debug using shell command to find out chrome version debug running command google chrome version debug output google chrome debug the version of chrome is trace writing metadata to home ben cache selenium selenium manager json debug detected browser chrome trace reading metadata from home ben cache selenium selenium manager json debug reading chromedriver version from segmentation fault core dumped driver chromedriver trace home ben rbenv versions lib ruby gems gems selenium webdriver bin linux selenium manager driver geckodriver trace trace reading metadata from home ben cache selenium selenium manager json debug using shell command to find out firefox version debug running command firefox v debug output mozilla firefox debug the version of firefox is trace writing metadata to home ben cache selenium selenium manager json debug detected browser firefox trace reading metadata from home ben cache selenium selenium manager json segmentation fault core dumped driver geckodriver trace operating system ubuntu selenium version ruby what are the browser s and version s where you see this issue google chrome and mozilla firefox what are the browser driver s and version s where you see this issue i m not sure selenium manager gets far enough are you using selenium grid i don t think so
| 1
|
142,586
| 5,476,614,708
|
IssuesEvent
|
2017-03-11 22:18:09
|
botstory/botstory
|
https://api.github.com/repos/botstory/botstory
|
opened
|
simplify story switch
|
enhancement priority 2 -very critical
|
- [ ] story switch should be like story loop and be able to catch messages directly
|
1.0
|
simplify story switch - - [ ] story switch should be like story loop and be able to catch messages directly
|
non_defect
|
simplify story switch story switch should be like story loop and be able to catch messages direct
| 0
|
51,954
| 13,211,351,158
|
IssuesEvent
|
2020-08-15 22:30:26
|
icecube-trac/tix4
|
https://api.github.com/repos/icecube-trac/tix4
|
opened
|
steamshovel remove segfault (Trac #1355)
|
Incomplete Migration Migrated from Trac combo reconstruction defect
|
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1355">https://code.icecube.wisc.edu/projects/icecube/ticket/1355</a>, reported by berghaus</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-10-29T10:55:37",
"_ts": "1446116137408985",
"description": "Steamshovel keeps segfaulting when I try to remove stuff. No difference if an event is currently displayed or not. I'm using icerec-V04-11-02 and Ubuntu 15.04, but it happened with earlier releases and Ubuntu versions (14.04) as well. Apparently nobody else has this problem...",
"reporter": "berghaus",
"cc": "",
"resolution": "invalid",
"time": "2015-09-18T07:21:20",
"component": "combo reconstruction",
"summary": "steamshovel remove segfault",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
steamshovel remove segfault (Trac #1355) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1355">https://code.icecube.wisc.edu/projects/icecube/ticket/1355</a>, reported by berghaus</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2015-10-29T10:55:37",
"_ts": "1446116137408985",
"description": "Steamshovel keeps segfaulting when I try to remove stuff. No difference if an event is currently displayed or not. I'm using icerec-V04-11-02 and Ubuntu 15.04, but it happened with earlier releases and Ubuntu versions (14.04) as well. Apparently nobody else has this problem...",
"reporter": "berghaus",
"cc": "",
"resolution": "invalid",
"time": "2015-09-18T07:21:20",
"component": "combo reconstruction",
"summary": "steamshovel remove segfault",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "",
"type": "defect"
}
```
</p>
</details>
|
defect
|
steamshovel remove segfault trac migrated from json status closed changetime ts description steamshovel keeps segfaulting when i try to remove stuff no difference if an event is currently displayed or not i m using icerec and ubuntu but it happened with earlier releases and ubuntu versions as well apparently nobody else has this problem reporter berghaus cc resolution invalid time component combo reconstruction summary steamshovel remove segfault priority normal keywords milestone owner type defect
| 1
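The migrated-Trac record above carries its original ticket metadata as a fenced JSON block inside the issue body. A hedged sketch of pulling that payload back out of such a body — the helper name and regex are illustrative assumptions, not part of the dataset or the migration tooling:

```python
import json
import re

# Match a ```json ... ``` fenced block; the backtick fence is built
# programmatically so this sketch can itself sit inside markdown.
_FENCE = "`" * 3
_JSON_FENCE = re.compile(_FENCE + r"json\s*(\{.*?\})\s*" + _FENCE, re.DOTALL)

def extract_trac_payload(body: str) -> dict:
    """Parse the first fenced JSON payload in a migrated-Trac issue body.
    Assumes a flat JSON object, as in the sample record above."""
    match = _JSON_FENCE.search(body)
    if match is None:
        raise ValueError("no JSON fence found in issue body")
    return json.loads(match.group(1))
```

On the record above, this would recover fields such as `"status"`, `"component"`, and `"type"` from the embedded ticket.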
|
259,384
| 8,198,185,449
|
IssuesEvent
|
2018-08-31 15:34:45
|
End-to-end-provenance/RDataTracker
|
https://api.github.com/repos/End-to-end-provenance/RDataTracker
|
closed
|
Bug in console DDG JSON
|
medium priority ready
|
```
{
"prefix" : {
"prov" : "http://www.w3.org/ns/prov#",
"rdt" : "http://rdatatracker.org/"
},
"activity":{
"p1" : {
"rdt:name" : "Console",
"rdt:type" : "Start",
"rdt:elapsedTime" : "0.407",
"rdt:scriptNum" : "NA",
"rdt:startLine" : "NA",
"rdt:startCol" : "NA",
"rdt:endLine" : "NA",
"rdt:endCol" : "NA"
} ,
"p2" : {
"rdt:name" : "data(iris)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "0.485",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "1",
"rdt:startCol" : "1",
"rdt:endLine" : "1",
"rdt:endCol" : "10"
} ,
"p3" : {
"rdt:name" : "write.csv(iris, \"my_data.csv\")",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "0.67",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "2",
"rdt:startCol" : "1",
"rdt:endLine" : "2",
"rdt:endCol" : "29"
} ,
"p4" : {
"rdt:name" : "x = read.csv(\"my_data.csv\")",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "0.943",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "3",
"rdt:startCol" : "1",
"rdt:endLine" : "3",
"rdt:endCol" : "1"
} ,
"p5" : {
"rdt:name" : "y = x",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.115",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "3",
"rdt:startCol" : "5",
"rdt:endLine" : "3",
"rdt:endCol" : "27"
} ,
"p6" : {
"rdt:name" : "y[5, 3] = 500",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.129",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "4",
"rdt:startCol" : "1",
"rdt:endLine" : "4",
"rdt:endCol" : "1"
} ,
"p7" : {
"rdt:name" : "str(x)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.156",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "4",
"rdt:startCol" : "5",
"rdt:endLine" : "4",
"rdt:endCol" : "5"
} ,
"p8" : {
"rdt:name" : "str(y)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.174",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "5",
"rdt:startCol" : "1",
"rdt:endLine" : "5",
"rdt:endCol" : "6"
} ,
"p9" : {
"rdt:name" : "pdf(\"cor_plot.pdf\")",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.186",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "5",
"rdt:startCol" : "8",
"rdt:endLine" : "5",
"rdt:endCol" : "8"
} ,
"p10" : {
"rdt:name" : "str(y)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.191",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "5",
"rdt:startCol" : "10",
"rdt:endLine" : "5",
"rdt:endCol" : "12"
} ,
"p11" : {
"rdt:name" : "plot(Sepal.Length ~ Sepal.Width, data = y)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.205",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "6",
"rdt:startCol" : "1",
"rdt:endLine" : "6",
"rdt:endCol" : "6"
} ,
"p12" : {
"rdt:name" : "abline(v = 500)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.225",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "7",
"rdt:startCol" : "1",
"rdt:endLine" : "7",
"rdt:endCol" : "6"
} ,
"p13" : {
"rdt:name" : "dev.off()",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.23",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "8",
"rdt:startCol" : "1",
"rdt:endLine" : "8",
"rdt:endCol" : "19"
} ,
"p14" : {
"rdt:name" : "Console",
"rdt:type" : "Finish",
"rdt:elapsedTime" : "1.243",
"rdt:scriptNum" : "NA",
"rdt:startLine" : "NA",
"rdt:startCol" : "NA",
"rdt:endLine" : "NA",
"rdt:endCol" : "NA"
} ,
"environment" : {
"rdt:name" : "environment",
"rdt:architecture" : "x86_64",
"rdt:operatingSystem" : "unix",
"rdt:language" : "R",
"rdt:rVersion" : "R version 3.4.0 (2017-04-21)",
"rdt:script" : "",
"rdt:sourcedScripts" : ,
"rdt:scriptTimeStamp" : "",
"rdt:workingDirectory" : "/Users/hermes/Labs/HF/projects/e2ep/projects/codecleanR/json",
"rdt:ddgDirectory" : "console_ddg",
"rdt:ddgTimeStamp" : "2017-06-22T12.20.57EDT",
"rdt:rdatatrackerVersion" : "2.26.1",
"rdt:installedPackages" : [
{"package" : "base", "version" : "3.4.0"},
{"package" : "datasets", "version" : "3.4.0"},
{"package" : "graphics", "version" : "3.4.0"},
{"package" : "grDevices", "version" : "3.4.0"},
{"package" : "methods", "version" : "3.4.0"},
{"package" : "RDataTracker", "version" : "2.26.1"},
{"package" : "stats", "version" : "3.4.0"},
{"package" : "utils", "version" : "3.4.0"}]
}},
"entity":{
"d1" : {
"rdt:name" : "iris [ENV]",
"rdt:value" : "data/1-iris.csv",
"rdt:valType" : {"container":"data_frame", "dimension":[150,5], "type":["numeric","numeric","numeric","numeric","factor"]},
"rdt:type" : "Snapshot",
"rdt:scope" : "R_GlobalEnv",
"rdt:fromEnv" : "TRUE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d2" : {
"rdt:name" : "my_data.csv",
"rdt:value" : "data/2-my_data.csv",
"rdt:valType" : {"container":"vector", "dimension":[1], "type":["character"]},
"rdt:type" : "File",
"rdt:scope" : "undefined",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d3" : {
"rdt:name" : "my_data.csv",
"rdt:value" : "data/3-my_data.csv",
"rdt:valType" : {"container":"vector", "dimension":[1], "type":["character"]},
"rdt:type" : "File",
"rdt:scope" : "undefined",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d4" : {
"rdt:name" : "x",
"rdt:value" : "data/4-x.csv",
"rdt:valType" : {"container":"data_frame", "dimension":[150,6], "type":["integer","numeric","numeric","numeric","numeric","factor"]},
"rdt:type" : "Snapshot",
"rdt:scope" : "R_GlobalEnv",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d5" : {
"rdt:name" : "y [ENV]",
"rdt:value" : "data/5-y.csv",
"rdt:valType" : {"container":"data_frame", "dimension":[150,6], "type":["integer","numeric","numeric","numeric","numeric","factor"]},
"rdt:type" : "Snapshot",
"rdt:scope" : "R_GlobalEnv",
"rdt:fromEnv" : "TRUE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d6" : {
"rdt:name" : "y",
"rdt:value" : "data/6-y.csv",
"rdt:valType" : {"container":"data_frame", "dimension":[150,6], "type":["integer","numeric","numeric","numeric","numeric","factor"]},
"rdt:type" : "Snapshot",
"rdt:scope" : "R_GlobalEnv",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d7" : {
"rdt:name" : "cor_plot.pdf",
"rdt:value" : "data/7-cor_plot.pdf",
"rdt:valType" : {"container":"vector", "dimension":[1], "type":["character"]},
"rdt:type" : "File",
"rdt:scope" : "undefined",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
}},
"wasInformedBy":{
"e1" : {
"prov:informant" : "p1",
"prov:informed" : "p2"
} ,
"e3" : {
"prov:informant" : "p2",
"prov:informed" : "p3"
} ,
"e6" : {
"prov:informant" : "p3",
"prov:informed" : "p4"
} ,
"e9" : {
"prov:informant" : "p4",
"prov:informed" : "p5"
} ,
"e11" : {
"prov:informant" : "p5",
"prov:informed" : "p6"
} ,
"e14" : {
"prov:informant" : "p6",
"prov:informed" : "p7"
} ,
"e16" : {
"prov:informant" : "p7",
"prov:informed" : "p8"
} ,
"e18" : {
"prov:informant" : "p8",
"prov:informed" : "p9"
} ,
"e19" : {
"prov:informant" : "p9",
"prov:informed" : "p10"
} ,
"e21" : {
"prov:informant" : "p10",
"prov:informed" : "p11"
} ,
"e23" : {
"prov:informant" : "p11",
"prov:informed" : "p12"
} ,
"e24" : {
"prov:informant" : "p12",
"prov:informed" : "p13"
} ,
"e26" : {
"prov:informant" : "p13",
"prov:informed" : "p14"
}},
"wasGeneratedBy":{
"e5" : {
"prov:entity" : "d2",
"prov:activity" : "p3"
} ,
"e8" : {
"prov:entity" : "d4",
"prov:activity" : "p4"
} ,
"e13" : {
"prov:entity" : "d6",
"prov:activity" : "p6"
} ,
"e25" : {
"prov:entity" : "d7",
"prov:activity" : "p13"
}},
"used":{
"e2" : {
"prov:activity" : "p2",
"prov:entity" : "d1"
} ,
"e4" : {
"prov:activity" : "p3",
"prov:entity" : "d1"
} ,
"e7" : {
"prov:activity" : "p4",
"prov:entity" : "d3"
} ,
"e10" : {
"prov:activity" : "p5",
"prov:entity" : "d4"
} ,
"e12" : {
"prov:activity" : "p6",
"prov:entity" : "d5"
} ,
"e15" : {
"prov:activity" : "p7",
"prov:entity" : "d4"
} ,
"e17" : {
"prov:activity" : "p8",
"prov:entity" : "d6"
} ,
"e20" : {
"prov:activity" : "p10",
"prov:entity" : "d6"
} ,
"e22" : {
"prov:activity" : "p11",
"prov:entity" : "d6"
}}
}
```
DDG generated by Matt (`development` branch). A number of issues:
1- it is not valid JSON
2- the same file appears as two distinct nodes (see image below)

3- the edge connecting the `data(xxx)` statement to the variable seems to be reversed

|
1.0
|
Bug in console DDG JSON - ```
{
"prefix" : {
"prov" : "http://www.w3.org/ns/prov#",
"rdt" : "http://rdatatracker.org/"
},
"activity":{
"p1" : {
"rdt:name" : "Console",
"rdt:type" : "Start",
"rdt:elapsedTime" : "0.407",
"rdt:scriptNum" : "NA",
"rdt:startLine" : "NA",
"rdt:startCol" : "NA",
"rdt:endLine" : "NA",
"rdt:endCol" : "NA"
} ,
"p2" : {
"rdt:name" : "data(iris)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "0.485",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "1",
"rdt:startCol" : "1",
"rdt:endLine" : "1",
"rdt:endCol" : "10"
} ,
"p3" : {
"rdt:name" : "write.csv(iris, \"my_data.csv\")",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "0.67",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "2",
"rdt:startCol" : "1",
"rdt:endLine" : "2",
"rdt:endCol" : "29"
} ,
"p4" : {
"rdt:name" : "x = read.csv(\"my_data.csv\")",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "0.943",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "3",
"rdt:startCol" : "1",
"rdt:endLine" : "3",
"rdt:endCol" : "1"
} ,
"p5" : {
"rdt:name" : "y = x",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.115",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "3",
"rdt:startCol" : "5",
"rdt:endLine" : "3",
"rdt:endCol" : "27"
} ,
"p6" : {
"rdt:name" : "y[5, 3] = 500",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.129",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "4",
"rdt:startCol" : "1",
"rdt:endLine" : "4",
"rdt:endCol" : "1"
} ,
"p7" : {
"rdt:name" : "str(x)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.156",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "4",
"rdt:startCol" : "5",
"rdt:endLine" : "4",
"rdt:endCol" : "5"
} ,
"p8" : {
"rdt:name" : "str(y)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.174",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "5",
"rdt:startCol" : "1",
"rdt:endLine" : "5",
"rdt:endCol" : "6"
} ,
"p9" : {
"rdt:name" : "pdf(\"cor_plot.pdf\")",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.186",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "5",
"rdt:startCol" : "8",
"rdt:endLine" : "5",
"rdt:endCol" : "8"
} ,
"p10" : {
"rdt:name" : "str(y)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.191",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "5",
"rdt:startCol" : "10",
"rdt:endLine" : "5",
"rdt:endCol" : "12"
} ,
"p11" : {
"rdt:name" : "plot(Sepal.Length ~ Sepal.Width, data = y)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.205",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "6",
"rdt:startCol" : "1",
"rdt:endLine" : "6",
"rdt:endCol" : "6"
} ,
"p12" : {
"rdt:name" : "abline(v = 500)",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.225",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "7",
"rdt:startCol" : "1",
"rdt:endLine" : "7",
"rdt:endCol" : "6"
} ,
"p13" : {
"rdt:name" : "dev.off()",
"rdt:type" : "Operation",
"rdt:elapsedTime" : "1.23",
"rdt:scriptNum" : "-1",
"rdt:startLine" : "8",
"rdt:startCol" : "1",
"rdt:endLine" : "8",
"rdt:endCol" : "19"
} ,
"p14" : {
"rdt:name" : "Console",
"rdt:type" : "Finish",
"rdt:elapsedTime" : "1.243",
"rdt:scriptNum" : "NA",
"rdt:startLine" : "NA",
"rdt:startCol" : "NA",
"rdt:endLine" : "NA",
"rdt:endCol" : "NA"
} ,
"environment" : {
"rdt:name" : "environment",
"rdt:architecture" : "x86_64",
"rdt:operatingSystem" : "unix",
"rdt:language" : "R",
"rdt:rVersion" : "R version 3.4.0 (2017-04-21)",
"rdt:script" : "",
"rdt:sourcedScripts" : ,
"rdt:scriptTimeStamp" : "",
"rdt:workingDirectory" : "/Users/hermes/Labs/HF/projects/e2ep/projects/codecleanR/json",
"rdt:ddgDirectory" : "console_ddg",
"rdt:ddgTimeStamp" : "2017-06-22T12.20.57EDT",
"rdt:rdatatrackerVersion" : "2.26.1",
"rdt:installedPackages" : [
{"package" : "base", "version" : "3.4.0"},
{"package" : "datasets", "version" : "3.4.0"},
{"package" : "graphics", "version" : "3.4.0"},
{"package" : "grDevices", "version" : "3.4.0"},
{"package" : "methods", "version" : "3.4.0"},
{"package" : "RDataTracker", "version" : "2.26.1"},
{"package" : "stats", "version" : "3.4.0"},
{"package" : "utils", "version" : "3.4.0"}]
}},
"entity":{
"d1" : {
"rdt:name" : "iris [ENV]",
"rdt:value" : "data/1-iris.csv",
"rdt:valType" : {"container":"data_frame", "dimension":[150,5], "type":["numeric","numeric","numeric","numeric","factor"]},
"rdt:type" : "Snapshot",
"rdt:scope" : "R_GlobalEnv",
"rdt:fromEnv" : "TRUE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d2" : {
"rdt:name" : "my_data.csv",
"rdt:value" : "data/2-my_data.csv",
"rdt:valType" : {"container":"vector", "dimension":[1], "type":["character"]},
"rdt:type" : "File",
"rdt:scope" : "undefined",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d3" : {
"rdt:name" : "my_data.csv",
"rdt:value" : "data/3-my_data.csv",
"rdt:valType" : {"container":"vector", "dimension":[1], "type":["character"]},
"rdt:type" : "File",
"rdt:scope" : "undefined",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d4" : {
"rdt:name" : "x",
"rdt:value" : "data/4-x.csv",
"rdt:valType" : {"container":"data_frame", "dimension":[150,6], "type":["integer","numeric","numeric","numeric","numeric","factor"]},
"rdt:type" : "Snapshot",
"rdt:scope" : "R_GlobalEnv",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d5" : {
"rdt:name" : "y [ENV]",
"rdt:value" : "data/5-y.csv",
"rdt:valType" : {"container":"data_frame", "dimension":[150,6], "type":["integer","numeric","numeric","numeric","numeric","factor"]},
"rdt:type" : "Snapshot",
"rdt:scope" : "R_GlobalEnv",
"rdt:fromEnv" : "TRUE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d6" : {
"rdt:name" : "y",
"rdt:value" : "data/6-y.csv",
"rdt:valType" : {"container":"data_frame", "dimension":[150,6], "type":["integer","numeric","numeric","numeric","numeric","factor"]},
"rdt:type" : "Snapshot",
"rdt:scope" : "R_GlobalEnv",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
} ,
"d7" : {
"rdt:name" : "cor_plot.pdf",
"rdt:value" : "data/7-cor_plot.pdf",
"rdt:valType" : {"container":"vector", "dimension":[1], "type":["character"]},
"rdt:type" : "File",
"rdt:scope" : "undefined",
"rdt:fromEnv" : "FALSE",
"rdt:timestamp" : "",
"rdt:location" : ""
}},
"wasInformedBy":{
"e1" : {
"prov:informant" : "p1",
"prov:informed" : "p2"
} ,
"e3" : {
"prov:informant" : "p2",
"prov:informed" : "p3"
} ,
"e6" : {
"prov:informant" : "p3",
"prov:informed" : "p4"
} ,
"e9" : {
"prov:informant" : "p4",
"prov:informed" : "p5"
} ,
"e11" : {
"prov:informant" : "p5",
"prov:informed" : "p6"
} ,
"e14" : {
"prov:informant" : "p6",
"prov:informed" : "p7"
} ,
"e16" : {
"prov:informant" : "p7",
"prov:informed" : "p8"
} ,
"e18" : {
"prov:informant" : "p8",
"prov:informed" : "p9"
} ,
"e19" : {
"prov:informant" : "p9",
"prov:informed" : "p10"
} ,
"e21" : {
"prov:informant" : "p10",
"prov:informed" : "p11"
} ,
"e23" : {
"prov:informant" : "p11",
"prov:informed" : "p12"
} ,
"e24" : {
"prov:informant" : "p12",
"prov:informed" : "p13"
} ,
"e26" : {
"prov:informant" : "p13",
"prov:informed" : "p14"
}},
"wasGeneratedBy":{
"e5" : {
"prov:entity" : "d2",
"prov:activity" : "p3"
} ,
"e8" : {
"prov:entity" : "d4",
"prov:activity" : "p4"
} ,
"e13" : {
"prov:entity" : "d6",
"prov:activity" : "p6"
} ,
"e25" : {
"prov:entity" : "d7",
"prov:activity" : "p13"
}},
"used":{
"e2" : {
"prov:activity" : "p2",
"prov:entity" : "d1"
} ,
"e4" : {
"prov:activity" : "p3",
"prov:entity" : "d1"
} ,
"e7" : {
"prov:activity" : "p4",
"prov:entity" : "d3"
} ,
"e10" : {
"prov:activity" : "p5",
"prov:entity" : "d4"
} ,
"e12" : {
"prov:activity" : "p6",
"prov:entity" : "d5"
} ,
"e15" : {
"prov:activity" : "p7",
"prov:entity" : "d4"
} ,
"e17" : {
"prov:activity" : "p8",
"prov:entity" : "d6"
} ,
"e20" : {
"prov:activity" : "p10",
"prov:entity" : "d6"
} ,
"e22" : {
"prov:activity" : "p11",
"prov:entity" : "d6"
}}
}
```
DDG generated by Matt (`development` branch). A number of issues:
1- it is not valid JSON
2- the same file appears as two distinct nodes (see image below)

3- the edge connecting the `data(xxx)` statement to the variable seems to be reversed

|
non_defect
|
bug in console ddg json prefix prov rdt activity rdt name console rdt type start rdt elapsedtime rdt scriptnum na rdt startline na rdt startcol na rdt endline na rdt endcol na rdt name data iris rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name write csv iris my data csv rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name x read csv my data csv rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name y x rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name y rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name str x rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name str y rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name pdf cor plot pdf rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name str y rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name plot sepal length sepal width data y rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name abline v rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name dev off rdt type operation rdt elapsedtime rdt scriptnum rdt startline rdt startcol rdt endline rdt endcol rdt name console rdt type finish rdt elapsedtime rdt scriptnum na rdt startline na rdt startcol na rdt endline na rdt endcol na environment rdt name environment rdt architecture rdt operatingsystem unix rdt language r rdt rversion r version rdt script rdt sourcedscripts rdt scripttimestamp rdt workingdirectory users hermes labs hf projects projects codecleanr json rdt 
ddgdirectory console ddg rdt ddgtimestamp rdt rdatatrackerversion rdt installedpackages package base version package datasets version package graphics version package grdevices version package methods version package rdatatracker version package stats version package utils version entity rdt name iris rdt value data iris csv rdt valtype container data frame dimension type rdt type snapshot rdt scope r globalenv rdt fromenv true rdt timestamp rdt location rdt name my data csv rdt value data my data csv rdt valtype container vector dimension type rdt type file rdt scope undefined rdt fromenv false rdt timestamp rdt location rdt name my data csv rdt value data my data csv rdt valtype container vector dimension type rdt type file rdt scope undefined rdt fromenv false rdt timestamp rdt location rdt name x rdt value data x csv rdt valtype container data frame dimension type rdt type snapshot rdt scope r globalenv rdt fromenv false rdt timestamp rdt location rdt name y rdt value data y csv rdt valtype container data frame dimension type rdt type snapshot rdt scope r globalenv rdt fromenv true rdt timestamp rdt location rdt name y rdt value data y csv rdt valtype container data frame dimension type rdt type snapshot rdt scope r globalenv rdt fromenv false rdt timestamp rdt location rdt name cor plot pdf rdt value data cor plot pdf rdt valtype container vector dimension type rdt type file rdt scope undefined rdt fromenv false rdt timestamp rdt location wasinformedby prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed prov informant prov informed wasgeneratedby prov entity prov activity prov entity prov activity prov entity prov activity prov entity prov activity used prov activity 
prov entity prov activity prov entity prov activity prov entity prov activity prov entity prov activity prov entity prov activity prov entity prov activity prov entity prov activity prov entity prov activity prov entity ddg generated by matt development branch a number of issues it is not a valid json the same file appears as two distinct nodes see image bellow the edge connecting data xxx statement to the variable seems to be reversed
| 0
|
32,950
| 6,975,884,518
|
IssuesEvent
|
2017-12-12 09:06:41
|
primefaces/primereact
|
https://api.github.com/repos/primefaces/primereact
|
closed
|
AutoComplete doesn't accept spaces
|
defect
|
Hi there,
navigate to https://www.primefaces.org/primereact/#/autocomplete and search for "American Samoa". The space after "American" isn't accepted or rendered by the control.
Bye,
Mariusz
|
1.0
|
AutoComplete doesn't accept spaces - Hi there,
navigate to https://www.primefaces.org/primereact/#/autocomplete and search for "American Samoa". The space after "American" isn't accepted or rendered by the control.
Bye,
Mariusz
|
defect
|
autocomplete dosn t accept spaces hi there navigate to and search for american samoa the space after american isn t accepted and rendered by the control bye mariusz
| 1
|
706,912
| 24,288,226,953
|
IssuesEvent
|
2022-09-29 01:44:00
|
oppia/oppia-android
|
https://api.github.com/repos/oppia/oppia-android
|
closed
|
Marquee restarts automatically in certain scenarios.
|
Priority: Essential issue_type_bug issue_user_impact_low user_team issue_temp_ben_triaged issue_user_learner
|
**Describe the bug**
In the ideal case, the marquee should only start when the user clicks on the toolbar, but it restarts automatically in certain scenarios after it has stopped for the first time.
**To Reproduce**
Steps to reproduce the behavior:
General steps for all of the cases below:
1) Go to The Meaning of Equal Parts exploration
2) Go to the second state. Note that the title is not moving.
3) Click on the toolbar; the marquee starts moving.
Case 1:
4) Once the marquee stops, pull the notification bar down from the top of the screen and then push it back up; the marquee starts moving again without the toolbar being clicked.
Case 2:
4) Once the marquee stops, click on the EditText; the marquee starts moving again. Wait for it to stop without doing anything else.
After the marquee stops, pressing the back button to close the keyboard also makes it start moving again.
Case 3:
4) Once the marquee stops, click on the hint button; the marquee starts moving again.
Case 4:
4) Once the marquee stops, click on the close button at the top left; the marquee starts moving again.
Case 5:
4) Once the marquee stops, clicking the toolbar again does not restart it, no matter how many times the user clicks.
**Expected behavior**
The marquee effect should start only when the toolbar is clicked, never automatically, and it should start every time the user clicks the toolbar after it has stopped moving.
For reference, please refer to this issue: #1770
|
1.0
|
Marquee restarts automatically in certain scenarios. - **Describe the bug**
In the ideal case, the marquee should only start when the user clicks on the toolbar, but it restarts automatically in certain scenarios after it has stopped for the first time.
**To Reproduce**
Steps to reproduce the behavior:
General steps for all of the cases below:
1) Go to The Meaning of Equal Parts exploration
2) Go to the second state. Note that the title is not moving.
3) Click on the toolbar; the marquee starts moving.
Case 1:
4) Once the marquee stops, pull the notification bar down from the top of the screen and then push it back up; the marquee starts moving again without the toolbar being clicked.
Case 2:
4) Once the marquee stops, click on the EditText; the marquee starts moving again. Wait for it to stop without doing anything else.
After the marquee stops, pressing the back button to close the keyboard also makes it start moving again.
Case 3:
4) Once the marquee stops, click on the hint button; the marquee starts moving again.
Case 4:
4) Once the marquee stops, click on the close button at the top left; the marquee starts moving again.
Case 5:
4) Once the marquee stops, clicking the toolbar again does not restart it, no matter how many times the user clicks.
**Expected behavior**
The marquee effect should start only when the toolbar is clicked, never automatically, and it should start every time the user clicks the toolbar after it has stopped moving.
For reference, please refer to this issue: #1770
|
non_defect
|
marquee restarts automatically in certain scenarios describe the bug in ideal case marquee should only start when user click on toolbar but marquee restarts automatically in certain scenarios after marquee stops for the first time to reproduce steps to reproduce the behavior general steps for all below mentioned cases go to the meaning of equal parts exploration go to second state note that the title is not moving click on toolbar and marquee starts moving case once marquee stops pull down notification bar of your mobile from top to bottom and then pull up you will see marquee moving again without clicking the toolbar case once marquee stops click on edittext marquee starts moving again now wait for it to stop don t do anything also after the marquee stops then press back button to close the keyboard and you will see marquee starts moving again case once marquee stops click on hint button marquee starts moving again case once marquee stops click on close button at top left marquee starts moving again case once marquee stops on clicking it again marquee won t start no matter how many times user clicks on the toolbar expected behavior marquess effect should start only on click on the toolbar and not automatically it should starts every time user clicks on the toolbar after marquee stops moving for reference please refer to this issue
| 0
|
33,017
| 6,996,264,838
|
IssuesEvent
|
2017-12-15 23:20:49
|
MDAnalysis/mdanalysis
|
https://api.github.com/repos/MDAnalysis/mdanalysis
|
closed
|
Set b-factor for PDB file
|
Component-Core Component-Docs defect
|
### Expected behaviour
I have a PDB file whose atoms I want to assign b-factors to.
### Actual behaviour
The b-factor field in the pdb file is not changed
### Code to reproduce the behaviour
``` python
from MDAnalysis import Universe
from six import StringIO
pdb = 'ATOM 1 N MET S 1 85.260 60.210 43.770 1.00 0.00 SYST N'
u = Universe(StringIO(pdb), format='pdb')
u.atoms.bfactors = 0.5
u.atoms.write('test.pdb')
```
The content of test.pdb:
> HEADER
> TITLE MDANALYSIS FRAME 0: Created by PDBWriter
> CRYST1 0.000 0.000 0.000 0.00 0.00 0.00 P 1 1
> ATOM 1 N MET S 1 85.260 60.210 43.770 1.00 0.00 SYST N
> END
### Current version of MDAnalysis:
py35, mda0.16.1
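Independent of MDAnalysis, the B-factor occupies fixed columns 61-66 of a PDB `ATOM` record, so the field can be inspected and patched directly; a minimal pure-Python sketch (the record below is a hypothetical, properly column-aligned version of the line in the report):

```python
# PDB ATOM records are fixed-width: occupancy sits in columns 55-60,
# the temperature factor (B-factor) in columns 61-66 (1-based).
record = "ATOM      1  N   MET S   1      85.260  60.210  43.770  1.00  0.00"

bfactor = float(record[60:66])  # read the B-factor field (0.0 here)
# Patch the field to 0.50, keeping the rest of the record intact.
patched = record[:60] + "{:6.2f}".format(0.50) + record[66:]
print(patched[60:66])
```

This is only a format-level workaround sketch, not the fix for the reported writer behavior.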
|
1.0
|
Set b-factor for PDB file - ### Expected behaviour
I have a PDB file whose atoms I want to assign b-factors to.
### Actual behaviour
The b-factor field in the pdb file is not changed
### Code to reproduce the behaviour
``` python
from MDAnalysis import Universe
from six import StringIO
pdb = 'ATOM 1 N MET S 1 85.260 60.210 43.770 1.00 0.00 SYST N'
u = Universe(StringIO(pdb), format='pdb')
u.atoms.bfactors = 0.5
u.atoms.write('test.pdb')
```
The content of test.pdb:
> HEADER
> TITLE MDANALYSIS FRAME 0: Created by PDBWriter
> CRYST1 0.000 0.000 0.000 0.00 0.00 0.00 P 1 1
> ATOM 1 N MET S 1 85.260 60.210 43.770 1.00 0.00 SYST N
> END
### Current version of MDAnalysis:
py35, mda0.16.1
|
defect
|
set b factor for pdb file expected behaviour i have a pdb file which i want to set b factors to the atoms actual behaviour the b factor field in the pdb file is not changed code to reproduce the behaviour python from mdanalysis import universe from six import stringio pdb atom n met s syst n u universe stringio pdb format pdb u atoms bfactors u atoms write test pdb the content of test pdb header title mdanalysis frame created by pdbwriter p atom n met s syst n end currently version of mdanalysis
| 1
|
101,509
| 16,512,294,835
|
IssuesEvent
|
2021-05-26 06:28:37
|
valtech-ch/microservice-kubernetes-cluster
|
https://api.github.com/repos/valtech-ch/microservice-kubernetes-cluster
|
opened
|
CVE-2020-14062 (High) detected in jackson-databind-2.9.8.jar
|
security vulnerability
|
## CVE-2020-14062 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: microservice-kubernetes-cluster/functions/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-function-adapter-azure-3.1.2.jar (Root Library)
- spring-cloud-function-context-3.1.2.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/eb274179a823f7d17154880d5a503973bae259a0">eb274179a823f7d17154880d5a503973bae259a0</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2).
<p>Publish Date: 2020-06-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14062>CVE-2020-14062</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p>
<p>Release Date: 2020-06-14</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-14062 (High) detected in jackson-databind-2.9.8.jar - ## CVE-2020-14062 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.9.8.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: microservice-kubernetes-cluster/functions/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.9.8/11283f21cc480aa86c4df7a0a3243ec508372ed2/jackson-databind-2.9.8.jar</p>
<p>
Dependency Hierarchy:
- spring-cloud-function-adapter-azure-3.1.2.jar (Root Library)
- spring-cloud-function-context-3.1.2.jar
- :x: **jackson-databind-2.9.8.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/valtech-ch/microservice-kubernetes-cluster/commit/eb274179a823f7d17154880d5a503973bae259a0">eb274179a823f7d17154880d5a503973bae259a0</a></p>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2).
<p>Publish Date: 2020-06-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-14062>CVE-2020-14062</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p>
<p>Release Date: 2020-06-14</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file microservice kubernetes cluster functions build gradle path to vulnerable library home wss scanner gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spring cloud function adapter azure jar root library spring cloud function context jar x jackson databind jar vulnerable library found in head commit a href found in base branch develop vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com sun org apache xalan internal lib sql jndiconnectionpool aka publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
| 0
|
43,065
| 11,460,526,898
|
IssuesEvent
|
2020-02-07 09:55:50
|
contao/contao
|
https://api.github.com/repos/contao/contao
|
closed
|
Favicon cannot be added to the website root page
|
defect
|
When I select a favicon for the website root page ("Startpunkt") in the back end of a 4.9 installation (latest dev) and click Apply, nothing happens.
The browser console shows the following error message:
```
TypeError: t.field is undefined mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:6:145
initialize https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:6
initialize https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:1
r https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:1
e https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:1
callback https://www.contao49.local/contao?do=page&act=edit&id=1&rt=tN0k0YTM0VXnUItr2PJoznJzfPPyk3VDzLNaUnuIyKc&ref=Kbh54rlS:296
openModalSelector https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:7
<anonym> self-hosted:876
a https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:1
```
|
1.0
|
Favicon cannot be inserted at the start point - When I select a favicon for the start point in the backend of a 4.9 (latest dev) and click Apply, nothing happens.
The following error message appears in the browser console:
```
TypeError: t.field is undefined mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:6:145
initialize https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:6
initialize https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:1
r https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:1
e https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:1
callback https://www.contao49.local/contao?do=page&act=edit&id=1&rt=tN0k0YTM0VXnUItr2PJoznJzfPPyk3VDzLNaUnuIyKc&ref=Kbh54rlS:296
openModalSelector https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:7
<anonym> self-hosted:876
a https://www.contao49.local/assets/js/mootools.min.js,mooRainbow.min.js,chosen.min.js,simplemodal.min....-3d1161f5.js:1
```
|
defect
|
favicon kann beim startpunkt nicht eingefügt werden wenn ich im backend einer neuste dev beim startpunkt ein favicon auswähle und auf anwenden klicke passiert nichts in der browser konsole steht folgende fehlermeldung typeerror t field is undefined mootools min js moorainbow min js chosen min js simplemodal min js initialize initialize r e callback openmodalselector self hosted a
| 1
|
7,858
| 4,082,099,243
|
IssuesEvent
|
2016-05-31 11:33:34
|
shockone/black-screen
|
https://api.github.com/repos/shockone/black-screen
|
closed
|
Automate build?
|
Build Workflow
|
Maybe look into https://github.com/mainyaa/gulp-electron and https://codeship.com so that you can build an executable for all platforms automatically on a successful deploy.
|
1.0
|
Automate build? - Maybe look into https://github.com/mainyaa/gulp-electron and https://codeship.com so that you can build an executable for all platforms automatically on a successful deploy.
|
non_defect
|
automate build maybe look into and so that you can build a executable for all platforms automatically on a successful deploy
| 0
|
395,380
| 27,067,380,153
|
IssuesEvent
|
2023-02-14 02:11:32
|
APSIMInitiative/ApsimX
|
https://api.github.com/repos/APSIMInitiative/ApsimX
|
closed
|
APSIM Next Gen Site showing broken links
|
documentation
|
The contribute section has broken links to source tree and CLI pages.
|
1.0
|
APSIM Next Gen Site showing broken links - The contribute section has broken links to source tree and CLI pages.
|
non_defect
|
apsim next gen site showing broken links the contribute section has broken links to source tree and cli pages
| 0
|
80,694
| 30,493,791,779
|
IssuesEvent
|
2023-07-18 09:26:59
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
closed
|
BUG: pip download does not download dependencies
|
defect
|
### Describe your issue.
When you use pip download scipy on one machine and then use the downloaded files to install it on another machine, it fails because packages are missing (meson-python with a specific version, patchelf with a minimal version...).
It requires the user to download dependencies one by one after hitting the errors, which is a pain.
### Reproducing Code Example
```python
requirements.txt containing scipy and other things
python3 -m pip download -r requirements.txt --trusted-host=pypi.org --trusted-host=files.pythonhosted.org --no-cache-dir
ssh on a machine with no internet
python3 -m pip install scipy --no-index --find-links=. --upgrade --no-cache-dir
```
### Error message
```shell
ERROR: Could not find a version that satisfies the requirement meson-python<0.13.0,>=0.11.0 (from versions: none)
ERROR: No matching distribution found for meson-python<0.13.0,>=0.11.0
```
### SciPy/NumPy/Python version and system information
```shell
scipy==1.11.1
numpy==1.21.6
python==3.8.17
Same behaviour observed with python=3.9.17
```
|
1.0
|
BUG: pip download does not download dependencies - ### Describe your issue.
When you use pip download scipy on one machine and then use the downloaded files to install it on another machine, it fails because packages are missing (meson-python with a specific version, patchelf with a minimal version...).
It requires the user to download dependencies one by one after hitting the errors, which is a pain.
### Reproducing Code Example
```python
requirements.txt containing scipy and other things
python3 -m pip download -r requirements.txt --trusted-host=pypi.org --trusted-host=files.pythonhosted.org --no-cache-dir
ssh on a machine with no internet
python3 -m pip install scipy --no-index --find-links=. --upgrade --no-cache-dir
```
### Error message
```shell
ERROR: Could not find a version that satisfies the requirement meson-python<0.13.0,>=0.11.0 (from versions: none)
ERROR: No matching distribution found for meson-python<0.13.0,>=0.11.0
```
### SciPy/NumPy/Python version and system information
```shell
scipy==1.11.1
numpy==1.21.6
python==3.8.17
Same behaviour observed with python=3.9.17
```
|
defect
|
bug pip download does not downloads dependencies describe your issue when you use pip download scipy on one machine the use the downloaded files to install it on another machine it fails because packages are missing meson python with specific version patchelf with minimal version it requires user to download dependencies one by one after obtaining the errors which is a pain reproducing code example python requirements txt containing scipy and other things m pip download r requirements txt trusted host pypi org trusted host files pythonhosted org no cache dir ssh on a machine with no internet m pip install scipy no index find links upgrade no cache dir error message shell error could not find a version that satisfies the requirement meson python from versions none error no matching distribution found for meson python scipy numpy python version and system information shell scipy numpy python same behaviour observed with python
| 1
|
45,025
| 7,155,521,413
|
IssuesEvent
|
2018-01-26 13:10:10
|
Capitains/flask-capitains-nemo
|
https://api.github.com/repos/Capitains/flask-capitains-nemo
|
closed
|
heroku deployment documentation
|
documentation
|
If we're going to recommend deployment on heroku, we might want to document some things
to run locally, you need to install these packages:
python-dev
libxml2-dev
libxslt-dev
gunicorn is probably not such a great idea for large CTS repos, as it seems fixed at a 30s timeout. Waitress works a little better (see http://blog.etianen.com/blog/2014/01/19/gunicorn-heroku-django/)
|
1.0
|
heroku deployment documentation - If we're going to recommend deployment on heroku, we might want to document some things
to run locally, you need to install these packages:
python-dev
libxml2-dev
libxslt-dev
gunicorn is probably not such a great idea for large CTS repos, as it seems fixed at a 30s timeout. Waitress works a little better (see http://blog.etianen.com/blog/2014/01/19/gunicorn-heroku-django/)
|
non_defect
|
heroku deployment documentation if we re going to recommend deployment on heroku we might want to document some things to run locally you need to install these packages python dev dev libxslt dev gunicorn is probably not such a great idea for large cts repos as it seems fixed at a timeout waitress works a little better see
| 0
|
72,562
| 24,183,115,942
|
IssuesEvent
|
2022-09-23 10:47:18
|
matrix-org/synapse
|
https://api.github.com/repos/matrix-org/synapse
|
closed
|
Faster joins: incoming federation requests during resync may be incorrectly rejected
|
A-Federation A-Federated-Join T-Defect
|
If a remote server sends us a federation request (such as `get_missing_events`, or requesting a space summary) during a state resync, we may well reject that request due to not thinking the remote server is in the room.
Possibly we need to define some return code to say "sorry, can't auth you right now".
|
1.0
|
Faster joins: incoming federation requests during resync may be incorrectly rejected - If a remote server sends us a federation request (such as `get_missing_events`, or requesting a space summary) during a state resync, we may well reject that request due to not thinking the remote server is in the room.
Possibly we need to define some return code to say "sorry, can't auth you right now".
|
defect
|
faster joins incoming federation requests during resync may be incorrectly rejected if a remote server sends us a federation request such as get missing events or requesting a space summary during a state resync we may well reject that request due to not thinking the remote server is in the room possibly we need to define some return code to say sorry can t auth you right now
| 1
|
182,062
| 14,100,482,006
|
IssuesEvent
|
2020-11-06 04:19:13
|
mono/mono
|
https://api.github.com/repos/mono/mono
|
closed
|
[netcore] Make System.Reflection.Tests.TypeInfoTests.GetMethod Pass
|
area-netcore: CoreLib epic: CoreFX tests
|
Similar to #15029
This is a tracking issue.
|
1.0
|
[netcore] Make System.Reflection.Tests.TypeInfoTests.GetMethod Pass - Similar to #15029
This is a tracking issue.
|
non_defect
|
make system reflection tests typeinfotests getmethod pass similar to this is a tracking issue
| 0
|
76,195
| 26,290,204,136
|
IssuesEvent
|
2023-01-08 10:08:15
|
Default-Cube-Studios/Movement-Template
|
https://api.github.com/repos/Default-Cube-Studios/Movement-Template
|
closed
|
External collider
|
Defect Redo
|
In its current state, collision is detected and managed by the root Player game object, which clutters scripts, and makes collision detection much less variable. By having a separate game object manage collisions, gameplay can feel more dynamic as the collider can be dynamically resized.
|
1.0
|
External collider - In its current state, collision is detected and managed by the root Player game object, which clutters scripts, and makes collision detection much less variable. By having a separate game object manage collisions, gameplay can feel more dynamic as the collider can be dynamically resized.
|
defect
|
external collider in its current state collision is detected and managed by the root player game object which clutters scripts and makes collision detection much less variable by having a separate game object manage collisions gameplay can feel more dynamic as the collider can be dynamically resized
| 1
|
74,724
| 25,282,446,624
|
IssuesEvent
|
2022-11-16 16:41:30
|
WongNung/WongNung
|
https://api.github.com/repos/WongNung/WongNung
|
closed
|
[CC36] Feed getting review doesn't include type hint
|
Defect
|
### Description of Defect
<!-- Type out your description for defect. -->
`review` variable should be type hinted as `Optional[Review]`
https://github.com/WongNung/WongNung/blob/c1e00be407edb76c97eb0cc706c1f364ca12001a/wongnung/views/feed.py#L26
|
1.0
|
[CC36] Feed getting review doesn't include type hint - ### Description of Defect
<!-- Type out your description for defect. -->
`review` variable should be type hinted as `Optional[Review]`
https://github.com/WongNung/WongNung/blob/c1e00be407edb76c97eb0cc706c1f364ca12001a/wongnung/views/feed.py#L26
|
defect
|
feed getting review doesn t include type hint description of defect review variable should be type hinted as optional
| 1
|
73,457
| 14,076,260,690
|
IssuesEvent
|
2020-11-04 10:13:37
|
godweiyang/godweiyang.github.io
|
https://api.github.com/repos/godweiyang/godweiyang.github.io
|
opened
|
[Daily Algorithm Day 69] Classic Interview Problem: Candy Distribution | Weiyang's Blog
|
2020/03/14/leetcode-135/ Gitalk
|
https://godweiyang.com/2020/03/14/leetcode-135/
Follow the official account [算法码上来] for daily algorithm content!
Problem link: LeetCode 135. Candy
Problem description: A teacher wants to distribute candies to children. There are $N$ children standing in a line, and the teacher pre-rates each child based on their performance.
You need to help the teacher according to the following requirements
|
1.0
|
[Daily Algorithm Day 69] Classic Interview Problem: Candy Distribution | Weiyang's Blog - https://godweiyang.com/2020/03/14/leetcode-135/
Follow the official account [算法码上来] for daily algorithm content!
Problem link: LeetCode 135. Candy
Problem description: A teacher wants to distribute candies to children. There are $N$ children standing in a line, and the teacher pre-rates each child based on their performance.
You need to help the teacher according to the following requirements
|
non_defect
|
【每日算法day 】面试经典题:分发糖果问题 韦阳的博客 关注公众号【算法码上来】,每日算法干货马上就来! 题目链接leetcode 分发糖果 题目描述老师想给孩子们分发糖果,有 n 个孩子站成了一条直线,老师会根据每个孩子的表现,预先给他们评分。 你需要按照以下要求,帮助老师
| 0
|
509,416
| 14,730,360,496
|
IssuesEvent
|
2021-01-06 13:05:12
|
kubeflow/website
|
https://api.github.com/repos/kubeflow/website
|
closed
|
Why are pipelines docs not under components?
|
area/docs area/pipelines kind/feature lifecycle/stale priority/p1
|
The pipelines docs appear to be split
There is a top level page for pipelines and then there is a page under components of kubeflow?
Could we move all of the pipelines docs under components of Kubeflow so that it is more consistent with how we handle the other applications in Kubeflow?
/assign @joeliedtke
/cc @Bobgy
|
1.0
|
Why are pipelines docs not under components? - The pipelines docs appear to be split
There is a top level page for pipelines and then there is a page under components of kubeflow?
Could we move all of the pipelines docs under components of Kubeflow so that it is more consistent with how we handle the other applications in Kubeflow?
/assign @joeliedtke
/cc @Bobgy
|
non_defect
|
why are pipelines docs not under components the pipelines docs appear to be split there is a top level page for pipelines and then there is a page under components of kubeflow could we move all of the pipelines docs under components of kubeflow so that it is more consistent with how we handle the other applications in kubeflow assign joeliedtke cc bobgy
| 0
|
12,051
| 2,678,671,836
|
IssuesEvent
|
2015-03-26 12:34:26
|
aleksabl/jaxb-fluent-api-ext
|
https://api.github.com/repos/aleksabl/jaxb-fluent-api-ext
|
closed
|
Create withNew methods for choices
|
auto-migrated Priority-Medium Type-Defect
|
```
When we have some structure like this:
* <pre>
* <complexType>
* <complexContent>
* <restriction base="{http://www.w3.org/2001/XMLSchema}anyType">
* <choice maxOccurs="unbounded">
* <group ref="{http://www.w3.org/1999/XSL/Format}marker_List"/>
* <group ref="{http://www.w3.org/1999/XSL/Format}block_List"/>
* <group ref="{http://www.w3.org/1999/XSL/Format}neutral_List"/>
* <group ref="{http://www.w3.org/1999/XSL/Format}float_List"/>
* <group ref="{http://www.w3.org/1999/XSL/Format}footnote_List"/>
* </choice>
* <attGroup ref="{http://www.w3.org/1999/XSL/Format}inheritable_properties_List"/>
* <attribute name="flow-name" use="required" type="{http://www.w3.org/2001/XMLSchema}string" />
* </restriction>
* </complexContent>
* </complexType>
* </pre>
JAXB generates this structure:
@XmlElements({
@XmlElement(name = "block", type = Block.class),
@XmlElement(name = "multi-properties", type = MultiProperties.class),
@XmlElement(name = "wrapper", type = Wrapper.class),
@XmlElement(name = "retrieve-marker", type = RetrieveMarker.class),
@XmlElement(name = "footnote", type = Footnote.class),
@XmlElement(name = "marker", type = Marker.class),
@XmlElement(name = "float", type = Float.class),
@XmlElement(name = "multi-switch", type = MultiSwitch.class),
@XmlElement(name = "list-block", type = ListBlock.class),
@XmlElement(name = "table", type = Table.class),
@XmlElement(name = "block-container", type = BlockContainer.class),
@XmlElement(name = "table-and-caption", type = TableAndCaption.class)
})
protected List<Object> markerOrBlockOrBlockContainer;
I would be very very useful to have one withNew method for each choice, in
order to add new elements to markerOrBlockOrBlockContainer and hide this
List<Object> type.
Best regards,
Ricardo
```
Original issue reported on code.google.com by `bori...@gmail.com` on 11 Jun 2011 at 4:48
Attachments:
* [Flow.java](https://storage.googleapis.com/google-code-attachments/jaxb-fluent-api-ext/issue-7/comment-0/Flow.java)
|
1.0
|
Create withNew methods for choices - ```
When we have some structure like this:
* <pre>
* <complexType>
* <complexContent>
* <restriction base="{http://www.w3.org/2001/XMLSchema}anyType">
* <choice maxOccurs="unbounded">
* <group ref="{http://www.w3.org/1999/XSL/Format}marker_List"/>
* <group ref="{http://www.w3.org/1999/XSL/Format}block_List"/>
* <group ref="{http://www.w3.org/1999/XSL/Format}neutral_List"/>
* <group ref="{http://www.w3.org/1999/XSL/Format}float_List"/>
* <group ref="{http://www.w3.org/1999/XSL/Format}footnote_List"/>
* </choice>
* <attGroup ref="{http://www.w3.org/1999/XSL/Format}inheritable_properties_List"/>
* <attribute name="flow-name" use="required" type="{http://www.w3.org/2001/XMLSchema}string" />
* </restriction>
* </complexContent>
* </complexType>
* </pre>
JAXB generates this structure:
@XmlElements({
@XmlElement(name = "block", type = Block.class),
@XmlElement(name = "multi-properties", type = MultiProperties.class),
@XmlElement(name = "wrapper", type = Wrapper.class),
@XmlElement(name = "retrieve-marker", type = RetrieveMarker.class),
@XmlElement(name = "footnote", type = Footnote.class),
@XmlElement(name = "marker", type = Marker.class),
@XmlElement(name = "float", type = Float.class),
@XmlElement(name = "multi-switch", type = MultiSwitch.class),
@XmlElement(name = "list-block", type = ListBlock.class),
@XmlElement(name = "table", type = Table.class),
@XmlElement(name = "block-container", type = BlockContainer.class),
@XmlElement(name = "table-and-caption", type = TableAndCaption.class)
})
protected List<Object> markerOrBlockOrBlockContainer;
I would be very very useful to have one withNew method for each choice, in
order to add new elements to markerOrBlockOrBlockContainer and hide this
List<Object> type.
Best regards,
Ricardo
```
Original issue reported on code.google.com by `bori...@gmail.com` on 11 Jun 2011 at 4:48
Attachments:
* [Flow.java](https://storage.googleapis.com/google-code-attachments/jaxb-fluent-api-ext/issue-7/comment-0/Flow.java)
|
defect
|
create withnew methods for choices when we have some structure like this lt complextype lt complexcontent lt restriction base lt choice maxoccurs unbounded lt group ref lt group ref lt group ref lt group ref lt group ref lt choice lt attgroup ref lt attribute name flow name use required type lt restriction lt complexcontent lt complextype jaxb generates this structure xmlelements xmlelement name block type block class xmlelement name multi properties type multiproperties class xmlelement name wrapper type wrapper class xmlelement name retrieve marker type retrievemarker class xmlelement name footnote type footnote class xmlelement name marker type marker class xmlelement name float type float class xmlelement name multi switch type multiswitch class xmlelement name list block type listblock class xmlelement name table type table class xmlelement name block container type blockcontainer class xmlelement name table and caption type tableandcaption class protected list markerorblockorblockcontainer i would be very very useful to have one withnew method for each choice in order to add new elements to markerorblockorblockcontainer and hide this list type best regards ricardo original issue reported on code google com by bori gmail com on jun at attachments
| 1
|
57,399
| 15,765,331,278
|
IssuesEvent
|
2021-03-31 14:03:27
|
snowplow/snowplow-javascript-tracker
|
https://api.github.com/repos/snowplow/snowplow-javascript-tracker
|
closed
|
Fix plugins-umd.zip not being upload to GitHub release assets
|
type:defect
|
**Describe the bug**
UMD files for plugins are zipped as `plugins-umd.zip` but we try to upload `plugins.zip` so the plugins are not available in the GitHub release
|
1.0
|
Fix plugins-umd.zip not being upload to GitHub release assets - **Describe the bug**
UMD files for plugins are zipped as `plugins-umd.zip` but we try to upload `plugins.zip` so the plugins are not available in the GitHub release
|
defect
|
fix plugins umd zip not being upload to github release assets describe the bug umd files for plugins are zipped as plugins umd zip but we try to upload plugins zip so the plugins are not available in the github release
| 1
|
126,578
| 17,084,939,242
|
IssuesEvent
|
2021-07-08 10:32:30
|
etsy/open-api
|
https://api.github.com/repos/etsy/open-api
|
opened
|
[ENDPOINT] Proposed design change for createDraftListing
|
API Design
|
**Current Endpoint Design**
_What in the current endpoint design do you think is missing or find confusing?_
The `createDraftListing` endpoint already accepts `is_personalizable`, but misses other optional settings.
I'd like to see this extended to match the available options on the shop manager GUI.
**Proposed Endpoint Design Change**
_Add the following fields to the API endpoint:_
- Required
- Max length
- Personalization message
**Why are you proposing this change?**
_Please provide a reason for the change you're proposing._
In my case, we have different listings with different available personalization options. It's not always immediately clear to the customer what on the product can be changed. Allowing us automatically import the help texts from our data sources would dramatically reduce the time needed to get new listings live and the customer would benefit from a better journey. Also helps to get products shipped as fast as possible, without the need to get in touch with the customer to fix potential unfulfillable personalization requests.
|
1.0
|
[ENDPOINT] Proposed design change for createDraftListing - **Current Endpoint Design**
_What in the current endpoint design do you think is missing or find confusing?_
The `createDraftListing` endpoint already accepts `is_personalizable`, but misses other optional settings.
I'd like to see this extended to match the available options on the shop manager GUI.
**Proposed Endpoint Design Change**
_Add the following fields to the API endpoint:_
- Required
- Max length
- Personalization message
**Why are you proposing this change?**
_Please provide a reason for the change you're proposing._
In my case, we have different listings with different available personalization options. It's not always immediately clear to the customer what on the product can be changed. Allowing us automatically import the help texts from our data sources would dramatically reduce the time needed to get new listings live and the customer would benefit from a better journey. Also helps to get products shipped as fast as possible, without the need to get in touch with the customer to fix potential unfulfillable personalization requests.
|
non_defect
|
proposed design change for createdraftlisting current endpoint design what in the current endpoint design do you think is missing or find confusing the createdraftlisting endpoint already accepts is personalizable but misses other optional settings i d like to see this extended to match the available options on the shop manager gui proposed endpoint design change add the following fields to the api endpoint required max length personalization message why are you proposing this change please provide a reason for the change you re proposing in my case we have different listings with different available personalization options it s not always immediately clear to the customer what on the product can be changed allowing us automatically import the help texts from our data sources would dramatically reduce the time needed to get new listings live and the customer would benefit from a better journey also helps to get products shipped as fast as possible without the need to get in touch with the customer to fix potential unfulfillable personalization requests
| 0
|
71,864
| 15,209,863,077
|
IssuesEvent
|
2021-02-17 06:18:25
|
YJSoft/syntaxhighlighter
|
https://api.github.com/repos/YJSoft/syntaxhighlighter
|
opened
|
CVE-2020-7729 (High) detected in librejslibrejs-7.19
|
security vulnerability
|
## CVE-2020-7729 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>librejslibrejs-7.19</b></p></summary>
<p>
<p>Gnu Distributions</p>
<p>Library home page: <a href=https://ftp.gnu.org/gnu/librejs?wsslib=librejs>https://ftp.gnu.org/gnu/librejs?wsslib=librejs</a></p>
<p>Found in HEAD commit: <a href="https://github.com/YJSoft/syntaxhighlighter/commit/7161194500204a098f69d41f3418e91d5ff7cbb7">7161194500204a098f69d41f3418e91d5ff7cbb7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>syntaxhighlighter/node_modules/grunt-phplint/node_modules/grunt/lib/grunt/file.js</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>syntaxhighlighter/node_modules/grunt-phplint/node_modules/grunt/lib/grunt/file.js</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package grunt before 1.3.0 are vulnerable to Arbitrary Code Execution due to the default usage of the function load() instead of its secure replacement safeLoad() of the package js-yaml inside grunt.file.readYAML.
<p>Publish Date: 2020-09-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7729>CVE-2020-7729</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7729">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7729</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-7729 (High) detected in librejslibrejs-7.19 - ## CVE-2020-7729 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>librejslibrejs-7.19</b></p></summary>
<p>
<p>Gnu Distributions</p>
<p>Library home page: <a href=https://ftp.gnu.org/gnu/librejs?wsslib=librejs>https://ftp.gnu.org/gnu/librejs?wsslib=librejs</a></p>
<p>Found in HEAD commit: <a href="https://github.com/YJSoft/syntaxhighlighter/commit/7161194500204a098f69d41f3418e91d5ff7cbb7">7161194500204a098f69d41f3418e91d5ff7cbb7</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>syntaxhighlighter/node_modules/grunt-phplint/node_modules/grunt/lib/grunt/file.js</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>syntaxhighlighter/node_modules/grunt-phplint/node_modules/grunt/lib/grunt/file.js</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package grunt before 1.3.0 are vulnerable to Arbitrary Code Execution due to the default usage of the function load() instead of its secure replacement safeLoad() of the package js-yaml inside grunt.file.readYAML.
<p>Publish Date: 2020-09-03
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7729>CVE-2020-7729</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7729">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7729</a></p>
<p>Release Date: 2020-07-21</p>
<p>Fix Resolution: 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in librejslibrejs cve high severity vulnerability vulnerable library librejslibrejs gnu distributions library home page a href found in head commit a href found in base branch master vulnerable source files syntaxhighlighter node modules grunt phplint node modules grunt lib grunt file js syntaxhighlighter node modules grunt phplint node modules grunt lib grunt file js vulnerability details the package grunt before are vulnerable to arbitrary code execution due to the default usage of the function load instead of its secure replacement safeload of the package js yaml inside grunt file readyaml publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
3,886
| 2,610,083,472
|
IssuesEvent
|
2015-02-26 18:25:30
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
深圳熬夜长痘痘怎么办
|
auto-migrated Priority-Medium Type-Defect
|
```
深圳熬夜长痘痘怎么办【深圳韩方科颜全国热线400-869-1818,24
小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩��
�秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,�
��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹
”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内��
�业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上�
��痘痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 6:49
|
1.0
|
深圳熬夜长痘痘怎么办 - ```
深圳熬夜长痘痘怎么办【深圳韩方科颜全国热线400-869-1818,24
小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩��
�秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,�
��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹
”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内��
�业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上�
��痘痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 6:49
|
defect
|
深圳熬夜长痘痘怎么办 深圳熬夜长痘痘怎么办【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 original issue reported on code google com by szft com on may at
| 1
|
63,725
| 17,871,249,330
|
IssuesEvent
|
2021-09-06 15:50:56
|
hazelcast/hazelcast-cpp-client
|
https://api.github.com/repos/hazelcast/hazelcast-cpp-client
|
opened
|
Member codec missing version check for field `address_map`
|
Type: Defect
|
`address_map` field was added after protocol version 2.0 and hence the custom codec should check against this. See Java [codec](https://github.com/hazelcast/hazelcast/blob/v4.2.2/hazelcast/src/main/java/com/hazelcast/client/impl/protocol/codec/custom/MemberInfoCodec.java#L65).
This is not a problem when working with latest patch server version [4.0.3 since that server](https://github.com/hazelcast/hazelcast/blob/v4.0.3/hazelcast/src/main/java/com/hazelcast/client/impl/protocol/codec/custom/MemberInfoCodec.java#L47) encodes the field.
|
1.0
|
Member codec missing version check for field `address_map` - `address_map` field was added after protocol version 2.0 and hence the custom codec should check against this. See Java [codec](https://github.com/hazelcast/hazelcast/blob/v4.2.2/hazelcast/src/main/java/com/hazelcast/client/impl/protocol/codec/custom/MemberInfoCodec.java#L65).
This is not a problem when working with latest patch server version [4.0.3 since that server](https://github.com/hazelcast/hazelcast/blob/v4.0.3/hazelcast/src/main/java/com/hazelcast/client/impl/protocol/codec/custom/MemberInfoCodec.java#L47) encodes the field.
|
defect
|
member codec missing version check for field address map address map field was added after protocol version and hence the custom codec should check against this see java this is not a problem when working with latest patch server version encodes the field
| 1
|
80,779
| 30,523,686,634
|
IssuesEvent
|
2023-07-19 09:45:43
|
vector-im/element-web
|
https://api.github.com/repos/vector-im/element-web
|
closed
|
Threads filter does the opposite of what you select
|
T-Defect S-Minor O-Uncommon A-Threads Z-MadLittleMods
|
### Steps to reproduce
1. Open the threads list
2. Select 'my threads' from the dropdown
3. Observe that the list still shows all threads
4. Select 'all threads' from the dropdown
5. Observe that the list now shows only threads you have participated in
### Outcome
#### What did you expect?
'My threads' should show my threads, and 'all threads' should show all threads
#### What happened instead?
'My threads' shows all threads, and 'all threads' shows my threads
### Operating system
NixOS unstable
### Browser information
Firefox 107.0.1
### URL for webapp
develop.element.io
### Application version
Element version: a7124c935ece-react-8869374c6bfb-js-447319737a05 Olm version: 3.2.12
### Homeserver
Synapse 1.72.0
### Will you send logs?
No
|
1.0
|
Threads filter does the opposite of what you select - ### Steps to reproduce
1. Open the threads list
2. Select 'my threads' from the dropdown
3. Observe that the list still shows all threads
4. Select 'all threads' from the dropdown
5. Observe that the list now shows only threads you have participated in
### Outcome
#### What did you expect?
'My threads' should show my threads, and 'all threads' should show all threads
#### What happened instead?
'My threads' shows all threads, and 'all threads' shows my threads
### Operating system
NixOS unstable
### Browser information
Firefox 107.0.1
### URL for webapp
develop.element.io
### Application version
Element version: a7124c935ece-react-8869374c6bfb-js-447319737a05 Olm version: 3.2.12
### Homeserver
Synapse 1.72.0
### Will you send logs?
No
|
defect
|
threads filter does the opposite of what you select steps to reproduce open the threads list select my threads from the dropdown observe that the list still shows all threads select all threads from the dropdown observe that the list now shows only threads you have participated in outcome what did you expect my threads should show my threads and all threads should show all threads what happened instead my threads shows all threads and all threads shows my threads operating system nixos unstable browser information firefox url for webapp develop element io application version element version react js olm version homeserver synapse will you send logs no
| 1
|
34,569
| 7,457,387,809
|
IssuesEvent
|
2018-03-30 03:57:56
|
kerdokullamae/test_koik_issued
|
https://api.github.com/repos/kerdokullamae/test_koik_issued
|
closed
|
KÜ muutmisteenused annavad vea
|
C: AIS P: highest R: fixed T: defect
|
**Reported by sven syld on 24 Mar 2014 08:00 UTC**
Tarvo Kärberg @ 17.03:
Erik katsetas saadetud näidete abil teenust ja sai allolevad tulemused. Punktis 1 on vastuseks veateade. Punktis 2 vastust ei antud, kuid AISis päringu teel saadetud muudatusi näha ka ei olnud. Milles võib probleem olla, kuidas need teenused toimima saada?
1.
```
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns="http://ais.ra.ee/schemas/descriptionUnit" xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/07/secext"
xmlns:wsu="http://schemas.xmlsoap.org/ws/2002/07/utility">
<SOAP-ENV:Body>
<setRequest>
<descriptionUnit>
<id>331522</id>
<referencesReferences>
<reference>
<referenceTypeId>KAART_ID</referenceTypeId>
<referenceValue>EAA.1.2.C-I-1</referenceValue>
<name>yyy</name>
</reference>
</referencesReferences>
</descriptionUnit>
</setRequest>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Response:
faultcode=SOAP-ENV:Server
faultstring=Call to undefined function Dira\DescriptionUnitBundle\Service\pre()
```
2.
```
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns="http://ais.ra.ee/schemas/descriptionUnit" xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/07/secext"
xmlns:wsu="http://schemas.xmlsoap.org/ws/2002/07/utility">
<SOAP-ENV:Body>
<setRequest>
<descriptionUnit>
<id>331522</id>
<descriptionUnitMetas>
<descriptionUnitMeta>
<xMetadataGroupId>MAP</xMetadataGroupId>
<xMetadataId>DIMENSIONS</xMetadataId>
<value>49x34/A2</value>
</descriptionUnitMeta>
</descriptionUnitMetas>
</descriptionUnit>
</setRequest>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
No response.
```
|
1.0
|
KÜ muutmisteenused annavad vea - **Reported by sven syld on 24 Mar 2014 08:00 UTC**
Tarvo Kärberg @ 17.03:
Erik katsetas saadetud näidete abil teenust ja sai allolevad tulemused. Punktis 1 on vastuseks veateade. Punktis 2 vastust ei antud, kuid AISis päringu teel saadetud muudatusi näha ka ei olnud. Milles võib probleem olla, kuidas need teenused toimima saada?
1.
```
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap..org/soap/envelope/"
xmlns="http://ais.ra.ee/schemas/descriptionUnit" xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/07/secext"
xmlns:wsu="http://schemas.xmlsoap.org/ws/2002/07/utility">
<SOAP-ENV:Body>
<setRequest>
<descriptionUnit>
<id>331522</id>
<referencesReferences>
<reference>
<referenceTypeId>KAART_ID</referenceTypeId>
<referenceValue>EAA.1.2.C-I-1</referenceValue>
<name>yyy</name>
</reference>
</referencesReferences>
</descriptionUnit>
</setRequest>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Response:
faultcode=SOAP-ENV:Server
faultstring=Call to undefined function Dira\DescriptionUnitBundle\Service\pre()
```
2.
```
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns="http://ais.ra.ee/schemas/descriptionUnit" xmlns:wsse="http://schemas.xmlsoap.org/ws/2002/07/secext"
xmlns:wsu="http://schemas.xmlsoap.org/ws/2002/07/utility">
<SOAP-ENV:Body>
<setRequest>
<descriptionUnit>
<id>331522</id>
<descriptionUnitMetas>
<descriptionUnitMeta>
<xMetadataGroupId>MAP</xMetadataGroupId>
<xMetadataId>DIMENSIONS</xMetadataId>
<value>49x34/A2</value>
</descriptionUnitMeta>
</descriptionUnitMetas>
</descriptionUnit>
</setRequest>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
No response.
```
|
defect
|
kü muutmisteenused annavad vea reported by sven syld on mar utc tarvo kärberg erik katsetas saadetud näidete abil teenust ja sai allolevad tulemused punktis on vastuseks veateade punktis vastust ei antud kuid aisis päringu teel saadetud muudatusi näha ka ei olnud milles võib probleem olla kuidas need teenused toimima saada soap env envelope xmlns soap env xmlns xmlns wsse xmlns wsu kaart id eaa c i yyy response faultcode soap env server faultstring call to undefined function dira descriptionunitbundle service pre soap env envelope xmlns soap env xmlns xmlns wsse xmlns wsu map dimensions no response
| 1
|
61,872
| 14,643,034,113
|
IssuesEvent
|
2020-12-25 14:14:26
|
fu1771695yongxie/freeCodeCamp
|
https://api.github.com/repos/fu1771695yongxie/freeCodeCamp
|
opened
|
CVE-2020-8203 (High) detected in lodash-1.3.1.tgz
|
security vulnerability
|
## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-1.3.1.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.3.1.tgz">https://registry.npmjs.org/lodash/-/lodash-1.3.1.tgz</a></p>
<p>Path to dependency file: freeCodeCamp/tools/contributor/lib/package.json</p>
<p>Path to vulnerable library: freeCodeCamp/tools/contributor/lib/node_modules/travis-ci/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- travis-ci-2.2.0.tgz (Root Library)
- :x: **lodash-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/freeCodeCamp/commit/94f16dd247ad5d29a6c8a99c82d0c620274be868">94f16dd247ad5d29a6c8a99c82d0c620274be868</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-8203 (High) detected in lodash-1.3.1.tgz - ## CVE-2020-8203 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-1.3.1.tgz</b></p></summary>
<p>A utility library delivering consistency, customization, performance, and extras.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-1.3.1.tgz">https://registry.npmjs.org/lodash/-/lodash-1.3.1.tgz</a></p>
<p>Path to dependency file: freeCodeCamp/tools/contributor/lib/package.json</p>
<p>Path to vulnerable library: freeCodeCamp/tools/contributor/lib/node_modules/travis-ci/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- travis-ci-2.2.0.tgz (Root Library)
- :x: **lodash-1.3.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fu1771695yongxie/freeCodeCamp/commit/94f16dd247ad5d29a6c8a99c82d0c620274be868">94f16dd247ad5d29a6c8a99c82d0c620274be868</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prototype pollution attack when using _.zipObjectDeep in lodash before 4.17.20.
<p>Publish Date: 2020-07-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8203>CVE-2020-8203</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.npmjs.com/advisories/1523">https://www.npmjs.com/advisories/1523</a></p>
<p>Release Date: 2020-07-23</p>
<p>Fix Resolution: lodash - 4.17.19</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in lodash tgz cve high severity vulnerability vulnerable library lodash tgz a utility library delivering consistency customization performance and extras library home page a href path to dependency file freecodecamp tools contributor lib package json path to vulnerable library freecodecamp tools contributor lib node modules travis ci node modules lodash package json dependency hierarchy travis ci tgz root library x lodash tgz vulnerable library found in head commit a href vulnerability details prototype pollution attack when using zipobjectdeep in lodash before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash step up your open source security game with whitesource
| 0
|
11,591
| 17,498,757,509
|
IssuesEvent
|
2021-08-10 06:33:49
|
HonkingGoose/throwaway-renovate-form-migration
|
https://api.github.com/repos/HonkingGoose/throwaway-renovate-form-migration
|
opened
|
test of upstream renovate bug report
|
type:bug status:requirements priority-5-triage
|
### How are you running Renovate?
WhiteSource Renovate hosted app on github.com
### Please select which platform you are using if self-hosting.
_No response_
### If you're self-hosting Renovate, tell us what version of Renovate you run.
25.70.2
### Describe the bug
Lorem ipsum
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste any log here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
No reproduction repository
|
1.0
|
test of upstream renovate bug report - ### How are you running Renovate?
WhiteSource Renovate hosted app on github.com
### Please select which platform you are using if self-hosting.
_No response_
### If you're self-hosting Renovate, tell us what version of Renovate you run.
25.70.2
### Describe the bug
Lorem ipsum
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste any log here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
No reproduction repository
|
non_defect
|
test of upstream renovate bug report how are you running renovate whitesource renovate hosted app on github com please select which platform you are using if self hosting no response if you re self hosting renovate tell us what version of renovate you run describe the bug lorem ipsum relevant debug logs logs copy paste any log here between the starting and ending backticks have you created a minimal reproduction repository no reproduction repository
| 0
|
47,563
| 13,056,247,020
|
IssuesEvent
|
2020-07-30 04:06:52
|
icecube-trac/tix2
|
https://api.github.com/repos/icecube-trac/tix2
|
closed
|
double-muon::doublefit_10par.py (Trac #753)
|
Migrated from Trac combo reconstruction defect
|
1) fails to converge
2) looks like maybe there is an error in logging too
```text
INFO (Python): CHECK(10): twomuT1SPE status=OK zenith=39.98221966718791 deg, azimuth=252.47838314181124 deg; nch=30 (doublefit_10par.py:290 in checkdoublefits)
FATAL (Python): event 10 fit=twomuT1SPE: got status=OK, expected FailedToConverge (doublefit_10par.py:314 in checkdoublefits)
ERROR (PythonFunction): Error running python function as module: (PythonFunction.cxx:173 in virtual void PythonFunction::Process())
ERROR (I3Module): <function checkdoublefits at 0xa874924>_0000: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)()))
Traceback (most recent call last):
File "/build/buildslave/foraii/quick_icerec_SL5/source/double-muon/resources/scripts/doublefit_10par.py", line 351, in ?
tray.Execute()
File "/build/buildslave/foraii/quick_icerec_SL5/build/lib/I3Tray.py", line 231, in Execute
super(I3Tray, self).Execute()
File "/build/buildslave/foraii/quick_icerec_SL5/source/double-muon/resources/scripts/doublefit_10par.py", line 314, in checkdoublefits
icetray.logging.log_fatal("event %d fit=%s: got status=%s, expected %s" % \
File "/build/buildslave/foraii/quick_icerec_SL5/build/lib/icecube/icetray/i3logging.py", line 150, in log_fatal
raise RuntimeError(message + " (in " + tb[2] + ")")
RuntimeError: event 10 fit=twomuT1SPE: got status=OK, expected FailedToConverge (in checkdoublefits)
NOTICE (I3Tray): I3Tray finishing... (I3Tray.cxx:533 in void I3Tray::Finish())
```
Migrated from https://code.icecube.wisc.edu/ticket/753
```json
{
"status": "closed",
"changetime": "2015-04-08T09:03:37",
"description": "1) fails to converge\n2) looks like maybe there is an error in logging too\n\n{{{\nINFO (Python): CHECK(10): twomuT1SPE status=OK zenith=39.98221966718791 deg, azimuth=252.47838314181124 deg; nch=30 (doublefit_10par.py:290 in checkdoublefits)\nFATAL (Python): event 10 fit=twomuT1SPE: got status=OK, expected FailedToConverge (doublefit_10par.py:314 in checkdoublefits)\nERROR (PythonFunction): Error running python function as module: (PythonFunction.cxx:173 in virtual void PythonFunction::Process())\nERROR (I3Module): <function checkdoublefits at 0xa874924>_0000: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/double-muon/resources/scripts/doublefit_10par.py\", line 351, in ?\n tray.Execute()\n File \"/build/buildslave/foraii/quick_icerec_SL5/build/lib/I3Tray.py\", line 231, in Execute\n super(I3Tray, self).Execute()\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/double-muon/resources/scripts/doublefit_10par.py\", line 314, in checkdoublefits\n icetray.logging.log_fatal(\"event %d fit=%s: got status=%s, expected %s\" % \\\n File \"/build/buildslave/foraii/quick_icerec_SL5/build/lib/icecube/icetray/i3logging.py\", line 150, in log_fatal\n raise RuntimeError(message + \" (in \" + tb[2] + \")\")\nRuntimeError: event 10 fit=twomuT1SPE: got status=OK, expected FailedToConverge (in checkdoublefits)\nNOTICE (I3Tray): I3Tray finishing... (I3Tray.cxx:533 in void I3Tray::Finish())\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "wontfix",
"_ts": "1428483817721454",
"component": "combo reconstruction",
"summary": "double-muon::doublefit_10par.py",
"priority": "normal",
"keywords": "double_muon tests logging",
"time": "2014-09-05T21:44:28",
"milestone": "",
"owner": "meike.dewith",
"type": "defect"
}
```
|
1.0
|
double-muon::doublefit_10par.py (Trac #753) - 1) fails to converge
2) looks like maybe there is an error in logging too
```text
INFO (Python): CHECK(10): twomuT1SPE status=OK zenith=39.98221966718791 deg, azimuth=252.47838314181124 deg; nch=30 (doublefit_10par.py:290 in checkdoublefits)
FATAL (Python): event 10 fit=twomuT1SPE: got status=OK, expected FailedToConverge (doublefit_10par.py:314 in checkdoublefits)
ERROR (PythonFunction): Error running python function as module: (PythonFunction.cxx:173 in virtual void PythonFunction::Process())
ERROR (I3Module): <function checkdoublefits at 0xa874924>_0000: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)()))
Traceback (most recent call last):
File "/build/buildslave/foraii/quick_icerec_SL5/source/double-muon/resources/scripts/doublefit_10par.py", line 351, in ?
tray.Execute()
File "/build/buildslave/foraii/quick_icerec_SL5/build/lib/I3Tray.py", line 231, in Execute
super(I3Tray, self).Execute()
File "/build/buildslave/foraii/quick_icerec_SL5/source/double-muon/resources/scripts/doublefit_10par.py", line 314, in checkdoublefits
icetray.logging.log_fatal("event %d fit=%s: got status=%s, expected %s" % \
File "/build/buildslave/foraii/quick_icerec_SL5/build/lib/icecube/icetray/i3logging.py", line 150, in log_fatal
raise RuntimeError(message + " (in " + tb[2] + ")")
RuntimeError: event 10 fit=twomuT1SPE: got status=OK, expected FailedToConverge (in checkdoublefits)
NOTICE (I3Tray): I3Tray finishing... (I3Tray.cxx:533 in void I3Tray::Finish())
```
Migrated from https://code.icecube.wisc.edu/ticket/753
```json
{
"status": "closed",
"changetime": "2015-04-08T09:03:37",
"description": "1) fails to converge\n2) looks like maybe there is an error in logging too\n\n{{{\nINFO (Python): CHECK(10): twomuT1SPE status=OK zenith=39.98221966718791 deg, azimuth=252.47838314181124 deg; nch=30 (doublefit_10par.py:290 in checkdoublefits)\nFATAL (Python): event 10 fit=twomuT1SPE: got status=OK, expected FailedToConverge (doublefit_10par.py:314 in checkdoublefits)\nERROR (PythonFunction): Error running python function as module: (PythonFunction.cxx:173 in virtual void PythonFunction::Process())\nERROR (I3Module): <function checkdoublefits at 0xa874924>_0000: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/double-muon/resources/scripts/doublefit_10par.py\", line 351, in ?\n tray.Execute()\n File \"/build/buildslave/foraii/quick_icerec_SL5/build/lib/I3Tray.py\", line 231, in Execute\n super(I3Tray, self).Execute()\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/double-muon/resources/scripts/doublefit_10par.py\", line 314, in checkdoublefits\n icetray.logging.log_fatal(\"event %d fit=%s: got status=%s, expected %s\" % \\\n File \"/build/buildslave/foraii/quick_icerec_SL5/build/lib/icecube/icetray/i3logging.py\", line 150, in log_fatal\n raise RuntimeError(message + \" (in \" + tb[2] + \")\")\nRuntimeError: event 10 fit=twomuT1SPE: got status=OK, expected FailedToConverge (in checkdoublefits)\nNOTICE (I3Tray): I3Tray finishing... (I3Tray.cxx:533 in void I3Tray::Finish())\n}}}",
"reporter": "nega",
"cc": "",
"resolution": "wontfix",
"_ts": "1428483817721454",
"component": "combo reconstruction",
"summary": "double-muon::doublefit_10par.py",
"priority": "normal",
"keywords": "double_muon tests logging",
"time": "2014-09-05T21:44:28",
"milestone": "",
"owner": "meike.dewith",
"type": "defect"
}
```
|
defect
|
double muon doublefit py trac fails to converge looks like maybe there is an error in logging too text info python check status ok zenith deg azimuth deg nch doublefit py in checkdoublefits fatal python event fit got status ok expected failedtoconverge doublefit py in checkdoublefits error pythonfunction error running python function as module pythonfunction cxx in virtual void pythonfunction process error exception thrown cxx in void do void traceback most recent call last file build buildslave foraii quick icerec source double muon resources scripts doublefit py line in tray execute file build buildslave foraii quick icerec build lib py line in execute super self execute file build buildslave foraii quick icerec source double muon resources scripts doublefit py line in checkdoublefits icetray logging log fatal event d fit s got status s expected s file build buildslave foraii quick icerec build lib icecube icetray py line in log fatal raise runtimeerror message in tb runtimeerror event fit got status ok expected failedtoconverge in checkdoublefits notice finishing cxx in void finish migrated from json status closed changetime description fails to converge looks like maybe there is an error in logging too n n ninfo python check status ok zenith deg azimuth deg nch doublefit py in checkdoublefits nfatal python event fit got status ok expected failedtoconverge doublefit py in checkdoublefits nerror pythonfunction error running python function as module pythonfunction cxx in virtual void pythonfunction process nerror exception thrown cxx in void do void ntraceback most recent call last n file build buildslave foraii quick icerec source double muon resources scripts doublefit py line in n tray execute n file build buildslave foraii quick icerec build lib py line in execute n super self execute n file build buildslave foraii quick icerec source double muon resources scripts doublefit py line in checkdoublefits n icetray logging log fatal event d fit s got status s expected s n file build buildslave foraii quick icerec build lib icecube icetray py line in log fatal n raise runtimeerror message in tb nruntimeerror event fit got status ok expected failedtoconverge in checkdoublefits nnotice finishing cxx in void finish n reporter nega cc resolution wontfix ts component combo reconstruction summary double muon doublefit py priority normal keywords double muon tests logging time milestone owner meike dewith type defect
| 1
|
124,024
| 12,223,782,998
|
IssuesEvent
|
2020-05-02 19:11:46
|
srid/neuron
|
https://api.github.com/repos/srid/neuron
|
closed
|
Mention Windows support
|
Windows documentation
|
I tried out neuron on Windows via WSL. This is the only workaround needed: https://github.com/NixOS/nix/issues/2292#issuecomment-443933924
And neuron works fine.
Might want to look into how to install and setup Doom Emacs with neuron-mode.
|
1.0
|
Mention Windows support - I tried out neuron on Windows via WSL. This is the only workaround needed: https://github.com/NixOS/nix/issues/2292#issuecomment-443933924
And neuron works fine.
Might want to look into how to install and setup Doom Emacs with neuron-mode.
|
non_defect
|
mention windows support i tried out neuron on windows via wsl this is the only workaround needed and neuron works fine might want to look into how to install and setup doom emacs with neuron mode
| 0
|
169,574
| 20,841,811,446
|
IssuesEvent
|
2022-03-21 01:35:22
|
turkdevops/quasar
|
https://api.github.com/repos/turkdevops/quasar
|
opened
|
CVE-2022-24772 (High) detected in node-forge-0.10.0.tgz
|
security vulnerability
|
## CVE-2022-24772 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p>
<p>Path to dependency file: /cli/package.json</p>
<p>Path to vulnerable library: /cli/node_modules/node-forge/package.json,/app/node_modules/node-forge/package.json,/ui/node_modules/node-forge/package.json,/docs/node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.11.0.tgz (Root Library)
- selfsigned-1.10.11.tgz
- :x: **node-forge-0.10.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not check for tailing garbage bytes after decoding a `DigestInfo` ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
<p>Publish Date: 2022-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24772>CVE-2022-24772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772</a></p>
<p>Release Date: 2022-03-18</p>
<p>Fix Resolution: node-forge - 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-24772 (High) detected in node-forge-0.10.0.tgz - ## CVE-2022-24772 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-forge-0.10.0.tgz</b></p></summary>
<p>JavaScript implementations of network transports, cryptography, ciphers, PKI, message digests, and various utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz">https://registry.npmjs.org/node-forge/-/node-forge-0.10.0.tgz</a></p>
<p>Path to dependency file: /cli/package.json</p>
<p>Path to vulnerable library: /cli/node_modules/node-forge/package.json,/app/node_modules/node-forge/package.json,/ui/node_modules/node-forge/package.json,/docs/node_modules/node-forge/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-3.11.0.tgz (Root Library)
- selfsigned-1.10.11.tgz
- :x: **node-forge-0.10.0.tgz** (Vulnerable Library)
<p>Found in base branch: <b>dev</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Forge (also called `node-forge`) is a native implementation of Transport Layer Security in JavaScript. Prior to version 1.3.0, RSA PKCS#1 v1.5 signature verification code does not check for tailing garbage bytes after decoding a `DigestInfo` ASN.1 structure. This can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used. The issue has been addressed in `node-forge` version 1.3.0. There are currently no known workarounds.
<p>Publish Date: 2022-03-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-24772>CVE-2022-24772</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-24772</a></p>
<p>Release Date: 2022-03-18</p>
<p>Fix Resolution: node-forge - 1.3.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_defect
|
cve high detected in node forge tgz cve high severity vulnerability vulnerable library node forge tgz javascript implementations of network transports cryptography ciphers pki message digests and various utilities library home page a href path to dependency file cli package json path to vulnerable library cli node modules node forge package json app node modules node forge package json ui node modules node forge package json docs node modules node forge package json dependency hierarchy webpack dev server tgz root library selfsigned tgz x node forge tgz vulnerable library found in base branch dev vulnerability details forge also called node forge is a native implementation of transport layer security in javascript prior to version rsa pkcs signature verification code does not check for tailing garbage bytes after decoding a digestinfo asn structure this can allow padding bytes to be removed and garbage data added to forge a signature when a low public exponent is being used the issue has been addressed in node forge version there are currently no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution node forge step up your open source security game with whitesource
| 0
|
831,879
| 32,064,164,272
|
IssuesEvent
|
2023-09-25 00:28:10
|
python/mypy
|
https://api.github.com/repos/python/mypy
|
closed
|
stubgen: wrong import statements for base module when importing submodule
|
bug priority-1-normal topic-stubgen
|
There's a bug with the import statements generated in .pyi files.
Consider the folder structure:
```
somefolder/
test.py
foo_mod/
__init__.py
bar_mod.py
```
with file contents:
```python
# test.py
import foo_mod
import foo_mod.bar_mod as bar_mod
class baz(foo_mod.foo, bar_mod.bar):
pass
```
```python
# foo_mod/__init__.py
class foo:
pass
```
```python
# foo_mod/bar_mod.py
class bar:
pass
```
Then running `stubgen test.py` whilst in `somefolder` generates an `out/test.pyi` file with contents:
```python
# out/test.pyi
# Stubs for test (Python 3)
#
# NOTE: This dynamically typed stub was automatically generated by stubgen.
import foo_mod.bar_mod as bar_mod
import foo_mod.bar_mod
class baz(foo_mod.foo, bar_mod.bar): ...
```
Note how it says `foo_mod.bar_mod`, rather than just `foo_mod` as expected.
This issue generalises to subsubpackages as well (and presumably further). That is,
```python
# test.py
import foo_mod
import foo_mod.bar_mod as bar_mod
import foo_mod.bar_mod.quux_mod as quux_mod
...
```
generates
```python
# out/test.pyi
import foo_mod.bar_mod.quux_mod as bar_mod
import foo_mod.bar_mod.quux_mod
import foo_mod.bar_mod.quux_mod as quux_mod
...
```
Tested on Windows and Unix, with both mypy version 0.701 and the latest on the GitHub master branch, with Python 3.6.6. (At least, that's the version of Python which is bound to `python` on the command line, so presumably that's what stubgen is using?)
Possibly related? https://github.com/python/mypy/issues/6831
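For illustration, here is a minimal sketch of the rendering the stub generator is expected to produce (the helper name is hypothetical, not mypy's API): each recorded `(module, alias)` pair is rendered independently, so a submodule alias does not leak into the plain `import foo_mod` statement.

```python
def render_imports(imports):
    """Render (module, alias-or-None) pairs back into import statements.

    The reported bug effectively rewrites a plain `import foo_mod` into
    `import foo_mod.bar_mod` once a submodule alias was also imported;
    rendering each pair on its own avoids that.
    """
    lines = []
    for module, alias in imports:
        if alias and alias != module:
            lines.append(f"import {module} as {alias}")
        else:
            lines.append(f"import {module}")
    return lines
```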
|
1.0
|
stubgen: wrong import statements for base module when importing submodule - There's a bug with the import statements generated in .pyi files.
Consider the folder structure:
```
somefolder/
test.py
foo_mod/
__init__.py
bar_mod.py
```
with file contents:
```python
# test.py
import foo_mod
import foo_mod.bar_mod as bar_mod
class baz(foo_mod.foo, bar_mod.bar):
pass
```
```python
# foo_mod/__init__.py
class foo:
pass
```
```python
# foo_mod/bar_mod.py
class bar:
pass
```
Then running `stubgen test.py` whilst in `somefolder` generates an `out/test.pyi` file with contents:
```python
# out/test.pyi
# Stubs for test (Python 3)
#
# NOTE: This dynamically typed stub was automatically generated by stubgen.
import foo_mod.bar_mod as bar_mod
import foo_mod.bar_mod
class baz(foo_mod.foo, bar_mod.bar): ...
```
Note how it says `foo_mod.bar_mod`, rather than just `foo_mod` as expected.
This issue generalises to subsubpackages as well (and presumably further). That is,
```python
# test.py
import foo_mod
import foo_mod.bar_mod as bar_mod
import foo_mod.bar_mod.quux_mod as quux_mod
...
```
generates
```python
# out/test.pyi
import foo_mod.bar_mod.quux_mod as bar_mod
import foo_mod.bar_mod.quux_mod
import foo_mod.bar_mod.quux_mod as quux_mod
...
```
Tested on Windows and Unix, with both mypy version 0.701 and the latest on the GitHub master branch, with Python 3.6.6. (At least, that's the version of Python which is bound to `python` on the command line, so presumably that's what stubgen is using?)
Possibly related? https://github.com/python/mypy/issues/6831
|
non_defect
|
stubgen wrong import statements for base module when importing submodule there s a bug with the import statements generated in pyi files consider the folder structure somefolder test py foo mod init py bar mod py with file contents python test py import foo mod import foo mod bar mod as bar mod class baz foo mod foo bar mod bar pass python foo mod init py class foo pass python foo mod bar mod py class bar pass then running stubgen test py whilst in somefolder generates an out test pyi file with contents python out test pyi stubs for test python note this dynamically typed stub was automatically generated by stubgen import foo mod bar mod as bar mod import foo mod bar mod class baz foo mod foo bar mod bar note how it says foo mod bar mod rather than just foo mod as expected this issue generalises to subsubpackages as well and presumably further that is python test py import foo mod import foo mod bar mod as bar mod import foo mod bar mod quux mod as quux mod generates python out test pyi import foo mod bar mod quux mod as bar mod import foo mod bar mod quux mod import foo mod bar mod quux mod as quux mod tested on windows and unix with both mypy version and the latest on the github master branch with python at least that s the version of python which is bound to python on the command line so presumably that s what stubgen is using possibly related
| 0
|
227,514
| 18,066,223,204
|
IssuesEvent
|
2021-09-20 19:30:21
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: restore/nodeShutdown/worker failed
|
C-test-failure O-robot O-roachtest T-bulkio branch-release-21.1
|
roachtest.restore/nodeShutdown/worker [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3429400&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3429400&tab=artifacts#/restore/nodeShutdown/worker) on release-21.1 @ [ab68cab1d88e2edf70d46631549159da1149dd6a](https://github.com/cockroachdb/cockroach/commits/ab68cab1d88e2edf70d46631549159da1149dd6a):
```
The test failed on branch=release-21.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/restore/nodeShutdown/worker/run_1
jobs.go:131,restore.go:288,test_runner.go:733: job too fast! job got to state succeeded before the target node could be shutdown
(1) attached stack trace
-- stack trace:
| main.jobSurvivesNodeShutdown.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/jobs.go:116
| main.(*monitor).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2666
| golang.org/x/sync/errgroup.(*Group).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1371
Wraps: (2) job too fast! job got to state succeeded before the target node could be shutdown
Error types: (1) *withstack.withStack (2) *errutil.leafError
```
<details><summary>Reproduce</summary>
<p>
<p>To reproduce, try:
```bash
# From https://go.crdb.dev/p/roachstress, perhaps edited lightly.
caffeinate ./roachstress.sh restore/nodeShutdown/worker
```
</p>
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #70072 roachtest: restore/nodeShutdown/worker failed [C-test-failure O-roachtest O-robot T-bulkio branch-release-21.2 release-blocker]
- #69078 roachtest: restore/nodeShutdown/worker failed [C-test-failure O-roachtest O-robot T-bulkio branch-master]
</p>
</details>
/cc @cockroachdb/bulk-io
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*restore/nodeShutdown/worker.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: restore/nodeShutdown/worker failed - roachtest.restore/nodeShutdown/worker [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=3429400&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=3429400&tab=artifacts#/restore/nodeShutdown/worker) on release-21.1 @ [ab68cab1d88e2edf70d46631549159da1149dd6a](https://github.com/cockroachdb/cockroach/commits/ab68cab1d88e2edf70d46631549159da1149dd6a):
```
The test failed on branch=release-21.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/restore/nodeShutdown/worker/run_1
jobs.go:131,restore.go:288,test_runner.go:733: job too fast! job got to state succeeded before the target node could be shutdown
(1) attached stack trace
-- stack trace:
| main.jobSurvivesNodeShutdown.func2
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/jobs.go:116
| main.(*monitor).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2666
| golang.org/x/sync/errgroup.(*Group).Go.func1
| /home/agent/work/.go/src/github.com/cockroachdb/cockroach/vendor/golang.org/x/sync/errgroup/errgroup.go:57
| runtime.goexit
| /usr/local/go/src/runtime/asm_amd64.s:1371
Wraps: (2) job too fast! job got to state succeeded before the target node could be shutdown
Error types: (1) *withstack.withStack (2) *errutil.leafError
```
<details><summary>Reproduce</summary>
<p>
<p>To reproduce, try:
```bash
# From https://go.crdb.dev/p/roachstress, perhaps edited lightly.
caffeinate ./roachstress.sh restore/nodeShutdown/worker
```
</p>
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #70072 roachtest: restore/nodeShutdown/worker failed [C-test-failure O-roachtest O-robot T-bulkio branch-release-21.2 release-blocker]
- #69078 roachtest: restore/nodeShutdown/worker failed [C-test-failure O-roachtest O-robot T-bulkio branch-master]
</p>
</details>
/cc @cockroachdb/bulk-io
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*restore/nodeShutdown/worker.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_defect
|
roachtest restore nodeshutdown worker failed roachtest restore nodeshutdown worker with on release the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts restore nodeshutdown worker run jobs go restore go test runner go job too fast job got to state succeeded before the target node could be shutdown attached stack trace stack trace main jobsurvivesnodeshutdown home agent work go src github com cockroachdb cockroach pkg cmd roachtest jobs go main monitor go home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go golang org x sync errgroup group go home agent work go src github com cockroachdb cockroach vendor golang org x sync errgroup errgroup go runtime goexit usr local go src runtime asm s wraps job too fast job got to state succeeded before the target node could be shutdown error types withstack withstack errutil leaferror reproduce to reproduce try bash from perhaps edited lightly caffeinate roachstress sh restore nodeshutdown worker same failure on other branches roachtest restore nodeshutdown worker failed roachtest restore nodeshutdown worker failed cc cockroachdb bulk io
| 0
|
23,451
| 4,018,968,354
|
IssuesEvent
|
2016-05-16 13:14:53
|
FreeCodeCamp/FreeCodeCamp
|
https://api.github.com/repos/FreeCodeCamp/FreeCodeCamp
|
closed
|
Similar code written in different style gives error!!
|
confirmed help wanted tests
|
Challenge [Generate Random Whole Numbers with JavaScript](https://www.freecodecamp.com/challenges/generate-random-whole-numbers-with-javascript)
User Agent is: <code>Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36</code>.
Please describe how to reproduce this issue, and include links to screenshots if possible.
My code:
```javascript
var randomNumberBetween0and19 = Math.floor(Math.random() * 20);
function randomWholeNum() {
// Only change code below this line.
var rand = Math.random() * 10;
return Math.floor(rand);
// return Math.floor(Math.random() * 10); //works fine
//but above doesn't work
}
```

|
1.0
|
Similar code written in different style gives error!! - Challenge [Generate Random Whole Numbers with JavaScript](https://www.freecodecamp.com/challenges/generate-random-whole-numbers-with-javascript)
User Agent is: <code>Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36</code>.
Please describe how to reproduce this issue, and include links to screenshots if possible.
My code:
```javascript
var randomNumberBetween0and19 = Math.floor(Math.random() * 20);
function randomWholeNum() {
// Only change code below this line.
var rand = Math.random() * 10;
return Math.floor(rand);
// return Math.floor(Math.random() * 10); //works fine
//but above doesn't work
}
```

|
non_defect
|
similar code written in different style gives error challenge user agent is mozilla windows nt applewebkit khtml like gecko chrome safari please describe how to reproduce this issue and include links to screenshots if possible my code javascript var math floor math rando m function randomwholenum only change code below this line var rand math random return math floor rand return math floor math random works fine but above doesn t work
| 0
|
56,720
| 15,308,982,462
|
IssuesEvent
|
2021-02-24 23:28:50
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
closed
|
Images derived from cropped image styles do not respect the crop.
|
Core Application Team Defect Drupal engineering Planned work
|
**Describe the defect**
Some cropped images do not respect the user-specified crops
**To Reproduce**
I'm not able to reproduce this when uploading new images at /media/add/image, but I am able to find issues where existing images that are derivative of crops do not respect the crops.
Try reproducing by uploading new photos, eg at /media/add/image. Crop at 2x1, and change the default crop. Derivative images of that crop (eg image styles `2_1_large` and `2_1_medium_thumbnail`) should be based on the crop you made.
On staging and on local, this was true. However, on prod there are definitely examples of images where the derivatives do not match the crop, see screenshots below:
Possible causes I can think of:
* this is prod specific
* browser or operating system
* this occurs when adding images from /node/add/news_story (did not test), but not when adding at /media/add/image (did test)
* happens to older images (whether seconds old, or possibly much older)


|
1.0
|
Images derived from cropped image styles do not respect the crop. - **Describe the defect**
Some cropped images do not respect the user-specified crops
**To Reproduce**
I'm not able to reproduce this when uploading new images at /media/add/image, but I am able to find issues where existing images that are derivative of crops do not respect the crops.
Try reproducing by uploading new photos, eg at /media/add/image. Crop at 2x1, and change the default crop. Derivative images of that crop (eg image styles `2_1_large` and `2_1_medium_thumbnail`) should be based on the crop you made.
On staging and on local, this was true. However, on prod there are definitely examples of images where the derivatives do not match the crop, see screenshots below:
Possible causes I can think of:
* this is prod specific
* browser or operating system
* this occurs when adding images from /node/add/news_story (did not test), but not when adding at /media/add/image (did test)
* happens to older images (whether seconds old, or possibly much older)


|
defect
|
images derived from cropped image styles do not respect the crop describe the defect some cropped images do not respect the user specified crops to reproduce i m not able to reproduce this when uploading new images at media add image but i am able to find issues where existing images that are derivative of crops do not respect the crops try reproducing by uploading new photos eg at media add image crop at and change the default crop derivative images of that crop eg image styles large and medium thumbnail should be based on the crop you made on staging and on local this was true however on prod there are definitely examples of images where the derivatives do nto match the crop see screenshots below possible causes i can think of this is prod specific browser or operating system this occurs when adding images from node add news story did not test but not when adding at media add image did test happens to older images whether seconds old or possibly much older
| 1
|
311,912
| 23,409,987,413
|
IssuesEvent
|
2022-08-12 16:25:06
|
jhipster/jhipster-lite
|
https://api.github.com/repos/jhipster/jhipster-lite
|
closed
|
Add sonar commands in readme
|
area: documentation:books: area: enhancement :wrench:
|
Add image start
```
docker-compose -f src/main/docker/sonar.yml up -d
```
And analysis run
```
./mvnw clean verify sonar:sonar
```
|
1.0
|
Add sonar commands in readme - Add image start
```
docker-compose -f src/main/docker/sonar.yml up -d
```
And analysis run
```
./mvnw clean verify sonar:sonar
```
|
non_defect
|
add sonar commands in readme add image start docker compose f src main docker sonar yml up d and analysis run mvnw clean verify sonar sonar
| 0
|
4,363
| 2,610,092,484
|
IssuesEvent
|
2015-02-26 18:27:55
|
chrsmith/dsdsdaadf
|
https://api.github.com/repos/chrsmith/dsdsdaadf
|
opened
|
深圳改善痤疮的价位
|
auto-migrated Priority-Medium Type-Defect
|
```
深圳改善痤疮的价位【深圳韩方科颜全国热线400-869-1818,24小
时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国��
�方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩�
��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”
健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专��
�治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的�
��痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:57
|
1.0
|
深圳改善痤疮的价位 - ```
深圳改善痤疮的价位【深圳韩方科颜全国热线400-869-1818,24小
时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国��
�方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩�
��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”
健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专��
�治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的�
��痘。
```
-----
Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:57
|
defect
|
深圳改善痤疮的价位 深圳改善痤疮的价位【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国�� �方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩� ��科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹” 健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专�� �治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的� ��痘。 original issue reported on code google com by szft com on may at
| 1
|
67,577
| 21,000,135,092
|
IssuesEvent
|
2022-03-29 16:38:11
|
hazelcast/hazelcast
|
https://api.github.com/repos/hazelcast/hazelcast
|
closed
|
The IMPOSE_ORDER function should drop late items [HZ-1016]
|
Type: Defect Source: Internal Team: SQL to-jira
|
The IMPOSE_ORDER function should not only add watermarks to the stream, but also drop late items. The current implementation doesn't; we rely on the window functions (TUMBLE/HOP) to do it, but it should happen even without a window function.
This test should pass in `SqlImposeOrderFunctionTest`:
```java
@Test
public void test_lateItemsDropping() {
String name = createTable(
row(timestampTz(10), "Alice"),
row(timestampTz(11), "Bob"),
row(timestampTz(5), "Cecilia")
);
assertRowsEventuallyInAnyOrder(
"SELECT * FROM " +
"TABLE(IMPOSE_ORDER(" +
" lag => INTERVAL '0.001' SECOND" +
" , input => (TABLE " + name + ")" +
" , timeCol => DESCRIPTOR(ts)" +
"))",
asList(
new Row(timestampTz(0), "Alice"),
new Row(timestampTz(1), "Bob")
            // Cecilia is dropped because it's late
)
);
}
```
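For illustration, a minimal Python sketch (not Hazelcast's implementation; timestamps and lag are in abstract units rather than `INTERVAL` values) of watermark-based late-item dropping that matches the expected rows in the test above:

```python
def impose_order(events, lag):
    """Drop late items from a stream of (timestamp, value) pairs.

    The watermark advances to the highest timestamp seen minus the
    allowed lag; any item whose timestamp is already behind the
    watermark is considered late and dropped.
    """
    watermark = float("-inf")
    out = []
    for ts, value in events:
        watermark = max(watermark, ts - lag)
        if ts < watermark:
            continue  # late item: dropped
        out.append((ts, value))
    return out
```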
|
1.0
|
The IMPOSE_ORDER function should drop late items [HZ-1016] - The IMPOSE_ORDER function should not only add watermarks to the stream, but also drop late items. The current implementation doesn't; we rely on the window functions (TUMBLE/HOP) to do it, but it should happen even without a window function.
This test should pass in `SqlImposeOrderFunctionTest`:
```java
@Test
public void test_lateItemsDropping() {
String name = createTable(
row(timestampTz(10), "Alice"),
row(timestampTz(11), "Bob"),
row(timestampTz(5), "Cecilia")
);
assertRowsEventuallyInAnyOrder(
"SELECT * FROM " +
"TABLE(IMPOSE_ORDER(" +
" lag => INTERVAL '0.001' SECOND" +
" , input => (TABLE " + name + ")" +
" , timeCol => DESCRIPTOR(ts)" +
"))",
asList(
new Row(timestampTz(0), "Alice"),
new Row(timestampTz(1), "Bob")
            // Cecilia is dropped because it's late
)
);
}
```
|
defect
|
the impose order function should drop late items the impose order function should not only add watermarks to the stream but also drop late items the current implementation doesn t we rely on the window functions tumble hop to do it but it should happen even without a window function this test should pass in sqlimposeorderfunctiontest java test public void test lateitemsdropping string name createtable row timestamptz alice row timestamptz bob row timestamptz cecilia assertrowseventuallyinanyorder select from table impose order lag interval second input table name timecol descriptor ts aslist new row timestamptz alice new row timestamptz bob cecilia is dropped because ti s late
| 1
|
13,419
| 2,755,876,090
|
IssuesEvent
|
2015-04-27 01:01:20
|
imelven/hackazon
|
https://api.github.com/repos/imelven/hackazon
|
closed
|
/home/install does not work if empty db
|
auto-migrated Priority-Medium Type-Defect
|
```
See error http://screencast.com/t/rFIYdGVe
In /vendor/phpixie/core/classes/PHPixie method "run" contains
"$this->before();" which execute /classes/App/Page::before()
```
Original issue reported on code.google.com by `alisovt...@gmail.com` on 25 Jul 2014 at 12:39
|
1.0
|
/home/install does not work if empty db - ```
See error http://screencast.com/t/rFIYdGVe
In /vendor/phpixie/core/classes/PHPixie method "run" contains
"$this->before();" which execute /classes/App/Page::before()
```
Original issue reported on code.google.com by `alisovt...@gmail.com` on 25 Jul 2014 at 12:39
|
defect
|
home install does not work if empty db see error in vendor phpixie core classes phpixie method run contains this before which execute classes app page before original issue reported on code google com by alisovt gmail com on jul at
| 1
|
425,984
| 12,365,461,872
|
IssuesEvent
|
2020-05-18 08:52:30
|
telstra/open-kilda
|
https://api.github.com/repos/telstra/open-kilda
|
opened
|
Flow history can miss entries if flow is deleted right after flow status change
|
bug priority/4-low
|
1. A flow
2. Break all flow paths to make the flow Down
3. Get an ISL UP to make flow reroute successfully
4. Wait for flow to become UP and immediately send a Delete flow command
**Expected**: Flow history is reflecting all the changes that happened to flow and its status, expected sequence: create->failed reroute + down-> ok reroute + up -> delete
**Actual**: Flow history is missing an entry about flow becoming 'up': create -> failed reroute + down -> delete
|
1.0
|
Flow history can miss entries if flow is deleted right after flow status change - 1. A flow
2. Break all flow paths to make the flow Down
3. Get an ISL UP to make flow reroute successfully
4. Wait for flow to become UP and immediately send a Delete flow command
**Expected**: Flow history is reflecting all the changes that happened to flow and its status, expected sequence: create->failed reroute + down-> ok reroute + up -> delete
**Actual**: Flow history is missing an entry about flow becoming 'up': create -> failed reroute + down -> delete
|
non_defect
|
flow history can miss entries if flow is deleted right after flow status change a flow break all flow paths to make the flow down get an isl up to make flow reroute successfully wait for flow to become up and immediately send a delete flow command expected flow history is reflecting all the changes that happened to flow and its status expected sequence create failed reroute down ok reroute up delete actual flow history is missing an entry about flow becoming up create failed reroute down delete
| 0
|
363,407
| 25,449,485,907
|
IssuesEvent
|
2022-11-24 09:21:48
|
samuel-gomez/react-starter-toolkit
|
https://api.github.com/repos/samuel-gomez/react-starter-toolkit
|
closed
|
Storybook
|
documentation enhancement
|
- ajouter Storybook
- clean addons
- add docs addon
- add deployment on Chromatic (https://www.chromatic.com/)
|
1.0
|
Storybook - - ajouter Storybook
- clean addons
- add docs addon
- add deployment on Chromatic (https://www.chromatic.com/)
|
non_defect
|
storybook ajouter storybook clean addons add docs addon add deployment on chromatic
| 0
|
68,958
| 22,034,747,678
|
IssuesEvent
|
2022-05-28 11:45:05
|
meerk40t/meerk40t
|
https://api.github.com/repos/meerk40t/meerk40t
|
closed
|
[Bug] fill-rule:nonzero not respected
|
Type: Defect
|
V0.7.6
Overlapping text-to-path with explicit `fill-rule:nonzero` is rastered as `fill-rule:evenodd`
|
1.0
|
[Bug] fill-rule:nonzero not respected - V0.7.6
Overlapping text-to-path with explicit `fill-rule:nonzero` is rastered as `fill-rule:evenodd`
|
defect
|
fill rule nonzero not respected overlapping text to path with explicit fill rule nonzero is rastered as fill rule evenodd
| 1
|
159,073
| 12,454,193,827
|
IssuesEvent
|
2020-05-27 14:54:06
|
dialoguemd/covidflow
|
https://api.github.com/repos/dialoguemd/covidflow
|
closed
|
Mirror private E2E repository to github
|
chore tests
|
Our private gitlab project for the Rasa integration testing should be moved to a public Rasa project on github
|
1.0
|
Mirror private E2E repository to github - Our private gitlab project for the Rasa integration testing should be moved to a public Rasa project on github
|
non_defect
|
mirror private repository to github our private gitlab project for the rasa integration testing should be moved to a public rasa project on github
| 0
|
48,464
| 13,086,110,582
|
IssuesEvent
|
2020-08-02 04:42:31
|
googlefonts/noto-fonts
|
https://api.github.com/repos/googlefonts/noto-fonts
|
closed
|
Lepcha - stacked superscripts are misplaced horizontally
|
Android FoundIn-1.x Priority-Medium Script-Lepcha Type-Defect in-evaluation
|
Stacked superscripts including Lepcha character RAN (U+1C36) are out of alignment.
Screenshot: examples left - expected; examples right - actual result
|
1.0
|
Lepcha - stacked superscripts are misplaced horizontally - Stacked superscripts including Lepcha character RAN (U+1C36) are out of alignment.
Screenshot: examples left - expected; examples right - actual result
|
defect
|
lepcha stacked superscripts are misplaced horizontally stacked superscripts including lepcha character ran u are out of alignment screenshot examples left expected examples right actual result
| 1
|
91,726
| 3,862,239,565
|
IssuesEvent
|
2016-04-08 01:22:07
|
rsanchez-wsu/sp16-ceg3120
|
https://api.github.com/repos/rsanchez-wsu/sp16-ceg3120
|
opened
|
Merge our LayerUI for closing the tabs into the merge from Master
|
help wanted priority-high team-1
|
We need to put our LayerUI into the Dev branch with the merged Master code in it.
|
1.0
|
Merge our LayerUI for closing the tabs into the merge from Master - We need to put our LayerUI into the Dev branch with the merged Master code in it.
|
non_defect
|
merge our layerui for closing the tabs into the merge from master we need to put our layerui into the dev branch with the merged master code in it
| 0
|
49,223
| 13,185,304,352
|
IssuesEvent
|
2020-08-12 21:07:38
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
opened
|
[sim-services] Documentation is horrible (Trac #998)
|
Incomplete Migration Migrated from Trac combo simulation defect
|
<details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/998
, reported by olivas and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:03",
"description": "Most modules are not documented at all. This needs to be fixed before the next release.",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"_ts": "1458335643235016",
"component": "combo simulation",
"summary": "[sim-services] Documentation is horrible",
"priority": "blocker",
"keywords": "",
"time": "2015-05-26T21:32:53",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[sim-services] Documentation is horrible (Trac #998) - <details>
<summary><em>Migrated from https://code.icecube.wisc.edu/ticket/998
, reported by olivas and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-18T21:14:03",
"description": "Most modules are not documented at all. This needs to be fixed before the next release.",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"_ts": "1458335643235016",
"component": "combo simulation",
"summary": "[sim-services] Documentation is horrible",
"priority": "blocker",
"keywords": "",
"time": "2015-05-26T21:32:53",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
defect
|
documentation is horrible trac migrated from reported by olivas and owned by olivas json status closed changetime description most modules are not documented at all this needs to be fixed before the next release reporter olivas cc resolution fixed ts component combo simulation summary documentation is horrible priority blocker keywords time milestone owner olivas type defect
| 1
|
1,065
| 2,507,209,729
|
IssuesEvent
|
2015-01-12 16:47:22
|
chessmasterhong/WaterEmblem
|
https://api.github.com/repos/chessmasterhong/WaterEmblem
|
opened
|
Trading items with another player unit does not check if item is a valid equipable item.
|
bug high priority
|
Currently, any item can be traded into another player unit's equipment slot (slot 0) without restrictions. This can result in units equipping items that they should not be able to equip. The restriction of unit-specific item type equips was put into place by pull request #67.
Ideally, the objective is to extend the checks implemented in #67 to the trade system.
Below shows an example of a mage unit (right side) equipping a bow when it was specifically coded to only equip tomes. Even weirder is a unit (left side) equipping a consumable item.

|
1.0
|
Trading items with another player unit does not check if item is a valid equipable item. - Currently, any item can be traded into another player unit's equipment slot (slot 0) without restrictions. This can result in units equipping items that they should not be able to equip. The restriction of unit-specific item type equips was put into place by pull request #67.
Ideally, the objective is to extend the checks implemented in #67 to the trade system.
Below shows an example of a mage unit (right side) equipping a bow when it was specifically coded to only equip tomes. Even weirder is a unit (left side) equipping a consumable item.

|
non_defect
|
trading items with another player unit does not check if item is a valid equipable item currently any item can be traded into another player unit s equipment slot slot without restrictions this can result in units equipping items that they should not be able to equip the restriction of unit specific item type equips was put into place by pull request ideally the objective is to extend the checks implemented in to the trade system below shows an example of a mage unit right side equipping a bow when it was specifically coded to only equip tomes even weirder is a unit left side equipping a consumable item
| 0
|
76,731
| 26,569,405,201
|
IssuesEvent
|
2023-01-21 00:58:59
|
openzfs/zfs
|
https://api.github.com/repos/openzfs/zfs
|
closed
|
Writing to a pool with a high fragmented free space shows write errors
|
Type: Defect Status: Stale
|
<!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | gentoo
Distribution Version | 2.4.1
Linux Kernel | 4.7.10
Architecture | amd64
OpenZFS Version | 2.1.1
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
I am using a pool whose free space is highly fragmented.
```
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 2.70T 1.53T 1.18T - - 77% 56% 1.00x ONLINE -
```
The pool uses 12 × 250 GByte SSD disks.
```
pool: zfs-235cd8b3-835a-4216-a38b-b52c5c566f55
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
scan: resilvered 1.14G in 00:00:17 with 0 errors on Tue Oct 12 14:53:08 2021
config:
NAME STATE READ WRITE CKSUM
zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 ONLINE 0 0 0
raidz3-0 ONLINE 0 0 0
zfs-0x5002538d4282b4e3 ONLINE 0 0 0
zfs-0x5002538d4282b4e2 ONLINE 0 0 0
zfs-0x5002538d4282b4e0 ONLINE 0 0 0
zfs-0x5002538d4282b4df ONLINE 0 0 0
zfs-0x5002538d4282b4d3 ONLINE 0 0 0
zfs-0x5002538d4282b4d4 ONLINE 0 0 0
zfs-0x5002538d4282b4d1 ONLINE 0 0 0
zfs-0x5002538d4282b4d2 ONLINE 0 0 0
zfs-0x5002538d4280bd98 ONLINE 0 0 0
zfs-0x5002538d423c659f ONLINE 0 0 0
zfs-0x5002538d4280c3ec ONLINE 0 0 0
zfs-0x5002538d423c657b ONLINE 0 0 0
errors: No known data errors
```
This pool was created with version 0.8.2.
With that version the pool shows low write performance due to high CPU load during metaslab allocation (metaslab_load),
but all disks work fine and no write errors are visible.
Fragmentation:
```
zdb -M zfs-235cd8b3-835a-4216-a38b-b52c5c566f55
pool zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 fragmentation 78%
11: 8659773 ****************
12: 16373067 ******************************
13: 22102772 ****************************************
14: 17029560 *******************************
15: 10903563 ********************
16: 521022 *
17: 27324 *
18: 1 *
```
sample space map object:
"zdb -mmm zfs-235cd8b3-835a-4216-a38b-b52c5c566f55" is many times slower than with version 0.8.2.
"zdb -mm zfs-235cd8b3-835a-4216-a38b-b52c5c566f55" is fast.
```
zdb -mmm zfs-235cd8b3-835a-4216-a38b-b52c5c566f55
space map object 222:
smp_length = 0x5698c0
smp_alloc = 0x23b8ba000
metaslab 1 offset 400000000 spacemap 4 free 6.98G
segments 166687 maxsize 202K freepct 43%
In-memory histogram:
11: 39689 **************
12: 86763 ******************************
13: 117519 ****************************************
14: 97159 **********************************
15: 57320 ********************
16: 11649 ****
17: 559 *
On-disk histogram: fragmentation 78
11: 41162 **************
12: 89268 ******************************
13: 121123 ****************************************
14: 101229 **********************************
15: 71358 ************************
16: 1135 *
17: 10 *
```
Using this pool with version 2.1.1 now shows write errors.
Other pools are working fine on my setup.
### Describe how to reproduce the problem
Using the fio tool to write data
```
fio --name=/opt/zfs_mount/235cd8b3-835a-4216-a38b-b52c5c566f55/voldata/tmp/write12 --rw=write --direct=0 --ioengine=libaio --bs=64k --numjobs=2 --size=100G --runtime=600 --group_reporting
```
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
shows
```
2021-10-15T11:55:56.767372+02:00 controller-21 kernel: [ 372.348892] blk_update_request: I/O error, dev sdf, sector 335937206
2021-10-15T11:55:56.767372+02:00 controller-21 kernel: [ 372.348896] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d3 error=5 type=2 offset=170921749504 size=9216 flags=40080c80
2021-10-15T11:56:48.734371+02:00 controller-21 kernel: [ 424.310414] blk_update_request: I/O error, dev sdd, sector 335967046
2021-10-15T11:56:48.734371+02:00 controller-21 kernel: [ 424.310418] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4e0 error=5 type=2 offset=170937027584 size=6656 flags=40080c80
2021-10-15T11:57:27.710369+02:00 controller-21 kernel: [ 463.282832] blk_update_request: I/O error, dev sdf, sector 183934405
2021-10-15T11:57:27.710369+02:00 controller-21 kernel: [ 463.282837] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d3 error=5 type=2 offset=93096315392 size=8192 flags=40080c80
2021-10-15T11:59:22.720958+02:00 controller-21 kernel: [ 578.280205] blk_update_request: I/O error, dev sdm, sector 249524636
2021-10-15T11:59:22.720958+02:00 controller-21 kernel: [ 578.280210] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d423c657b error=5 type=2 offset=126678513664 size=8192 flags=40080c80
2021-10-15T12:00:02.654389+02:00 controller-21 kernel: [ 618.212548] blk_update_request: I/O error, dev sdg, sector 336037547
2021-10-15T12:00:02.654389+02:00 controller-21 kernel: [ 618.212552] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d4 error=5 type=2 offset=170973124096 size=8192 flags=40080c80
2021-10-15T12:00:39.710381+02:00 controller-21 kernel: [ 655.265140] blk_update_request: I/O error, dev sdi, sector 184003452
2021-10-15T12:00:39.710381+02:00 controller-21 kernel: [ 655.265145] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d2 error=5 type=2 offset=93131667456 size=11776 flags=40080c80
2021-10-15T12:01:18.686373+02:00 controller-21 kernel: [ 694.237890] blk_update_request: I/O error, dev sdi, sector 336051191
2021-10-15T12:01:18.686373+02:00 controller-21 kernel: [ 694.237894] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d2 error=5 type=2 offset=170980109824 size=8192 flags=40080c80
2021-10-15T12:02:06.687818+02:00 controller-21 kernel: [ 742.233127] blk_update_request: I/O error, dev sde, sector 184031306
2021-10-15T12:02:06.687818+02:00 controller-21 kernel: [ 742.233131] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4df error=5 type=2 offset=93145928704 size=8704 flags=40080c80
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249653] blk_update_request: I/O error, dev sdl, sector 184058234
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249657] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4280c3ec error=5 type=2 offset=93159715840 size=11264 flags=40080c80
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229807] blk_update_request: I/O error, dev sdc, sector 336111453
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229812] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4e2 error=5 type=2 offset=171010963968 size=8704 flags=40080c80
2021-10-15T12:03:27.242384+02:00 controller-21 kernel: [ 822.781578] sd 14:0:0:0: [sdl] tag#3 abort scheduled
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791567] sd 14:0:0:0: [sdl] tag#3 aborting command
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791570] sd 14:0:0:0: [sdl] tag#3 cmd abort failed
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791576] scsi host14: scsi_eh_14: waking up 0/1/1
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791585] ata15.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791587] ata15.00: failed command: WRITE DMA
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791591] ata15.00: cmd ca/00:16:7a:81:f8/00:00:00:00:00/ea tag 3 dma 11264 out
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791591] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791593] ata15.00: status: { DRDY }
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791597] ata15: hard resetting link
2021-10-15T12:03:27.708381+02:00 controller-21 kernel: [ 823.247532] ata15: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
2021-10-15T12:03:27.708381+02:00 controller-21 kernel: [ 823.247738] ata15.00: supports DRM functions and may not be fully accessible
2021-10-15T12:03:27.708381+02:00 controller-21 kernel: [ 823.248269] ata15.00: disabling queued TRIM support
2021-10-15T12:03:27.709366+02:00 controller-21 kernel: [ 823.248822] ata15.00: supports DRM functions and may not be fully accessible
2021-10-15T12:03:27.709366+02:00 controller-21 kernel: [ 823.249305] ata15.00: disabling queued TRIM support
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249633] ata15.00: configured for UDMA/133
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249637] ata15.00: device reported invalid CHS sector 0
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249641] sd 14:0:0:0: [sdl] tag#3 scsi_eh_14: flush finish cmd
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249646] sd 14:0:0:0: [sdl] tag#3 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249648] sd 14:0:0:0: [sdl] tag#3 Sense Key : Illegal Request [current]
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249649] sd 14:0:0:0: [sdl] tag#3 Add. Sense: Unaligned write command
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249652] sd 14:0:0:0: [sdl] tag#3 CDB: Write(10) 2a 00 0a f8 81 7a 00 00 16 00
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249653] blk_update_request: I/O error, dev sdl, sector 184058234
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249657] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4280c3ec error=5 type=2 offset=93159715840 size=11264 flags=40080c80
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249664] ata15: EH complete
2021-10-15T12:03:27.710474+02:00 controller-21 kernel: [ 823.249666] scsi host14: waking up host to restart
2021-10-15T12:03:27.710474+02:00 controller-21 kernel: [ 823.249671] scsi host14: scsi_eh_14: sleeping
2021-10-15T12:04:09.226380+02:00 controller-21 kernel: [ 864.761706] sd 17:0:0:0: [sdc] tag#18 abort scheduled
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771700] sd 17:0:0:0: [sdc] tag#18 aborting command
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771703] sd 17:0:0:0: [sdc] tag#18 cmd abort failed
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771708] scsi host17: scsi_eh_17: waking up 0/1/1
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771717] ata18.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771720] ata18.00: failed command: WRITE DMA EXT
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771724] ata18.00: cmd 35/00:11:5d:a7:08/00:00:14:00:00/e0 tag 18 dma 8704 out
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771724] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771726] ata18.00: status: { DRDY }
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771729] ata18: hard resetting link
2021-10-15T12:04:09.692387+02:00 controller-21 kernel: [ 865.227673] ata18: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
2021-10-15T12:04:09.692387+02:00 controller-21 kernel: [ 865.227881] ata18.00: supports DRM functions and may not be fully accessible
2021-10-15T12:04:09.692387+02:00 controller-21 kernel: [ 865.228411] ata18.00: disabling queued TRIM support
2021-10-15T12:04:09.693367+02:00 controller-21 kernel: [ 865.228969] ata18.00: supports DRM functions and may not be fully accessible
2021-10-15T12:04:09.693367+02:00 controller-21 kernel: [ 865.229453] ata18.00: disabling queued TRIM support
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229787] ata18.00: configured for UDMA/133
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229790] ata18.00: device reported invalid CHS sector 0
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229794] sd 17:0:0:0: [sdc] tag#18 scsi_eh_17: flush finish cmd
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229800] sd 17:0:0:0: [sdc] tag#18 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229802] sd 17:0:0:0: [sdc] tag#18 Sense Key : Illegal Request [current]
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229803] sd 17:0:0:0: [sdc] tag#18 Add. Sense: Unaligned write command
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229805] sd 17:0:0:0: [sdc] tag#18 CDB: Write(10) 2a 00 14 08 a7 5d 00 00 11 00
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229807] blk_update_request: I/O error, dev sdc, sector 336111453
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229812] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4e2 error=5 type=2 offset=171010963968 size=8704 flags=40080c80
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229820] ata18: EH complete
2021-10-15T12:04:09.694452+02:00 controller-21 kernel: [ 865.229822] scsi host17: waking up host to restart
2021-10-15T12:04:09.694452+02:00 controller-21 kernel: [ 865.229828] scsi host17: scsi_eh_17: sleeping
```
resulting in
```
pool: zfs-235cd8b3-835a-4216-a38b-b52c5c566f55
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
scan: resilvered 1.14G in 00:00:17 with 0 errors on Tue Oct 12 14:53:08 2021
config:
NAME STATE READ WRITE CKSUM
zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 ONLINE 0 0 0
raidz3-0 ONLINE 0 0 0
zfs-0x5002538d4282b4e3 ONLINE 0 0 0
zfs-0x5002538d4282b4e2 ONLINE 0 18 0
zfs-0x5002538d4282b4e0 ONLINE 0 14 0
zfs-0x5002538d4282b4df ONLINE 0 18 0
zfs-0x5002538d4282b4d3 ONLINE 0 36 0
zfs-0x5002538d4282b4d4 ONLINE 0 17 0
zfs-0x5002538d4282b4d1 ONLINE 0 0 0
zfs-0x5002538d4282b4d2 ONLINE 0 41 0
zfs-0x5002538d4280bd98 ONLINE 0 0 0
zfs-0x5002538d423c659f ONLINE 0 0 0
zfs-0x5002538d4280c3ec ONLINE 0 23 0
zfs-0x5002538d423c657b ONLINE 0 17 0
errors: No known data errors
```
|
1.0
|
Writing to a pool with a high fragmented free space shows write errors - <!-- Please fill out the following template, which will help other contributors address your issue. -->
<!--
Thank you for reporting an issue.
*IMPORTANT* - Please check our issue tracker before opening a new issue.
Additional valuable information can be found in the OpenZFS documentation
and mailing list archives.
Please fill in as much of the template as possible.
-->
### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | gentoo
Distribution Version | 2.4.1
Linux Kernel | 4.7.10
Architecture | amd64
OpenZFS Version | 2.1.1
<!--
Command to find OpenZFS version:
zfs version
Commands to find kernel version:
uname -r # Linux
freebsd-version -r # FreeBSD
-->
### Describe the problem you're observing
I am using a pool whose free space is highly fragmented.
```
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 2.70T 1.53T 1.18T - - 77% 56% 1.00x ONLINE -
```
The pool uses 12 × 250 GByte SSD disks.
```
pool: zfs-235cd8b3-835a-4216-a38b-b52c5c566f55
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
scan: resilvered 1.14G in 00:00:17 with 0 errors on Tue Oct 12 14:53:08 2021
config:
NAME STATE READ WRITE CKSUM
zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 ONLINE 0 0 0
raidz3-0 ONLINE 0 0 0
zfs-0x5002538d4282b4e3 ONLINE 0 0 0
zfs-0x5002538d4282b4e2 ONLINE 0 0 0
zfs-0x5002538d4282b4e0 ONLINE 0 0 0
zfs-0x5002538d4282b4df ONLINE 0 0 0
zfs-0x5002538d4282b4d3 ONLINE 0 0 0
zfs-0x5002538d4282b4d4 ONLINE 0 0 0
zfs-0x5002538d4282b4d1 ONLINE 0 0 0
zfs-0x5002538d4282b4d2 ONLINE 0 0 0
zfs-0x5002538d4280bd98 ONLINE 0 0 0
zfs-0x5002538d423c659f ONLINE 0 0 0
zfs-0x5002538d4280c3ec ONLINE 0 0 0
zfs-0x5002538d423c657b ONLINE 0 0 0
errors: No known data errors
```
This pool was created with version 0.8.2.
With that version the pool shows low write performance due to high CPU load during metaslab allocation (metaslab_load),
but all disks work fine and no write errors are visible.
Fragmentation:
```
zdb -M zfs-235cd8b3-835a-4216-a38b-b52c5c566f55
pool zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 fragmentation 78%
11: 8659773 ****************
12: 16373067 ******************************
13: 22102772 ****************************************
14: 17029560 *******************************
15: 10903563 ********************
16: 521022 *
17: 27324 *
18: 1 *
```
sample space map object:
"zdb -mmm zfs-235cd8b3-835a-4216-a38b-b52c5c566f55" is many times slower than with version 0.8.2.
"zdb -mm zfs-235cd8b3-835a-4216-a38b-b52c5c566f55" is fast.
```
zdb -mmm zfs-235cd8b3-835a-4216-a38b-b52c5c566f55
space map object 222:
smp_length = 0x5698c0
smp_alloc = 0x23b8ba000
metaslab 1 offset 400000000 spacemap 4 free 6.98G
segments 166687 maxsize 202K freepct 43%
In-memory histogram:
11: 39689 **************
12: 86763 ******************************
13: 117519 ****************************************
14: 97159 **********************************
15: 57320 ********************
16: 11649 ****
17: 559 *
On-disk histogram: fragmentation 78
11: 41162 **************
12: 89268 ******************************
13: 121123 ****************************************
14: 101229 **********************************
15: 71358 ************************
16: 1135 *
17: 10 *
```
Using this pool with version 2.1.1 now shows write errors.
Other pools are working fine on my setup.
### Describe how to reproduce the problem
Using the fio tool to write data
```
fio --name=/opt/zfs_mount/235cd8b3-835a-4216-a38b-b52c5c566f55/voldata/tmp/write12 --rw=write --direct=0 --ioengine=libaio --bs=64k --numjobs=2 --size=100G --runtime=600 --group_reporting
```
### Include any warning/errors/backtraces from the system logs
<!--
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example how log text should be marked (wrap it with ```)
```
-->
shows
```
2021-10-15T11:55:56.767372+02:00 controller-21 kernel: [ 372.348892] blk_update_request: I/O error, dev sdf, sector 335937206
2021-10-15T11:55:56.767372+02:00 controller-21 kernel: [ 372.348896] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d3 error=5 type=2 offset=170921749504 size=9216 flags=40080c80
2021-10-15T11:56:48.734371+02:00 controller-21 kernel: [ 424.310414] blk_update_request: I/O error, dev sdd, sector 335967046
2021-10-15T11:56:48.734371+02:00 controller-21 kernel: [ 424.310418] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4e0 error=5 type=2 offset=170937027584 size=6656 flags=40080c80
2021-10-15T11:57:27.710369+02:00 controller-21 kernel: [ 463.282832] blk_update_request: I/O error, dev sdf, sector 183934405
2021-10-15T11:57:27.710369+02:00 controller-21 kernel: [ 463.282837] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d3 error=5 type=2 offset=93096315392 size=8192 flags=40080c80
2021-10-15T11:59:22.720958+02:00 controller-21 kernel: [ 578.280205] blk_update_request: I/O error, dev sdm, sector 249524636
2021-10-15T11:59:22.720958+02:00 controller-21 kernel: [ 578.280210] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d423c657b error=5 type=2 offset=126678513664 size=8192 flags=40080c80
2021-10-15T12:00:02.654389+02:00 controller-21 kernel: [ 618.212548] blk_update_request: I/O error, dev sdg, sector 336037547
2021-10-15T12:00:02.654389+02:00 controller-21 kernel: [ 618.212552] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d4 error=5 type=2 offset=170973124096 size=8192 flags=40080c80
2021-10-15T12:00:39.710381+02:00 controller-21 kernel: [ 655.265140] blk_update_request: I/O error, dev sdi, sector 184003452
2021-10-15T12:00:39.710381+02:00 controller-21 kernel: [ 655.265145] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d2 error=5 type=2 offset=93131667456 size=11776 flags=40080c80
2021-10-15T12:01:18.686373+02:00 controller-21 kernel: [ 694.237890] blk_update_request: I/O error, dev sdi, sector 336051191
2021-10-15T12:01:18.686373+02:00 controller-21 kernel: [ 694.237894] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4d2 error=5 type=2 offset=170980109824 size=8192 flags=40080c80
2021-10-15T12:02:06.687818+02:00 controller-21 kernel: [ 742.233127] blk_update_request: I/O error, dev sde, sector 184031306
2021-10-15T12:02:06.687818+02:00 controller-21 kernel: [ 742.233131] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4df error=5 type=2 offset=93145928704 size=8704 flags=40080c80
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249653] blk_update_request: I/O error, dev sdl, sector 184058234
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249657] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4280c3ec error=5 type=2 offset=93159715840 size=11264 flags=40080c80
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229807] blk_update_request: I/O error, dev sdc, sector 336111453
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229812] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4e2 error=5 type=2 offset=171010963968 size=8704 flags=40080c80
2021-10-15T12:03:27.242384+02:00 controller-21 kernel: [ 822.781578] sd 14:0:0:0: [sdl] tag#3 abort scheduled
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791567] sd 14:0:0:0: [sdl] tag#3 aborting command
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791570] sd 14:0:0:0: [sdl] tag#3 cmd abort failed
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791576] scsi host14: scsi_eh_14: waking up 0/1/1
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791585] ata15.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791587] ata15.00: failed command: WRITE DMA
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791591] ata15.00: cmd ca/00:16:7a:81:f8/00:00:00:00:00/ea tag 3 dma 11264 out
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791591] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791593] ata15.00: status: { DRDY }
2021-10-15T12:03:27.252367+02:00 controller-21 kernel: [ 822.791597] ata15: hard resetting link
2021-10-15T12:03:27.708381+02:00 controller-21 kernel: [ 823.247532] ata15: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
2021-10-15T12:03:27.708381+02:00 controller-21 kernel: [ 823.247738] ata15.00: supports DRM functions and may not be fully accessible
2021-10-15T12:03:27.708381+02:00 controller-21 kernel: [ 823.248269] ata15.00: disabling queued TRIM support
2021-10-15T12:03:27.709366+02:00 controller-21 kernel: [ 823.248822] ata15.00: supports DRM functions and may not be fully accessible
2021-10-15T12:03:27.709366+02:00 controller-21 kernel: [ 823.249305] ata15.00: disabling queued TRIM support
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249633] ata15.00: configured for UDMA/133
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249637] ata15.00: device reported invalid CHS sector 0
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249641] sd 14:0:0:0: [sdl] tag#3 scsi_eh_14: flush finish cmd
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249646] sd 14:0:0:0: [sdl] tag#3 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249648] sd 14:0:0:0: [sdl] tag#3 Sense Key : Illegal Request [current]
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249649] sd 14:0:0:0: [sdl] tag#3 Add. Sense: Unaligned write command
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249652] sd 14:0:0:0: [sdl] tag#3 CDB: Write(10) 2a 00 0a f8 81 7a 00 00 16 00
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249653] blk_update_request: I/O error, dev sdl, sector 184058234
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249657] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4280c3ec error=5 type=2 offset=93159715840 size=11264 flags=40080c80
2021-10-15T12:03:27.710380+02:00 controller-21 kernel: [ 823.249664] ata15: EH complete
2021-10-15T12:03:27.710474+02:00 controller-21 kernel: [ 823.249666] scsi host14: waking up host to restart
2021-10-15T12:03:27.710474+02:00 controller-21 kernel: [ 823.249671] scsi host14: scsi_eh_14: sleeping
2021-10-15T12:04:09.226380+02:00 controller-21 kernel: [ 864.761706] sd 17:0:0:0: [sdc] tag#18 abort scheduled
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771700] sd 17:0:0:0: [sdc] tag#18 aborting command
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771703] sd 17:0:0:0: [sdc] tag#18 cmd abort failed
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771708] scsi host17: scsi_eh_17: waking up 0/1/1
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771717] ata18.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771720] ata18.00: failed command: WRITE DMA EXT
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771724] ata18.00: cmd 35/00:11:5d:a7:08/00:00:14:00:00/e0 tag 18 dma 8704 out
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771724] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771726] ata18.00: status: { DRDY }
2021-10-15T12:04:09.236366+02:00 controller-21 kernel: [ 864.771729] ata18: hard resetting link
2021-10-15T12:04:09.692387+02:00 controller-21 kernel: [ 865.227673] ata18: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
2021-10-15T12:04:09.692387+02:00 controller-21 kernel: [ 865.227881] ata18.00: supports DRM functions and may not be fully accessible
2021-10-15T12:04:09.692387+02:00 controller-21 kernel: [ 865.228411] ata18.00: disabling queued TRIM support
2021-10-15T12:04:09.693367+02:00 controller-21 kernel: [ 865.228969] ata18.00: supports DRM functions and may not be fully accessible
2021-10-15T12:04:09.693367+02:00 controller-21 kernel: [ 865.229453] ata18.00: disabling queued TRIM support
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229787] ata18.00: configured for UDMA/133
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229790] ata18.00: device reported invalid CHS sector 0
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229794] sd 17:0:0:0: [sdc] tag#18 scsi_eh_17: flush finish cmd
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229800] sd 17:0:0:0: [sdc] tag#18 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229802] sd 17:0:0:0: [sdc] tag#18 Sense Key : Illegal Request [current]
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229803] sd 17:0:0:0: [sdc] tag#18 Add. Sense: Unaligned write command
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229805] sd 17:0:0:0: [sdc] tag#18 CDB: Write(10) 2a 00 14 08 a7 5d 00 00 11 00
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229807] blk_update_request: I/O error, dev sdc, sector 336111453
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229812] zio pool=zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 vdev=/opt/fast/dev/bricks/e5082739-2cc0-42ca-837a-e9aedf739b58/zfs-0x5002538d4282b4e2 error=5 type=2 offset=171010963968 size=8704 flags=40080c80
2021-10-15T12:04:09.694372+02:00 controller-21 kernel: [ 865.229820] ata18: EH complete
2021-10-15T12:04:09.694452+02:00 controller-21 kernel: [ 865.229822] scsi host17: waking up host to restart
2021-10-15T12:04:09.694452+02:00 controller-21 kernel: [ 865.229828] scsi host17: scsi_eh_17: sleeping
```
resulting in
```
pool: zfs-235cd8b3-835a-4216-a38b-b52c5c566f55
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
scan: resilvered 1.14G in 00:00:17 with 0 errors on Tue Oct 12 14:53:08 2021
config:
NAME STATE READ WRITE CKSUM
zfs-235cd8b3-835a-4216-a38b-b52c5c566f55 ONLINE 0 0 0
raidz3-0 ONLINE 0 0 0
zfs-0x5002538d4282b4e3 ONLINE 0 0 0
zfs-0x5002538d4282b4e2 ONLINE 0 18 0
zfs-0x5002538d4282b4e0 ONLINE 0 14 0
zfs-0x5002538d4282b4df ONLINE 0 18 0
zfs-0x5002538d4282b4d3 ONLINE 0 36 0
zfs-0x5002538d4282b4d4 ONLINE 0 17 0
zfs-0x5002538d4282b4d1 ONLINE 0 0 0
zfs-0x5002538d4282b4d2 ONLINE 0 41 0
zfs-0x5002538d4280bd98 ONLINE 0 0 0
zfs-0x5002538d423c659f ONLINE 0 0 0
zfs-0x5002538d4280c3ec ONLINE 0 23 0
zfs-0x5002538d423c657b ONLINE 0 17 0
errors: No known data errors
```
|
defect
|
writing to a pool with a high fragmented free space shows write errors thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name gentoo distribution version linux kernel architecture openzfs version command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing i am using a pool that shows a high fragmentation of the free space zpool list name size alloc free ckpoint expandsz frag cap dedup health altroot zfs online the pool uses ssd disks pool zfs state online status one or more devices has experienced an unrecoverable error an attempt was made to correct the error applications are unaffected action determine if the device needs to be replaced and clear the errors using zpool clear or replace the device with zpool replace see scan resilvered in with errors on tue oct config name state read write cksum zfs online online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online errors no known data errors this pool was created with version with this version this pool shows a low write performance due to high cpu load during metaslab allocation metaslab load all disks are working fine and no write errors are visible with version fragmentation zdb m zfs pool zfs fragmentation sample space map object zdb mmm zfs is multiple times slower compared to version zdb mm zfs is fast zdb mmm zfs space map object smp length smp alloc metaslab offset spacemap free segments maxsize freepct in memory histogram on disk histogram fragmentation using this pool with version now shows write errors other pools are working fine on my setup describe how to reproduce the problem using 
the fio tool to write data fio name opt zfs mount voldata tmp rw write direct ioengine libaio bs numjobs size runtime group reporting include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with shows controller kernel blk update request i o error dev sdf sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel blk update request i o error dev sdd sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel blk update request i o error dev sdf sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel blk update request i o error dev sdm sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel blk update request i o error dev sdg sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel blk update request i o error dev sdi sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel blk update request i o error dev sdi sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel blk update request i o error dev sde sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel blk update request i o error dev sdl sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel blk update request i o error dev sdc sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel sd tag abort scheduled controller kernel sd tag aborting 
command controller kernel sd tag cmd abort failed controller kernel scsi scsi eh waking up controller kernel exception emask sact serr action frozen controller kernel failed command write dma controller kernel cmd ca ea tag dma out controller kernel res emask timeout controller kernel status drdy controller kernel hard resetting link controller kernel sata link up gbps sstatus scontrol controller kernel supports drm functions and may not be fully accessible controller kernel disabling queued trim support controller kernel supports drm functions and may not be fully accessible controller kernel disabling queued trim support controller kernel configured for udma controller kernel device reported invalid chs sector controller kernel sd tag scsi eh flush finish cmd controller kernel sd tag failed result hostbyte did ok driverbyte driver sense controller kernel sd tag sense key illegal request controller kernel sd tag add sense unaligned write command controller kernel sd tag cdb write controller kernel blk update request i o error dev sdl sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel eh complete controller kernel scsi waking up host to restart controller kernel scsi scsi eh sleeping controller kernel sd tag abort scheduled controller kernel sd tag aborting command controller kernel sd tag cmd abort failed controller kernel scsi scsi eh waking up controller kernel exception emask sact serr action frozen controller kernel failed command write dma ext controller kernel cmd tag dma out controller kernel res emask timeout controller kernel status drdy controller kernel hard resetting link controller kernel sata link up gbps sstatus scontrol controller kernel supports drm functions and may not be fully accessible controller kernel disabling queued trim support controller kernel supports drm functions and may not be fully accessible controller kernel disabling queued trim support controller kernel configured 
for udma controller kernel device reported invalid chs sector controller kernel sd tag scsi eh flush finish cmd controller kernel sd tag failed result hostbyte did ok driverbyte driver sense controller kernel sd tag sense key illegal request controller kernel sd tag add sense unaligned write command controller kernel sd tag cdb write controller kernel blk update request i o error dev sdc sector controller kernel zio pool zfs vdev opt fast dev bricks zfs error type offset size flags controller kernel eh complete controller kernel scsi waking up host to restart controller kernel scsi scsi eh sleeping resulting in pool zfs state online status one or more devices has experienced an unrecoverable error an attempt was made to correct the error applications are unaffected action determine if the device needs to be replaced and clear the errors using zpool clear or replace the device with zpool replace see scan resilvered in with errors on tue oct config name state read write cksum zfs online online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online zfs online errors no known data errors
| 1
|
9,153
| 6,779,816,773
|
IssuesEvent
|
2017-10-29 05:28:44
|
melalawi/GlobalTwitchEmotes
|
https://api.github.com/repos/melalawi/GlobalTwitchEmotes
|
closed
|
BrowserAction/Options Pages Delayed On Open
|
performance
|
Due to these pages accessing sync storage/indexeddb independently.
|
True
|
BrowserAction/Options Pages Delayed On Open - Due to these pages accessing sync storage/indexeddb independently.
|
non_defect
|
browseraction options pages delayed on open due to these pages accessing sync storage indexeddb independently
| 0
|
65,071
| 19,089,564,763
|
IssuesEvent
|
2021-11-29 10:32:23
|
vector-im/element-android
|
https://api.github.com/repos/vector-im/element-android
|
closed
|
Emoji autocompleter sometimes leaves search string after replacement
|
T-Defect A-Emoji S-Minor O-Occasional
|
#### Describe the bug
When searching for an emoji via `:` autocompletion sometimes it leaves the search text after inserting the emoji. It only happens occasionally but I've found one reliable way to reproduce it, see below:
#### To Reproduce
Steps to reproduce the behavior:
1. Type `:tada` (it's important to type exactly this)
2. Select the :tada: emoji from the completer
3. The message composer will now say :tada:`tada`
#### Expected behavior
Tada text is removed
#### Smartphone (please complete the following information):
- Device: Nexus 5X
- OS: Android 8.1 (Lineage)
#### Additional context
- App version and store Schildichat `1.1.7sc33` (I *think* this reproduces with EA as well)
- Using AOSP keyboard.
|
1.0
|
Emoji autocompleter sometimes leaves search string after replacement - #### Describe the bug
When searching for an emoji via `:` autocompletion sometimes it leaves the search text after inserting the emoji. It only happens occasionally but I've found one reliable way to reproduce it, see below:
#### To Reproduce
Steps to reproduce the behavior:
1. Type `:tada` (it's important to type exactly this)
2. Select the :tada: emoji from the completer
3. The message composer will now say :tada:`tada`
#### Expected behavior
Tada text is removed
#### Smartphone (please complete the following information):
- Device: Nexus 5X
- OS: Android 8.1 (Lineage)
#### Additional context
- App version and store Schildichat `1.1.7sc33` (I *think* this reproduces with EA as well)
- Using AOSP keyboard.
|
defect
|
emoji autocompleter sometimes leaves search string after replacement describe the bug when searching for an emoji via autocompletion sometimes it leaves the search text after inserting the emoji it only happens occasionally but i ve found one reliable way to reproduce it see below to reproduce steps to reproduce the behavior type tada it s important to type exactly this select the tada emoji from the completer the message composer will now say tada tada expected behavior tada text is removed smartphone please complete the following information device nexus os android lineage additional context app version and store schildichat i think this reproduces with ea as well using aosp keyboard
| 1
|
76,398
| 26,409,144,585
|
IssuesEvent
|
2023-01-13 10:39:56
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
opened
|
BUG: Segfault in scipy.sparse.csgraph.shortest_path() with v1.10.0
|
defect
|
### Describe your issue.
Hello! Since last week’s upgrade from scipy 1.9.3 to 1.10.0, we have encountered a reproducible segmentation fault on running `scipy.sparse.csgraph.shortest_path()` with a particular weighted adjacency matrix and two `method` parameters. The matrix is attached. The problem has been observed with Python 3.9.15 on both macOS 12.6.2/arm64 and Debian 11.6/x86\_64.
[test_case.zip](https://github.com/scipy/scipy/files/10410688/test_case.zip)
### Reproducing Code Example
```python
import scipy.sparse
mat = scipy.sparse.load_npz("test_case.npz")
# these crash
scipy.sparse.csgraph.shortest_path(mat)
scipy.sparse.csgraph.shortest_path(mat, method="D")
scipy.sparse.csgraph.shortest_path(mat, method="J")
# these don't
scipy.sparse.csgraph.shortest_path(mat, method="FW")
scipy.sparse.csgraph.shortest_path(mat, method="BF")
```
### Error message
```shell
segmentation fault
```
### SciPy/NumPy/Python version information
1.10.0 1.24.1 sys.version_info(major=3, minor=8, micro=15, releaselevel='final', serial=0)
|
1.0
|
BUG: Segfault in scipy.sparse.csgraph.shortest_path() with v1.10.0 - ### Describe your issue.
Hello! Since last week’s upgrade from scipy 1.9.3 to 1.10.0, we have encountered a reproducible segmentation fault on running `scipy.sparse.csgraph.shortest_path()` with a particular weighted adjacency matrix and two `method` parameters. The matrix is attached. The problem has been observed with Python 3.9.15 on both macOS 12.6.2/arm64 and Debian 11.6/x86\_64.
[test_case.zip](https://github.com/scipy/scipy/files/10410688/test_case.zip)
### Reproducing Code Example
```python
import scipy.sparse
mat = scipy.sparse.load_npz("test_case.npz")
# these crash
scipy.sparse.csgraph.shortest_path(mat)
scipy.sparse.csgraph.shortest_path(mat, method="D")
scipy.sparse.csgraph.shortest_path(mat, method="J")
# these don't
scipy.sparse.csgraph.shortest_path(mat, method="FW")
scipy.sparse.csgraph.shortest_path(mat, method="BF")
```
### Error message
```shell
segmentation fault
```
### SciPy/NumPy/Python version information
1.10.0 1.24.1 sys.version_info(major=3, minor=8, micro=15, releaselevel='final', serial=0)
|
defect
|
bug segfault in scipy sparse csgraph shortest path with describe your issue hello since last week’s upgrade from scipy to we have encountered a reproducible segmentation fault on running scipy sparse csgraph shortest path with a particular weighted adjacency matrix and two method parameters the matrix is attached the problem has been observed with python on both macos and debian reproducing code example python import scipy sparse mat scipy sparse load npz test case npz these crash scipy sparse csgraph shortest path mat scipy sparse csgraph shortest path mat method d scipy sparse csgraph shortest path mat method j these don t scipy sparse csgraph shortest path mat method fw scipy sparse csgraph shortest path mat method bf error message shell segmentation fault scipy numpy python version information sys version info major minor micro releaselevel final serial
| 1
|
446,328
| 12,855,097,458
|
IssuesEvent
|
2020-07-09 04:05:50
|
acl-org/acl-2020-virtual-conference
|
https://api.github.com/repos/acl-org/acl-2020-virtual-conference
|
closed
|
[Workshops] Verify all pre-recorded talks for July 9
|
priority:high
|
- Go to each workshop page. E.g. https://virtual.acl2020.org/workshop_W4.html
- Click on each pre-recorded talk and make sure they are visible.
- Verify that talks marked as 'LIVE PRESENTATION' in [livestream schedule spreadsheet](https://docs.google.com/spreadsheets/d/1BbjMuZyROC973mukGks45h4bwZwDziJmv8B1RNLfz1Q/edit#gid=1496320563) do not appear in w_papers.csv since those are not pre-recorded. For e.g.

|
1.0
|
[Workshops] Verify all pre-recorded talks for July 9 - - Go to each workshop page. E.g. https://virtual.acl2020.org/workshop_W4.html
- Click on each pre-recorded talk and make sure they are visible.
- Verify that talks marked as 'LIVE PRESENTATION' in [livestream schedule spreadsheet](https://docs.google.com/spreadsheets/d/1BbjMuZyROC973mukGks45h4bwZwDziJmv8B1RNLfz1Q/edit#gid=1496320563) do not appear in w_papers.csv since those are not pre-recorded. For e.g.

|
non_defect
|
verify all pre recorded talks for july go to each workshop page e g click on each pre recorded talk and make sure they are visible verify that talks marked as live presentation in do not appear in w papers csv since those are not pre recorded for e g
| 0
|
78,496
| 27,554,323,613
|
IssuesEvent
|
2023-03-07 16:50:53
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
closed
|
Investigate impacts of retry-fetch bug on Search, Jan 25 - Feb 2
|
Defect VA.gov frontend Public Websites VA.gov Search
|
## Description
### User story
> _Describe the audience/user, enhancement or fix, and value / outcome desired._
**AS A** Veteran
**I WANT** the team that owns site search on VAgov to ensure issues are noticed
**SO THAT** I don't experience failures for significant periods of time.
### Engineering notes / background
We heard from Steve Albers in a [slack thread](https://dsva.slack.com/archives/C52CL1PKQ/p1676477178721839) that a post-mortem on an issue impacting "letter downloads" also appears to have impacted va.gov/search. The problematic `fetch-retry` library was present in production from Jan 25 to Feb 2, 2023.
Details are in their post-mortem. Specifically, the [screenshot from Sentry monitoring](https://github.com/department-of-veterans-affairs/va.gov-team-sensitive/blob/va-albers-file-download-postmortem/Postmortems/2023/2023-02-03-letter-downloads.md#impact) shows significant impact to Search – see rows 1, 3, 4, and 8.
### Analytics considerations
na
### Quality / testing notes
na
## Acceptance criteria
- [x] Understand how the errors seen in Sentry could have impacted search
- [x] Document whether/how we think Veterans were impacted
- [ ] Document what evidence we should see in analytic platforms: Sentry, GA
- [x] Investigate why we didn't see issues
- [ ] What tests are present to protect Veterans from failures in Search
~~Document recommendations for better instrumenting Search~~ - because this bug / fix don't sit with the FE implementation, this AC is not relevant
- [x] Write ticket(s) for implementing tests or analytics to prevent these issues with Search
- [x] Update onsite-search product brief with notes about the API token / management
### Team
Please check the team(s) that will do this work.
- [ ] `CMS Team`
- [X] `Public Websites`
- [ ] `Facilities`
- [ ] `User support`
|
1.0
|
Investigate impacts of retry-fetch bug on Search, Jan 25 - Feb 2 - ## Description
### User story
> _Describe the audience/user, enhancement or fix, and value / outcome desired._
**AS A** Veteran
**I WANT** the team that owns site search on VAgov to ensure issues are noticed
**SO THAT** I don't experience failures for significant periods of time.
### Engineering notes / background
We heard from Steve Albers in a [slack thread](https://dsva.slack.com/archives/C52CL1PKQ/p1676477178721839) that a post-mortem on an issue impacting "letter downloads" also appears to have impacted va.gov/search. The problematic `fetch-retry` library was present in production from Jan 25 to Feb 2, 2023.
Details are in their post-mortem. Specifically, the [screenshot from Sentry monitoring](https://github.com/department-of-veterans-affairs/va.gov-team-sensitive/blob/va-albers-file-download-postmortem/Postmortems/2023/2023-02-03-letter-downloads.md#impact) shows significant impact to Search – see rows 1, 3, 4, and 8.
### Analytics considerations
na
### Quality / testing notes
na
## Acceptance criteria
- [x] Understand how the errors seen in Sentry could have impacted search
- [x] Document whether/how we think Veterans were impacted
- [ ] Document what evidence we should see in analytic platforms: Sentry, GA
- [x] Investigate why we didn't see issues
- [ ] What tests are present to protect Veterans from failures in Search
~~Document recommendations for better instrumenting Search~~ - because this bug / fix don't sit with the FE implementation, this AC is not relevant
- [x] Write ticket(s) for implementing tests or analytics to prevent these issues with Search
- [x] Update onsite-search product brief with notes about the API token / management
### Team
Please check the team(s) that will do this work.
- [ ] `CMS Team`
- [X] `Public Websites`
- [ ] `Facilities`
- [ ] `User support`
|
defect
|
investigate impacts of retry fetch bug on search jan feb description user story describe the audience user enhancement or fix and value outcome desired as a veteran i want the team that owns site search on vagov to ensure issues are noticed so that i don t experience failures for significant periods of time engineering notes background we heard from steve albers in a that a post mortem on an issue impacting letter downloads also appears to have impacted va gov search the problematic fetch retry library was present in production from jan to feb details are in their post mortem specifically the shows significant impact to search – see rows and analytics considerations na quality testing notes na acceptance criteria understand how the errors seen in sentry could have impacted search document whether how we think veterans were impacted document what evidence we should see in analytic platforms sentry ga investigate why we didn t see issues what tests are present to protect veterans from failures in search document recommendations for better instrumenting search because this bug fix don t sit with the fe implementation this ac is not relevant write ticket s for implementing tests or analytics to prevent these issues with search update onsite search product brief with notes about the api token management team please check the team s that will do this work cms team public websites facilities user support
| 1
|
56,522
| 15,161,342,292
|
IssuesEvent
|
2021-02-12 08:53:03
|
jOOQ/jOOQ
|
https://api.github.com/repos/jOOQ/jOOQ
|
closed
|
Parser::parseStatements parses DECLARE <decl> BEGIN <stmt> END; into bad structure
|
C: Functionality E: Enterprise Edition E: Professional Edition P: Medium R: Fixed T: Defect
|
The parser parses the PL/SQL block:
```sql
DECLARE
<decl>
BEGIN
<stmt>
END;
```
Into:
```java
statements(
declare(<decl>),
begin(<stmt>)
);
```
Instead of
```java
begin(
declare(<decl>),
<stmt>
);
```
This leads to inconsistencies when generating the resulting SQL again, including additional `BEGIN` blocks in Oracle, such as:
```sql
BEGIN
DECLARE
<decl>
BEGIN
<stmt>
END;
END;
```
Or even:
```sql
DECLARE
<decl>
BEGIN
BEGIN
<stmt>
END;
END;
```
See also: https://github.com/jOOQ/jOOQ/issues/11370#issuecomment-778043048
|
1.0
|
Parser::parseStatements parses DECLARE <decl> BEGIN <stmt> END; into bad structure - The parser parses the PL/SQL block:
```sql
DECLARE
<decl>
BEGIN
<stmt>
END;
```
Into:
```java
statements(
declare(<decl>),
begin(<stmt>)
);
```
Instead of
```java
begin(
declare(<decl>),
<stmt>
);
```
This leads to inconsistencies when generating the resulting SQL again, including additional `BEGIN` blocks in Oracle, such as:
```sql
BEGIN
DECLARE
<decl>
BEGIN
<stmt>
END;
END;
```
Or even:
```sql
DECLARE
<decl>
BEGIN
BEGIN
<stmt>
END;
END;
```
See also: https://github.com/jOOQ/jOOQ/issues/11370#issuecomment-778043048
|
defect
|
parser parsestatements parses declare begin end into bad structure the parser parses the pl sql block sql declare begin end into java statements declare begin instead of java begin declare this leads to inconsistencies when generating the resulting sql again including additional begin blocks in oracle such as sql begin declare begin end end or even sql declare begin begin end end see also
| 1
|
123,916
| 16,550,575,765
|
IssuesEvent
|
2021-05-28 08:07:00
|
sourcegraph/sourcegraph
|
https://api.github.com/repos/sourcegraph/sourcegraph
|
closed
|
Search input should have border radius on left corners
|
bug design-qa team/search
|
### Problem
With the introduction of the contexts selector, I think we lost a small design detail: the left corners of the search input have no radius, while the right corners of the whole (the search button) have a `2px` radius.
<img width="758" alt="CleanShot 2021-03-12 at 10 44 18@2x" src="https://user-images.githubusercontent.com/6304497/110922380-e8ecf400-831f-11eb-90d4-80e12f5c8692.png">
### Solution
The top left and bottom left corners of the search input should have `2px` radius on both the home page and the top nav bar.
|
1.0
|
Search input should have border radius on left corners - ### Problem
With the introduction of the contexts selector, I think we lost a small design detail: the left corners of the search input have no radius, while the right corners of the whole (the search button) have a `2px` radius.
<img width="758" alt="CleanShot 2021-03-12 at 10 44 18@2x" src="https://user-images.githubusercontent.com/6304497/110922380-e8ecf400-831f-11eb-90d4-80e12f5c8692.png">
### Solution
The top left and bottom left corners of the search input should have `2px` radius on both the home page and the top nav bar.
|
non_defect
|
search input should have border radius on left corners problem with the introduction of the contexts selector i think we lost a small design detail the left corners of the search input have no radius while the right corners of the whole the search button have a radius img width alt cleanshot at src solution the top left and bottom left corners of the search input should have radius on both the home page and the top nav bar
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.