Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 5 112 | repo_url stringlengths 34 141 | action stringclasses 3 values | title stringlengths 1 757 | labels stringlengths 4 664 | body stringlengths 3 261k | index stringclasses 10 values | text_combine stringlengths 96 261k | label stringclasses 2 values | text stringlengths 96 232k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
34,917 | 7,471,046,904 | IssuesEvent | 2018-04-03 07:56:54 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | opened | selectOneMenu : disabled on using placeholder | 6.2.3 defect | Reported By PRO User;
> We are using p:selectOneMenu with placeholder option.
We noticed that ui-state-disabled is added when placeholder is present which makes it look disabled.
ex:
```
<p:selectOneMenu id="console" value="" style="width:125px" placeholder="Select One">
<f:selectItem itemLabel="" itemValue="" />
<f:selectItem itemLabel="Xbox One" itemValue="Xbox One" />
<f:selectItem itemLabel="PS4" itemValue="PS4" />
<f:selectItem itemLabel="Wii U" itemValue="Wii U" />
</p:selectOneMenu>
``` | 1.0 | selectOneMenu : disabled on using placeholder - Reported By PRO User;
> We are using p:selectOneMenu with placeholder option.
We noticed that ui-state-disabled is added when placeholder is present which makes it look disabled.
ex:
```
<p:selectOneMenu id="console" value="" style="width:125px" placeholder="Select One">
<f:selectItem itemLabel="" itemValue="" />
<f:selectItem itemLabel="Xbox One" itemValue="Xbox One" />
<f:selectItem itemLabel="PS4" itemValue="PS4" />
<f:selectItem itemLabel="Wii U" itemValue="Wii U" />
</p:selectOneMenu>
``` | defect | selectonemenu disabled on using placeholder reported by pro user we are using p selectonemenu with placeholder option we noticed that ui state disabled is added when placeholder is present which makes it look disabled ex | 1 |
61,538 | 17,023,719,703 | IssuesEvent | 2021-07-03 03:28:47 | tomhughes/trac-tickets | https://api.github.com/repos/tomhughes/trac-tickets | closed | [landcover] Mapnik: Render Dune | Component: mapnik Priority: minor Resolution: duplicate Type: defect | **[Submitted to the original trac issue database at 1.46pm, Thursday, 9th June 2011]**
it would be nice to have a render-possibility for the dune-object [1] because when the map area is near the coast this is a dominant and important object [2].
regards Jan alias Lbeck :-)
[1] http://wiki.openstreetmap.org/wiki/Tag:natural%3Ddune
[2] http://www.openstreetmap.org/?lat=54.47866&lon=12.51828&zoom=15&layers=M | 1.0 | [landcover] Mapnik: Render Dune - **[Submitted to the original trac issue database at 1.46pm, Thursday, 9th June 2011]**
it would be nice to have a render-possibility for the dune-object [1] because when the map area is near the coast this is a dominant and important object [2].
regards Jan alias Lbeck :-)
[1] http://wiki.openstreetmap.org/wiki/Tag:natural%3Ddune
[2] http://www.openstreetmap.org/?lat=54.47866&lon=12.51828&zoom=15&layers=M | defect | mapnik render dune it would be nice to have a render possibility for the dune object because when the map area is near the coast this is a dominant and important object regards jan alias lbeck | 1 |
70,746 | 23,302,171,928 | IssuesEvent | 2022-08-07 13:40:38 | openzfs/zfs | https://api.github.com/repos/openzfs/zfs | opened | Warnings on compiling on Debian 11 AArch64 | Type: Defect | ### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | 11.4
Kernel Version | 5.10.0-16-arm64
Architecture | AArch64
OpenZFS Version | 947465b9
### Describe the problem you're observing
Given the interest in being -Werror-clean, I thought this was worth reporting after seeing it while testing #13741:
```
lib/libzfs/libzfs_dataset.c: In function ‘zfs_rename’:
lib/libzfs/libzfs_dataset.c:4416:1: note: parameter passing for argument of type ‘renameflags_t’ {aka ‘struct renameflags’} changed in GCC 9.1
4416 | zfs_rename(zfs_handle_t *zhp, const char *target, renameflags_t flags)
| ^~~~~~~~~~
CC lib/libzfs/os/linux/libzfs_la-libzfs_mount_os.lo
CC lib/libzfs/os/linux/libzfs_la-libzfs_pool_os.lo
lib/libzfs/libzfs_pool.c: In function ‘zpool_valid_proplist’:
lib/libzfs/libzfs_pool.c:453:1: note: parameter passing for argument of type ‘prop_flags_t’ {aka ‘struct prop_flags’} changed in GCC 9.1
453 | zpool_valid_proplist(libzfs_handle_t *hdl, const char *poolname,
| ^~~~~~~~~~~~~~~~~~~~
lib/libzfs/libzfs_dataset.c: In function ‘zfs_rename’:
lib/libzfs/libzfs_dataset.c:4416:1: note: parameter passing for argument of type ‘renameflags_t’ {aka ‘struct renameflags’} changed in GCC 9.1
4416 | zfs_rename(zfs_handle_t *zhp, const char *target, renameflags_t flags)
| ^~~~~~~~~~
CC lib/libzfs/os/linux/libzfs_la-libzfs_util_os.lo
CC lib/libshare/libshare_la-libshare.lo
lib/libzfs/libzfs_pool.c: In function ‘zpool_valid_proplist’:
lib/libzfs/libzfs_pool.c:453:1: note: parameter passing for argument of type ‘prop_flags_t’ {aka ‘struct prop_flags’} changed in GCC 9.1
453 | zpool_valid_proplist(libzfs_handle_t *hdl, const char *poolname,
| ^~~~~~~~~~~~~~~~~~~~
```
```
$ gcc --version
gcc (Debian 10.2.1-6) 10.2.1 20210110
```
### Describe how to reproduce the problem
Build without --enable-debug. (I presume --enable-debug will also do this, but that's not what I was doing.)
### Include any warning/errors/backtraces from the system logs
Above. | 1.0 | Warnings on compiling on Debian 11 AArch64 - ### System information
<!-- add version after "|" character -->
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | 11.4
Kernel Version | 5.10.0-16-arm64
Architecture | AArch64
OpenZFS Version | 947465b9
### Describe the problem you're observing
Given the interest in being -Werror-clean, I thought this was worth reporting after seeing it while testing #13741:
```
lib/libzfs/libzfs_dataset.c: In function ‘zfs_rename’:
lib/libzfs/libzfs_dataset.c:4416:1: note: parameter passing for argument of type ‘renameflags_t’ {aka ‘struct renameflags’} changed in GCC 9.1
4416 | zfs_rename(zfs_handle_t *zhp, const char *target, renameflags_t flags)
| ^~~~~~~~~~
CC lib/libzfs/os/linux/libzfs_la-libzfs_mount_os.lo
CC lib/libzfs/os/linux/libzfs_la-libzfs_pool_os.lo
lib/libzfs/libzfs_pool.c: In function ‘zpool_valid_proplist’:
lib/libzfs/libzfs_pool.c:453:1: note: parameter passing for argument of type ‘prop_flags_t’ {aka ‘struct prop_flags’} changed in GCC 9.1
453 | zpool_valid_proplist(libzfs_handle_t *hdl, const char *poolname,
| ^~~~~~~~~~~~~~~~~~~~
lib/libzfs/libzfs_dataset.c: In function ‘zfs_rename’:
lib/libzfs/libzfs_dataset.c:4416:1: note: parameter passing for argument of type ‘renameflags_t’ {aka ‘struct renameflags’} changed in GCC 9.1
4416 | zfs_rename(zfs_handle_t *zhp, const char *target, renameflags_t flags)
| ^~~~~~~~~~
CC lib/libzfs/os/linux/libzfs_la-libzfs_util_os.lo
CC lib/libshare/libshare_la-libshare.lo
lib/libzfs/libzfs_pool.c: In function ‘zpool_valid_proplist’:
lib/libzfs/libzfs_pool.c:453:1: note: parameter passing for argument of type ‘prop_flags_t’ {aka ‘struct prop_flags’} changed in GCC 9.1
453 | zpool_valid_proplist(libzfs_handle_t *hdl, const char *poolname,
| ^~~~~~~~~~~~~~~~~~~~
```
```
$ gcc --version
gcc (Debian 10.2.1-6) 10.2.1 20210110
```
### Describe how to reproduce the problem
Build without --enable-debug. (I presume --enable-debug will also do this, but that's not what I was doing.)
### Include any warning/errors/backtraces from the system logs
Above. | defect | warnings on compiling on debian system information type version name distribution name debian distribution version kernel version architecture openzfs version describe the problem you re observing given the interest in being werror clean i thought this was worth reporting after seeing it while testing lib libzfs libzfs dataset c in function ‘zfs rename’ lib libzfs libzfs dataset c note parameter passing for argument of type ‘renameflags t’ aka ‘struct renameflags’ changed in gcc zfs rename zfs handle t zhp const char target renameflags t flags cc lib libzfs os linux libzfs la libzfs mount os lo cc lib libzfs os linux libzfs la libzfs pool os lo lib libzfs libzfs pool c in function ‘zpool valid proplist’ lib libzfs libzfs pool c note parameter passing for argument of type ‘prop flags t’ aka ‘struct prop flags’ changed in gcc zpool valid proplist libzfs handle t hdl const char poolname lib libzfs libzfs dataset c in function ‘zfs rename’ lib libzfs libzfs dataset c note parameter passing for argument of type ‘renameflags t’ aka ‘struct renameflags’ changed in gcc zfs rename zfs handle t zhp const char target renameflags t flags cc lib libzfs os linux libzfs la libzfs util os lo cc lib libshare libshare la libshare lo lib libzfs libzfs pool c in function ‘zpool valid proplist’ lib libzfs libzfs pool c note parameter passing for argument of type ‘prop flags t’ aka ‘struct prop flags’ changed in gcc zpool valid proplist libzfs handle t hdl const char poolname gcc version gcc debian describe how to reproduce the problem build without enable debug i presume enable debug will also do this but that s not what i was doing include any warning errors backtraces from the system logs above | 1 |
13,619 | 2,772,949,671 | IssuesEvent | 2015-05-03 05:40:52 | agronholm/pythonfutures | https://api.github.com/repos/agronholm/pythonfutures | closed | ThreadPoolExecutor#map() doesn't start tasks until generator is used | auto-migrated Priority-Medium Type-Defect | ```
See this simple source: http://paste.pound-python.org/show/ZTWE9gxRpAX5qsgMGS2H/
On Python 3, the prints happen before the generator is used at all. On Python
2, they don't happen and RuntimeError is raised when trying to iterate on the
returned generator.
Python 2.7.8 on OS X 10.9 with futures 2.1.6.
```
Original issue reported on code.google.com by `remirampin@gmail.com` on 30 Aug 2014 at 9:23 | 1.0 | ThreadPoolExecutor#map() doesn't start tasks until generator is used - ```
See this simple source: http://paste.pound-python.org/show/ZTWE9gxRpAX5qsgMGS2H/
On Python 3, the prints happen before the generator is used at all. On Python
2, they don't happen and RuntimeError is raised when trying to iterate on the
returned generator.
Python 2.7.8 on OS X 10.9 with futures 2.1.6.
```
Original issue reported on code.google.com by `remirampin@gmail.com` on 30 Aug 2014 at 9:23 | defect | threadpoolexecutor map doesn t start tasks until generator is used see this simple source on python the prints happen before the generator is used at all on python they don t happen and runtimeerror is raised when trying to iterate on the returned generator python on os x with futures original issue reported on code google com by remirampin gmail com on aug at | 1 |
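The behavior this row describes can be checked with a minimal sketch (a stand-in for the dead paste link, not the reporter's original code; names are illustrative). On Python 3, `ThreadPoolExecutor.map` submits every task eagerly when it is called, even though the returned iterator is lazy about *results*:

```python
import time
from concurrent.futures import ThreadPoolExecutor

calls = []  # records which inputs have actually been executed

def work(x):
    calls.append(x)
    return x * 2

with ThreadPoolExecutor(max_workers=2) as pool:
    gen = pool.map(work, [1, 2, 3])   # lazy iterator over results
    time.sleep(0.2)                   # give the worker threads time to run
    # Submission is eager on Python 3: work() has already started
    # even though gen has not been iterated yet.
    submitted_before_iteration = len(calls) > 0
    results = list(gen)               # now consume the results, in input order
```

On the Python 2 `futures` backport the reporter saw the opposite: nothing ran until the generator was consumed, and a `RuntimeError` was raised on iteration.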
355,509 | 10,581,611,542 | IssuesEvent | 2019-10-08 09:34:44 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | securedsearch.lavasoft.com - see bug description | browser-firefox engine-gecko priority-normal | <!-- @browser: Firefox 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://securedsearch.lavasoft.com/?pr=vmn&id=webcompa&ent=hp_WCYID10057_344_191006
**Browser / Version**: Firefox 70.0
**Operating System**: Windows 10
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: i dont want this
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/10/ad65116e-b7d8-4473-9824-eca99e83b755.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191002212950</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | securedsearch.lavasoft.com - see bug description - <!-- @browser: Firefox 70.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://securedsearch.lavasoft.com/?pr=vmn&id=webcompa&ent=hp_WCYID10057_344_191006
**Browser / Version**: Firefox 70.0
**Operating System**: Windows 10
**Tested Another Browser**: Unknown
**Problem type**: Something else
**Description**: i dont want this
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/10/ad65116e-b7d8-4473-9824-eca99e83b755.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191002212950</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_defect | securedsearch lavasoft com see bug description url browser version firefox operating system windows tested another browser unknown problem type something else description i dont want this steps to reproduce browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️ | 0 |
31,352 | 6,501,655,809 | IssuesEvent | 2017-08-23 10:29:36 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | Add required property to AutoComplete, Spinner and InputMask | defect | **I'm submitting a ...** (check one with "x")
```
[x] feature request
```
**What is the motivation / use case for changing the behavior?**
Looking at the documentation on the PrimeNG site, it appears that p-dropdown has a "required" property, but the autocomplete does not. It seems highly important that required also be a property of autocomplete because it is something that would be used in forms. In my case, I have in several places autocompletes inside table cells that I would like to set to required and use Angular's built-in validation classes to style them.
* **Angular version:** 4.3.1
<!-- Check whether this is still an issue in the most recent Angular version -->
* **PrimeNG version:** 4.1.1
| 1.0 | Add required property to AutoComplete, Spinner and InputMask - **I'm submitting a ...** (check one with "x")
```
[x] feature request
```
**What is the motivation / use case for changing the behavior?**
Looking at the documentation on the PrimeNG site, it appears that p-dropdown has a "required" property, but the autocomplete does not. It seems highly important that required also be a property of autocomplete because it is something that would be used in forms. In my case, I have in several places autocompletes inside table cells that I would like to set to required and use Angular's built-in validation classes to style them.
* **Angular version:** 4.3.1
<!-- Check whether this is still an issue in the most recent Angular version -->
* **PrimeNG version:** 4.1.1
| defect | add required property to autocomplete spinner and inputmask i m submitting a check one with x feature request what is the motivation use case for changing the behavior looking at the documentation on the primeng site it appears that p dropdown has a required property but the autocomplete does not it seems highly important that required also be a property of autocomplete because it is something that would be used in forms in my case i have in several places autocompletes inside table cells that i would like to set to required and use angular s built in validation classes to style them angular version primeng version | 1 |
767,205 | 26,914,777,644 | IssuesEvent | 2023-02-07 05:00:23 | younginnovations/iatipublisher | https://api.github.com/repos/younginnovations/iatipublisher | closed | Issue found in translation | type: bug priority: high Backend | - [x] **Issue 1: The published text is missing from an activity detail page.**

- [x] **Issue 2: Inappropriate way to display the drop-down**
https://user-images.githubusercontent.com/78422663/216274366-191b0f50-0317-4c3a-a9f2-f3f04d883d69.mp4
- [x] **Issue 3: Invalid toast message after publishing the activity.**
https://user-images.githubusercontent.com/78422663/216274566-3a1c7626-d9fd-4ba3-938c-f36ad2301cde.mp4
- [x] **Issue 4: Footer overflows**
https://user-images.githubusercontent.com/78422663/216274489-5b6949ea-ba1a-4e0d-8c3f-65b0f5c64d48.mp4
- [x] **Issue 5: The sector is missing from the activity detail page**

- [x] **Issue 6: Invalid toast message when an element is updated and deleted.**

- [x] **Issue 7: Error message cannot be expanded.**
- [x] **Issue 8: Invalid toast message during the creation, and deletion of the user.**
- [x] **Issue 9: The button is missing from the About us page.**
- [x] **Issue 10: The font size of the role and status is small.**
- [x] **Issue 11: Two drop-down options for activity status, capital spend, and default tied status.**

- [x] **Issue 12: The activity detail page does not display any element data aside from the iati-identifier and title.**
- [x] **Issue 13: UI issue for reporting-org**

- [x] **Issue 14: Reporting orgs not displaying on the activity detail page.**

- [x] **Issue 15: Invalid error message when the user is inactive and the user enters invalid credentials.**

- [x] **Issue 16: The profile icon is hiding when a user hovers over the icon.**
- [x] **Issue 17: None of the element data is displayed in org.**
https://user-images.githubusercontent.com/78422663/216281069-de2b339c-a01d-4aa3-adf6-5a4da9d6f681.mp4
- [x] **Issue 18: The footer disappears from the login/sign-up page.**
- [x] **Issue 19: The password recovery page disappears.**
- [x] **Issue 20: The sign-up-through-IATI-Register page disappears**
- [x] **Issue 21: The final page disappears.**

- [x] **Issue 22: When the user is publishing the activity in bulk, the button disappears.**

| 1.0 | Issue found in translation - - [x] **Issue 1: The published text is missing from an activity detail page.**

- [x] **Issue 2: Inappropriate way to display the drop-down**
https://user-images.githubusercontent.com/78422663/216274366-191b0f50-0317-4c3a-a9f2-f3f04d883d69.mp4
- [x] **Issue 3: Invalid toast message after publishing the activity.**
https://user-images.githubusercontent.com/78422663/216274566-3a1c7626-d9fd-4ba3-938c-f36ad2301cde.mp4
- [x] **Issue 4: Footer overflows**
https://user-images.githubusercontent.com/78422663/216274489-5b6949ea-ba1a-4e0d-8c3f-65b0f5c64d48.mp4
- [x] **Issue 5: The sector is missing from the activity detail page**

- [x] **Issue 6: Invalid toast message when an element is updated and deleted.**

- [x] **Issue 7: Error message cannot be expanded.**
- [x] **Issue 8: Invalid toast message during the creation, and deletion of the user.**
- [x] **Issue 9: The button is missing from the About us page.**
- [x] **Issue 10: The font size of the role and status is small.**
- [x] **Issue 11: Two drop-down options for activity status, capital spend, and default tied status.**

- [x] **Issue 12: The activity detail page does not display any element data aside from the iati-identifier and title.**
- [x] **Issue 13: UI issue for reporting-org**

- [x] **Issue 14: Reporting orgs not displaying on the activity detail page.**

- [x] **Issue 15: Invalid error message when the user is inactive and the user enters invalid credentials.**

- [x] **Issue 16: The profile icon is hiding when a user hovers over the icon.**
- [x] **Issue 17: None of the element data is displayed in org.**
https://user-images.githubusercontent.com/78422663/216281069-de2b339c-a01d-4aa3-adf6-5a4da9d6f681.mp4
- [x] **Issue 18: The footer disappears from the login/sign-up page.**
- [x] **Issue 19: The password recovery page disappears.**
- [x] **Issue 20: The sign-up-through-IATI-Register page disappears**
- [x] **Issue 21: The final page disappears.**

- [x] **Issue 22: When the user is publishing the activity in bulk, the button disappears.**

| non_defect | issue found in translation issue the published text is missing from an activity detail page issue inappropriate way to display the drop down issue invalid toast message after publishing the activity issue footer overflows issue the sector is missing from the activity detail page issue invalid toast message when an element is updated and deleted issue error message cannot be expanded issue invalid toast message during the creation and deletion of the user issue the button is missing from the about us page issue the font size of the role and status is small issue two drop down options for activity status capital spend and default tied status issue the activity detail page does not display any element data aside from the iati identifier and title issue ui issue for reporting org issue reporting orgs not displaying on the activity detail page issue invalid error message when the user is inactive and the user enters invalid credentials issue the profile icon is hiding when a user hovers over the icon issue none of the element data is displayed in org issue the footer disappears from the login sign up page issue the password recovery page disappears issue the sign up through iati register page disappears issue the final page disappears issue when the user is publishing the activity in bulk the button disappears | 0 |
31,487 | 6,538,911,876 | IssuesEvent | 2017-09-01 08:49:44 | SublimeText/PackageDev | https://api.github.com/repos/SublimeText/PackageDev | closed | Valid settings are marked as invalid | defect | In current beta4 all settings in the user settings file are marked as invalid.
The console shows the following error which might point to the reason.
```
code: Unexpected character
Parse Error: ">✏</a>
</body>
``` | 1.0 | Valid settings are marked as invalid - In current beta4 all settings in the user settings file are marked as invalid.
The console shows the following error which might point to the reason.
```
code: Unexpected character
Parse Error: ">✏</a>
</body>
``` | defect | valid settings are marked as invalid in current all settings in the user settings file are marked as invalid the console shows the following error which might point to the reason code unexpected character parse error ✏ | 1 |
68,356 | 21,647,663,566 | IssuesEvent | 2022-05-06 05:20:36 | klubcoin/lcn-mobile | https://api.github.com/repos/klubcoin/lcn-mobile | opened | [Token Tipping][Send Tips] Fix send tip button must not be stuck at loading state. | Defect Must Have Critical Token Tipping Services | ### **Description:**
Send tip button must not be stuck at loading state.
**Build Environment:** Staging Environment
**Affects Version:** 1.0.0.staging.27
**Device Platform:** Android
**Device OS:** 11
**Test Device:** OnePlus 7T Pro
### **Pre-condition:**
1. User successfully installed Klubcoin App
2. User has an existing Klubcoin Wallet Account
3. User received request tip link
4. User is currently at third party messaging app where he/she received request tip link
### **Steps to Reproduce:**
1. Access request tip link
2. Tap "here" link
3. Enter accurate TIP amount
4. Tap Send button
### **Expected Result:**
Successfully sent KLUB Token
### **Actual Result:**
Send Button is stuck at loading state
### **Attachment/s:**

 | 1.0 | [Token Tipping][Send Tips] Fix send tip button must not be stuck at loading state. - ### **Description:**
Send tip button must not be stuck at loading state.
**Build Environment:** Staging Environment
**Affects Version:** 1.0.0.staging.27
**Device Platform:** Android
**Device OS:** 11
**Test Device:** OnePlus 7T Pro
### **Pre-condition:**
1. User successfully installed Klubcoin App
2. User has an existing Klubcoin Wallet Account
3. User received request tip link
4. User is currently at third party messaging app where he/she received request tip link
### **Steps to Reproduce:**
1. Access request tip link
2. Tap "here" link
3. Enter accurate TIP amount
4. Tap Send button
### **Expected Result:**
Successfully sent KLUB Token
### **Actual Result:**
Send Button is stuck at loading state
### **Attachment/s:**

 | defect | fix send tip button must not be stuck at loading state description send tip button must not be stuck at loading state build environment staging environment affects version staging device platform android device os test device oneplus pro pre condition user successfully installed klubcoin app user has an existing klubcoin wallet account user received request tip link user is currently at third party messaging app where he she received request tip link steps to reproduce access request tip link tap here link enter accurate tip amount tap send button expected result successfully sent klub token actual result send button is stuck at loading state attachment s | 1 |
63,826 | 18,011,838,343 | IssuesEvent | 2021-09-16 09:29:48 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | opened | member restart persistence jvm crash memory.impl.AlignmentAwareMemoryAccessor.getInt | Type: Defect Source: Internal Module: Config Module: Hot Restart Module: HD Module: Persistence |
http://jenkins.hazelcast.com/view/hot-restart/job/hot-bounce/340/console
/disk1/workspace/hot-bounce/5.0-SNAPSHOT/2021_09_16-09_01_04/bounce
HzMember2HZ timeout restarting member node
./output/HZ/HzMember2HZ/hs_err_pid2972.log
```
--------------- T H R E A D ---------------
Current thread (0x0000ffff38004000): JavaThread "hz.confident_northcutt.s01.GC-thread" [_thread_in_Java, id=3057, stack(0x0000fffef4400000,0x0000fffef4600000)]
Stack: [0x0000fffef4400000,0x0000fffef4600000], sp=0x0000fffef45fdd70, free space=2039k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 4427 c2 com.hazelcast.internal.memory.impl.AlignmentAwareMemoryAccessor.getInt(J)I (29 bytes) @ 0x0000ffff9452fd80 [0x0000ffff9452fd40+0x0000000000000040]
J 4902 c2 com.hazelcast.internal.util.HashUtil.MurmurHash3_x86_32(Lcom/hazelcast/internal/util/HashUtil$LoadStrategy;Ljava/lang/Object;JII)I (252 bytes) @ 0x0000ffff94622d68 [0x0000ffff94622d00+0x00000000
00000068]
J 5118 c2 com.hazelcast.map.impl.recordstore.HDMapRamStoreImpl.copyEntry(Lcom/hazelcast/internal/hotrestart/KeyHandle;ILcom/hazelcast/internal/hotrestart/RecordDataSink;)Z (85 bytes) @ 0x0000ffff94666524
[0x0000ffff946660c0+0x0000000000000464]
J 5136 c1 com.hazelcast.internal.hotrestart.impl.gc.ValEvacuator.moveToSurvivors(Lcom/hazelcast/internal/hotrestart/impl/SortedBySeqRecordCursor;)V (277 bytes) @ 0x0000ffff8d84e778 [0x0000ffff8d84dc40+0x0
000000000000b38]
j com.hazelcast.internal.hotrestart.impl.gc.ValEvacuator.evacuate()V+52
J 7244 c1 com.hazelcast.internal.hotrestart.impl.gc.ChunkManager.valueGc(Lcom/hazelcast/internal/hotrestart/impl/gc/GcParams;Lcom/hazelcast/internal/hotrestart/impl/gc/MutatorCatchup;)Z (228 bytes) @ 0x00
00ffff8dc1bee0 [0x0000ffff8dc19bc0+0x0000000000002320]
J 5567% c1 com.hazelcast.internal.hotrestart.impl.gc.GcMainLoop.run()V (259 bytes) @ 0x0000ffff8d911d74 [0x0000ffff8d911540+0x0000000000000834]
j java.lang.Thread.run()V+11 java.base@11.0.12
v ~StubRoutines::call_stub
V [libjvm.so+0x74f574] JavaCalls::call_helper(JavaValue*, methodHandle const&, JavaCallArguments*, Thread*)+0x354
V [libjvm.so+0x74da40] JavaCalls::call_virtual(JavaValue*, Handle, Klass*, Symbol*, Symbol*, Thread*)+0x160
V [libjvm.so+0x7f23f0] thread_entry(JavaThread*, Thread*)+0x68
V [libjvm.so+0xcc5b84] JavaThread::thread_main_inner()+0xd8
V [libjvm.so+0xcc3674] Thread::call_run()+0x94
V [libjvm.so+0xa858b8] thread_native_entry(Thread*)+0x108
C [libpthread.so.0+0x71ec] start_thread+0xac
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000fffef1bf8000
```
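For context on the frames above: `HashUtil.MurmurHash3_x86_32` computes the standard MurmurHash3 x86 32-bit hash over off-heap memory, reading ints through `AlignmentAwareMemoryAccessor` — which is the frame where the SIGSEGV lands. A pure-Python rendering of that hash algorithm (an illustration of what the routine computes, not Hazelcast's implementation; it returns the unsigned 32-bit value rather than a signed Java `int`):

```python
def murmur3_x86_32(data: bytes, seed: int = 0) -> int:
    """MurmurHash3 x86 32-bit (illustrative reimplementation)."""
    c1, c2 = 0xCC9E2D51, 0x1B873593
    mask = 0xFFFFFFFF
    h = seed & mask
    n = len(data) // 4 * 4

    # Body: mix each little-endian 4-byte block into the state.
    for i in range(0, n, 4):
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & mask
        k = ((k << 15) | (k >> 17)) & mask   # rotl32(k, 15)
        k = (k * c2) & mask
        h ^= k
        h = ((h << 13) | (h >> 19)) & mask   # rotl32(h, 13)
        h = (h * 5 + 0xE6546B64) & mask

    # Tail: fold in the remaining 1-3 bytes, little-endian.
    tail = data[n:]
    k = 0
    for i in range(len(tail) - 1, -1, -1):
        k = (k << 8) | tail[i]
    if tail:
        k = (k * c1) & mask
        k = ((k << 15) | (k >> 17)) & mask
        k = (k * c2) & mask
        h ^= k

    # Finalization: mix in the length and avalanche.
    h ^= len(data)
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & mask
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & mask
    h ^= h >> 16
    return h
```

The crash itself is not in the hash math but in the raw `getInt` reads the JVM performs against native memory, hence the `SEGV_MAPERR` on an unmapped address.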
cat ./output/HZ/HzMember2HZ/out.txt
```
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-2]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 4624854758981714747
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -2064361361865565284
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 2683524247046331894
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-2]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -4732436703754734970
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.priority-generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 8199603974387877559
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-3]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 5213834991441292693
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.priority-generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -1718971108907240729
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -1718971108907240729
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-3]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 1663396822509246456
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.priority-generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 881310956686768009
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.priority-generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 7717987039307300113
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 7717987039307300113
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-3]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 8013580804879044008
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 334027424206507321
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 334027424206507321
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -6531628355538506615
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.generic-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 247243056385334241
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-2]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 247243056385334241
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -8415570138419441570
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -7287584596533912231
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-2]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 6451397601491303925
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -5246391068467577919
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 498541680710622948
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.priority-generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -6896661516595937509
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-3]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 2215759797616234785
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-2]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -6718433272659507100
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 474974634914488802
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.generic-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 4712428711619366141
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-3]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 4712428711619366141
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.priority-generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -8907675406840087396
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000ffff9452fd80, pid=2972, tid=3057
#
# JRE version: OpenJDK Runtime Environment Corretto-11.0.12.7.1 (11.0.12+7) (build 11.0.12+7-LTS)
# Java VM: OpenJDK 64-Bit Server VM Corretto-11.0.12.7.1 (11.0.12+7-LTS, mixed mode, tiered, compressed oops, g1 gc, linux-aarch64)
# Problematic frame:
# J 4427 c2 com.hazelcast.internal.memory.impl.AlignmentAwareMemoryAccessor.getInt(J)I (29 bytes) @ 0x0000ffff9452fd80 [0x0000ffff9452fd40+0x0000000000000040]
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/ec2-user/hz-root/HzMember2HZ/hs_err_pid2972.log
Compiled method (c2) 30498 4902 4 com.hazelcast.internal.util.HashUtil::MurmurHash3_x86_32 (252 bytes)
total in heap [0x0000ffff94622a10,0x0000ffff946241a8] = 6040
relocation [0x0000ffff94622b80,0x0000ffff94622d00] = 384
main code [0x0000ffff94622d00,0x0000ffff94623580] = 2176
stub code [0x0000ffff94623580,0x0000ffff946237d0] = 592
oops [0x0000ffff946237d0,0x0000ffff946237d8] = 8
metadata [0x0000ffff946237d8,0x0000ffff94623848] = 112
scopes data [0x0000ffff94623848,0x0000ffff94623e78] = 1584
scopes pcs [0x0000ffff94623e78,0x0000ffff946240d8] = 608
dependencies [0x0000ffff946240d8,0x0000ffff946240e0] = 8
handler table [0x0000ffff946240e0,0x0000ffff94624140] = 96
nul chk table [0x0000ffff94624140,0x0000ffff946241a8] = 104
Compiled method (c2) 30499 4902 4 com.hazelcast.internal.util.HashUtil::MurmurHash3_x86_32 (252 bytes)
total in heap [0x0000ffff94622a10,0x0000ffff946241a8] = 6040
relocation [0x0000ffff94622b80,0x0000ffff94622d00] = 384
main code [0x0000ffff94622d00,0x0000ffff94623580] = 2176
stub code [0x0000ffff94623580,0x0000ffff946237d0] = 592
oops [0x0000ffff946237d0,0x0000ffff946237d8] = 8
metadata [0x0000ffff946237d8,0x0000ffff94623848] = 112
scopes data [0x0000ffff94623848,0x0000ffff94623e78] = 1584
scopes pcs [0x0000ffff94623e78,0x0000ffff946240d8] = 608
dependencies [0x0000ffff946240d8,0x0000ffff946240e0] = 8
handler table [0x0000ffff946240e0,0x0000ffff94624140] = 96
nul chk table [0x0000ffff94624140,0x0000ffff946241a8] = 104
Could not load hsdis-aarch64.so; library not loadable; PrintAssembly is disabled
#
# If you would like to submit a bug report, please visit:
# https://github.com/corretto/corretto-11/issues/
#
(base) [ec2-user@ip-10-0-0-252 bounce]$
```

member restart persistence jvm crash memory.impl.AlignmentAwareMemoryAccessor.getInt
http://jenkins.hazelcast.com/view/hot-restart/job/hot-bounce/340/console
/disk1/workspace/hot-bounce/5.0-SNAPSHOT/2021_09_16-09_01_04/bounce
HzMember2HZ timeout restarting member node
./output/HZ/HzMember2HZ/hs_err_pid2972.log
```
--------------- T H R E A D ---------------
Current thread (0x0000ffff38004000): JavaThread "hz.confident_northcutt.s01.GC-thread" [_thread_in_Java, id=3057, stack(0x0000fffef4400000,0x0000fffef4600000)]
Stack: [0x0000fffef4400000,0x0000fffef4600000], sp=0x0000fffef45fdd70, free space=2039k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
J 4427 c2 com.hazelcast.internal.memory.impl.AlignmentAwareMemoryAccessor.getInt(J)I (29 bytes) @ 0x0000ffff9452fd80 [0x0000ffff9452fd40+0x0000000000000040]
J 4902 c2 com.hazelcast.internal.util.HashUtil.MurmurHash3_x86_32(Lcom/hazelcast/internal/util/HashUtil$LoadStrategy;Ljava/lang/Object;JII)I (252 bytes) @ 0x0000ffff94622d68 [0x0000ffff94622d00+0x0000000000000068]
J 5118 c2 com.hazelcast.map.impl.recordstore.HDMapRamStoreImpl.copyEntry(Lcom/hazelcast/internal/hotrestart/KeyHandle;ILcom/hazelcast/internal/hotrestart/RecordDataSink;)Z (85 bytes) @ 0x0000ffff94666524 [0x0000ffff946660c0+0x0000000000000464]
J 5136 c1 com.hazelcast.internal.hotrestart.impl.gc.ValEvacuator.moveToSurvivors(Lcom/hazelcast/internal/hotrestart/impl/SortedBySeqRecordCursor;)V (277 bytes) @ 0x0000ffff8d84e778 [0x0000ffff8d84dc40+0x0000000000000b38]
j com.hazelcast.internal.hotrestart.impl.gc.ValEvacuator.evacuate()V+52
J 7244 c1 com.hazelcast.internal.hotrestart.impl.gc.ChunkManager.valueGc(Lcom/hazelcast/internal/hotrestart/impl/gc/GcParams;Lcom/hazelcast/internal/hotrestart/impl/gc/MutatorCatchup;)Z (228 bytes) @ 0x0000ffff8dc1bee0 [0x0000ffff8dc19bc0+0x0000000000002320]
J 5567% c1 com.hazelcast.internal.hotrestart.impl.gc.GcMainLoop.run()V (259 bytes) @ 0x0000ffff8d911d74 [0x0000ffff8d911540+0x0000000000000834]
j java.lang.Thread.run()V+11 java.base@11.0.12
v ~StubRoutines::call_stub
V [libjvm.so+0x74f574] JavaCalls::call_helper(JavaValue*, methodHandle const&, JavaCallArguments*, Thread*)+0x354
V [libjvm.so+0x74da40] JavaCalls::call_virtual(JavaValue*, Handle, Klass*, Symbol*, Symbol*, Thread*)+0x160
V [libjvm.so+0x7f23f0] thread_entry(JavaThread*, Thread*)+0x68
V [libjvm.so+0xcc5b84] JavaThread::thread_main_inner()+0xd8
V [libjvm.so+0xcc3674] Thread::call_run()+0x94
V [libjvm.so+0xa858b8] thread_native_entry(Thread*)+0x108
C [libpthread.so.0+0x71ec] start_thread+0xac
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000fffef1bf8000
```
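For context on the problematic frame: an alignment-aware accessor assembles a multi-byte value from single-byte reads when the target platform does not tolerate unaligned loads. Below is a minimal sketch of that byte-wise technique, an illustration only (it uses a heap `ByteBuffer` rather than Hazelcast's native-memory accessor). Note the crash above is a `SEGV_MAPERR`, i.e. the faulting address was not mapped at all, which even byte-wise access cannot survive.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class UnalignedIntRead {
    // Reassemble a little-endian int starting at an arbitrary (possibly
    // unaligned) offset by reading one byte at a time, the same idea an
    // alignment-aware accessor applies to raw memory addresses.
    static int getIntByteWise(ByteBuffer memory, int offset) {
        return (memory.get(offset) & 0xFF)
                | (memory.get(offset + 1) & 0xFF) << 8
                | (memory.get(offset + 2) & 0xFF) << 16
                | (memory.get(offset + 3) & 0xFF) << 24;
    }

    public static void main(String[] args) {
        ByteBuffer memory = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
        memory.putInt(3, 0xCAFEBABE); // deliberately not 4-byte aligned
        int value = getIntByteWise(memory, 3);
        if (value != 0xCAFEBABE) {
            throw new AssertionError("byte-wise read mismatch");
        }
        System.out.println("ok");
    }
}
```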
cat ./output/HZ/HzMember2HZ/out.txt
```
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 334027424206507321
09:05:15 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 334027424206507321
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:15 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -6531628355538506615
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.generic-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 247243056385334241
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-2]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 247243056385334241
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -8415570138419441570
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -7287584596533912231
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-2]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 6451397601491303925
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -5246391068467577919
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 498541680710622948
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.priority-generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -6896661516595937509
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-3]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 2215759797616234785
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-2]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -6718433272659507100
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 474974634914488802
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.generic-operation.thread-1]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 4712428711619366141
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.partition-operation.thread-3]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: 4712428711619366141
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-3]:52 Applying differential sync for mapBak1HDIdle
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-2]:52 Applying differential sync for mapBak1HD
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-0]:52 Applying differential sync for mapBak1HD
09:05:16 TRACE ClusterMetadataManager [hz.confident_northcutt.priority-generic-operation.thread-0]:57 [10.0.0.102]:5701 [HZ] [5.0-SNAPSHOT] Will persist partition table with stamp: -8907675406840087396
09:05:16 INFO EnterpriseMapReplicationStateHolder [hz.confident_northcutt.partition-operation.thread-1]:52 Applying differential sync for mapBak1HD
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000ffff9452fd80, pid=2972, tid=3057
#
# JRE version: OpenJDK Runtime Environment Corretto-11.0.12.7.1 (11.0.12+7) (build 11.0.12+7-LTS)
# Java VM: OpenJDK 64-Bit Server VM Corretto-11.0.12.7.1 (11.0.12+7-LTS, mixed mode, tiered, compressed oops, g1 gc, linux-aarch64)
# Problematic frame:
# J 4427 c2 com.hazelcast.internal.memory.impl.AlignmentAwareMemoryAccessor.getInt(J)I (29 bytes) @ 0x0000ffff9452fd80 [0x0000ffff9452fd40+0x0000000000000040]
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/ec2-user/hz-root/HzMember2HZ/hs_err_pid2972.log
Compiled method (c2) 30498 4902 4 com.hazelcast.internal.util.HashUtil::MurmurHash3_x86_32 (252 bytes)
total in heap [0x0000ffff94622a10,0x0000ffff946241a8] = 6040
relocation [0x0000ffff94622b80,0x0000ffff94622d00] = 384
main code [0x0000ffff94622d00,0x0000ffff94623580] = 2176
stub code [0x0000ffff94623580,0x0000ffff946237d0] = 592
oops [0x0000ffff946237d0,0x0000ffff946237d8] = 8
metadata [0x0000ffff946237d8,0x0000ffff94623848] = 112
scopes data [0x0000ffff94623848,0x0000ffff94623e78] = 1584
scopes pcs [0x0000ffff94623e78,0x0000ffff946240d8] = 608
dependencies [0x0000ffff946240d8,0x0000ffff946240e0] = 8
handler table [0x0000ffff946240e0,0x0000ffff94624140] = 96
nul chk table [0x0000ffff94624140,0x0000ffff946241a8] = 104
Compiled method (c2) 30499 4902 4 com.hazelcast.internal.util.HashUtil::MurmurHash3_x86_32 (252 bytes)
total in heap [0x0000ffff94622a10,0x0000ffff946241a8] = 6040
relocation [0x0000ffff94622b80,0x0000ffff94622d00] = 384
main code [0x0000ffff94622d00,0x0000ffff94623580] = 2176
stub code [0x0000ffff94623580,0x0000ffff946237d0] = 592
oops [0x0000ffff946237d0,0x0000ffff946237d8] = 8
metadata [0x0000ffff946237d8,0x0000ffff94623848] = 112
scopes data [0x0000ffff94623848,0x0000ffff94623e78] = 1584
scopes pcs [0x0000ffff94623e78,0x0000ffff946240d8] = 608
dependencies [0x0000ffff946240d8,0x0000ffff946240e0] = 8
handler table [0x0000ffff946240e0,0x0000ffff94624140] = 96
nul chk table [0x0000ffff94624140,0x0000ffff946241a8] = 104
Could not load hsdis-aarch64.so; library not loadable; PrintAssembly is disabled
#
# If you would like to submit a bug report, please visit:
# https://github.com/corretto/corretto-11/issues/
#
(base) [ec2-user@ip-10-0-0-252 bounce]$
``` | defect | member restart persistence jvm crash memory impl alignmentawarememoryaccessor getint workspace hot bounce snapshot bounce timeout restarting member node output hz hs err log t h r e a d current thread javathread hz confident northcutt gc thread stack sp free space native frames j compiled java code a aot compiled java code j interpreted vv vm code c native code j com hazelcast internal memory impl alignmentawarememoryaccessor getint j i bytes j com hazelcast internal util hashutil lcom hazelcast internal util hashutil loadstrategy ljava lang object jii i bytes j com hazelcast map impl recordstore hdmapramstoreimpl copyentry lcom hazelcast internal hotrestart keyhandle ilcom hazelcast internal hotrestart recorddatasink z bytes j com hazelcast internal hotrestart impl gc valevacuator movetosurvivors lcom hazelcast internal hotrestart impl sortedbyseqrecordcursor v bytes j com hazelcast internal hotrestart impl gc valevacuator evacuate v j com hazelcast internal hotrestart impl gc chunkmanager valuegc lcom hazelcast internal hotrestart impl gc gcparams lcom hazelcast internal hotrestart impl gc mutatorcatchup z bytes j com hazelcast internal hotrestart impl gc gcmainloop run v bytes j java lang thread run v java base v stubroutines call stub v javacalls call helper javavalue methodhandle const javacallarguments thread v javacalls call virtual javavalue handle klass symbol symbol thread v thread entry javathread thread v javathread thread main inner v thread call run v thread native entry thread c start thread siginfo si signo sigsegv si code segv maperr si addr cat output hz out txt info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace 
clustermetadatamanager will persist partition table with stamp trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info 
enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info 
enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for 
trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp trace clustermetadatamanager will persist partition table with stamp trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info 
enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp trace clustermetadatamanager will persist partition table with stamp trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for info enterprisemapreplicationstateholder applying differential sync for 
info enterprisemapreplicationstateholder applying differential sync for trace clustermetadatamanager will persist partition table with stamp info enterprisemapreplicationstateholder applying differential sync for a fatal error has been detected by the java runtime environment sigsegv at pc pid tid jre version openjdk runtime environment corretto build lts java vm openjdk bit server vm corretto lts mixed mode tiered compressed oops gc linux problematic frame j com hazelcast internal memory impl alignmentawarememoryaccessor getint j i bytes no core dump will be written core dumps have been disabled to enable core dumping try ulimit c unlimited before starting java again an error report file with more information is saved as home user hz root hs err log compiled method com hazelcast internal util hashutil bytes total in heap relocation main code stub code oops metadata scopes data scopes pcs dependencies handler table nul chk table compiled method com hazelcast internal util hashutil bytes total in heap relocation main code stub code oops metadata scopes data scopes pcs dependencies handler table nul chk table could not load hsdis so library not loadable printassembly is disabled if you would like to submit a bug report please visit base | 1 |
143,726 | 19,248,596,736 | IssuesEvent | 2021-12-09 01:05:43 | Benderr-TP/blip | https://api.github.com/repos/Benderr-TP/blip | opened | CVE-2021-32723 (Medium) detected in prismjs-1.17.1.tgz, prismjs-1.19.0.tgz | security vulnerability | ## CVE-2021-32723 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>prismjs-1.17.1.tgz</b>, <b>prismjs-1.19.0.tgz</b></summary>
<p>
<details><summary><b>prismjs-1.17.1.tgz</b></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.17.1.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.17.1.tgz</a></p>
<p>
Dependency Hierarchy:
- addon-knobs-5.3.19.tgz (Root Library)
- components-5.3.19.tgz
- react-syntax-highlighter-11.0.2.tgz
- refractor-2.10.1.tgz
- :x: **prismjs-1.17.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>prismjs-1.19.0.tgz</b></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.19.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.19.0.tgz</a></p>
<p>
Dependency Hierarchy:
- addon-knobs-5.3.19.tgz (Root Library)
- components-5.3.19.tgz
- react-syntax-highlighter-11.0.2.tgz
- :x: **prismjs-1.19.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prism is a syntax highlighting library. Some languages before 1.24.0 are vulnerable to Regular Expression Denial of Service (ReDoS). When Prism is used to highlight untrusted (user-given) text, an attacker can craft a string that will take a very very long time to highlight. This problem has been fixed in Prism v1.24. As a workaround, do not use ASCIIDoc or ERB to highlight untrusted text. Other languages are not affected and can be used to highlight untrusted text.
<p>Publish Date: 2021-06-28</p>
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32723">CVE-2021-32723</a></p>
</p>
</details>
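The catastrophic-backtracking failure mode described in the advisory above can be sketched with a toy pattern (illustrative only — this is not Prism's actual ASCIIDoc/ERB grammar, just the nested-quantifier shape that causes ReDoS):

```python
import re
import time

# Toy nested-quantifier pattern of the kind implicated in ReDoS reports;
# purely illustrative, not the regex from Prism itself.
pattern = re.compile(r'^(a+)+$')

def reject_time(n):
    """Time how long the engine takes to reject a crafted non-matching input."""
    crafted = 'a' * n + 'b'          # the trailing 'b' forces full backtracking
    start = time.perf_counter()
    matched = pattern.match(crafted)
    elapsed = time.perf_counter() - start
    assert matched is None
    return elapsed

# Each extra 'a' roughly doubles the number of backtracking states the engine
# must explore before giving up, so rejection time grows exponentially.
short = reject_time(10)
long_ = reject_time(20)
```

On a typical backtracking engine the second call is orders of magnitude slower, which is why a single crafted string fed to the highlighter can stall it; per the advisory, upgrading to prismjs 1.24.0 resolves the affected languages.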
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/PrismJS/prism/security/advisories/GHSA-gj77-59wh-66hg">https://github.com/PrismJS/prism/security/advisories/GHSA-gj77-59wh-66hg</a></p>
<p>Release Date: 2021-06-28</p>
<p>Fix Resolution: prismjs - 1.24.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"prismjs","packageVersion":"1.17.1","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"@storybook/addon-knobs:5.3.19;@storybook/components:5.3.19;react-syntax-highlighter:11.0.2;refractor:2.10.1;prismjs:1.17.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"prismjs - 1.24.0","isBinary":true},{"packageType":"javascript/Node.js","packageName":"prismjs","packageVersion":"1.19.0","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"@storybook/addon-knobs:5.3.19;@storybook/components:5.3.19;react-syntax-highlighter:11.0.2;prismjs:1.19.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"prismjs - 1.24.0","isBinary":true}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2021-32723","vulnerabilityDetails":"Prism is a syntax highlighting library. Some languages before 1.24.0 are vulnerable to Regular Expression Denial of Service (ReDoS). When Prism is used to highlight untrusted (user-given) text, an attacker can craft a string that will take a very very long time to highlight. This problem has been fixed in Prism v1.24. As a workaround, do not use ASCIIDoc or ERB to highlight untrusted text. Other languages are not affected and can be used to highlight untrusted text.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32723","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-32723 (Medium) detected in prismjs-1.17.1.tgz, prismjs-1.19.0.tgz - ## CVE-2021-32723 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>prismjs-1.17.1.tgz</b>, <b>prismjs-1.19.0.tgz</b></p></summary>
<p>
<details><summary><b>prismjs-1.17.1.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.17.1.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.17.1.tgz</a></p>
<p>
Dependency Hierarchy:
- addon-knobs-5.3.19.tgz (Root Library)
- components-5.3.19.tgz
- react-syntax-highlighter-11.0.2.tgz
- refractor-2.10.1.tgz
- :x: **prismjs-1.17.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>prismjs-1.19.0.tgz</b></p></summary>
<p>Lightweight, robust, elegant syntax highlighting. A spin-off project from Dabblet.</p>
<p>Library home page: <a href="https://registry.npmjs.org/prismjs/-/prismjs-1.19.0.tgz">https://registry.npmjs.org/prismjs/-/prismjs-1.19.0.tgz</a></p>
<p>
Dependency Hierarchy:
- addon-knobs-5.3.19.tgz (Root Library)
- components-5.3.19.tgz
- react-syntax-highlighter-11.0.2.tgz
- :x: **prismjs-1.19.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>develop</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Prism is a syntax highlighting library. Some languages before 1.24.0 are vulnerable to Regular Expression Denial of Service (ReDoS). When Prism is used to highlight untrusted (user-given) text, an attacker can craft a string that will take a very very long time to highlight. This problem has been fixed in Prism v1.24. As a workaround, do not use ASCIIDoc or ERB to highlight untrusted text. Other languages are not affected and can be used to highlight untrusted text.
<p>Publish Date: 2021-06-28
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32723>CVE-2021-32723</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/PrismJS/prism/security/advisories/GHSA-gj77-59wh-66hg">https://github.com/PrismJS/prism/security/advisories/GHSA-gj77-59wh-66hg</a></p>
<p>Release Date: 2021-06-28</p>
<p>Fix Resolution: prismjs - 1.24.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"prismjs","packageVersion":"1.17.1","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"@storybook/addon-knobs:5.3.19;@storybook/components:5.3.19;react-syntax-highlighter:11.0.2;refractor:2.10.1;prismjs:1.17.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"prismjs - 1.24.0","isBinary":true},{"packageType":"javascript/Node.js","packageName":"prismjs","packageVersion":"1.19.0","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"@storybook/addon-knobs:5.3.19;@storybook/components:5.3.19;react-syntax-highlighter:11.0.2;prismjs:1.19.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"prismjs - 1.24.0","isBinary":true}],"baseBranches":["develop"],"vulnerabilityIdentifier":"CVE-2021-32723","vulnerabilityDetails":"Prism is a syntax highlighting library. Some languages before 1.24.0 are vulnerable to Regular Expression Denial of Service (ReDoS). When Prism is used to highlight untrusted (user-given) text, an attacker can craft a string that will take a very very long time to highlight. This problem has been fixed in Prism v1.24. As a workaround, do not use ASCIIDoc or ERB to highlight untrusted text. 
Other languages are not affected and can be used to highlight untrusted text.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32723","cvss3Severity":"medium","cvss3Score":"6.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_defect | cve medium detected in prismjs tgz prismjs tgz cve medium severity vulnerability vulnerable libraries prismjs tgz prismjs tgz prismjs tgz lightweight robust elegant syntax highlighting a spin off project from dabblet library home page a href dependency hierarchy addon knobs tgz root library components tgz react syntax highlighter tgz refractor tgz x prismjs tgz vulnerable library prismjs tgz lightweight robust elegant syntax highlighting a spin off project from dabblet library home page a href dependency hierarchy addon knobs tgz root library components tgz react syntax highlighter tgz x prismjs tgz vulnerable library found in base branch develop vulnerability details prism is a syntax highlighting library some languages before are vulnerable to regular expression denial of service redos when prism is used to highlight untrusted user given text an attacker can craft a string that will take a very very long time to highlight this problem has been fixed in prism as a workaround do not use asciidoc or erb to highlight untrusted text other languages are not affected and can be used to highlight untrusted text publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution prismjs isopenpronvulnerability false ispackagebased true isdefaultbranch true packages 
istransitivedependency true dependencytree storybook addon knobs storybook components react syntax highlighter refractor prismjs isminimumfixversionavailable true minimumfixversion prismjs isbinary true packagetype javascript node js packagename prismjs packageversion packagefilepaths istransitivedependency true dependencytree storybook addon knobs storybook components react syntax highlighter prismjs isminimumfixversionavailable true minimumfixversion prismjs isbinary true basebranches vulnerabilityidentifier cve vulnerabilitydetails prism is a syntax highlighting library some languages before are vulnerable to regular expression denial of service redos when prism is used to highlight untrusted user given text an attacker can craft a string that will take a very very long time to highlight this problem has been fixed in prism as a workaround do not use asciidoc or erb to highlight untrusted text other languages are not affected and can be used to highlight untrusted text vulnerabilityurl | 0 |
925 | 4,629,279,646 | IssuesEvent | 2016-09-28 08:45:01 | Particular/ServicePulse | https://api.github.com/repos/Particular/ServicePulse | closed | Confirmation messages should be unique and more detailed about what's exactly affected | Impact: S Size: S Tag: Maintainer Prio Tag: Triaged Type: Feature | The current confirmation messages are too generic

In this example, the title should indicate what's being done exactly (Retry messages) and which items are affected exactly (number of messages)
CC // @mauroservienti | True | Confirmation messages should be unique and more detailed about what's exactly affected - The current confirmation messages are too generic

In this example, the title should indicate what's being done exactly (Retry messages) and which items are affected exactly (number of messages)
CC // @mauroservienti | non_defect | confirmation messages should be unique and more detailed about what s exactly affected the current confirmation messages are too generic in this example the title should indicate what s being done exactly retry messages and which items are affected exactly number of messages cc mauroservienti | 0 |
468,301 | 13,464,911,262 | IssuesEvent | 2020-09-09 19:58:24 | magento/magento2 | https://api.github.com/repos/magento/magento2 | closed | Can not export Coupon Code to CSV,XML | Component: Admin Component: Rule Fixed in 2.4.x Issue: Clear Description Issue: Confirmed Issue: Format is valid Issue: Ready for Work Priority: P2 Progress: ready for dev Reproduced on 2.4.x Severity: S3 Triage: Done | <!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.4.0) and any important information on the environment where bug is reproducible.
-->
1. Magento 2.4-develop
### Steps to reproduce (*)
<!---
Important: Provide a set of clear steps to reproduce this bug. We can not provide support without clear instructions on how to reproduce.
-->
1. Go to backend, Marketing->Cart Price Rules
2. Create new rule, select auto generate coupon , generate coupon
3. In the grid, choose some coupon (must select by checkbox)
4. Click Export CSV or xml

### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. Export Successfully
### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->
1. 404 not found
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.3/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [ ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
| 1.0 | Can not export Coupon Code to CSV,XML - <!---
Please review our guidelines before adding a new issue: https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
Fields marked with (*) are required. Please don't remove the template.
-->
### Preconditions (*)
<!---
Provide the exact Magento version (example: 2.4.0) and any important information on the environment where bug is reproducible.
-->
1. Magento 2.4-develop
### Steps to reproduce (*)
<!---
Important: Provide a set of clear steps to reproduce this bug. We can not provide support without clear instructions on how to reproduce.
-->
1. Go to backend, Marketing->Cart Price Rules
2. Create new rule, select auto generate coupon , generate coupon
3. In the grid, choose some coupon (must select by checkbox)
4. Click Export CSV or xml

### Expected result (*)
<!--- Tell us what do you expect to happen. -->
1. Export Successfully
### Actual result (*)
<!--- Tell us what happened instead. Include error messages and issues. -->
1. 404 not found
---
Please provide [Severity](https://devdocs.magento.com/guides/v2.3/contributor-guide/contributing.html#backlog) assessment for the Issue as Reporter. This information will help during Confirmation and Issue triage processes.
- [ ] Severity: **S0** _- Affects critical data or functionality and leaves users without workaround._
- [ ] Severity: **S1** _- Affects critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S2** _- Affects non-critical data or functionality and forces users to employ a workaround._
- [ ] Severity: **S3** _- Affects non-critical data or functionality and does not force users to employ a workaround._
- [ ] Severity: **S4** _- Affects aesthetics, professional look and feel, “quality” or “usability”._
| non_defect | can not export coupon code to csv xml please review our guidelines before adding a new issue fields marked with are required please don t remove the template preconditions provide the exact magento version example and any important information on the environment where bug is reproducible magento develop steps to reproduce important provide a set of clear steps to reproduce this bug we can not provide support without clear instructions on how to reproduce go to backend marketing cart price rules create new rule select auto generate coupon generate coupon in the grid choose some coupon must select by checkbox click export csv or xml expected result export successfully actual result not found please provide assessment for the issue as reporter this information will help during confirmation and issue triage processes severity affects critical data or functionality and leaves users without workaround severity affects critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and forces users to employ a workaround severity affects non critical data or functionality and does not force users to employ a workaround severity affects aesthetics professional look and feel “quality” or “usability” | 0 |
54,746 | 13,911,150,822 | IssuesEvent | 2020-10-20 16:59:52 | Alfresco/alfresco-php-sdk | https://api.github.com/repos/Alfresco/alfresco-php-sdk | closed | PHP Library shows sensitive information, when alfresco server is not responding/reachable | Priority-Medium Type-Defect auto-migrated | ```
Steps to reproduce:
1. Stop Alfresco Server
2. Try to run one of the supplied PHP example
Result:
- A SOAP uncaught exception message/trace will be displayed, showing the login
credentials of the Alfresco database.
When the service is up again and exposed to the internet, this information
could be used to log in to the Alfresco backend.
Expected Result:
- An error message with no sensitive Information should be displayed.
Possible solution:
Catching the exception in
Alfresco/Service/WebService/AlfrescoWebservice.php:119 and die out with an
appropriate message.
Gunter Coelle added a comment - 16-Sep-08 08:10 AM
Hmmm, I think I was a little bit too hasty with reporting.
The problem can be avoided easily by catching the exception at the level of the
application, so it is not a real problem indeed. So, from my side, I think we
can close this issue.
Thanks anyway - Gunter
Moved from https://issues.alfresco.com/jira/browse/PHP-15
```
Original issue reported on code.google.com by `rwether...@gmail.com` on 25 Nov 2011 at 5:43
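The fix suggested in the report — catching the transport exception instead of letting the raw fault (with credentials) surface — can be sketched generically. All names below are hypothetical stand-ins; the real change would live in `AlfrescoWebservice.php`:

```python
# Hypothetical stand-ins for the SOAP layer; the point is only that the
# handler swallows the raw fault text, which may embed connection credentials.
class SoapFault(Exception):
    """Raised by the transport when the repository is unreachable."""

def call_repository(endpoint):
    # Simulates a failed SOAP call whose message leaks sensitive details.
    raise SoapFault("login failed for admin:secret at %s" % endpoint)

def safe_call(endpoint):
    try:
        return call_repository(endpoint)
    except SoapFault:
        # Log internally if needed, but surface only a generic message.
        return {"error": "Repository unavailable, please try again later."}

result = safe_call("http://localhost:8080/alfresco/api")
```

As the reporter later notes, the same catch can equally be applied at the application level rather than inside the library.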
| 1.0 | PHP Library shows sensitive information, when alfresco server is not responding/reachable - ```
Steps to reproduce:
1. Stop Alfresco Server
2. Try to run one of the supplied PHP example
Result:
- A SOAP uncatched exception message/trace will be displayed, showing the login
credentials of the alresco database.
When the service is up again and exposed to the internet, these information
could be used to login to the alfresco backend.
Expected Result:
- An error message with no sensitive Information should be displayed.
Possible solution:
Catching the exception in
Alfresco/Service/WebService/AlfrescoWebservice.php:119 and die out with an
appropriate message.
Gunter Coelle added a comment - 16-Sep-08 08:10 AM
Hmmm, I think I was a little bit too hasty with reporting.
The problem can be avoided easily by catching the exception at the level of the
application, so it is not a real problem indeed. So, from my side, I think we
can close this issue.
Thanks anyway - Gunter
Moved from https://issues.alfresco.com/jira/browse/PHP-15
```
Original issue reported on code.google.com by `rwether...@gmail.com` on 25 Nov 2011 at 5:43
| defect | php library shows sensitive information when alfresco server is not responding reachable steps to reproduce stop alfresco server try to run one of the supplied php example result a soap uncatched exception message trace will be displayed showing the login credentials of the alresco database when the service is up again and exposed to the internet these information could be used to login to the alfresco backend expected result an error message with no sensitive information should be displayed possible solution catching the exception in alfresco service webservice alfrescowebservice php and die out with an appropriate message gunter coelle added a comment sep am hmmm i think i was a little bit too hasty with reporting the problem can be avoided easily by catching the exception at the level of the application so it is not a real problem indeed so from my side i think we can close this issue thanks anyway gunter moved from original issue reported on code google com by rwether gmail com on nov at | 1 |
145,291 | 11,683,505,039 | IssuesEvent | 2020-03-05 03:37:06 | brave/brave-browser | https://api.github.com/repos/brave/brave-browser | closed | Option to hide Brave Shields activity count | QA/Test-Plan-Specified QA/Yes browser-laptop-parity feature/global-settings feature/shields priority/P3 release-notes/include | ## Test plan
See https://github.com/brave/brave-core/pull/3150
## Description
Some people think the incrementing block count on Brave Shields is distracting.
> The color of the icon is enough for me to know whether it's enabled or not, and if it's enabled I trust that it's working. I'm happy to click the icon if I need more details, but that's rare.
Provide the option to hide the block count while still keeping the shields up.
## Designs
Add "Indicate the number of blocked items on the Shields icon" to Shields settings:

It will be toggled ON by default. If the user toggles the indicator OFF, the Shields icon will not have the incrementing blocked item counter, but will still be colored when it's enabled. | 1.0 | Option to hide Brave Shields activity count - ## Test plan
See https://github.com/brave/brave-core/pull/3150
## Description
Some people think the incrementing block count on Brave Shields is distracting.
> The color of the icon is enough for me to know whether it's enabled or not, and if it's enabled I trust that it's working. I'm happy to click the icon if I need more details, but that's rare.
Provide the option to hide the block count while still keeping the shields up.
## Designs
Add "Indicate the number of blocked items on the Shields icon" to Shields settings:

It will be toggled ON by default. If the user toggles the indicator OFF, the Shields icon will not have the incrementing blocked item counter, but will still be colored when it's enabled. | non_defect | option to hide brave shields activity count test plan see description some people think the incrementing block count on brave shields is distracting the color of the icon is enough for me to know whether it s enabled or not and if it s enabled i trust that it s working i m happy to click the icon if i need more details but that s rare provide the option to hide the block count while still keeping the shields up designs add indicate the number of blocked items on the shields icon to shields settings it will be toggled on by default if the user toggles the indicator off the shields icon will not have the incrementing blocked item counter but will still be colored when it s enabled | 0 |
49,507 | 13,187,222,960 | IssuesEvent | 2020-08-13 02:44:18 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | Remove NFE usage in wavereform examples/docs (Trac #1557) | Incomplete Migration Migrated from Trac combo reconstruction defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1557">https://code.icecube.wisc.edu/ticket/1557</a>, reported by jbraun and owned by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-02-19T23:38:55",
"description": "",
"reporter": "jbraun",
"cc": "",
"resolution": "fixed",
"_ts": "1455925135769376",
"component": "combo reconstruction",
"summary": "Remove NFE usage in wavereform examples/docs",
"priority": "normal",
"keywords": "",
"time": "2016-02-19T20:29:27",
"milestone": "",
"owner": "jbraun",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Remove NFE usage in wavereform examples/docs (Trac #1557) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1557">https://code.icecube.wisc.edu/ticket/1557</a>, reported by jbraun and owned by jbraun</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-02-19T23:38:55",
"description": "",
"reporter": "jbraun",
"cc": "",
"resolution": "fixed",
"_ts": "1455925135769376",
"component": "combo reconstruction",
"summary": "Remove NFE usage in wavereform examples/docs",
"priority": "normal",
"keywords": "",
"time": "2016-02-19T20:29:27",
"milestone": "",
"owner": "jbraun",
"type": "defect"
}
```
</p>
</details>
| defect | remove nfe usage in wavereform examples docs trac migrated from json status closed changetime description reporter jbraun cc resolution fixed ts component combo reconstruction summary remove nfe usage in wavereform examples docs priority normal keywords time milestone owner jbraun type defect | 1 |
328,764 | 9,999,656,298 | IssuesEvent | 2019-07-12 11:17:44 | mantidproject/mslice | https://api.github.com/repos/mantidproject/mslice | closed | Cuts from instruments with PSD tubes are not working | High priority bug | I'm not sure if this has broken here but I can no longer do a cut from a data set with PSD tubes.
I tried the `MER30936_Ei10.00meV_one2one` data set. The steps I took were:
* load the file
* perform a slice with default parameters
* perform a cut with parameters: along `|Q|` from 1 to 4, over `DeltaE` from -2 to 2
* hit plot
An error is generated:
```python
Traceback (most recent call last):
File "/media/data1/source/github/mantidproject/mslice/mslice/widgets/cut/cut.py", line 53, in _btn_clicked
self._presenter.notify(command)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/validation_decorators.py", line 11, in wrapper
return function(*args, **kwargs)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_widget_presenter.py", line 38, in notify
self._cut()
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_widget_presenter.py", line 60, in _cut
self._cut_plotter_presenter.run_cut(workspace, Cut(*params), plot_over=plot_over, save_only=save_only)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_plotter_presenter.py", line 21, in run_cut
self._plot_with_width(workspace, cut, plot_over)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_plotter_presenter.py", line 46, in _plot_with_width
self._plot_cut(workspace, cut, plot_over)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_plotter_presenter.py", line 33, in _plot_cut
plot_cut_impl(cut_ws, (cut.intensity_start, cut.intensity_end), plot_over, legend, en_conversion)
File "/media/data1/source/github/mantidproject/mslice/mslice/plotting/globalfiguremanager.py", line 371, in wrapper
return_value = function(*args, **kwargs)
File "/media/data1/source/github/mantidproject/mslice/mslice/views/cut_plotter.py", line 39, in plot_cut_impl
plot_over=plot_over, en_conversion=en_conversion)
File "/media/data1/source/github/mantidproject/mslice/mslice/cli/__init__.py", line 30, in errorbar
return Axes.errorbar(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/__init__.py", line 1814, in inner
return func(ax, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_axes.py", line 3016, in errorbar
l0, = self.plot(x, y, fmt, label='_nolegend_', **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/__init__.py", line 1814, in inner
return func(ax, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_axes.py", line 1424, in plot
for line in self._get_lines(*args, **kwargs):
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 386, in _grab_next_args
for seg in self._plot_args(remaining, kwargs):
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 374, in _plot_args
seg = func(x[:, j % ncx], y[:, j % ncy], kw, kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 281, in _makeline
self.set_lineprops(seg, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 189, in set_lineprops
line.set(**kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/artist.py", line 936, in set
(self.__class__.__name__, k))
TypeError: There is no Line2D property "plot_over"
```
Cuts from MARI data seem to be fine still. | 1.0 | Cuts from instruments with PSD tubes are not working - I'm not sure if this has broken here but I can no longer do a cut from a data set with PSD tubes.
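The root cause visible in the traceback — a tool-specific keyword (`plot_over`) riding along in `**kwargs` until matplotlib's `Artist.set` rejects it as an unknown `Line2D` property — can be sketched without matplotlib (function and property names below are illustrative, not the actual mslice API):

```python
def set_line_props(**props):
    # Mimics matplotlib's Artist.set(): every keyword must name a known property.
    known = {"color", "linewidth", "linestyle", "label"}
    for name in props:
        if name not in known:
            raise TypeError('There is no Line2D property "%s"' % name)

def errorbar(x, y, plot_over=False, **style):
    # Peeling the mslice-specific keyword off in the signature keeps it out of
    # **style, so only real line properties are forwarded to the artist.
    set_line_props(**style)
    return {"points": len(x), "plot_over": plot_over}

ok = errorbar([1, 2, 3], [4, 5, 6], plot_over=True, color="red")
```

Forwarding `plot_over` unfiltered reproduces the `TypeError` in the report; consuming it before the matplotlib call avoids it.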
I tried the `MER30936_Ei10.00meV_one2one` data set. The steps I took were:
* load the file
* perform a slice with default parameters
* perform a cut with parameters: along `|Q|` from 1 to 4, over `DeltaE` from -2 to 2
* hit plot
An error is generated:
```python
Traceback (most recent call last):
File "/media/data1/source/github/mantidproject/mslice/mslice/widgets/cut/cut.py", line 53, in _btn_clicked
self._presenter.notify(command)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/validation_decorators.py", line 11, in wrapper
return function(*args, **kwargs)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_widget_presenter.py", line 38, in notify
self._cut()
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_widget_presenter.py", line 60, in _cut
self._cut_plotter_presenter.run_cut(workspace, Cut(*params), plot_over=plot_over, save_only=save_only)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_plotter_presenter.py", line 21, in run_cut
self._plot_with_width(workspace, cut, plot_over)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_plotter_presenter.py", line 46, in _plot_with_width
self._plot_cut(workspace, cut, plot_over)
File "/media/data1/source/github/mantidproject/mslice/mslice/presenters/cut_plotter_presenter.py", line 33, in _plot_cut
plot_cut_impl(cut_ws, (cut.intensity_start, cut.intensity_end), plot_over, legend, en_conversion)
File "/media/data1/source/github/mantidproject/mslice/mslice/plotting/globalfiguremanager.py", line 371, in wrapper
return_value = function(*args, **kwargs)
File "/media/data1/source/github/mantidproject/mslice/mslice/views/cut_plotter.py", line 39, in plot_cut_impl
plot_over=plot_over, en_conversion=en_conversion)
File "/media/data1/source/github/mantidproject/mslice/mslice/cli/__init__.py", line 30, in errorbar
return Axes.errorbar(self, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/__init__.py", line 1814, in inner
return func(ax, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_axes.py", line 3016, in errorbar
l0, = self.plot(x, y, fmt, label='_nolegend_', **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/__init__.py", line 1814, in inner
return func(ax, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_axes.py", line 1424, in plot
for line in self._get_lines(*args, **kwargs):
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 386, in _grab_next_args
for seg in self._plot_args(remaining, kwargs):
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 374, in _plot_args
seg = func(x[:, j % ncx], y[:, j % ncy], kw, kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 281, in _makeline
self.set_lineprops(seg, **kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_base.py", line 189, in set_lineprops
line.set(**kwargs)
File "/usr/lib/python2.7/dist-packages/matplotlib/artist.py", line 936, in set
(self.__class__.__name__, k))
TypeError: There is no Line2D property "plot_over"
```
Cuts from MARI data seem to be fine still. | non_defect | cuts from instruments with psd tubes are not working i m not sure if this has broken here but i can no longer do a cut from a data set with psd tubes i tried the data set the steps i took were load the file perform a slice with default parameters perform a cut with parameters along q from to over deltae from to hit plot an error is generated python traceback most recent call last file media source github mantidproject mslice mslice widgets cut cut py line in btn clicked self presenter notify command file media source github mantidproject mslice mslice presenters validation decorators py line in wrapper return function args kwargs file media source github mantidproject mslice mslice presenters cut widget presenter py line in notify self cut file media source github mantidproject mslice mslice presenters cut widget presenter py line in cut self cut plotter presenter run cut workspace cut params plot over plot over save only save only file media source github mantidproject mslice mslice presenters cut plotter presenter py line in run cut self plot with width workspace cut plot over file media source github mantidproject mslice mslice presenters cut plotter presenter py line in plot with width self plot cut workspace cut plot over file media source github mantidproject mslice mslice presenters cut plotter presenter py line in plot cut plot cut impl cut ws cut intensity start cut intensity end plot over legend en conversion file media source github mantidproject mslice mslice plotting globalfiguremanager py line in wrapper return value function args kwargs file media source github mantidproject mslice mslice views cut plotter py line in plot cut impl plot over plot over en conversion en conversion file media source github mantidproject mslice mslice cli init py line in errorbar return axes errorbar self args kwargs file usr lib dist packages matplotlib init py line in inner return func ax args kwargs file usr lib 
dist packages matplotlib axes axes py line in errorbar self plot x y fmt label nolegend kwargs file usr lib dist packages matplotlib init py line in inner return func ax args kwargs file usr lib dist packages matplotlib axes axes py line in plot for line in self get lines args kwargs file usr lib dist packages matplotlib axes base py line in grab next args for seg in self plot args remaining kwargs file usr lib dist packages matplotlib axes base py line in plot args seg func x y kw kwargs file usr lib dist packages matplotlib axes base py line in makeline self set lineprops seg kwargs file usr lib dist packages matplotlib axes base py line in set lineprops line set kwargs file usr lib dist packages matplotlib artist py line in set self class name k typeerror there is no property plot over cuts from mari data seem to be fine still | 0 |
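The traceback in the record above bottoms out in matplotlib rejecting a wrapper-level keyword: `plot_over` is forwarded all the way down into `Axes.errorbar`, which treats any unrecognised kwarg as a Line2D property and raises `TypeError`. A minimal sketch of the usual fix — pop wrapper options out of `kwargs` before delegating — with hypothetical stand-in names (`errorbar_strict` mimics matplotlib's strict kwarg check; `plot_cut` mimics the mslice wrapper):

```python
def errorbar_strict(x, y, **kwargs):
    # Stand-in for matplotlib's Axes.errorbar: any kwarg it does not
    # recognise is treated as a Line2D property and raises TypeError.
    allowed = {"label", "color", "fmt"}
    for key in kwargs:
        if key not in allowed:
            raise TypeError('There is no Line2D property "%s"' % key)
    return list(zip(x, y))


def plot_cut(x, y, **kwargs):
    # Pop the wrapper-level options first, so only genuine matplotlib
    # kwargs are forwarded down the call chain.
    plot_over = kwargs.pop("plot_over", False)
    en_conversion = kwargs.pop("en_conversion", None)
    line = errorbar_strict(x, y, **kwargs)
    return line, plot_over, en_conversion
```

`plot_cut([1, 2], [3, 4], plot_over=True, label="cut")` then succeeds, while passing `plot_over` straight to `errorbar_strict` reproduces the `TypeError` from the report.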
14,738 | 2,831,388,863 | IssuesEvent | 2015-05-24 15:53:41 | nobodyguy/dslrdashboard | https://api.github.com/repos/nobodyguy/dslrdashboard | closed | Focus Bracketing - two exposures for each focus point? | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. Set up focus bracketing
2. Start sequence
3.
What is the expected output? What do you see instead? If I set the software to
make five exposures, I get ten instead; two exposures for each focus point
(slice). Otherwise everything seems OK. Have I done something wrong?
What version of the product are you using? Not sure of software version, but
the program was downloaded only this week from Google Play.
On what operating system? Android 4.3 on a Samsung Galaxy S3 phone.
Please provide any additional information below.
Camera is a Nikon D800 with a 24-85mm f/3.5-4.5 G AF VR zoom
```
Original issue reported on code.google.com by `robbohl2...@gmail.com` on 30 Jan 2014 at 9:50 | 1.0 | Focus Bracketing - two exposures for each focus point? - ```
What steps will reproduce the problem?
1. Set up focus bracketing
2. Start sequence
3.
What is the expected output? What do you see instead? If I set the software to
make five exposures, I get ten instead; two exposures for each focus point
(slice). Otherwise everything seems OK. Have I done something wrong?
What version of the product are you using? Not sure of software version, but
the program was downloaded only this week from Google Play.
On what operating system? Android 4.3 on a Samsung Galaxy S3 phone.
Please provide any additional information below.
Camera is a Nikon D800 with a 24-85mm f/3.5-4.5 G AF VR zoom
```
Original issue reported on code.google.com by `robbohl2...@gmail.com` on 30 Jan 2014 at 9:50 | defect | focus bracketing two exposures for each focus point what steps will reproduce the problem set up focus bracketing start sequence what is the expected output what do you see instead if i set the software to make five exposures i get ten instead two exposures for each focus point slice otherwise everything seems ok have i done something wrong what version of the product are you using not sure of software version but the program was downloaded only this week from google play on what operating system android on a samsung galaxy phone please provide any additional information below camera is a nikon with a f g af vr zoom original issue reported on code google com by gmail com on jan at | 1 |
57,973 | 16,236,175,081 | IssuesEvent | 2021-05-07 01:12:44 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | We don't set device display names when registering | A-E2EE A-Registration A-Session-Mgmt P1 S-Major T-Defect | Only when logging in.
The fun part about this is that we can easily send the parameter, but because they are cached by the server, the device you click the registration email link on will get the display name from the one you started the registration on. :/ | 1.0 | We don't set device display names when registering - Only when logging in.
The fun part about this is that we can easily send the parameter, but because they are cached by the server, the device you click the registration email link on will get the display name from the one you started the registration on. :/ | defect | we don t set device display names when registering only when logging in the fun part about this is that we can easily send the parameter but because they are cached by the server the device you click the registration email link on will get the display name from the one you started the registration on | 1 |
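For context on "we can easily send the parameter": the Matrix client-server API accepts `initial_device_display_name` on `/register` just as it does on `/login`. A sketch of a registration body that sets it — the helper function and the dummy auth stage are illustrative, not Element's actual code:

```python
def registration_payload(username, password, device_display_name):
    # initial_device_display_name names the device created by /register,
    # mirroring what is already sent on /login.
    return {
        "username": username,
        "password": password,
        "initial_device_display_name": device_display_name,
        "auth": {"type": "m.login.dummy"},
    }
```

The catch described in the record remains: interactive-auth parameters are cached server-side across the email-verification round trip, so the display name can come from the device that started registration rather than the one that finished it.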
52,720 | 13,224,969,199 | IssuesEvent | 2020-08-17 20:13:17 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | closed | Too many back-to-back short runs in Pdaq cause PnF processing to stall (Trac #222) | Migrated from Trac defect jeb + pnf | Too many short pdaq runs (that actually produce a few events each) cause PnF processing to slow to a crawl.
Run transitions are hard at PnF, with clients needing to request new GCD, etc, server needing to close open files, etc...
Need to manage this transition better. Options:
1. Improve GCDispatch, so it caches several recent runs, so a client flopping between runs isn't a problem
2. Make PFServer block data from the new run until it finishes the current run. May slow transitions a bit, but might be more robust.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/222">https://code.icecube.wisc.edu/projects/icecube/ticket/222</a>, reported by blaufuss and owned by tschmidt</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-05-25T13:41:49",
"_ts": "1337953309000000",
"description": "Too many short pdaq runs (that actually produce a few events each) cause PnF processing to slow to a crawl. \n\nRun transitions are hard at PnF, with clients needing to request new GCD, etc, server needing to close open files, etc...\n\nNeed to manage this transition better. Options:\n\n1. Improve GCDispatch, so it caches several recent runs, so a client flopping between runs isn't a problem\n\n2. Make PFServer block data from new run until it finishes current run. May slow transitons a bit, but might be more robust.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2010-12-01T16:43:51",
"component": "jeb + pnf",
"summary": "Too many back-to-back short runs in Pdaq cause PnF processing to stall",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
| 1.0 | Too many back-to-back short runs in Pdaq cause PnF processing to stall (Trac #222) - Too many short pdaq runs (that actually produce a few events each) cause PnF processing to slow to a crawl.
Run transitions are hard at PnF, with clients needing to request new GCD, etc, server needing to close open files, etc...
Need to manage this transition better. Options:
1. Improve GCDispatch, so it caches several recent runs, so a client flopping between runs isn't a problem
2. Make PFServer block data from the new run until it finishes the current run. May slow transitions a bit, but might be more robust.
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/222">https://code.icecube.wisc.edu/projects/icecube/ticket/222</a>, reported by blaufuss and owned by tschmidt</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2012-05-25T13:41:49",
"_ts": "1337953309000000",
"description": "Too many short pdaq runs (that actually produce a few events each) cause PnF processing to slow to a crawl. \n\nRun transitions are hard at PnF, with clients needing to request new GCD, etc, server needing to close open files, etc...\n\nNeed to manage this transition better. Options:\n\n1. Improve GCDispatch, so it caches several recent runs, so a client flopping between runs isn't a problem\n\n2. Make PFServer block data from new run until it finishes current run. May slow transitons a bit, but might be more robust.",
"reporter": "blaufuss",
"cc": "",
"resolution": "fixed",
"time": "2010-12-01T16:43:51",
"component": "jeb + pnf",
"summary": "Too many back-to-back short runs in Pdaq cause PnF processing to stall",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "tschmidt",
"type": "defect"
}
```
</p>
</details>
| defect | too many back to back short runs in pdaq cause pnf processing to stall trac too many short pdaq runs that actually produce a few events each cause pnf processing to slow to a crawl run transitions are hard at pnf with clients needing to request new gcd etc server needing to close open files etc need to manage this transition better options improve gcdispatch so it caches several recent runs so a client flopping between runs isn t a problem make pfserver block data from new run until it finishes current run may slow transitons a bit but might be more robust migrated from json status closed changetime ts description too many short pdaq runs that actually produce a few events each cause pnf processing to slow to a crawl n nrun transitions are hard at pnf with clients needing to request new gcd etc server needing to close open files etc n nneed to manage this transition better options n improve gcdispatch so it caches several recent runs so a client flopping between runs isn t a problem n make pfserver block data from new run until it finishes current run may slow transitons a bit but might be more robust reporter blaufuss cc resolution fixed time component jeb pnf summary too many back to back short runs in pdaq cause pnf processing to stall priority normal keywords milestone owner tschmidt type defect | 1 |
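Option 1 in the record above amounts to a small LRU cache keyed by run number. A toy sketch (the class name and the `loader` callable are hypothetical, not the real GCDispatch code):

```python
from collections import OrderedDict


class RecentRunCache:
    """Keep GCD payloads for the N most recent runs, so a client
    flopping between back-to-back short runs is served from memory."""

    def __init__(self, loader, max_runs=3):
        self._loader = loader      # callable: run_number -> GCD payload
        self._max_runs = max_runs
        self._runs = OrderedDict()

    def get(self, run_number):
        if run_number in self._runs:
            self._runs.move_to_end(run_number)   # mark most recently used
        else:
            self._runs[run_number] = self._loader(run_number)
            if len(self._runs) > self._max_runs:
                self._runs.popitem(last=False)   # evict the oldest run
        return self._runs[run_number]
```

Flopping between two cached runs then costs one load per run rather than one load per transition.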
506,480 | 14,666,006,134 | IssuesEvent | 2020-12-29 15:25:16 | gnosis/conditional-tokens-explorer | https://api.github.com/repos/gnosis/conditional-tokens-explorer | opened | Transaction is failed when merge 'deeper' positions | Medium priority bug | Related to #789
1. Create a condition
2. split positions for it (C1P1+C1P2)
3. create condition C2
4. split: condition C2+ select position option (and use C1P1). Do not split the whole amount. split 1 token, as an example (**C2C1P1-1** +**C2C1P1-2** + **C2C1P1-3**)
5. Merge **C2C1P1-1** +**C2C1P1-2** = **C2C1P1-1/2**
6. Try to merge **C2C1P1-1/2** +**C2C1P1-3**
**AR:** impossible; see the image

**ER:** transaction is successful, tokens return to the position C1P1
| 1.0 | Transaction is failed when merge 'deeper' positions - Related to #789
1. Create a condition
2. split positions for it (C1P1+C1P2)
3. create condition C2
4. split: condition C2+ select position option (and use C1P1). Do not split the whole amount. split 1 token, as an example (**C2C1P1-1** +**C2C1P1-2** + **C2C1P1-3**)
5. Merge **C2C1P1-1** +**C2C1P1-2** = **C2C1P1-1/2**
6. Try to merge **C2C1P1-1/2** +**C2C1P1-3**
**AR:** impossible; see the image

**ER:** transaction is successful, tokens return to the position C1P1
| non_defect | transaction is failed when merge deeper positions related to create a condition split positions for it create condition split condition select position option and use do not split the whole amount split token as an example merge try to merge ar impossible see the image er transaction is successful tokens return back to the position | 0 |
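For readers unfamiliar with the bookkeeping behind the repro above: splitting a position burns the parent and mints an equal amount of every outcome position, and a full-set merge burns an equal amount of each outcome to recover the parent. A toy model of that invariant in plain dicts (not the actual Gnosis conditional-tokens contract interface, and intermediate partial-merge positions such as C2C1P1-1/2 are not modelled):

```python
def split(balances, parent, outcomes, amount):
    # Burn `amount` of the parent position, mint `amount` of each outcome.
    assert balances.get(parent, 0) >= amount, "insufficient parent balance"
    balances[parent] = balances.get(parent, 0) - amount
    for pos in outcomes:
        balances[pos] = balances.get(pos, 0) + amount


def merge(balances, parent, outcomes, amount):
    # The full outcome set must be burned together to recover the parent.
    for pos in outcomes:
        assert balances.get(pos, 0) >= amount, "missing outcome position"
    for pos in outcomes:
        balances[pos] -= amount
    balances[parent] = balances.get(parent, 0) + amount
```

In the report, the expected behaviour is exactly this full-set case: once C2C1P1-1/2 and C2C1P1-3 together cover all outcomes, merging them should return tokens to the C1P1 position.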
27,064 | 6,813,359,348 | IssuesEvent | 2017-11-06 08:58:07 | BTDF/DeploymentFramework | https://api.github.com/repos/BTDF/DeploymentFramework | closed | Issue: SetUpFileAdapterPaths task removes existing permissions | bug CodePlexMigrationInitiated General Impact: Low Release 5.0 | One would expect the SetUpFileAdapterPaths task to preserve permissions on the folders, but SetUpFileAdapterPaths in SetUp mode removes existing permissions.
See this snippet from SetUpFileAdapterPaths.cs SetDirectorySecurity function:
// Add the FileSystemAccessRule to the security settings.
dSecurity.SetAccessRule(
    new FileSystemAccessRule(
        userName, rights,
        InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
        PropagationFlags.None, controlType));
To comply with the comment, and fix the issue, AddAccessRule should be used instead of the SetAccessRule function.
#### This work item was migrated from CodePlex
CodePlex work item ID: '8452'
Assigned to: 'tfabraham'
Vote count: '2'
| 1.0 | Issue: SetUpFileAdapterPaths task removes existing permissions - One would expect the SetUpFileAdapterPaths task to preserve permissions on the folders, but SetUpFileAdapterPaths in SetUp mode removes existing permissions.
See this snippet from SetUpFileAdapterPaths.cs SetDirectorySecurity function:
// Add the FileSystemAccessRule to the security settings.
dSecurity.SetAccessRule(
    new FileSystemAccessRule(
        userName, rights,
        InheritanceFlags.ContainerInherit | InheritanceFlags.ObjectInherit,
        PropagationFlags.None, controlType));
To comply with the comment, and fix the issue, AddAccessRule should be used instead of the SetAccessRule function.
#### This work item was migrated from CodePlex
CodePlex work item ID: '8452'
Assigned to: 'tfabraham'
Vote count: '2'
| non_defect | issue setupfileadapterpaths task removes existing permissions one would expect the setupfileadapterpaths task to preserve permissions on the folders but setupfileadapterpaths in setup mode removes existing permissions see this snippet from setupfileadapterpaths cs setdirectorysecurity function add the filesystemaccessrule to the security settings dsecurity setaccessrule new filesystemaccessrule username rights inheritanceflags containerinherit inheritanceflags objectinherit propagationflags none controltype to comply with the comment and fix the issue addaccessrule should be used instead of the setaccessrule function this work item was migrated from codeplex codeplex work item id assigned to tfabraham vote count | 0 |
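The fix in the record above rests on a .NET distinction: `SetAccessRule` replaces the existing access rules matching the new rule's identity, while `AddAccessRule` merges the new rule in alongside them. A toy Python model of that difference (simplified to `(identity, rights)` pairs; the real API also matches on access-control type and inheritance flags):

```python
class ToyDirectorySecurity:
    def __init__(self, rules=None):
        self.rules = list(rules or [])   # (identity, rights) pairs

    def set_access_rule(self, identity, rights):
        # SetAccessRule semantics: drop existing rules for this identity,
        # then install the new one -- prior permissions are lost.
        self.rules = [r for r in self.rules if r[0] != identity]
        self.rules.append((identity, rights))

    def add_access_rule(self, identity, rights):
        # AddAccessRule semantics: keep what is there, append the new rule.
        self.rules.append((identity, rights))
```

Starting from a folder where an identity already holds `Read`, `set_access_rule` with `Write` leaves only `Write`, whereas `add_access_rule` preserves `Read` — the behaviour the SetUpFileAdapterPaths comment promises.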
1,056 | 2,594,480,786 | IssuesEvent | 2015-02-20 04:03:44 | BALL-Project/ball | https://api.github.com/repos/BALL-Project/ball | opened | Transparency slider in Material settings tab is broken | C: VIEW P: minor T: defect | **Reported by nicste on 3 Apr 44253881 00:09 UTC**
The transparency slider in the "Material Settings" tab of the ModifyRepresentation dialog does not work. Transparency can only be changed through the transparency slider in the "Drawing mode" tab. | 1.0 | Transparency slider in Material settings tab is broken - **Reported by nicste on 3 Apr 44253881 00:09 UTC**
The transparency slider in the "Material Settings" tab of the ModifyRepresentation dialog does not work. Transparency can only be changed through the transparency slider in the "Drawing mode" tab. | defect | transparency slider in material settings tab is broken reported by nicste on apr utc the transparency slider in the material settings tab of the modifyrepresentation dialog does not work transparency can only be changed through the transparency slider in the drawing mode tab | 1 |
182,718 | 14,147,756,870 | IssuesEvent | 2020-11-10 21:20:39 | hashicorp/terraform-provider-aws | https://api.github.com/repos/hashicorp/terraform-provider-aws | closed | tests/provider: Daily tests of payer (Org) account acceptance tests | tests | <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Several tests are not run daily in any test account. These tests have to be run in a payer (Org) account and so are skipped in daily test runs in non payer accounts. They should probably be added to the daily Organizations test run.
```
TestAccAWSConfigConfigurationAggregator_organization
TestAccAWSConfigConfigurationAggregator_switch
TestAccAwsFmsAdminAccount_basic
TestAccAWSAccessAnalyzer_serial/Analyzer/Type_Organization testAccAWSAccessAnalyzerAnalyzer_Type_Organization
TestAccAWSCloudTrail_serial/Trail/isOrganization testAccAWSCloudTrail_is_organization
TestAccAWSConfig_serial/OrganizationCustomRule/basic testAccConfigOrganizationCustomRule_basic
TestAccAWSConfig_serial/OrganizationCustomRule/Description testAccConfigOrganizationCustomRule_Description
TestAccAWSConfig_serial/OrganizationCustomRule/disappears testAccConfigOrganizationCustomRule_disappears
TestAccAWSConfig_serial/OrganizationCustomRule/errorHandling testAccConfigOrganizationCustomRule_errorHandling
TestAccAWSConfig_serial/OrganizationCustomRule/ExcludedAccounts testAccConfigOrganizationCustomRule_ExcludedAccounts
TestAccAWSConfig_serial/OrganizationCustomRule/InputParameters testAccConfigOrganizationCustomRule_InputParameters
TestAccAWSConfig_serial/OrganizationCustomRule/LambdaFunctionArn testAccConfigOrganizationCustomRule_LambdaFunctionArn
TestAccAWSConfig_serial/OrganizationCustomRule/MaximumExecutionFrequency testAccConfigOrganizationCustomRule_MaximumExecutionFrequency
TestAccAWSConfig_serial/OrganizationCustomRule/ResourceIdScope testAccConfigOrganizationCustomRule_ResourceIdScope
TestAccAWSConfig_serial/OrganizationCustomRule/ResourceTypesScope testAccConfigOrganizationCustomRule_ResourceTypesScope
TestAccAWSConfig_serial/OrganizationCustomRule/TagKeyScope testAccConfigOrganizationCustomRule_TagKeyScope
TestAccAWSConfig_serial/OrganizationCustomRule/TagValueScope testAccConfigOrganizationCustomRule_TagValueScope
TestAccAWSConfig_serial/OrganizationCustomRule/TriggerTypes testAccConfigOrganizationCustomRule_TriggerTypes
TestAccAWSConfig_serial/OrganizationManagedRule/basic testAccConfigOrganizationManagedRule_basic
TestAccAWSConfig_serial/OrganizationManagedRule/Description testAccConfigOrganizationManagedRule_Description
TestAccAWSConfig_serial/OrganizationManagedRule/disappears testAccConfigOrganizationManagedRule_disappears
TestAccAWSConfig_serial/OrganizationManagedRule/errorHandling testAccConfigOrganizationManagedRule_errorHandling
TestAccAWSConfig_serial/OrganizationManagedRule/ExcludedAccounts testAccConfigOrganizationManagedRule_ExcludedAccounts
TestAccAWSConfig_serial/OrganizationManagedRule/InputParameters testAccConfigOrganizationManagedRule_InputParameters
TestAccAWSConfig_serial/OrganizationManagedRule/MaximumExecutionFrequency testAccConfigOrganizationManagedRule_MaximumExecutionFrequency
TestAccAWSConfig_serial/OrganizationManagedRule/ResourceIdScope testAccConfigOrganizationManagedRule_ResourceIdScope
TestAccAWSConfig_serial/OrganizationManagedRule/ResourceTypesScope testAccConfigOrganizationManagedRule_ResourceTypesScope
TestAccAWSConfig_serial/OrganizationManagedRule/RuleIdentifier testAccConfigOrganizationManagedRule_RuleIdentifier
TestAccAWSConfig_serial/OrganizationManagedRule/TagKeyScope testAccConfigOrganizationManagedRule_TagKeyScope
TestAccAWSConfig_serial/OrganizationManagedRule/TagValueScope testAccConfigOrganizationManagedRule_TagValueScope
TestAccAWSGuardDuty_serial/OrganizationAdminAccount/basic testAccAwsGuardDutyOrganizationAdminAccount_basic
TestAccAWSGuardDuty_serial/OrganizationConfiguration/basic testAccAwsGuardDutyOrganizationConfiguration_basic
``` | 1.0 | tests/provider: Daily tests of payer (Org) account acceptance tests - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Description
Several tests are not run daily in any test account. These tests have to be run in a payer (Org) account and so are skipped in daily test runs in non payer accounts. They should probably be added to the daily Organizations test run.
```
TestAccAWSConfigConfigurationAggregator_organization
TestAccAWSConfigConfigurationAggregator_switch
TestAccAwsFmsAdminAccount_basic
TestAccAWSAccessAnalyzer_serial/Analyzer/Type_Organization testAccAWSAccessAnalyzerAnalyzer_Type_Organization
TestAccAWSCloudTrail_serial/Trail/isOrganization testAccAWSCloudTrail_is_organization
TestAccAWSConfig_serial/OrganizationCustomRule/basic testAccConfigOrganizationCustomRule_basic
TestAccAWSConfig_serial/OrganizationCustomRule/Description testAccConfigOrganizationCustomRule_Description
TestAccAWSConfig_serial/OrganizationCustomRule/disappears testAccConfigOrganizationCustomRule_disappears
TestAccAWSConfig_serial/OrganizationCustomRule/errorHandling testAccConfigOrganizationCustomRule_errorHandling
TestAccAWSConfig_serial/OrganizationCustomRule/ExcludedAccounts testAccConfigOrganizationCustomRule_ExcludedAccounts
TestAccAWSConfig_serial/OrganizationCustomRule/InputParameters testAccConfigOrganizationCustomRule_InputParameters
TestAccAWSConfig_serial/OrganizationCustomRule/LambdaFunctionArn testAccConfigOrganizationCustomRule_LambdaFunctionArn
TestAccAWSConfig_serial/OrganizationCustomRule/MaximumExecutionFrequency testAccConfigOrganizationCustomRule_MaximumExecutionFrequency
TestAccAWSConfig_serial/OrganizationCustomRule/ResourceIdScope testAccConfigOrganizationCustomRule_ResourceIdScope
TestAccAWSConfig_serial/OrganizationCustomRule/ResourceTypesScope testAccConfigOrganizationCustomRule_ResourceTypesScope
TestAccAWSConfig_serial/OrganizationCustomRule/TagKeyScope testAccConfigOrganizationCustomRule_TagKeyScope
TestAccAWSConfig_serial/OrganizationCustomRule/TagValueScope testAccConfigOrganizationCustomRule_TagValueScope
TestAccAWSConfig_serial/OrganizationCustomRule/TriggerTypes testAccConfigOrganizationCustomRule_TriggerTypes
TestAccAWSConfig_serial/OrganizationManagedRule/basic testAccConfigOrganizationManagedRule_basic
TestAccAWSConfig_serial/OrganizationManagedRule/Description testAccConfigOrganizationManagedRule_Description
TestAccAWSConfig_serial/OrganizationManagedRule/disappears testAccConfigOrganizationManagedRule_disappears
TestAccAWSConfig_serial/OrganizationManagedRule/errorHandling testAccConfigOrganizationManagedRule_errorHandling
TestAccAWSConfig_serial/OrganizationManagedRule/ExcludedAccounts testAccConfigOrganizationManagedRule_ExcludedAccounts
TestAccAWSConfig_serial/OrganizationManagedRule/InputParameters testAccConfigOrganizationManagedRule_InputParameters
TestAccAWSConfig_serial/OrganizationManagedRule/MaximumExecutionFrequency testAccConfigOrganizationManagedRule_MaximumExecutionFrequency
TestAccAWSConfig_serial/OrganizationManagedRule/ResourceIdScope testAccConfigOrganizationManagedRule_ResourceIdScope
TestAccAWSConfig_serial/OrganizationManagedRule/ResourceTypesScope testAccConfigOrganizationManagedRule_ResourceTypesScope
TestAccAWSConfig_serial/OrganizationManagedRule/RuleIdentifier testAccConfigOrganizationManagedRule_RuleIdentifier
TestAccAWSConfig_serial/OrganizationManagedRule/TagKeyScope testAccConfigOrganizationManagedRule_TagKeyScope
TestAccAWSConfig_serial/OrganizationManagedRule/TagValueScope testAccConfigOrganizationManagedRule_TagValueScope
TestAccAWSGuardDuty_serial/OrganizationAdminAccount/basic testAccAwsGuardDutyOrganizationAdminAccount_basic
TestAccAWSGuardDuty_serial/OrganizationConfiguration/basic testAccAwsGuardDutyOrganizationConfiguration_basic
``` | non_defect | tests provider daily tests of payer org account acceptance tests community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment description several tests are not run daily in any test account these tests have to be run in a payer org account and so are skipped in daily test runs in non payer accounts they should probably be added to the daily organizations test run testaccawsconfigconfigurationaggregator organization testaccawsconfigconfigurationaggregator switch testaccawsfmsadminaccount basic testaccawsaccessanalyzer serial analyzer type organization testaccawsaccessanalyzeranalyzer type organization testaccawscloudtrail serial trail isorganization testaccawscloudtrail is organization testaccawsconfig serial organizationcustomrule basic testaccconfigorganizationcustomrule basic testaccawsconfig serial organizationcustomrule description testaccconfigorganizationcustomrule description testaccawsconfig serial organizationcustomrule disappears testaccconfigorganizationcustomrule disappears testaccawsconfig serial organizationcustomrule errorhandling testaccconfigorganizationcustomrule errorhandling testaccawsconfig serial organizationcustomrule excludedaccounts testaccconfigorganizationcustomrule excludedaccounts testaccawsconfig serial organizationcustomrule inputparameters testaccconfigorganizationcustomrule inputparameters testaccawsconfig serial organizationcustomrule lambdafunctionarn testaccconfigorganizationcustomrule lambdafunctionarn testaccawsconfig serial organizationcustomrule maximumexecutionfrequency testaccconfigorganizationcustomrule maximumexecutionfrequency testaccawsconfig serial 
organizationcustomrule resourceidscope testaccconfigorganizationcustomrule resourceidscope testaccawsconfig serial organizationcustomrule resourcetypesscope testaccconfigorganizationcustomrule resourcetypesscope testaccawsconfig serial organizationcustomrule tagkeyscope testaccconfigorganizationcustomrule tagkeyscope testaccawsconfig serial organizationcustomrule tagvaluescope testaccconfigorganizationcustomrule tagvaluescope testaccawsconfig serial organizationcustomrule triggertypes testaccconfigorganizationcustomrule triggertypes testaccawsconfig serial organizationmanagedrule basic testaccconfigorganizationmanagedrule basic testaccawsconfig serial organizationmanagedrule description testaccconfigorganizationmanagedrule description testaccawsconfig serial organizationmanagedrule disappears testaccconfigorganizationmanagedrule disappears testaccawsconfig serial organizationmanagedrule errorhandling testaccconfigorganizationmanagedrule errorhandling testaccawsconfig serial organizationmanagedrule excludedaccounts testaccconfigorganizationmanagedrule excludedaccounts testaccawsconfig serial organizationmanagedrule inputparameters testaccconfigorganizationmanagedrule inputparameters testaccawsconfig serial organizationmanagedrule maximumexecutionfrequency testaccconfigorganizationmanagedrule maximumexecutionfrequency testaccawsconfig serial organizationmanagedrule resourceidscope testaccconfigorganizationmanagedrule resourceidscope testaccawsconfig serial organizationmanagedrule resourcetypesscope testaccconfigorganizationmanagedrule resourcetypesscope testaccawsconfig serial organizationmanagedrule ruleidentifier testaccconfigorganizationmanagedrule ruleidentifier testaccawsconfig serial organizationmanagedrule tagkeyscope testaccconfigorganizationmanagedrule tagkeyscope testaccawsconfig serial organizationmanagedrule tagvaluescope testaccconfigorganizationmanagedrule tagvaluescope testaccawsguardduty serial organizationadminaccount basic 
testaccawsguarddutyorganizationadminaccount basic testaccawsguardduty serial organizationconfiguration basic testaccawsguarddutyorganizationconfiguration basic | 0 |
641,412 | 20,826,031,669 | IssuesEvent | 2022-03-18 21:01:31 | monarch-initiative/mondo | https://api.github.com/repos/monarch-initiative/mondo | closed | MONDO:0013907 bilateral generalized polymicrogyria | Revise subclass relabel term high priority | **Mondo term (ID and Label):**
MONDO:0013907 bilateral generalized polymicrogyria
**Suggested new label:**
MICROCEPHALY, SHORT STATURE, AND POLYMICROGYRIA WITH OR WITHOUT SEIZURES ??
**Your nano-attribution (ORCID)**
If you don't have an ORCID, you can sign up for one [here](https://orcid.org/)
**_Optional_: Any additional information (like supporting evidence, PubMed ID, etc.)**
The current preferred name from Mondo conflicts with https://hpo.jax.org/app/browse/term/HP:0032410 "Bilateral generalized polymicrogyria" but Mondo does not assert this term maps as similar to the HPO term.
I would suggest using the preferred name from OMIM for this entry (many "related" synonyms are OMIM names). But I am having a hard time telling if this term or the parent term (MONDO:0018764, microcephalic primordial dwarfism due to RTTN deficiency) is intended as the exact match for the MIM entry. If the term from GARD (https://rarediseases.info.nih.gov/diseases/10786/bilateral-generalized-polymicrogyria) and the Orphanet terms on MONDO:0013907 aren't an exact match for the OMIM entry, then maybe this should be restructured so the GARD-linked concept is the parent (broader) concept instead of the child?
| 1.0 | MONDO:0013907 bilateral generalized polymicrogyria - **Mondo term (ID and Label):**
MONDO:0013907 bilateral generalized polymicrogyria
**Suggested new label:**
MICROCEPHALY, SHORT STATURE, AND POLYMICROGYRIA WITH OR WITHOUT SEIZURES ??
**Your nano-attribution (ORCID)**
If you don't have an ORCID, you can sign up for one [here](https://orcid.org/)
**_Optional_: Any additional information (like supporting evidence, PubMed ID, etc.)**
The current preferred name from Mondo conflicts with https://hpo.jax.org/app/browse/term/HP:0032410 "Bilateral generalized polymicrogyria" but Mondo does not assert this term maps as similar to the HPO term.
I would suggest using the preferred name from OMIM for this entry (many "related" synonyms are OMIM names) But I am having a hard time telling if this term or the parent term (MONDO:0018764 , microcephalic primordial dwarfism due to RTTN deficiency) is intended as the exact match the MIM entry? If the term from GARD (https://rarediseases.info.nih.gov/diseases/10786/bilateral-generalized-polymicrogyria) and the Orphanet terms on MONDO:0013907 aren't an exact match the OMIM entry, then maybe this should be restructured so the GARD-linked concept is the parent (broader) concept instead of the child?
| non_defect | mondo bilateral generalized polymicrogyria mondo term id and label mondo bilateral generalized polymicrogyria suggested new label microcephaly short stature and polymicrogyria with or without seizures your nano attribution orcid if you don t have an orcid you can sign up for one optional any additional information like supporting evidence pubmed id etc the current preferred name from mondo conflicts with bilateral generalized polymicrogyria but mondo does not assert this term maps as similar to the hpo term i would suggest using the preferred name from omim for this entry many related synonyms are omim names but i am having a hard time telling if this term or the parent term mondo microcephalic primordial dwarfism due to rttn deficiency is intended as the exact match the mim entry if the term from gard and the orphanet terms on mondo aren t an exact match the omim entry then maybe this should be restructured so the gard linked concept is the parent broader concept instead of the child | 0 |
375,766 | 11,134,188,038 | IssuesEvent | 2019-12-20 11:06:36 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.0 staging-1304] Delete game.db file during migration. | Medium Priority | For migration from 0.8.3.1 to 0.9.0 you should delete game.db file.
You need to make this an automatic process during migration because you have error if you don't delete this file. | 1.0 | [0.9.0 staging-1304] Delete game.db file during migration. - For migration from 0.8.3.1 to 0.9.0 you should delete game.db file.
You need to make this an automatic process during migration because you have error if you don't delete this file. | non_defect | delete game db file during migration for migration from to you should delete game db file you need to make this an automatic process during migration because you have error if you don t delete this file | 0 |
76,138 | 26,258,862,350 | IssuesEvent | 2023-01-06 05:04:25 | DependencyTrack/dependency-track | https://api.github.com/repos/DependencyTrack/dependency-track | closed | GHSA vulenrabilities not showing. | defect in triage | ### Current Behavior
Although I have enabled GHSA and all the GHSA vuln DB has been in the Dependency track dashboard. But it's not detecting the vulnerabilities. After enabling, Do I have to perform any extra steps?
As you can see from the attached screenshot, GHSA-36jr-mh4h-2g58 has not been detected. It's just an example, I have 30 repos, none of them have GHSA-related issue.



### Steps to Reproduce
1. install dependency-track.
2. enabled GHSA.
3. upload SBOM.
### Expected Behavior
find
### Dependency-Track Version
4.5.x
### Dependency-Track Distribution
Container Image
### Database Server
PostgreSQL
### Database Server Version
_No response_
### Browser
Google Chrome
### Checklist
- [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues)
- [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported | 1.0 | GHSA vulenrabilities not showing. - ### Current Behavior
Although I have enabled GHSA and all the GHSA vuln DB has been in the Dependency track dashboard. But it's not detecting the vulnerabilities. After enabling, Do I have to perform any extra steps?
As you can see from the attached screenshot, GHSA-36jr-mh4h-2g58 has not been detected. It's just an example, I have 30 repos, none of them have GHSA-related issue.



### Steps to Reproduce
1. install dependency-track.
2. enabled GHSA.
3. upload SBOM.
### Expected Behavior
find
### Dependency-Track Version
4.5.x
### Dependency-Track Distribution
Container Image
### Database Server
PostgreSQL
### Database Server Version
_No response_
### Browser
Google Chrome
### Checklist
- [X] I have read and understand the [contributing guidelines](https://github.com/DependencyTrack/dependency-track/blob/master/CONTRIBUTING.md#filing-issues)
- [X] I have checked the [existing issues](https://github.com/DependencyTrack/dependency-track/issues) for whether this defect was already reported | defect | ghsa vulenrabilities not showing current behavior although i have enabled ghsa and all the ghsa vuln db has been in the dependency track dashboard but it s not detecting the vulnerabilities after enabling do i have to perform any extra steps as you can see from the attached screenshot ghsa has not been detected it s just an example i have repos none of them have ghsa related issue steps to reproduce install dependency track enabled ghsa upload sbom expected behavior find dependency track version x dependency track distribution container image database server postgresql database server version no response browser google chrome checklist i have read and understand the i have checked the for whether this defect was already reported | 1 |
37,172 | 8,273,645,447 | IssuesEvent | 2018-09-17 06:58:08 | hazelcast/hazelcast | https://api.github.com/repos/hazelcast/hazelcast | closed | com.hazelcast.core.OperationTimeoutException: MapSizeOperation invocation failed to complete due to operation-heartbeat-timeout. | Estimation: S Module: IMap Module: Query Source: Community Team: Core Type: Defect | Hi,
We are using Hazelcast 3.7.4 in embedded version in our application which is deployed on 8 JVM,s across 4 physical servers(2 instances on each server). When the application was deployed everything was fine and we did not see any problem. On app start up the logic will cache the values to the Hazelcast IMAP. After this there is no write to the cache again and all the other operations only read from this cache.
Last time when we had to back out there were hung threads as we had updated the cache and we could see that when this was being replicated in the cluster there were hung threads. But that was the last time.
This time there was no write operation on the cache after the onetime load on app start up.
This is the 3rd version of Hazelcast (3.6.1, 3.6.3, 3.7.4) that we had to back out from production.
We need support on this issue as it is impacting production.
We started receiving the below exception in production continuously and then the CPU usage shooted to 100% up after this exceptions started showing up. The server CPU was being utilized at 100% for quite sometime which does not happen usually. We had to revert back Hazelcast implementation again.
```
com.hazelcast.core.OperationTimeoutException: MapSizeOperation invocation failed to complete due to operation-heartbeat-timeout. Current time:
2017-01-18 15:02:56.396. Total elapsed time: 121000 ms. Last operation heartbeat: never. Last operation heartbeat from member: 2017-01-18 15:
02:43.810. Invocation{op=com.hazelcast.map.impl.operation.MapSizeOperation{serviceName='hz:impl:mapService', identityHash=1386528744, partitio
nId=9, replicaIndex=0, callId=0, invocationTime=1484773255395 (2017-01-18 15:00:55.395), waitTimeout=-1, callTimeout=60000, name=VALUE_M
AP}, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1484773255396, firstInvocationTime='2017-
01-18 15:00:55.396', lastHeartbeatMillis=0, lastHeartbeatTime='1969-12-31 18:00:00.000', target=[SERVER2]:5701, pendingResponse=
{VOID}, backupsAcksExpected=0, backupsAcksReceived=0, connection=Connection[id=12, /111.111.111.111:5700->/111.111.111.112:34519, endpoint=[SERVER2]
:5701, alive=true, type=MEMBER]}
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.newOperationTimeoutException(InvocationFuture.java:150)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolve(InvocationFuture.java:98)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveAndThrow(InvocationFuture.java:74)
at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:158)
at com.hazelcast.spi.impl.operationservice.impl.InvokeOnPartitions.retryFailedPartitions(InvokeOnPartitions.java:139)
at com.hazelcast.spi.impl.operationservice.impl.InvokeOnPartitions.invoke(InvokeOnPartitions.java:70)
at com.hazelcast.spi.impl.operationservice.impl.OperationServiceImpl.invokeOnAllPartitions(OperationServiceImpl.java:359)
at com.hazelcast.map.impl.proxy.MapProxySupport.size(MapProxySupport.java:630)
at com.hazelcast.map.impl.proxy.MapProxyImpl.size(MapProxyImpl.java:82)
``` | 1.0 | com.hazelcast.core.OperationTimeoutException: MapSizeOperation invocation failed to complete due to operation-heartbeat-timeout. - Hi,
We are using Hazelcast 3.7.4 in embedded version in our application which is deployed on 8 JVM,s across 4 physical servers(2 instances on each server). When the application was deployed everything was fine and we did not see any problem. On app start up the logic will cache the values to the Hazelcast IMAP. After this there is no write to the cache again and all the other operations only read from this cache.
Last time when we had to back out there were hung threads as we had updated the cache and we could see that when this was being replicated in the cluster there were hung threads. But that was the last time.
This time there was no write operation on the cache after the onetime load on app start up.
This is the 3rd version of Hazelcast (3.6.1, 3.6.3, 3.7.4) that we had to back out from production.
We need support on this issue as it is impacting production.
We started receiving the below exception in production continuously and then the CPU usage shooted to 100% up after this exceptions started showing up. The server CPU was being utilized at 100% for quite sometime which does not happen usually. We had to revert back Hazelcast implementation again.
```
com.hazelcast.core.OperationTimeoutException: MapSizeOperation invocation failed to complete due to operation-heartbeat-timeout. Current time:
2017-01-18 15:02:56.396. Total elapsed time: 121000 ms. Last operation heartbeat: never. Last operation heartbeat from member: 2017-01-18 15:
02:43.810. Invocation{op=com.hazelcast.map.impl.operation.MapSizeOperation{serviceName='hz:impl:mapService', identityHash=1386528744, partitio
nId=9, replicaIndex=0, callId=0, invocationTime=1484773255395 (2017-01-18 15:00:55.395), waitTimeout=-1, callTimeout=60000, name=VALUE_M
AP}, tryCount=250, tryPauseMillis=500, invokeCount=1, callTimeoutMillis=60000, firstInvocationTimeMs=1484773255396, firstInvocationTime='2017-
01-18 15:00:55.396', lastHeartbeatMillis=0, lastHeartbeatTime='1969-12-31 18:00:00.000', target=[SERVER2]:5701, pendingResponse=
{VOID}, backupsAcksExpected=0, backupsAcksReceived=0, connection=Connection[id=12, /111.111.111.111:5700->/111.111.111.112:34519, endpoint=[SERVER2]
:5701, alive=true, type=MEMBER]}
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.newOperationTimeoutException(InvocationFuture.java:150)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolve(InvocationFuture.java:98)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveAndThrow(InvocationFuture.java:74)
at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:158)
at com.hazelcast.spi.impl.operationservice.impl.InvokeOnPartitions.retryFailedPartitions(InvokeOnPartitions.java:139)
at com.hazelcast.spi.impl.operationservice.impl.InvokeOnPartitions.invoke(InvokeOnPartitions.java:70)
at com.hazelcast.spi.impl.operationservice.impl.OperationServiceImpl.invokeOnAllPartitions(OperationServiceImpl.java:359)
at com.hazelcast.map.impl.proxy.MapProxySupport.size(MapProxySupport.java:630)
at com.hazelcast.map.impl.proxy.MapProxyImpl.size(MapProxyImpl.java:82)
``` | defect | com hazelcast core operationtimeoutexception mapsizeoperation invocation failed to complete due to operation heartbeat timeout hi we are using hazelcast in embedded version in our application which is deployed on jvm s across physical servers instances on each server when the application was deployed everything was fine and we did not see any problem on app start up the logic will cache the values to the hazelcast imap after this there is no write to the cache again and all the other operations only read from this cache last time when we had to back out there were hung threads as we had updated the cache and we could see that when this was being replicated in the cluster there were hung threads but that was the last time this time there was no write operation on the cache after the onetime load on app start up this is the version of hazelcast that we had to back out from production we need support on this issue as it is impacting production we started receiving the below exception in production continuously and then the cpu usage shooted to up after this exceptions started showing up the server cpu was being utilized at for quite sometime which does not happen usually we had to revert back hazelcast implementation again com hazelcast core operationtimeoutexception mapsizeoperation invocation failed to complete due to operation heartbeat timeout current time total elapsed time ms last operation heartbeat never last operation heartbeat from member invocation op com hazelcast map impl operation mapsizeoperation servicename hz impl mapservice identityhash partitio nid replicaindex callid invocationtime waittimeout calltimeout name value m ap trycount trypausemillis invokecount calltimeoutmillis firstinvocationtimems firstinvocationtime lastheartbeatmillis lastheartbeattime target pendingresponse void backupsacksexpected backupsacksreceived connection connection alive true type member at com hazelcast spi impl operationservice impl invocationfuture 
newoperationtimeoutexception invocationfuture java at com hazelcast spi impl operationservice impl invocationfuture resolve invocationfuture java at com hazelcast spi impl operationservice impl invocationfuture resolveandthrow invocationfuture java at com hazelcast spi impl abstractinvocationfuture get abstractinvocationfuture java at com hazelcast spi impl operationservice impl invokeonpartitions retryfailedpartitions invokeonpartitions java at com hazelcast spi impl operationservice impl invokeonpartitions invoke invokeonpartitions java at com hazelcast spi impl operationservice impl operationserviceimpl invokeonallpartitions operationserviceimpl java at com hazelcast map impl proxy mapproxysupport size mapproxysupport java at com hazelcast map impl proxy mapproxyimpl size mapproxyimpl java | 1 |
72,712 | 24,253,541,378 | IssuesEvent | 2022-09-27 15:51:18 | vector-im/element-web | https://api.github.com/repos/vector-im/element-web | closed | Joining a room from a space causes an error and requires cache to be cleared | T-Defect | ### Steps to reproduce
1. Navigate to the home page of a space.
2. Attempt to join a room.
3. Wait, and get presented with an error. "Something went wrong!"
### Outcome
#### What did you expect?
I expected to be able to join the room from the space without issue.
#### What happened instead?
An error occured and I was presented with "Something went wrong!". This happens whenever trying to join a room from a space. If I clear the cache, everything reflects normally and the room was actually joined successfully. I find myself having to clear cache quite frequently just to do things like joining rooms.

### Operating system
Windows 11
### Application version
Element version: 1.11.5 Olm version: 3.2.12
### How did you install the app?
From https://element.io/get-started
### Homeserver
Dendrite 0.9.9
### Will you send logs?
Yes | 1.0 | Joining a room from a space causes an error and requires cache to be cleared - ### Steps to reproduce
1. Navigate to the home page of a space.
2. Attempt to join a room.
3. Wait, and get presented with an error. "Something went wrong!"
### Outcome
#### What did you expect?
I expected to be able to join the room from the space without issue.
#### What happened instead?
An error occured and I was presented with "Something went wrong!". This happens whenever trying to join a room from a space. If I clear the cache, everything reflects normally and the room was actually joined successfully. I find myself having to clear cache quite frequently just to do things like joining rooms.

### Operating system
Windows 11
### Application version
Element version: 1.11.5 Olm version: 3.2.12
### How did you install the app?
From https://element.io/get-started
### Homeserver
Dendrite 0.9.9
### Will you send logs?
Yes | defect | joining a room from a space causes an error and requires cache to be cleared steps to reproduce navigate to the home page of a space attempt to join a room wait and get presented with an error something went wrong outcome what did you expect i expected to be able to join the room from the space without issue what happened instead an error occured and i was presented with something went wrong this happens whenever trying to join a room from a space if i clear the cache everything reflects normally and the room was actually joined successfully i find myself having to clear cache quite frequently just to do things like joining rooms operating system windows application version element version olm version how did you install the app from homeserver dendrite will you send logs yes | 1 |
70,408 | 23,155,815,816 | IssuesEvent | 2022-07-29 12:54:43 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | opened | Listbox: No accessible name (WCAG: 4.1.2) | defect | ### Describe the bug
Listbox unordered list has no accessible name. The `ul.p-listbox-list` element with `role="listbox"`, needs at least one of the following to comply with WCAG 4.1.2:
- `aria-label` attribute
- `aria-labelledby` attribute
- `title` attribute
https://www.w3.org/WAI/ARIA/apg/patterns/listbox/
### Environment
N/A
### Reproducer
_No response_
### Angular version
N/A
### PrimeNG version
14.0.0
### Build / Runtime
TypeScript
### Language
ALL
### Node version (for AoT issues node --version)
N/A
### Browser(s)
_No response_
### Steps to reproduce the behavior
1. Go to http://primefaces.org/primeng/listbox
2. Right-click and inspect HTML on any listbox
3. Check attributes for `ul.p-listbox-list` element
Alternatively, analyse listbox with accessibility tools (e.g. axe DevTools).
### Expected behavior
Element with `role="listbox"` has an accessible name. | 1.0 | Listbox: No accessible name (WCAG: 4.1.2) - ### Describe the bug
Listbox unordered list has no accessible name. The `ul.p-listbox-list` element with `role="listbox"`, needs at least one of the following to comply with WCAG 4.1.2:
- `aria-label` attribute
- `aria-labelledby` attribute
- `title` attribute
https://www.w3.org/WAI/ARIA/apg/patterns/listbox/
### Environment
N/A
### Reproducer
_No response_
### Angular version
N/A
### PrimeNG version
14.0.0
### Build / Runtime
TypeScript
### Language
ALL
### Node version (for AoT issues node --version)
N/A
### Browser(s)
_No response_
### Steps to reproduce the behavior
1. Go to http://primefaces.org/primeng/listbox
2. Right-click and inspect HTML on any listbox
3. Check attributes for `ul.p-listbox-list` element
Alternatively, analyse listbox with accessibility tools (e.g. axe DevTools).
### Expected behavior
Element with `role="listbox"` has an accessible name. | defect | listbox no accessible name wcag describe the bug listbox unordered list has no accessible name the ul p listbox list element with role listbox needs at least one of the following to comply with wcag aria label attribute aria labelledby attribute title attribute environment n a reproducer no response angular version n a primeng version build runtime typescript language all node version for aot issues node version n a browser s no response steps to reproduce the behavior go to right click and inspect html on any listbox check attributes for ul p listbox list element alternatively analyse listbox with accessibility tools e g axe devtools expected behavior element with role listbox has an accessible name | 1 |
17,261 | 2,993,541,366 | IssuesEvent | 2015-07-22 04:54:09 | WebLogo/weblogo | https://api.github.com/repos/WebLogo/weblogo | closed | plain format concatenates all input in one string | auto-migrated Priority-Medium Type-Defect | ```
What steps will reproduce the problem?
1. python -c 'print("AT\nCG")' | ~/.local/bin/weblogo -D plain -F pdf >
output.pdf
What is the expected output? What do you see instead?
Expected a logo like this:
AT
CG
Seen:
ATCG
What version of the product are you using? On what operating system?
3.4, python 3.4 linux
Please provide any additional information below.
Here's a fix, at function iterseq in plain_io.py:
--- plain_io.bug.py 2015-01-13 18:36:19.376814142 -0800
+++ plain_io.py 2015-01-13 18:36:17.983454330 -0800
@@ -102,7 +102,8 @@
"Character on line: %d not in alphabet: %s : %s" % \
(linenum, alphabet, line) )
lines.append(line)
- yield Seq(''.join(lines), alphabet)
+ yield Seq(line, alphabet)
+
def write(afile, seqs):
```
Original issue reported on code.google.com by `apuapaqu...@gmail.com` on 14 Jan 2015 at 2:44 | 1.0 | plain format concatenates all input in one string - ```
What steps will reproduce the problem?
1. python -c 'print("AT\nCG")' | ~/.local/bin/weblogo -D plain -F pdf >
output.pdf
What is the expected output? What do you see instead?
Expected a logo like this:
AT
CG
Seen:
ATCG
What version of the product are you using? On what operating system?
3.4, python 3.4 linux
Please provide any additional information below.
Here's a fix, at function iterseq in plain_io.py:
--- plain_io.bug.py 2015-01-13 18:36:19.376814142 -0800
+++ plain_io.py 2015-01-13 18:36:17.983454330 -0800
@@ -102,7 +102,8 @@
"Character on line: %d not in alphabet: %s : %s" % \
(linenum, alphabet, line) )
lines.append(line)
- yield Seq(''.join(lines), alphabet)
+ yield Seq(line, alphabet)
+
def write(afile, seqs):
```
Original issue reported on code.google.com by `apuapaqu...@gmail.com` on 14 Jan 2015 at 2:44 | defect | plain format concatenates all input in one string what steps will reproduce the problem python c print at ncg local bin weblogo d plain f pdf output pdf what is the expected output what do you see instead expected a logo like this at cg seen atcg what version of the product are you using on what operating system python linux please provide any additional information below here s a fix at function iterseq in plain io py plain io bug py plain io py character on line d not in alphabet s s linenum alphabet line lines append line yield seq join lines alphabet yield seq line alphabet def write afile seqs original issue reported on code google com by apuapaqu gmail com on jan at | 1 |
32,393 | 13,797,461,294 | IssuesEvent | 2020-10-09 22:17:56 | cityofaustin/atd-data-tech | https://api.github.com/repos/cityofaustin/atd-data-tech | opened | Bug: TIA Mod Attachments Tables | Impact: 2-Major Product: TIA Module Service: Apps Type: Bug Report Workgroup: TDSD | - TIA Review Details Attachments Tables: Show all attachments (Scope & TIA Submission)
- Scope Page Attachments & TIA Submittal Attachments Tables: Not populating attachment's | 1.0 | Bug: TIA Mod Attachments Tables - - TIA Review Details Attachments Tables: Show all attachments (Scope & TIA Submission)
- Scope Page Attachments & TIA Submittal Attachments Tables: Not populating attachment's | non_defect | bug tia mod attachments tables tia review details attachments tables show all attachments scope tia submission scope page attachments tia submittal attachments tables not populating attachment s | 0 |
204,585 | 7,088,879,904 | IssuesEvent | 2018-01-11 23:21:16 | Microsoft/PTVS | https://api.github.com/repos/Microsoft/PTVS | opened | Re-disable django html extensions for next preview build | area:Editor bug priority:release-blocking | in product.settings:
```
<!-- UNDONE: Exclude HTML editor extensions until we get an API stability guarantee from that team -->
<IncludeDjangoHtmlExtensions Condition="'$(IncludeDjangoHtmlExtensions)' == ''">false</IncludeDjangoHtmlExtensions>
```
Reapply this change, in whatever branch we build next preview from. | 1.0 | Re-disable django html extensions for next preview build - in product.settings:
```
<!-- UNDONE: Exclude HTML editor extensions until we get an API stability guarantee from that team -->
<IncludeDjangoHtmlExtensions Condition="'$(IncludeDjangoHtmlExtensions)' == ''">false</IncludeDjangoHtmlExtensions>
```
Reapply this change, in whatever branch we build next preview from. | non_defect | re disable django html extensions for next preview build in product settings false reapply this change in whatever branch we build next preview from | 0 |
1,397 | 20,599,216,497 | IssuesEvent | 2022-03-06 01:20:00 | verilator/verilator | https://api.github.com/repos/verilator/verilator | closed | Split gives Vdeeptemp error | resolution: fixed area: portability | Code similar to the following
```
always_ff @(posedge clk_1, negedge rstn_1) if ((rstn_1 == 0)) out[1] <= 0;
always_ff @(posedge clk_2, negedge rstn_2) if ((rstn_2 == 0)) out[2] <= 0;
always_ff @(posedge clk_3, negedge rstn_3) if ((rstn_3 == 0)) out[3] <= 0;
always_ff @(posedge clk_4, negedge rstn_4) if ((rstn_4 == 0)) out[4] <= 0;
always_ff @(posedge clk_5, negedge rstn_5) if ((rstn_5 == 0)) out[5] <= 0;
always_ff @(posedge clk_6, negedge rstn_6) if ((rstn_6 == 0)) out[6] <= 0;
always_ff @(posedge clk_7, negedge rstn_7) if ((rstn_7 == 0)) out[7] <= 0;
always_ff @(posedge clk_8, negedge rstn_8) if ((rstn_8 == 0)) out[8] <= 0;
always_ff @(posedge clk_9, negedge rstn_9) if ((rstn_9 == 0)) out[9] <= 0;
always_ff @(posedge clk_10, negedge rstn_10) if ((rstn_10 == 0)) out[10] <= 0;
always_ff @(posedge clk_11, negedge rstn_11) if ((rstn_11 == 0)) out[11] <= 0;
always_ff @(posedge clk_12, negedge rstn_12) if ((rstn_12 == 0)) out[12] <= 0;
always_ff @(posedge clk_13, negedge rstn_13) if ((rstn_13 == 0)) out[13] <= 0;
always_ff @(posedge clk_14, negedge rstn_14) if ((rstn_14 == 0)) out[14] <= 0;
always_ff @(posedge clk_15, negedge rstn_15) if ((rstn_15 == 0)) out[15] <= 0;
always_ff @(posedge clk_16, negedge rstn_16) if ((rstn_16 == 0)) out[16] <= 0;
always_ff @(posedge clk_17, negedge rstn_17) if ((rstn_17 == 0)) out[17] <= 0;
always_ff @(posedge clk_18, negedge rstn_18) if ((rstn_18 == 0)) out[18] <= 0;
always_ff @(posedge clk_19, negedge rstn_19) if ((rstn_19 == 0)) out[19] <= 0;
```
Gives
```
Vt_depth_flop___024root__DepSet_h66e87802__0.cpp:183:5: error: '__Vdeeptemp_h55f4913b__2' was not declared in this scope
183 | __Vdeeptemp_h55f4913b__2 = ((((((((((((((((((((
```
Only when using --threads and --compiler clang. This is a bug in the V3Depth compiler workaround. | True | Split gives Vdeeptemp error - Code similar to the following
```
always_ff @(posedge clk_1, negedge rstn_1) if ((rstn_1 == 0)) out[1] <= 0;
always_ff @(posedge clk_2, negedge rstn_2) if ((rstn_2 == 0)) out[2] <= 0;
always_ff @(posedge clk_3, negedge rstn_3) if ((rstn_3 == 0)) out[3] <= 0;
always_ff @(posedge clk_4, negedge rstn_4) if ((rstn_4 == 0)) out[4] <= 0;
always_ff @(posedge clk_5, negedge rstn_5) if ((rstn_5 == 0)) out[5] <= 0;
always_ff @(posedge clk_6, negedge rstn_6) if ((rstn_6 == 0)) out[6] <= 0;
always_ff @(posedge clk_7, negedge rstn_7) if ((rstn_7 == 0)) out[7] <= 0;
always_ff @(posedge clk_8, negedge rstn_8) if ((rstn_8 == 0)) out[8] <= 0;
always_ff @(posedge clk_9, negedge rstn_9) if ((rstn_9 == 0)) out[9] <= 0;
always_ff @(posedge clk_10, negedge rstn_10) if ((rstn_10 == 0)) out[10] <= 0;
always_ff @(posedge clk_11, negedge rstn_11) if ((rstn_11 == 0)) out[11] <= 0;
always_ff @(posedge clk_12, negedge rstn_12) if ((rstn_12 == 0)) out[12] <= 0;
always_ff @(posedge clk_13, negedge rstn_13) if ((rstn_13 == 0)) out[13] <= 0;
always_ff @(posedge clk_14, negedge rstn_14) if ((rstn_14 == 0)) out[14] <= 0;
always_ff @(posedge clk_15, negedge rstn_15) if ((rstn_15 == 0)) out[15] <= 0;
always_ff @(posedge clk_16, negedge rstn_16) if ((rstn_16 == 0)) out[16] <= 0;
always_ff @(posedge clk_17, negedge rstn_17) if ((rstn_17 == 0)) out[17] <= 0;
always_ff @(posedge clk_18, negedge rstn_18) if ((rstn_18 == 0)) out[18] <= 0;
always_ff @(posedge clk_19, negedge rstn_19) if ((rstn_19 == 0)) out[19] <= 0;
```
Gives
```
Vt_depth_flop___024root__DepSet_h66e87802__0.cpp:183:5: error: '__Vdeeptemp_h55f4913b__2' was not declared in this scope
183 | __Vdeeptemp_h55f4913b__2 = ((((((((((((((((((((
```
Only when using --threads and --compiler clang. This is a bug in the V3Depth compiler workaround. | non_defect | split gives vdeeptemp error code similar to the following always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out always ff posedge clk negedge rstn if rstn out gives vt depth flop depset cpp error vdeeptemp was not declared in this scope vdeeptemp only when using threads and compiler clang this is a bug in the compiler workaround | 0 |
226,349 | 18,013,305,149 | IssuesEvent | 2021-09-16 11:09:21 | ansible-collections/amazon.aws | https://api.github.com/repos/ansible-collections/amazon.aws | closed | aws_service_ip_ranges suppport for ipv6 | feature has_pr integration tests plugins | ### Summary
We are using amazon.aws collection and we noticed that the aws_service_ip_ranges does not have an option to return IPv6 ranges.
### Issue Type
Feature Idea
### Component Name
`{ lookup('aws_service_ip_ranges', region='us-west-2', service='ROUTE53_HEALTHCHECKS', ipv6_prefix=True, wantlist=True) }`
Should return a list of IPv6 addresses that correspond to the Route53 health check.
### Pull Request
#430
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```
vars:
rt53_ranges: "{{ lookup('aws_service_ip_ranges', region='us-west-2', service='ROUTE53_HEALTHCHECKS', ipv6_prefix=True, wantlist=True) }}"
tasks:
- name: "use list return option and iterate as a loop"
debug: msg="{% for x in rt53_ranges %}{{ x }} {% endfor %}"
# ###"2600:1f14:7ff:f800::/56,2600:1f14:fff:f800::/56"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | 1.0 | aws_service_ip_ranges suppport for ipv6 - ### Summary
We are using the amazon.aws collection and noticed that the aws_service_ip_ranges lookup does not have an option to return IPv6 ranges.
### Issue Type
Feature Idea
### Component Name
`{ lookup('aws_service_ip_ranges', region='us-west-2', service='ROUTE53_HEALTHCHECKS', ipv6_prefix=True, wantlist=True) }`
Should return a list of IPv6 addresses that correspond to the Route53 health check.
### Pull Request
#430
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```
vars:
rt53_ranges: "{{ lookup('aws_service_ip_ranges', region='us-west-2', service='ROUTE53_HEALTHCHECKS', ipv6_prefix=True, wantlist=True) }}"
tasks:
- name: "use list return option and iterate as a loop"
debug: msg="{% for x in rt53_ranges %}{{ x }} {% endfor %}"
# ###"2600:1f14:7ff:f800::/56,2600:1f14:fff:f800::/56"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | non_defect | aws service ip ranges suppport for summary we are using amazon aws collection and we noticed that the aws service ip ranges does not have an option to return ranges issue type feature idea component name lookup aws service ip ranges region us west service healthchecks prefix true wantlist true should return a list of addresses that correspond to the health check pull request additional information vars ranges lookup aws service ip ranges region us west service healthchecks prefix true wantlist true tasks name use list return option and iterate as a loop debug msg for x in ranges x endfor fff code of conduct i agree to follow the ansible code of conduct | 0 |
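The lookup in the record above works by filtering AWS's published `ip-ranges.json` document (served from https://ip-ranges.amazonaws.com/ip-ranges.json). A minimal Python sketch of that filtering logic, assuming a small hand-written sample; the function name and the concrete sample entries are illustrative, not Ansible's actual implementation:

```python
# Sketch of the filtering performed by an aws_service_ip_ranges-style lookup.
# SAMPLE_RANGES mirrors the schema of AWS's ip-ranges.json: IPv4 entries live
# under "prefixes" (keyed by "ip_prefix"), IPv6 entries under "ipv6_prefixes"
# (keyed by "ipv6_prefix"). The concrete entries here are illustrative.

SAMPLE_RANGES = {
    "prefixes": [
        {"ip_prefix": "15.177.0.0/18", "region": "us-west-2",
         "service": "ROUTE53_HEALTHCHECKS"},
    ],
    "ipv6_prefixes": [
        {"ipv6_prefix": "2600:1f14:7ff:f800::/56", "region": "us-west-2",
         "service": "ROUTE53_HEALTHCHECKS"},
        {"ipv6_prefix": "2600:1f14:fff:f800::/56", "region": "us-west-2",
         "service": "ROUTE53_HEALTHCHECKS"},
        {"ipv6_prefix": "2600:9000::/28", "region": "GLOBAL",
         "service": "CLOUDFRONT"},
    ],
}

def service_ip_ranges(data, region=None, service=None, ipv6_prefix=False):
    """Return the CIDR strings matching the region/service filters,
    drawing from the IPv6 section when ipv6_prefix is True."""
    section = "ipv6_prefixes" if ipv6_prefix else "prefixes"
    key = "ipv6_prefix" if ipv6_prefix else "ip_prefix"
    return [
        entry[key]
        for entry in data.get(section, [])
        if (region is None or entry["region"] == region)
        and (service is None or entry["service"] == service)
    ]

if __name__ == "__main__":
    ranges = service_ip_ranges(SAMPLE_RANGES, region="us-west-2",
                               service="ROUTE53_HEALTHCHECKS",
                               ipv6_prefix=True)
    print(",".join(ranges))  # -> 2600:1f14:7ff:f800::/56,2600:1f14:fff:f800::/56
```

With `ipv6_prefix=True` the sketch returns the same two CIDRs quoted in the playbook comment of the record above.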
2,291 | 2,603,992,431 | IssuesEvent | 2015-02-24 19:06:58 | chrsmith/nishazi6 | https://api.github.com/repos/chrsmith/nishazi6 | opened | Shenyang herpes treatment methods | auto-migrated Priority-Medium Type-Defect | ```
Shenyang herpes treatment methods 〓 Shenyang Military Region Political Department Hospital, STD department 〓 TEL: 024-31023308 〓
Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted
diseases; located at No. 32 Erwei Road, Shenhe District, Shenyang. A long-established hospital
founded alongside New China, with fine equipment, authoritative techniques, and a gathering of
experts; a comprehensive hospital integrating prevention, health care, medical treatment,
scientific research, and rehabilitation. One of the state's first public [...] military hospitals
and one of the nation's first designated units for standardized medical care; a teaching hospital
of the Fourth Military Medical University, Southeast University, and other well-known institutions
of higher learning. Rated an advanced unit in health work by the Health Department of the PLA Air
Force Logistics Department, and twice awarded a collective second-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:37 | 1.0 | Shenyang herpes treatment methods - ```
Shenyang herpes treatment methods 〓 Shenyang Military Region Political Department Hospital, STD department 〓 TEL: 024-31023308 〓
Founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted
diseases; located at No. 32 Erwei Road, Shenhe District, Shenyang. A long-established hospital
founded alongside New China, with fine equipment, authoritative techniques, and a gathering of
experts; a comprehensive hospital integrating prevention, health care, medical treatment,
scientific research, and rehabilitation. One of the state's first public [...] military hospitals
and one of the nation's first designated units for standardized medical care; a teaching hospital
of the Fourth Military Medical University, Southeast University, and other well-known institutions
of higher learning. Rated an advanced unit in health work by the Health Department of the PLA Air
Force Logistics Department, and twice awarded a collective second-class merit.
```
-----
Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:37 | defect | shenyang herpes treatment methods 〓 shenyang military region political department hospital, std department 〓 tel: 024-31023308 〓 founded in 1946, with 68 years devoted to the research and treatment of sexually transmitted diseases; located at no. 32 erwei road, shenhe district, shenyang; a comprehensive hospital integrating prevention, health care, medical treatment, scientific research, and rehabilitation; a teaching hospital of the fourth military medical university, southeast university, and other well-known institutions; rated an advanced unit in health work by the pla air force logistics department, twice awarded a collective second-class merit | 1 |
52,132 | 13,211,391,730 | IssuesEvent | 2020-08-15 22:48:29 | icecube-trac/tix4 | https://api.github.com/repos/icecube-trac/tix4 | opened | [weighting] 11058 has upper cutoffs proportional to energy per nucleon (Trac #1728) | Incomplete Migration Migrated from Trac cmake defect | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1728">https://code.icecube.wisc.edu/projects/icecube/ticket/1728</a>, reported by jvansanten and owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-06-06T07:38:06",
"_ts": "1465198686659705",
"description": "Set 11058's upper cutoff parses as energy per particle, when it's clearly energy per nucleon. ",
"reporter": "jvansanten",
"cc": "chraab",
"resolution": "fixed",
"time": "2016-06-06T07:36:41",
"component": "cmake",
"summary": "[weighting] 11058 has upper cutoffs proportional to energy per nucleon",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
| 1.0 | [weighting] 11058 has upper cutoffs proportional to energy per nucleon (Trac #1728) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1728">https://code.icecube.wisc.edu/projects/icecube/ticket/1728</a>, reported by jvansanten and owned by jvansanten</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-06-06T07:38:06",
"_ts": "1465198686659705",
"description": "Set 11058's upper cutoff parses as energy per particle, when it's clearly energy per nucleon. ",
"reporter": "jvansanten",
"cc": "chraab",
"resolution": "fixed",
"time": "2016-06-06T07:36:41",
"component": "cmake",
"summary": "[weighting] 11058 has upper cutoffs proportional to energy per nucleon",
"priority": "normal",
"keywords": "",
"milestone": "",
"owner": "jvansanten",
"type": "defect"
}
```
</p>
</details>
| defect | has upper cutoffs proportional to energy per nucleon trac migrated from json status closed changetime ts description set s upper cutoff parses as energy per particle when it s clearly energy per nucleon reporter jvansanten cc chraab resolution fixed time component cmake summary has upper cutoffs proportional to energy per nucleon priority normal keywords milestone owner jvansanten type defect | 1 |
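The bug in the ticket above is a units mix-up: a cutoff quoted in energy per nucleon corresponds to a total (per-particle) energy that scales with the primary's mass number, so parsing it as energy per particle understates the true cutoff for heavy nuclei. A small illustrative sketch; the mass-number table and the sample cutoff are generic physics, not the actual simulation configuration:

```python
# Illustration of the per-nucleon vs. per-particle distinction from the
# ticket above: a cutoff of E GeV per nucleon implies a total energy of
# A * E GeV for a primary of mass number A.

MASS_NUMBER = {"H": 1, "He": 4, "N": 14, "Al": 27, "Fe": 56}

def cutoff_per_particle(cutoff_per_nucleon, primary):
    """Total-energy cutoff implied by a per-nucleon cutoff for a primary."""
    return cutoff_per_nucleon * MASS_NUMBER[primary]

if __name__ == "__main__":
    # Reading a per-nucleon cutoff as per-particle understates the true
    # cutoff by a factor of A -- a factor of 56 for iron.
    print(cutoff_per_particle(1e5, "H"))   # -> 100000.0
    print(cutoff_per_particle(1e5, "Fe"))  # -> 5600000.0
```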
190,157 | 22,047,254,225 | IssuesEvent | 2022-05-30 04:10:33 | nanopathi/linux-4.19.72_CVE-2021-32399 | https://api.github.com/repos/nanopathi/linux-4.19.72_CVE-2021-32399 | closed | CVE-2020-25285 (Medium) detected in linuxlinux-4.19.236 - autoclosed | security vulnerability | ## CVE-2020-25285 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.236</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nanopathi/linux-4.19.72_CVE-2021-32399/commit/03cb3c6f0e0b62b5cbcd747df63781fbb2a6ef66">03cb3c6f0e0b62b5cbcd747df63781fbb2a6ef66</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/hugetlb.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A race condition between hugetlb sysctl handlers in mm/hugetlb.c in the Linux kernel before 5.8.8 could be used by local attackers to corrupt memory, cause a NULL pointer dereference, or possibly have unspecified other impact, aka CID-17743798d812.
<p>Publish Date: 2020-09-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25285>CVE-2020-25285</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25284</a></p>
<p>Release Date: 2020-09-13</p>
<p>Fix Resolution: v4.14.197,v4.19.144,v5.4.64,v5.8.8,v5.9-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-25285 (Medium) detected in linuxlinux-4.19.236 - autoclosed - ## CVE-2020-25285 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.236</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nanopathi/linux-4.19.72_CVE-2021-32399/commit/03cb3c6f0e0b62b5cbcd747df63781fbb2a6ef66">03cb3c6f0e0b62b5cbcd747df63781fbb2a6ef66</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/mm/hugetlb.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A race condition between hugetlb sysctl handlers in mm/hugetlb.c in the Linux kernel before 5.8.8 could be used by local attackers to corrupt memory, cause a NULL pointer dereference, or possibly have unspecified other impact, aka CID-17743798d812.
<p>Publish Date: 2020-09-13
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-25285>CVE-2020-25285</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25284">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-25284</a></p>
<p>Release Date: 2020-09-13</p>
<p>Fix Resolution: v4.14.197,v4.19.144,v5.4.64,v5.8.8,v5.9-rc4</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in linuxlinux autoclosed cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files mm hugetlb c vulnerability details a race condition between hugetlb sysctl handlers in mm hugetlb c in the linux kernel before could be used by local attackers to corrupt memory cause a null pointer dereference or possibly have unspecified other impact aka cid publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
22,775 | 7,209,070,792 | IssuesEvent | 2018-02-07 07:00:57 | Icinga/icinga2 | https://api.github.com/repos/Icinga/icinga2 | opened | Compiler warning in checkercomponent-ti.hpp | build-fix | #5988 introduces a new compiler warning:
```
-- Build files have been written to: /Users/gunnar/icinga2/build
[26/37] Building CXX object lib/checker/CMakeFiles/checker.dir/checker_unity.cpp.o
In file included from lib/checker/checker_unity.cpp:1:
In file included from ../lib/checker/checkercomponent.cpp:20:
In file included from ../lib/checker/checkercomponent.hpp:23:
lib/checker/checkercomponent-ti.hpp:74:6: warning: private field 'm_ConcurrentChecks' is not used [-Wunused-private-field]
int m_ConcurrentChecks;
^
1 warning generated.
[37/37] Linking CXX executable Bin/Debug/icinga2
``` | 1.0 | Compiler warning in checkercomponent-ti.hpp - #5988 introduces a new compiler warning:
```
-- Build files have been written to: /Users/gunnar/icinga2/build
[26/37] Building CXX object lib/checker/CMakeFiles/checker.dir/checker_unity.cpp.o
In file included from lib/checker/checker_unity.cpp:1:
In file included from ../lib/checker/checkercomponent.cpp:20:
In file included from ../lib/checker/checkercomponent.hpp:23:
lib/checker/checkercomponent-ti.hpp:74:6: warning: private field 'm_ConcurrentChecks' is not used [-Wunused-private-field]
int m_ConcurrentChecks;
^
1 warning generated.
[37/37] Linking CXX executable Bin/Debug/icinga2
``` | non_defect | compiler warning in checkercomponent ti hpp introduces a new compiler warning build files have been written to users gunnar build building cxx object lib checker cmakefiles checker dir checker unity cpp o in file included from lib checker checker unity cpp in file included from lib checker checkercomponent cpp in file included from lib checker checkercomponent hpp lib checker checkercomponent ti hpp warning private field m concurrentchecks is not used int m concurrentchecks warning generated linking cxx executable bin debug | 0 |
161,033 | 12,529,929,939 | IssuesEvent | 2020-06-04 12:14:15 | mswjs/msw | https://api.github.com/repos/mswjs/msw | closed | Memory leak in integration tests | bug help wanted internal needs:tests | Current setup of integration tests does not handle certain asynchronous operations in its setup well, which results in memory leaks often happening when testing locally.
## Details
### Jest warning
```
Jest did not exit one second after the test run has completed.
This usually means that there are asynchronous operations that weren't stopped in your tests. Consider running Jest with `--detectOpenHandles` to troubleshoot this issue.
```
> This was caused by the missing `afterAll(() => api.cleanup())` hook.
### Memory leak exception
```
<--- Last few GCs --->
[7874:0x102812000] 85649 ms: Mark-sweep 1381.6 (1455.2) -> 1371.0 (1455.2) MB, 742.3 / 0.0 ms (average mu = 0.160, current mu = 0.072) allocation failure scavenge might not succeed
[7874:0x102812000] 86412 ms: Mark-sweep 1378.7 (1455.7) -> 1375.3 (1436.7) MB, 635.4 / 0.0 ms (+ 78.4 ms in 60 steps since start of marking, biggest step 5.1 ms, walltime since start of marking 739 ms) (average mu = 0.111, current mu = 0.064) allocati
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0xaae3f75fd61]
Security context: 0x0317e0f1e6e9 <JSObject>
1: stringSlice(aka stringSlice) [0x317ad8136c1] [buffer.js:~589] [pc=0xaae4065e42f](this=0x0317141826f1 <undefined>,buf=0x0317639bbb89 <Uint8Array map = 0x317d50d5df1>,encoding=0x0317e0f3e981 <String[4]: utf8>,start=0,end=840098)
2: toString [0x3170a7a0819] [buffer.js:~643] [pc=0xaae3f68c2fb](this=0x0317639bbb89 <Uint8Array map = 0x317d50d5df1>,encodi...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x10003d035 node::Abort() [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
2: 0x10003d23f node::OnFatalError(char const*, char const*) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
3: 0x1001b8e15 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
4: 0x100586d72 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
5: 0x100589845 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
6: 0x1005856ef v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
7: 0x1005838c4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
8: 0x100590188 v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
9: 0x1005901df v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
10: 0x100562064 v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
11: 0x100561ca9 v8::internal::Factory::NewStringFromUtf8(v8::internal::Vector<char const>, v8::internal::PretenureFlag) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
12: 0x1001db1b8 v8::String::NewFromUtf8(v8::Isolate*, char const*, v8::NewStringType, int) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
13: 0x1000e8822 node::StringBytes::Encode(v8::Isolate*, char const*, unsigned long, node::encoding, v8::Local<v8::Value>*) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
14: 0x100056889 void node::Buffer::(anonymous namespace)::StringSlice<(node::encoding)1>(v8::FunctionCallbackInfo<v8::Value> const&) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
15: 0xaae3f75fd61
```
## Expected behavior
- Integration test cleans up all the side-effects it establishes before the next integration test starts.
- `runBrowserWith` and `spawnServer` utilities have tests that assert they clean up after themselves (i.e. asserting no process is running at the port previously occupied by the server after closing)
## Area of effect
- `runBrowserWith`
- `spawnServer` | 1.0 | Memory leak in integration tests - Current setup of integration tests does not handle certain asynchronous operations in its setup well, which results in memory leaks often happening when testing locally.
## Details
### Jest warning
```
Jest did not exit one second after the test run has completed.
This usually means that there are asynchronous operations that weren't stopped in your tests. Consider running Jest with `--detectOpenHandles` to troubleshoot this issue.
```
> This was caused by the missing `afterAll(() => api.cleanup())` hook.
### Memory leak exception
```
<--- Last few GCs --->
[7874:0x102812000] 85649 ms: Mark-sweep 1381.6 (1455.2) -> 1371.0 (1455.2) MB, 742.3 / 0.0 ms (average mu = 0.160, current mu = 0.072) allocation failure scavenge might not succeed
[7874:0x102812000] 86412 ms: Mark-sweep 1378.7 (1455.7) -> 1375.3 (1436.7) MB, 635.4 / 0.0 ms (+ 78.4 ms in 60 steps since start of marking, biggest step 5.1 ms, walltime since start of marking 739 ms) (average mu = 0.111, current mu = 0.064) allocati
<--- JS stacktrace --->
==== JS stack trace =========================================
0: ExitFrame [pc: 0xaae3f75fd61]
Security context: 0x0317e0f1e6e9 <JSObject>
1: stringSlice(aka stringSlice) [0x317ad8136c1] [buffer.js:~589] [pc=0xaae4065e42f](this=0x0317141826f1 <undefined>,buf=0x0317639bbb89 <Uint8Array map = 0x317d50d5df1>,encoding=0x0317e0f3e981 <String[4]: utf8>,start=0,end=840098)
2: toString [0x3170a7a0819] [buffer.js:~643] [pc=0xaae3f68c2fb](this=0x0317639bbb89 <Uint8Array map = 0x317d50d5df1>,encodi...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x10003d035 node::Abort() [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
2: 0x10003d23f node::OnFatalError(char const*, char const*) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
3: 0x1001b8e15 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
4: 0x100586d72 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
5: 0x100589845 v8::internal::Heap::CheckIneffectiveMarkCompact(unsigned long, double) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
6: 0x1005856ef v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
7: 0x1005838c4 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
8: 0x100590188 v8::internal::Heap::AllocateRawWithLigthRetry(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
9: 0x1005901df v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
10: 0x100562064 v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
11: 0x100561ca9 v8::internal::Factory::NewStringFromUtf8(v8::internal::Vector<char const>, v8::internal::PretenureFlag) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
12: 0x1001db1b8 v8::String::NewFromUtf8(v8::Isolate*, char const*, v8::NewStringType, int) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
13: 0x1000e8822 node::StringBytes::Encode(v8::Isolate*, char const*, unsigned long, node::encoding, v8::Local<v8::Value>*) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
14: 0x100056889 void node::Buffer::(anonymous namespace)::StringSlice<(node::encoding)1>(v8::FunctionCallbackInfo<v8::Value> const&) [/Users/kettanaito/.nvm/versions/node/v10.16.3/bin/node]
15: 0xaae3f75fd61
```
## Expected behavior
- Integration test cleans up all the side-effects it establishes before the next integration test starts.
- `runBrowserWith` and `spawnServer` utilities have tests that assert they clean up after themselves (i.e. asserting no process is running at the port previously occupied by the server after closing)
## Area of effect
- `runBrowserWith`
- `spawnServer` | non_defect | memory leak in integration tests current setup of integration tests does not handle certain asynchronous operations in its setup well which results into memory leaks often happening when testing locally details jest warning jest did not exit one second after the test run has completed this usually means that there are asynchronous operations that weren t stopped in your tests consider running jest with detectopenhandles to troubleshoot this issue this was caused by the missing afterall api cleanup hook memory leak exception ms mark sweep mb ms average mu current mu allocation failure scavenge might not succeed ms mark sweep mb ms ms in steps since start of marking biggest step ms walltime since start of marking ms average mu current mu allocati js stack trace exitframe security context stringslice aka stringslice this buf encoding start end tostring this encodi fatal error ineffective mark compacts near heap limit allocation failed javascript heap out of memory node abort node onfatalerror char const char const internal fatalprocessoutofmemory internal isolate char const bool internal heap fatalprocessoutofmemory char const internal heap checkineffectivemarkcompact unsigned long double internal heap performgarbagecollection internal garbagecollector gccallbackflags internal heap collectgarbage internal allocationspace internal garbagecollectionreason gccallbackflags internal heap allocaterawwithligthretry int internal allocationspace internal allocationalignment internal heap allocaterawwithretryorfail int internal allocationspace internal allocationalignment internal factory newrawtwobytestring int internal pretenureflag internal factory internal vector internal pretenureflag string isolate char const newstringtype int node stringbytes encode isolate char const unsigned long node encoding local void node buffer anonymous namespace stringslice functioncallbackinfo const expected behavior integration test cleans up all the side effects 
its establishes before the next integration test starts runbrowserwith and spawnserver utilities have tests that assert they clean up after themselves i e asserting no process is running at the port previously occupied by the server after closing area of effect runbrowserwith spawnserver | 0 |
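The expected behavior listed in the record above, asserting that a spawned server actually releases its port on cleanup, can be illustrated outside of Jest. A minimal Python analogue; the names `spawn_server`, `cleanup`, and `port_is_free` are illustrative and not part of msw's test suite:

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal analogue of a "server cleans up after itself" test: start a
# server, shut it down, then prove the listening port can be rebound.

class _Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep test output quiet

def spawn_server():
    server = HTTPServer(("127.0.0.1", 0), _Handler)  # port 0 = OS picks a free port
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    return server, thread

def cleanup(server, thread):
    server.shutdown()      # stop the serve_forever loop
    server.server_close()  # release the listening socket
    thread.join(timeout=5)

def port_is_free(port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("127.0.0.1", port))
            return True
        except OSError:
            return False

if __name__ == "__main__":
    server, thread = spawn_server()
    port = server.server_address[1]
    cleanup(server, thread)
    print("port released:", port_is_free(port))  # -> port released: True
```

The same shape works for the `runBrowserWith`/`spawnServer` utilities mentioned above: run the utility, tear it down, then assert that nothing is still bound to the port it occupied.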
29,043 | 5,512,061,635 | IssuesEvent | 2017-03-17 08:06:58 | primefaces/primeng | https://api.github.com/repos/primefaces/primeng | closed | Calendar does not populate date the second time | defect | [x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
Current behavior
If the date text of the calendar component is cleared and the same date is selected in the date picker, the text of that date does not populate the input textbox again.
Expected behavior
When any date is selected in the date picker, the input textbox should be filled with the selected date text.
To reproduce issue:
Click the calendar icon and select a date in the date picker
Select and clear the date text from the input textbox
Click the calendar icon and select the same date which was just cleared.
Observe that the date which was just selected is not in the textbox
Plunkr Case
http://plnkr.co/FcrYwA
PrimeNG version:
1.0.0-rc.6 | 1.0 | Calendar does not populate date the second time - [x] bug report => Search github for a similar issue or PR before submitting
[ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap
[ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35
Current behavior
If the date text of the calendar component is cleared and the same date is selected in the date picker, the text of that date does not populate the input textbox again.
Expected behavior
When any date is selected in the date picker, the input textbox should be filled with the selected date text.
To reproduce issue:
Click the calendar icon and select a date in the date picker
Select and clear the date text from the input textbox
Click the calendar icon and select the same date which was just cleared.
Observe that the date which was just selected is not in the textbox
Plunkr Case
http://plnkr.co/FcrYwA
PrimeNG version:
1.0.0-rc.6 | defect | calendar does not populate date the second time bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see current behavior if the date text of the calendar component is cleared and the same date is selected in the date picker the text of that date does not populate the input textbox again expected behavior when any date is selected in the date picker the input textbox should be filled with the selected date text to reproduce issue click the calendar icon and select a date in the date picker select and clear the date text from the input textbox click the calendar icon and select the same date which was just cleared observe that the date which was just selected is not in the textbox plunkr case primeng version rc | 1 |
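The defect above fits a common pattern: a widget that refreshes its display only when the bound value *changes* will miss a re-selection of the exact value that was just cleared from the display. A toy model in Python; `CalendarModel` is illustrative and not PrimeNG's actual code:

```python
# Toy model of the reported bug: an equality guard on the bound value
# suppresses the display update when the same date is picked again after
# the text field was cleared by hand.

class CalendarModel:
    def __init__(self):
        self.value = None      # bound model value
        self.input_text = ""   # what the user sees in the textbox

    def clear_text(self):
        self.input_text = ""   # user deletes the text; model keeps old value

    def select_buggy(self, date):
        if date != self.value:       # equality guard drops the update
            self.value = date
            self.input_text = date

    def select_fixed(self, date):
        self.value = date            # always refresh the display
        self.input_text = date

if __name__ == "__main__":
    cal = CalendarModel()
    cal.select_buggy("2017-03-17")
    cal.clear_text()
    cal.select_buggy("2017-03-17")
    print(repr(cal.input_text))  # -> '' (textbox stays empty: the bug)
    cal.select_fixed("2017-03-17")
    print(repr(cal.input_text))  # -> '2017-03-17'
```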
23,168 | 3,774,014,803 | IssuesEvent | 2016-03-17 06:48:58 | daelsepara/sofia-ml | https://api.github.com/repos/daelsepara/sofia-ml | closed | sf-sparse-vector.cc bug, in Init function | auto-migrated Priority-Medium Type-Defect | ```
Make all_test, then find an error occurred while testing sf-sparse-vector_test,
assertion assert(x1.GetGroupId() == "2"); failed at line 27 of file
sf-sparse-vector_test.cc.
Solution. Add a line "group_id_c_string[end - position]=0;" in
sf-sparse-vector.cc line 145, because the string generated by strncpy is not always
'\0' terminated.
```
Original issue reported on code.google.com by `starshin...@gmail.com` on 5 Nov 2012 at 4:22 | 1.0 | sf-sparse-vector.cc bug, in Init function - ```
Make all_test, then find an error occurred while testing sf-sparse-vector_test,
assertion assert(x1.GetGroupId() == "2"); failed at line 27 of file
sf-sparse-vector_test.cc.
Solution. Add a line "group_id_c_string[end - position]=0;" in
sf-sparse-vector.cc line 145, because the string generated by strncpy is not always
'\0' terminated.
```
Original issue reported on code.google.com by `starshin...@gmail.com` on 5 Nov 2012 at 4:22 | defect | sf sparse vector cc bug in init function make all test then find an error occured white testing sf sparse vector test assertion assert getgroupid failed at line of file sf sparse vector test cc solution add a line group id c string in sf sparse vector cc line cause string generated by strncpy is not always terminated original issue reported on code google com by starshin gmail com on nov at | 1 |
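The fix above rests on strncpy's contract: when the source is at least n bytes long, strncpy copies n bytes and writes no NUL terminator, so the caller must terminate the buffer explicitly, which is exactly what the suggested `group_id_c_string[end - position]=0;` line does. A pure-Python emulation of that contract (emulated semantics, not the C library itself):

```python
# Pure-Python emulation of C strncpy semantics, illustrating the bug
# fixed in the report above.

def strncpy(buf, src, n):
    """Copy at most n bytes of src into bytearray buf, padding with NULs
    only when src is shorter than n. No terminator is written when
    len(src) >= n -- the root cause of the failing assertion above."""
    for i in range(n):
        buf[i] = src[i] if i < len(src) else 0
    return buf

def c_string(buf):
    """Read buf the way C string functions would: up to the first NUL."""
    end = buf.find(0)
    if end < 0:
        raise ValueError("buffer is not NUL-terminated")
    return bytes(buf[:end])

if __name__ == "__main__":
    buf = bytearray(b"\xff" * 8)   # stale bytes, like an uninitialized buffer
    strncpy(buf, b"group_42", 8)   # len(src) == n: no NUL written
    try:
        c_string(buf)
    except ValueError as err:
        print("bug reproduced:", err)
    buf = bytearray(b"\xff" * 9)   # one spare byte for the terminator
    strncpy(buf, b"group_42", 8)
    buf[8] = 0                     # the patch: terminate explicitly
    print(c_string(buf).decode())  # -> group_42
```

The group-id value `group_42` is only an example; the point is that any source whose length equals the copy count leaves the buffer unterminated until the caller writes the trailing zero.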
24,301 | 3,960,697,970 | IssuesEvent | 2016-05-02 08:35:09 | CocoaPods/CocoaPods | https://api.github.com/repos/CocoaPods/CocoaPods | closed | Bundles inside Bundles | t2:defect | :rainbow:
Running `pod install` with the latest release:
### Stack
```
CocoaPods : 0.36.0.beta.1
Ruby : ruby 2.0.0p481 (2014-05-08 revision 45883) [x86_64-darwin13.1.0]
RubyGems : 2.2.2
Host : Mac OS X 10.10.1 (14B25)
Xcode : 6.1.1 (6A2008a)
Git : git version 2.2.1
Ruby lib dir : /Users/felixkrause/.rvm/rubies/ruby-2.0.0-p481/lib
Repositories : master - https://github.com/CocoaPods/Specs.git @ 52a5b86c95ff99d6233d3f7b214f15b21bcc0201
```
### Plugins
```
cocoapods-plugins : 0.4.0
cocoapods-trunk : 0.5.0
cocoapods-try : 0.4.3
```
### Error
```
NoMethodError - undefined method `new_file' for #<Xcodeproj::Project::Object::PBXFileReference:0x007f7fc98b1a78>
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/project.rb:173:in `add_file_reference'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:168:in `block (2 levels) in add_file_accessors_paths_to_pods_group'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:166:in `each'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:166:in `block in add_file_accessors_paths_to_pods_group'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:162:in `each'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:162:in `add_file_accessors_paths_to_pods_group'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:104:in `block in add_resources'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/user_interface.rb:110:in `message'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:102:in `add_resources'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:38:in `install!'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer.rb:427:in `install_file_references'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer.rb:127:in `block in generate_pods_project'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/user_interface.rb:49:in `section'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer.rb:125:in `generate_pods_project'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer.rb:93:in `install!'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/command/project.rb:71:in `run_install_with_update'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/command/project.rb:101:in `run'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/claide-0.8.0/lib/claide/command.rb:312:in `run'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/command.rb:45:in `run'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/bin/pod:43:in `<top (required)>'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/bin/pod:23:in `load'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/bin/pod:23:in `<main>'
```
### My podspec:
```ruby
Pod::Spec.new do |s|
s.name = "Core"
s.version = "1.0"
s.homepage = "http://sunapps.net"
s.platform = :ios, '7.0'
s.requires_arc = true
s.subspec 'Core' do |core|
# Resources
core.resource_bundles = {
'GCCore' => [
"Resources/**/*.{lproj,png,jpg,zip,json,html,plist,sql}"
]
}
core.source_files = 'Classes/**/*.{h,m}',
'Views/**/*.{h,m}',
"AppSpecific/#{app_id.to_s}/*"
core.frameworks = 'QuartzCore', 'MessageUI', 'CoreLocation', 'SystemConfiguration'
core.xcconfig = { 'FRAMEWORK_SEARCH_PATHS' => '"$(PODS_ROOT)/../.." "$(PODS_ROOT)/.." "$(SRCROOT)/.."' }
core.xcconfig = { 'OTHER_LDFLAGS' => '-all_load' }
core.resources = [ "Resources/**/*.bundle" ]
  end
end
```
The line causing this problem is:
```ruby
core.resource_bundles = {
'GCCore' => [
"Resources/**/*.{lproj,png,jpg,zip,json,html,plist,sql}"
]
}
```
Inside `Resources` is a bundle named `Settings.bundle`, containing the app's settings. If I remove the bundle, or remove this line from my podspec, it works just fine.
The `Settings.bundle` looks like this:

It had been working until now and broke only recently. It probably makes sense not to allow a bundle inside a bundle (not sure that's even possible), but the error message is cryptic and should state explicitly that a bundle may not contain another bundle.
What do you think? | 1.0 | Bundles inside Bundles - :rainbow:
Running `pod install` with the latest release:
### Stack
```
CocoaPods : 0.36.0.beta.1
Ruby : ruby 2.0.0p481 (2014-05-08 revision 45883) [x86_64-darwin13.1.0]
RubyGems : 2.2.2
Host : Mac OS X 10.10.1 (14B25)
Xcode : 6.1.1 (6A2008a)
Git : git version 2.2.1
Ruby lib dir : /Users/felixkrause/.rvm/rubies/ruby-2.0.0-p481/lib
Repositories : master - https://github.com/CocoaPods/Specs.git @ 52a5b86c95ff99d6233d3f7b214f15b21bcc0201
```
### Plugins
```
cocoapods-plugins : 0.4.0
cocoapods-trunk : 0.5.0
cocoapods-try : 0.4.3
```
### Error
```
NoMethodError - undefined method `new_file' for #<Xcodeproj::Project::Object::PBXFileReference:0x007f7fc98b1a78>
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/project.rb:173:in `add_file_reference'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:168:in `block (2 levels) in add_file_accessors_paths_to_pods_group'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:166:in `each'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:166:in `block in add_file_accessors_paths_to_pods_group'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:162:in `each'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:162:in `add_file_accessors_paths_to_pods_group'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:104:in `block in add_resources'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/user_interface.rb:110:in `message'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:102:in `add_resources'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer/file_references_installer.rb:38:in `install!'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer.rb:427:in `install_file_references'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer.rb:127:in `block in generate_pods_project'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/user_interface.rb:49:in `section'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer.rb:125:in `generate_pods_project'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/installer.rb:93:in `install!'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/command/project.rb:71:in `run_install_with_update'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/command/project.rb:101:in `run'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/claide-0.8.0/lib/claide/command.rb:312:in `run'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/lib/cocoapods/command.rb:45:in `run'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/gems/cocoapods-0.36.0.beta.1/bin/pod:43:in `<top (required)>'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/bin/pod:23:in `load'
/Users/felixkrause/.rvm/gems/ruby-2.0.0-p481/bin/pod:23:in `<main>'
```
### My podspec:
```ruby
Pod::Spec.new do |s|
s.name = "Core"
s.version = "1.0"
s.homepage = "http://sunapps.net"
s.platform = :ios, '7.0'
s.requires_arc = true
s.subspec 'Core' do |core|
# Resources
core.resource_bundles = {
'GCCore' => [
"Resources/**/*.{lproj,png,jpg,zip,json,html,plist,sql}"
]
}
core.source_files = 'Classes/**/*.{h,m}',
'Views/**/*.{h,m}',
"AppSpecific/#{app_id.to_s}/*"
core.frameworks = 'QuartzCore', 'MessageUI', 'CoreLocation', 'SystemConfiguration'
core.xcconfig = { 'FRAMEWORK_SEARCH_PATHS' => '"$(PODS_ROOT)/../.." "$(PODS_ROOT)/.." "$(SRCROOT)/.."' }
core.xcconfig = { 'OTHER_LDFLAGS' => '-all_load' }
core.resources = [ "Resources/**/*.bundle" ]
  end
end
```
The line causing this problem is:
```ruby
core.resource_bundles = {
'GCCore' => [
"Resources/**/*.{lproj,png,jpg,zip,json,html,plist,sql}"
]
}
```
Inside `Resources` is a bundle named `Settings.bundle`, containing the app's settings. If I remove the bundle, or remove this line from my podspec, it works just fine.
The `Settings.bundle` looks like this:

It had been working until now and broke only recently. It probably makes sense not to allow a bundle inside a bundle (not sure that's even possible), but the error message is cryptic and should state explicitly that a bundle may not contain another bundle.
What do you think? | defect | bundles inside bundles rainbow running pod install with the latest release stack cocoapods beta ruby ruby revision rubygems host mac os x xcode git git version ruby lib dir users felixkrause rvm rubies ruby lib repositories master plugins cocoapods plugins cocoapods trunk cocoapods try error nomethoderror undefined method new file for users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods project rb in add file reference users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer file references installer rb in block levels in add file accessors paths to pods group users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer file references installer rb in each users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer file references installer rb in block in add file accessors paths to pods group users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer file references installer rb in each users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer file references installer rb in add file accessors paths to pods group users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer file references installer rb in block in add resources users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods user interface rb in message users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer file references installer rb in add resources users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer file references installer rb in install users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer rb in install file references users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer rb in block in generate pods project users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods user interface rb in section users felixkrause rvm gems ruby gems cocoapods beta lib 
cocoapods installer rb in generate pods project users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods installer rb in install users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods command project rb in run install with update users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods command project rb in run users felixkrause rvm gems ruby gems claide lib claide command rb in run users felixkrause rvm gems ruby gems cocoapods beta lib cocoapods command rb in run users felixkrause rvm gems ruby gems cocoapods beta bin pod in users felixkrause rvm gems ruby bin pod in load users felixkrause rvm gems ruby bin pod in my podfile ruby pod spec new do s s name core s version s homepage s platform ios s requires arc true s subspec core do core resources core resource bundles gccore resources lproj png jpg zip json html plist sql core source files classes h m views h m appspecific app id to s core frameworks quartzcore messageui corelocation systemconfiguration core xcconfig framework search paths pods root pods root srcroot core xcconfig other ldflags all load core resources end the line causing this problem is ruby core resource bundles gccore resources lproj png jpg zip json html plist sql inside resources is a bundle named settings bundle containing the app s settings if i remove the bundle or remove this line from my podfile it works just fine the settings bundle looks like this it has been working until now and broke just recently it probably makes sense not having the bundle inside a bundle not sure if that s even possible but the error message is cryptic and should probably state a problem with bundles containing bundles what do you think | 1 |
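The failing glob in the record above can be reproduced outside CocoaPods. The sketch below is illustrative Python, not CocoaPods code (file names invented): a recursive pattern like `Resources/**/*.{png,plist}` also matches files *inside* `Settings.bundle`, which would explain why the nested bundle's contents get picked up as individual file references, and filtering out paths under `*.bundle` is one plausible workaround.

```python
import glob
import os
import tempfile

# Illustrative layout (names invented): a Settings.bundle nested inside
# the Resources folder that the podspec glob also scans.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "Resources", "Settings.bundle"))
for rel in ("Resources/icon.png", "Resources/Settings.bundle/Root.plist"):
    open(os.path.join(root, rel), "w").close()

# "Resources/**/*.{png,plist,...}" expands to one glob per extension;
# the "**" component recurses into Settings.bundle as well.
matches = []
for ext in ("png", "plist"):
    matches += glob.glob(os.path.join(root, "Resources", "**", "*." + ext),
                         recursive=True)

nested = [p for p in matches if ".bundle" + os.sep in p]

# One workaround: drop anything living under a *.bundle directory and
# hand only the filtered list to the resource installer.
safe = [p for p in matches if ".bundle" + os.sep not in p]
```

In the podspec itself, the analogous move would be narrowing the glob or excluding paths under `*.bundle`, so `Settings.bundle` is only matched as a whole by the `resources` pattern — hedged here, since the right CocoaPods-side fix depends on how `resource_bundles` is meant to treat nested bundles.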
371,282 | 10,964,304,362 | IssuesEvent | 2019-11-27 22:09:42 | Luna-Interactive/catastrophe | https://api.github.com/repos/Luna-Interactive/catastrophe | closed | Death animation | Backend Bug Gameplay Priority: Critical | **Describe the bug**
Player simply disappears when killed.
**NOTE: Must be synched in backend** | 1.0 | Death animation - **Describe the bug**
Player simply disappears when killed.
**NOTE: Must be synched in backend** | non_defect | death animation describe the bug player simply disappears when killed note must be synched in backend | 0 |
293,016 | 22,042,097,911 | IssuesEvent | 2022-05-29 14:08:11 | corka149/jsonpatch | https://api.github.com/repos/corka149/jsonpatch | closed | Something wrong in handling list | documentation enhancement help wanted good first issue | **Describe the bug**
1. An array is a legal JSON document type, but a patch can't be applied to a list directly. (Maybe that's by design...)
2. Lists aren't handled well (see the examples below).
3. (Suggestion) Swap the arguments of `Jsonpatch.apply_patch/2` so that it can be used in a pipeline: `map |> Jsonpatch.apply_patch!(diff_1) |> Jsonpatch.apply_patch!(diff_2)`
**To Reproduce**
Example 1:
```elixir
base = %{list: [1, 2, 3]}
target = %{list: [2, 3, 1]}
diff = Jsonpatch.diff(base, target)
Jsonpatch.apply_patch!(diff, base)
```
Example 2:
```elixir
base = %{list: [%{a: 1}, %{b: 2}, %{c: 3}]}
target = %{list: [%{b: 2}, %{c: 3}, %{a: 1}]}
diff = Jsonpatch.diff(base, target)
Jsonpatch.apply_patch!(diff, base)
```
**Expected behavior**
I hope the result of `Jsonpatch.apply_patch!(diff, base)` is equal to `target`.
**Screenshots**
Example 1:
<img width="1340" alt="image" src="https://user-images.githubusercontent.com/3273295/167575287-30157417-8413-4e9a-96d5-917c439dbb88.png">
Example 2:
<img width="823" alt="image" src="https://user-images.githubusercontent.com/3273295/167575453-32171b5c-877f-4945-a1a4-a21a6b7720ab.png">
**Desktop (please complete the following information):**
- Elixir - 1.13.3
- Erlang - 24.2.1
- System - MacOS 12.3.1
| 1.0 | Something wrong in handling list - **Describe the bug**
1. An array is a legal JSON document type, but a patch can't be applied to a list directly. (Maybe that's by design...)
2. Lists aren't handled well (see the examples below).
3. (Suggestion) Swap the arguments of `Jsonpatch.apply_patch/2` so that it can be used in a pipeline: `map |> Jsonpatch.apply_patch!(diff_1) |> Jsonpatch.apply_patch!(diff_2)`
**To Reproduce**
Example 1:
```elixir
base = %{list: [1, 2, 3]}
target = %{list: [2, 3, 1]}
diff = Jsonpatch.diff(base, target)
Jsonpatch.apply_patch!(diff, base)
```
Example 2:
```elixir
base = %{list: [%{a: 1}, %{b: 2}, %{c: 3}]}
target = %{list: [%{b: 2}, %{c: 3}, %{a: 1}]}
diff = Jsonpatch.diff(base, target)
Jsonpatch.apply_patch!(diff, base)
```
**Expected behavior**
I hope the result of `Jsonpatch.apply_patch!(diff, base)` is equal to `target`.
**Screenshots**
Example 1:
<img width="1340" alt="image" src="https://user-images.githubusercontent.com/3273295/167575287-30157417-8413-4e9a-96d5-917c439dbb88.png">
Example 2:
<img width="823" alt="image" src="https://user-images.githubusercontent.com/3273295/167575453-32171b5c-877f-4945-a1a4-a21a6b7720ab.png">
**Desktop (please complete the following information):**
- Elixir - 1.13.3
- Erlang - 24.2.1
- System - MacOS 12.3.1
| non_defect | something wrong in handling list describe the bug the array is a legal json structured type but can t apply a patch for a list directly maybe it s your feature can t handle list well see examples below suggestion swap arguments of jsonpatch apply patch so that the pipeline can be used map jsonpatch apply patch diff jsonpatch apply patch diff to reproduce example elixir base list target list diff jsonpatch diff base target jsonpatch apply patch diff base example elixir base list target list diff jsonpatch diff base target jsonpatch apply patch diff base expected behavior i hope the result of jsonpatch after patch diff base is equal to target screenshots example img width alt image src example img width alt image src desktop please complete the following information elixir erlang system macos | 0 |
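As a language-neutral aside on the list examples in the record above (jsonpatch is Elixir; the Python below is an illustrative sketch of a generic failure mode, not a reading of the library's internals): patches that address list elements by index are order-sensitive, because each applied operation shifts the indices the remaining operations rely on.

```python
def apply_ops(ops, lst):
    """Apply minimal JSON-Patch-style list operations in sequence."""
    out = list(lst)
    for op in ops:
        if op["op"] == "remove":
            out.pop(op["idx"])
        elif op["op"] == "add":
            out.insert(op["idx"], op["value"])
    return out

base = [1, 2, 3]
# A diff for [1, 2, 3] -> [2, 3, 1]: drop the leading 1, re-add it at the end.
ops = [{"op": "remove", "idx": 0},
       {"op": "add", "idx": 2, "value": 1}]
assert apply_ops(ops, base) == [2, 3, 1]

# The same operations applied in the opposite order hit shifted indices
# and produce a different (wrong) list:
assert apply_ops(list(reversed(ops)), base) == [2, 1, 3]
```

If a library generates such index-based operations for a reorder, applying them in any order other than the one they were generated in would yield exactly the kind of wrong list shown in the screenshots.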
23,425 | 3,814,980,207 | IssuesEvent | 2016-03-28 15:57:27 | mitlm/mitlm | https://api.github.com/repos/mitlm/mitlm | closed | Installation issues. estimate-ngram: command not found | auto-migrated Priority-Medium Type-Defect | ```
1. I decompressed the mitlm-0.4.1.tar.gz file on OSX Yosemite
From Terminal:
2. ./compile.
3. make -j
Error:
Input is: estimate-ngram -text mysent.txt -write-lm mysent.lm
But then I get: estimate-ngram: command not found
```
Original issue reported on code.google.com by `vis...@gmail.com` on 19 Jun 2015 at 9:31 | 1.0 | Installation issues. estimate-ngram: command not found - ```
1. I decompressed the mitlm-0.4.1.tar.gz file on OSX Yosemite
From Terminal:
2. ./compile.
3. make -j
Error:
Input is: estimate-ngram -text mysent.txt -write-lm mysent.lm
But then I get: estimate-ngram: command not found
```
Original issue reported on code.google.com by `vis...@gmail.com` on 19 Jun 2015 at 9:31 | defect | installation issues estimate ngram command not found i decompressed the mitlm tar gz file on osx yosemite from terminal compile make j error input is estimate ngram text mysent txt write lm mysent lm but then i get estimate ngram command not found original issue reported on code google com by vis gmail com on jun at | 1 |
111,606 | 17,030,476,390 | IssuesEvent | 2021-07-04 13:06:34 | turkdevops/next-auth | https://api.github.com/repos/turkdevops/next-auth | closed | CVE-2019-11358 (Medium) detected in jquery-2.1.4.min.js, jquery-3.2.1.min.js - autoclosed | security vulnerability | ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-3.2.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: next-auth/www/node_modules/autocomplete.js/examples/basic_angular.html</p>
<p>Path to vulnerable library: next-auth/www/node_modules/autocomplete.js/examples/basic_angular.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.2.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p>
<p>Path to dependency file: next-auth/www/node_modules/autocomplete.js/examples/basic_jquery.html</p>
<p>Path to vulnerable library: next-auth/www/node_modules/autocomplete.js/examples/basic_jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/next-auth/commit/407f761b37202be130dc73cd869b925d6cd7ca15">407f761b37202be130dc73cd869b925d6cd7ca15</a></p>
<p>Found in base branch: <b>canary</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2019-11358 (Medium) detected in jquery-2.1.4.min.js, jquery-3.2.1.min.js - autoclosed - ## CVE-2019-11358 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-3.2.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: next-auth/www/node_modules/autocomplete.js/examples/basic_angular.html</p>
<p>Path to vulnerable library: next-auth/www/node_modules/autocomplete.js/examples/basic_angular.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-3.2.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/3.2.1/jquery.min.js</a></p>
<p>Path to dependency file: next-auth/www/node_modules/autocomplete.js/examples/basic_jquery.html</p>
<p>Path to vulnerable library: next-auth/www/node_modules/autocomplete.js/examples/basic_jquery.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-3.2.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/turkdevops/next-auth/commit/407f761b37202be130dc73cd869b925d6cd7ca15">407f761b37202be130dc73cd869b925d6cd7ca15</a></p>
<p>Found in base branch: <b>canary</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
<p>Publish Date: 2019-04-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-11358>CVE-2019-11358</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11358</a></p>
<p>Release Date: 2019-04-20</p>
<p>Fix Resolution: 3.4.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in jquery min js jquery min js autoclosed cve medium severity vulnerability vulnerable libraries jquery min js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file next auth www node modules autocomplete js examples basic angular html path to vulnerable library next auth www node modules autocomplete js examples basic angular html dependency hierarchy x jquery min js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file next auth www node modules autocomplete js examples basic jquery html path to vulnerable library next auth www node modules autocomplete js examples basic jquery html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch canary vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
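Python has no prototype chain, so the `__proto__` vector in this CVE cannot be reproduced literally; the sketch below (invented names) only illustrates the underlying hazard the advisory describes — deep-merging unsanitized input into a shared object so that one caller's keys leak into every later caller's view:

```python
def deep_merge(dst, src):
    # Naive recursive merge in the spirit of jQuery.extend(true, ...):
    # nested dicts are updated in place instead of being copied.
    for key, value in src.items():
        if isinstance(value, dict) and isinstance(dst.get(key), dict):
            deep_merge(dst[key], value)
        else:
            dst[key] = value
    return dst

SHARED_DEFAULTS = {"headers": {"accept": "json"}}

def make_config(overrides):
    # Bug: merging straight into the shared defaults object, so one
    # caller's keys silently leak into every later caller's config.
    return deep_merge(SHARED_DEFAULTS, overrides)

make_config({"headers": {"x-injected": "1"}})
polluted = make_config({})  # inherits the injected key
```

In the actual jQuery bug the shared object is `Object.prototype`, reached through an enumerable `__proto__` key in the source object; the 3.4.0 fix makes the deep `jQuery.extend(true, {}, ...)` skip that key.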
527,778 | 15,352,383,982 | IssuesEvent | 2021-03-01 06:55:14 | magento/magento2 | https://api.github.com/repos/magento/magento2 | closed | Attribute Options can't get reloaded, after adding new option | Component: Eav Issue: Confirmed Priority: P4 Progress: ready for dev Reproduced on 2.2.x Reproduced on 2.3.x Severity: S4 help wanted stale issue | <!---
Thank you for contributing to Magento.
To help us process this issue we recommend that you add the following information:
- Summary of the issue,
- Information on your environment,
- Steps to reproduce,
- Expected and actual results,
Please also have a look at our guidelines article before adding a new issue https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
-->
### Preconditions
1. PHP 7.1
2. Magento 2.2.4
### Steps to reproduce
1. Create a Script for adding new options on Select Attributes.
```php
protected function getSelectAttributeValue($value, $attribute)
{
$value = trim($value);
if ($value == '') {
return null;
}
$optionId = $attribute->getSource()->getOptionId($value);
if ($optionId) {
return $optionId;
}
$option = [
'attribute_id' => $attribute->getId(),
'values' => [0 => trim($value)]
];
/** @var $this->eavSetup Magento\Eav\Setup\EavSetup */
$this->eavSetup->addAttributeOption($option);
return $attribute->getSource()->getOptionId($value);
}
```
2. Try to get option id for newly added value
3. Try to load Attribute. Jump to step Number 2.
### Expected result
<!--- Tell us what should happen -->
1. $attribute->getSource()->getOptionId() should return new value
or
2. All Options for Attribute can be reloaded by some function.
### Actual result
<!--- Tell us what happens instead -->
1. It's not possible to get the newly added option id / value from the database. I have also tried to reload the attribute with different functions (load, loadByCode, fetching from the repository); the new option is not included in getAllOptions().
In my opinion, getAllOptions() in Magento\Eav\Model\Entity\Attribute\Source\Table should take a third parameter, $reload, to re-fetch the options when needed. You would then just add a line like ```$attribute->getSource()->getAllOptions(null, null, true)``` to reload the options manually.
| 1.0 | Attribute Options can't get reloaded, after adding new option - <!---
Thank you for contributing to Magento.
To help us process this issue we recommend that you add the following information:
- Summary of the issue,
- Information on your environment,
- Steps to reproduce,
- Expected and actual results,
Please also have a look at our guidelines article before adding a new issue https://github.com/magento/magento2/wiki/Issue-reporting-guidelines
-->
### Preconditions
1. PHP 7.1
2. Magento 2.2.4
### Steps to reproduce
1. Create a Script for adding new options on Select Attributes.
```php
protected function getSelectAttributeValue($value, $attribute)
{
$value = trim($value);
if ($value == '') {
return null;
}
$optionId = $attribute->getSource()->getOptionId($value);
if ($optionId) {
return $optionId;
}
$option = [
'attribute_id' => $attribute->getId(),
'values' => [0 => trim($value)]
];
/** @var $this->eavSetup Magento\Eav\Setup\EavSetup */
$this->eavSetup->addAttributeOption($option);
return $attribute->getSource()->getOptionId($value);
}
```
2. Try to get option id for newly added value
3. Try to load Attribute. Jump to step Number 2.
### Expected result
<!--- Tell us what should happen -->
1. $attribute->getSource()->getOptionId() should return new value
or
2. All Options for Attribute can be reloaded by some function.
### Actual result
<!--- Tell us what happens instead -->
1. It's not possible to get the newly added option id / value from the database. I have also tried to reload the attribute with different functions (load, loadByCode, fetching from the repository); the new option is not included in getAllOptions().
In my opinion, getAllOptions() in Magento\Eav\Model\Entity\Attribute\Source\Table should take a third parameter, $reload, to re-fetch the options when needed. You would then just add a line like ```$attribute->getSource()->getAllOptions(null, null, true)``` to reload the options manually.
| non_defect | attribute options can t get reloaded after adding new option thank you for contributing to magento to help us process this issue we recommend that you add the following information summary of the issue information on your environment steps to reproduce expected and actual results please also have a look at our guidelines article before adding a new issue preconditions php magento steps to reproduce create a script for adding new options on select attributes php protected function getselectattributevalue value attribute value trim value if value return null optionid attribute getsource getoptionid value if optionid return optionid option attribute id attribute getid values var this eavsetup magento eav setup eavsetup this eavsetup addattributeoption option return attribute getsource getoptionid value try to get option id for newly added value try to load attribute jump to step number expected result attribute getsource getoptionid should return new value or all options for attribute can be reloaded by some function actual result it s not possible to get the newly added option id value from database i have also tried to reload the attribute with different functions load loadbycode get from repository the new option is not included in getalloptions in my opinion getalloptions from magento eav model entity attribute source table should get a third parameter called reload to reload the options when needed so you just had to add a line like attribute getsource getalloptions null null true to reload the options manually | 0 |
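The `$reload` suggestion in the record above amounts to a cache-busting flag on a memoized loader. A minimal language-neutral sketch in Python (illustrative names, not Magento's API) of why the flag is needed and what it would do:

```python
class OptionSource:
    """Sketch of the caching pattern the report describes: options are
    loaded once and memoized, so rows added later stay invisible until
    the cache is explicitly dropped. Names are illustrative only."""

    def __init__(self, fetch):
        self._fetch = fetch      # callable hitting the "database"
        self._options = None     # memoized result

    def get_all_options(self, reload=False):
        if reload or self._options is None:
            self._options = self._fetch()
        return self._options

db = ["red", "green"]
src = OptionSource(lambda: list(db))
assert src.get_all_options() == ["red", "green"]

db.append("blue")                                   # option added elsewhere
assert "blue" not in src.get_all_options()          # stale memoized list
assert "blue" in src.get_all_options(reload=True)   # the proposed $reload
```

Until such a flag exists, callers have to obtain a source object whose cache has never been populated — which, per the report, plain attribute reloads do not guarantee.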
15,280 | 2,850,525,532 | IssuesEvent | 2015-05-31 17:08:14 | damonkohler/android-scripting | https://api.github.com/repos/damonkohler/android-scripting | closed | getCellLocation() and getNeighboringCellInfo() return {} and [] in case of CDMA | auto-migrated Priority-Medium Type-Defect | ```
What device(s) are you experiencing the problem on?
Motorola Android
What firmware version are you running on the device?
android 2.2.3 baseband C_01.43.01P kernel 2.8.32.9-g68eeef5 android-build@apa26
#1 Build number FRK76
What steps will reproduce the problem?
1. In ASE, use droid.getCellLocation() or droid.getNeighboringCellInfo()
2. Using droid.startTrackingPhoneState() beforehand does not solve the problem
What is the expected output? What do you see instead?
expect to see map with information about cell connected to (such as MCCMNC,
psc, LAC, CID for GSM -- or -- MCC, SID, LAC, BID in case of CDMA
see instead {} for getCellLocation() and
see [] for getNeighboringCellInfo()
What version of the product are you using? On what operating system?
sl4a_r5x.apk (and earlier) Also, I am on Verizon Wireless, so this is CDMA
rather than GSM (I don't know if it works for GSM)
Please provide any additional information below.
(1) getNeighboringCellInfo() may return [] because this information is perhaps
not available in CDMA, but
(2) getCellLocation() should return valid cellID information
However, it seems to me you need to first do something like
mTelephonyManager.listen(mPhoneStateListener,
PhoneStateListener.LISTEN_CELL_LOCATION) to generate the events?
I looked in
http://code.google.com/p/android-scripting/source/browse/android/Common/src/com/
googlecode/android_scripting/facade/PhoneFacade.java
and see that startTrackingPhoneState() does something similar but only for the
phone call state mTelephonyManager.listen(mPhoneStateListener,
PhoneStateListener.LISTEN_CALL_STATE);
```
Original issue reported on code.google.com by `bkph...@gmail.com` on 14 Mar 2012 at 12:38 | 1.0 | getCellLocation() and getNeighboringCellInfo() return {} and [] in case of CDMA - ```
What device(s) are you experiencing the problem on?
Motorola Android
What firmware version are you running on the device?
android 2.2.3 baseband C_01.43.01P kernel 2.8.32.9-g68eeef5 android-build@apa26
#1 Build number FRK76
What steps will reproduce the problem?
1.in ASE use droid.getCellLocation() or droid.getNeighboringCellInfo()
2.using droid.StartTrackingPhoneState() before does not solve the problem
3.
What is the expected output? What do you see instead?
expect to see map with information about cell connected to (such as MCCMNC,
psc, LAC, CID for GSM -- or -- MCC, SID, LAC, BID in case of CDMA
see instead {} for getCellLocation() and
see [] for getNeighboringCellInfo()
What version of the product are you using? On what operating system?
sl4a_r5x.apk (and earlier) Also, I am on Verizon Wireless, so this is CDMA
rather than GSM (I don't know if it works for GSM)
Please provide any additional information below.
(1) getNeighboringCellInfo() may return [] because this information is perhaps
not available in CDMA, but
(2) getCellLocation() should return valid cellID information
However, it seems to me you need to first do something like
mTelephonyManager.listen(mPhoneStateListener,
PhoneStateListener.LISTEN_CELL_LOCATION) to generate the events?
I looked in
http://code.google.com/p/android-scripting/source/browse/android/Common/src/com/
googlecode/android_scripting/facade/PhoneFacade.java
and see that startTrackingPhoneState() does something similar but only for the
phone call state mTelephonyManager.listen(mPhoneStateListener,
PhoneStateListener.LISTEN_CALL_STATE);
```
Original issue reported on code.google.com by `bkph...@gmail.com` on 14 Mar 2012 at 12:38 | defect | getcelllocation and getneighboringcellinfo return and in case of cdma what device s are you experiencing the problem on motorola android what firmware version are you running on the device android baseband c kernel android build build number what steps will reproduce the problem in ase use droid getcelllocation or droid getneighboringcellinfo using droid starttrackingphonestate before does not solve the problem what is the expected output what do you see instead expect to see map with information about cell connected to such as mccmnc psc lac cid for gsm or mcc sid lac bid in case of cdma see instead for getcelllocation and see for getneighboringcellinfo what version of the product are you using on what operating system apk and earlier also i am on verizon wireless so this is cdma rather than gsm i don t know if it works for gsm please provide any additional information below getneighboringcellinfo may return because this information is perhaps not available in cdma but getcelllocation should return valid cellid information however seems to me you need to first to something like mtelephonymanager listen mphonestatelistener phonestatelistener listen cell location to generate the events i looked in googlecode android scripting facade phonefacade java and see that starttrackingphonestate does something similar but only for the phone call state mtelephonymanager listen mphonestatelistener phonestatelistener listen call state original issue reported on code google com by bkph gmail com on mar at | 1 |
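The report above argues that cell-location data only flows once a listener is registered for the right event mask — in Android's real (Java) API, `TelephonyManager.listen(listener, PhoneStateListener.LISTEN_CELL_LOCATION)`. A simplified Python model of that subscription behaviour (everything here is a hypothetical stand-in, not SL4A or Android code):

```python
# Event flags, mimicking PhoneStateListener's LISTEN_* constants.
LISTEN_CALL_STATE = 1
LISTEN_CELL_LOCATION = 2


class TelephonyModel:
    def __init__(self):
        self._events = 0
        self._cell_location = {}

    def listen(self, events):
        # Register interest in a bitmask of event types.
        self._events = events

    def push_cell_update(self, location):
        # The radio layer only delivers updates that were subscribed to.
        if self._events & LISTEN_CELL_LOCATION:
            self._cell_location = dict(location)

    def get_cell_location(self):
        return self._cell_location


phone = TelephonyModel()
phone.listen(LISTEN_CALL_STATE)          # what startTrackingPhoneState does
phone.push_cell_update({"sid": 4, "bid": 7})
empty = phone.get_cell_location()        # {} - location events never subscribed

phone.listen(LISTEN_CALL_STATE | LISTEN_CELL_LOCATION)   # the suggested fix
phone.push_cell_update({"sid": 4, "bid": 7})
loc = phone.get_cell_location()          # CDMA-style fields now populated
```

This is consistent with the reporter's observation: subscribing only to call state (as `startTrackingPhoneState()` does) leaves `getCellLocation()` returning `{}`.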
9,765 | 11,813,103,604 | IssuesEvent | 2020-03-19 21:35:14 | jiangdashao/Matrix-Issues | https://api.github.com/repos/jiangdashao/Matrix-Issues | opened | [INCOMPATIBILITY] Being kicked from server when using packet sign gui | Incompatibility | ## Troubleshooting Information
`Change - [ ] to - [X] to check the checkboxes below.`
- [x] The incompatible plugin is up-to-date
- [x] Matrix and ProtocolLib are up-to-date
- [x] Matrix is running on a 1.8, 1.12, 1.13, 1.14, or 1.15 server
- [x] The issue happens on default config.yml and checks.yml
- [x] I've tested if the issue happens on default config
## Issue Information
**Server version**: Spigot 1.13.2 and BungeeCord
**Incompatible plugin**: Sign input by using packets (https://www.spigotmc.org/threads/signmenu-1-15-2-get-player-sign-input.249381/)
**Verbose messages (or) console errors**: none
**How/when does this happen**: Only happening with Bungeecord. When a player clicks Done on the sign menu
**Video of incompatibility**: https://youtu.be/JsyUrxxMDDM
**Other information**:
Without bungeecord, it works fine. With bungeecord, it happens.
When Matrix is not installed, it doesn't happen.
I've tried disabling all checks, including BadPackets.

## Configuration Files
**Link to checks.yml file**: default
| True | [INCOMPATIBILITY] Being kicked from server when using packet sign gui - ## Troubleshooting Information
`Change - [ ] to - [X] to check the checkboxes below.`
- [x] The incompatible plugin is up-to-date
- [x] Matrix and ProtocolLib are up-to-date
- [x] Matrix is running on a 1.8, 1.12, 1.13, 1.14, or 1.15 server
- [x] The issue happens on default config.yml and checks.yml
- [x] I've tested if the issue happens on default config
## Issue Information
**Server version**: Spigot 1.13.2 and BungeeCord
**Incompatible plugin**: Sign input by using packets (https://www.spigotmc.org/threads/signmenu-1-15-2-get-player-sign-input.249381/)
**Verbose messages (or) console errors**: none
**How/when does this happen**: Only happening with Bungeecord. When a player clicks Done on the sign menu
**Video of incompatibility**: https://youtu.be/JsyUrxxMDDM
**Other information**:
Without bungeecord, it works fine. With bungeecord, it happens.
When Matrix is not installed, it doesn't happen.
I've tried disabling all checks, including BadPackets.

## Configuration Files
**Link to checks.yml file**: default
| non_defect | being kicked from server when using packet sign gui troubleshooting information change to to check the checkboxes below the incompatible plugin is up to date matrix and protocollib are up to date matrix is running on a or server the issue happens on default config yml and checks yml i ve tested if the issue happens on default config issue information server version spigot and bungeecord incompatible plugin sign input by using packets verbose messages or console errors none how when does this happen only happening with bungeecord when a player click on done on the sign menu video of incompatibility other information without bungeecord it works fine with bungeecord it happens when matrix is not installed it doesn t happen i ve tried disabling all checks incluid badpackets configuration files link to checks yml file default | 0 |
11,657 | 2,660,025,168 | IssuesEvent | 2015-03-19 01:42:57 | perfsonar/project | https://api.github.com/repos/perfsonar/project | closed | bwctl post hook information is out of date | Component-BWCTL Milestone-Release3.5 Priority-Medium Type-Defect | Original [issue 1056](https://code.google.com/p/perfsonar-ps/issues/detail?id=1056) created by arlake228 on 2015-01-27T14:19:34.000Z:
The information that bwctl passes to the post hook was never updated after non-throughput, and 'reverse' tests got added to bwctl. This means that the results being passed are either wrong, or confusing. | 1.0 | bwctl post hook information is out of date - Original [issue 1056](https://code.google.com/p/perfsonar-ps/issues/detail?id=1056) created by arlake228 on 2015-01-27T14:19:34.000Z:
The information that bwctl passes to the post hook was never updated after non-throughput, and 'reverse' tests got added to bwctl. This means that the results being passed are either wrong, or confusing. | defect | bwctl post hook information is out of date original created by on the information that bwctl passes to the post hook was never updated after non throughput and reverse tests got added to bwctl this means that the results being passed are either wrong or confusing | 1 |
64,790 | 18,905,619,984 | IssuesEvent | 2021-11-16 08:45:52 | owncloud/ocis | https://api.github.com/repos/owncloud/ocis | closed | Trying to restore file version creates another new version | Type:Bug Category:Defect Priority:p3-medium | Run OCIS with OCIS-STORAGE
https://user-images.githubusercontent.com/52366632/104712269-7231e100-574a-11eb-8f9b-034786d68fba.mp4
| 1.0 | Trying to restore file version creates another new version - Run OCIS with OCIS-STORAGE
https://user-images.githubusercontent.com/52366632/104712269-7231e100-574a-11eb-8f9b-034786d68fba.mp4
| defect | trying to restore file version creates another new version run ocis with ocis storage | 1 |
83,402 | 3,634,550,839 | IssuesEvent | 2016-02-11 18:25:16 | JMurk/Valve_Hydrant_Mobile_Issues | https://api.github.com/repos/JMurk/Valve_Hydrant_Mobile_Issues | opened | Map - Home Button - Dev Version Returns to Prod Home | bug moderate priority | **Issue:** In the development version of the application, the home button navigates users to the production portal page.

**Resolution:** Please update the link for the home button in the development version to return to the development application portal. | 1.0 | Map - Home Button - Dev Version Returns to Prod Home - **Issue:** In the development version of the application, the home button navigates users to the production portal page.

**Resolution:** Please update the link for the home button in the development version to return to the development application portal. | non_defect | map home button dev version returns to prod home issue in the development version of the application the home button navigates users to the production portal page resolution please update the link for the home button in the development version to return to the development application portal | 0 |
45,065 | 12,532,107,062 | IssuesEvent | 2020-06-04 15:27:51 | combatopera/aridity | https://api.github.com/repos/combatopera/aridity | opened | a way to refer to the root context | defect | to avoid clashes by using a path of length 2+ where the first word refers to the root | 1.0 | a way to refer to the root context - to avoid clashes by using a path of length 2+ where the first word refers to the root | defect | a way to refer to the root context to avoid clashes by using a path of length where the first word refers to the root | 1 |
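The aridity row above asks for paths of length 2+ whose first word refers to the root context, so that a nested context can't shadow a root-level name. A small Python sketch of that resolution rule (the reserved word `root` and all function names here are hypothetical choices, not necessarily what aridity adopted):

```python
# Reserved first word that forces resolution from the root context.
ROOT_WORD = "root"


def resolve(context, path, root=None):
    """Look up a dotted path in nested dict contexts.

    A path of length 2+ starting with ROOT_WORD is resolved against the
    root context, sidestepping any clash with local names.
    """
    if root is None:
        root = context
    words = path.split(".")
    if len(words) >= 2 and words[0] == ROOT_WORD:
        context, words = root, words[1:]   # jump to the root explicitly
    value = context
    for word in words:
        value = value[word]
    return value


root = {"port": 80, "app": {"port": 8080}}
local = root["app"]

shadowed = resolve(local, "port", root)        # 8080: the local name wins
explicit = resolve(local, "root.port", root)   # 80: root reached despite clash
```

The length-2+ requirement matters: a bare `root` key in some context remains an ordinary lookup, so only qualified paths change meaning.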
45,482 | 5,717,825,746 | IssuesEvent | 2017-04-19 18:08:05 | brave/browser-ios | https://api.github.com/repos/brave/browser-ios | opened | Manual tests for iPhone 6s+ (iOS9) 1.3.2 Preview 1 | tests | ## Pre-release check list
1. [ ] Shield stats are not properly shown on sites ([#807](https://github.com/brave/browser-ios/issues/807))
2. [ ] Search in page option should not be shown on a new tab ([#808](https://github.com/brave/browser-ios/issues/808))
3. [ ] History items are listed even with irrelevant character entered in URL bar ([#809](https://github.com/brave/browser-ios/issues/809))
4. [ ] In Private Mode URL Bar should use Dark Keyboard ([#811](https://github.com/brave/browser-ios/issues/811))
5. [ ] Editing bookmark should not allow empty URL field ([#806](https://github.com/brave/browser-ios/issues/806))
6. [ ] Opening a bookmark doesn't show the bookmark indicator(Orange Star) ([#805](https://github.com/brave/browser-ios/issues/805))
7. [ ] [Beta]1Password button shown by default even when 3rd party password manager is set to don't use ([#535](https://github.com/brave/browser-ios/issues/535))
8. [ ] Tab restore bug: current tab not restoring, background tabs seem ok ([#695](https://github.com/brave/browser-ios/issues/695))
9. [ ] Folder title is shown only after tapping on Done ([#778](https://github.com/brave/browser-ios/issues/778))
10. [ ] Top Sites: is based on page visits, not domain visits, see if this can be changed ([#446](https://github.com/brave/browser-ios/issues/446))
11. [ ] Wrong favicons shown on top site tiles bug needs investigation ([#737](https://github.com/brave/browser-ios/issues/737))
12. [ ] Scroll gesture can trigger navigation ([#741](https://github.com/brave/browser-ios/issues/741))
13. [ ] Unable to delete added bookmark bug ([#766](https://github.com/brave/browser-ios/issues/766))
14. [ ] Add Safari-like Find in Page enhancement ([#804](https://github.com/brave/browser-ios/issues/804))
15. [ ] HTTPSE issue with http://journals.plos.org/ ([#775](https://github.com/brave/browser-ios/issues/775))
16. [ ] Unable to save Bookmarklet for Unmark bookmark manager (e.g. unable to edit URL on bookmark) ([#728](https://github.com/brave/browser-ios/issues/728))
17. [ ] Disabling Private Browsing Only still creates new private tabs enhancement ([#439](https://github.com/brave/browser-ios/issues/439))
18. [ ] Have default Favicon Icon ([#797](https://github.com/brave/browser-ios/issues/797))
19. [ ] Always Show "New Folder" Option ([#798](https://github.com/brave/browser-ios/issues/798))
20. [ ] Allow Folders with Records to be Deleted ([#800](https://github.com/brave/browser-ios/issues/800))
21. [ ] Report a bug opens in normal tab even when private browsing only is enabled bug ([#779](https://github.com/brave/browser-ios/issues/779))
22. [ ] Stay in Private mode, make it a persistent app state enhancement ([#319](https://github.com/brave/browser-ios/issues/319))
23. [ ] Minimize Telemetry Events ([#795](https://github.com/brave/browser-ios/issues/795))
24. [ ] Support section missing from the new beta build bug ([#792](https://github.com/brave/browser-ios/issues/792))
25. [ ] Tab Restoration Fails ([#788](https://github.com/brave/browser-ios/issues/788))
26. [ ] Buggy transition of newtab on app launch ([#635](https://github.com/brave/browser-ios/issues/635))
27. [ ] Buggy transition to top sites screen (stats show then hide, fonts change for final layout) ([#627](https://github.com/brave/browser-ios/issues/627))
28. [ ] Unable to access current page after not using for a while ([#644](https://github.com/brave/browser-ios/issues/644))
29. [ ] Add Iran ad blocking filter ([#758](https://github.com/brave/browser-ios/issues/758))
30. [ ] Browser crashes when trying to move bookmark from root folder to empty titled bookmark folder bug crash ([#771](https://github.com/brave/browser-ios/issues/771))
31. [ ] Renaming a bookmark folder behaves like a bookmark QA/Steps-specified ([#777](https://github.com/brave/browser-ios/issues/777))
32. [ ] Tab count has strange animation before loading the tabs ([#774](https://github.com/brave/browser-ios/issues/774))
33. [ ] Removing bookmark folder name causes it to show as root folder when trying to move a bookmark ([#770](https://github.com/brave/browser-ios/issues/770))
34. [ ] Opening an empty-title folder shows Bookmarks as the title enhancement ([#769](https://github.com/brave/browser-ios/issues/769))
35. [ ] Keep bookmark icon color consistent when navigating between settings and webview suggestion UX ([#688](https://github.com/brave/browser-ios/issues/688))
36. [ ] iTunes app details needs to update support information ([#742](https://github.com/brave/browser-ios/issues/742))
37. [ ] Support bitwarden password manager autofill app extension ([#743](https://github.com/brave/browser-ios/issues/743))
38. [ ] Folder names cannot be edited to have space-only titles ([#705](https://github.com/brave/browser-ios/issues/705))
39. [ ] Tabs bar covers the top part of webpage ([#729](https://github.com/brave/browser-ios/issues/729))
## Installer
1. [ ] Check that installer is close to the size of last release.
2. [ ] Check the Brave version in About and make sure it is EXACTLY as expected.
## Data
1. [ ] Make sure that data from the last version appears in the new version OK.
2. [ ] Test that the previous version's cookies are preserved in the next version.
## Bookmarks
1. [ ] Test that creating a bookmark in the left well works
2. [ ] Test that clicking a bookmark in the left well loads the bookmark.
3. [ ] Test that deleting a bookmark in the left well works
## Context menus
1. [ ] Make sure context menu items in the URL bar work
2. [ ] Make sure context menu items on content work with no selected text.
3. [ ] Make sure context menu items on content work with selected text.
4. [ ] Make sure context menu items on content work inside an editable control (input, textarea, or contenteditable).
5. [ ] Context menu: verify you can Open in Background Tab, and Open in Private Tab
## Find on page
1. [ ] Ensure search box is shown when selected via the share menu
2. [ ] Test successful find
3. [ ] Test forward and backward find navigation
4. [ ] Test failed find shows 0 results
## Private Mode
1. [ ] Create private tab, go to http://google.com, search for 'yumyums', exit private mode, go to http://google.com search box and begin typing 'yumyums' and verify that word is not in the autocomplete list
## Reader Mode
1. [ ] Visit http://m.slashdot.org, open any article, verify the reader mode icon is shown in the URL bar
2. [ ] Verify tapping on the reader mode icon opens the article in reader mode
3. [ ] Edit reader mode settings and open different pages in reader mode and verify if the setting is retained across each article
## History
1. [ ] On youtube.com, thestar.com (or any other site using push state nav), navigate the site and verify history is added. Also note if the progress bar activates and shows progress.
2. [ ] Settings > Clear Private Data, and clear all. Check history is cleared, and top sites are cleared.
## Site hacks
1. [ ] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
1. [ ] Test that you can save an image from a site.
## Fullscreen
1. [ ] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Gestures
1. [ ] Test zoom in / out gestures work
2. [ ] Test that navigating to a different origin resets the zoom
3. [ ] Swipe back and forward to navigate, verify this works as expected
## Bravery settings
1. [ ] Check that HTTPS Everywhere works by loading http://www.apple.com
2. [ ] Turning HTTPS Everywhere off and shields off both disable the redirect to https://www.apple.com
3. [ ] Check that ad replacement works on http://slashdot.org
4. [ ] Check that toggling to blocking and allow ads works as expected.
5. [ ] Test that clicking through a cert error in https://badssl.com/ works.
6. [ ] Test that Safe Browsing works (http://downloadme.org/)
7. [ ] Turning Safe Browsing off and shields off both disable safe browsing for http://downloadme.org/.
8. [ ] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [ ] Test that preferences default Bravery settings take effect on pages with no site settings.
10. [ ] Test that turning on fingerprinting protection in preferences shows 1 fingerprint blocked at https://browserleaks.com/canvas/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
11. [ ] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/7/ when 3rd party cookies are blocked.
12. [ ] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
## Content tests
1. [ ] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
2. [ ] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
3. [ ] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [ ] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`.
5. [ ] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
6. [ ] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
7. [ ] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
8. [ ] Test that news.google.com sites open in a new tab (due to target being _blank).
## Top sites view
1. [ ] long-press on top sites to get to deletion mode, and delete a top site (note this will stop that site from showing up again on top sites, so you may not want to do this a site you want to keep there)
## Background
1. [ ] Start loading a page, background the app, wait >5 sec, then bring to front, ensure splash screen is not shown
## Session storage
1. [ ] Test that tabs restore when closed, including active tab.
| 1.0 | Manual tests for iPhone 6s+ (iOS9) 1.3.2 Preview 1 - ## Pre-release check list
1. [ ] Shield stats are not properly shown on sites ([#807](https://github.com/brave/browser-ios/issues/807))
2. [ ] Search in page option should not be shown on a new tab ([#808](https://github.com/brave/browser-ios/issues/808))
3. [ ] History items are listed even with irrelevant character entered in URL bar ([#809](https://github.com/brave/browser-ios/issues/809))
4. [ ] In Private Mode URL Bar should use Dark Keyboard ([#811](https://github.com/brave/browser-ios/issues/811))
5. [ ] Editing bookmark should not allow empty URL field ([#806](https://github.com/brave/browser-ios/issues/806))
6. [ ] Opening a bookmark doesn't show the bookmark indicator(Orange Star) ([#805](https://github.com/brave/browser-ios/issues/805))
7. [ ] [Beta]1Password button shown by default even when 3rd party password manager is set to don't use ([#535](https://github.com/brave/browser-ios/issues/535))
8. [ ] Tab restore bug: current tab not restoring, background tabs seem ok ([#695](https://github.com/brave/browser-ios/issues/695))
9. [ ] Folder title is shown only after tapping on Done ([#778](https://github.com/brave/browser-ios/issues/778))
10. [ ] Top Sites: is based on page visits, not domain visits, see if this can be changed ([#446](https://github.com/brave/browser-ios/issues/446))
11. [ ] Wrong favicons shown on top site tiles bug needs investigation ([#737](https://github.com/brave/browser-ios/issues/737))
12. [ ] Scroll gesture can trigger navigation ([#741](https://github.com/brave/browser-ios/issues/741))
13. [ ] Unable to delete added bookmark bug ([#766](https://github.com/brave/browser-ios/issues/766))
14. [ ] Add Safari-like Find in Page enhancement ([#804](https://github.com/brave/browser-ios/issues/804))
15. [ ] HTTPSE issue with http://journals.plos.org/ ([#775](https://github.com/brave/browser-ios/issues/775))
16. [ ] Unable to save Bookmarklet for Unmark bookmark manager (e.g. unable to edit URL on bookmark) ([#728](https://github.com/brave/browser-ios/issues/728))
17. [ ] Disabling Private Browsing Only still creates new private tabs enhancement ([#439](https://github.com/brave/browser-ios/issues/439))
18. [ ] Have default Favicon Icon ([#797](https://github.com/brave/browser-ios/issues/797))
19. [ ] Always Show "New Folder" Option ([#798](https://github.com/brave/browser-ios/issues/798))
20. [ ] Allow Folders with Records to be Deleted ([#800](https://github.com/brave/browser-ios/issues/800))
21. [ ] Report a bug opens in normal tab even when private browsing only is enabled bug ([#779](https://github.com/brave/browser-ios/issues/779))
22. [ ] Stay in Private mode, make it a persistent app state enhancement ([#319](https://github.com/brave/browser-ios/issues/319))
23. [ ] Minimize Telemetry Events ([#795](https://github.com/brave/browser-ios/issues/795))
24. [ ] Support section missing from the new beta build bug ([#792](https://github.com/brave/browser-ios/issues/792))
25. [ ] Tab Restoration Fails ([#788](https://github.com/brave/browser-ios/issues/788))
26. [ ] Buggy transition of newtab on app launch ([#635](https://github.com/brave/browser-ios/issues/635))
27. [ ] Buggy transition to top sites screen (stats show then hide, fonts change for final layout) ([#627](https://github.com/brave/browser-ios/issues/627))
28. [ ] Unable to access current page after not using for a while ([#644](https://github.com/brave/browser-ios/issues/644))
29. [ ] Add Iran ad blocking filter ([#758](https://github.com/brave/browser-ios/issues/758))
30. [ ] Browser crashes when trying to move bookmark from root folder to empty titled bookmark folder bug crash ([#771](https://github.com/brave/browser-ios/issues/771))
31. [ ] Renaming a bookmark folder behaves like a bookmark QA/Steps-specified ([#777](https://github.com/brave/browser-ios/issues/777))
32. [ ] Tab count has strange animation before loading the tabs ([#774](https://github.com/brave/browser-ios/issues/774))
33. [ ] Removing bookmark folder name causes it to show as root folder when trying to move a bookmark ([#770](https://github.com/brave/browser-ios/issues/770))
34. [ ] Opening an empty-title folder shows Bookmarks as the title enhancement ([#769](https://github.com/brave/browser-ios/issues/769))
35. [ ] Keep bookmark icon color consistent when navigating between settings and webview suggestion UX ([#688](https://github.com/brave/browser-ios/issues/688))
36. [ ] iTunes app details needs to update support information ([#742](https://github.com/brave/browser-ios/issues/742))
37. [ ] Support bitwarden password manager autofill app extension ([#743](https://github.com/brave/browser-ios/issues/743))
38. [ ] Folder names cannot be edited to have space-only titles ([#705](https://github.com/brave/browser-ios/issues/705))
39. [ ] Tabs bar covers the top part of webpage ([#729](https://github.com/brave/browser-ios/issues/729))
## Installer
1. [ ] Check that installer is close to the size of last release.
2. [ ] Check the Brave version in About and make sure it is EXACTLY as expected.
## Data
1. [ ] Make sure that data from the last version appears in the new version OK.
2. [ ] Test that the previous version's cookies are preserved in the next version.
## Bookmarks
1. [ ] Test that creating a bookmark in the left well works
2. [ ] Test that clicking a bookmark in the left well loads the bookmark.
3. [ ] Test that deleting a bookmark in the left well works
## Context menus
1. [ ] Make sure context menu items in the URL bar work
2. [ ] Make sure context menu items on content work with no selected text.
3. [ ] Make sure context menu items on content work with selected text.
4. [ ] Make sure context menu items on content work inside an editable control (input, textarea, or contenteditable).
5. [ ] Context menu: verify you can Open in Background Tab, and Open in Private Tab
## Find on page
1. [ ] Ensure search box is shown when selected via the share menu
2. [ ] Test successful find
3. [ ] Test forward and backward find navigation
4. [ ] Test failed find shows 0 results
## Private Mode
1. [ ] Create private tab, go to http://google.com, search for 'yumyums', exit private mode, go to http://google.com search box and begin typing 'yumyums' and verify that word is not in the autocomplete list
## Reader Mode
1. [ ] Visit http://m.slashdot.org, open any article, verify the reader mode icon is shown in the URL bar
2. [ ] Verify tapping on the reader mode icon opens the article in reader mode
3. [ ] Edit reader mode settings and open different pages in reader mode and verify if the setting is retained across each article
## History
1. [ ] On youtube.com, thestar.com (or any other site using push state nav), navigate the site and verify history is added. Also note if the progress bar activates and shows progress.
2. [ ] Settings > Clear Private Data, and clear all. Check history is cleared, and top sites are cleared.
## Site hacks
1. [ ] Test https://www.twitch.tv/adobe sub-page loads a video and you can play it
## Downloads
1. [ ] Test that you can save an image from a site.
## Fullscreen
1. [ ] Test that entering HTML5 full screen works. And Esc to go back. (youtube.com)
## Gestures
1. [ ] Test zoom in / out gestures work
2. [ ] Test that navigating to a different origin resets the zoom
3. [ ] Swipe back and forward to navigate, verify this works as expected
## Bravery settings
1. [ ] Check that HTTPS Everywhere works by loading http://www.apple.com
2. [ ] Turning HTTPS Everywhere off and shields off both disable the redirect to https://www.apple.com
3. [ ] Check that ad replacement works on http://slashdot.org
4. [ ] Check that toggling to blocking and allow ads works as expected.
5. [ ] Test that clicking through a cert error in https://badssl.com/ works.
6. [ ] Test that Safe Browsing works (http://downloadme.org/)
7. [ ] Turning Safe Browsing off and shields off both disable safe browsing for http://downloadme.org/.
8. [ ] Visit https://brianbondy.com/ and then turn on script blocking, nothing should load. Allow it from the script blocking UI in the URL bar and it should work.
9. [ ] Test that preferences default Bravery settings take effect on pages with no site settings.
10. [ ] Test that turning on fingerprinting protection in preferences shows 1 fingerprint blocked at https://browserleaks.com/canvas/. Test that turning it off in the Bravery menu shows 0 fingerprints blocked.
11. [ ] Test that 3rd party storage results are blank at https://jsfiddle.net/7ke9r14a/7/ when 3rd party cookies are blocked.
12. [ ] Test that audio fingerprint is blocked at https://audiofingerprint.openwpm.com/ when fingerprinting protection is on.
## Content tests
1. [ ] Go to https://brianbondy.com/ and click on the twitter icon on the top right. Test that context menus work in the new twitter tab.
2. [ ] Load twitter and click on a tweet so the popup div shows. Click to dismiss and repeat with another div. Make sure it shows.
3. [ ] Go to http://www.bennish.net/web-notifications.html and test that clicking on 'Show' pops up a notification asking for permission. Make sure that clicking 'Deny' leads to no notifications being shown.
4. [ ] Go to https://trac.torproject.org/projects/tor/login and make sure that the password can be saved. Make sure the saved password shows up in `about:passwords`.
5. [ ] Open an email on http://mail.google.com/ or inbox.google.com and click on a link. Make sure it works.
6. [ ] Test that PDF is loaded at http://www.orimi.com/pdf-test.pdf
7. [ ] Test that https://mixed-script.badssl.com/ shows up as grey not red (no mixed content scripts are run).
8. [ ] Test that news.google.com sites open in a new tab (due to target being _blank).
## Top sites view
1. [ ] long-press on top sites to get to deletion mode, and delete a top site (note this will stop that site from showing up again on top sites, so you may not want to do this a site you want to keep there)
## Background
1. [ ] Start loading a page, background the app, wait >5 sec, then bring to front, ensure splash screen is not shown
## Session storage
1. [ ] Test that tabs restore when closed, including active tab.
| non_defect | manual tests for iphone preview pre release check list shield stats are not properly shown on sites search in page option should not be shown on a new tab history items are listed even with irrelevant character entered in url bar in private mode url bar should use dark keyboard editing bookmark should not allow empty url field opening a bookmark doesn t show the bookmark indicator orange star button shown by default even when party password manager is set to don t use tab restore bug current tab not restoring background tabs seem ok folder title is shown only after tapping on done top sites is based on page visits not domain visits see if this can be changed wrong favicons shown on top site tiles bug needs investigation scroll gesture can trigger navigation unable to delete added bookmark bug add safari like find in page enhancement httpse issue with unable to save bookmarklet for unmark bookmark manager e g unable to edit url on bookmark disabling private browsing only still creates new private tabs enhancement have default favicon icon always show new folder option allow folders with records to be deleted report a bug opens in normal tab even when private browsing only is enabled bug stay in private mode make it a persistent app state enhancement minimize telemetry events support section missing from the new beta build bug tab restoration fails buggy transition of newtab on app launch buggy transition to top sites screen stats show then hide fonts change for final layout unable to access current page after not using for a while add iran add blocking filter browser crashes when trying to move bookmark from root folder to empty titled bookmark folder bug crash renaming a bookmark folder behaves like a bookmark qa steps specified tab count has strange animation before loading the tabs removing bookmark folder name causes it to show as root folder when trying to move a boomark opening a empty title folder shows bookmarks as the title enhancemen keep 
bookmark icon color consistent when navigating between settings and webview suggestion ux itunes app details needs to update support information support bitwarden password manager autofill app extension folder names cannot be edited to have space only titles tabs bar covers the top part of webpage installer check that installer is close to the size of last release check the brave version in about and make sure it is exactly as expected data make sure that data from the last version appears in the new version ok test that the previous version s cookies are preserved in the next version bookmarks test that creating a bookmark in the left well works test that clicking a bookmark in the left well loads the bookmark test that deleting a bookmark in the left well works context menus make sure context menu items in the url bar work make sure context menu items on content work with no selected text make sure context menu items on content work with selected text make sure context menu items on content work inside an editable control input textarea or contenteditable context menu verify you can open in background tab and open in private tab find on page ensure search box is shown when selected via the share menu test successful find test forward and backward find navigation test failed find shows results private mode create private tab go to search for yumyums exit private mode go to search box and begin typing yumyums and verify that word is not in the autocomplete list reader mode visit open any article verify the reader mode icon is shown in the url bar verify tapping on the reader mode icon opens the article in reader mode edit reader mode settings and open different pages in reader mode and verify if the setting is retained across each article history on youtube com thestar com or any other site using push state nav navigate the site and verify history is added also note if the progress bar activates and shows progress settings clear private data and clear all check 
history is cleared and top sites are cleared site hacks test sub page loads a video and you can play it downloads test that you can save an image from a site fullscreen test that entering full screen works and esc to go back youtube com gestures test zoom in out gestures work test that navigating to a different origin resets the zoom swipe back and forward to navigate verify this works as expected bravery settings check that https everywhere works by loading turning https everywhere off and shields off both disable the redirect to check that ad replacement works on check that toggling to blocking and allow ads works as expected test that clicking through a cert error in works test that safe browsing works turning safe browsing off and shields off both disable safe browsing for visit and then turn on script blocking nothing should load allow it from the script blocking ui in the url bar and it should work test that preferences default bravery settings take effect on pages with no site settings test that turning on fingerprinting protection in preferences shows fingerprint blocked at test that turning it off in the bravery menu shows fingerprints blocked test that party storage results are blank at when party cookies are blocked test that audio fingerprint is blocked at when fingerprinting protection is on content tests go to and click on the twitter icon on the top right test that context menus work in the new twitter tab load twitter and click on a tweet so the popup div shows click to dismiss and repeat with another div make sure it shows go to and test that clicking on show pops up a notification asking for permission make sure that clicking deny leads to no notifications being shown go to and make sure that the password can be saved make sure the saved password shows up in about passwords open an email on or inbox google com and click on a link make sure it works test that pdf is loaded at test that shows up as grey not red no mixed content scripts are run test 
that news google com sites open in a new tab due to target being blank top sites view long press on top sites to get to deletion mode and delete a top site note this will stop that site from showing up again on top sites so you may not want to do this a site you want to keep there background start loading a page background the app wait sec then bring to front ensure splash screen is not shown session storage test that tabs restore when closed including active tab | 0 |
246,617 | 26,611,896,318 | IssuesEvent | 2023-01-24 01:18:53 | JMD60260/llsolidaires | https://api.github.com/repos/JMD60260/llsolidaires | opened | CVE-2022-44566 (High) detected in activerecord-6.0.2.1.gem | security vulnerability | ## CVE-2022-44566 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>activerecord-6.0.2.1.gem</b></p></summary>
<p>Databases on Rails. Build a persistent domain model by mapping database tables to Ruby classes. Strong conventions for associations, validations, aggregations, migrations, and testing come baked-in.</p>
<p>Library home page: <a href="https://rubygems.org/gems/activerecord-6.0.2.1.gem">https://rubygems.org/gems/activerecord-6.0.2.1.gem</a></p>
<p>
Dependency Hierarchy:
- rails-6.0.2.1.gem (Root Library)
- :x: **activerecord-6.0.2.1.gem** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In ActiveRecord <7.0.4.1 and <6.1.7.1, when a value outside the range for a 64bit signed integer is provided to the PostgreSQL connection adapter, it will treat the target column type as numeric. Comparing integer values against numeric values can result in a slow sequential scan resulting in potential Denial of Service.
<p>Publish Date: 2022-11-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-44566>CVE-2022-44566</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-579w-22j4-4749">https://github.com/advisories/GHSA-579w-22j4-4749</a></p>
<p>Release Date: 2022-11-02</p>
<p>Fix Resolution: activerecord - 6.1.7.1,7.0.4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-44566 (High) detected in activerecord-6.0.2.1.gem - ## CVE-2022-44566 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>activerecord-6.0.2.1.gem</b></p></summary>
<p>Databases on Rails. Build a persistent domain model by mapping database tables to Ruby classes. Strong conventions for associations, validations, aggregations, migrations, and testing come baked-in.</p>
<p>Library home page: <a href="https://rubygems.org/gems/activerecord-6.0.2.1.gem">https://rubygems.org/gems/activerecord-6.0.2.1.gem</a></p>
<p>
Dependency Hierarchy:
- rails-6.0.2.1.gem (Root Library)
- :x: **activerecord-6.0.2.1.gem** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In ActiveRecord <7.0.4.1 and <6.1.7.1, when a value outside the range for a 64bit signed integer is provided to the PostgreSQL connection adapter, it will treat the target column type as numeric. Comparing integer values against numeric values can result in a slow sequential scan resulting in potential Denial of Service.
<p>Publish Date: 2022-11-02
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-44566>CVE-2022-44566</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-579w-22j4-4749">https://github.com/advisories/GHSA-579w-22j4-4749</a></p>
<p>Release Date: 2022-11-02</p>
<p>Fix Resolution: activerecord - 6.1.7.1,7.0.4.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in activerecord gem cve high severity vulnerability vulnerable library activerecord gem databases on rails build a persistent domain model by mapping database tables to ruby classes strong conventions for associations validations aggregations migrations and testing come baked in library home page a href dependency hierarchy rails gem root library x activerecord gem vulnerable library vulnerability details in activerecord and when a value outside the range for a signed integer is provided to the postgresql connection adapter it will treat the target column type as numeric comparing integer values against numeric values can result in a slow sequential scan resulting in potential denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution activerecord step up your open source security game with mend | 0 |
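The vulnerability text in the record above turns on a type detail: a bind value outside the signed 64-bit range makes the PostgreSQL adapter compare the column as `numeric` instead of `integer`, defeating the index. A hedged sketch of a pre-bind range check (names are illustrative, not ActiveRecord's API):

```python
# Illustrative guard motivated by the CVE description above: values
# outside the signed 64-bit range cannot use an integer index and are
# better rejected before they reach the database adapter.
INT64_MIN = -(2 ** 63)
INT64_MAX = 2 ** 63 - 1

def fits_int64(value):
    """Return True if `value` can be bound as a signed 64-bit integer."""
    return INT64_MIN <= value <= INT64_MAX
```

A caller could reject or clamp out-of-range identifiers before they reach a query, avoiding the slow sequential scan the advisory describes.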
73,025 | 24,411,837,547 | IssuesEvent | 2022-10-05 13:01:38 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | closed | Bug / Reply format broken when too many new lines|ws? in <mx-reply> | T-Defect | This displays correctly
`<mx-reply><blockquote><a href=\"https://matrix.to/#/!ybTPiZFJfZqybWGFeU:matrix.org/$1558973971435611GBlYs:matrix.org\">In reply to</a><a href=\"https://matrix.to/#/@valere35:matrix.org\">@valere35:matrix.org</a><br /><p>copy cou</p>\n</blockquote></mx-reply>ok
`
But not this:
`<mx-reply>\n <blockquote>\n <a href=\"https://matrix.to/#/!ybTPiZFJfZqybWGFeU:matrix.org/$1558973683434084MSYOS:matrix.org\">In reply to</a>\n <a href=\"https://matrix.to/#/@valere35:matrix.org\">@valere35:matrix.org</a>\n <br />\n <p>Bonjour</p>\n\n </blockquote>\n </mx-reply>\n salut
`
<img width="234" alt="image" src="https://user-images.githubusercontent.com/9841565/58431167-dc48ce80-80ac-11e9-9794-78b6366fd135.png">
| 1.0 | Bug / Reply format broken when too many new lines|ws? in <mx-reply> - This displays correctly
`<mx-reply><blockquote><a href=\"https://matrix.to/#/!ybTPiZFJfZqybWGFeU:matrix.org/$1558973971435611GBlYs:matrix.org\">In reply to</a><a href=\"https://matrix.to/#/@valere35:matrix.org\">@valere35:matrix.org</a><br /><p>copy cou</p>\n</blockquote></mx-reply>ok
`
But not this:
`<mx-reply>\n <blockquote>\n <a href=\"https://matrix.to/#/!ybTPiZFJfZqybWGFeU:matrix.org/$1558973683434084MSYOS:matrix.org\">In reply to</a>\n <a href=\"https://matrix.to/#/@valere35:matrix.org\">@valere35:matrix.org</a>\n <br />\n <p>Bonjour</p>\n\n </blockquote>\n </mx-reply>\n salut
`
<img width="234" alt="image" src="https://user-images.githubusercontent.com/9841565/58431167-dc48ce80-80ac-11e9-9794-78b6366fd135.png">
| defect | bug reply format broken when too many new lines ws in this displays correctly copy cou n ok but not this n n n n bonjour n n n n salut img width alt image src | 1 |
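The two payloads in the record above differ only in the whitespace and newlines between the `<mx-reply>` tags, which points at a whitespace-sensitive reply parser. A minimal sketch (an assumed normalization step, not Element's actual code) that makes both variants equivalent:

```python
import re

def collapse_intertag_whitespace(html):
    # Assumed normalization: collapse whitespace runs (including the
    # newlines shown in the second payload above) that sit between tags,
    # so the compact and the indented <mx-reply> variants become equal.
    return re.sub(r">\s+<", "><", html).strip()
```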
73 | 2,502,733,283 | IssuesEvent | 2015-01-09 12:06:35 | jOOQ/jOOQ | https://api.github.com/repos/jOOQ/jOOQ | opened | Bad caching of JPA-annotated getters leads to wrong mapping into POJOs that have @ConstructorProperties | C: Functionality P: Urgent T: Defect | The internal `Utils.getMatchingGetter()` is not cached correctly using both the `type` and `name` parameters, but only using the `name` parameter. This leads to getters being cached for all names, regardless of the target POJO type.
The workaround is to turn off caching, or to implement a custom `RecordMapperProvider` | 1.0 | Bad caching of JPA-annotated getters leads to wrong mapping into POJOs that have @ConstructorProperties - The internal `Utils.getMatchingGetter()` is not cached correctly using both the `type` and `name` parameters, but only using the `name` parameter. This leads to getters being cached for all names, regardless of the target POJO type.
The workaround is to turn off caching, or to implement a custom `RecordMapperProvider` | defect | bad caching of jpa annotated getters leads to wrong mapping into pojos that have constructorproperties the internal utils getmatchinggetter is not cached correctly using both the type and name parameters but only using the name parameter this leads to getters being cached for all names regardless of the target pojo type the workaround is to turn off caching or to implement a custom recordmapperprovider | 1 |
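The cache bug in the record above is a key that is too narrow. A hedged sketch in Python (jOOQ itself is Java; the names here are illustrative) of keying the getter cache by both target type and property name, as the fix implies:

```python
# Illustrative fix for the caching bug described above: keying the
# cache by (type, name) instead of name alone stops a getter cached
# for one POJO class from being returned for a different class.
_getter_cache = {}

def get_matching_getter(pojo_type, name):
    key = (pojo_type, name)  # including the type is the essential fix
    if key not in _getter_cache:
        _getter_cache[key] = getattr(pojo_type, "get_" + name, None)
    return _getter_cache[key]
```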
17,852 | 3,013,534,133 | IssuesEvent | 2015-07-29 09:34:30 | yawlfoundation/yawl | https://api.github.com/repos/yawlfoundation/yawl | closed | Issues with Editor engine/resource svc connections (patch included) | auto-migrated Priority-Medium Type-Defect | ```
The Editor (since 2.1) handles lost and regained engine/RS connections. There
is one small bug and one bug/enhancement.
(i) The Resource Service connections dialog has the password blank instead of
yEditor (clear any stored user prefs first to see this: in Linux, these are in
~/.java/.userPrefs).
This was because SetResourcingServiceAction was using an empty string equals
test instead of a zero length char[] one (see comparable code in
ManageResourcingAction).
Diff attached.
(ii) The online/offline indicators for the engine and resource service
currently reflect whether the services are up/reachable, *not* whether the
Editor has valid connections to them. The latter seems more sensible and less
confusing. (In fact, I only realised after coding a 'fix' that this might be
the designed behaviour; it just didn't make sense to see green icons when there
is no connection. Of course, due to the default editor user setup, the two
states are normally synonymous, but aren't if the user preferences or control
panel settings are changed. I understand if this is considered 'as designed',
but the fix seems robust and relatively simple, so I hope you can consider it,
especially as it took me ages to understand the code!)
I attach a ZIP file with a separate patch to correct this (also includes fix
for (i) above): this contains the revised source and a diff. Note that
JConnectionStatus could also be changed if the terms 'online' and 'offline'
need to change, but these seem fine to me (i.e. could imply either an
established connection or that the service/engine was up).
```
Original issue reported on code.google.com by `monsieur...@gmail.com` on 23 Jan 2011 at 1:55
Attachments:
* [SetResourcingServiceAction_r1647.diff](https://storage.googleapis.com/google-code-attachments/yawl/issue-412/comment-0/SetResourcingServiceAction_r1647.diff)
* [OfflineChange.zip](https://storage.googleapis.com/google-code-attachments/yawl/issue-412/comment-0/OfflineChange.zip)
| 1.0 | Issues with Editor engine/resource svc connections (patch included) - ```
The Editor (since 2.1) handles lost and regained engine/RS connections. There
is one small bug and one bug/enhancement.
(i) The Resource Service connections dialog has the password blank instead of
yEditor (clear any stored user prefs first to see this: in Linux, these are in
~/.java/.userPrefs).
This was because SetResourcingServiceAction was using an empty string equals
test instead of a zero length char[] one (see comparable code in
ManageResourcingAction).
Diff attached.
(ii) The online/offline indicators for the engine and resource service
currently reflect whether the services are up/reachable, *not* whether the
Editor has valid connections to them. The latter seems more sensible and less
confusing. (In fact, I only realised after coding a 'fix' that this might be
the designed behaviour; it just didn't make sense to see green icons when there
is no connection. Of course, due to the default editor user setup, the two
states are normally synonymous, but aren't if the user preferences or control
panel settings are changed. I understand if this is considered 'as designed',
but the fix seems robust and relatively simple, so I hope you can consider it,
especially as it took me ages to understand the code!)
I attach a ZIP file with a separate patch to correct this (also includes fix
for (i) above): this contains the revised source and a diff. Note that
JConnectionStatus could also be changed if the terms 'online' and 'offline'
need to change, but these seem fine to me (i.e. could imply either an
established connection or that the service/engine was up).
```
Original issue reported on code.google.com by `monsieur...@gmail.com` on 23 Jan 2011 at 1:55
Attachments:
* [SetResourcingServiceAction_r1647.diff](https://storage.googleapis.com/google-code-attachments/yawl/issue-412/comment-0/SetResourcingServiceAction_r1647.diff)
* [OfflineChange.zip](https://storage.googleapis.com/google-code-attachments/yawl/issue-412/comment-0/OfflineChange.zip)
| defect | issues with editor engine resource svc connections patch included the editor since handles lost and regained engine rs connections there is one small bug and one bug enhancement i the resource service connections dialog has the password blank instead of yeditor clear any stored user prefs first to see this in linux these are in java userprefs this was because setresourcingserviceaction was using an empty string equals test instead of a zero length char one see comparable code in manageresourcingaction diff attached ii the online offline indicators for the engine and resource service currently reflect whether the services are up reachable not whether the editor has valid connections to them the latter seems more sensible and less confusing in fact i only realised after coding a fix that this might be the designed behaviour it just didn t make sense to see green icons when there is no connection of course due to the default editor user setup the two states are normally synonymous but aren t if the user preferences or control panel settings are changed i understand if this is considered as designed but the fix seems robust and relatively simple so i hope you can consider it especially as it took me ages to understand the code i attach a zip file with a separate patch to correct this also includes fix for i above this contains the revised source and a diff note that jconnectionstatus could also be changed if the terms online and offline need to change but these seem fine to me i e could imply either an established connection or that the service engine was up original issue reported on code google com by monsieur gmail com on jan at attachments | 1 |
108,751 | 11,610,012,803 | IssuesEvent | 2020-02-26 01:42:50 | machineagency/jubilee | https://api.github.com/repos/machineagency/jubilee | opened | Toolchanger Lock Actuator Assembly 7 of 15 | assembly_instructions documentation | Toolchanger Lock Actuator Assembly sheet 7 of 15
Make it more obvious how the wire harness is assembled, perhaps with a diagram that calls special attention to the orientation of the switches and the wire routing.
Also, replace the photo with a new one of the completely soldered wire harness.

| 1.0 | Toolchanger Lock Actuator Assembly 7 of 15 - Toolchanger Lock Actuator Assembly sheet 7 of 15
Make it more obvious how the wire harness is assembled. Maybe a diagram with special attention called to the orientation of the switches and how the wires go.
Also, replace the photo with new one of the completely soldered wire harness.

| non_defect | toolchanger lock actuator assembly of toolchanger lock actuator assembly sheet of make it more obvious how the wire harness is assembled maybe a diagram with special attention called to the orientation of the switches and how the wires go also replace the photo with new one of the completely soldered wire harness | 0 |
62,555 | 3,188,924,162 | IssuesEvent | 2015-09-29 01:00:09 | onyxfish/agate | https://api.github.com/repos/onyxfish/agate | closed | There is no way to use TypeTester for data created in Python | feature priority-low | Because we can't specify just column names to the constructor | 1.0 | There is no way to use TypeTester for data created in Python - Because we can't specify just column names to the constructor | non_defect | there is no way to use typetester for data created in python because we can t specify just column names to the constructor | 0 |
659,814 | 21,942,411,746 | IssuesEvent | 2022-05-23 19:36:11 | disorderedmaterials/dissolve | https://api.github.com/repos/disorderedmaterials/dissolve | closed | Epic / 0.7 / Data Management | Priority: High | ### Focus
Improved handling / management of generated data from within the GUI
### Tasks
- [x] #139 Protect data access by Renderables when simulation thread is active
- [x] #149 Clear specific Module data
- [x] #113 Module management should also manage associated data | 1.0 | Epic / 0.7 / Data Management - ### Focus
Improved handling / management of generated data from within the GUI
### Tasks
- [x] #139 Protect data access by Renderables when simulation thread is active
- [x] #149 Clear specific Module data
- [x] #113 Module management should also manage associated data | non_defect | epic data management focus improved handling management of generated data from within the gui tasks protect data access by renderables when simulation thread is active clear specific module data module management should also manage associated data | 0 |
446,229 | 12,842,511,948 | IssuesEvent | 2020-07-08 02:13:26 | MatthewSpofford/Multiscale-Statistical-Analysis | https://api.github.com/repos/MatthewSpofford/Multiscale-Statistical-Analysis | closed | Divide by zero encountered in double_scalars for StatsTestUI | bug help wanted in progress priority high | This error took place when running an F-test analysis on all four sample files.
```
C:\workspace\MultiscaleStatisticalAnalysis\StatsTestsUI.py:285: RuntimeWarning: divide by zero encountered in double_scalars
f_val = np.round_(var[1] / var[0], 4)
``` | 1.0 | Divide by zero encountered in double_scalars for StatsTestUI - This error took place when running an F-test analysis on all four sample files.
```
C:\workspace\MultiscaleStatisticalAnalysis\StatsTestsUI.py:285: RuntimeWarning: divide by zero encountered in double_scalars
f_val = np.round_(var[1] / var[0], 4)
``` | non_defect | divide by zero encountered in double scalars for statstestui this error took place when running a f test analysis on all four sample files c workspace multiscalestatisticalanalysis statstestsui py runtimewarning divide by zero encountered in double scalars f val np round var var | 0 |
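The warning in the record above fires because `var[0]`, the denominator variance, is zero. A hedged plain-Python sketch (not the project's actual NumPy call) of guarding that ratio:

```python
def f_ratio(var_denom, var_num):
    # Hypothetical guard for the ratio from StatsTestsUI.py:285 quoted
    # above (var[1] / var[0]): a zero denominator variance would
    # otherwise trigger the divide-by-zero warning.
    if var_denom == 0:
        return float("inf") if var_num > 0 else float("nan")
    return round(var_num / var_denom, 4)
```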
635,988 | 20,516,709,701 | IssuesEvent | 2022-03-01 12:32:59 | eclipse/dirigible | https://api.github.com/repos/eclipse/dirigible | closed | [Releng] Missing destination configuration to be warning | bug component-core priority-low | Issue cloned from https://github.com/SAP/xsk/issues/1197
### Background
Currently, when deployed to Kyma without a destination service binding, an exception is thrown - https://github.com/eclipse/dirigible/blob/master/releng/sap-kyma-base/src/main/java/org/eclipse/dirigible/kyma/KymaModule.java#L102
The destination service is an optional dependency, so a missing configuration should produce a warning rather than an exception.
### Target
Change the logger message to a warning and continue without setting the destination configuration | 1.0 | [Releng] Missing destination configuration to be warning - Issue cloned from https://github.com/SAP/xsk/issues/1197
### Background
Currently, when deployed to Kyma without destination service binding an exception is thrown - https://github.com/eclipse/dirigible/blob/master/releng/sap-kyma-base/src/main/java/org/eclipse/dirigible/kyma/KymaModule.java#L102
Destination service is optional dependency and as such not finding the configuration should be a warning and not an exception.
### Target
Change the logger message to warning and continue do not set destination configuration | non_defect | missing destination configuration to be warning issue cloned from background currently when deployed to kyma without destination service binding an exception is thrown destination service is optional dependency and as such not finding the configuration should be a warning and not an exception target change the logger message to warning and continue do not set destination configuration | 0 |
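The target in the record above is to downgrade a missing optional binding from an exception to a warning. An illustrative Python stand-in (the real module is Java, in KymaModule.java; the environment key here is made up):

```python
import logging

logger = logging.getLogger("kyma-module")

def read_destination_config(env):
    # Mirrors the target behaviour described above: the destination
    # service is optional, so a missing binding logs a warning and
    # leaves the configuration unset instead of raising.
    if "DESTINATION_URL" not in env:  # hypothetical key, for illustration
        logger.warning("Destination service binding not found; continuing without it")
        return None
    return env["DESTINATION_URL"]
```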
337,190 | 24,529,884,775 | IssuesEvent | 2022-10-11 15:36:51 | deephaven/deephaven-core | https://api.github.com/repos/deephaven/deephaven-core | opened | BarrageStreamGenerator: spice up with javadocs | documentation barrage NoDocumentationNeeded | The BarrageStreamGenerator impl is both tricky and hard to recall. Javadocs would greatly reduce the cognitive overhead when debugging or reviewing changes. | 2.0 | BarrageStreamGenerator: spice up with javadocs - The BarrageStreamGenerator impl is both tricky and hard to recall. Javadocs would greatly reduce the cognitive overhead when debugging or reviewing changes. | non_defect | barragestreamgenerator spice up with javadocs the barragestreamgenerator impl is both tricky and hard to recall javadocs would greatly reduce the cognitive overhead when debugging or reviewing changes | 0 |
33,422 | 9,125,290,997 | IssuesEvent | 2019-02-24 12:30:20 | cgeo/cgeo | https://api.github.com/repos/cgeo/cgeo | closed | AS 3.3 Build errors | CI server / Build tools Feedback required | ##### Detailed steps causing the problem:
- Install/Upgrade Android Studio to version 3.3
##### Actual behavior after performing these steps:
The build no longer works, possibly because of this error:
"D:\git\cgeo\main\build\intermediates\incremental\mergeBasicDebugResources\merged.dir\values\values.xml:3609: AAPT: error: resource android:attr/preserveIconSpacing is private.
error: failed linking references.
at com.android.builder.internal.aapt.v2.Aapt2Exception$Companion.create(Aapt2Exception.kt:45)
...
Caused by: com.android.builder.internal.aapt.v2.Aapt2Exception: Android resource linking failed"
##### Expected behavior after performing these steps:
Gradle sync/Build with warnings as before
##### Version of c:geo used:
current master branch
##### Is the problem reproducible for you?
Yes
##### System information:
-
##### Other comments and remarks:
I think this point in GradleSync is the reason:
"WARNING: The following project options are deprecated and have been removed:
android.enableAapt2
This property has no effect, AAPT2 is now always used.
Affected Modules: cgeo-contacts, main, mapswithme-api"
Maybe also this point:
"WARNING: Configuration 'compile' is obsolete and has been replaced with 'implementation' and 'api'.
It will be removed at the end of 2018. For more information see: http://d.android.com/r/tools/update-dependency-configurations.html
Affected Modules: main"
| 1.0 | AS 3.3 Build errors - ##### Detailed steps causing the problem:
- Install/Upgrade Android Studio to version 3.3
##### Actual behavior after performing these steps:
The build no longer works, possibly because of this error:
"D:\git\cgeo\main\build\intermediates\incremental\mergeBasicDebugResources\merged.dir\values\values.xml:3609: AAPT: error: resource android:attr/preserveIconSpacing is private.
error: failed linking references.
at com.android.builder.internal.aapt.v2.Aapt2Exception$Companion.create(Aapt2Exception.kt:45)
...
Caused by: com.android.builder.internal.aapt.v2.Aapt2Exception: Android resource linking failed"
##### Expected behavior after performing these steps:
Gradle sync/Build with warnings as before
##### Version of c:geo used:
current master branch
##### Is the problem reproducible for you?
Yes
##### System information:
-
##### Other comments and remarks:
I think this point in GradleSync is the reason:
"WARNING: The following project options are deprecated and have been removed:
android.enableAapt2
This property has no effect, AAPT2 is now always used.
Affected Modules: cgeo-contacts, main, mapswithme-api"
Maybe also this point:
"WARNING: Configuration 'compile' is obsolete and has been replaced with 'implementation' and 'api'.
It will be removed at the end of 2018. For more information see: http://d.android.com/r/tools/update-dependency-configurations.html
Affected Modules: main"
| non_defect | as build errors detailed steps causing the problem install upgrade android studio to version actual behavior after performing these steps build works not more maybe from this error d git cgeo main build intermediates incremental mergebasicdebugresources merged dir values values xml aapt error resource android attr preserveiconspacing is private error failed linking references at com android builder internal aapt companion create kt caused by com android builder internal aapt android resource linking failed expected behavior after performing these steps gradle sync build with warnings as before version of c geo used current master branch is the problem reproducible for you yes system information other comments and remarks i think this point in gradlesync is the reason warning the following project options are deprecated and have been removed android this property has no effect is now always used affected modules cgeo contacts main mapswithme api maybe also this point warning configuration compile is obsolete and has been replaced with implementation and api it will be removed at the end of for more information see affected modules main | 0 |
45,201 | 12,651,340,822 | IssuesEvent | 2020-06-17 00:00:46 | naev/naev | https://api.github.com/repos/naev/naev | closed | Crash on startup | Priority-Medium Type-Defect | Original [issue 207](https://code.google.com/p/naev/issues/detail?id=207) created by k0s_J532... on 2012-09-17T12:32:44.000Z:
Naev v0.6.0
git HEAD at naev-0.5.3-473-ga59929c
BFD backtrace catching enabled.
Sea of Darkness
SDL: 1.2.15 [compiled: 1.2.15]
ERROR opengl.c:462 [gl_createWindow]: Unable to create OpenGL window: Could not create GL context
Naev received SIGABRT!
naev() [0x80acbb5] music_mix_setPos(...):0 ??
linux-gate.so.1(__kernel_rt_sigreturn+0) [0xb77aa40c] ??(...):0 ??
linux-gate.so.1(__kernel_vsyscall+0x10) [0xb77aa424] ??(...):0 ??
/usr/lib/libc.so.6(gsignal+0x4f) [0xb714f5df] ??(...):0 ??
/usr/lib/libc.so.6(abort+0x143) [0xb7150ec3] ??(...):0 ??
naev(gl_init+0x9f4) [0x80ca3d4] gl_init(...):0 ??
naev(main+0x357) [0x8061fb7] main(...):0 ??
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0xb713a605] ??(...):0 ??
naev() [0x8062c6d] _start(...):0 ??
Report this to project maintainer with the backtrace.
After updating git build I get this message on every startup.
Naev 0.6.0 (git build from git://github.com/BTAxis/naev).
Arch Linux
xorg-server 1.12.4-1
nvidia-drivers 173.14.35
| 1.0 | Crash on startup - Original [issue 207](https://code.google.com/p/naev/issues/detail?id=207) created by k0s_J532... on 2012-09-17T12:32:44.000Z:
Naev v0.6.0
git HEAD at naev-0.5.3-473-ga59929c
BFD backtrace catching enabled.
Sea of Darkness
SDL: 1.2.15 [compiled: 1.2.15]
ERROR opengl.c:462 [gl_createWindow]: Unable to create OpenGL window: Could not create GL context
Naev received SIGABRT!
naev() [0x80acbb5] music_mix_setPos(...):0 ??
linux-gate.so.1(__kernel_rt_sigreturn+0) [0xb77aa40c] ??(...):0 ??
linux-gate.so.1(__kernel_vsyscall+0x10) [0xb77aa424] ??(...):0 ??
/usr/lib/libc.so.6(gsignal+0x4f) [0xb714f5df] ??(...):0 ??
/usr/lib/libc.so.6(abort+0x143) [0xb7150ec3] ??(...):0 ??
naev(gl_init+0x9f4) [0x80ca3d4] gl_init(...):0 ??
naev(main+0x357) [0x8061fb7] main(...):0 ??
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0xb713a605] ??(...):0 ??
naev() [0x8062c6d] _start(...):0 ??
Report this to project maintainer with the backtrace.
After updating git build I get this message on every startup.
Naev 0.6.0 (git build from git://github.com/BTAxis/naev).
Arch Linux
xorg-server 1.12.4-1
nvidia-drivers 173.14.35
| defect | crash on startup original created by on naev git head at naev bfd backtrace catching enabled sea of darkness sdl error opengl c unable to create opengl window could not create gl context naev received sigabrt naev music mix setpos linux gate so kernel rt sigreturn linux gate so kernel vsyscall usr lib libc so gsignal usr lib libc so abort naev gl init gl init naev main main usr lib libc so libc start main naev start report this to project maintainer with the backtrace after updating git build i get this message on every startup naev git build from git github com btaxis naev arch linux xorg server nvidia drivers | 1 |
227,337 | 25,043,811,911 | IssuesEvent | 2022-11-05 02:00:27 | DataBiosphere/azul | https://api.github.com/repos/DataBiosphere/azul | opened | EC2 instances should be managed by AWS Systems Manager | orange securityhub severity:medium | {
"GeneratorIds": [
"aws-foundational-security-best-practices/v/1.0.0/SSM.1"
]
}
Instance:
- azul-gitlab
---
- [ ] Security design review completed; the Resolution of this issue does **not** …
- [ ] … affect authentication; for example:
- OAuth 2.0 with the application (API or Swagger UI)
- Authentication of developers with Google Cloud APIs
- Authentication of developers with AWS APIs
- Authentication with a GitLab instance in the system
- Password and 2FA authentication with GitHub
- API access token authentication with GitHub
- Authentication with
- [ ] … affect the permissions of internal users like access to
- Cloud resources on AWS and GCP
- GitLab repositories, projects and groups, administration
- an EC2 instance via SSH
- GitHub issues, pull requests, commits, commit statuses, wikis, repositories, organizations
- [ ] … affect the permissions of external users like access to
- TDR snapshots
- [ ] … affect permissions of service or bot accounts
- Cloud resources on AWS and GCP
- [ ] … affect audit logging in the system, like
- adding, removing or changing a log message that represents an auditable event
- changing the routing of log messages through the system
- [ ] … affect monitoring of the system
- [ ] … introduce a new software dependency like
- Python packages on PYPI
- Command-line utilities
- Docker images
- Terraform providers
- [ ] … add an interface that exposes sensitive or confidential data at the security boundary
- [ ] … affect the encryption of data at rest
- [ ] … require persistence of sensitive or confidential data that might require encryption at rest
- [ ] … require unencrypted transmission of data within the security boundary
- [ ] … affect the network security layer; for example by
- modifying, adding or removing firewall rules
- modifying, adding or removing security groups
- changing or adding a port a service, proxy or load balancer listens on
- [ ] Documentation on any unchecked boxes is provided in comments below
| True | EC2 instances should be managed by AWS Systems Manager - {
"GeneratorIds": [
"aws-foundational-security-best-practices/v/1.0.0/SSM.1"
]
}
Instance:
- azul-gitlab
---
- [ ] Security design review completed; the Resolution of this issue does **not** …
- [ ] … affect authentication; for example:
- OAuth 2.0 with the application (API or Swagger UI)
- Authentication of developers with Google Cloud APIs
- Authentication of developers with AWS APIs
- Authentication with a GitLab instance in the system
- Password and 2FA authentication with GitHub
- API access token authentication with GitHub
- Authentication with
- [ ] … affect the permissions of internal users like access to
- Cloud resources on AWS and GCP
- GitLab repositories, projects and groups, administration
- an EC2 instance via SSH
- GitHub issues, pull requests, commits, commit statuses, wikis, repositories, organizations
- [ ] … affect the permissions of external users like access to
- TDR snapshots
- [ ] … affect permissions of service or bot accounts
- Cloud resources on AWS and GCP
- [ ] … affect audit logging in the system, like
- adding, removing or changing a log message that represents an auditable event
- changing the routing of log messages through the system
- [ ] … affect monitoring of the system
- [ ] … introduce a new software dependency like
- Python packages on PYPI
- Command-line utilities
- Docker images
- Terraform providers
- [ ] … add an interface that exposes sensitive or confidential data at the security boundary
- [ ] … affect the encryption of data at rest
- [ ] … require persistence of sensitive or confidential data that might require encryption at rest
- [ ] … require unencrypted transmission of data within the security boundary
- [ ] … affect the network security layer; for example by
- modifying, adding or removing firewall rules
- modifying, adding or removing security groups
- changing or adding a port a service, proxy or load balancer listens on
- [ ] Documentation on any unchecked boxes is provided in comments below
| non_defect | instances should be managed by aws systems manager generatorids aws foundational security best practices v ssm instance azul gitlab security design review completed the resolution of this issue does not … … affect authentication for example oauth with the application api or swagger ui authentication of developers with google cloud apis authentication of developers with aws apis authentication with a gitlab instance in the system password and authentication with github api access token authentication with github authentication with … affect the permissions of internal users like access to cloud resources on aws and gcp gitlab repositories projects and groups administration an instance via ssh github issues pull requests commits commit statuses wikis repositories organizations … affect the permissions of external users like access to tdr snapshots … affect permissions of service or bot accounts cloud resources on aws and gcp … affect audit logging in the system like adding removing or changing a log message that represents an auditable event changing the routing of log messages through the system … affect monitoring of the system … introduce a new software dependency like python packages on pypi command line utilities docker images terraform providers … add an interface that exposes sensitive or confidential data at the security boundary … affect the encryption of data at rest … require persistence of sensitive or confidential data that might require encryption at rest … require unencrypted transmission of data within the security boundary … affect the network security layer for example by modifying adding or removing firewall rules modifying adding or removing security groups changing or adding a port a service proxy or load balancer listens on documentation on any unchecked boxes is provided in comments below | 0 |
457,925 | 13,165,170,950 | IssuesEvent | 2020-08-11 05:56:42 | redhat-developer/vscode-openshift-tools | https://api.github.com/repos/redhat-developer/vscode-openshift-tools | closed | Create URL fails | kind/bug priority/blocker | Creating a URL for a component fails with the following error:
```
Failed to create URL 'url' for component 'component'. Cannot read property 'length' of undefined
```
But it looks like some configuration is applied, pushing the component actually creates the endpoint. No URL object is present in the explorer, though.
```
Applying URL changes
✓ URL url: https://url-app-test.apps.ci-ln-nwfrs8t-f76d1.origin-ci-int-gce.dev.openshift.com created
``` | 1.0 | Create URL fails - Creating a URL for a component fails with the following error:
```
Failed to create URL 'url' for component 'component'. Cannot read property 'length' of undefined
```
But it looks like some configuration is applied, pushing the component actually creates the endpoint. No URL object is present in the explorer, though.
```
Applying URL changes
✓ URL url: https://url-app-test.apps.ci-ln-nwfrs8t-f76d1.origin-ci-int-gce.dev.openshift.com created
``` | non_defect | create url fails creating a url for a component fails with the following error failed to create url url for component component cannot read property length of undefined but it looks like some configuration is applied pushing the component actually creates the endpoint no url object is present in the explorer though applying url changes ✓ url url created | 0 |
122,534 | 17,703,947,417 | IssuesEvent | 2021-08-25 04:10:05 | Chiencc/Sample_Webgoat | https://api.github.com/repos/Chiencc/Sample_Webgoat | closed | CVE-2014-3596 (Medium) detected in axis-1.4.jar - autoclosed | security vulnerability | ## CVE-2014-3596 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axis-1.4.jar</b></p></summary>
<p>An implementation of the SOAP ("Simple Object Access Protocol") submission to W3C.</p>
<p>Library home page: <a href="http://ws.apache.org/axis">http://ws.apache.org/axis</a></p>
<p>Path to dependency file: Sample_Webgoat_depth_0/SourceCode/webgoat-standalone/webgoat-standalone/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/axis/axis/1.4/axis-1.4.jar</p>
<p>
Dependency Hierarchy:
- webgoat-container-7.1.jar (Root Library)
- :x: **axis-1.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Chiencc/Sample_Webgoat/commit/8c8daafbebc152c1aabb39157cf71791044ee1af">8c8daafbebc152c1aabb39157cf71791044ee1af</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The getCN function in Apache Axis 1.4 and earlier does not properly verify that the server hostname matches a domain name in the subject's Common Name (CN) or subjectAltName field of the X.509 certificate, which allows man-in-the-middle attackers to spoof SSL servers via a certificate with a subject that specifies a common name in a field that is not the CN field. NOTE: this issue exists because of an incomplete fix for CVE-2012-5784.
<p>Publish Date: 2014-08-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-3596>CVE-2014-3596</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://xforce.iss.net/xforce/xfdb/95377">http://xforce.iss.net/xforce/xfdb/95377</a></p>
<p>Release Date: 2017-12-31</p>
<p>Fix Resolution: Refer to Apache Web site for patch, upgrade or suggested workaround information. See References.
For IBM products:
Refer to the appropriate IBM Security Bulletin for patch, upgrade or suggested workaround information. See References.
For other distributions:
Apply the appropriate update for your system.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2014-3596 (Medium) detected in axis-1.4.jar - autoclosed - ## CVE-2014-3596 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>axis-1.4.jar</b></p></summary>
<p>An implementation of the SOAP ("Simple Object Access Protocol") submission to W3C.</p>
<p>Library home page: <a href="http://ws.apache.org/axis">http://ws.apache.org/axis</a></p>
<p>Path to dependency file: Sample_Webgoat_depth_0/SourceCode/webgoat-standalone/webgoat-standalone/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/axis/axis/1.4/axis-1.4.jar</p>
<p>
Dependency Hierarchy:
- webgoat-container-7.1.jar (Root Library)
- :x: **axis-1.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Chiencc/Sample_Webgoat/commit/8c8daafbebc152c1aabb39157cf71791044ee1af">8c8daafbebc152c1aabb39157cf71791044ee1af</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The getCN function in Apache Axis 1.4 and earlier does not properly verify that the server hostname matches a domain name in the subject's Common Name (CN) or subjectAltName field of the X.509 certificate, which allows man-in-the-middle attackers to spoof SSL servers via a certificate with a subject that specifies a common name in a field that is not the CN field. NOTE: this issue exists because of an incomplete fix for CVE-2012-5784.
<p>Publish Date: 2014-08-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2014-3596>CVE-2014-3596</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: N/A
- Attack Complexity: N/A
- Privileges Required: N/A
- User Interaction: N/A
- Scope: N/A
- Impact Metrics:
- Confidentiality Impact: N/A
- Integrity Impact: N/A
- Availability Impact: N/A
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://xforce.iss.net/xforce/xfdb/95377">http://xforce.iss.net/xforce/xfdb/95377</a></p>
<p>Release Date: 2017-12-31</p>
<p>Fix Resolution: Refer to Apache Web site for patch, upgrade or suggested workaround information. See References.
For IBM products:
Refer to the appropriate IBM Security Bulletin for patch, upgrade or suggested workaround information. See References.
For other distributions:
Apply the appropriate update for your system.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in axis jar autoclosed cve medium severity vulnerability vulnerable library axis jar an implementation of the soap simple object access protocol submission to library home page a href path to dependency file sample webgoat depth sourcecode webgoat standalone webgoat standalone pom xml path to vulnerable library home wss scanner repository axis axis axis jar dependency hierarchy webgoat container jar root library x axis jar vulnerable library found in head commit a href found in base branch master vulnerability details the getcn function in apache axis and earlier does not properly verify that the server hostname matches a domain name in the subject s common name cn or subjectaltname field of the x certificate which allows man in the middle attackers to spoof ssl servers via a certificate with a subject that specifies a common name in a field that is not the cn field note this issue exists because of an incomplete fix for cve publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution refer to apache web site for patch upgrade or suggested workaround information see references for ibm products refer to the appropriate ibm security bulletin for patch upgrade or suggested workaround information see references for other distributions apply the appropriate update for your system step up your open source security game with whitesource | 0 |
174,567 | 21,300,234,166 | IssuesEvent | 2022-04-15 01:25:43 | dof-dss/architecture-catalogue | https://api.github.com/repos/dof-dss/architecture-catalogue | opened | CVE-2021-32640 (Medium) detected in opennmsopennms-source-26.0.0-1 | security vulnerability | ## CVE-2021-32640 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-26.0.0-1</b></p></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/ws/lib/websocket-server.js</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.
<p>Publish Date: 2021-05-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p>
<p>Release Date: 2021-05-25</p>
<p>Fix Resolution: 5.2.3,6.2.2,7.4.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-32640 (Medium) detected in opennmsopennms-source-26.0.0-1 - ## CVE-2021-32640 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>opennmsopennms-source-26.0.0-1</b></p></summary>
<p>
<p>A Java based fault and performance management system</p>
<p>Library home page: <a href=https://sourceforge.net/projects/opennms/>https://sourceforge.net/projects/opennms/</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/node_modules/ws/lib/websocket-server.js</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
ws is an open source WebSocket client and server library for Node.js. A specially crafted value of the `Sec-Websocket-Protocol` header can be used to significantly slow down a ws server. The vulnerability has been fixed in ws@7.4.6 (https://github.com/websockets/ws/commit/00c425ec77993773d823f018f64a5c44e17023ff). In vulnerable versions of ws, the issue can be mitigated by reducing the maximum allowed length of the request headers using the [`--max-http-header-size=size`](https://nodejs.org/api/cli.html#cli_max_http_header_size_size) and/or the [`maxHeaderSize`](https://nodejs.org/api/http.html#http_http_createserver_options_requestlistener) options.
<p>Publish Date: 2021-05-25
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32640>CVE-2021-32640</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693">https://github.com/websockets/ws/security/advisories/GHSA-6fc8-4gx4-v693</a></p>
<p>Release Date: 2021-05-25</p>
<p>Fix Resolution: 5.2.3,6.2.2,7.4.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve medium detected in opennmsopennms source cve medium severity vulnerability vulnerable library opennmsopennms source a java based fault and performance management system library home page a href found in base branch master vulnerable source files node modules ws lib websocket server js vulnerability details ws is an open source websocket client and server library for node js a specially crafted value of the sec websocket protocol header can be used to significantly slow down a ws server the vulnerability has been fixed in ws in vulnerable versions of ws the issue can be mitigated by reducing the maximum allowed length of the request headers using the and or the options publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
738,618 | 25,569,204,932 | IssuesEvent | 2022-11-30 16:22:53 | bounswe/bounswe2022group3 | https://api.github.com/repos/bounswe/bounswe2022group3 | closed | [Backend]: Test infrastructure setup | effort: moderate priority: urgent backend | ### Issue
We need to handle unit tests. So I will make an initial setup for test structure.
### Task(s)
- [x] Setup test structure for backend
- [ ] Provide teammates with necessary info and examples to start unit testing
### Deliverable(s)
* PR for initial testing struct
* Readme for testing
### Acceptance Criteria
* PR reviewed and merged.
### Deadline of the issue
26.11.2022 @23.59
### Reviewer
_No response_
### Deadline for Review
_No response_ | 1.0 | [Backend]: Test infrastructure setup - ### Issue
We need to handle unit tests. So I will make an initial setup for test structure.
### Task(s)
- [x] Setup test structure for backend
- [ ] Provide teammates with necessary info and examples to start unit testing
### Deliverable(s)
* PR for initial testing struct
* Readme for testing
### Acceptance Criteria
* PR reviewed and merged.
### Deadline of the issue
26.11.2022 @23.59
### Reviewer
_No response_
### Deadline for Review
_No response_ | non_defect | test infrastructure setup issue we need to handle unit tests so i will make an initial setup for test structure task s setup test structure for backend provide teammates with necessary info and examples to start unit testing deliverable s pr for initial testing struct readme for testing acceptance criteria pr reviewed and merged deadline of the issue reviewer no response deadline for review no response | 0 |
65,825 | 19,712,572,028 | IssuesEvent | 2022-01-13 07:40:00 | martinrotter/rssguard | https://api.github.com/repos/martinrotter/rssguard | closed | [BUG]: 4.1.2 build fails with Qt 5.12 | Type-Defect Status-Fixed | ### Brief description of the issue
`expandRecursively` not available
### How to reproduce the bug?
1. Unpack source tarball
2. Try to build the package
### What was the expected result?
A successful build.
### What actually happened?
See [log snippet](https://github.com/martinrotter/rssguard/files/7846103/rssguard412_Qt5127.txt).
### Other information
_No response_
### Operating system and version
* OS: openSUSE Leap 15.3 / Qt 5.12.7
* RSS Guard version: 4.1.2
| 1.0 | [BUG]: 4.1.2 build fails with Qt 5.12 - ### Brief description of the issue
`expandRecursively` not available
### How to reproduce the bug?
1. Unpack source tarball
2. Try to build the package
### What was the expected result?
A successful build.
### What actually happened?
See [log snippet](https://github.com/martinrotter/rssguard/files/7846103/rssguard412_Qt5127.txt).
### Other information
_No response_
### Operating system and version
* OS: openSUSE Leap 15.3 / Qt 5.12.7
* RSS Guard version: 4.1.2
| defect | build fails with qt brief description of the issue expandrecursively not available how to reproduce the bug unpack source tarball try to build the package what was the expected result a successful build what actually happened see other information no response operating system and version os opensuse leap qt rss guard version | 1 |
89,344 | 25,761,213,917 | IssuesEvent | 2022-12-08 20:42:59 | mozilla-mobile/fenix | https://api.github.com/repos/mozilla-mobile/fenix | opened | Disable intermittently failing run-build on Pull Requests | eng:automation eng:build | We have the `run-build` task that was meant for contributors to have a simple CI check for feedback. This was a temporary solution we tried out at the time to solve our problems of running FTL on contributor PRs.
Unfortunately, this task runs also for internal contributors and also intermittent fails (more often than before, nowadays?)
Since we do not need this solution when we move to the firefox-android project (because TC can run CI without the FTL task), I recommend we disable this task for now to avoid confusion within the team in situations where a Taskcluster CI build passes. | 1.0 | Disable intermittently failing run-build on Pull Requests - We have the `run-build` task that was meant for contributors to have a simple CI check for feedback. This was a temporary solution we tried out at the time to solve our problems of running FTL on contributor PRs.
Unfortunately, this task runs also for internal contributors and also intermittent fails (more often than before, nowadays?)
Since we do not need this solution when we move to the firefox-android project (because TC can run CI without the FTL task), I recommend we disable this task for now to avoid confusion within the team in situations where a Taskcluster CI build passes. | non_defect | disable intermittently failing run build on pull requests we have the run build task that was meant for contributors to have a simple ci check for feedback this was a temporary solution we tried out at the time to solve our problems of running ftl on contributor prs unfortunately this task runs also for internal contributors and also intermittent fails more often than before nowadays since we do not need this solution when we move to the firefox android project because tc can run ci without the ftl task i recommend we disable this task for now to avoid confusion within the team in situations where a taskcluster ci build passes | 0 |
38,823 | 8,967,315,903 | IssuesEvent | 2019-01-29 02:47:21 | opencaching/opencaching-pl | https://api.github.com/repos/opencaching/opencaching-pl | opened | Limit exploit | Type Defect | The URL argument `startat=110000000000000` (or larger number) will crash all pages that pass the `startat` value to a LIMIT clause. This of course applies to all numerical URI arguments that are directly inserted into an SQL statement. | 1.0 | Limit exploit - The URL argument `startat=110000000000000` (or larger number) will crash all pages that pass the `startat` value to a LIMIT clause. This of course applies to all numerical URI arguments that are directly inserted into an SQL statement. | defect | limit exploit the url argument startat or larger number will crash all pages that pass the startat value to a limit clause this of course applies to all numerical uri arguments that are directly inserted into an sql statement | 1 |
303 | 2,636,193,801 | IssuesEvent | 2015-03-10 00:35:30 | tsumobi/issues | https://api.github.com/repos/tsumobi/issues | closed | test_invite_metrics blocks deploy | blocks-deploy bug infrastructure | Failures in this test block deploy.
Stack trace from deploy 8465:
```ruby
TestInviteMetrics:
[29/46] FAIL test_invite_metrics (30.21s)
Unable to find {"regexp"=>/logs\.\w+\.download\.invite/, "value"=>"1", "type"=>"c"} in ["skynet-web.download.unknown:1|c", "skynet-web.download.incompatible_user_agent:1|c"] using byte offset of 0 of 80 in /tmp/statsd.output.pSYQR
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/statsd_assertions.rb:33
30 assertion['value'] && assertion['value'] == m[:value])
31 end
32
=> 33 assert(actual,
34 "Unable to find #{assertion.inspect} in #{new_metrics.inspect}" \
35 " using byte offset of #{before.length} of #{new_size} in" \
36 " #{STATSD_OUTPUT_FILE}")
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/statsd_assertions.rb:26
23 parsed_metrics.push(match) if match
24 end
25
=> 26 assertions.each do |assertion|
27 actual = parsed_metrics.find do |m|
28 next false unless assertion['regexp'].match(m[:name])
29 (assertion['type'] && assertion['type'] == m[:type] &&
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/statsd_assertions.rb:26
23 parsed_metrics.push(match) if match
24 end
25
=> 26 assertions.each do |assertion|
27 actual = parsed_metrics.find do |m|
28 next false unless assertion['regexp'].match(m[:name])
29 (assertion['type'] && assertion['type'] == m[:type] &&
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/await.rb:6
3 deadline = Time.now + timeout
4
5 begin
=> 6 yield
7 rescue MiniTest::Assertion
8 raise unless deadline > Time.now
9
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/statsd_assertions.rb:14
11 before = File.read(STATSD_OUTPUT_FILE)
12 yield
13
=> 14 await(timeout) do
15 new_size = File.size(STATSD_OUTPUT_FILE)
16 new_metrics = File.read(STATSD_OUTPUT_FILE, nil, before.length).split("\n")
17
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/safe_tests/test_invite_metrics.rb:44
41
42 # NB(xyu): logstash may be slow to start up and become ready to process logs
43 # so up this timeout to give it time to catch up.
=> 44 assert_new_metrics([assertion], 30) do
45 # Set our collection as close as possible to the query itself to minimize
46 # the potential for a race condition
47 collection = "#{COLLECTION_BASE_NAME}-#{Time.now.utc.strftime("%F-%H")}"
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:180
177 # test duration reported on buildbot.
178 def self.run_one_method klass, method_name, reporter
179 reporter.test_start_time = Time.now if reporter.respond_to?(:test_start_time=)
=> 180 reporter.record Minitest.run_one_method(klass, method_name)
181 end
182
183 # HACK(xyu.2015-02-23): Normally, minitest supports the '-n' option to only
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:197
194 end
195 with_info_handler reporter do
196 filtered_methods.each do |method_name|
=> 197 run_one_method self, method_name, reporter
198 end
199 end
200 end
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:196
193 omit_tests.none? { |filter| filter == "#{self}##{m}" }
194 end
195 with_info_handler reporter do
=> 196 filtered_methods.each do |method_name|
197 run_one_method self, method_name, reporter
198 end
199 end
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:196
193 omit_tests.none? { |filter| filter == "#{self}##{m}" }
194 end
195 with_info_handler reporter do
=> 196 filtered_methods.each do |method_name|
197 run_one_method self, method_name, reporter
198 end
199 end
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:195
192 filtered_methods = self.runnable_methods.find_all do |m|
193 omit_tests.none? { |filter| filter == "#{self}##{m}" }
194 end
=> 195 with_info_handler reporter do
196 filtered_methods.each do |method_name|
197 run_one_method self, method_name, reporter
198 end
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:237
234
235 Minitest.reporter = nil # runnables shouldn't depend on the reporter, ever
236 reporter.start
=> 237 Minitest.__run(reporter, options) unless header_only
238 Minitest.parallel_executor.shutdown
239 reporter.report
240
``` | 1.0 | test_invite_metrics blocks deploy - Failures in this test block deploy.
Stack trace from deploy 8465:
```ruby
TestInviteMetrics:
[29/46] FAIL test_invite_metrics (30.21s)
Unable to find {"regexp"=>/logs\.\w+\.download\.invite/, "value"=>"1", "type"=>"c"} in ["skynet-web.download.unknown:1|c", "skynet-web.download.incompatible_user_agent:1|c"] using byte offset of 0 of 80 in /tmp/statsd.output.pSYQR
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/statsd_assertions.rb:33
30 assertion['value'] && assertion['value'] == m[:value])
31 end
32
=> 33 assert(actual,
34 "Unable to find #{assertion.inspect} in #{new_metrics.inspect}" \
35 " using byte offset of #{before.length} of #{new_size} in" \
36 " #{STATSD_OUTPUT_FILE}")
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/statsd_assertions.rb:26
23 parsed_metrics.push(match) if match
24 end
25
=> 26 assertions.each do |assertion|
27 actual = parsed_metrics.find do |m|
28 next false unless assertion['regexp'].match(m[:name])
29 (assertion['type'] && assertion['type'] == m[:type] &&
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/statsd_assertions.rb:26
23 parsed_metrics.push(match) if match
24 end
25
=> 26 assertions.each do |assertion|
27 actual = parsed_metrics.find do |m|
28 next false unless assertion['regexp'].match(m[:name])
29 (assertion['type'] && assertion['type'] == m[:type] &&
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/await.rb:6
3 deadline = Time.now + timeout
4
5 begin
=> 6 yield
7 rescue MiniTest::Assertion
8 raise unless deadline > Time.now
9
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/lib/statsd_assertions.rb:14
11 before = File.read(STATSD_OUTPUT_FILE)
12 yield
13
=> 14 await(timeout) do
15 new_size = File.size(STATSD_OUTPUT_FILE)
16 new_metrics = File.read(STATSD_OUTPUT_FILE, nil, before.length).split("\n")
17
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/contents/safe_tests/test_invite_metrics.rb:44
41
42 # NB(xyu): logstash may be slow to start up and become ready to process logs
43 # so up this timeout to give it time to catch up.
=> 44 assert_new_metrics([assertion], 30) do
45 # Set our collection as close as possible to the query itself to minimize
46 # the potential for a race condition
47 collection = "#{COLLECTION_BASE_NAME}-#{Time.now.utc.strftime("%F-%H")}"
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:180
177 # test duration reported on buildbot.
178 def self.run_one_method klass, method_name, reporter
179 reporter.test_start_time = Time.now if reporter.respond_to?(:test_start_time=)
=> 180 reporter.record Minitest.run_one_method(klass, method_name)
181 end
182
183 # HACK(xyu.2015-02-23): Normally, minitest supports the '-n' option to only
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:197
194 end
195 with_info_handler reporter do
196 filtered_methods.each do |method_name|
=> 197 run_one_method self, method_name, reporter
198 end
199 end
200 end
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:196
193 omit_tests.none? { |filter| filter == "#{self}##{m}" }
194 end
195 with_info_handler reporter do
=> 196 filtered_methods.each do |method_name|
197 run_one_method self, method_name, reporter
198 end
199 end
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:196
193 omit_tests.none? { |filter| filter == "#{self}##{m}" }
194 end
195 with_info_handler reporter do
=> 196 filtered_methods.each do |method_name|
197 run_one_method self, method_name, reporter
198 end
199 end
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:195
192 filtered_methods = self.runnable_methods.find_all do |m|
193 omit_tests.none? { |filter| filter == "#{self}##{m}" }
194 end
=> 195 with_info_handler reporter do
196 filtered_methods.each do |method_name|
197 run_one_method self, method_name, reporter
198 end
/Users/vagrant/Library/SlugState/ci-slave/Session/slave/builddirs/deploy/build/.sloth/gimme-builds/4/6d/e3/5c718b3285e3ec531e0eba45826fa2cfd821994e31e499a685037897b065/result/deps/tap-harness/minitest-runner/bundle/+macosx+x86_64/bin/minitest-runner:237
234
235 Minitest.reporter = nil # runnables shouldn't depend on the reporter, ever
236 reporter.start
=> 237 Minitest.__run(reporter, options) unless header_only
238 Minitest.parallel_executor.shutdown
239 reporter.report
240
``` | non_defect | test invite metrics blocks deploy failures in this test block deploy stack trace from deploy ruby testinvitemetrics fail test invite metrics unable to find regexp logs w download invite value type c in using byte offset of of in tmp statsd output psyqr users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result contents lib statsd assertions rb assertion assertion m end assert actual unable to find assertion inspect in new metrics inspect using byte offset of before length of new size in statsd output file users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result contents lib statsd assertions rb parsed metrics push match if match end assertions each do assertion actual parsed metrics find do m next false unless assertion match m assertion assertion m users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result contents lib statsd assertions rb parsed metrics push match if match end assertions each do assertion actual parsed metrics find do m next false unless assertion match m assertion assertion m users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result contents lib await rb deadline time now timeout begin yield rescue minitest assertion raise unless deadline time now users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result contents lib statsd assertions rb before file read statsd output file yield await timeout do new size file size statsd output file new metrics file read statsd output file nil before length split n users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result contents safe tests test invite metrics rb nb xyu logstash may be slow to start up and become ready to process logs so up this timeout to give it time to catch up assert new metrics do set our collection 
as close as possible to the query itself to minimize the potential for a race condition collection collection base name time now utc strftime f h users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result deps tap harness minitest runner bundle macosx bin minitest runner test duration reported on buildbot def self run one method klass method name reporter reporter test start time time now if reporter respond to test start time reporter record minitest run one method klass method name end hack xyu normally minitest supports the n option to only users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result deps tap harness minitest runner bundle macosx bin minitest runner end with info handler reporter do filtered methods each do method name run one method self method name reporter end end end users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result deps tap harness minitest runner bundle macosx bin minitest runner omit tests none filter filter self m end with info handler reporter do filtered methods each do method name run one method self method name reporter end end users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result deps tap harness minitest runner bundle macosx bin minitest runner omit tests none filter filter self m end with info handler reporter do filtered methods each do method name run one method self method name reporter end end users vagrant library slugstate ci slave session slave builddirs deploy build sloth gimme builds result deps tap harness minitest runner bundle macosx bin minitest runner filtered methods self runnable methods find all do m omit tests none filter filter self m end with info handler reporter do filtered methods each do method name run one method self method name reporter end users vagrant library slugstate ci slave session slave builddirs deploy 
build sloth gimme builds result deps tap harness minitest runner bundle macosx bin minitest runner minitest reporter nil runnables shouldn t depend on the reporter ever reporter start minitest run reporter options unless header only minitest parallel executor shutdown reporter report | 0 |
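The stack trace in the record above shows a pattern worth noting: the test snapshots the statsd output file's size, runs the action, then repeatedly reads only the bytes appended after that offset until a new metric appears or a deadline passes. The sketch below is not the project's Ruby helper; it is a minimal Python rendition of the same idea, with `await_condition` and `new_lines_since` as hypothetical names.

```python
import os
import tempfile
import time


def await_condition(check, timeout=1.0, interval=0.01):
    """Retry `check` until it returns something truthy or the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        result = check()
        if result or time.monotonic() >= deadline:
            return result
        time.sleep(interval)


def new_lines_since(path, offset):
    """Read only the bytes appended after `offset`, split into lines."""
    with open(path, "rb") as fh:
        fh.seek(offset)
        return fh.read().decode().splitlines()


# Demo: capture the offset, append a metric, then poll for just the new lines.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".metrics") as fh:
    fh.write("old.metric:1|c\n")
    path = fh.name

before = os.path.getsize(path)
with open(path, "a") as fh:
    fh.write("logs.web.download.invite:1|c\n")

found = await_condition(
    lambda: [m for m in new_lines_since(path, before) if "download.invite" in m]
)
os.unlink(path)
```

Snapshotting the byte offset before the action is what keeps pre-existing metrics (like the `skynet-web.download.*` counters in the failure message) from satisfying or polluting the assertion.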
732,462 | 25,260,222,085 | IssuesEvent | 2022-11-15 22:00:45 | googleapis/java-notebooks | https://api.github.com/repos/googleapis/java-notebooks | closed | The build failed | type: bug priority: p1 api: notebooks flakybot: issue | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: fccbcf038ffd678dee9a7460cb6bd29e2f52aa1e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f80afa2b-128b-42d0-bee6-f7d0ff875292), [Sponge](http://sponge2/f80afa2b-128b-42d0-bee6-f7d0ff875292)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.ResourceExhaustedException: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Too many requests are currently being executed, try again later.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:567)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:66)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:125)
at com.google.cloud.notebooks.v1beta1.it.ITNotebookServiceClientTest.tearDown(ITNotebookServiceClientTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581)
Caused by: com.google.api.gax.rpc.ResourceExhaustedException: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Too many requests are currently being executed, try again later.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:100)
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:41)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:86)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:66)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:67)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1132)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1270)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:574)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:544)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:535)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:563)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:744)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:723)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Too many requests are currently being executed, try again later.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 14 more
</pre></details> | 1.0 | The build failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: fccbcf038ffd678dee9a7460cb6bd29e2f52aa1e
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f80afa2b-128b-42d0-bee6-f7d0ff875292), [Sponge](http://sponge2/f80afa2b-128b-42d0-bee6-f7d0ff875292)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.ResourceExhaustedException: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Too many requests are currently being executed, try again later.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:567)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:91)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:66)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:125)
at com.google.cloud.notebooks.v1beta1.it.ITNotebookServiceClientTest.tearDown(ITNotebookServiceClientTest.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
at org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:456)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:169)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:595)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:581)
Caused by: com.google.api.gax.rpc.ResourceExhaustedException: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Too many requests are currently being executed, try again later.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:100)
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:41)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:86)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:66)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:67)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1132)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1270)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:574)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:544)
at io.grpc.PartialForwardingClientCallListener.onClose(PartialForwardingClientCallListener.java:39)
at io.grpc.ForwardingClientCallListener.onClose(ForwardingClientCallListener.java:23)
at io.grpc.ForwardingClientCallListener$SimpleForwardingClientCallListener.onClose(ForwardingClientCallListener.java:40)
at com.google.api.gax.grpc.ChannelPool$ReleasingClientCall$1.onClose(ChannelPool.java:535)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:563)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:70)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:744)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:723)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: Too many requests are currently being executed, try again later.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 14 more
</pre></details> | non_defect | the build failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output java util concurrent executionexception com google api gax rpc resourceexhaustedexception io grpc statusruntimeexception resource exhausted too many requests are currently being executed try again later at com google common util concurrent abstractfuture getdonevalue abstractfuture java at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent fluentfuture trustedfuture get fluentfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at com google api gax longrunning operationfutureimpl get operationfutureimpl java at com google cloud notebooks it itnotebookserviceclienttest teardown itnotebookserviceclienttest java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements runafters invokemethod runafters java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren 
parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore junitcore run junitcore java at org apache maven surefire junitcore junitcorewrapper createrequestandrun junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper executeeager junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcorewrapper execute junitcorewrapper java at org apache maven surefire junitcore junitcoreprovider invoke junitcoreprovider java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by com google api gax rpc resourceexhaustedexception io grpc statusruntimeexception resource exhausted too many requests are currently being executed try again later at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google common util concurrent abstractfuture executelistener 
abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc partialforwardingclientcalllistener onclose partialforwardingclientcalllistener java at io grpc forwardingclientcalllistener onclose forwardingclientcalllistener java at io grpc forwardingclientcalllistener simpleforwardingclientcalllistener onclose forwardingclientcalllistener java at com google api gax grpc channelpool releasingclientcall onclose channelpool java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by io grpc statusruntimeexception resource exhausted too many requests are currently being executed try again later at io grpc status asruntimeexception status java more | 0 |
31,532 | 6,546,816,671 | IssuesEvent | 2017-09-04 12:06:38 | netty/netty | https://api.github.com/repos/netty/netty | closed | too many releases in Http2FrameCodec in cleartext upgrade | defect | ### Expected behavior
No refcounting issues
### Actual behavior
io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1
### Steps to reproduce
Try to use HttpServerUpgradeHandler with Http2MultiplexCodec. The request will come in, and netty writes an upgrade response to the wire. After that succeeds, netty turns the request into a user event, with a ref count of 1. Its ref count is incremented to 2, then it's sent to Http2FrameCodec through a userEventTriggered. It expects that after the userEventTriggered is called, it'll come back with a ref count of 1 again.
However, inside of Http2FrameCodec#userEventTriggered, it decrements the ref count until it hits 0. In particular, `InboundHttpToHttp2Adapter.handle(ctx, connection(), decoder().frameListener(), upgrade.upgradeRequest())` and `upgrade.release()` each decrement the ref count once.
### Minimal yet complete reproducer code (or URL to code)
https://github.com/twitter/finagle/blob/develop/finagle-http2/src/main/scala/com/twitter/finagle/http2/Http2CleartextServerInitializer.scala#L104-L108
try upgrading this to 4.1.15
### Netty version
4.1.15 | 1.0 | too many releases in Http2FrameCodec in cleartext upgrade - ### Expected behavior
No refcounting issues
### Actual behavior
io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1
### Steps to reproduce
Try to use HttpServerUpgradeHandler with Http2MultiplexCodec. The request will come in, and netty writes an upgrade response to the wire. After that succeeds, netty turns the request into a user event, with a ref count of 1. Its ref count is incremented to 2, then it's sent to Http2FrameCodec through a userEventTriggered. It expects that after the userEventTriggered is called, it'll come back with a ref count of 1 again.
However, inside of Http2FrameCodec#userEventTriggered, it decrements the ref count until it hits 0. In particular, `InboundHttpToHttp2Adapter.handle(ctx, connection(), decoder().frameListener(), upgrade.upgradeRequest())` and `upgrade.release()` each decrement the ref count once.
### Minimal yet complete reproducer code (or URL to code)
https://github.com/twitter/finagle/blob/develop/finagle-http2/src/main/scala/com/twitter/finagle/http2/Http2CleartextServerInitializer.scala#L104-L108
try upgrading this to 4.1.15
### Netty version
4.1.15 | defect | too many releases in in cleartext upgrade expected behavior no refcounting issues actual behavior io netty util illegalreferencecountexception refcnt decrement steps to reproduce try to use httpserverupgradehandler with the request will come in and netty writes an upgrade response to the wire after that succeeds netty turns the request into a user event with a ref count of its ref count is incremented to then it s sent to through a usereventtriggered it expects that after the usereventtriggered is called it ll come back with a ref count of again however inside of usereventtriggered it decrements the ref count until it hits in particular handle ctx connection decoder framelistener upgrade upgraderequest and upgrade release each decrement the ref count once minimal yet complete reproducer code or url to code try upgrading this to netty version | 1 |
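The netty record above hinges on a reference-count contract: the pipeline hands the upgrade event to `userEventTriggered` with a count of 2 and expects it back at 1, but the handler releases twice. A minimal Python model of that double release (the class and method names below are illustrative stand-ins, not Netty's actual `ReferenceCounted` API):

```python
class IllegalReferenceCount(Exception):
    pass

class RefCounted:
    """Toy model of a reference-counted message (hypothetical, not Netty code)."""
    def __init__(self):
        self.ref_cnt = 1

    def retain(self):
        self.ref_cnt += 1
        return self

    def release(self):
        if self.ref_cnt == 0:
            raise IllegalReferenceCount("refCnt: 0, decrement: 1")
        self.ref_cnt -= 1

def buggy_user_event_triggered(upgrade):
    # models InboundHttpToHttp2Adapter.handle(...) consuming one reference ...
    upgrade.release()
    # ... followed by the handler's own upgrade.release() -- one release too many
    upgrade.release()

event = RefCounted()            # refCnt == 1 after the upgrade response is written
event.retain()                  # pipeline retains before firing the user event -> 2
buggy_user_event_triggered(event)
print(event.ref_cnt)            # 0 -- the caller expected 1 here
try:
    event.release()             # the caller's own cleanup now over-releases
except IllegalReferenceCount as e:
    print(e)                    # refCnt: 0, decrement: 1
```

The invariant the report describes is exactly the one the model breaks: whoever fires the event still owns one reference, so the handler must net out to a single release.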
176,523 | 21,411,751,052 | IssuesEvent | 2022-04-22 06:56:02 | AlexRogalskiy/java-patterns | https://api.github.com/repos/AlexRogalskiy/java-patterns | opened | CVE-2022-21681 (High) detected in marked-0.3.19.tgz | security vulnerability | ## CVE-2022-21681 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.3.19.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.3.19.tgz">https://registry.npmjs.org/marked/-/marked-0.3.19.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- roadmarks-1.6.3.tgz (Root Library)
- :x: **marked-0.3.19.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/0e3f838823fb09cc237bb3fc8f2e2651a2d0f0e6">0e3f838823fb09cc237bb3fc8f2e2651a2d0f0e6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Marked is a markdown parser and compiler. Prior to version 4.0.10, the regular expression `inline.reflinkSearch` may cause catastrophic backtracking against some strings and lead to a denial of service (DoS). Anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected. This issue is patched in version 4.0.10. As a workaround, avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
<p>Publish Date: 2022-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21681>CVE-2022-21681</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5v2h-r2cx-5xgj">https://github.com/advisories/GHSA-5v2h-r2cx-5xgj</a></p>
<p>Release Date: 2022-01-14</p>
<p>Fix Resolution: marked - 4.0.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-21681 (High) detected in marked-0.3.19.tgz - ## CVE-2022-21681 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.3.19.tgz</b></p></summary>
<p>A markdown parser built for speed</p>
<p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.3.19.tgz">https://registry.npmjs.org/marked/-/marked-0.3.19.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/marked/package.json</p>
<p>
Dependency Hierarchy:
- roadmarks-1.6.3.tgz (Root Library)
- :x: **marked-0.3.19.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/java-patterns/commit/0e3f838823fb09cc237bb3fc8f2e2651a2d0f0e6">0e3f838823fb09cc237bb3fc8f2e2651a2d0f0e6</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Marked is a markdown parser and compiler. Prior to version 4.0.10, the regular expression `inline.reflinkSearch` may cause catastrophic backtracking against some strings and lead to a denial of service (DoS). Anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected. This issue is patched in version 4.0.10. As a workaround, avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources.
<p>Publish Date: 2022-01-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-21681>CVE-2022-21681</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/advisories/GHSA-5v2h-r2cx-5xgj">https://github.com/advisories/GHSA-5v2h-r2cx-5xgj</a></p>
<p>Release Date: 2022-01-14</p>
<p>Fix Resolution: marked - 4.0.10</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in marked tgz cve high severity vulnerability vulnerable library marked tgz a markdown parser built for speed library home page a href path to dependency file package json path to vulnerable library node modules marked package json dependency hierarchy roadmarks tgz root library x marked tgz vulnerable library found in head commit a href found in base branch master vulnerability details marked is a markdown parser and compiler prior to version the regular expression inline reflinksearch may cause catastrophic backtracking against some strings and lead to a denial of service dos anyone who runs untrusted markdown through a vulnerable version of marked and does not use a worker with a time limit may be affected this issue is patched in version as a workaround avoid running untrusted markdown through marked or run marked on a worker thread and set a reasonable time limit to prevent draining resources publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution marked step up your open source security game with whitesource | 0 |
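The advisory above attributes the DoS to catastrophic backtracking in `inline.reflinkSearch`. The sketch below illustrates that failure mode with a textbook nested-quantifier pattern; it is not marked's actual expression, and the printed timings only show the exponential trend at deliberately small sizes:

```python
import re
import time

# Textbook catastrophic pattern (illustrative -- not marked's inline.reflinkSearch).
# The nested quantifier lets the engine try exponentially many ways to split the
# run of a's before every attempt fails on the trailing "b".
vulnerable = re.compile(r"(a+)+$")
# Linear rewrite: the outer quantifier adds nothing, a single a+ matches the
# same language without the blow-up.
safe = re.compile(r"a+$")

def timed_match(rx, text):
    t0 = time.perf_counter()
    m = rx.match(text)
    return m, time.perf_counter() - t0

for n in (5, 10, 15, 20):
    text = "a" * n + "b"
    m1, t1 = timed_match(vulnerable, text)
    m2, t2 = timed_match(safe, text)
    assert m1 is None and m2 is None      # both correctly reject the input
    print(f"n={n:2d}  vulnerable={t1:.6f}s  safe={t2:.6f}s")
```

marked's fix in 4.0.10 rewrote the expression; the advisory's interim workaround, running untrusted input on a worker with a kill timeout, is the general defence when a rewrite is not available.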
11,789 | 9,422,166,965 | IssuesEvent | 2019-04-11 08:45:15 | fluencelabs/fluence | https://api.github.com/repos/fluencelabs/fluence | closed | Integrate CircleCI | critical enhancement ~infrastructure | Currently we're using TravisCI. It's a bit too slow, and hard to get right. Let's try CircleCI, it looks so much faster on a first try.
[Here's my first try with CircleCI](https://github.com/folex/fluence/blob/master/.circleci/config.yml)
[And the same, but with specific commit](https://github.com/folex/fluence/blob/3c0ed1a44ebf7c0b32b228cafca8fa75d02e3b7d/.circleci/config.yml)
Out-of-the-box it supports Scala, Rust and Node. It even has containers Scala+Node and Rust+Node. But integration tests require Scala + Rust + Node, it would require a custom container. | 1.0 | Integrate CircleCI - Currently we're using TravisCI. It's a bit too slow, and hard to get right. Let's try CircleCI, it looks so much faster on a first try.
[Here's my first try with CircleCI](https://github.com/folex/fluence/blob/master/.circleci/config.yml)
[And the same, but with specific commit](https://github.com/folex/fluence/blob/3c0ed1a44ebf7c0b32b228cafca8fa75d02e3b7d/.circleci/config.yml)
Out-of-the-box it supports Scala, Rust and Node. It even has containers Scala+Node and Rust+Node. But integration tests require Scala + Rust + Node, it would require a custom container. | non_defect | integrate circleci currently we re using travisci it s a bit too slow and hard to get right let s try circleci it looks so much faster on a first try out of the box it supports scala rust and node it even has containers scala node and rust node but integration tests require scala rust node it would require a custom container | 0 |
83,363 | 10,346,725,914 | IssuesEvent | 2019-09-04 15:49:01 | techlahoma/user-groups | https://api.github.com/repos/techlahoma/user-groups | opened | Book StarSpace46 for OKC Design Tech | 2019-09-25 | UG/OKC Design Tech scheduling | What: UX Interviews: Bringing Design to Business
When: 09/25/2019 11:30 am
Where: StarSpace 46
Check meetup for RSVP count: https://www.meetup.com/OKC-Design-Tech/events/264553697/
cc @nexocentric @vianka-a | 1.0 | Book StarSpace46 for OKC Design Tech | 2019-09-25 - What: UX Interviews: Bringing Design to Business
When: 09/25/2019 11:30 am
Where: StarSpace 46
Check meetup for RSVP count: https://www.meetup.com/OKC-Design-Tech/events/264553697/
cc @nexocentric @vianka-a | non_defect | book for okc design tech what ux interviews bringing design to business when am where starspace check meetup for rsvp count cc nexocentric vianka a | 0 |
178,766 | 30,006,773,935 | IssuesEvent | 2023-06-26 12:57:57 | wagtail/wagtail | https://api.github.com/repos/wagtail/wagtail | closed | Review chooser action button labels | type:Enhancement component:Design system | #9194. We changed chooser widgets’ "Choose another" button to "Change" in #8934, and back to "Chooser another <thing>" in #9113.
I believe we need to revisit this again because the initial change had been done with UX/UI considerations in mind that aren’t reflected in the bug fix.
- Either retaining the "Chooser another <thing>" label in the current design is desirable, and all we need to do is update our [design system](https://www.figma.com/file/h67EsVXdWsfu38WGGxWfpi/Wagtail-Design-System?node-id=5172%3A30787) to match.
- Or we need to retain the "Choose another <thing>" label but with a different design, to account for the now longer button label
- Or change the label back to shorter text that isn’t dependent on what type the chooser is for. | 1.0 | Review chooser action button labels - #9194. We changed chooser widgets’ "Choose another" button to "Change" in #8934, and back to "Chooser another <thing>" in #9113.
I believe we need to revisit this again because the initial change had been done with UX/UI considerations in mind that aren’t reflected in the bug fix.
- Either retaining the "Chooser another <thing>" label in the current design is desirable, and all we need to do is update our [design system](https://www.figma.com/file/h67EsVXdWsfu38WGGxWfpi/Wagtail-Design-System?node-id=5172%3A30787) to match.
- Or we need to retain the "Choose another <thing>" label but with a different design, to account for the now longer button label
- Or change the label back to shorter text that isn’t dependent on what type the chooser is for. | non_defect | review chooser action button labels we changed chooser widgets’ choose another button to change in and back to chooser another in i believe we need to revisit this again because the initial change had been done with ux ui considerations in mind that aren’t reflected in the bug fix either retaining the chooser another label in the current design is desirable and all we need to do is update our to match or we need to retain the choose another label but with a different design to account for the now longer button label or change the label back to shorter text that isn’t dependent on what type the chooser is for | 0 |
264,708 | 28,212,233,402 | IssuesEvent | 2023-04-05 05:57:59 | hshivhare67/platform_frameworks_av_AOSP10_r33 | https://api.github.com/repos/hshivhare67/platform_frameworks_av_AOSP10_r33 | closed | CVE-2023-21000 (High) detected in avandroid-9.0.0_r56 - autoclosed | Mend: dependency security vulnerability | ## CVE-2023-21000 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>avandroid-9.0.0_r56</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/av>https://android.googlesource.com/platform/frameworks/av</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/media/libstagefright/MediaCodec.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In MediaCodec.cpp, there is a possible use after free due to improper locking. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-13. Android ID: A-194783918
<p>Publish Date: 2023-03-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-21000>CVE-2023-21000</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://android.googlesource.com/platform/frameworks/av/+/cf32c23098ef7410b70ffdbfe2a05146ce79ef04">https://android.googlesource.com/platform/frameworks/av/+/cf32c23098ef7410b70ffdbfe2a05146ce79ef04</a></p>
<p>Release Date: 2023-03-24</p>
<p>Fix Resolution: android-13.0.0_r32</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2023-21000 (High) detected in avandroid-9.0.0_r56 - autoclosed - ## CVE-2023-21000 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>avandroid-9.0.0_r56</b></p></summary>
<p>
<p>Library home page: <a href=https://android.googlesource.com/platform/frameworks/av>https://android.googlesource.com/platform/frameworks/av</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/media/libstagefright/MediaCodec.cpp</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In MediaCodec.cpp, there is a possible use after free due to improper locking. This could lead to local escalation of privilege with no additional execution privileges needed. User interaction is not needed for exploitation. Product: Android. Versions: Android-13. Android ID: A-194783918
<p>Publish Date: 2023-03-24
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-21000>CVE-2023-21000</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://android.googlesource.com/platform/frameworks/av/+/cf32c23098ef7410b70ffdbfe2a05146ce79ef04">https://android.googlesource.com/platform/frameworks/av/+/cf32c23098ef7410b70ffdbfe2a05146ce79ef04</a></p>
<p>Release Date: 2023-03-24</p>
<p>Fix Resolution: android-13.0.0_r32</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_defect | cve high detected in avandroid autoclosed cve high severity vulnerability vulnerable library avandroid library home page a href found in base branch main vulnerable source files media libstagefright mediacodec cpp vulnerability details in mediacodec cpp there is a possible use after free due to improper locking this could lead to local escalation of privilege with no additional execution privileges needed user interaction is not needed for exploitation product androidversions android id a publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution android step up your open source security game with mend | 0 |
55,872 | 14,727,897,310 | IssuesEvent | 2021-01-06 09:12:57 | PowerDNS/pdns | https://api.github.com/repos/PowerDNS/pdns | closed | rec: www.dpaste.com sometimes bogus; insecure domain with apex CNAME | defect rec | - Program: Recursor
- Issue type: Bug report
### Short description
`www.dpaste.com` sometimes validates as bogus. `dpaste.com` is always fine.
`dpaste.com` is an insecure domain with a zone apex `CNAME`, so none of it deserves to resolve, but the precise way in which it's failing is a bit odd.
I'm not sure what is required for it to be bogus, but it seems to involve learning the `dpaste.com` `SOA` record and/or the `CNAME` record.
### Environment
- Operating system: Ubuntu 16.04 x86-64
- Software version: 4.5.0~alpha0+master.190.g8446e56b8-1pdns.xenial and a couple slightly older builds
- Software source: PowerDNS repository
### Steps to reproduce
1. Restart Recursor
2. `dig dpaste.com aaaa`
3. `dig www.dpaste.com`
### Expected behaviour
Consistent success or consistent failure.
### Actual behaviour
Some things are bogus and some things aren't depending on what steps you take.
### Other information
Unless I'm making a mistake, this isn't bogus:
1. `dig dpaste.com`
2. `dig www.dpaste.com`
Or this:
1. `dig dpaste.com soa`
2. `dig www.dpaste.com`
But this is:
1. `dig dpaste.com`
2. `dig dpaste.com soa`
3. `dig www.dpaste.com`
`trace-regex 'dpaste\.com\.$'` for `dig dpaste.com aaaa` and `dig www.dpaste.com`: https://mn0.us/ojCK
`recursor.conf`:
```
allow-from=127.0.0.0/8, ::1/128
allow-trust-anchor-query
carbon-ourname=clover_mattnordhoff_net
carbon-server=2a02:2770:8::2635:0:1
config-dir=/etc/powerdns
dnssec=validate
dnssec-log-bogus
hint-file=/usr/share/dns/root.hints
include-dir=/etc/powerdns/recursor.d
local-address=127.0.0.1, ::1
log-common-errors
lua-config-file=/etc/powerdns/recursor.lua
max-cache-ttl=172800
max-negative-ttl=10800
query-local-address=0.0.0.0, ::
quiet=yes
setgid=pdns
setuid=pdns
threads=1
udp-truncation-threshold=4096
```
(I know, I'm a Flag Day traitor!) | 1.0 | rec: www.dpaste.com sometimes bogus; insecure domain with apex CNAME - - Program: Recursor
- Issue type: Bug report
### Short description
`www.dpaste.com` sometimes validates as bogus. `dpaste.com` is always fine.
`dpaste.com` is an insecure domain with a zone apex `CNAME`, so none of it deserves to resolve, but the precise way in which it's failing is a bit odd.
I'm not sure what is required for it to be bogus, but it seems to involve learning the `dpaste.com` `SOA` record and/or the `CNAME` record.
### Environment
- Operating system: Ubuntu 16.04 x86-64
- Software version: 4.5.0~alpha0+master.190.g8446e56b8-1pdns.xenial and a couple slightly older builds
- Software source: PowerDNS repository
### Steps to reproduce
1. Restart Recursor
2. `dig dpaste.com aaaa`
3. `dig www.dpaste.com`
### Expected behaviour
Consistent success or consistent failure.
### Actual behaviour
Some things are bogus and some things aren't depending on what steps you take.
### Other information
Unless I'm making a mistake, this isn't bogus:
1. `dig dpaste.com`
2. `dig www.dpaste.com`
Or this:
1. `dig dpaste.com soa`
2. `dig www.dpaste.com`
But this is:
1. `dig dpaste.com`
2. `dig dpaste.com soa`
3. `dig www.dpaste.com`
`trace-regex 'dpaste\.com\.$'` for `dig dpaste.com aaaa` and `dig www.dpaste.com`: https://mn0.us/ojCK
`recursor.conf`:
```
allow-from=127.0.0.0/8, ::1/128
allow-trust-anchor-query
carbon-ourname=clover_mattnordhoff_net
carbon-server=2a02:2770:8::2635:0:1
config-dir=/etc/powerdns
dnssec=validate
dnssec-log-bogus
hint-file=/usr/share/dns/root.hints
include-dir=/etc/powerdns/recursor.d
local-address=127.0.0.1, ::1
log-common-errors
lua-config-file=/etc/powerdns/recursor.lua
max-cache-ttl=172800
max-negative-ttl=10800
query-local-address=0.0.0.0, ::
quiet=yes
setgid=pdns
setuid=pdns
threads=1
udp-truncation-threshold=4096
```
(I know, I'm a Flag Day traitor!) | defect | rec sometimes bogus insecure domain with apex cname program recursor issue type bug report short description sometimes validates as bogus dpaste com is always fine dpaste com is an insecure domain with a zone apex cname so none of it deserves to resolve but the precise way in which it s failing is a bit odd i m not sure what is required for it to be bogus but it seems to involve learning the dpaste com soa record and or the cname record environment operating system ubuntu software version master xenial and a couple slightly older builds software source powerdns repository steps to reproduce restart recursor dig dpaste com aaaa dig expected behaviour consistent success or consistent failure actual behaviour some things are bogus and some things aren t depending on what steps you take other information unless i m making a mistake this isn t bogus dig dpaste com dig or this dig dpaste com soa dig but this is dig dpaste com dig dpaste com soa dig trace regex dpaste com for dig dpaste com aaaa and dig recursor conf allow from allow trust anchor query carbon ourname clover mattnordhoff net carbon server config dir etc powerdns dnssec validate dnssec log bogus hint file usr share dns root hints include dir etc powerdns recursor d local address log common errors lua config file etc powerdns recursor lua max cache ttl max negative ttl query local address quiet yes setgid pdns setuid pdns threads udp truncation threshold i know i m a flag day traitor | 1 |
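The PowerDNS report above notes that `dpaste.com` "deserves" to fail because it places a CNAME at the zone apex. The rule it leans on (RFC 1034: a CNAME may not coexist with other data at the same owner name, and an apex always owns SOA and NS records) can be sketched as a toy zone check; this is illustrative code, not PowerDNS, and the record set is a made-up reconstruction from the report:

```python
# Toy validity check, not PowerDNS code: RFC 1034 forbids a CNAME from
# coexisting with any other record at the same owner name. A zone apex always
# owns SOA and NS records, so an apex CNAME can never be valid.

def cname_coexistence_errors(records):
    """records: iterable of (owner_name, rtype) pairs; returns rule violations."""
    by_owner = {}
    for owner, rtype in records:
        by_owner.setdefault(owner.lower(), set()).add(rtype.upper())
    errors = []
    for owner, types in by_owner.items():
        if "CNAME" in types and len(types) > 1:
            errors.append(f"{owner}: CNAME coexists with {sorted(types - {'CNAME'})}")
    return errors

# Hypothetical reconstruction of the broken zone described in the report:
zone = [
    ("dpaste.com.", "SOA"),
    ("dpaste.com.", "NS"),
    ("dpaste.com.", "CNAME"),       # the apex CNAME the report complains about
    ("www.dpaste.com.", "CNAME"),   # a CNAME alone at a non-apex name is fine
]
for err in cname_coexistence_errors(zone):
    print(err)
```

A resolver that enforces this rule strictly rejects the whole delegation, which is why the report treats any successful resolution as accidental rather than correct.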
802,530 | 28,966,121,166 | IssuesEvent | 2023-05-10 08:02:32 | beda-software/fhir-emr | https://api.github.com/repos/beda-software/fhir-emr | closed | Code formatting | bug Priority::High | Auto code formatting not working on commit.
We also need to reformat all code to fix commits that weren't formatted.
We also need to reformat all code to fix commits that weren't formatted. | non_defect | code formatting auto code formatting not working on commit we also need to reformat all code to fix commits that weren t formatted | 0
30,581 | 6,187,826,396 | IssuesEvent | 2017-07-04 08:38:04 | contao/core-bundle | https://api.github.com/repos/contao/core-bundle | closed | display error in filemanager / public parents ignored | defect | How to reproduce:
- create a folder and make it public
- create a subfolder
- restrict the tree to show only the subfolder (clicking on the subfolder name)
- the icon now looks like a private folder (lock symbol)
[core-bundle 4.4.0] | 1.0 | display error in filemanager / public parents ignored - How to reproduce:
- create a folder and make it public
- create a subfolder
- restrict the tree to show only the subfolder (clicking on the subfolder name)
- the icon now looks like a private folder (lock symbol)
[core-bundle 4.4.0] | defect | display error in filemanager public parents ignored how to reproduce create a folder and make it public create a subfolder restrict the tree to show only the subfolder clicking on the subfolder name the icon now looks like a private folder lock symbol | 1 |
65,767 | 19,684,096,081 | IssuesEvent | 2022-01-11 19:57:02 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | opened | Bug in Legals screen | T-Defect S-Minor O-Occasional A-Legals | There is a problem on my main account, the homeserver policy cannot be retrieved.
<img width="463" alt="image" src="https://user-images.githubusercontent.com/3940906/149012015-05508e5c-48a1-4628-856f-24816a462f36.png">
There is no network issue (since the terms for the identity server can be retrieved).
This is what I see in the "Homeserver" screen if it helps:
<img width="461" alt="image" src="https://user-images.githubusercontent.com/3940906/149012172-046d9f8b-ba50-4014-adf7-a1bc06712194.png">
From the rageshake:
```
2022-01-11T19:49:42*805GMT+00:00Z 370689 E/ /Tag: Error while getting homeserver terms
NetworkConnection(ioException=java.net.UnknownHostException: Unable to resolve host "matrix.org_matrix": No address associated with hostname)
at org.matrix.android.sdk.internal.session.terms.DefaultTermsService.getHomeserverTerms(DefaultTermsService.kt:32)
```
Element Android 1.3.13. | 1.0 | defect | 1
142,693 | 19,102,734,582 | IssuesEvent | 2021-11-30 01:23:01 | MValle21/riposte | https://api.github.com/repos/MValle21/riposte | opened | CVE-2021-39153 (High) detected in xstream-1.4.11.1.jar | security vulnerability | ## CVE-2021-39153 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.11.1.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: riposte/riposte-service-registration-eureka/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.thoughtworks.xstream/xstream/1.4.11.1/6c120c45a8c480bb2fea5b56502e3993ddd74fd2/xstream-1.4.11.1.jar</p>
<p>
Dependency Hierarchy:
- eureka-client-1.9.21.jar (Root Library)
- :x: **xstream-1.4.11.1.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to load and execute arbitrary code from a remote host only by manipulating the processed input stream, if using the version out of the box with Java runtime version 14 to 8 or with JavaFX installed. No user who followed the recommendation to set up XStream's security framework with a whitelist limited to the minimal required types is affected. XStream 1.4.18 no longer uses a blacklist by default, since it cannot be secured for general purpose.
<p>Publish Date: 2021-08-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39153>CVE-2021-39153</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-39153">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-39153</a></p>
<p>Release Date: 2021-08-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.18</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.thoughtworks.xstream","packageName":"xstream","packageVersion":"1.4.11.1","packageFilePaths":["/riposte-service-registration-eureka/build.gradle"],"isTransitiveDependency":true,"dependencyTree":"com.netflix.eureka:eureka-client:1.9.21;com.thoughtworks.xstream:xstream:1.4.11.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.thoughtworks.xstream:xstream:1.4.18","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-39153","vulnerabilityDetails":"XStream is a simple library to serialize objects to XML and back again. In affected versions this vulnerability may allow a remote attacker to load and execute arbitrary code from a remote host only by manipulating the processed input stream, if using the version out of the box with Java runtime version 14 to 8 or with JavaFX installed. No user is affected, who followed the recommendation to setup XStream\u0027s security framework with a whitelist limited to the minimal required types. XStream 1.4.18 uses no longer a blacklist by default, since it cannot be secured for general purpose.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-39153","cvss3Severity":"high","cvss3Score":"8.5","cvss3Metrics":{"A":"High","AC":"High","PR":"Low","S":"Changed","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | non_defect | 0
159,898 | 25,074,160,523 | IssuesEvent | 2022-11-07 14:22:24 | StatCan/aaw-argocd-manifests | https://api.github.com/repos/StatCan/aaw-argocd-manifests | opened | [aaw-kubeflow-profiles] Change Argo-Dev Target Revision | kind/cleanup kind/design | To remove unused/non-dev team user profiles from aaw-dev, we created a branch (aaw-dev-cc-00) specifically for aaw-dev's profiles. Changes to the aaw-kubeflow-profiles repo can be tested in dev and cherry-picked into aaw-prod-cc-00. This creates a cleaner dev env and adds a layer of isolation to prod.
| 1.0 | non_defect | 0
39,190 | 9,304,591,094 | IssuesEvent | 2019-03-25 02:04:04 | slacka/WoeUSB | https://api.github.com/repos/slacka/WoeUSB | reopened | woeusb hangs on 'install-grub' step | defect need-info | ## Issue Reproduce Instructions
> 1. Launch WoeUSB by running `sudo woeusb --device Win10_1809Oct_English_x64.iso /dev/sdg --tgt-fs NTFS`
> 2. Wait until the program finishes `Copying files from source media...` and proceeds to `Installing GRUB bootloader for legacy PC booting support...`
## Expected Behavior
> `Installing for i386-pc platform.`
> `Install finished. No error reported.`
## Current Behavior
> `Installing for i386-pc platform.`
> The grub-install process hangs here forever, and I have to eventually press Ctrl-C to abort, leaving the USB disk unbootable.
## Info of My Environment
### WoeUSB Version 3.2.12
### WoeUSB Source
> Installed from Ubuntu 18.04 software archive
### GNU Bash Version 4.4.19
### Information about the Operating System
> KDE neon User Edition 5.15, based on Ubuntu 18.04 "Bionic Beaver"
### Information about the Source Media
> "Windows 10" downloaded from <https://microsoft.com/en-us/software-download/Windows10ISO>
### Information about the Target Device
> PNY USB 2.0 FD, 16GB
| 1.0 | defect | 1
134,917 | 10,948,151,016 | IssuesEvent | 2019-11-26 08:14:17 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | roachtest: gopg failed | C-test-failure O-roachtest O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/02d62674ad2f9ca16183184ea6552691506675f1
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=gopg PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1611136&tab=artifacts#/gopg
```
The test failed on branch=release-19.2, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20191126-1611136/gopg/run_1
orm_helpers.go:181,gopg.go:146,gopg.go:158,test_runner.go:697:
Tests run on Cockroach v19.2.1-28-g02d6267
Tests run against gopg v9.0.1
178 Total Tests Run
141 tests passed
37 tests failed
0 tests skipped
6 tests ignored
1 test passed unexpectedly
0 tests failed unexpectedly
0 tests expected failed but skipped
0 tests expected failed but not run
---
For a full summary look at the gopg artifacts
An updated blacklist (gopgBlackList19_2) is available in the artifacts' gopg log
``` | 2.0 | non_defect | 0
10,685 | 7,276,642,001 | IssuesEvent | 2018-02-21 16:56:12 | status-im/status-react | https://api.github.com/repos/status-im/status-react | closed | [Perf] Test hypothesis 7: Use another JS thread for heavy computations | performance | ### Identified Behavior
Sometimes view rendering is slow because the main thread is occupied with heavy computations. For example, the account creation process is slow and laggy, messages can appear in slow motion, etc.
### Hypothesis
Using another JS thread for heavy computations could free the main JS thread for UI-related computations (subscriptions, hiccup rendering, react diffs, etc).
### Assumptions
The main JS thread, which is responsible for UI rendering, is sometimes occupied by other computations, which prevents the UI from updating smoothly.
### Metrics
As a UI metric we could use https://github.com/status-im/status-react/issues/3111 to test against. Moving heavy computations to another thread should let the UI render smoothly during account creation.
### Acceptance criteria
- [ ] Spawning second JS thread works on both Android and iOS
- [ ] App memory usage doesn't grow much and stay within accepted range (to be defined)
- [ ] Ability to run any ClojureScript code on another thread.
### Notes
Among the many libraries that implement RN threading capabilities, this one seems good to try for our needs: https://github.com/joltup/react-native-threads
Also, https://github.com/seantempesta/cljsrn-re-frame-workers could provide insights on how to use ClojureScript with RN threads (workers) library.
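The worker libraries above all expose a message-passing model. As a language-agnostic sketch of that pattern (plain Python threads standing in for a second JS thread — purely illustrative, not how the RN libraries are actually called), the "UI" thread posts work to a queue and stays free while a worker does the heavy computation:

```python
import threading
import queue

# The "UI" thread posts work to a queue and keeps handling events; a worker
# thread performs the heavy computation and posts the result back.
tasks, results = queue.Queue(), queue.Queue()

def worker():
    while True:
        n = tasks.get()
        if n is None:                              # shutdown signal
            break
        results.put(sum(i * i for i in range(n)))  # stand-in for heavy work

t = threading.Thread(target=worker, daemon=True)
t.start()

tasks.put(10_000)        # analogous to thread.postMessage(...)
# ... the UI thread stays free to render here ...
res = results.get()      # analogous to an onmessage callback
tasks.put(None)          # ask the worker to shut down
t.join()
print(res)
```

The key property for this hypothesis is that only messages cross the thread boundary, so the main thread never blocks on the computation itself.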
True | non_defect | 0
21,982 | 3,587,536,978 | IssuesEvent | 2016-01-30 11:13:26 | ariya/phantomjs | https://api.github.com/repos/ariya/phantomjs | closed | PhantomJS crashes when opening a JavaScript via casperjs | old.Priority-Medium old.Status-New old.Type-Defect | _**[jannis.k...@gmail.com](http://code.google.com/u/111786749513633168836/) commented:**_
> Which version of PhantomJS are you using? 1.7
> <b>What steps will reproduce the problem?</b>
> 1. Starting a .js via casperjs
> 2. Tells me Unable to open *.js File
> 3. PhantomJS crashed
>
> <b>What is the expected output? What do you see instead?</b>
> The output of the javascript
>
> <b>Which operating system are you using?</b>
> Mac OS X 10.8.2
>
> <b>Did you use binary PhantomJS or did you compile it from source?</b>
> <b>Please provide any additional information below.</b>
> Using casperjs 1.0.0
**Disclaimer:**
This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #798](http://code.google.com/p/phantomjs/issues/detail?id=798).
:star2: **2** people had starred this issue at the time of migration. | 1.0 | defect | 1
69,799 | 22,678,392,485 | IssuesEvent | 2022-07-04 07:39:56 | vector-im/element-android | https://api.github.com/repos/vector-im/element-android | closed | Formatted (HTML) messages with markdown features are rendered incorrectly as markdown | T-Defect A-Event rendering A-Markdown S-Minor O-Occasional | When the `formatted_body` field of a `"format": "org.matrix.custom.html"` message contains special markdown characters, they're rendered as markdown instead of as plain text. This could be an actual problem if you escape markdown characters in a message that contains other formatting
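To make the expected behaviour concrete: a client should pick the rendering path from the event's `format` field and never feed an HTML `formatted_body` through a markdown parser. A minimal illustrative sketch in Python (not Element's actual code; `select_renderable` is an invented name for this example):

```python
# Illustrative only: decide which field of an m.room.message content dict
# to render, and with which renderer, so that markdown characters inside
# an HTML formatted_body are never re-parsed as markdown.
def select_renderable(content):
    if content.get("format") == "org.matrix.custom.html" and "formatted_body" in content:
        # Render as HTML; markdown characters in here are plain text.
        return ("html", content["formatted_body"])
    # No HTML body: fall back to the plain-text body.
    return ("plain", content["body"])

content = {
    "msgtype": "m.text",
    "body": "*less exaggerated example*: \\*\\*stars!**",
    "format": "org.matrix.custom.html",
    "formatted_body": "<em>less exaggerated example</em>: **stars!**",
}
renderer, text = select_renderable(content)
print(renderer)  # html
```

Rendering `formatted_body` with an HTML renderer (and only falling back to `body` otherwise) would leave the stars and backslash escapes in the examples below as literal text.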
<details>
<summary>Exaggerated example</summary>
### Expected rendering


### Actual rendering

### Event source
```json
{
"type": "m.room.message",
"sender": "@tulir:maunium.net",
"content": {
"msgtype": "m.text",
"body": "Blank message (or is it? 🤔)",
"format": "org.matrix.custom.html",
"formatted_body": "---\n> * # [`RiotX` renders this message ***completely incorrectly***](https://github.com/vector-im/riotX-android/issues/289#issuecomment-510894293)\n---"
},
"event_id": "$156295723912295JSonL:maunium.net",
"origin_server_ts": 1562957239380,
"unsigned": {
"age": 52
},
"room_id": "!tTIucwUZLRtKnXeurb:matrix.org"
}
```
</details>
<details>
<summary>Non-exaggerated example</summary>
### Expected rendering


### Actual rendering

### Event source
```json
{
"type": "m.room.message",
"sender": "@tulir:maunium.net",
"content": {
"msgtype": "m.text",
"format": "org.matrix.custom.html",
"body": "*less exaggerated example*: \\*\\*stars!**",
"formatted_body": "<em>less exaggerated example</em>: **stars!**"
},
"event_id": "$156295796112315whymM:maunium.net",
"origin_server_ts": 1562957961390,
"unsigned": {
"age": 170
},
"room_id": "!jhpZBTbckszblMYjMK:matrix.org"
}
```
</details>
<details>
<summary>Real-life example</summary>
[matrix:room/riotx:matrix.org/event/$15630377604255dZoDF:hacklab.fi](https://matrix.to/#/!tTIucwUZLRtKnXeurb:matrix.org/$15630377604255dZoDF:hacklab.fi?via=matrix.org&via=privacytools.io&via=feneas.org)
This one was a reply sent from RiotX, which had indentation in the fallback format for some reason. The indentation of course turned into a code block because that's what a markdown parser does.
This specific example could be fixed by
* fixing whatever caused the indentation, or
* adding reply rendering to RiotX,
but of course neither of those is a proper fix to this issue.
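A proper fix for this specific symptom is the one the rich-reply part of the Matrix spec already implies: clients that understand replies should strip the `<mx-reply>` fallback from `formatted_body` before rendering, so its indentation can never be mis-parsed. A minimal illustrative sketch (not RiotX code):

```python
import re

# Strip the <mx-reply>...</mx-reply> fallback before the formatted body is
# rendered, so the fallback's indentation cannot be (mis)parsed downstream.
MX_REPLY = re.compile(r"<mx-reply>.*?</mx-reply>", flags=re.DOTALL)

def strip_reply_fallback(formatted_body):
    return MX_REPLY.sub("", formatted_body, count=1).lstrip()

formatted = ("<mx-reply>\n <blockquote>\n ...indented fallback...\n </blockquote>\n"
             " </mx-reply>\n Yes, the dark theme will reduce power consumption")
print(strip_reply_fallback(formatted))
```

With the fallback removed, only the actual reply text reaches the markdown/HTML rendering stage.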
### Expected rendering


### Actual rendering

### Event source
```json
{
"content": {
"body": "><@marmulak:converser.eu> Does the \"black\" theme work for reducing power consumption on oled displays?\n\nYes, the dark theme will reduce power consumption and the black theme will reduce it a bit more",
"format": "org.matrix.custom.html",
"formatted_body": "<mx-reply>\n <blockquote>\n <a href=\"https://matrix.to/#/!tTIucwUZLRtKnXeurb:matrix.org/$1563036634937vaAEg:converser.eu\">In reply to</a>\n <a href=\"https://matrix.to/#/@marmulak:converser.eu\">@marmulak:converser.eu</a>\n <br />\n <p>Does the "black" theme work for reducing power consumption on oled displays?</p>\n\n </blockquote>\n </mx-reply>\n Yes, the dark theme will reduce power consumption and the black theme will reduce it a bit more",
"m.relates_to": {
"m.in_reply_to": {
"event_id": "$1563036634937vaAEg:converser.eu"
}
},
"msgtype": "m.text"
},
"event_id": "$15630377604255dZoDF:hacklab.fi",
"origin_server_ts": 1563037760897,
"sender": "@vurpo:hacklab.fi",
"type": "m.room.message",
"unsigned": {
"age": 630
},
"room_id": "!tTIucwUZLRtKnXeurb:matrix.org"
}
```
</details> | 1.0
"format": "org.matrix.custom.html",
"formatted_body": "<mx-reply>\n <blockquote>\n <a href=\"https://matrix.to/#/!tTIucwUZLRtKnXeurb:matrix.org/$1563036634937vaAEg:converser.eu\">In reply to</a>\n <a href=\"https://matrix.to/#/@marmulak:converser.eu\">@marmulak:converser.eu</a>\n <br />\n <p>Does the "black" theme work for reducing power consumption on oled displays?</p>\n\n </blockquote>\n </mx-reply>\n Yes, the dark theme will reduce power consumption and the black theme will reduce it a bit more",
"m.relates_to": {
"m.in_reply_to": {
"event_id": "$1563036634937vaAEg:converser.eu"
}
},
"msgtype": "m.text"
},
"event_id": "$15630377604255dZoDF:hacklab.fi",
"origin_server_ts": 1563037760897,
"sender": "@vurpo:hacklab.fi",
"type": "m.room.message",
"unsigned": {
"age": 630
},
"room_id": "!tTIucwUZLRtKnXeurb:matrix.org"
}
```
</details> | defect | formatted html messages with markdown features are rendered incorrectly as markdown when the formatted body field of a format org matrix custom html message contains special markdown characters they re rendered as markdown instead of as plain text this could be an actual problem if you escape markdown characters in a message that contains other formatting exaggerated example expected rendering actual rendering event source json type m room message sender tulir maunium net content msgtype m text body blank message or is it 🤔 format org matrix custom html formatted body n event id maunium net origin server ts unsigned age room id ttiucwuzlrtknxeurb matrix org non exaggerated example expected rendering actual rendering event source json type m room message sender tulir maunium net content msgtype m text format org matrix custom html body less exaggerated example stars formatted body less exaggerated example stars event id maunium net origin server ts unsigned age room id jhpzbtbckszblmyjmk matrix org real life example this one was a reply sent from riotx which had indentation in the fallback format for some reason the indentation of course turned into a code block because that s what a markdown parser does this specific example could be fixed by fixing whatever caused the indentation or adding reply rendering to riotx but of course neither of those is a proper fix to this issue expected rendering actual rendering event source json content body does the black theme work for reducing power consumption on oled displays n nyes the dark theme will reduce power consumption and the black theme will reduce it a bit more format org matrix custom html formatted body n n n n does the quot black quot theme work for reducing power consumption on oled displays n n n n yes the dark theme will reduce power consumption and the black theme will reduce it a bit more m relates to m in reply to event id converser eu msgtype m text event id hacklab fi origin server ts 
sender vurpo hacklab fi type m room message unsigned age room id ttiucwuzlrtknxeurb matrix org | 1 |
648,554 | 21,189,198,174 | IssuesEvent | 2022-04-08 15:31:57 | bcgov/entity | https://api.github.com/repos/bcgov/entity | closed | PPR Search NIL report - Include Legal Disclaimer | enhancement Priority2 Assets UX Assurance | From Client Usability Testing:
The legacy application currently includes a legal disclaimer at the bottom of a NIL report.
There are two scenarios for a search report output with no results:
1. No exact or similar matches were found by the search
2. Only similar matches were found by the search but the client didn't select any for the report output
### Scenario 1
include the following bold header and text in the Search report output:
NIL RESULT
No registered liens or encumbrances have been found on file that match EXACTLY to the search criteria listed above and no similar matches to the criteria have been found.
**Scenario 1 Visual Design Comp:**
https://invis.io/HN125W4CAYZD
### Scenario 2
include the following bold header and text in the Search report output:
NO REGISTRATIONS SELECTED
No registered liens or encumbrances have been found on file that match EXACTLY to the search criteria listed above and no similar matches to the criteria have been selected by the searching party.
**Scenario 2 Visual Design Comp:**
https://invis.io/J6125W44KS8Z
--------------------
Acceptance Criteria:
1.
Given I am a client executing a PPR search
When no exact or similar matches are found for the search
Then the text
No registered liens or encumbrances have been found on file that match EXACTLY to the search criteria listed above and no similar matches to the criteria have been found.
is displayed at the bottom of the search result report.
2.
Given I am a client executing a PPR search
When only similar matches are found on the search AND I don't select any of those similar matches
Then the text
No registered liens or encumbrances have been found on file that match EXACTLY to the search criteria listed above and no similar matches to the criteria have been selected by the searching party.
is displayed at the bottom of the search result report.

| 1.0 | PPR Search NIL report - Include Legal Disclaimer - From Client Usability Testing:
The legacy application currently includes a legal disclaimer at the bottom of a NIL report.
There are two scenarios for a search report output with no results:
1. No exact or similar matches were found by the search
2. Only similar matches were found by the search but the client didn't select any for the report output
### Scenario 1
include the following bold header and text in the Search report output:
NIL RESULT
No registered liens or encumbrances have been found on file that match EXACTLY to the search criteria listed above and no similar matches to the criteria have been found.
**Scenario 1 Visual Design Comp:**
https://invis.io/HN125W4CAYZD
### Scenario 2
include the following bold header and text in the Search report output:
NO REGISTRATIONS SELECTED
No registered liens or encumbrances have been found on file that match EXACTLY to the search criteria listed above and no similar matches to the criteria have been selected by the searching party.
**Scenario 2 Visual Design Comp:**
https://invis.io/J6125W44KS8Z
--------------------
Acceptance Criteria:
1.
Given I am a client executing a PPR search
When no exact or similar matches are found for the search
Then the text
No registered liens or encumbrances have been found on file that match EXACTLY to the search criteria listed above and no similar matches to the criteria have been found.
is displayed at the bottom of the search result report.
2.
Given I am a client executing a PPR search
When only similar matches are found on the search AND I don't select any of those similar matches
Then the text
No registered liens or encumbrances have been found on file that match EXACTLY to the search criteria listed above and no similar matches to the criteria have been selected by the searching party.
is displayed at the bottom of the search result report.

| non_defect | ppr search nil report include legal disclaimer from client usability testing the legacy application currently includes a legal disclaimer at the bottom of a nil report there are two scenarios for a search report output with no results no exact or similar matches were found by the search only similar matches were found by the search but the client didn t select any for the report output scenario include the following bold header and text in the search report output nil result no registered liens or encumbrances have been found on file that match exactly to the search criteria listed above and no similar matches to the criteria have been found scenario visual design comp scenario include the following bold header and text in the search report output no registrations selected no registered liens or encumbrances have been found on file that match exactly to the search criteria listed above and no similar matches to the criteria have been selected by the searching party scenario visual design comp acceptance criteria given i am a client executing a ppr search when no exact or similar matches are found for the search then the text no registered liens or encumbrances have been found on file that match exactly to the search criteria listed above and no similar matches to the criteria have been found is displayed at the bottom of the search result report given i am a client executing a ppr search when only similar matches are found on the search and i don t select any of those similar matches then the text no registered liens or encumbrances have been found on file that match exactly to the search criteria listed above and no similar matches to the criteria have been selected by the searching party is displayed at the bottom of the search result report | 0 |
728,122 | 25,066,857,861 | IssuesEvent | 2022-11-07 09:03:21 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | opened | [DocDB][PITR][Tablegroups] Unable to restore to timestamp with FATALs | area/docdb priority/medium QA status/awaiting-triage pitr | ### Description
Restore a given timestamp failed with `Error running restore_snapshot_schedule: Network error (yb/util/net/socket.cc:534): Failed to restore snapshot from schedule: 0843c9e4-f0e6-4e71-ad78-a4c5eedfa763: recvmsg got EOF from remote (system error 108)`
Observed multiple FATAL files in master logs, fatal details:
```
F20221107 05:23:36 ../../src/yb/util/status.cc:658] Check failed: value
@ 0x555cad35b227 google::LogMessage::SendToLog()
@ 0x555cad35c16d google::LogMessage::Flush()
@ 0x555cad35c679 google::LogMessageFatal::~LogMessageFatal()
@ 0x555cadcbedf1 yb::master::MasterSnapshotCoordinator::Impl::ExecuteRestoreOperations()
@ 0x555cadcbb98f yb::master::MasterSnapshotCoordinator::Impl::Poll()
@ 0x555cae0be0ab yb::rpc::Poller::Poll()
@ 0x555cae0e101c _ZN5boost4asio6detail18completion_handlerIZN2yb3rpc9Scheduler4Impl11HandleTimerERKNS_6system10error_codeEEUlvE_NS0_10io_context19basic_executor_typeINSt3__19allocatorIvEELm0EEEE11do_completeEPvPNS1_19scheduler_operationESA_m
@ 0x555cae0a976a boost::asio::detail::scheduler::run()
@ 0x555cae0a8bd5 yb::rpc::IoThreadPool::Impl::Execute()
@ 0x555cae5ef9df yb::Thread::SuperviseThread()
@ 0x7f6d757b4694 start_thread
@ 0x7f6d75cb641d __clone
```
Universe: https://10.9.139.108/universes/8b6e9dfc-3949-4e00-a250-f718eb11fa60
P.S. Please let me know if the universe is needed for further debugging | 1.0 | [DocDB][PITR][Tablegroups] Unable to restore to timestamp with FATALs - ### Description
Restore a given timestamp failed with `Error running restore_snapshot_schedule: Network error (yb/util/net/socket.cc:534): Failed to restore snapshot from schedule: 0843c9e4-f0e6-4e71-ad78-a4c5eedfa763: recvmsg got EOF from remote (system error 108)`
Observed multiple FATAL files in master logs, fatal details:
```
F20221107 05:23:36 ../../src/yb/util/status.cc:658] Check failed: value
@ 0x555cad35b227 google::LogMessage::SendToLog()
@ 0x555cad35c16d google::LogMessage::Flush()
@ 0x555cad35c679 google::LogMessageFatal::~LogMessageFatal()
@ 0x555cadcbedf1 yb::master::MasterSnapshotCoordinator::Impl::ExecuteRestoreOperations()
@ 0x555cadcbb98f yb::master::MasterSnapshotCoordinator::Impl::Poll()
@ 0x555cae0be0ab yb::rpc::Poller::Poll()
@ 0x555cae0e101c _ZN5boost4asio6detail18completion_handlerIZN2yb3rpc9Scheduler4Impl11HandleTimerERKNS_6system10error_codeEEUlvE_NS0_10io_context19basic_executor_typeINSt3__19allocatorIvEELm0EEEE11do_completeEPvPNS1_19scheduler_operationESA_m
@ 0x555cae0a976a boost::asio::detail::scheduler::run()
@ 0x555cae0a8bd5 yb::rpc::IoThreadPool::Impl::Execute()
@ 0x555cae5ef9df yb::Thread::SuperviseThread()
@ 0x7f6d757b4694 start_thread
@ 0x7f6d75cb641d __clone
```
Universe: https://10.9.139.108/universes/8b6e9dfc-3949-4e00-a250-f718eb11fa60
P.S. Please let me know if the universe is needed for further debugging | non_defect | unable to restore to timestamp with fatals description restore a given timestamp failed with error running restore snapshot schedule network error yb util net socket cc failed to restore snapshot from schedule recvmsg got eof from remote system error observed multiple fatal files in master logs fatal details src yb util status cc check failed value google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal yb master mastersnapshotcoordinator impl executerestoreoperations yb master mastersnapshotcoordinator impl poll yb rpc poller poll codeeeulve executor operationesa m boost asio detail scheduler run yb rpc iothreadpool impl execute yb thread supervisethread start thread clone universe p s please let me know if the universe is needed for further debugging | 0 |
3,819 | 2,610,069,603 | IssuesEvent | 2015-02-26 18:20:25 | chrsmith/jsjsj122 | https://api.github.com/repos/chrsmith/jsjsj122 | opened | 台州割包茎哪家医院权威 | auto-migrated Priority-Medium Type-Defect | ```
台州割包茎哪家医院权威【台州五洲生殖医院】24小时健康咨
询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州
市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108�
��118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109
、112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 11:47 | 1.0 | 台州割包茎哪家医院权威 - ```
台州割包茎哪家医院权威【台州五洲生殖医院】24小时健康咨
询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州
市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108�
��118、198及椒江一金清公交车直达枫南小区,乘坐107、105、109
、112、901、 902公交车到星星广场下车,步行即可到院。
诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,��
�精,无精。包皮包茎,精索静脉曲张,淋病等。
台州五洲生殖医院是台州最大的男科医院,权威专家在线免��
�咨询,拥有专业完善的男科检查治疗设备,严格按照国家标�
��收费。尖端医疗设备,与世界同步。权威专家,成就专业典
范。人性化服务,一切以患者为中心。
看男科就选台州五洲生殖医院,专业男科为男人。
```
-----
Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 11:47 | defect | 台州割包茎哪家医院权威 台州割包茎哪家医院权威【台州五洲生殖医院】 询热线 微信号tzwzszyy 医院地址 台州 (枫南大转盘旁)乘车线路 、 � �� 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at | 1 |
255,623 | 19,318,357,924 | IssuesEvent | 2021-12-14 00:39:56 | ecadlabs/taquito | https://api.github.com/repos/ecadlabs/taquito | opened | Update documentation magic bytes | documentation I-protocol? | Reflect changes about magic bytes in the following comments:
- https://github.com/ecadlabs/taquito/blob/master/packages/taquito/src/signer/interface.ts#L8
- https://github.com/ecadlabs/taquito/blob/master/packages/taquito-utils/src/verify-signature.ts#L23
> magic byte has changes from 0x01 for blocks and 0x02 for endorsements, to 0x11 for blocks, 0x12 for preendorsements, 0x13 for endorsements.
Source: http://tezos.gitlab.io/protocols/tenderbake.html#signer | 1.0 | Update documentation magic bytes - Reflect changes about magic bytes in the following comments:
- https://github.com/ecadlabs/taquito/blob/master/packages/taquito/src/signer/interface.ts#L8
- https://github.com/ecadlabs/taquito/blob/master/packages/taquito-utils/src/verify-signature.ts#L23
> magic byte has changes from 0x01 for blocks and 0x02 for endorsements, to 0x11 for blocks, 0x12 for preendorsements, 0x13 for endorsements.
Source: http://tezos.gitlab.io/protocols/tenderbake.html#signer | non_defect | update documentation magic bytes reflect changes about magic bytes in the following comments magic byte has changes from for blocks and for endorsements to for blocks for preendorsements for endorsements source | 0 |
331,380 | 24,305,895,351 | IssuesEvent | 2022-09-29 17:23:01 | Vakzu/musical-wars-doc | https://api.github.com/repos/Vakzu/musical-wars-doc | closed | Change enumeration of entities structure in first step | documentation | Хочется видеть более детализированное описание типов сущностей, характеристические, стержневые и тд. | 1.0 | Change enumeration of entities structure in first step - Хочется видеть более детализированное описание типов сущностей, характеристические, стержневые и тд. | non_defect | change enumeration of entities structure in first step хочется видеть более детализированное описание типов сущностей характеристические стержневые и тд | 0 |
3,055 | 2,607,977,170 | IssuesEvent | 2015-02-26 00:47:35 | chrsmith/zen-coding | https://api.github.com/repos/chrsmith/zen-coding | opened | Doesn't work with SCSS (sass) syntax in Komodo | auto-migrated Priority-Medium Type-Defect | ```
Could you please fix the Komodo extension so that Zen CSS would work with SCSS
(sass) file type selected. Currently it only works with CSS syntax.
Thanks!
```
-----
Original issue reported on code.google.com by `leem...@gmail.com` on 26 Apr 2012 at 7:31 | 1.0 | Doesn't work with SCSS (sass) syntax in Komodo - ```
Could you please fix the Komodo extension so that Zen CSS would work with SCSS
(sass) file type selected. Currently it only works with CSS syntax.
Thanks!
```
-----
Original issue reported on code.google.com by `leem...@gmail.com` on 26 Apr 2012 at 7:31 | defect | doesn t work with scss sass syntax in komodo could you please fix the komodo extension so that zen css would work with scss sass file type selected currently it only works with css syntax thanks original issue reported on code google com by leem gmail com on apr at | 1 |